Merge remote-tracking branch 'origin/main' into onboarding

Created by Kujtim Hoxha

Change summary

.github/workflows/build.yml                                                                                                          |    2 
.golangci.yml                                                                                                                        |    2 
README.md                                                                                                                            |   76 
go.mod                                                                                                                               |   15 
go.sum                                                                                                                               |   26 
internal/config/config.go                                                                                                            |   11 
internal/config/load.go                                                                                                              |   14 
internal/config/load_test.go                                                                                                         |    9 
internal/fsext/fileutil.go                                                                                                           |    9 
internal/fur/provider/provider.go                                                                                                    |    2 
internal/llm/agent/mcp-tools.go                                                                                                      |    4 
internal/log/log.go                                                                                                                  |   13 
internal/tui/components/anim/anim.go                                                                                                 |    5 
internal/tui/components/anim/example/main.go                                                                                         |    6 
internal/tui/components/chat/chat.go                                                                                                 |   18 
internal/tui/components/chat/editor/editor.go                                                                                        |   17 
internal/tui/components/chat/header/header.go                                                                                        |    6 
internal/tui/components/chat/messages/messages.go                                                                                    |   22 
internal/tui/components/chat/messages/renderer.go                                                                                    |    2 
internal/tui/components/chat/messages/tool.go                                                                                        |    8 
internal/tui/components/chat/sidebar/sidebar.go                                                                                      |   37 
internal/tui/components/chat/splash/splash.go                                                                                        |   63 
internal/tui/components/completions/completions.go                                                                                   |    9 
internal/tui/components/completions/item.go                                                                                          |    4 
internal/tui/components/core/layout/container.go                                                                                     |   14 
internal/tui/components/core/layout/split.go                                                                                         |   43 
internal/tui/components/core/list/list.go                                                                                            |   23 
internal/tui/components/core/status/status.go                                                                                        |    4 
internal/tui/components/dialogs/commands/arguments.go                                                                                |   20 
internal/tui/components/dialogs/commands/commands.go                                                                                 |   22 
internal/tui/components/dialogs/commands/item.go                                                                                     |    4 
internal/tui/components/dialogs/compact/compact.go                                                                                   |    4 
internal/tui/components/dialogs/dialogs.go                                                                                           |    9 
internal/tui/components/dialogs/filepicker/filepicker.go                                                                             |    4 
internal/tui/components/dialogs/models/apikey.go                                                                                     |   10 
internal/tui/components/dialogs/models/list.go                                                                                       |   12 
internal/tui/components/dialogs/models/models.go                                                                                     |   18 
internal/tui/components/dialogs/permissions/permissions.go                                                                           |    4 
internal/tui/components/dialogs/quit/quit.go                                                                                         |   11 
internal/tui/components/dialogs/sessions/sessions.go                                                                                 |   20 
internal/tui/exp/list/list.go                                                                                                        |    2 
internal/tui/page/chat/chat.go                                                                                                       |   86 
internal/tui/tui.go                                                                                                                  |   27 
internal/tui/util/util.go                                                                                                            |    6 
vendor/cloud.google.com/go/LICENSE                                                                                                   |  202 
vendor/cloud.google.com/go/auth/CHANGES.md                                                                                           |  368 
vendor/cloud.google.com/go/auth/LICENSE                                                                                              |  202 
vendor/cloud.google.com/go/auth/README.md                                                                                            |   40 
vendor/cloud.google.com/go/auth/auth.go                                                                                              |  618 
vendor/cloud.google.com/go/auth/credentials/compute.go                                                                               |   90 
vendor/cloud.google.com/go/auth/credentials/detect.go                                                                                |  279 
vendor/cloud.google.com/go/auth/credentials/doc.go                                                                                   |   45 
vendor/cloud.google.com/go/auth/credentials/filetypes.go                                                                             |  231 
vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/aws_provider.go                                                 |  531 
vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/executable_provider.go                                          |  284 
vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/externalaccount.go                                              |  428 
vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/file_provider.go                                                |   78 
vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/info.go                                                         |   74 
vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/programmatic_provider.go                                        |   30 
vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/url_provider.go                                                 |   93 
vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/x509_provider.go                                                |   63 
vendor/cloud.google.com/go/auth/credentials/internal/externalaccountuser/externalaccountuser.go                                      |  115 
vendor/cloud.google.com/go/auth/credentials/internal/gdch/gdch.go                                                                    |  191 
vendor/cloud.google.com/go/auth/credentials/internal/impersonate/impersonate.go                                                      |  156 
vendor/cloud.google.com/go/auth/credentials/internal/stsexchange/sts_exchange.go                                                     |  167 
vendor/cloud.google.com/go/auth/credentials/selfsignedjwt.go                                                                         |   89 
vendor/cloud.google.com/go/auth/httptransport/httptransport.go                                                                       |  247 
vendor/cloud.google.com/go/auth/httptransport/transport.go                                                                           |  234 
vendor/cloud.google.com/go/auth/internal/credsfile/credsfile.go                                                                      |  107 
vendor/cloud.google.com/go/auth/internal/credsfile/filetype.go                                                                       |  157 
vendor/cloud.google.com/go/auth/internal/credsfile/parse.go                                                                          |   98 
vendor/cloud.google.com/go/auth/internal/internal.go                                                                                 |  219 
vendor/cloud.google.com/go/auth/internal/jwt/jwt.go                                                                                  |  171 
vendor/cloud.google.com/go/auth/internal/transport/cba.go                                                                            |  368 
vendor/cloud.google.com/go/auth/internal/transport/cert/default_cert.go                                                              |   65 
vendor/cloud.google.com/go/auth/internal/transport/cert/enterprise_cert.go                                                           |   54 
vendor/cloud.google.com/go/auth/internal/transport/cert/secureconnect_cert.go                                                        |  124 
vendor/cloud.google.com/go/auth/internal/transport/cert/workload_cert.go                                                             |  114 
vendor/cloud.google.com/go/auth/internal/transport/s2a.go                                                                            |  138 
vendor/cloud.google.com/go/auth/internal/transport/transport.go                                                                      |  106 
vendor/cloud.google.com/go/auth/threelegged.go                                                                                       |  382 
vendor/cloud.google.com/go/civil/civil.go                                                                                            |  350 
vendor/cloud.google.com/go/compute/metadata/CHANGES.md                                                                               |   66 
vendor/cloud.google.com/go/compute/metadata/LICENSE                                                                                  |  202 
vendor/cloud.google.com/go/compute/metadata/README.md                                                                                |   27 
vendor/cloud.google.com/go/compute/metadata/log.go                                                                                   |  149 
vendor/cloud.google.com/go/compute/metadata/metadata.go                                                                              |  872 
vendor/cloud.google.com/go/compute/metadata/retry.go                                                                                 |  114 
vendor/cloud.google.com/go/compute/metadata/retry_linux.go                                                                           |   31 
vendor/cloud.google.com/go/compute/metadata/syscheck.go                                                                              |   26 
vendor/cloud.google.com/go/compute/metadata/syscheck_linux.go                                                                        |   28 
vendor/cloud.google.com/go/compute/metadata/syscheck_windows.go                                                                      |   38 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/CHANGELOG.md                                                                     |  849 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/LICENSE.txt                                                                      |   21 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/README.md                                                                        |   39 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource/resource_identifier.go                                     |  239 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource/resource_type.go                                           |  114 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy/policy.go                                                             |  108 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/pipeline.go                                                          |   70 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_bearer_token.go                                               |  102 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_register_rp.go                                                |  322 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_trace_namespace.go                                            |   30 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/runtime.go                                                           |   24 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/ci.yml                                                                           |   29 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud/cloud.go                                                                   |   44 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud/doc.go                                                                     |   53 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/core.go                                                                          |  173 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/doc.go                                                                           |  264 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/errors.go                                                                        |   17 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/etag.go                                                                          |   57 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/exported.go                                                    |  175 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/pipeline.go                                                    |   77 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/request.go                                                     |  260 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/response_error.go                                              |  201 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log/log.go                                                              |   50 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/async/async.go                                                  |  159 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/body/body.go                                                    |  135 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/fake/fake.go                                                    |  133 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/loc/loc.go                                                      |  123 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/op/op.go                                                        |  148 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/poller.go                                                       |   24 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/util.go                                                         |  212 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared/constants.go                                                     |   44 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared/shared.go                                                        |  149 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/log/doc.go                                                                       |   10 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/log/log.go                                                                       |   55 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy/doc.go                                                                    |   10 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy/policy.go                                                                 |  198 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/doc.go                                                                   |   10 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/errors.go                                                                |   27 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/pager.go                                                                 |  138 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/pipeline.go                                                              |   94 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_api_version.go                                                    |   75 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_bearer_token.go                                                   |  236 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_body_download.go                                                  |   72 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_http_header.go                                                    |   40 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_http_trace.go                                                     |  154 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_include_response.go                                               |   35 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_key_credential.go                                                 |   64 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_logging.go                                                        |  264 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_request_id.go                                                     |   34 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_retry.go                                                          |  276 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_sas_credential.go                                                 |   55 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_telemetry.go                                                      |   83 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/poller.go                                                                |  396 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/request.go                                                               |  281 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/response.go                                                              |  109 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_dialer_other.go                                        |   15 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_dialer_wasm.go                                         |   15 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_http_client.go                                         |   48 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming/doc.go                                                                 |    9 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming/progress.go                                                            |   89 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing/constants.go                                                             |   41 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing/tracing.go                                                               |  191 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/.gitignore                                                                   |    4 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/CHANGELOG.md                                                                 |  575 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/LICENSE.txt                                                                  |   21 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/MIGRATION.md                                                                 |  307 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/README.md                                                                    |  258 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/TOKEN_CACHING.MD                                                             |   71 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/TROUBLESHOOTING.md                                                           |  241 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/assets.json                                                                  |    6 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/authentication_record.go                                                     |   95 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azidentity.go                                                                |  190 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azure_cli_credential.go                                                      |  190 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azure_developer_cli_credential.go                                            |  169 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azure_pipelines_credential.go                                                |  140 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/chained_token_credential.go                                                  |  138 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/ci.yml                                                                       |   46 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_assertion_credential.go                                               |   85 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_certificate_credential.go                                             |  174 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_secret_credential.go                                                  |   75 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/confidential_client.go                                                       |  184 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/default_azure_credential.go                                                  |  165 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/developer_credential_util.go                                                 |   38 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/device_code_credential.go                                                    |  138 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/environment_credential.go                                                    |  167 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/errors.go                                                                    |  170 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/go.work.sum                                                                  |   60 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/interactive_browser_credential.go                                            |  118 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/internal/exported.go                                                         |   18 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/internal/internal.go                                                         |   31 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/logging.go                                                                   |   14 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed-identity-matrix.json                                                 |   17 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed_identity_client.go                                                   |  501 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed_identity_credential.go                                               |  128 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/on_behalf_of_credential.go                                                   |  113 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/public_client.go                                                             |  273 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources-post.ps1                                                      |  112 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources-pre.ps1                                                       |   44 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources.bicep                                                         |  219 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/username_password_credential.go                                              |   90 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/version.go                                                                   |   18 
vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/workload_identity.go                                                         |  131 
vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/LICENSE.txt                                                                    |   21 
vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/diag/diag.go                                                                   |   51 
vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/diag/doc.go                                                                    |    7 
vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo/doc.go                                                               |    7 
vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo/errorinfo.go                                                         |   46 
vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/exported/exported.go                                                           |  129 
vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/log/doc.go                                                                     |    7 
vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/log/log.go                                                                     |  104 
vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/poller/util.go                                                                 |  155 
vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/temporal/resource.go                                                           |  123 
vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/uuid/doc.go                                                                    |    7 
vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/uuid/uuid.go                                                                   |   76 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/LICENSE                                                            |   21 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache/cache.go                                                |   54 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential/confidential.go                                  |  719 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors/error_design.md                                        |  111 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors/errors.go                                              |   89 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/base.go                                         |  477 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/items.go                       |  213 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/partitioned_storage.go         |  442 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/storage.go                     |  583 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported/exported.go                                 |   34 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/design.md                                       |  140 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/json.go                                         |  184 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/mapslice.go                                     |  333 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/marshal.go                                      |  346 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/struct.go                                       |  290 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time/time.go                              |   70 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/local/server.go                                      |  177 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/oauth.go                                       |  354 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/accesstokens.go               |  457 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/apptype_string.go             |   25 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/tokens.go                     |  339 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority/authority.go                     |  589 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority/authorizetype_string.go          |   30 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm/comm.go                      |  320 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm/compress.go                  |   33 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/grant/grant.go                    |   17 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/ops.go                                     |   56 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/endpointtype_string.go        |   25 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/mex_document_definitions.go   |  394 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/saml_assertion_definitions.go |  230 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/version_string.go             |   25 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/wstrust_endpoint.go           |  199 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/wstrust_mex_document.go       |  159 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/wstrust.go                         |  136 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/resolvers.go                                   |  149 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/options/options.go                                   |   52 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared/shared.go                                     |   72 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/version/version.go                                   |    8 
vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/public/public.go                                              |  756 
vendor/github.com/JohannesKaufmann/html-to-markdown/.gitignore                                                                       |   14 
vendor/github.com/JohannesKaufmann/html-to-markdown/CONTRIBUTING.md                                                                  |    0 
vendor/github.com/JohannesKaufmann/html-to-markdown/LICENSE                                                                          |   21 
vendor/github.com/JohannesKaufmann/html-to-markdown/README.md                                                                        |  242 
vendor/github.com/JohannesKaufmann/html-to-markdown/SECURITY.md                                                                      |    6 
vendor/github.com/JohannesKaufmann/html-to-markdown/commonmark.go                                                                    |  393 
vendor/github.com/JohannesKaufmann/html-to-markdown/escape/escape.go                                                                 |   65 
vendor/github.com/JohannesKaufmann/html-to-markdown/from.go                                                                          |  464 
vendor/github.com/JohannesKaufmann/html-to-markdown/logo_five_years.png                                                              |    0 
vendor/github.com/JohannesKaufmann/html-to-markdown/markdown.go                                                                      |  212 
vendor/github.com/JohannesKaufmann/html-to-markdown/utils.go                                                                         |  533 
vendor/github.com/MakeNowJust/heredoc/LICENSE                                                                                        |   21 
vendor/github.com/MakeNowJust/heredoc/README.md                                                                                      |   52 
vendor/github.com/MakeNowJust/heredoc/heredoc.go                                                                                     |  105 
vendor/github.com/PuerkitoBio/goquery/.gitattributes                                                                                 |    1 
vendor/github.com/PuerkitoBio/goquery/.gitignore                                                                                     |   16 
vendor/github.com/PuerkitoBio/goquery/LICENSE                                                                                        |   12 
vendor/github.com/PuerkitoBio/goquery/README.md                                                                                      |  202 
vendor/github.com/PuerkitoBio/goquery/array.go                                                                                       |  124 
vendor/github.com/PuerkitoBio/goquery/doc.go                                                                                         |  123 
vendor/github.com/PuerkitoBio/goquery/expand.go                                                                                      |   70 
vendor/github.com/PuerkitoBio/goquery/filter.go                                                                                      |  163 
vendor/github.com/PuerkitoBio/goquery/iteration.go                                                                                   |   47 
vendor/github.com/PuerkitoBio/goquery/manipulation.go                                                                                |  679 
vendor/github.com/PuerkitoBio/goquery/property.go                                                                                    |  275 
vendor/github.com/PuerkitoBio/goquery/query.go                                                                                       |   49 
vendor/github.com/PuerkitoBio/goquery/traversal.go                                                                                   |  704 
vendor/github.com/PuerkitoBio/goquery/type.go                                                                                        |  203 
vendor/github.com/PuerkitoBio/goquery/utilities.go                                                                                   |  178 
vendor/github.com/alecthomas/chroma/v2/.editorconfig                                                                                 |   17 
vendor/github.com/alecthomas/chroma/v2/.gitignore                                                                                    |   25 
vendor/github.com/alecthomas/chroma/v2/.golangci.yml                                                                                 |   95 
vendor/github.com/alecthomas/chroma/v2/.goreleaser.yml                                                                               |   37 
vendor/github.com/alecthomas/chroma/v2/Bitfile                                                                                       |   24 
vendor/github.com/alecthomas/chroma/v2/COPYING                                                                                       |   19 
vendor/github.com/alecthomas/chroma/v2/Makefile                                                                                      |   23 
vendor/github.com/alecthomas/chroma/v2/README.md                                                                                     |  297 
vendor/github.com/alecthomas/chroma/v2/coalesce.go                                                                                   |   35 
vendor/github.com/alecthomas/chroma/v2/colour.go                                                                                     |  192 
vendor/github.com/alecthomas/chroma/v2/delegate.go                                                                                   |  152 
vendor/github.com/alecthomas/chroma/v2/doc.go                                                                                        |    7 
vendor/github.com/alecthomas/chroma/v2/emitters.go                                                                                   |  218 
vendor/github.com/alecthomas/chroma/v2/formatter.go                                                                                  |   43 
vendor/github.com/alecthomas/chroma/v2/formatters/api.go                                                                             |   57 
vendor/github.com/alecthomas/chroma/v2/formatters/html/html.go                                                                       |  623 
vendor/github.com/alecthomas/chroma/v2/formatters/json.go                                                                            |   39 
vendor/github.com/alecthomas/chroma/v2/formatters/svg/font_liberation_mono.go                                                        |   50 
vendor/github.com/alecthomas/chroma/v2/formatters/svg/svg.go                                                                         |  222 
vendor/github.com/alecthomas/chroma/v2/formatters/tokens.go                                                                          |   18 
vendor/github.com/alecthomas/chroma/v2/formatters/tty_indexed.go                                                                     |  284 
vendor/github.com/alecthomas/chroma/v2/formatters/tty_truecolour.go                                                                  |   76 
vendor/github.com/alecthomas/chroma/v2/iterator.go                                                                                   |   76 
vendor/github.com/alecthomas/chroma/v2/lexer.go                                                                                      |  162 
vendor/github.com/alecthomas/chroma/v2/lexers/README.md                                                                              |   46 
vendor/github.com/alecthomas/chroma/v2/lexers/caddyfile.go                                                                           |  275 
vendor/github.com/alecthomas/chroma/v2/lexers/cl.go                                                                                  |  243 
vendor/github.com/alecthomas/chroma/v2/lexers/dns.go                                                                                 |   17 
vendor/github.com/alecthomas/chroma/v2/lexers/emacs.go                                                                               |  533 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/abap.xml                                                                      |  102 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/abnf.xml                                                                      |   66 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/actionscript.xml                                                              |   41 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/actionscript_3.xml                                                            |  163 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ada.xml                                                                       |  321 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/agda.xml                                                                      |   56 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/al.xml                                                                        |   39 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/alloy.xml                                                                     |   58 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/angular2.xml                                                                  |  108 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/antlr.xml                                                                     |  317 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/apacheconf.xml                                                                |   74 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/apl.xml                                                                       |   59 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/applescript.xml                                                               |   47 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/arangodb_aql.xml                                                              |  154 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/arduino.xml                                                                   |  187 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/armasm.xml                                                                    |  126 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/atl.xml                                                                       |  165 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/autohotkey.xml                                                                |   42 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/autoit.xml                                                                    |   33 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/awk.xml                                                                       |   95 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ballerina.xml                                                                 |   97 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/bash.xml                                                                      |  220 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/bash_session.xml                                                              |   25 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/batchfile.xml                                                                 |  660 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/beef.xml                                                                      |  120 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/bibtex.xml                                                                    |  152 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/bicep.xml                                                                     |   84 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/blitzbasic.xml                                                                |  141 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/bnf.xml                                                                       |   28 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/bqn.xml                                                                       |   83 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/brainfuck.xml                                                                 |   51 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/c#.xml                                                                        |  121 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/c++.xml                                                                       |  331 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/c.xml                                                                         |  260 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cap_n_proto.xml                                                               |  122 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cassandra_cql.xml                                                             |  137 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ceylon.xml                                                                    |  151 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cfengine3.xml                                                                 |  197 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cfstatement.xml                                                               |   92 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/chaiscript.xml                                                                |  134 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/chapel.xml                                                                    |  143 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cheetah.xml                                                                   |   55 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/clojure.xml                                                                   |   50 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cmake.xml                                                                     |   90 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cobol.xml                                                                     |   63 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/coffeescript.xml                                                              |  210 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/common_lisp.xml                                                               |  184 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/coq.xml                                                                       |  136 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/crystal.xml                                                                   |  762 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/css.xml                                                                       |   65 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/csv.xml                                                                       |   53 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cue.xml                                                                       |   85 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cython.xml                                                                    |  372 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/d.xml                                                                         |  133 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/dart.xml                                                                      |  213 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/dax.xml                                                                       |   12 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/desktop_entry.xml                                                             |   17 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/diff.xml                                                                      |   52 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/django_jinja.xml                                                              |  153 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/dns.xml                                                                       |   17 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/docker.xml                                                                    |   57 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/dtd.xml                                                                       |  168 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/dylan.xml                                                                     |  176 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ebnf.xml                                                                      |   90 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/elixir.xml                                                                    |  744 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/elm.xml                                                                       |  119 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/emacslisp.xml                                                                 |  132 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/erlang.xml                                                                    |   21 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/factor.xml                                                                    |  303 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/fennel.xml                                                                    |   47 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/fish.xml                                                                      |  159 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/forth.xml                                                                     |   35 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/fortran.xml                                                                   |   30 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/fortranfixed.xml                                                              |   71 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/fsharp.xml                                                                    |  245 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/gas.xml                                                                       |  150 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/gdscript.xml                                                                  |  136 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/gdscript3.xml                                                                 |   22 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/gherkin.xml                                                                   |   18 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/gleam.xml                                                                     |  117 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/glsl.xml                                                                      |   65 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/gnuplot.xml                                                                   |  219 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/go_template.xml                                                               |  114 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/graphql.xml                                                                   |   88 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/groff.xml                                                                     |   90 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/groovy.xml                                                                    |  135 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/handlebars.xml                                                                |  147 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/hare.xml                                                                      |   98 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/haskell.xml                                                                   |  275 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/hcl.xml                                                                       |  143 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/hexdump.xml                                                                   |  189 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/hlb.xml                                                                       |  149 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/hlsl.xml                                                                      |   68 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/holyc.xml                                                                     |  252 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/html.xml                                                                      |  159 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/hy.xml                                                                        |  104 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/idris.xml                                                                     |  216 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/igor.xml                                                                      |   26 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ini.xml                                                                       |   45 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/io.xml                                                                        |   71 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/iscdhcpd.xml                                                                  |   96 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/j.xml                                                                         |  157 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/java.xml                                                                      |  193 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/javascript.xml                                                                |  160 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/json.xml                                                                      |  112 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/jsonata.xml                                                                   |   83 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/jsonnet.xml                                                                   |  138 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/julia.xml                                                                     |  198 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/jungle.xml                                                                    |   98 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/kotlin.xml                                                                    |  223 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/lighttpd_configuration_file.xml                                               |   42 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/llvm.xml                                                                      |   61 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/lua.xml                                                                       |  158 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/makefile.xml                                                                  |  131 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/mako.xml                                                                      |  120 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/mason.xml                                                                     |   89 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/materialize_sql_dialect.xml                                                   |   47 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/mathematica.xml                                                               |   60 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/matlab.xml                                                                    |   65 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/mcfunction.xml                                                                |  138 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/meson.xml                                                                     |   85 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/metal.xml                                                                     |  270 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/minizinc.xml                                                                  |   82 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/mlir.xml                                                                      |   73 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/modula-2.xml                                                                  |  245 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/monkeyc.xml                                                                   |  153 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/morrowindscript.xml                                                           |   63 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/myghty.xml                                                                    |   77 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/mysql.xml                                                                     |   77 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/nasm.xml                                                                      |  126 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/natural.xml                                                                   |  102 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ndisasm.xml                                                                   |  123 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/newspeak.xml                                                                  |  121 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/nginx_configuration_file.xml                                                  |   98 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/nim.xml                                                                       |  211 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/nix.xml                                                                       |  258 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/nsis.xml                                                                      |   33 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/objective-c.xml                                                               |  510 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/objectpascal.xml                                                              |  145 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ocaml.xml                                                                     |  145 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/octave.xml                                                                    |   19 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/odin.xml                                                                      |  113 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/onesenterprise.xml                                                            |   92 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/openedge_abl.xml                                                              |   39 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/openscad.xml                                                                  |   96 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/org_mode.xml                                                                  |  329 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/pacmanconf.xml                                                                |   37 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/perl.xml                                                                      |   84 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/php.xml                                                                       |  212 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/pig.xml                                                                       |  105 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/pkgconfig.xml                                                                 |   73 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/pl_pgsql.xml                                                                  |   35 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/plaintext.xml                                                                 |   21 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/plutus_core.xml                                                               |  105 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/pony.xml                                                                      |  135 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/postgresql_sql_dialect.xml                                                    |   47 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/postscript.xml                                                                |   89 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/povray.xml                                                                    |   22 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/powerquery.xml                                                                |   51 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/powershell.xml                                                                |  230 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/prolog.xml                                                                    |  115 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/promela.xml                                                                   |  119 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/promql.xml                                                                    |  123 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/properties.xml                                                                |   45 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/protocol_buffer.xml                                                           |  118 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/prql.xml                                                                      |  161 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/psl.xml                                                                       |  213 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/puppet.xml                                                                    |   94 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/python.xml                                                                    |  260 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/python_2.xml                                                                  |  356 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/qbasic.xml                                                                    |  173 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/qml.xml                                                                       |  113 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/r.xml                                                                         |  128 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/racket.xml                                                                    |  213 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ragel.xml                                                                     |  149 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/react.xml                                                                     |  236 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/reasonml.xml                                                                  |  147 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/reg.xml                                                                       |   68 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/rego.xml                                                                      |   94 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/rexx.xml                                                                      |  127 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/rpm_spec.xml                                                                  |   58 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ruby.xml                                                                      |  724 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/rust.xml                                                                      |  375 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sas.xml                                                                       |  129 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sass.xml                                                                      |  123 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/scala.xml                                                                     |  163 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/scheme.xml                                                                    |   59 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/scilab.xml                                                                    |   21 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/scss.xml                                                                      |   53 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sed.xml                                                                       |   28 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sieve.xml                                                                     |   61 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/smali.xml                                                                     |   73 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/smalltalk.xml                                                                 |  294 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/smarty.xml                                                                    |   79 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/snbt.xml                                                                      |   58 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/snobol.xml                                                                    |   95 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/solidity.xml                                                                  |  125 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sourcepawn.xml                                                                |   59 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sparql.xml                                                                    |  160 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sql.xml                                                                       |   29 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/squidconf.xml                                                                 |   20 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/standard_ml.xml                                                               |  548 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/stas.xml                                                                      |   85 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/stylus.xml                                                                    |   16 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/swift.xml                                                                     |  106 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/systemd.xml                                                                   |   63 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/systemverilog.xml                                                             |  129 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tablegen.xml                                                                  |   69 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tal.xml                                                                       |   43 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tasm.xml                                                                      |  135 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tcl.xml                                                                       |  272 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tcsh.xml                                                                      |  121 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/termcap.xml                                                                   |   75 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/terminfo.xml                                                                  |   84 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/terraform.xml                                                                 |  140 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tex.xml                                                                       |  113 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/thrift.xml                                                                    |  154 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/toml.xml                                                                      |   44 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tradingview.xml                                                               |   42 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/transact-sql.xml                                                              |   38 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/turing.xml                                                                    |   82 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/turtle.xml                                                                    |  170 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/twig.xml                                                                      |  155 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/typescript.xml                                                                |  295 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/typoscript.xml                                                                |  178 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/typoscriptcssdata.xml                                                         |   52 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/typoscripthtmldata.xml                                                        |   52 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/typst.xml                                                                     |  108 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ucode.xml                                                                     |  147 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/v.xml                                                                         |  100 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/v_shell.xml                                                                   |  144 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/vala.xml                                                                      |   72 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/vb_net.xml                                                                    |  162 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/verilog.xml                                                                   |  158 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/vhdl.xml                                                                      |  171 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/vhs.xml                                                                       |   48 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/viml.xml                                                                      |   85 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/vue.xml                                                                       |  307 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/wdte.xml                                                                      |   43 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/webgpu_shading_language.xml                                                   |   32 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/webvtt.xml                                                                    |  283 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/whiley.xml                                                                    |   57 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/xml.xml                                                                       |   95 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/xorg.xml                                                                      |   35 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/yaml.xml                                                                      |  122 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/yang.xml                                                                      |   99 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/z80_assembly.xml                                                              |   74 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/zed.xml                                                                       |   51 
vendor/github.com/alecthomas/chroma/v2/lexers/embedded/zig.xml                                                                       |  112 
vendor/github.com/alecthomas/chroma/v2/lexers/genshi.go                                                                              |  118 
vendor/github.com/alecthomas/chroma/v2/lexers/go.go                                                                                  |   81 
vendor/github.com/alecthomas/chroma/v2/lexers/haxe.go                                                                                |  647 
vendor/github.com/alecthomas/chroma/v2/lexers/html.go                                                                                |    8 
vendor/github.com/alecthomas/chroma/v2/lexers/http.go                                                                                |  131 
vendor/github.com/alecthomas/chroma/v2/lexers/lexers.go                                                                              |   79 
vendor/github.com/alecthomas/chroma/v2/lexers/markdown.go                                                                            |   46 
vendor/github.com/alecthomas/chroma/v2/lexers/mysql.go                                                                               |   33 
vendor/github.com/alecthomas/chroma/v2/lexers/php.go                                                                                 |   37 
vendor/github.com/alecthomas/chroma/v2/lexers/raku.go                                                                                | 1721 
vendor/github.com/alecthomas/chroma/v2/lexers/rst.go                                                                                 |   89 
vendor/github.com/alecthomas/chroma/v2/lexers/svelte.go                                                                              |   70 
vendor/github.com/alecthomas/chroma/v2/lexers/typoscript.go                                                                          |   85 
vendor/github.com/alecthomas/chroma/v2/lexers/zed.go                                                                                 |   24 
vendor/github.com/alecthomas/chroma/v2/mutators.go                                                                                   |  201 
vendor/github.com/alecthomas/chroma/v2/pygments-lexers.txt                                                                           |  322 
vendor/github.com/alecthomas/chroma/v2/quick/quick.go                                                                                |   44 
vendor/github.com/alecthomas/chroma/v2/regexp.go                                                                                     |  489 
vendor/github.com/alecthomas/chroma/v2/registry.go                                                                                   |  210 
vendor/github.com/alecthomas/chroma/v2/remap.go                                                                                      |   94 
vendor/github.com/alecthomas/chroma/v2/renovate.json5                                                                                |   18 
vendor/github.com/alecthomas/chroma/v2/serialise.go                                                                                  |  479 
vendor/github.com/alecthomas/chroma/v2/style.go                                                                                      |  481 
vendor/github.com/alecthomas/chroma/v2/styles/abap.xml                                                                               |   11 
vendor/github.com/alecthomas/chroma/v2/styles/algol.xml                                                                              |   18 
vendor/github.com/alecthomas/chroma/v2/styles/algol_nu.xml                                                                           |   18 
vendor/github.com/alecthomas/chroma/v2/styles/api.go                                                                                 |   65 
vendor/github.com/alecthomas/chroma/v2/styles/arduino.xml                                                                            |   18 
vendor/github.com/alecthomas/chroma/v2/styles/autumn.xml                                                                             |   36 
vendor/github.com/alecthomas/chroma/v2/styles/average.xml                                                                            |   74 
vendor/github.com/alecthomas/chroma/v2/styles/base16-snazzy.xml                                                                      |   74 
vendor/github.com/alecthomas/chroma/v2/styles/borland.xml                                                                            |   26 
vendor/github.com/alecthomas/chroma/v2/styles/bw.xml                                                                                 |   23 
vendor/github.com/alecthomas/chroma/v2/styles/catppuccin-frappe.xml                                                                  |   83 
vendor/github.com/alecthomas/chroma/v2/styles/catppuccin-latte.xml                                                                   |   83 
vendor/github.com/alecthomas/chroma/v2/styles/catppuccin-macchiato.xml                                                               |   83 
vendor/github.com/alecthomas/chroma/v2/styles/catppuccin-mocha.xml                                                                   |   83 
vendor/github.com/alecthomas/chroma/v2/styles/colorful.xml                                                                           |   52 
vendor/github.com/alecthomas/chroma/v2/styles/compat.go                                                                              |   66 
vendor/github.com/alecthomas/chroma/v2/styles/doom-one.xml                                                                           |   51 
vendor/github.com/alecthomas/chroma/v2/styles/doom-one2.xml                                                                          |   64 
vendor/github.com/alecthomas/chroma/v2/styles/dracula.xml                                                                            |   74 
vendor/github.com/alecthomas/chroma/v2/styles/emacs.xml                                                                              |   44 
vendor/github.com/alecthomas/chroma/v2/styles/evergarden.xml                                                                         |   33 
vendor/github.com/alecthomas/chroma/v2/styles/friendly.xml                                                                           |   44 
vendor/github.com/alecthomas/chroma/v2/styles/fruity.xml                                                                             |   19 
vendor/github.com/alecthomas/chroma/v2/styles/github-dark.xml                                                                        |   45 
vendor/github.com/alecthomas/chroma/v2/styles/github.xml                                                                             |   39 
vendor/github.com/alecthomas/chroma/v2/styles/gruvbox-light.xml                                                                      |   33 
vendor/github.com/alecthomas/chroma/v2/styles/gruvbox.xml                                                                            |   33 
vendor/github.com/alecthomas/chroma/v2/styles/hr_high_contrast.xml                                                                   |   12 
vendor/github.com/alecthomas/chroma/v2/styles/hrdark.xml                                                                             |   10 
vendor/github.com/alecthomas/chroma/v2/styles/igor.xml                                                                               |    9 
vendor/github.com/alecthomas/chroma/v2/styles/lovelace.xml                                                                           |   53 
vendor/github.com/alecthomas/chroma/v2/styles/manni.xml                                                                              |   44 
vendor/github.com/alecthomas/chroma/v2/styles/modus-operandi.xml                                                                     |   13 
vendor/github.com/alecthomas/chroma/v2/styles/modus-vivendi.xml                                                                      |   13 
vendor/github.com/alecthomas/chroma/v2/styles/monokai.xml                                                                            |   29 
vendor/github.com/alecthomas/chroma/v2/styles/monokailight.xml                                                                       |   26 
vendor/github.com/alecthomas/chroma/v2/styles/murphy.xml                                                                             |   52 
vendor/github.com/alecthomas/chroma/v2/styles/native.xml                                                                             |   35 
vendor/github.com/alecthomas/chroma/v2/styles/nord.xml                                                                               |   46 
vendor/github.com/alecthomas/chroma/v2/styles/nordic.xml                                                                             |   46 
vendor/github.com/alecthomas/chroma/v2/styles/onedark.xml                                                                            |   25 
vendor/github.com/alecthomas/chroma/v2/styles/onesenterprise.xml                                                                     |   10 
vendor/github.com/alecthomas/chroma/v2/styles/paraiso-dark.xml                                                                       |   37 
vendor/github.com/alecthomas/chroma/v2/styles/paraiso-light.xml                                                                      |   37 
vendor/github.com/alecthomas/chroma/v2/styles/pastie.xml                                                                             |   45 
vendor/github.com/alecthomas/chroma/v2/styles/perldoc.xml                                                                            |   37 
vendor/github.com/alecthomas/chroma/v2/styles/pygments.xml                                                                           |   42 
vendor/github.com/alecthomas/chroma/v2/styles/rainbow_dash.xml                                                                       |   40 
vendor/github.com/alecthomas/chroma/v2/styles/rose-pine-dawn.xml                                                                     |   29 
vendor/github.com/alecthomas/chroma/v2/styles/rose-pine-moon.xml                                                                     |   29 
vendor/github.com/alecthomas/chroma/v2/styles/rose-pine.xml                                                                          |   29 
vendor/github.com/alecthomas/chroma/v2/styles/rrt.xml                                                                                |   13 
vendor/github.com/alecthomas/chroma/v2/styles/solarized-dark.xml                                                                     |   39 
vendor/github.com/alecthomas/chroma/v2/styles/solarized-dark256.xml                                                                  |   41 
vendor/github.com/alecthomas/chroma/v2/styles/solarized-light.xml                                                                    |   17 
vendor/github.com/alecthomas/chroma/v2/styles/swapoff.xml                                                                            |   18 
vendor/github.com/alecthomas/chroma/v2/styles/tango.xml                                                                              |   72 
vendor/github.com/alecthomas/chroma/v2/styles/tokyonight-day.xml                                                                     |   83 
vendor/github.com/alecthomas/chroma/v2/styles/tokyonight-moon.xml                                                                    |   83 
vendor/github.com/alecthomas/chroma/v2/styles/tokyonight-night.xml                                                                   |   83 
vendor/github.com/alecthomas/chroma/v2/styles/tokyonight-storm.xml                                                                   |   83 
vendor/github.com/alecthomas/chroma/v2/styles/trac.xml                                                                               |   35 
vendor/github.com/alecthomas/chroma/v2/styles/vim.xml                                                                                |   29 
vendor/github.com/alecthomas/chroma/v2/styles/vs.xml                                                                                 |   16 
vendor/github.com/alecthomas/chroma/v2/styles/vulcan.xml                                                                             |   74 
vendor/github.com/alecthomas/chroma/v2/styles/witchhazel.xml                                                                         |   31 
vendor/github.com/alecthomas/chroma/v2/styles/xcode-dark.xml                                                                         |   31 
vendor/github.com/alecthomas/chroma/v2/styles/xcode.xml                                                                              |   22 
vendor/github.com/alecthomas/chroma/v2/table.py                                                                                      |   31 
vendor/github.com/alecthomas/chroma/v2/tokentype_enumer.go                                                                           |    9 
vendor/github.com/alecthomas/chroma/v2/types.go                                                                                      |  343 
vendor/github.com/andybalholm/cascadia/.travis.yml                                                                                   |   14 
vendor/github.com/andybalholm/cascadia/LICENSE                                                                                       |   24 
vendor/github.com/andybalholm/cascadia/README.md                                                                                     |  144 
vendor/github.com/andybalholm/cascadia/parser.go                                                                                     |  889 
vendor/github.com/andybalholm/cascadia/pseudo_classes.go                                                                             |  458 
vendor/github.com/andybalholm/cascadia/selector.go                                                                                   |  586 
vendor/github.com/andybalholm/cascadia/serialize.go                                                                                  |  176 
vendor/github.com/andybalholm/cascadia/specificity.go                                                                                |   26 
vendor/github.com/anthropics/anthropic-sdk-go/.gitignore                                                                             |    4 
vendor/github.com/anthropics/anthropic-sdk-go/.release-please-manifest.json                                                          |    3 
vendor/github.com/anthropics/anthropic-sdk-go/.stats.yml                                                                             |    4 
vendor/github.com/anthropics/anthropic-sdk-go/Brewfile                                                                               |    1 
vendor/github.com/anthropics/anthropic-sdk-go/CHANGELOG.md                                                                           |  240 
vendor/github.com/anthropics/anthropic-sdk-go/CONTRIBUTING.md                                                                        |   66 
vendor/github.com/anthropics/anthropic-sdk-go/LICENSE                                                                                |    8 
vendor/github.com/anthropics/anthropic-sdk-go/README.md                                                                              |  876 
vendor/github.com/anthropics/anthropic-sdk-go/SECURITY.md                                                                            |   27 
vendor/github.com/anthropics/anthropic-sdk-go/aliases.go                                                                             |   50 
vendor/github.com/anthropics/anthropic-sdk-go/api.md                                                                                 |  329 
vendor/github.com/anthropics/anthropic-sdk-go/bedrock/bedrock.go                                                                     |  245 
vendor/github.com/anthropics/anthropic-sdk-go/beta.go                                                                                |  364 
vendor/github.com/anthropics/anthropic-sdk-go/betafile.go                                                                            |  270 
vendor/github.com/anthropics/anthropic-sdk-go/betamessage.go                                                                         | 5823 
vendor/github.com/anthropics/anthropic-sdk-go/betamessagebatch.go                                                                    |  879 
vendor/github.com/anthropics/anthropic-sdk-go/betamodel.go                                                                           |  149 
vendor/github.com/anthropics/anthropic-sdk-go/client.go                                                                              |  126 
vendor/github.com/anthropics/anthropic-sdk-go/completion.go                                                                          |  194 
vendor/github.com/anthropics/anthropic-sdk-go/field.go                                                                               |   45 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apierror/apierror.go                                                          |   50 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apiform/encoder.go                                                            |  465 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apiform/form.go                                                               |    5 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apiform/richparam.go                                                          |   20 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apiform/tag.go                                                                |   51 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/decoder.go                                                            |  691 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/encoder.go                                                            |  392 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/enum.go                                                               |  145 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/field.go                                                              |   23 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/port.go                                                               |  120 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/registry.go                                                           |   51 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/subfield.go                                                           |   67 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/tag.go                                                                |   47 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/union.go                                                              |  202 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apiquery/encoder.go                                                           |  415 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apiquery/query.go                                                             |   55 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apiquery/richparam.go                                                         |   19 
vendor/github.com/anthropics/anthropic-sdk-go/internal/apiquery/tag.go                                                               |   44 
vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/decode.go                                                       | 1324 
vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/encode.go                                                       | 1398 
vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/fold.go                                                         |   48 
vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/indent.go                                                       |  182 
vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/scanner.go                                                      |  610 
vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/sentinel/null.go                                                |   57 
vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/shims/shims.go                                                  |  111 
vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/stream.go                                                       |  512 
vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/tables.go                                                       |  218 
vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/tags.go                                                         |   38 
vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/time.go                                                         |   61 
vendor/github.com/anthropics/anthropic-sdk-go/internal/paramutil/field.go                                                            |   30 
vendor/github.com/anthropics/anthropic-sdk-go/internal/paramutil/sentinel.go                                                         |   31 
vendor/github.com/anthropics/anthropic-sdk-go/internal/paramutil/union.go                                                            |   48 
vendor/github.com/anthropics/anthropic-sdk-go/internal/requestconfig/requestconfig.go                                                |  629 
vendor/github.com/anthropics/anthropic-sdk-go/internal/version.go                                                                    |    5 
vendor/github.com/anthropics/anthropic-sdk-go/message.go                                                                             | 4562 
vendor/github.com/anthropics/anthropic-sdk-go/messagebatch.go                                                                        |  825 
vendor/github.com/anthropics/anthropic-sdk-go/model.go                                                                               |  149 
vendor/github.com/anthropics/anthropic-sdk-go/option/requestoption.go                                                                |  284 
vendor/github.com/anthropics/anthropic-sdk-go/packages/jsonl/jsonl.go                                                                |   57 
vendor/github.com/anthropics/anthropic-sdk-go/packages/pagination/pagination.go                                                      |  134 
vendor/github.com/anthropics/anthropic-sdk-go/packages/param/encoder.go                                                              |  101 
vendor/github.com/anthropics/anthropic-sdk-go/packages/param/option.go                                                               |  121 
vendor/github.com/anthropics/anthropic-sdk-go/packages/param/param.go                                                                |  158 
vendor/github.com/anthropics/anthropic-sdk-go/packages/respjson/respjson.go                                                          |   88 
vendor/github.com/anthropics/anthropic-sdk-go/packages/ssestream/ssestream.go                                                        |  198 
vendor/github.com/anthropics/anthropic-sdk-go/release-please-config.json                                                             |   67 
vendor/github.com/anthropics/anthropic-sdk-go/shared/constant/constants.go                                                           |  304 
vendor/github.com/anthropics/anthropic-sdk-go/shared/shared.go                                                                       |  334 
vendor/github.com/atotto/clipboard/.travis.yml                                                                                       |   22 
vendor/github.com/atotto/clipboard/LICENSE                                                                                           |   27 
vendor/github.com/atotto/clipboard/README.md                                                                                         |   48 
vendor/github.com/atotto/clipboard/clipboard.go                                                                                      |   20 
vendor/github.com/atotto/clipboard/clipboard_darwin.go                                                                               |   52 
vendor/github.com/atotto/clipboard/clipboard_plan9.go                                                                                |   42 
vendor/github.com/atotto/clipboard/clipboard_unix.go                                                                                 |  149 
vendor/github.com/atotto/clipboard/clipboard_windows.go                                                                              |  157 
vendor/github.com/aws/aws-sdk-go-v2/LICENSE.txt                                                                                      |  202 
vendor/github.com/aws/aws-sdk-go-v2/NOTICE.txt                                                                                       |    3 
vendor/github.com/aws/aws-sdk-go-v2/aws/accountid_endpoint_mode.go                                                                   |   18 
vendor/github.com/aws/aws-sdk-go-v2/aws/config.go                                                                                    |  211 
vendor/github.com/aws/aws-sdk-go-v2/aws/context.go                                                                                   |   22 
vendor/github.com/aws/aws-sdk-go-v2/aws/credential_cache.go                                                                          |  224 
vendor/github.com/aws/aws-sdk-go-v2/aws/credentials.go                                                                               |  173 
vendor/github.com/aws/aws-sdk-go-v2/aws/defaults/auto.go                                                                             |   38 
vendor/github.com/aws/aws-sdk-go-v2/aws/defaults/configuration.go                                                                    |   43 
vendor/github.com/aws/aws-sdk-go-v2/aws/defaults/defaults.go                                                                         |   50 
vendor/github.com/aws/aws-sdk-go-v2/aws/defaults/doc.go                                                                              |    2 
vendor/github.com/aws/aws-sdk-go-v2/aws/defaultsmode.go                                                                              |   95 
vendor/github.com/aws/aws-sdk-go-v2/aws/doc.go                                                                                       |   62 
vendor/github.com/aws/aws-sdk-go-v2/aws/endpoints.go                                                                                 |  247 
vendor/github.com/aws/aws-sdk-go-v2/aws/errors.go                                                                                    |    9 
vendor/github.com/aws/aws-sdk-go-v2/aws/from_ptr.go                                                                                  |  365 
vendor/github.com/aws/aws-sdk-go-v2/aws/go_module_metadata.go                                                                        |    6 
vendor/github.com/aws/aws-sdk-go-v2/aws/logging.go                                                                                   |  119 
vendor/github.com/aws/aws-sdk-go-v2/aws/logging_generate.go                                                                          |   95 
vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/metadata.go                                                                       |  213 
vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/middleware.go                                                                     |  168 
vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/osname.go                                                                         |   24 
vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/osname_go115.go                                                                   |   24 
vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/private/metrics/metrics.go                                                        |  320 
vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/recursion_detection.go                                                            |   94 
vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/request_id.go                                                                     |   27 
vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/request_id_retriever.go                                                           |   53 
vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/user_agent.go                                                                     |  305 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/CHANGELOG.md                                                            |  114 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/LICENSE.txt                                                             |  202 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/debug.go                                                                |  144 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/decode.go                                                               |  218 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/encode.go                                                               |  167 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/error.go                                                                |   23 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/eventstreamapi/headers.go                                               |   24 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/eventstreamapi/middleware.go                                            |   71 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/eventstreamapi/transport.go                                             |   13 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/eventstreamapi/transport_go117.go                                       |   12 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/go_module_metadata.go                                                   |    6 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/header.go                                                               |  175 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/header_value.go                                                         |  521 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/message.go                                                              |  117 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/query/array.go                                                                      |   72 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/query/encoder.go                                                                    |   80 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/query/map.go                                                                        |   78 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/query/middleware.go                                                                 |   62 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/query/object.go                                                                     |   69 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/query/value.go                                                                      |  115 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/restjson/decoder_util.go                                                            |   85 
vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/xml/error_utils.go                                                                  |   48 
vendor/github.com/aws/aws-sdk-go-v2/aws/ratelimit/none.go                                                                            |   20 
vendor/github.com/aws/aws-sdk-go-v2/aws/ratelimit/token_bucket.go                                                                    |   96 
vendor/github.com/aws/aws-sdk-go-v2/aws/ratelimit/token_rate_limit.go                                                                |   83 
vendor/github.com/aws/aws-sdk-go-v2/aws/request.go                                                                                   |   25 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/adaptive.go                                                                            |  156 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/adaptive_ratelimit.go                                                                  |  158 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/adaptive_token_bucket.go                                                               |   83 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/doc.go                                                                                 |   80 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/errors.go                                                                              |   20 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/jitter_backoff.go                                                                      |   49 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/metadata.go                                                                            |   52 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/middleware.go                                                                          |  383 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/retry.go                                                                               |   90 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/retryable_error.go                                                                     |  222 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/standard.go                                                                            |  269 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/throttle_error.go                                                                      |   60 
vendor/github.com/aws/aws-sdk-go-v2/aws/retry/timeout_error.go                                                                       |   52 
vendor/github.com/aws/aws-sdk-go-v2/aws/retryer.go                                                                                   |  127 
vendor/github.com/aws/aws-sdk-go-v2/aws/runtime.go                                                                                   |   14 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/cache.go                                                                  |  115 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/const.go                                                                  |   40 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/header_rules.go                                                           |   82 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/headers.go                                                                |   70 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/hmac.go                                                                   |   13 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/host.go                                                                   |   75 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/scope.go                                                                  |   13 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/time.go                                                                   |   36 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/util.go                                                                   |   80 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/v4/middleware.go                                                                      |  414 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/v4/presign_middleware.go                                                              |  127 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/v4/stream.go                                                                          |   86 
vendor/github.com/aws/aws-sdk-go-v2/aws/signer/v4/v4.go                                                                              |  559 
vendor/github.com/aws/aws-sdk-go-v2/aws/to_ptr.go                                                                                    |  297 
vendor/github.com/aws/aws-sdk-go-v2/aws/transport/http/client.go                                                                     |  310 
vendor/github.com/aws/aws-sdk-go-v2/aws/transport/http/content_type.go                                                               |   42 
vendor/github.com/aws/aws-sdk-go-v2/aws/transport/http/response_error.go                                                             |   33 
vendor/github.com/aws/aws-sdk-go-v2/aws/transport/http/response_error_middleware.go                                                  |   56 
vendor/github.com/aws/aws-sdk-go-v2/aws/transport/http/timeout_read_closer.go                                                        |  104 
vendor/github.com/aws/aws-sdk-go-v2/aws/types.go                                                                                     |   42 
vendor/github.com/aws/aws-sdk-go-v2/aws/version.go                                                                                   |    8 
vendor/github.com/aws/aws-sdk-go-v2/config/CHANGELOG.md                                                                              |  678 
vendor/github.com/aws/aws-sdk-go-v2/config/LICENSE.txt                                                                               |  202 
vendor/github.com/aws/aws-sdk-go-v2/config/config.go                                                                                 |  222 
vendor/github.com/aws/aws-sdk-go-v2/config/defaultsmode.go                                                                           |   47 
vendor/github.com/aws/aws-sdk-go-v2/config/doc.go                                                                                    |   20 
vendor/github.com/aws/aws-sdk-go-v2/config/env_config.go                                                                             |  856 
vendor/github.com/aws/aws-sdk-go-v2/config/generate.go                                                                               |    4 
vendor/github.com/aws/aws-sdk-go-v2/config/go_module_metadata.go                                                                     |    6 
vendor/github.com/aws/aws-sdk-go-v2/config/load_options.go                                                                           | 1141 
vendor/github.com/aws/aws-sdk-go-v2/config/local.go                                                                                  |   51 
vendor/github.com/aws/aws-sdk-go-v2/config/provider.go                                                                               |  721 
vendor/github.com/aws/aws-sdk-go-v2/config/resolve.go                                                                                |  383 
vendor/github.com/aws/aws-sdk-go-v2/config/resolve_bearer_token.go                                                                   |  122 
vendor/github.com/aws/aws-sdk-go-v2/config/resolve_credentials.go                                                                    |  566 
vendor/github.com/aws/aws-sdk-go-v2/config/shared_config.go                                                                          | 1618 
vendor/github.com/aws/aws-sdk-go-v2/credentials/CHANGELOG.md                                                                         |  591 
vendor/github.com/aws/aws-sdk-go-v2/credentials/LICENSE.txt                                                                          |  202 
vendor/github.com/aws/aws-sdk-go-v2/credentials/doc.go                                                                               |    4 
vendor/github.com/aws/aws-sdk-go-v2/credentials/ec2rolecreds/doc.go                                                                  |   58 
vendor/github.com/aws/aws-sdk-go-v2/credentials/ec2rolecreds/provider.go                                                             |  229 
vendor/github.com/aws/aws-sdk-go-v2/credentials/endpointcreds/internal/client/auth.go                                                |   48 
vendor/github.com/aws/aws-sdk-go-v2/credentials/endpointcreds/internal/client/client.go                                              |  165 
vendor/github.com/aws/aws-sdk-go-v2/credentials/endpointcreds/internal/client/endpoints.go                                           |   20 
vendor/github.com/aws/aws-sdk-go-v2/credentials/endpointcreds/internal/client/middleware.go                                          |  164 
vendor/github.com/aws/aws-sdk-go-v2/credentials/endpointcreds/provider.go                                                            |  193 
vendor/github.com/aws/aws-sdk-go-v2/credentials/go_module_metadata.go                                                                |    6 
vendor/github.com/aws/aws-sdk-go-v2/credentials/processcreds/doc.go                                                                  |   92 
vendor/github.com/aws/aws-sdk-go-v2/credentials/processcreds/provider.go                                                             |  285 
vendor/github.com/aws/aws-sdk-go-v2/credentials/ssocreds/doc.go                                                                      |   81 
vendor/github.com/aws/aws-sdk-go-v2/credentials/ssocreds/sso_cached_token.go                                                         |  233 
vendor/github.com/aws/aws-sdk-go-v2/credentials/ssocreds/sso_credentials_provider.go                                                 |  153 
vendor/github.com/aws/aws-sdk-go-v2/credentials/ssocreds/sso_token_provider.go                                                       |  147 
vendor/github.com/aws/aws-sdk-go-v2/credentials/static_provider.go                                                                   |   53 
vendor/github.com/aws/aws-sdk-go-v2/credentials/stscreds/assume_role_provider.go                                                     |  326 
vendor/github.com/aws/aws-sdk-go-v2/credentials/stscreds/web_identity_provider.go                                                    |  169 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/CHANGELOG.md                                                                    |  355 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/LICENSE.txt                                                                     |  202 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_client.go                                                                   |  352 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetDynamicData.go                                                        |   77 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetIAMInfo.go                                                            |  103 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetInstanceIdentityDocument.go                                           |  110 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetMetadata.go                                                           |   77 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetRegion.go                                                             |   73 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetToken.go                                                              |  119 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetUserData.go                                                           |   61 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/auth.go                                                                         |   48 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/doc.go                                                                          |   12 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/endpoints.go                                                                    |   20 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/go_module_metadata.go                                                           |    6 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/internal/config/resolvers.go                                                    |  114 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/request_middleware.go                                                           |  313 
vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/token_provider.go                                                               |  261 
vendor/github.com/aws/aws-sdk-go-v2/internal/auth/auth.go                                                                            |   45 
vendor/github.com/aws/aws-sdk-go-v2/internal/auth/scheme.go                                                                          |  191 
vendor/github.com/aws/aws-sdk-go-v2/internal/auth/smithy/bearer_token_adapter.go                                                     |   43 
vendor/github.com/aws/aws-sdk-go-v2/internal/auth/smithy/bearer_token_signer_adapter.go                                              |   35 
vendor/github.com/aws/aws-sdk-go-v2/internal/auth/smithy/credentials_adapter.go                                                      |   46 
vendor/github.com/aws/aws-sdk-go-v2/internal/auth/smithy/smithy.go                                                                   |    2 
vendor/github.com/aws/aws-sdk-go-v2/internal/auth/smithy/v4signer_adapter.go                                                         |   57 
vendor/github.com/aws/aws-sdk-go-v2/internal/configsources/CHANGELOG.md                                                              |  320 
vendor/github.com/aws/aws-sdk-go-v2/internal/configsources/LICENSE.txt                                                               |  202 
vendor/github.com/aws/aws-sdk-go-v2/internal/configsources/config.go                                                                 |   65 
vendor/github.com/aws/aws-sdk-go-v2/internal/configsources/endpoints.go                                                              |   57 
vendor/github.com/aws/aws-sdk-go-v2/internal/configsources/go_module_metadata.go                                                     |    6 
vendor/github.com/aws/aws-sdk-go-v2/internal/context/context.go                                                                      |   52 
vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn/arn.go                                                             |   94 
vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn/doc.go                                                             |    3 
vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn/generate.go                                                        |    7 
vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn/host.go                                                            |   51 
vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn/partition.go                                                       |   76 
vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn/partitions.go                                                      |  403 
vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn/partitions.json                                                    |  220 
vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/endpoints.go                                                                  |  201 
vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/v2/CHANGELOG.md                                                               |  294 
vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/v2/LICENSE.txt                                                                |  202 
vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/v2/endpoints.go                                                               |  302 
vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/v2/go_module_metadata.go                                                      |    6 
vendor/github.com/aws/aws-sdk-go-v2/internal/ini/CHANGELOG.md                                                                        |  271 
vendor/github.com/aws/aws-sdk-go-v2/internal/ini/LICENSE.txt                                                                         |  202 
vendor/github.com/aws/aws-sdk-go-v2/internal/ini/errors.go                                                                           |   22 
vendor/github.com/aws/aws-sdk-go-v2/internal/ini/go_module_metadata.go                                                               |    6 
vendor/github.com/aws/aws-sdk-go-v2/internal/ini/ini.go                                                                              |   56 
vendor/github.com/aws/aws-sdk-go-v2/internal/ini/parse.go                                                                            |  109 
vendor/github.com/aws/aws-sdk-go-v2/internal/ini/sections.go                                                                         |  157 
vendor/github.com/aws/aws-sdk-go-v2/internal/ini/strings.go                                                                          |   89 
vendor/github.com/aws/aws-sdk-go-v2/internal/ini/token.go                                                                            |   32 
vendor/github.com/aws/aws-sdk-go-v2/internal/ini/tokenize.go                                                                         |   92 
vendor/github.com/aws/aws-sdk-go-v2/internal/ini/value.go                                                                            |   93 
vendor/github.com/aws/aws-sdk-go-v2/internal/middleware/middleware.go                                                                |   42 
vendor/github.com/aws/aws-sdk-go-v2/internal/rand/rand.go                                                                            |   33 
vendor/github.com/aws/aws-sdk-go-v2/internal/sdk/interfaces.go                                                                       |    9 
vendor/github.com/aws/aws-sdk-go-v2/internal/sdk/time.go                                                                             |   74 
vendor/github.com/aws/aws-sdk-go-v2/internal/sdkio/byte.go                                                                           |   12 
vendor/github.com/aws/aws-sdk-go-v2/internal/shareddefaults/shared_config.go                                                         |   47 
vendor/github.com/aws/aws-sdk-go-v2/internal/strings/strings.go                                                                      |   11 
vendor/github.com/aws/aws-sdk-go-v2/internal/sync/singleflight/LICENSE                                                               |   28 
vendor/github.com/aws/aws-sdk-go-v2/internal/sync/singleflight/docs.go                                                               |    7 
vendor/github.com/aws/aws-sdk-go-v2/internal/sync/singleflight/singleflight.go                                                       |  210 
vendor/github.com/aws/aws-sdk-go-v2/internal/timeconv/duration.go                                                                    |   13 
vendor/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding/CHANGELOG.md                                                    |  140 
vendor/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding/LICENSE.txt                                                     |  202 
vendor/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding/accept_encoding_gzip.go                                         |  176 
vendor/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding/doc.go                                                          |   22 
vendor/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding/go_module_metadata.go                                           |    6 
vendor/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url/CHANGELOG.md                                                      |  346 
vendor/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url/LICENSE.txt                                                       |  202 
vendor/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url/context.go                                                        |   56 
vendor/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url/doc.go                                                            |    3 
vendor/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url/go_module_metadata.go                                             |    6 
vendor/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url/middleware.go                                                     |  110 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/CHANGELOG.md                                                                         |  475 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/LICENSE.txt                                                                          |  202 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/api_client.go                                                                        |  627 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/api_op_GetRoleCredentials.go                                                         |  153 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/api_op_ListAccountRoles.go                                                           |  251 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/api_op_ListAccounts.go                                                               |  249 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/api_op_Logout.go                                                                     |  152 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/auth.go                                                                              |  308 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/deserializers.go                                                                     | 1161 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/doc.go                                                                               |   27 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/endpoints.go                                                                         |  550 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/generated.json                                                                       |   35 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/go_module_metadata.go                                                                |    6 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/internal/endpoints/endpoints.go                                                      |  566 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/options.go                                                                           |  227 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/serializers.go                                                                       |  284 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/types/errors.go                                                                      |  115 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/types/types.go                                                                       |   63 
vendor/github.com/aws/aws-sdk-go-v2/service/sso/validators.go                                                                        |  175 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/CHANGELOG.md                                                                     |  469 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/LICENSE.txt                                                                      |  202 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/api_client.go                                                                    |  627 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/api_op_CreateToken.go                                                            |  225 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/api_op_CreateTokenWithIAM.go                                                     |  256 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/api_op_RegisterClient.go                                                         |  186 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/api_op_StartDeviceAuthorization.go                                               |  176 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/auth.go                                                                          |  302 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/deserializers.go                                                                 | 2167 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/doc.go                                                                           |   46 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/endpoints.go                                                                     |  550 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/generated.json                                                                   |   35 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/go_module_metadata.go                                                            |    6 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/internal/endpoints/endpoints.go                                                  |  566 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/options.go                                                                       |  227 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/serializers.go                                                                   |  487 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/types/errors.go                                                                  |  428 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/types/types.go                                                                   |    9 
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/validators.go                                                                    |  184 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/CHANGELOG.md                                                                         |  493 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/LICENSE.txt                                                                          |  202 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_client.go                                                                        |  779 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_AssumeRole.go                                                                 |  520 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_AssumeRoleWithSAML.go                                                         |  436 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_AssumeRoleWithWebIdentity.go                                                  |  447 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_DecodeAuthorizationMessage.go                                                 |  177 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_GetAccessKeyInfo.go                                                           |  168 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_GetCallerIdentity.go                                                          |  180 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_GetFederationToken.go                                                         |  381 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_GetSessionToken.go                                                            |  227 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/auth.go                                                                              |  296 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/deserializers.go                                                                     | 2516 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/doc.go                                                                               |   13 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/endpoints.go                                                                         | 1130 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/generated.json                                                                       |   41 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/go_module_metadata.go                                                                |    6 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/internal/endpoints/endpoints.go                                                      |  512 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/options.go                                                                           |  227 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/serializers.go                                                                       |  862 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/types/errors.go                                                                      |  248 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/types/types.go                                                                       |  144 
vendor/github.com/aws/aws-sdk-go-v2/service/sts/validators.go                                                                        |  305 
vendor/github.com/aws/smithy-go/.gitignore                                                                                           |   29 
vendor/github.com/aws/smithy-go/.travis.yml                                                                                          |   28 
vendor/github.com/aws/smithy-go/CHANGELOG.md                                                                                         |  239 
vendor/github.com/aws/smithy-go/CODE_OF_CONDUCT.md                                                                                   |    4 
vendor/github.com/aws/smithy-go/CONTRIBUTING.md                                                                                      |   59 
vendor/github.com/aws/smithy-go/LICENSE                                                                                              |  175 
vendor/github.com/aws/smithy-go/Makefile                                                                                             |  102 
vendor/github.com/aws/smithy-go/NOTICE                                                                                               |    1 
vendor/github.com/aws/smithy-go/README.md                                                                                            |   27 
1,000 files changed, 162,262 insertions(+), 289 deletions(-)

Detailed changes

.github/workflows/build.yml 🔗

@@ -7,3 +7,5 @@ jobs:
     with:
       go-version: ""
       go-version-file: ./go.mod
+    secrets:
+      gh_pat: "${{ secrets.PERSONAL_ACCESS_TOKEN }}"

.golangci.yml 🔗

@@ -6,7 +6,7 @@ linters:
     # - goconst
     # - godot
     # - godox
-    - gomoddirectives
+    # - gomoddirectives
     - goprintffuncname
     # - gosec
     - misspell

README.md 🔗

@@ -12,10 +12,12 @@ Crush is a tool for building software with AI.
 
 ## Installation
 
-* [Packages](https://github.com/charmbracelet/crush/releases/tag/nightly) are available in Debian and RPM formats
-* [Binaries](https://github.com/charmbracelet/crush/releases/tag/nightly) are available for Linux and macOS
+Nightly builds are available while Crush is in development.
 
-Or just install it with go:
+- [Packages](https://github.com/charmbracelet/crush/releases/tag/nightly) are available in Debian and RPM formats
+- [Binaries](https://github.com/charmbracelet/crush/releases/tag/nightly) are available for Linux and macOS
+
+You can also just install it with go:
 
 ```
 git clone git@github.com:charmbracelet/crush.git
@@ -23,6 +25,8 @@ cd crush
 go install
 ```
 
+Note that Crush doesn't support Windows yet; however, Windows support is planned and in progress.
+
 ## Getting Started
 
 For now, the quickest way to get started is to set an environment variable for
@@ -45,13 +49,77 @@ providers.
 | `AZURE_OPENAI_API_KEY`     | Azure OpenAI models (optional when using Entra ID) |
 | `AZURE_OPENAI_API_VERSION` | Azure OpenAI models                                |
 
+## Configuration
+
+For many use cases, Crush can be run with no config. That said, if you do need config, it can be added either locally to the project or globally. Configuration has the following priority:
+
+1. `.crush.json`
+2. `crush.json`
+3. `$HOME/.config/crush/crush.json`
+
+### LSPs
+
+Crush can use LSPs for additional context to help inform its decisions, just like you would. LSPs can be added manually like so:
+
+```json
+{
+  "lsp": {
+    "go": {
+      "command": "gopls"
+    },
+    "typescript": {
+      "command": "typescript-language-server",
+      "args": ["--stdio"]
+    },
+    "nix": {
+      "command": "alejandra"
+    }
+  }
+}
+```
+
+### OpenAI-Compatible APIs
+
+Crush supports all OpenAI-compatible APIs. Here's an example configuration for Deepseek, which uses an OpenAI-compatible API. Don't forget to set `DEEPSEEK_API_KEY` in your environment.
+
+```json
+{
+  "providers": {
+    "deepseek": {
+      "provider_type": "openai",
+      "base_url": "https://api.deepseek.com/v1",
+      "models": [
+        {
+          "id": "deepseek-chat",
+          "model": "Deepseek V3",
+          "cost_per_1m_in": 0.27,
+          "cost_per_1m_out": 1.1,
+          "cost_per_1m_in_cached": 0.07,
+          "cost_per_1m_out_cached": 1.1,
+          "context_window": 64000,
+          "default_max_tokens": 5000
+        }
+      ]
+    }
+  }
+}
+```
+
+## Whatcha think?
+
+We’d love to hear your thoughts on this project. Feel free to drop us a note!
+
+- [Twitter](https://twitter.com/charmcli)
+- [The Fediverse](https://mastodon.social/@charmcli)
+- [Discord](https://charm.sh/chat)
+
 ## License
 
 [MIT](https://github.com/charmbracelet/crush/raw/main/LICENSE)
 
 ---
 
-Part of [Charm](https://charm.sh).
+Part of [Charm](https://charm.land).
 
 <a href="https://charm.sh/"><img alt="The Charm logo" width="400" src="https://stuff.charm.sh/charm-banner-next.jpg" /></a>
 

go.mod 🔗

@@ -2,6 +2,10 @@ module github.com/charmbracelet/crush
 
 go 1.24.3
 
+replace github.com/charmbracelet/bubbletea/v2 => github.com/charmbracelet/bubbletea-internal/v2 v2.0.0-20250708152737-144080f3d891
+
+replace github.com/charmbracelet/lipgloss/v2 => github.com/charmbracelet/lipgloss-internal/v2 v2.0.0-20250708152830-0fa4ef151093
+
 require (
 	github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0
 	github.com/JohannesKaufmann/html-to-markdown v1.6.0
@@ -13,12 +17,12 @@ require (
 	github.com/bmatcuk/doublestar/v4 v4.8.1
 	github.com/charlievieth/fastwalk v1.0.11
 	github.com/charmbracelet/bubbles/v2 v2.0.0-beta.1.0.20250607113720-eb5e1cf3b09e
-	github.com/charmbracelet/bubbletea/v2 v2.0.0-beta.3.0.20250609143341-c76fa36f1b94
+	github.com/charmbracelet/bubbletea/v2 v2.0.0-beta.1
 	github.com/charmbracelet/fang v0.1.0
 	github.com/charmbracelet/glamour/v2 v2.0.0-20250516160903-6f1e2c8f9ebe
 	github.com/charmbracelet/lipgloss/v2 v2.0.0-beta.2.0.20250703152125-8e1c474f8a71
 	github.com/charmbracelet/log/v2 v2.0.0-20250226163916-c379e29ff706
-	github.com/charmbracelet/x/ansi v0.9.3-0.20250602153603-fb931ed90413
+	github.com/charmbracelet/x/ansi v0.9.3
 	github.com/charmbracelet/x/exp/charmtone v0.0.0-20250627134340-c144409e381c
 	github.com/charmbracelet/x/exp/golden v0.0.0-20250207160936-21c02780d27a
 	github.com/disintegration/imageorient v0.0.0-20180920195336-8147d86e83ec
@@ -71,10 +75,11 @@ require (
 	github.com/aymanbagabas/go-osc52/v2 v2.0.1 // indirect
 	github.com/aymerick/douceur v0.2.0 // indirect
 	github.com/charmbracelet/colorprofile v0.3.1 // indirect
+	github.com/charmbracelet/ultraviolet v0.0.0-20250708152637-0fe0235c8db9 // indirect
 	github.com/charmbracelet/x/cellbuf v0.0.14-0.20250516160309-24eee56f89fa // indirect
 	github.com/charmbracelet/x/exp/slice v0.0.0-20250611152503-f53cdd7e01ef
-	github.com/charmbracelet/x/input v0.3.5-0.20250509021451-13796e822d86 // indirect
 	github.com/charmbracelet/x/term v0.2.1
+	github.com/charmbracelet/x/termios v0.1.1 // indirect
 	github.com/charmbracelet/x/windows v0.2.1 // indirect
 	github.com/davecgh/go-spew v1.1.1 // indirect
 	github.com/disintegration/gift v1.1.2 // indirect
@@ -128,8 +133,8 @@ require (
 	golang.org/x/crypto v0.37.0 // indirect
 	golang.org/x/image v0.26.0 // indirect
 	golang.org/x/net v0.39.0 // indirect
-	golang.org/x/sync v0.13.0 // indirect
-	golang.org/x/sys v0.32.0 // indirect
+	golang.org/x/sync v0.15.0 // indirect
+	golang.org/x/sys v0.33.0 // indirect
 	golang.org/x/term v0.31.0 // indirect
 	golang.org/x/text v0.24.0 // indirect
 	google.golang.org/genai v1.3.0

go.sum 🔗

@@ -70,20 +70,22 @@ github.com/charlievieth/fastwalk v1.0.11 h1:5sLT/q9+d9xMdpKExawLppqvXFZCVKf6JHnr
 github.com/charlievieth/fastwalk v1.0.11/go.mod h1:yGy1zbxog41ZVMcKA/i8ojXLFsuayX5VvwhQVoj9PBI=
 github.com/charmbracelet/bubbles/v2 v2.0.0-beta.1.0.20250607113720-eb5e1cf3b09e h1:99Ugtt633rqauFsXjZobZmtkNpeaWialfj8dl6COC6A=
 github.com/charmbracelet/bubbles/v2 v2.0.0-beta.1.0.20250607113720-eb5e1cf3b09e/go.mod h1:6HamsBKWqEC/FVHuQMHgQL+knPyvHH55HwJDHl/adMw=
-github.com/charmbracelet/bubbletea/v2 v2.0.0-beta.3.0.20250609143341-c76fa36f1b94 h1:QIi50k+uNTJmp2sMs+33D1m/EWr/7OPTJ8x92AY3eOc=
-github.com/charmbracelet/bubbletea/v2 v2.0.0-beta.3.0.20250609143341-c76fa36f1b94/go.mod h1:oOn1YZGZyJHxJfh4sFAna9vDzxJRNuErLETr/lnlB/I=
+github.com/charmbracelet/bubbletea-internal/v2 v2.0.0-20250708152737-144080f3d891 h1:wh6N1dR4XkDh6XsiZh1/tImJAZvYB0yVLmaUKvJXvK0=
+github.com/charmbracelet/bubbletea-internal/v2 v2.0.0-20250708152737-144080f3d891/go.mod h1:SwBB+WoaQVMMOM9hknbN/7FNT86kgKG0LSHGTmLphX8=
 github.com/charmbracelet/colorprofile v0.3.1 h1:k8dTHMd7fgw4bnFd7jXTLZrSU/CQrKnL3m+AxCzDz40=
 github.com/charmbracelet/colorprofile v0.3.1/go.mod h1:/GkGusxNs8VB/RSOh3fu0TJmQ4ICMMPApIIVn0KszZ0=
 github.com/charmbracelet/fang v0.1.0 h1:SlZS2crf3/zQh7Mr4+W+7QR1k+L08rrPX5rm5z3d7Wg=
 github.com/charmbracelet/fang v0.1.0/go.mod h1:Zl/zeUQ8EtQuGyiV0ZKZlZPDowKRTzu8s/367EpN/fc=
 github.com/charmbracelet/glamour/v2 v2.0.0-20250516160903-6f1e2c8f9ebe h1:i6ce4CcAlPpTj2ER69m1DBeLZ3RRcHnKExuwhKa3GfY=
 github.com/charmbracelet/glamour/v2 v2.0.0-20250516160903-6f1e2c8f9ebe/go.mod h1:p3Q+aN4eQKeM5jhrmXPMgPrlKbmc59rWSnMsSA3udhk=
-github.com/charmbracelet/lipgloss/v2 v2.0.0-beta.2.0.20250703152125-8e1c474f8a71 h1:X0tsNa2UHCKNw+illiavosasVzqioRo32SRV35iwr2I=
-github.com/charmbracelet/lipgloss/v2 v2.0.0-beta.2.0.20250703152125-8e1c474f8a71/go.mod h1:EJWvaCrhOhNGVZMvcjc0yVryl4qqpMs8tz0r9WyEkdQ=
+github.com/charmbracelet/lipgloss-internal/v2 v2.0.0-20250708152830-0fa4ef151093 h1:c9vOmNJQUwy/lp/pNOB5ZDMhOuXJ3Y2LL9uZMYGgJxQ=
+github.com/charmbracelet/lipgloss-internal/v2 v2.0.0-20250708152830-0fa4ef151093/go.mod h1:XmxjFJcMEfYIHa4Mw4ra+uMjploDkTlkKIs7wLt9v4Q=
 github.com/charmbracelet/log/v2 v2.0.0-20250226163916-c379e29ff706 h1:WkwO6Ks3mSIGnGuSdKl9qDSyfbYK50z2wc2gGMggegE=
 github.com/charmbracelet/log/v2 v2.0.0-20250226163916-c379e29ff706/go.mod h1:mjJGp00cxcfvD5xdCa+bso251Jt4owrQvuimJtVmEmM=
-github.com/charmbracelet/x/ansi v0.9.3-0.20250602153603-fb931ed90413 h1:L07QkDqRF274IZ2UJ/mCTL8DR95efU9BNWLYCDXEjvQ=
-github.com/charmbracelet/x/ansi v0.9.3-0.20250602153603-fb931ed90413/go.mod h1:3RQDQ6lDnROptfpWuUVIUG64bD2g2BgntdxH0Ya5TeE=
+github.com/charmbracelet/ultraviolet v0.0.0-20250708152637-0fe0235c8db9 h1:+LLFCLxtb/sHegwY3zYdFAbaOgI/I9pv/pxdUlI1Q9s=
+github.com/charmbracelet/ultraviolet v0.0.0-20250708152637-0fe0235c8db9/go.mod h1:/O+B00+dYG6lqRAWIaNxSvywnDrIH6dmLYQAsH0LRTg=
+github.com/charmbracelet/x/ansi v0.9.3 h1:BXt5DHS/MKF+LjuK4huWrC6NCvHtexww7dMayh6GXd0=
+github.com/charmbracelet/x/ansi v0.9.3/go.mod h1:3RQDQ6lDnROptfpWuUVIUG64bD2g2BgntdxH0Ya5TeE=
 github.com/charmbracelet/x/cellbuf v0.0.14-0.20250516160309-24eee56f89fa h1:lphz0Z3rsiOtMYiz8axkT24i9yFiueDhJbzyNUADmME=
 github.com/charmbracelet/x/cellbuf v0.0.14-0.20250516160309-24eee56f89fa/go.mod h1:xBlh2Yi3DL3zy/2n15kITpg0YZardf/aa/hgUaIM6Rk=
 github.com/charmbracelet/x/exp/charmtone v0.0.0-20250627134340-c144409e381c h1:2GELBLPgfSbHU53bsQhR9XIgNuVZ6w+Rz8RWV5Lq+A4=
@@ -92,10 +94,10 @@ github.com/charmbracelet/x/exp/golden v0.0.0-20250207160936-21c02780d27a h1:FsHE
 github.com/charmbracelet/x/exp/golden v0.0.0-20250207160936-21c02780d27a/go.mod h1:wDlXFlCrmJ8J+swcL/MnGUuYnqgQdW9rhSD61oNMb6U=
 github.com/charmbracelet/x/exp/slice v0.0.0-20250611152503-f53cdd7e01ef h1:v7qwsZ2OxzlwvpKwz8dtZXp7fIJlcDEUOyFBNE4fz4Q=
 github.com/charmbracelet/x/exp/slice v0.0.0-20250611152503-f53cdd7e01ef/go.mod h1:vI5nDVMWi6veaYH+0Fmvpbe/+cv/iJfMntdh+N0+Tms=
-github.com/charmbracelet/x/input v0.3.5-0.20250509021451-13796e822d86 h1:BxAEmOBIDajkgao3EsbBxKQCYvgYPGdT62WASLvtf4Y=
-github.com/charmbracelet/x/input v0.3.5-0.20250509021451-13796e822d86/go.mod h1:62Rp/6EtTxoeJDSdtpA3tJp3y3ZRpsiekBSje+K8htA=
 github.com/charmbracelet/x/term v0.2.1 h1:AQeHeLZ1OqSXhrAWpYUtZyX1T3zVxfpZuEQMIQaGIAQ=
 github.com/charmbracelet/x/term v0.2.1/go.mod h1:oQ4enTYFV7QN4m0i9mzHrViD7TQKvNEEkHUMCmsxdUg=
+github.com/charmbracelet/x/termios v0.1.1 h1:o3Q2bT8eqzGnGPOYheoYS8eEleT5ZVNYNy8JawjaNZY=
+github.com/charmbracelet/x/termios v0.1.1/go.mod h1:rB7fnv1TgOPOyyKRJ9o+AsTU/vK5WHJ2ivHeut/Pcwo=
 github.com/charmbracelet/x/windows v0.2.1 h1:3x7vnbpQrjpuq/4L+I4gNsG5htYoCiA5oe9hLjAij5I=
 github.com/charmbracelet/x/windows v0.2.1/go.mod h1:ptZp16h40gDYqs5TSawSVW+yiLB13j4kSMA0lSCHL0M=
 github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
@@ -307,8 +309,8 @@ golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E=
 golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
-golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
+golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -323,8 +325,8 @@ golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
 golang.org/x/sys v0.19.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
 golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
-golang.org/x/sys v0.32.0 h1:s77OFDvIQeibCmezSnk/q6iAfkdiQaJi4VzroCFrN20=
-golang.org/x/sys v0.32.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
+golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
+golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
 golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=

internal/config/config.go 🔗

@@ -90,11 +90,12 @@ const (
 )
 
 type MCPConfig struct {
-	Command string   `json:"command,omitempty" `
-	Env     []string `json:"env,omitempty"`
-	Args    []string `json:"args,omitempty"`
-	Type    MCPType  `json:"type"`
-	URL     string   `json:"url,omitempty"`
+	Command  string   `json:"command,omitempty" `
+	Env      []string `json:"env,omitempty"`
+	Args     []string `json:"args,omitempty"`
+	Type     MCPType  `json:"type"`
+	URL      string   `json:"url,omitempty"`
+	Disabled bool     `json:"disabled,omitempty"`
 
 	// TODO: maybe make it possible to get the value from the env
 	Headers map[string]string `json:"headers,omitempty"`

internal/config/load.go 🔗

@@ -54,16 +54,12 @@ func Load(workingDir string, debug bool) (*Config, error) {
 		cfg.Options.Debug = true
 	}
 
-	// Init logs
-	log.Init(
+	// Setup logs
+	log.Setup(
 		filepath.Join(cfg.Options.DataDirectory, "logs", fmt.Sprintf("%s.log", appName)),
 		cfg.Options.Debug,
 	)
 
-	if err != nil {
-		return nil, fmt.Errorf("failed to load config: %w", err)
-	}
-
 	// Load known providers, this loads the config from fur
 	providers, err := LoadProviders(client.New())
 	if err != nil || len(providers) == 0 {
@@ -147,6 +143,9 @@ func (cfg *Config) configureProviders(env env.Env, resolver VariableResolver, kn
 						continue
 					}
 					seen[model.ID] = true
+					if model.Model == "" {
+						model.Model = model.ID
+					}
 					models = append(models, model)
 				}
 				for _, model := range p.Models {
@@ -154,6 +153,9 @@ func (cfg *Config) configureProviders(env env.Env, resolver VariableResolver, kn
 						continue
 					}
 					seen[model.ID] = true
+					if model.Model == "" {
+						model.Model = model.ID
+					}
 					models = append(models, model)
 				}
 

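The model-name fallback added in `configureProviders` above can be shown in isolation. `Model` here mirrors only a subset of the `provider.Model` struct from this diff, and `defaultModelName` is a hypothetical helper for illustration:

```go
package main

import "fmt"

// Model mirrors a subset of the provider.Model struct from the diff.
type Model struct {
	ID    string
	Model string
}

// defaultModelName fills Model from ID when the display name is empty,
// matching the fallback added in configureProviders.
func defaultModelName(m Model) Model {
	if m.Model == "" {
		m.Model = m.ID
	}
	return m
}

func main() {
	fmt.Println(defaultModelName(Model{ID: "deepseek-chat"}).Model)            // deepseek-chat
	fmt.Println(defaultModelName(Model{ID: "x", Model: "Deepseek V3"}).Model) // Deepseek V3
}
```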
internal/config/load_test.go 🔗

@@ -4,6 +4,7 @@ import (
 	"io"
 	"log/slog"
 	"os"
+	"path/filepath"
 	"strings"
 	"testing"
 
@@ -45,7 +46,7 @@ func TestConfig_setDefaults(t *testing.T) {
 	assert.NotNil(t, cfg.Models)
 	assert.NotNil(t, cfg.LSP)
 	assert.NotNil(t, cfg.MCP)
-	assert.Equal(t, "/tmp/.crush", cfg.Options.DataDirectory)
+	assert.Equal(t, filepath.Join("/tmp", ".crush"), cfg.Options.DataDirectory)
 	for _, path := range defaultContextPaths {
 		assert.Contains(t, cfg.Options.ContextPaths, path)
 	}
@@ -97,8 +98,8 @@ func TestConfig_configureProvidersWithOverride(t *testing.T) {
 				BaseURL: "https://api.openai.com/v2",
 				Models: []provider.Model{
 					{
-						ID:   "test-model",
-						Name: "Updated",
+						ID:    "test-model",
+						Model: "Updated",
 					},
 					{
 						ID: "another-model",
@@ -121,7 +122,7 @@ func TestConfig_configureProvidersWithOverride(t *testing.T) {
 	assert.Equal(t, "xyz", cfg.Providers["openai"].APIKey)
 	assert.Equal(t, "https://api.openai.com/v2", cfg.Providers["openai"].BaseURL)
 	assert.Len(t, cfg.Providers["openai"].Models, 2)
-	assert.Equal(t, "Updated", cfg.Providers["openai"].Models[0].Name)
+	assert.Equal(t, "Updated", cfg.Providers["openai"].Models[0].Model)
 }
 
 func TestConfig_configureProvidersWithNewProvider(t *testing.T) {

internal/fsext/fileutil.go 🔗

@@ -12,6 +12,7 @@ import (
 
 	"github.com/bmatcuk/doublestar/v4"
 	"github.com/charlievieth/fastwalk"
+	"github.com/charmbracelet/crush/internal/log"
 
 	ignore "github.com/sabhiram/go-gitignore"
 )
@@ -25,11 +26,15 @@ func init() {
 	var err error
 	rgPath, err = exec.LookPath("rg")
 	if err != nil {
-		slog.Warn("Ripgrep (rg) not found in $PATH. Some features might be limited or slower.")
+		if log.Initialized() {
+			slog.Warn("Ripgrep (rg) not found in $PATH. Some features might be limited or slower.")
+		}
 	}
 	fzfPath, err = exec.LookPath("fzf")
 	if err != nil {
-		slog.Warn("FZF not found in $PATH. Some features might be limited or slower.")
+		if log.Initialized() {
+			slog.Warn("FZF not found in $PATH. Some features might be limited or slower.")
+		}
 	}
 }
 

internal/fur/provider/provider.go 🔗

@@ -45,7 +45,7 @@ type Provider struct {
 // Model represents an AI model configuration.
 type Model struct {
 	ID                     string  `json:"id"`
-	Name                   string  `json:"model"`
+	Model                  string  `json:"model"`
 	CostPer1MIn            float64 `json:"cost_per_1m_in"`
 	CostPer1MOut           float64 `json:"cost_per_1m_out"`
 	CostPer1MInCached      float64 `json:"cost_per_1m_in_cached"`

internal/llm/agent/mcp-tools.go 🔗

@@ -188,6 +188,10 @@ func GetMcpTools(ctx context.Context, permissions permission.Service, cfg *confi
 		return mcpTools
 	}
 	for name, m := range cfg.MCP {
+		if m.Disabled {
+			slog.Debug("skipping disabled mcp", "name", name)
+			continue
+		}
 		switch m.Type {
 		case config.MCPStdio:
 			c, err := client.NewStdioMCPClient(

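The `Disabled` field added to `MCPConfig` above, together with this skip, means an MCP server can be switched off from config without deleting its entry. A minimal sketch in `crush.json`, assuming a server entry named `example` (hypothetical name):

```json
{
  "mcp": {
    "example": {
      "type": "stdio",
      "command": "example-mcp-server",
      "disabled": true
    }
  }
}
```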
internal/log/log.go 🔗

@@ -6,14 +6,18 @@ import (
 	"os"
 	"runtime/debug"
 	"sync"
+	"sync/atomic"
 	"time"
 
 	"gopkg.in/natefinch/lumberjack.v2"
 )
 
-var initOnce sync.Once
+var (
+	initOnce    sync.Once
+	initialized atomic.Bool
+)
 
-func Init(logFile string, debug bool) {
+func Setup(logFile string, debug bool) {
 	initOnce.Do(func() {
 		logRotator := &lumberjack.Logger{
 			Filename:   logFile,
@@ -34,9 +38,14 @@ func Init(logFile string, debug bool) {
 		})
 
 		slog.SetDefault(slog.New(logger))
+		initialized.Store(true)
 	})
 }
 
+func Initialized() bool {
+	return initialized.Load()
+}
+
 func RecoverPanic(name string, cleanup func()) {
 	if r := recover(); r != nil {
 		// Create a timestamped panic log file

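The `Setup`/`Initialized` pair above combines `sync.Once` with an `atomic.Bool` so callers (like the `internal/fsext` warnings in this diff) can cheaply check whether logging is ready. A standalone sketch of that pattern, with the slog handler and log rotation omitted:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

var (
	initOnce    sync.Once
	initialized atomic.Bool
)

// Setup runs its body at most once, no matter how often it is called.
func Setup() {
	initOnce.Do(func() {
		// the real code configures the slog handler and lumberjack rotation here
		initialized.Store(true)
	})
}

// Initialized reports whether Setup has completed.
func Initialized() bool { return initialized.Load() }

func main() {
	fmt.Println(Initialized()) // false
	Setup()
	Setup() // second call is a no-op
	fmt.Println(Initialized()) // true
}
```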
internal/tui/components/anim/anim.go 🔗

@@ -261,7 +261,7 @@ func (a Anim) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 
 // View renders the current state of the animation.
-func (a Anim) View() tea.View {
+func (a Anim) View() string {
 	var b strings.Builder
 	for i := range a.width {
 		switch {
@@ -284,7 +284,8 @@ func (a Anim) View() tea.View {
 	if a.initialized && a.labelWidth > 0 {
 		b.WriteString(a.ellipsisFrames[a.ellipsisStep/ellipsisAnimSpeed])
 	}
-	return tea.NewView(b.String())
+
+	return b.String()
 }
 
 // Step is a command that triggers the next step in the animation.

internal/tui/components/anim/example/main.go 🔗

@@ -50,20 +50,20 @@ func (m model) View() tea.View {
 	}
 
 	v := tea.NewView("")
-	v.SetBackgroundColor(m.bgColor)
+	v.BackgroundColor = m.bgColor
 
 	if m.quitting {
 		return v
 	}
 
 	if a, ok := m.anim.(anim.Anim); ok {
-		l := lipgloss.NewLayer(a.View().String()).
+		l := lipgloss.NewLayer(a.View()).
 			Width(a.Width()).
 			X(m.w/2 - a.Width()/2).
 			Y(m.h / 2)
 
 		v = tea.NewView(lipgloss.NewCanvas(l))
-		v.SetBackgroundColor(m.bgColor)
+		v.BackgroundColor = m.bgColor
 		return v
 	}
 	return v

internal/tui/components/chat/chat.go 🔗

@@ -104,17 +104,15 @@ func (m *messageListCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 
 // View renders the message list or an initial screen if empty.
-func (m *messageListCmp) View() tea.View {
+func (m *messageListCmp) View() string {
 	t := styles.CurrentTheme()
-	return tea.NewView(
-		t.S().Base.
-			Padding(1).
-			Width(m.width).
-			Height(m.height).
-			Render(
-				m.listCmp.View().String(),
-			),
-	)
+	return t.S().Base.
+		Padding(1).
+		Width(m.width).
+		Height(m.height).
+		Render(
+			m.listCmp.View(),
+		)
 }
 
 // handleChildSession handles messages from child sessions (agent tools).

internal/tui/components/chat/editor/editor.go 🔗

@@ -35,6 +35,7 @@ type Editor interface {
 	layout.Positional
 
 	SetSession(session session.Session) tea.Cmd
+	Cursor() *tea.Cursor
 }
 
 type FileCompletionItem struct {
@@ -269,20 +270,22 @@ func (m *editorCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return m, tea.Batch(cmds...)
 }
 
-func (m *editorCmp) View() tea.View {
-	t := styles.CurrentTheme()
+func (m *editorCmp) Cursor() *tea.Cursor {
 	cursor := m.textarea.Cursor()
 	if cursor != nil {
 		cursor.X = cursor.X + m.x + 1
 		cursor.Y = cursor.Y + m.y + 1 // adjust for padding
 	}
+	return cursor
+}
+
+func (m *editorCmp) View() string {
+	t := styles.CurrentTheme()
 	if len(m.attachments) == 0 {
 		content := t.S().Base.Padding(1).Render(
 			m.textarea.View(),
 		)
-		view := tea.NewView(content)
-		view.SetCursor(cursor)
-		return view
+		return content
 	}
 	content := t.S().Base.Padding(0, 1, 1, 1).Render(
 		lipgloss.JoinVertical(lipgloss.Top,
@@ -290,9 +293,7 @@ func (m *editorCmp) View() tea.View {
 			m.textarea.View(),
 		),
 	)
-	view := tea.NewView(content)
-	view.SetCursor(cursor)
-	return view
+	return content
 }
 
 func (m *editorCmp) SetSize(width, height int) tea.Cmd {

internal/tui/components/chat/header/header.go 🔗

@@ -53,9 +53,9 @@ func (p *header) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return p, nil
 }
 
-func (p *header) View() tea.View {
+func (p *header) View() string {
 	if p.session.ID == "" {
-		return tea.NewView("")
+		return ""
 	}
 
 	t := styles.CurrentTheme()
@@ -82,7 +82,7 @@ func (p *header) View() tea.View {
 			parts...,
 		),
 	)
-	return tea.NewView(content)
+	return content
 }
 
 func (h *header) details() string {

internal/tui/components/chat/messages/messages.go 🔗

@@ -89,20 +89,20 @@ func (m *messageCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 
 // View renders the message component based on its current state.
 // Returns different views for spinning, user, and assistant messages.
-func (m *messageCmp) View() tea.View {
+func (m *messageCmp) View() string {
 	if m.spinning {
-		return tea.NewView(m.style().PaddingLeft(1).Render(m.anim.View().String()))
+		return m.style().PaddingLeft(1).Render(m.anim.View())
 	}
 	if m.message.ID != "" {
 		// this is a user or assistant message
 		switch m.message.Role {
 		case message.User:
-			return tea.NewView(m.renderUserMessage())
+			return m.renderUserMessage()
 		default:
-			return tea.NewView(m.renderAssistantMessage())
+			return m.renderAssistantMessage()
 		}
 	}
-	return tea.NewView(m.style().Render("No message content"))
+	return m.style().Render("No message content")
 }
 
 // GetMessage returns the underlying message data
@@ -289,7 +289,7 @@ func (m *assistantSectionModel) Update(tea.Msg) (tea.Model, tea.Cmd) {
 	return m, nil
 }
 
-func (m *assistantSectionModel) View() tea.View {
+func (m *assistantSectionModel) View() string {
 	t := styles.CurrentTheme()
 	finishData := m.message.FinishPart()
 	finishTime := time.Unix(finishData.Time, 0)
@@ -300,15 +300,13 @@ func (m *assistantSectionModel) View() tea.View {
 	if model == nil {
 		// This means the model is not configured anymore
 		model = &provider.Model{
-			Name: "Unknown Model",
+			Model: "Unknown Model",
 		}
 	}
-	modelFormatted := t.S().Muted.Render(model.Name)
+	modelFormatted := t.S().Muted.Render(model.Model)
 	assistant := fmt.Sprintf("%s %s %s", icon, modelFormatted, infoMsg)
-	return tea.NewView(
-		t.S().Base.PaddingLeft(2).Render(
-			core.Section(assistant, m.width-2),
-		),
+	return t.S().Base.PaddingLeft(2).Render(
+		core.Section(assistant, m.width-2),
 	)
 }
 

internal/tui/components/chat/messages/renderer.go 🔗

@@ -542,7 +542,7 @@ func (tr agentRenderer) Render(v *toolCallCmp) string {
 
 	if v.result.ToolCallID == "" {
 		v.spinning = true
-		parts = append(parts, v.anim.View().String())
+		parts = append(parts, v.anim.View())
 	} else {
 		v.spinning = false
 	}

internal/tui/components/chat/messages/tool.go 🔗

@@ -145,19 +145,19 @@ func (m *toolCallCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 
 // View renders the tool call component based on its current state.
 // Shows either a pending animation or the tool-specific rendered result.
-func (m *toolCallCmp) View() tea.View {
+func (m *toolCallCmp) View() string {
 	box := m.style()
 
 	if !m.call.Finished && !m.cancelled {
-		return tea.NewView(box.Render(m.renderPending()))
+		return box.Render(m.renderPending())
 	}
 
 	r := registry.lookup(m.call.Name)
 
 	if m.isNested {
-		return tea.NewView(box.Render(r.Render(m)))
+		return box.Render(r.Render(m))
 	}
-	return tea.NewView(box.Render(r.Render(m)))
+	return box.Render(r.Render(m))
 }
 
 // State management methods

internal/tui/components/chat/sidebar/sidebar.go 🔗

@@ -97,7 +97,7 @@ func (m *sidebarCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return m, nil
 }
 
-func (m *sidebarCmp) View() tea.View {
+func (m *sidebarCmp) View() string {
 	t := styles.CurrentTheme()
 	parts := []string{}
 	if !m.compactMode {
@@ -129,19 +129,15 @@ func (m *sidebarCmp) View() tea.View {
 		m.mcpBlock(),
 	)
 
-	// TODO: CHECK out why we need to set the background here weird issue
 	style := t.S().Base.
-		Background(t.BgBase).
 		Width(m.width).
 		Height(m.height).
 		Padding(1)
 	if m.compactMode {
 		style = style.PaddingTop(0)
 	}
-	return tea.NewView(
-		style.Render(
-			lipgloss.JoinVertical(lipgloss.Left, parts...),
-		),
+	return style.Render(
+		lipgloss.JoinVertical(lipgloss.Left, parts...),
 	)
 }
 
@@ -335,7 +331,7 @@ func (m *sidebarCmp) lspBlock() string {
 
 	lspList := []string{section, ""}
 
-	lsp := config.Get().LSP
+	lsp := config.Get().LSP.Sorted()
 	if len(lsp) == 0 {
 		return lipgloss.JoinVertical(
 			lipgloss.Left,
@@ -345,9 +341,9 @@ func (m *sidebarCmp) lspBlock() string {
 		)
 	}
 
-	for n, l := range lsp {
+	for _, l := range lsp {
 		iconColor := t.Success
-		if l.Disabled {
+		if l.LSP.Disabled {
 			iconColor = t.FgMuted
 		}
 		lspErrs := map[protocol.DiagnosticSeverity]int{
@@ -356,7 +352,7 @@ func (m *sidebarCmp) lspBlock() string {
 			protocol.SeverityHint:        0,
 			protocol.SeverityInformation: 0,
 		}
-		if client, ok := m.lspClients[n]; ok {
+		if client, ok := m.lspClients[l.Name]; ok {
 			for _, diagnostics := range client.GetDiagnostics() {
 				for _, diagnostic := range diagnostics {
 					if severity, ok := lspErrs[diagnostic.Severity]; ok {
@@ -384,8 +380,8 @@ func (m *sidebarCmp) lspBlock() string {
 			core.Status(
 				core.StatusOpts{
 					IconColor:    iconColor,
-					Title:        n,
-					Description:  l.Command,
+					Title:        l.Name,
+					Description:  l.LSP.Command,
 					ExtraContent: strings.Join(errs, " "),
 				},
 				m.getMaxWidth(),
@@ -408,8 +404,8 @@ func (m *sidebarCmp) mcpBlock() string {
 
 	mcpList := []string{section, ""}
 
-	mcp := config.Get().MCP
-	if len(mcp) == 0 {
+	mcps := config.Get().MCP.Sorted()
+	if len(mcps) == 0 {
 		return lipgloss.JoinVertical(
 			lipgloss.Left,
 			section,
@@ -418,14 +414,17 @@ func (m *sidebarCmp) mcpBlock() string {
 		)
 	}
 
-	for n, l := range mcp {
+	for _, l := range mcps {
 		iconColor := t.Success
+		if l.MCP.Disabled {
+			iconColor = t.FgMuted
+		}
 		mcpList = append(mcpList,
 			core.Status(
 				core.StatusOpts{
 					IconColor:   iconColor,
-					Title:       n,
-					Description: l.Command,
+					Title:       l.Name,
+					Description: l.MCP.Command,
 				},
 				m.getMaxWidth(),
 			),
@@ -483,7 +482,7 @@ func (s *sidebarCmp) currentModelBlock() string {
 	t := styles.CurrentTheme()
 
 	modelIcon := t.S().Base.Foreground(t.FgSubtle).Render(styles.ModelIcon)
-	modelName := t.S().Text.Render(model.Name)
+	modelName := t.S().Text.Render(model.Model)
 	modelInfo := fmt.Sprintf("%s %s", modelIcon, modelName)
 	parts := []string{
 		modelInfo,

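The sidebar hunks replace ranging over `config.Get().LSP` and `config.Get().MCP` maps with `Sorted()` accessors that yield name/value pairs. The likely motivation: Go randomizes map iteration order, so the old code would reorder sidebar entries on every render. A minimal sketch of such an accessor, assuming a generic helper (the `Named` type and `Sorted` function here are illustrative, not the codebase's actual definitions):

```go
package main

import (
	"fmt"
	"maps"
	"slices"
)

// Named pairs a map key with its value, mirroring the l.Name / l.LSP
// access pattern the sidebar switches to.
type Named[T any] struct {
	Name  string
	Value T
}

// Sorted returns map entries in stable key order. Go map iteration
// order is randomized, which would make the sidebar list jitter
// between renders.
func Sorted[T any](m map[string]T) []Named[T] {
	out := make([]Named[T], 0, len(m))
	for _, k := range slices.Sorted(maps.Keys(m)) {
		out = append(out, Named[T]{Name: k, Value: m[k]})
	}
	return out
}

func main() {
	cfg := map[string]string{"pyright": "python", "gopls": "go"}
	for _, e := range Sorted(cfg) {
		fmt.Println(e.Name, e.Value)
	}
}
```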
internal/tui/components/chat/splash/splash.go 🔗

@@ -25,6 +25,7 @@ type Splash interface {
 	util.Model
 	layout.Sizeable
 	layout.Help
+	Cursor() *tea.Cursor
 	// SetOnboarding controls whether the splash shows model selection UI
 	SetOnboarding(bool)
 	// SetProjectInit controls whether the splash shows project initialization prompt
@@ -40,15 +41,15 @@ const (
 type OnboardingCompleteMsg struct{}
 
 type splashCmp struct {
-	width, height        int
-	keyMap               KeyMap
-	logoRendered         string
-	
+	width, height int
+	keyMap        KeyMap
+	logoRendered  string
+
 	// State
 	isOnboarding     bool
 	needsProjectInit bool
 	selectedNo       bool
-	
+
 	modelList            *models.ModelListComponent
 	cursorRow, cursorCol int
 }
@@ -69,12 +70,12 @@ func New() Splash {
 	inputStyle := t.S().Base.Padding(0, 1, 0, 1)
 	modelList := models.NewModelListComponent(listKeyMap, inputStyle, "Find your fave")
 	return &splashCmp{
-		width:            0,
-		height:           0,
-		keyMap:           keyMap,
-		logoRendered:     "",
-		modelList:        modelList,
-		selectedNo:       false,
+		width:        0,
+		height:       0,
+		keyMap:       keyMap,
+		logoRendered: "",
+		modelList:    modelList,
+		selectedNo:   false,
 	}
 }
 
@@ -278,22 +279,19 @@ func (s *splashCmp) isProviderConfigured(providerID string) bool {
 	return false
 }
 
-// View implements SplashPage.
-func (s *splashCmp) View() tea.View {
+func (s *splashCmp) View() string {
 	t := styles.CurrentTheme()
-	var cursor *tea.Cursor
 
 	var content string
 	if s.isOnboarding {
 		remainingHeight := s.height - lipgloss.Height(s.logoRendered) - (SplashScreenPaddingY * 2)
 		modelListView := s.modelList.View()
-		cursor = s.moveCursor(modelListView.Cursor())
 		modelSelector := t.S().Base.AlignVertical(lipgloss.Bottom).Height(remainingHeight).Render(
 			lipgloss.JoinVertical(
 				lipgloss.Left,
 				t.S().Base.PaddingLeft(1).Foreground(t.Primary).Render("Choose a Model"),
 				"",
-				modelListView.String(),
+				modelListView,
 			),
 		)
 		content = lipgloss.JoinVertical(
@@ -354,19 +352,26 @@ func (s *splashCmp) View() tea.View {
 		content = s.logoRendered
 	}
 
-	view := tea.NewView(
-		t.S().Base.
-			Width(s.width).
-			Height(s.height).
-			PaddingTop(SplashScreenPaddingY).
-			PaddingLeft(SplashScreenPaddingX).
-			PaddingRight(SplashScreenPaddingX).
-			PaddingBottom(SplashScreenPaddingY).
-			Render(content),
-	)
-
-	view.SetCursor(cursor)
-	return view
+	return t.S().Base.
+		Width(s.width).
+		Height(s.height).
+		PaddingTop(SplashScreenPaddingY).
+		PaddingLeft(SplashScreenPaddingX).
+		PaddingRight(SplashScreenPaddingX).
+		PaddingBottom(SplashScreenPaddingY).
+		Render(content)
+}
+
+func (s *splashCmp) Cursor() *tea.Cursor {
+	if s.isOnboarding {
+		cursor := s.modelList.Cursor()
+		if cursor != nil {
+			return s.moveCursor(cursor)
+		}
+	} else {
+		return nil
+	}
+	return nil
 }
 
 func (s *splashCmp) logoBlock() string {

internal/tui/components/completions/completions.go 🔗

@@ -157,15 +157,12 @@ func (c *completionsCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 
 // View implements Completions.
-func (c *completionsCmp) View() tea.View {
+func (c *completionsCmp) View() string {
 	if len(c.list.Items()) == 0 {
-		return tea.NewView(c.style().Render("No completions found"))
+		return c.style().Render("No completions found")
 	}
 
-	view := tea.NewView(
-		c.style().Render(c.list.View().String()),
-	)
-	return view
+	return c.style().Render(c.list.View())
 }
 
 func (c *completionsCmp) style() lipgloss.Style {

internal/tui/components/completions/item.go 🔗

@@ -75,7 +75,7 @@ func (c *completionItemCmp) Update(tea.Msg) (tea.Model, tea.Cmd) {
 }
 
 // View implements CommandItem.
-func (c *completionItemCmp) View() tea.View {
+func (c *completionItemCmp) View() string {
 	t := styles.CurrentTheme()
 
 	itemStyle := t.S().Base.Padding(0, 1).Width(c.width)
@@ -135,7 +135,7 @@ func (c *completionItemCmp) View() tea.View {
 			parts...,
 		),
 	)
-	return tea.NewView(item)
+	return item
 }
 
 // Blur implements CommandItem.

internal/tui/components/core/layout/container.go 🔗

@@ -72,7 +72,14 @@ func (c *container) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	}
 }
 
-func (c *container) View() tea.View {
+func (c *container) Cursor() *tea.Cursor {
+	if cursor, ok := c.content.(util.Cursor); ok {
+		return cursor.Cursor()
+	}
+	return nil
+}
+
+func (c *container) View() string {
 	t := styles.CurrentTheme()
 	width := c.width
 	height := c.height
@@ -106,10 +113,7 @@ func (c *container) View() tea.View {
 		PaddingLeft(c.paddingLeft)
 
 	contentView := c.content.View()
-	view := tea.NewView(style.Render(contentView.String()))
-	cursor := contentView.Cursor()
-	view.SetCursor(cursor)
-	return view
+	return style.Render(contentView)
 }
 
 func (c *container) SetSize(width, height int) tea.Cmd {

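The new `container.Cursor` method above type-asserts its child against a `util.Cursor` interface, so only cursor-aware children are queried and everything else yields nil. A standalone sketch of that delegation pattern, with stand-in types (`Position`, `input`, and `container` here are simplified placeholders, not the real Bubble Tea or crush types):

```go
package main

import "fmt"

// Position is a stand-in for *tea.Cursor.
type Position struct{ X, Y int }

// Cursor is a stand-in for the util.Cursor interface: components that
// own a text input expose their cursor position through it.
type Cursor interface {
	Cursor() *Position
}

// input is a child component that has a cursor.
type input struct{ x, y int }

func (i *input) Cursor() *Position { return &Position{X: i.x, Y: i.y} }

// container wraps an arbitrary child model.
type container struct {
	content any // may or may not implement Cursor
}

// Cursor delegates to the child only when it implements the interface,
// mirroring the type assertion in the container.go hunk.
func (c *container) Cursor() *Position {
	if cur, ok := c.content.(Cursor); ok {
		return cur.Cursor()
	}
	return nil
}

func main() {
	with := &container{content: &input{x: 3, y: 1}}
	without := &container{content: "plain child with no cursor"}
	fmt.Println(with.Cursor() != nil, without.Cursor() == nil)
}
```

This keeps the parent's `View() string` free of cursor plumbing: the program root walks the focused component chain via `Cursor()` instead of threading a cursor through every rendered view.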
internal/tui/components/core/layout/split.go 🔗

@@ -103,17 +103,34 @@ func (s *splitPaneLayout) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return s, tea.Batch(cmds...)
 }
 
-func (s *splitPaneLayout) View() tea.View {
+func (s *splitPaneLayout) Cursor() *tea.Cursor {
+	if s.bottomPanel != nil {
+		if c, ok := s.bottomPanel.(util.Cursor); ok {
+			return c.Cursor()
+		}
+	} else if s.rightPanel != nil {
+		if c, ok := s.rightPanel.(util.Cursor); ok {
+			return c.Cursor()
+		}
+	} else if s.leftPanel != nil {
+		if c, ok := s.leftPanel.(util.Cursor); ok {
+			return c.Cursor()
+		}
+	}
+	return nil
+}
+
+func (s *splitPaneLayout) View() string {
 	var topSection string
 
 	if s.leftPanel != nil && s.rightPanel != nil {
 		leftView := s.leftPanel.View()
 		rightView := s.rightPanel.View()
-		topSection = lipgloss.JoinHorizontal(lipgloss.Top, leftView.String(), rightView.String())
+		topSection = lipgloss.JoinHorizontal(lipgloss.Top, leftView, rightView)
 	} else if s.leftPanel != nil {
-		topSection = s.leftPanel.View().String()
+		topSection = s.leftPanel.View()
 	} else if s.rightPanel != nil {
-		topSection = s.rightPanel.View().String()
+		topSection = s.rightPanel.View()
 	} else {
 		topSection = ""
 	}
@@ -122,32 +139,20 @@ func (s *splitPaneLayout) View() tea.View {
 
 	if s.bottomPanel != nil && topSection != "" {
 		bottomView := s.bottomPanel.View()
-		finalView = lipgloss.JoinVertical(lipgloss.Left, topSection, bottomView.String())
+		finalView = lipgloss.JoinVertical(lipgloss.Left, topSection, bottomView)
 	} else if s.bottomPanel != nil {
-		finalView = s.bottomPanel.View().String()
+		finalView = s.bottomPanel.View()
 	} else {
 		finalView = topSection
 	}
 
-	// TODO: think of a better way to handle multiple cursors
-	var cursor *tea.Cursor
-	if s.bottomPanel != nil {
-		cursor = s.bottomPanel.View().Cursor()
-	} else if s.rightPanel != nil {
-		cursor = s.rightPanel.View().Cursor()
-	} else if s.leftPanel != nil {
-		cursor = s.leftPanel.View().Cursor()
-	}
-
 	t := styles.CurrentTheme()
 
 	style := t.S().Base.
 		Width(s.width).
 		Height(s.height)
 
-	view := tea.NewView(style.Render(finalView))
-	view.SetCursor(cursor)
-	return view
+	return style.Render(finalView)
 }
 
 func (s *splitPaneLayout) SetSize(width, height int) tea.Cmd {

internal/tui/components/core/list/list.go 🔗

@@ -42,6 +42,7 @@ type ListModel interface {
 	SetSelected(int) tea.Cmd        // Set the selected item by index and scroll to it
 	Filter(string) tea.Cmd          // Filter items based on a search term
 	SetFilterPlaceholder(string)    // Set the placeholder text for the filter input
+	Cursor() *tea.Cursor            // Get the current cursor position in the filter input
 }
 
 // HasAnim interface identifies items that support animation.
@@ -281,12 +282,20 @@ func (m *model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return m, nil
 }
 
+// Cursor returns the current cursor position in the input field.
+func (m *model) Cursor() *tea.Cursor {
+	if m.filterable && !m.hideFilterInput {
+		return m.input.Cursor()
+	}
+	return nil
+}
+
 // View renders the list to a string for display.
 // Returns empty string if the list has no dimensions.
 // Triggers re-rendering if needed before returning content.
-func (m *model) View() tea.View {
+func (m *model) View() string {
 	if m.viewState.height == 0 || m.viewState.width == 0 {
-		return tea.NewView("") // No content to display
+		return "" // No content to display
 	}
 	if m.renderState.needsRerender {
 		m.renderVisible()
@@ -304,11 +313,7 @@ func (m *model) View() tea.View {
 			content,
 		)
 	}
-	view := tea.NewView(content)
-	if m.filterable && !m.hideFilterInput {
-		view.SetCursor(m.input.Cursor())
-	}
-	return view
+	return content
 }
 
 // handleKeyPress processes keyboard input for list navigation.
@@ -834,7 +839,7 @@ func (m *model) rerenderItem(inx int) {
 func (m *model) getItemLines(item util.Model) []string {
 	var itemLines []string
 
-	itemLines = strings.Split(item.View().String(), "\n")
+	itemLines = strings.Split(item.View(), "\n")
 
 	if m.gapSize > 0 {
 		gap := make([]string, m.gapSize)
@@ -1262,7 +1267,7 @@ func (m *model) filterSection(sect section, search string) *section {
 
 	// Check if section header itself matches
 	if sect.header != nil {
-		headerText := strings.ToLower(sect.header.View().String())
+		headerText := strings.ToLower(sect.header.View())
 		if strings.Contains(headerText, search) {
 			hasHeaderMatch = true
 			// If header matches, include all items in the section

internal/tui/components/core/status/status.go 🔗

@@ -60,13 +60,13 @@ func (m *statusCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return m, nil
 }
 
-func (m *statusCmp) View() tea.View {
+func (m *statusCmp) View() string {
 	t := styles.CurrentTheme()
 	status := t.S().Base.Padding(0, 1, 1, 1).Render(m.help.View(m.keyMap))
 	if m.info.Msg != "" {
 		status = m.infoMsg()
 	}
-	return tea.NewView(status)
+	return status
 }
 
 func (m *statusCmp) infoMsg() string {

internal/tui/components/dialogs/commands/arguments.go 🔗

@@ -139,7 +139,7 @@ func (c *commandArgumentsDialogCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 
 // View implements CommandArgumentsDialog.
-func (c *commandArgumentsDialogCmp) View() tea.View {
+func (c *commandArgumentsDialogCmp) View() string {
 	t := styles.CurrentTheme()
 	baseStyle := t.S().Base
 
@@ -188,19 +188,19 @@ func (c *commandArgumentsDialogCmp) View() tea.View {
 		elements...,
 	)
 
-	view := tea.NewView(
-		baseStyle.Padding(1, 1, 0, 1).
-			Border(lipgloss.RoundedBorder()).
-			BorderForeground(t.BorderFocus).
-			Width(c.width).
-			Render(content),
-	)
+	return baseStyle.Padding(1, 1, 0, 1).
+		Border(lipgloss.RoundedBorder()).
+		BorderForeground(t.BorderFocus).
+		Width(c.width).
+		Render(content)
+}
+
+func (c *commandArgumentsDialogCmp) Cursor() *tea.Cursor {
 	cursor := c.inputs[c.focusIndex].Cursor()
 	if cursor != nil {
 		cursor = c.moveCursor(cursor)
 	}
-	view.SetCursor(cursor)
-	return view
+	return cursor
 }
 
 func (c *commandArgumentsDialogCmp) moveCursor(cursor *tea.Cursor) *tea.Cursor {

internal/tui/components/dialogs/commands/commands.go 🔗

@@ -143,23 +143,29 @@ func (c *commandDialogCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return c, nil
 }
 
-func (c *commandDialogCmp) View() tea.View {
+func (c *commandDialogCmp) View() string {
 	t := styles.CurrentTheme()
-	listView := c.commandList.View()
+	listView := c.commandList
 	radio := c.commandTypeRadio()
 	content := lipgloss.JoinVertical(
 		lipgloss.Left,
 		t.S().Base.Padding(0, 1, 1, 1).Render(core.Title("Commands", c.width-lipgloss.Width(radio)-5)+" "+radio),
-		listView.String(),
+		listView.View(),
 		"",
 		t.S().Base.Width(c.width-2).PaddingLeft(1).AlignHorizontal(lipgloss.Left).Render(c.help.View(c.keyMap)),
 	)
-	v := tea.NewView(c.style().Render(content))
-	if listView.Cursor() != nil {
-		c := c.moveCursor(listView.Cursor())
-		v.SetCursor(c)
+	return c.style().Render(content)
+}
+
+func (c *commandDialogCmp) Cursor() *tea.Cursor {
+	if cursor, ok := c.commandList.(util.Cursor); ok {
+		cursor := cursor.Cursor()
+		if cursor != nil {
+			cursor = c.moveCursor(cursor)
+		}
+		return cursor
 	}
-	return v
+	return nil
 }
 
 func (c *commandDialogCmp) commandTypeRadio() string {

internal/tui/components/dialogs/commands/item.go 🔗

@@ -36,7 +36,7 @@ func (m *itemSectionModel) Update(tea.Msg) (tea.Model, tea.Cmd) {
 	return m, nil
 }
 
-func (m *itemSectionModel) View() tea.View {
+func (m *itemSectionModel) View() string {
 	t := styles.CurrentTheme()
 	title := ansi.Truncate(m.title, m.width-2, "…")
 	style := t.S().Base.Padding(1, 1, 0, 1)
@@ -48,7 +48,7 @@ func (m *itemSectionModel) View() tea.View {
 		section = core.Section(title, m.width-2)
 	}
 
-	return tea.NewView(style.Render(section))
+	return style.Render(section)
 }
 
 func (m *itemSectionModel) GetSize() (int, int) {

internal/tui/components/dialogs/compact/compact.go 🔗

@@ -242,8 +242,8 @@ func (c *compactDialogCmp) render() string {
 		Render(dialogContent)
 }
 
-func (c *compactDialogCmp) View() tea.View {
-	return tea.NewView(c.render())
+func (c *compactDialogCmp) View() string {
+	return c.render()
 }
 
 // SetSize sets the size of the component.

internal/tui/components/dialogs/dialogs.go 🔗

@@ -37,7 +37,7 @@ type DialogCmp interface {
 	Dialogs() []DialogModel
 	HasDialogs() bool
 	GetLayers() []*lipgloss.Layer
-	ActiveView() *tea.View
+	ActiveModel() util.Model
 	ActiveDialogID() DialogID
 }
 
@@ -132,12 +132,11 @@ func (d dialogCmp) Dialogs() []DialogModel {
 	return d.dialogs
 }
 
-func (d dialogCmp) ActiveView() *tea.View {
+func (d dialogCmp) ActiveModel() util.Model {
 	if len(d.dialogs) == 0 {
 		return nil
 	}
-	view := d.dialogs[len(d.dialogs)-1].View()
-	return &view
+	return d.dialogs[len(d.dialogs)-1]
 }
 
 func (d dialogCmp) ActiveDialogID() DialogID {
@@ -150,7 +149,7 @@ func (d dialogCmp) ActiveDialogID() DialogID {
 func (d dialogCmp) GetLayers() []*lipgloss.Layer {
 	layers := []*lipgloss.Layer{}
 	for _, dialog := range d.Dialogs() {
-		dialogView := dialog.View().String()
+		dialogView := dialog.View()
 		row, col := dialog.Position()
 		layers = append(layers, lipgloss.NewLayer(dialogView).X(col).Y(row))
 	}

internal/tui/components/dialogs/filepicker/filepicker.go 🔗

@@ -144,7 +144,7 @@ func (m *model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return m, tea.Batch(cmds...)
 }
 
-func (m *model) View() tea.View {
+func (m *model) View() string {
 	t := styles.CurrentTheme()
 
 	content := lipgloss.JoinVertical(
@@ -154,7 +154,7 @@ func (m *model) View() tea.View {
 		m.filePicker.View(),
 		t.S().Base.Width(m.width-2).PaddingLeft(1).AlignHorizontal(lipgloss.Left).Render(m.help.View(m.keyMap)),
 	)
-	return tea.NewView(m.style().Render(content))
+	return m.style().Render(content)
 }
 
 func (m *model) currentImage() string {

internal/tui/components/dialogs/models/apikey.go 🔗

@@ -50,7 +50,7 @@ func (a *APIKeyInput) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return a, cmd
 }
 
-func (a *APIKeyInput) View() tea.View {
+func (a *APIKeyInput) View() string {
 	t := styles.CurrentTheme()
 
 	title := t.S().Base.
@@ -74,13 +74,15 @@ func (a *APIKeyInput) View() tea.View {
 		helpText,
 	)
 
-	view := tea.NewView(content)
+	return content
+}
+
+func (a *APIKeyInput) Cursor() *tea.Cursor {
 	cursor := a.input.Cursor()
 	if cursor != nil {
 		cursor.Y += 2 // Adjust for title and spacing
 	}
-	view.SetCursor(cursor)
-	return view
+	return cursor
 }
 
 func (a *APIKeyInput) Value() string {

internal/tui/components/dialogs/models/list.go 🔗

@@ -55,10 +55,14 @@ func (m *ModelListComponent) Update(msg tea.Msg) (*ModelListComponent, tea.Cmd)
 	return m, cmd
 }
 
-func (m *ModelListComponent) View() tea.View {
+func (m *ModelListComponent) View() string {
 	return m.list.View()
 }
 
+func (m *ModelListComponent) Cursor() *tea.Cursor {
+	return m.list.Cursor()
+}
+
 func (m *ModelListComponent) SetSize(width, height int) tea.Cmd {
 	return m.list.SetSize(width, height)
 }
@@ -113,7 +117,7 @@ func (m *ModelListComponent) SetModelType(modelType int) tea.Cmd {
 			for i, model := range providerConfig.Models {
 				configProvider.Models[i] = provider.Model{
 					ID:                     model.ID,
-					Name:                   model.Name,
+					Model:                  model.Model,
 					CostPer1MIn:            model.CostPer1MIn,
 					CostPer1MOut:           model.CostPer1MOut,
 					CostPer1MInCached:      model.CostPer1MInCached,
@@ -136,7 +140,7 @@ func (m *ModelListComponent) SetModelType(modelType int) tea.Cmd {
 			section.SetInfo(configured)
 			modelItems = append(modelItems, section)
 			for _, model := range configProvider.Models {
-				modelItems = append(modelItems, completions.NewCompletionItem(model.Name, ModelOption{
+				modelItems = append(modelItems, completions.NewCompletionItem(model.Model, ModelOption{
 					Provider: configProvider,
 					Model:    model,
 				}))
@@ -171,7 +175,7 @@ func (m *ModelListComponent) SetModelType(modelType int) tea.Cmd {
 		}
 		modelItems = append(modelItems, section)
 		for _, model := range provider.Models {
-			modelItems = append(modelItems, completions.NewCompletionItem(model.Name, ModelOption{
+			modelItems = append(modelItems, completions.NewCompletionItem(model.Model, ModelOption{
 				Provider: provider,
 				Model:    model,
 			}))

internal/tui/components/dialogs/models/models.go 🔗

@@ -142,23 +142,27 @@ func (m *modelDialogCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return m, nil
 }
 
-func (m *modelDialogCmp) View() tea.View {
+func (m *modelDialogCmp) View() string {
 	t := styles.CurrentTheme()
 	listView := m.modelList.View()
 	radio := m.modelTypeRadio()
 	content := lipgloss.JoinVertical(
 		lipgloss.Left,
 		t.S().Base.Padding(0, 1, 1, 1).Render(core.Title("Switch Model", m.width-lipgloss.Width(radio)-5)+" "+radio),
-		listView.String(),
+		listView,
 		"",
 		t.S().Base.Width(m.width-2).PaddingLeft(1).AlignHorizontal(lipgloss.Left).Render(m.help.View(m.keyMap)),
 	)
-	v := tea.NewView(m.style().Render(content))
-	if listView.Cursor() != nil {
-		c := m.moveCursor(listView.Cursor())
-		v.SetCursor(c)
+	return m.style().Render(content)
+}
+
+func (m *modelDialogCmp) Cursor() *tea.Cursor {
+	cursor := m.modelList.Cursor()
+	if cursor != nil {
+		cursor = m.moveCursor(cursor)
+		return cursor
 	}
-	return v
+	return nil
 }
 
 func (m *modelDialogCmp) style() lipgloss.Style {

internal/tui/components/dialogs/permissions/permissions.go 🔗

@@ -478,8 +478,8 @@ func (p *permissionDialogCmp) render() string {
 		)
 }
 
-func (p *permissionDialogCmp) View() tea.View {
-	return tea.NewView(p.render())
+func (p *permissionDialogCmp) View() string {
+	return p.render()
 }
 
 func (p *permissionDialogCmp) SetSize() tea.Cmd {

internal/tui/components/dialogs/quit/quit.go 🔗

@@ -65,7 +65,7 @@ func (q *quitDialogCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 }
 
 // View renders the quit dialog with Yes/No buttons.
-func (q *quitDialogCmp) View() tea.View {
+func (q *quitDialogCmp) View() string {
 	t := styles.CurrentTheme()
 	baseStyle := t.S().Base
 	yesStyle := t.S().Text
@@ -79,8 +79,9 @@ func (q *quitDialogCmp) View() tea.View {
 		noStyle = noStyle.Background(t.BgSubtle)
 	}
 
-	yesButton := yesStyle.Padding(0, 1).Render("Yep!")
-	noButton := noStyle.Padding(0, 1).Render("Nope")
+	const horizontalPadding = 3
+	yesButton := yesStyle.Padding(0, horizontalPadding).Render("Yep!")
+	noButton := noStyle.Padding(0, horizontalPadding).Render("Nope")
 
 	buttons := baseStyle.Width(lipgloss.Width(question)).Align(lipgloss.Right).Render(
 		lipgloss.JoinHorizontal(lipgloss.Center, yesButton, "  ", noButton),
@@ -100,9 +101,7 @@ func (q *quitDialogCmp) View() tea.View {
 		Border(lipgloss.RoundedBorder()).
 		BorderForeground(t.BorderFocus)
 
-	return tea.NewView(
-		quitDialogStyle.Render(content),
-	)
+	return quitDialogStyle.Render(content)
 }
 
 func (q *quitDialogCmp) Position() (int, int) {

internal/tui/components/dialogs/sessions/sessions.go 🔗

@@ -122,23 +122,29 @@ func (s *sessionDialogCmp) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return s, nil
 }
 
-func (s *sessionDialogCmp) View() tea.View {
+func (s *sessionDialogCmp) View() string {
 	t := styles.CurrentTheme()
 	listView := s.sessionsList.View()
 	content := lipgloss.JoinVertical(
 		lipgloss.Left,
 		t.S().Base.Padding(0, 1, 1, 1).Render(core.Title("Switch Session", s.width-4)),
-		listView.String(),
+		listView,
 		"",
 		t.S().Base.Width(s.width-2).PaddingLeft(1).AlignHorizontal(lipgloss.Left).Render(s.help.View(s.keyMap)),
 	)
 
-	v := tea.NewView(s.style().Render(content))
-	if listView.Cursor() != nil {
-		c := s.moveCursor(listView.Cursor())
-		v.SetCursor(c)
+	return s.style().Render(content)
+}
+
+func (s *sessionDialogCmp) Cursor() *tea.Cursor {
+	if cursor, ok := s.sessionsList.(util.Cursor); ok {
+		cursor := cursor.Cursor()
+		if cursor != nil {
+			cursor = s.moveCursor(cursor)
+		}
+		return cursor
 	}
-	return v
+	return nil
 }
 
 func (s *sessionDialogCmp) style() lipgloss.Style {

internal/tui/exp/list/list.go 🔗

@@ -84,6 +84,6 @@ func (l *list) Update(tea.Msg) (tea.Model, tea.Cmd) {
 }
 
 // View implements List.
-func (l *list) View() tea.View {
+func (l *list) View() string {
 	panic("unimplemented")
 }

internal/tui/page/chat/chat.go 🔗

@@ -54,13 +54,13 @@ const (
 	SideBarWidth          = 31  // Width of the sidebar
 	SideBarDetailsPadding = 1   // Padding for the sidebar details section
 	HeaderHeight          = 1   // Height of the header
-	
+
 	// Layout constants for borders and padding
 	BorderWidth        = 1 // Width of component borders
 	LeftRightBorders   = 2 // Left + right border width (1 + 1)
 	TopBottomBorders   = 2 // Top + bottom border width (1 + 1)
 	DetailsPositioning = 2 // Positioning adjustment for details panel
-	
+
 	// Timing constants
 	CancelTimerDuration = 2 * time.Second // Duration before cancel timer expires
 )
@@ -81,23 +81,23 @@ type chatPage struct {
 	width, height               int
 	detailsWidth, detailsHeight int
 	app                         *app.App
-	
+
 	// Layout state
 	compact      bool
 	forceCompact bool
 	focusedPane  PanelType
-	
+
 	// Session
 	session session.Session
 	keyMap  KeyMap
-	
+
 	// Components
 	header  header.Header
 	sidebar sidebar.Sidebar
 	chat    chat.MessageListCmp
 	editor  editor.Editor
 	splash  splash.Splash
-	
+
 	// Simple state flags
 	showingDetails   bool
 	isCanceling      bool
@@ -123,7 +123,7 @@ func (p *chatPage) Init() tea.Cmd {
 	p.compact = compact
 	p.forceCompact = compact
 	p.sidebar.SetCompactMode(p.compact)
-	
+
 	// Set splash state based on config
 	if !config.HasInitialDataConfig() {
 		// First-time setup: show model selection
@@ -138,7 +138,7 @@ func (p *chatPage) Init() tea.Cmd {
 		p.focusedPane = PanelTypeEditor
 		p.splashFullScreen = false
 	}
-	
+
 	return tea.Batch(
 		p.header.Init(),
 		p.sidebar.Init(),
@@ -244,7 +244,7 @@ func (p *chatPage) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 			if model.SupportsImages {
 				return p, util.CmdHandler(OpenFilePickerMsg{})
 			} else {
-				return p, util.ReportWarn("File attachments are not supported by the current model: " + model.Name)
+				return p, util.ReportWarn("File attachments are not supported by the current model: " + model.Model)
 			}
 		case key.Matches(msg, p.keyMap.Tab):
 			if p.session.ID == "" {
@@ -279,10 +279,21 @@ func (p *chatPage) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
 	return p, tea.Batch(cmds...)
 }
 
-func (p *chatPage) View() tea.View {
-	var chatView tea.View
+func (p *chatPage) Cursor() *tea.Cursor {
+	switch p.focusedPane {
+	case PanelTypeEditor:
+		return p.editor.Cursor()
+	case PanelTypeSplash:
+		return p.splash.Cursor()
+	default:
+		return nil
+	}
+}
+
+func (p *chatPage) View() string {
+	var chatView string
 	t := styles.CurrentTheme()
-	
+
 	if p.session.ID == "" {
 		splashView := p.splash.View()
 		// Full screen during onboarding or project initialization
@@ -291,49 +302,40 @@ func (p *chatPage) View() tea.View {
 		} else {
 			// Show splash + editor for new message state
 			editorView := p.editor.View()
-			chatView = tea.NewView(
-				lipgloss.JoinVertical(
-					lipgloss.Left,
-					t.S().Base.Render(splashView.String()),
-					editorView.String(),
-				),
+			chatView = lipgloss.JoinVertical(
+				lipgloss.Left,
+				t.S().Base.Render(splashView),
+				editorView,
 			)
-			chatView.SetCursor(editorView.Cursor())
 		}
 	} else {
 		messagesView := p.chat.View()
 		editorView := p.editor.View()
 		if p.compact {
 			headerView := p.header.View()
-			chatView = tea.NewView(
-				lipgloss.JoinVertical(
-					lipgloss.Left,
-					headerView.String(),
-					messagesView.String(),
-					editorView.String(),
-				),
+			chatView = lipgloss.JoinVertical(
+				lipgloss.Left,
+				headerView,
+				messagesView,
+				editorView,
 			)
-			chatView.SetCursor(editorView.Cursor())
 		} else {
 			sidebarView := p.sidebar.View()
 			messages := lipgloss.JoinHorizontal(
 				lipgloss.Left,
-				messagesView.String(),
-				sidebarView.String(),
+				messagesView,
+				sidebarView,
 			)
-			chatView = tea.NewView(
-				lipgloss.JoinVertical(
-					lipgloss.Left,
-					messages,
-					p.editor.View().String(),
-				),
+			chatView = lipgloss.JoinVertical(
+				lipgloss.Left,
+				messages,
+				p.editor.View(),
 			)
-			chatView.SetCursor(editorView.Cursor())
 		}
 	}
 
 	layers := []*lipgloss.Layer{
-		lipgloss.NewLayer(chatView.String()).X(0).Y(0),
+		lipgloss.NewLayer(chatView).X(0).Y(0),
 	}
 
 	if p.showingDetails {
@@ -345,7 +347,7 @@ func (p *chatPage) View() tea.View {
 		details := style.Render(
 			lipgloss.JoinVertical(
 				lipgloss.Left,
-				p.sidebar.View().String(),
+				p.sidebar.View(),
 				version,
 			),
 		)
@@ -354,9 +356,7 @@ func (p *chatPage) View() tea.View {
 	canvas := lipgloss.NewCanvas(
 		layers...,
 	)
-	view := tea.NewView(canvas.Render())
-	view.SetCursor(chatView.Cursor())
-	return view
+	return canvas.Render()
 }
 
 func (p *chatPage) updateCompactConfig(compact bool) tea.Cmd {
@@ -404,7 +404,7 @@ func (p *chatPage) SetSize(width, height int) tea.Cmd {
 	p.width = width
 	p.height = height
 	var cmds []tea.Cmd
-	
+
 	if p.session.ID == "" {
 		if p.splashFullScreen {
 			cmds = append(cmds, p.splash.SetSize(width, height))
@@ -451,7 +451,7 @@ func (p *chatPage) setSession(session session.Session) tea.Cmd {
 
 	var cmds []tea.Cmd
 	p.session = session
-	
+
 	cmds = append(cmds, p.SetSize(p.width, p.height))
 	cmds = append(cmds, p.chat.SetSession(session))
 	cmds = append(cmds, p.sidebar.SetSession(session))
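The chat.go changes above migrate every component from returning a `tea.View` wrapper to returning a plain `string`, so pane composition becomes direct string joining. A minimal stdlib-only sketch of that composition pattern (the real code uses `lipgloss.JoinVertical` with alignment; `joinVertical` here is a simplified stand-in):

```go
package main

import (
	"fmt"
	"strings"
)

// joinVertical stacks pane renders top-to-bottom. It is a stdlib
// stand-in for lipgloss.JoinVertical as used in the diff; alignment
// and styling are omitted for brevity.
func joinVertical(panes ...string) string {
	return strings.Join(panes, "\n")
}

func main() {
	// After the migration each component's View() returns a plain
	// string, so the page view is built by joining strings directly
	// rather than unwrapping tea.View values with .String().
	header := "header"
	messages := "messages"
	editor := "editor"
	fmt.Println(joinVertical(header, messages, editor))
}
```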

internal/tui/tui.go 🔗

@@ -64,6 +64,8 @@ func (a appModel) Init() tea.Cmd {
 	cmd = a.status.Init()
 	cmds = append(cmds, cmd)
 
+	cmds = append(cmds, tea.EnableMouseAllMotion)
+
 	return tea.Batch(cmds...)
 }
 
@@ -358,9 +360,9 @@ func (a *appModel) View() tea.View {
 	a.status.SetKeyMap(a.keyMap)
 	pageView := page.View()
 	components := []string{
-		pageView.String(),
+		pageView,
 	}
-	components = append(components, a.status.View().String())
+	components = append(components, a.status.View())
 
 	appView := lipgloss.JoinVertical(lipgloss.Top, components...)
 	layers := []*lipgloss.Layer{
@@ -373,14 +375,20 @@ func (a *appModel) View() tea.View {
 		)
 	}
 
-	cursor := pageView.Cursor()
-	activeView := a.dialog.ActiveView()
+	var cursor *tea.Cursor
+	if v, ok := page.(util.Cursor); ok {
+		cursor = v.Cursor()
+	}
+	activeView := a.dialog.ActiveModel()
 	if activeView != nil {
-		cursor = activeView.Cursor()
+		cursor = nil // Reset cursor if a dialog is active unless it implements util.Cursor
+		if v, ok := activeView.(util.Cursor); ok {
+			cursor = v.Cursor()
+		}
 	}
 
 	if a.completions.Open() && cursor != nil {
-		cmp := a.completions.View().String()
+		cmp := a.completions.View()
 		x, y := a.completions.Position()
 		layers = append(
 			layers,
@@ -392,10 +400,11 @@ func (a *appModel) View() tea.View {
 		layers...,
 	)
 
+	var view tea.View
 	t := styles.CurrentTheme()
-	view := tea.NewView(canvas.Render())
-	view.SetBackgroundColor(t.BgBase)
-	view.SetCursor(cursor)
+	view.Layer = canvas
+	view.BackgroundColor = t.BgBase
+	view.Cursor = cursor
 	return view
 }
 

internal/tui/util/util.go 🔗

@@ -6,9 +6,13 @@ import (
 	tea "github.com/charmbracelet/bubbletea/v2"
 )
 
+type Cursor interface {
+	Cursor() *tea.Cursor
+}
+
 type Model interface {
 	tea.Model
-	tea.Viewable
+	tea.ViewModel
 }
 
 func CmdHandler(msg tea.Msg) tea.Cmd {
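With `View()` returning a bare string, the cursor can no longer ride along on a `tea.View`, so the diff introduces the optional `Cursor` interface above and probes for it with a type assertion. A self-contained sketch of that pattern, using local stand-in types rather than bubbletea's real `tea.Cursor` and `tea.ViewModel` (field names here are assumptions for illustration):

```go
package main

import "fmt"

// Cursor is a stand-in for the position data bubbletea's *tea.Cursor
// carries; the fields here are illustrative, not the real API.
type Cursor struct{ X, Y int }

// ViewModel is the string-returning view contract the diff migrates to.
type ViewModel interface{ View() string }

// CursorProvider mirrors util.Cursor from the diff: an optional
// interface a model implements only when it wants to place the cursor.
type CursorProvider interface{ Cursor() *Cursor }

// editor stands in for the focused pane that owns the cursor.
type editor struct{ col, row int }

func (e editor) View() string    { return "editor pane" }
func (e editor) Cursor() *Cursor { return &Cursor{X: e.col, Y: e.row} }

// header never shows a cursor, so it implements only ViewModel.
type header struct{}

func (header) View() string { return "header" }

// cursorFor extracts a cursor only from models that opt in, exactly
// as tui.go's View does with `if v, ok := page.(util.Cursor); ok`.
func cursorFor(m ViewModel) *Cursor {
	if c, ok := m.(CursorProvider); ok {
		return c.Cursor()
	}
	return nil
}

func main() {
	fmt.Println(cursorFor(editor{col: 4, row: 2}) != nil)
	fmt.Println(cursorFor(header{}) == nil)
}
```

This is why the diff can reset the cursor to `nil` when a dialog is active and restore it only if the dialog itself implements the interface: cursor ownership follows whichever model currently opts in.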

vendor/cloud.google.com/go/LICENSE 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/cloud.google.com/go/auth/CHANGES.md 🔗

@@ -0,0 +1,368 @@
+# Changelog
+
+## [0.13.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.12.1...auth/v0.13.0) (2024-12-13)
+
+
+### Features
+
+* **auth:** Add logging support ([#11079](https://github.com/googleapis/google-cloud-go/issues/11079)) ([c80e31d](https://github.com/googleapis/google-cloud-go/commit/c80e31df5ecb33a810be3dfb9d9e27ac531aa91d))
+* **auth:** Pass logger from auth layer to metadata package ([#11288](https://github.com/googleapis/google-cloud-go/issues/11288)) ([b552efd](https://github.com/googleapis/google-cloud-go/commit/b552efd6ab34e5dfded18438e0fbfd925805614f))
+
+
+### Bug Fixes
+
+* **auth:** Check compute cred type before non-default flag for DP ([#11255](https://github.com/googleapis/google-cloud-go/issues/11255)) ([4347ca1](https://github.com/googleapis/google-cloud-go/commit/4347ca141892be8ae813399b4b437662a103bc90))
+
+## [0.12.1](https://github.com/googleapis/google-cloud-go/compare/auth/v0.12.0...auth/v0.12.1) (2024-12-10)
+
+
+### Bug Fixes
+
+* **auth:** Correct typo in link ([#11160](https://github.com/googleapis/google-cloud-go/issues/11160)) ([af6fb46](https://github.com/googleapis/google-cloud-go/commit/af6fb46d7cd694ddbe8c9d63bc4cdcd62b9fb2c1))
+
+## [0.12.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.11.0...auth/v0.12.0) (2024-12-04)
+
+
+### Features
+
+* **auth:** Add support for providing custom certificate URL ([#11006](https://github.com/googleapis/google-cloud-go/issues/11006)) ([ebf3657](https://github.com/googleapis/google-cloud-go/commit/ebf36579724afb375d3974cf1da38f703e3b7dbc)), refs [#11005](https://github.com/googleapis/google-cloud-go/issues/11005)
+
+
+### Bug Fixes
+
+* **auth:** Ensure endpoints are present in Validator ([#11209](https://github.com/googleapis/google-cloud-go/issues/11209)) ([106cd53](https://github.com/googleapis/google-cloud-go/commit/106cd53309facaef1b8ea78376179f523f6912b9)), refs [#11006](https://github.com/googleapis/google-cloud-go/issues/11006) [#11190](https://github.com/googleapis/google-cloud-go/issues/11190) [#11189](https://github.com/googleapis/google-cloud-go/issues/11189) [#11188](https://github.com/googleapis/google-cloud-go/issues/11188)
+
+## [0.11.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.10.2...auth/v0.11.0) (2024-11-21)
+
+
+### Features
+
+* **auth:** Add universe domain support to mTLS ([#11159](https://github.com/googleapis/google-cloud-go/issues/11159)) ([117748b](https://github.com/googleapis/google-cloud-go/commit/117748ba1cfd4ae62a6a4feb7e30951cb2bc9344))
+
+## [0.10.2](https://github.com/googleapis/google-cloud-go/compare/auth/v0.10.1...auth/v0.10.2) (2024-11-12)
+
+
+### Bug Fixes
+
+* **auth:** Restore use of grpc.Dial ([#11118](https://github.com/googleapis/google-cloud-go/issues/11118)) ([2456b94](https://github.com/googleapis/google-cloud-go/commit/2456b943b7b8aaabd4d8bfb7572c0f477ae0db45)), refs [#7556](https://github.com/googleapis/google-cloud-go/issues/7556)
+
+## [0.10.1](https://github.com/googleapis/google-cloud-go/compare/auth/v0.10.0...auth/v0.10.1) (2024-11-06)
+
+
+### Bug Fixes
+
+* **auth:** Restore Application Default Credentials support to idtoken ([#11083](https://github.com/googleapis/google-cloud-go/issues/11083)) ([8771f2e](https://github.com/googleapis/google-cloud-go/commit/8771f2ea9807ab822083808e0678392edff3b4f2))
+* **auth:** Skip impersonate universe domain check if empty ([#11086](https://github.com/googleapis/google-cloud-go/issues/11086)) ([87159c1](https://github.com/googleapis/google-cloud-go/commit/87159c1059d4a18d1367ce62746a838a94964ab6))
+
+## [0.10.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.9.9...auth/v0.10.0) (2024-10-30)
+
+
+### Features
+
+* **auth:** Add universe domain support to credentials/impersonate ([#10953](https://github.com/googleapis/google-cloud-go/issues/10953)) ([e06cb64](https://github.com/googleapis/google-cloud-go/commit/e06cb6499f7eda3aef08ab18ff197016f667684b))
+
+## [0.9.9](https://github.com/googleapis/google-cloud-go/compare/auth/v0.9.8...auth/v0.9.9) (2024-10-22)
+
+
+### Bug Fixes
+
+* **auth:** Fallback cert lookups for missing files ([#11013](https://github.com/googleapis/google-cloud-go/issues/11013)) ([bd76695](https://github.com/googleapis/google-cloud-go/commit/bd766957ec238b7c40ddbabb369e612dc9b07313)), refs [#10844](https://github.com/googleapis/google-cloud-go/issues/10844)
+* **auth:** Replace MDS endpoint universe_domain with universe-domain ([#11000](https://github.com/googleapis/google-cloud-go/issues/11000)) ([6a1586f](https://github.com/googleapis/google-cloud-go/commit/6a1586f2ce9974684affaea84e7b629313b4d114))
+
+## [0.9.8](https://github.com/googleapis/google-cloud-go/compare/auth/v0.9.7...auth/v0.9.8) (2024-10-09)
+
+
+### Bug Fixes
+
+* **auth:** Restore OpenTelemetry handling in transports ([#10968](https://github.com/googleapis/google-cloud-go/issues/10968)) ([08c6d04](https://github.com/googleapis/google-cloud-go/commit/08c6d04901c1a20e219b2d86df41dbaa6d7d7b55)), refs [#10962](https://github.com/googleapis/google-cloud-go/issues/10962)
+* **auth:** Try talk to plaintext S2A if credentials can not be found for mTLS-S2A ([#10941](https://github.com/googleapis/google-cloud-go/issues/10941)) ([0f0bf2d](https://github.com/googleapis/google-cloud-go/commit/0f0bf2d18c97dd8b65bcf0099f0802b5631c6287))
+
+## [0.9.7](https://github.com/googleapis/google-cloud-go/compare/auth/v0.9.6...auth/v0.9.7) (2024-10-01)
+
+
+### Bug Fixes
+
+* **auth:** Restore support for non-default service accounts for DirectPath ([#10937](https://github.com/googleapis/google-cloud-go/issues/10937)) ([a38650e](https://github.com/googleapis/google-cloud-go/commit/a38650edbf420223077498cafa537aec74b37aad)), refs [#10907](https://github.com/googleapis/google-cloud-go/issues/10907)
+
+## [0.9.6](https://github.com/googleapis/google-cloud-go/compare/auth/v0.9.5...auth/v0.9.6) (2024-09-30)
+
+
+### Bug Fixes
+
+* **auth:** Make aws credentials provider retrieve fresh credentials ([#10920](https://github.com/googleapis/google-cloud-go/issues/10920)) ([250fbf8](https://github.com/googleapis/google-cloud-go/commit/250fbf87d858d865e399a241b7e537c4ff0c3dd8))
+
+## [0.9.5](https://github.com/googleapis/google-cloud-go/compare/auth/v0.9.4...auth/v0.9.5) (2024-09-25)
+
+
+### Bug Fixes
+
+* **auth:** Restore support for GOOGLE_CLOUD_UNIVERSE_DOMAIN env ([#10915](https://github.com/googleapis/google-cloud-go/issues/10915)) ([94caaaa](https://github.com/googleapis/google-cloud-go/commit/94caaaa061362d0e00ef6214afcc8a0a3e7ebfb2))
+* **auth:** Skip directpath credentials overwrite when it's not on GCE ([#10833](https://github.com/googleapis/google-cloud-go/issues/10833)) ([7e5e8d1](https://github.com/googleapis/google-cloud-go/commit/7e5e8d10b761b0a6e43e19a028528db361bc07b1))
+* **auth:** Use new context for non-blocking token refresh ([#10919](https://github.com/googleapis/google-cloud-go/issues/10919)) ([cf7102d](https://github.com/googleapis/google-cloud-go/commit/cf7102d33a21be1e5a9d47a49456b3a57c43b350))
+
+## [0.9.4](https://github.com/googleapis/google-cloud-go/compare/auth/v0.9.3...auth/v0.9.4) (2024-09-11)
+
+
+### Bug Fixes
+
+* **auth:** Enable self-signed JWT for non-GDU universe domain ([#10831](https://github.com/googleapis/google-cloud-go/issues/10831)) ([f9869f7](https://github.com/googleapis/google-cloud-go/commit/f9869f7903cfd34d1b97c25d0dc5669d2c5138e6))
+
+## [0.9.3](https://github.com/googleapis/google-cloud-go/compare/auth/v0.9.2...auth/v0.9.3) (2024-09-03)
+
+
+### Bug Fixes
+
+* **auth:** Choose quota project envvar over file when both present ([#10807](https://github.com/googleapis/google-cloud-go/issues/10807)) ([2d8dd77](https://github.com/googleapis/google-cloud-go/commit/2d8dd7700eff92d4b95027be55e26e1e7aa79181)), refs [#10804](https://github.com/googleapis/google-cloud-go/issues/10804)
+
+## [0.9.2](https://github.com/googleapis/google-cloud-go/compare/auth/v0.9.1...auth/v0.9.2) (2024-08-30)
+
+
+### Bug Fixes
+
+* **auth:** Handle non-Transport DefaultTransport ([#10733](https://github.com/googleapis/google-cloud-go/issues/10733)) ([98d91dc](https://github.com/googleapis/google-cloud-go/commit/98d91dc8316b247498fab41ab35e57a0446fe556)), refs [#10742](https://github.com/googleapis/google-cloud-go/issues/10742)
+* **auth:** Make sure quota option takes precedence over env/file ([#10797](https://github.com/googleapis/google-cloud-go/issues/10797)) ([f1b050d](https://github.com/googleapis/google-cloud-go/commit/f1b050d56d804b245cab048c2980d32b0eaceb4e)), refs [#10795](https://github.com/googleapis/google-cloud-go/issues/10795)
+
+
+### Documentation
+
+* **auth:** Fix Go doc comment link ([#10751](https://github.com/googleapis/google-cloud-go/issues/10751)) ([015acfa](https://github.com/googleapis/google-cloud-go/commit/015acfab4d172650928bb1119bc2cd6307b9a437))
+
+## [0.9.1](https://github.com/googleapis/google-cloud-go/compare/auth/v0.9.0...auth/v0.9.1) (2024-08-22)
+
+
+### Bug Fixes
+
+* **auth:** Setting expireEarly to default when the value is 0 ([#10732](https://github.com/googleapis/google-cloud-go/issues/10732)) ([5e67869](https://github.com/googleapis/google-cloud-go/commit/5e67869a31e9e8ecb4eeebd2cfa11a761c3b1948))
+
+## [0.9.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.8.1...auth/v0.9.0) (2024-08-16)
+
+
+### Features
+
+* **auth:** Auth library can talk to S2A over mTLS ([#10634](https://github.com/googleapis/google-cloud-go/issues/10634)) ([5250a13](https://github.com/googleapis/google-cloud-go/commit/5250a13ec95b8d4eefbe0158f82857ff2189cb45))
+
+## [0.8.1](https://github.com/googleapis/google-cloud-go/compare/auth/v0.8.0...auth/v0.8.1) (2024-08-13)
+
+
+### Bug Fixes
+
+* **auth:** Make default client creation more lenient ([#10669](https://github.com/googleapis/google-cloud-go/issues/10669)) ([1afb9ee](https://github.com/googleapis/google-cloud-go/commit/1afb9ee1ee9de9810722800018133304a0ca34d1)), refs [#10638](https://github.com/googleapis/google-cloud-go/issues/10638)
+
+## [0.8.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.7.3...auth/v0.8.0) (2024-08-07)
+
+
+### Features
+
+* **auth:** Adds support for X509 workload identity federation ([#10373](https://github.com/googleapis/google-cloud-go/issues/10373)) ([5d07505](https://github.com/googleapis/google-cloud-go/commit/5d075056cbe27bb1da4072a26070c41f8999eb9b))
+
+## [0.7.3](https://github.com/googleapis/google-cloud-go/compare/auth/v0.7.2...auth/v0.7.3) (2024-08-01)
+
+
+### Bug Fixes
+
+* **auth/oauth2adapt:** Update dependencies ([257c40b](https://github.com/googleapis/google-cloud-go/commit/257c40bd6d7e59730017cf32bda8823d7a232758))
+* **auth:** Disable automatic universe domain check for MDS ([#10620](https://github.com/googleapis/google-cloud-go/issues/10620)) ([7cea5ed](https://github.com/googleapis/google-cloud-go/commit/7cea5edd5a0c1e6bca558696f5607879141910e8))
+* **auth:** Update dependencies ([257c40b](https://github.com/googleapis/google-cloud-go/commit/257c40bd6d7e59730017cf32bda8823d7a232758))
+
+## [0.7.2](https://github.com/googleapis/google-cloud-go/compare/auth/v0.7.1...auth/v0.7.2) (2024-07-22)
+
+
+### Bug Fixes
+
+* **auth:** Use default client for universe metadata lookup ([#10551](https://github.com/googleapis/google-cloud-go/issues/10551)) ([d9046fd](https://github.com/googleapis/google-cloud-go/commit/d9046fdd1435d1ce48f374806c1def4cb5ac6cd3)), refs [#10544](https://github.com/googleapis/google-cloud-go/issues/10544)
+
+## [0.7.1](https://github.com/googleapis/google-cloud-go/compare/auth/v0.7.0...auth/v0.7.1) (2024-07-10)
+
+
+### Bug Fixes
+
+* **auth:** Bump google.golang.org/grpc@v1.64.1 ([8ecc4e9](https://github.com/googleapis/google-cloud-go/commit/8ecc4e9622e5bbe9b90384d5848ab816027226c5))
+
+## [0.7.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.6.1...auth/v0.7.0) (2024-07-09)
+
+
+### Features
+
+* **auth:** Add workload X509 cert provider as a default cert provider ([#10479](https://github.com/googleapis/google-cloud-go/issues/10479)) ([c51ee6c](https://github.com/googleapis/google-cloud-go/commit/c51ee6cf65ce05b4d501083e49d468c75ac1ea63))
+
+
+### Bug Fixes
+
+* **auth/oauth2adapt:** Bump google.golang.org/api@v0.187.0 ([8fa9e39](https://github.com/googleapis/google-cloud-go/commit/8fa9e398e512fd8533fd49060371e61b5725a85b))
+* **auth:** Bump google.golang.org/api@v0.187.0 ([8fa9e39](https://github.com/googleapis/google-cloud-go/commit/8fa9e398e512fd8533fd49060371e61b5725a85b))
+* **auth:** Check len of slices, not non-nil ([#10483](https://github.com/googleapis/google-cloud-go/issues/10483)) ([0a966a1](https://github.com/googleapis/google-cloud-go/commit/0a966a183e5f0e811977216d736d875b7233e942))
+
+## [0.6.1](https://github.com/googleapis/google-cloud-go/compare/auth/v0.6.0...auth/v0.6.1) (2024-07-01)
+
+
+### Bug Fixes
+
+* **auth:** Support gRPC API keys ([#10460](https://github.com/googleapis/google-cloud-go/issues/10460)) ([daa6646](https://github.com/googleapis/google-cloud-go/commit/daa6646d2af5d7fb5b30489f4934c7db89868c7c))
+* **auth:** Update http and grpc transports to support token exchange over mTLS ([#10397](https://github.com/googleapis/google-cloud-go/issues/10397)) ([c6dfdcf](https://github.com/googleapis/google-cloud-go/commit/c6dfdcf893c3f971eba15026c12db0a960ae81f2))
+
+## [0.6.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.5.2...auth/v0.6.0) (2024-06-25)
+
+
+### Features
+
+* **auth:** Add non-blocking token refresh for compute MDS ([#10263](https://github.com/googleapis/google-cloud-go/issues/10263)) ([9ac350d](https://github.com/googleapis/google-cloud-go/commit/9ac350da11a49b8e2174d3fc5b1a5070fec78b4e))
+
+
+### Bug Fixes
+
+* **auth:** Return error if envvar detected file returns an error ([#10431](https://github.com/googleapis/google-cloud-go/issues/10431)) ([e52b9a7](https://github.com/googleapis/google-cloud-go/commit/e52b9a7c45468827f5d220ab00965191faeb9d05))
+
+## [0.5.2](https://github.com/googleapis/google-cloud-go/compare/auth/v0.5.1...auth/v0.5.2) (2024-06-24)
+
+
+### Bug Fixes
+
+* **auth:** Fetch initial token when CachedTokenProviderOptions.DisableAutoRefresh is true ([#10415](https://github.com/googleapis/google-cloud-go/issues/10415)) ([3266763](https://github.com/googleapis/google-cloud-go/commit/32667635ca2efad05cd8c087c004ca07d7406913)), refs [#10414](https://github.com/googleapis/google-cloud-go/issues/10414)
+
+## [0.5.1](https://github.com/googleapis/google-cloud-go/compare/auth/v0.5.0...auth/v0.5.1) (2024-05-31)
+
+
+### Bug Fixes
+
+* **auth:** Pass through client to 2LO and 3LO flows ([#10290](https://github.com/googleapis/google-cloud-go/issues/10290)) ([685784e](https://github.com/googleapis/google-cloud-go/commit/685784ea84358c15e9214bdecb307d37aa3b6d2f))
+
+## [0.5.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.4.2...auth/v0.5.0) (2024-05-28)
+
+
+### Features
+
+* **auth:** Adds X509 workload certificate provider ([#10233](https://github.com/googleapis/google-cloud-go/issues/10233)) ([17a9db7](https://github.com/googleapis/google-cloud-go/commit/17a9db73af35e3d1a7a25ac4fd1377a103de6150))
+
+## [0.4.2](https://github.com/googleapis/google-cloud-go/compare/auth/v0.4.1...auth/v0.4.2) (2024-05-16)
+
+
+### Bug Fixes
+
+* **auth:** Enable client certificates by default only for GDU ([#10151](https://github.com/googleapis/google-cloud-go/issues/10151)) ([7c52978](https://github.com/googleapis/google-cloud-go/commit/7c529786275a39b7e00525f7d5e7be0d963e9e15))
+* **auth:** Handle non-Transport DefaultTransport ([#10162](https://github.com/googleapis/google-cloud-go/issues/10162)) ([fa3bfdb](https://github.com/googleapis/google-cloud-go/commit/fa3bfdb23aaa45b34394a8b61e753b3587506782)), refs [#10159](https://github.com/googleapis/google-cloud-go/issues/10159)
+* **auth:** Have refresh time match docs ([#10147](https://github.com/googleapis/google-cloud-go/issues/10147)) ([bcb5568](https://github.com/googleapis/google-cloud-go/commit/bcb5568c07a54dd3d2e869d15f502b0741a609e8))
+* **auth:** Update compute token fetching error with named prefix ([#10180](https://github.com/googleapis/google-cloud-go/issues/10180)) ([4573504](https://github.com/googleapis/google-cloud-go/commit/4573504828d2928bebedc875d87650ba227829ea))
+
+## [0.4.1](https://github.com/googleapis/google-cloud-go/compare/auth/v0.4.0...auth/v0.4.1) (2024-05-09)
+
+
+### Bug Fixes
+
+* **auth:** Don't try to detect default creds it opt configured ([#10143](https://github.com/googleapis/google-cloud-go/issues/10143)) ([804632e](https://github.com/googleapis/google-cloud-go/commit/804632e7c5b0b85ff522f7951114485e256eb5bc))
+
+## [0.4.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.3.0...auth/v0.4.0) (2024-05-07)
+
+
+### Features
+
+* **auth:** Enable client certificates by default ([#10102](https://github.com/googleapis/google-cloud-go/issues/10102)) ([9013e52](https://github.com/googleapis/google-cloud-go/commit/9013e5200a6ec0f178ed91acb255481ffb073a2c))
+
+
+### Bug Fixes
+
+* **auth:** Get s2a logic up to date ([#10093](https://github.com/googleapis/google-cloud-go/issues/10093)) ([4fe9ae4](https://github.com/googleapis/google-cloud-go/commit/4fe9ae4b7101af2a5221d6d6b2e77b479305bb06))
+
+## [0.3.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.2.2...auth/v0.3.0) (2024-04-23)
+
+
+### Features
+
+* **auth/httptransport:** Add ability to customize transport ([#10023](https://github.com/googleapis/google-cloud-go/issues/10023)) ([72c7f6b](https://github.com/googleapis/google-cloud-go/commit/72c7f6bbec3136cc7a62788fc7186bc33ef6c3b3)), refs [#9812](https://github.com/googleapis/google-cloud-go/issues/9812) [#9814](https://github.com/googleapis/google-cloud-go/issues/9814)
+
+
+### Bug Fixes
+
+* **auth/credentials:** Error on bad file name if explicitly set ([#10018](https://github.com/googleapis/google-cloud-go/issues/10018)) ([55beaa9](https://github.com/googleapis/google-cloud-go/commit/55beaa993aaf052d8be39766afc6777c3c2a0bdd)), refs [#9809](https://github.com/googleapis/google-cloud-go/issues/9809)
+
+## [0.2.2](https://github.com/googleapis/google-cloud-go/compare/auth/v0.2.1...auth/v0.2.2) (2024-04-19)
+
+
+### Bug Fixes
+
+* **auth:** Add internal opt to skip validation on transports ([#9999](https://github.com/googleapis/google-cloud-go/issues/9999)) ([9e20ef8](https://github.com/googleapis/google-cloud-go/commit/9e20ef89f6287d6bd03b8697d5898dc43b4a77cf)), refs [#9823](https://github.com/googleapis/google-cloud-go/issues/9823)
+* **auth:** Set secure flag for gRPC conn pools ([#10002](https://github.com/googleapis/google-cloud-go/issues/10002)) ([14e3956](https://github.com/googleapis/google-cloud-go/commit/14e3956dfd736399731b5ee8d9b178ae085cf7ba)), refs [#9833](https://github.com/googleapis/google-cloud-go/issues/9833)
+
+## [0.2.1](https://github.com/googleapis/google-cloud-go/compare/auth/v0.2.0...auth/v0.2.1) (2024-04-18)
+
+
+### Bug Fixes
+
+* **auth:** Default gRPC token type to Bearer if not set ([#9800](https://github.com/googleapis/google-cloud-go/issues/9800)) ([5284066](https://github.com/googleapis/google-cloud-go/commit/5284066670b6fe65d79089cfe0199c9660f87fc7))
+
+## [0.2.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.1.1...auth/v0.2.0) (2024-04-15)
+
+### Breaking Changes
+
+The commits below contain several large breaking changes made since the last
+release of the module.
+
+1. The `Credentials` type has been moved to the root of the module as it is
+   becoming the core abstraction for the whole module.
+2. Because of this change, many functions that previously returned a
+   `TokenProvider` now return `Credentials`, and these functions have been
+   renamed to be more specific.
+3. Most places that used to take an optional `TokenProvider` now accept
+   `Credentials`. You can make a `Credentials` from a `TokenProvider` using the
+   constructor found in the `auth` package.
+4. The `detect` package has been renamed to `credentials`. With this change some
+   function signatures were also updated for better readability.
+5. Derivative auth flows like `impersonate` and `downscope` have been moved to
+   be under the new `credentials` package.
+
+Although these changes are disruptive, we believe they are best for the
+long-term health of the module. We do not expect any more large breaking changes
+like these in future revisions, even before 1.0.0. This version will be the
+first version of the auth library that our client libraries start to use and
+depend on.
+
+### Features
+
+* **auth/credentials/externalaccount:** Add default TokenURL ([#9700](https://github.com/googleapis/google-cloud-go/issues/9700)) ([81830e6](https://github.com/googleapis/google-cloud-go/commit/81830e6848ceefd055aa4d08f933d1154455a0f6))
+* **auth:** Add downscope.Options.UniverseDomain ([#9634](https://github.com/googleapis/google-cloud-go/issues/9634)) ([52cf7d7](https://github.com/googleapis/google-cloud-go/commit/52cf7d780853594291c4e34302d618299d1f5a1d))
+* **auth:** Add universe domain to grpctransport and httptransport ([#9663](https://github.com/googleapis/google-cloud-go/issues/9663)) ([67d353b](https://github.com/googleapis/google-cloud-go/commit/67d353beefe3b607c08c891876fbd95ab89e5fe3)), refs [#9670](https://github.com/googleapis/google-cloud-go/issues/9670)
+* **auth:** Add UniverseDomain to DetectOptions ([#9536](https://github.com/googleapis/google-cloud-go/issues/9536)) ([3618d3f](https://github.com/googleapis/google-cloud-go/commit/3618d3f7061615c0e189f376c75abc201203b501))
+* **auth:** Make package externalaccount public ([#9633](https://github.com/googleapis/google-cloud-go/issues/9633)) ([a0978d8](https://github.com/googleapis/google-cloud-go/commit/a0978d8e96968399940ebd7d092539772bf9caac))
+* **auth:** Move credentials to base auth package ([#9590](https://github.com/googleapis/google-cloud-go/issues/9590)) ([1a04baf](https://github.com/googleapis/google-cloud-go/commit/1a04bafa83c27342b9308d785645e1e5423ea10d))
+* **auth:** Refactor public sigs to use Credentials ([#9603](https://github.com/googleapis/google-cloud-go/issues/9603)) ([69cb240](https://github.com/googleapis/google-cloud-go/commit/69cb240c530b1f7173a9af2555c19e9a1beb56c5))
+
+
+### Bug Fixes
+
+* **auth/oauth2adapt:** Update protobuf dep to v1.33.0 ([30b038d](https://github.com/googleapis/google-cloud-go/commit/30b038d8cac0b8cd5dd4761c87f3f298760dd33a))
+* **auth:** Fix uint32 conversion ([9221c7f](https://github.com/googleapis/google-cloud-go/commit/9221c7fa12cef9d5fb7ddc92f41f1d6204971c7b))
+* **auth:** Port sts expires fix ([#9618](https://github.com/googleapis/google-cloud-go/issues/9618)) ([7bec97b](https://github.com/googleapis/google-cloud-go/commit/7bec97b2f51ed3ac4f9b88bf100d301da3f5d1bd))
+* **auth:** Read universe_domain from all credentials files ([#9632](https://github.com/googleapis/google-cloud-go/issues/9632)) ([16efbb5](https://github.com/googleapis/google-cloud-go/commit/16efbb52e39ea4a319e5ee1e95c0e0305b6d9824))
+* **auth:** Remove content-type header from idms get requests ([#9508](https://github.com/googleapis/google-cloud-go/issues/9508)) ([8589f41](https://github.com/googleapis/google-cloud-go/commit/8589f41599d265d7c3d46a3d86c9fab2329cbdd9))
+* **auth:** Update protobuf dep to v1.33.0 ([30b038d](https://github.com/googleapis/google-cloud-go/commit/30b038d8cac0b8cd5dd4761c87f3f298760dd33a))
+
+## [0.1.1](https://github.com/googleapis/google-cloud-go/compare/auth/v0.1.0...auth/v0.1.1) (2024-03-10)
+
+
+### Bug Fixes
+
+* **auth/impersonate:** Properly send default detect params ([#9529](https://github.com/googleapis/google-cloud-go/issues/9529)) ([5b6b8be](https://github.com/googleapis/google-cloud-go/commit/5b6b8bef577f82707e51f5cc5d258d5bdf90218f)), refs [#9136](https://github.com/googleapis/google-cloud-go/issues/9136)
+* **auth:** Update grpc-go to v1.56.3 ([343cea8](https://github.com/googleapis/google-cloud-go/commit/343cea8c43b1e31ae21ad50ad31d3b0b60143f8c))
+* **auth:** Update grpc-go to v1.59.0 ([81a97b0](https://github.com/googleapis/google-cloud-go/commit/81a97b06cb28b25432e4ece595c55a9857e960b7))
+
+## 0.1.0 (2023-10-18)
+
+
+### Features
+
+* **auth:** Add base auth package ([#8465](https://github.com/googleapis/google-cloud-go/issues/8465)) ([6a45f26](https://github.com/googleapis/google-cloud-go/commit/6a45f26b809b64edae21f312c18d4205f96b180e))
+* **auth:** Add cert support to httptransport ([#8569](https://github.com/googleapis/google-cloud-go/issues/8569)) ([37e3435](https://github.com/googleapis/google-cloud-go/commit/37e3435f8e98595eafab481bdfcb31a4c56fa993))
+* **auth:** Add Credentials.UniverseDomain() ([#8654](https://github.com/googleapis/google-cloud-go/issues/8654)) ([af0aa1e](https://github.com/googleapis/google-cloud-go/commit/af0aa1ed8015bc8fe0dd87a7549ae029107cbdb8))
+* **auth:** Add detect package ([#8491](https://github.com/googleapis/google-cloud-go/issues/8491)) ([d977419](https://github.com/googleapis/google-cloud-go/commit/d977419a3269f6acc193df77a2136a6eb4b4add7))
+* **auth:** Add downscope package ([#8532](https://github.com/googleapis/google-cloud-go/issues/8532)) ([dda9bff](https://github.com/googleapis/google-cloud-go/commit/dda9bff8ec70e6d104901b4105d13dcaa4e2404c))
+* **auth:** Add grpctransport package ([#8625](https://github.com/googleapis/google-cloud-go/issues/8625)) ([69a8347](https://github.com/googleapis/google-cloud-go/commit/69a83470bdcc7ed10c6c36d1abc3b7cfdb8a0ee5))
+* **auth:** Add httptransport package ([#8567](https://github.com/googleapis/google-cloud-go/issues/8567)) ([6898597](https://github.com/googleapis/google-cloud-go/commit/6898597d2ea95d630fcd00fd15c58c75ea843bff))
+* **auth:** Add idtoken package ([#8580](https://github.com/googleapis/google-cloud-go/issues/8580)) ([a79e693](https://github.com/googleapis/google-cloud-go/commit/a79e693e97e4e3e1c6742099af3dbc58866d88fe))
+* **auth:** Add impersonate package ([#8578](https://github.com/googleapis/google-cloud-go/issues/8578)) ([e29ba0c](https://github.com/googleapis/google-cloud-go/commit/e29ba0cb7bd3888ab9e808087027dc5a32474c04))
+* **auth:** Add support for external accounts in detect ([#8508](https://github.com/googleapis/google-cloud-go/issues/8508)) ([62210d5](https://github.com/googleapis/google-cloud-go/commit/62210d5d3e56e8e9f35db8e6ac0defec19582507))
+* **auth:** Port external account changes ([#8697](https://github.com/googleapis/google-cloud-go/issues/8697)) ([5823db5](https://github.com/googleapis/google-cloud-go/commit/5823db5d633069999b58b9131a7f9cd77e82c899))
+
+
+### Bug Fixes
+
+* **auth/oauth2adapt:** Update golang.org/x/net to v0.17.0 ([174da47](https://github.com/googleapis/google-cloud-go/commit/174da47254fefb12921bbfc65b7829a453af6f5d))
+* **auth:** Update golang.org/x/net to v0.17.0 ([174da47](https://github.com/googleapis/google-cloud-go/commit/174da47254fefb12921bbfc65b7829a453af6f5d))

vendor/cloud.google.com/go/auth/LICENSE 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/cloud.google.com/go/auth/README.md 🔗

@@ -0,0 +1,40 @@
+# Google Auth Library for Go
+
+[![Go Reference](https://pkg.go.dev/badge/cloud.google.com/go/auth.svg)](https://pkg.go.dev/cloud.google.com/go/auth)
+
+## Install
+
+``` bash
+go get cloud.google.com/go/auth@latest
+```
+
+## Usage
+
+This library is most commonly used transitively, by default, from any of our
+Go client libraries.
+
+### Notable use-cases
+
+- To create a credential directly please see examples in the
+  [credentials](https://pkg.go.dev/cloud.google.com/go/auth/credentials)
+  package.
+- To create an authenticated HTTP client please see examples in the
+  [httptransport](https://pkg.go.dev/cloud.google.com/go/auth/httptransport)
+  package.
+- To create an authenticated gRPC connection please see examples in the
+  [grpctransport](https://pkg.go.dev/cloud.google.com/go/auth/grpctransport)
+  package.
+- To create an ID token please see examples in the
+  [idtoken](https://pkg.go.dev/cloud.google.com/go/auth/credentials/idtoken)
+  package.
+
+## Contributing
+
+Contributions are welcome. Please see the
+[CONTRIBUTING](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/main/CONTRIBUTING.md)
+document for details.
+
+Please note that this project is released with a Contributor Code of Conduct.
+By participating in this project you agree to abide by its terms.
+See [Contributor Code of Conduct](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/main/CONTRIBUTING.md#contributor-code-of-conduct)
+for more information.

vendor/cloud.google.com/go/auth/auth.go 🔗

@@ -0,0 +1,618 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Package auth provides utilities for managing Google Cloud credentials,
+// including functionality for creating, caching, and refreshing OAuth2 tokens.
+// It offers customizable options for different OAuth2 flows, such as 2-legged
+// (2LO) and 3-legged (3LO) OAuth, along with support for PKCE and automatic
+// token management.
+package auth
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"log/slog"
+	"net/http"
+	"net/url"
+	"strings"
+	"sync"
+	"time"
+
+	"cloud.google.com/go/auth/internal"
+	"cloud.google.com/go/auth/internal/jwt"
+	"github.com/googleapis/gax-go/v2/internallog"
+)
+
+const (
+	// Parameter keys for AuthCodeURL method to support PKCE.
+	codeChallengeKey       = "code_challenge"
+	codeChallengeMethodKey = "code_challenge_method"
+
+	// Parameter key for Exchange method to support PKCE.
+	codeVerifierKey = "code_verifier"
+
+	// 3 minutes and 45 seconds before expiration. The shortest MDS cache is 4 minutes,
+	// so we give it 15 seconds to refresh its cache before attempting to refresh a token.
+	defaultExpiryDelta = 225 * time.Second
+
+	universeDomainDefault = "googleapis.com"
+)
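The PKCE parameter keys above come from RFC 7636. As a self-contained sketch (not this package's API) of how a `code_challenge` is derived from a `code_verifier` with the S256 method:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// s256Challenge derives a PKCE code_challenge from a code_verifier per
// RFC 7636: BASE64URL(SHA256(verifier)), unpadded.
func s256Challenge(verifier string) string {
	sum := sha256.Sum256([]byte(verifier))
	return base64.RawURLEncoding.EncodeToString(sum[:])
}

func main() {
	// Verifier from RFC 7636 Appendix B.
	v := "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
	fmt.Println(s256Challenge(v))
	// → E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
}
```

The server stores the challenge at authorization time and later verifies the exchange request by recomputing it from the presented `code_verifier`.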
+
+// tokenState represents different states for a [Token].
+type tokenState int
+
+const (
+	// fresh indicates that the [Token] is valid. It is not expired or close to
+	// expired, or the token has no expiry.
+	fresh tokenState = iota
+	// stale indicates that the [Token] is close to expired, and should be
+	// refreshed. The token can be used normally.
+	stale
+	// invalid indicates that the [Token] is expired or invalid. The token
+	// cannot be used for a normal operation.
+	invalid
+)
+
+var (
+	defaultGrantType = "urn:ietf:params:oauth:grant-type:jwt-bearer"
+	defaultHeader    = &jwt.Header{Algorithm: jwt.HeaderAlgRSA256, Type: jwt.HeaderType}
+
+	// for testing
+	timeNow = time.Now
+)
+
+// TokenProvider specifies an interface for anything that can return a token.
+type TokenProvider interface {
+	// Token returns a Token or an error.
+	// The Token returned must be safe to use
+	// concurrently.
+	// The returned Token must not be modified.
+	// The context provided must be sent along to any requests that are made in
+	// the implementing code.
+	Token(context.Context) (*Token, error)
+}
+
+// Token holds the credential token used to authorize requests. All fields are
+// considered read-only.
+type Token struct {
+	// Value is the token used to authorize requests. It is usually an access
+	// token but may be other types of tokens such as ID tokens in some flows.
+	Value string
+	// Type is the type of token Value is. If uninitialized, it should be
+	// assumed to be a "Bearer" token.
+	Type string
+	// Expiry is the time the token is set to expire.
+	Expiry time.Time
+	// Metadata may include, but is not limited to, the body of the token
+	// response returned by the server.
+	Metadata map[string]interface{} // TODO(codyoss): maybe make a method to flatten metadata to avoid []string for url.Values
+}
+
+// IsValid reports that a [Token] is non-nil, has a [Token.Value], and has not
+// expired. A token is considered expired if [Token.Expiry] has passed or will
+// pass in the next 225 seconds.
+func (t *Token) IsValid() bool {
+	return t.isValidWithEarlyExpiry(defaultExpiryDelta)
+}
+
+// MetadataString is a convenience method for accessing string values in the
+// token's metadata. Returns an empty string if the metadata is nil or the value
+// for the given key cannot be cast to a string.
+func (t *Token) MetadataString(k string) string {
+	if t.Metadata == nil {
+		return ""
+	}
+	s, ok := t.Metadata[k].(string)
+	if !ok {
+		return ""
+	}
+	return s
+}
+
+func (t *Token) isValidWithEarlyExpiry(earlyExpiry time.Duration) bool {
+	if t.isEmpty() {
+		return false
+	}
+	if t.Expiry.IsZero() {
+		return true
+	}
+	return !t.Expiry.Round(0).Add(-earlyExpiry).Before(timeNow())
+}
+
+func (t *Token) isEmpty() bool {
+	return t == nil || t.Value == ""
+}
+
+// Credentials holds Google credentials, including
+// [Application Default Credentials].
+//
+// [Application Default Credentials]: https://developers.google.com/accounts/docs/application-default-credentials
+type Credentials struct {
+	json           []byte
+	projectID      CredentialsPropertyProvider
+	quotaProjectID CredentialsPropertyProvider
+	// universeDomain is the default service domain for a given Cloud universe.
+	universeDomain CredentialsPropertyProvider
+
+	TokenProvider
+}
+
+// JSON returns the bytes associated with the file used to source
+// credentials if one was used.
+func (c *Credentials) JSON() []byte {
+	return c.json
+}
+
+// ProjectID returns the associated project ID from the underlying file or
+// environment.
+func (c *Credentials) ProjectID(ctx context.Context) (string, error) {
+	if c.projectID == nil {
+		return internal.GetProjectID(c.json, ""), nil
+	}
+	v, err := c.projectID.GetProperty(ctx)
+	if err != nil {
+		return "", err
+	}
+	return internal.GetProjectID(c.json, v), nil
+}
+
+// QuotaProjectID returns the associated quota project ID from the underlying
+// file or environment.
+func (c *Credentials) QuotaProjectID(ctx context.Context) (string, error) {
+	if c.quotaProjectID == nil {
+		return internal.GetQuotaProject(c.json, ""), nil
+	}
+	v, err := c.quotaProjectID.GetProperty(ctx)
+	if err != nil {
+		return "", err
+	}
+	return internal.GetQuotaProject(c.json, v), nil
+}
+
+// UniverseDomain returns the default service domain for a given Cloud universe.
+// The default value is "googleapis.com".
+func (c *Credentials) UniverseDomain(ctx context.Context) (string, error) {
+	if c.universeDomain == nil {
+		return universeDomainDefault, nil
+	}
+	v, err := c.universeDomain.GetProperty(ctx)
+	if err != nil {
+		return "", err
+	}
+	if v == "" {
+		return universeDomainDefault, nil
+	}
+	return v, err
+}
+
+// CredentialsPropertyProvider provides an implementation to fetch a property
+// value for [Credentials].
+type CredentialsPropertyProvider interface {
+	GetProperty(context.Context) (string, error)
+}
+
+// CredentialsPropertyFunc is a type adapter to allow the use of ordinary
+// functions as a [CredentialsPropertyProvider].
+type CredentialsPropertyFunc func(context.Context) (string, error)
+
+// GetProperty loads the property value for the given context.
+func (p CredentialsPropertyFunc) GetProperty(ctx context.Context) (string, error) {
+	return p(ctx)
+}
+
+// CredentialsOptions are used to configure [Credentials].
+type CredentialsOptions struct {
+	// TokenProvider is a means of sourcing a token for the credentials. Required.
+	TokenProvider TokenProvider
+	// JSON is the raw contents of the credentials file if sourced from a file.
+	JSON []byte
+	// ProjectIDProvider resolves the project ID associated with the
+	// credentials.
+	ProjectIDProvider CredentialsPropertyProvider
+	// QuotaProjectIDProvider resolves the quota project ID associated with the
+	// credentials.
+	QuotaProjectIDProvider CredentialsPropertyProvider
+	// UniverseDomainProvider resolves the universe domain with the credentials.
+	UniverseDomainProvider CredentialsPropertyProvider
+}
+
+// NewCredentials returns new [Credentials] from the provided options.
+func NewCredentials(opts *CredentialsOptions) *Credentials {
+	creds := &Credentials{
+		TokenProvider:  opts.TokenProvider,
+		json:           opts.JSON,
+		projectID:      opts.ProjectIDProvider,
+		quotaProjectID: opts.QuotaProjectIDProvider,
+		universeDomain: opts.UniverseDomainProvider,
+	}
+
+	return creds
+}
+
+// CachedTokenProviderOptions provides options for configuring a cached
+// [TokenProvider].
+type CachedTokenProviderOptions struct {
+	// DisableAutoRefresh makes the TokenProvider always return the same token,
+	// even if it is expired. The default is false. Optional.
+	DisableAutoRefresh bool
+	// ExpireEarly configures the amount of time before a token expires, that it
+	// should be refreshed. If unset, the default value is 3 minutes and 45
+	// seconds. Optional.
+	ExpireEarly time.Duration
+	// DisableAsyncRefresh configures a synchronous workflow that refreshes
+	// tokens in a blocking manner. The default is false. Optional.
+	DisableAsyncRefresh bool
+}
+
+func (ctpo *CachedTokenProviderOptions) autoRefresh() bool {
+	if ctpo == nil {
+		return true
+	}
+	return !ctpo.DisableAutoRefresh
+}
+
+func (ctpo *CachedTokenProviderOptions) expireEarly() time.Duration {
+	if ctpo == nil || ctpo.ExpireEarly == 0 {
+		return defaultExpiryDelta
+	}
+	return ctpo.ExpireEarly
+}
+
+func (ctpo *CachedTokenProviderOptions) blockingRefresh() bool {
+	if ctpo == nil {
+		return false
+	}
+	return ctpo.DisableAsyncRefresh
+}
+
+// NewCachedTokenProvider wraps a [TokenProvider] to cache the tokens returned
+// by the underlying provider. By default it will refresh tokens asynchronously
+// a few minutes before they expire.
+func NewCachedTokenProvider(tp TokenProvider, opts *CachedTokenProviderOptions) TokenProvider {
+	if ctp, ok := tp.(*cachedTokenProvider); ok {
+		return ctp
+	}
+	return &cachedTokenProvider{
+		tp:              tp,
+		autoRefresh:     opts.autoRefresh(),
+		expireEarly:     opts.expireEarly(),
+		blockingRefresh: opts.blockingRefresh(),
+	}
+}
+
+type cachedTokenProvider struct {
+	tp              TokenProvider
+	autoRefresh     bool
+	expireEarly     time.Duration
+	blockingRefresh bool
+
+	mu          sync.Mutex
+	cachedToken *Token
+	// isRefreshRunning ensures that the non-blocking refresh will only be
+	// attempted once, even if multiple callers enter the Token method.
+	isRefreshRunning bool
+	// isRefreshErr ensures that the non-blocking refresh will only be attempted
+	// once per refresh window if an error is encountered.
+	isRefreshErr bool
+}
+
+func (c *cachedTokenProvider) Token(ctx context.Context) (*Token, error) {
+	if c.blockingRefresh {
+		return c.tokenBlocking(ctx)
+	}
+	return c.tokenNonBlocking(ctx)
+}
+
+func (c *cachedTokenProvider) tokenNonBlocking(ctx context.Context) (*Token, error) {
+	switch c.tokenState() {
+	case fresh:
+		c.mu.Lock()
+		defer c.mu.Unlock()
+		return c.cachedToken, nil
+	case stale:
+		// Call tokenAsync with a new Context because the user-provided context
+		// may have a short timeout incompatible with async token refresh.
+		c.tokenAsync(context.Background())
+		// Return the stale token immediately to not block customer requests to Cloud services.
+		c.mu.Lock()
+		defer c.mu.Unlock()
+		return c.cachedToken, nil
+	default: // invalid
+		return c.tokenBlocking(ctx)
+	}
+}
+
+// tokenState reports the token's validity.
+func (c *cachedTokenProvider) tokenState() tokenState {
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	t := c.cachedToken
+	now := timeNow()
+	if t == nil || t.Value == "" {
+		return invalid
+	} else if t.Expiry.IsZero() {
+		return fresh
+	} else if now.After(t.Expiry.Round(0)) {
+		return invalid
+	} else if now.After(t.Expiry.Round(0).Add(-c.expireEarly)) {
+		return stale
+	}
+	return fresh
+}
+
+// tokenAsync uses a bool to ensure that only one non-blocking token refresh
+// happens at a time, even if multiple callers have entered this function
+// concurrently. This avoids creating an arbitrary number of concurrent
+// goroutines. Retries should be attempted and managed within the Token method.
+// If the refresh attempt fails, no further attempts are made until the refresh
+// window expires and the token enters the invalid state, at which point the
+// blocking call to Token should likely return the same error on the main goroutine.
+func (c *cachedTokenProvider) tokenAsync(ctx context.Context) {
+	fn := func() {
+		c.mu.Lock()
+		c.isRefreshRunning = true
+		c.mu.Unlock()
+		t, err := c.tp.Token(ctx)
+		c.mu.Lock()
+		defer c.mu.Unlock()
+		c.isRefreshRunning = false
+		if err != nil {
+			// Discard errors from the non-blocking refresh, but prevent further
+			// attempts.
+			c.isRefreshErr = true
+			return
+		}
+		c.cachedToken = t
+	}
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	if !c.isRefreshRunning && !c.isRefreshErr {
+		go fn()
+	}
+}
+
+func (c *cachedTokenProvider) tokenBlocking(ctx context.Context) (*Token, error) {
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	c.isRefreshErr = false
+	if c.cachedToken.IsValid() || (!c.autoRefresh && !c.cachedToken.isEmpty()) {
+		return c.cachedToken, nil
+	}
+	t, err := c.tp.Token(ctx)
+	if err != nil {
+		return nil, err
+	}
+	c.cachedToken = t
+	return t, nil
+}
+
+// Error is an error associated with retrieving a [Token]. It can hold useful
+// additional details for debugging.
+type Error struct {
+	// Response is the HTTP response associated with the error. The body will
+	// always be already closed and consumed.
+	Response *http.Response
+	// Body is the HTTP response body.
+	Body []byte
+	// Err is the underlying wrapped error.
+	Err error
+
+	// code returned in the token response
+	code string
+	// description returned in the token response
+	description string
+	// uri returned in the token response
+	uri string
+}
+
+func (e *Error) Error() string {
+	if e.code != "" {
+		s := fmt.Sprintf("auth: %q", e.code)
+		if e.description != "" {
+			s += fmt.Sprintf(" %q", e.description)
+		}
+		if e.uri != "" {
+			s += fmt.Sprintf(" %q", e.uri)
+		}
+		return s
+	}
+	return fmt.Sprintf("auth: cannot fetch token: %v\nResponse: %s", e.Response.StatusCode, e.Body)
+}
+
+// Temporary returns true if the error is considered temporary and the request
+// may be retried.
+func (e *Error) Temporary() bool {
+	if e.Response == nil {
+		return false
+	}
+	sc := e.Response.StatusCode
+	return sc == http.StatusInternalServerError || sc == http.StatusServiceUnavailable || sc == http.StatusRequestTimeout || sc == http.StatusTooManyRequests
+}
+
+func (e *Error) Unwrap() error {
+	return e.Err
+}
+
+// Style describes how the token endpoint wants to receive the ClientID and
+// ClientSecret.
+type Style int
+
+const (
+	// StyleUnknown means the value has not been initialized. Sending this in
+	// a request will cause the token exchange to fail.
+	StyleUnknown Style = iota
+	// StyleInParams sends client info in the body of a POST request.
+	StyleInParams
+	// StyleInHeader sends client info using Basic Authorization header.
+	StyleInHeader
+)
+
+// Options2LO is the configuration settings for doing a 2-legged JWT OAuth2 flow.
+type Options2LO struct {
+	// Email is the OAuth2 client ID. This value is set as the "iss" in the
+	// JWT.
+	Email string
+	// PrivateKey contains the contents of an RSA private key or the
+	// contents of a PEM file that contains a private key. It is used to sign
+	// the JWT created.
+	PrivateKey []byte
+	// TokenURL is the URL the JWT is sent to. Required.
+	TokenURL string
+	// PrivateKeyID is the ID of the key used to sign the JWT. It is used as the
+	// "kid" in the JWT header. Optional.
+	PrivateKeyID string
+	// Subject is the user email used to impersonate a user. It is used as the
+	// "sub" in the JWT. Optional.
+	Subject string
+	// Scopes specifies requested permissions for the token. Optional.
+	Scopes []string
+	// Expires specifies the lifetime of the token. Optional.
+	Expires time.Duration
+	// Audience specifies the "aud" in the JWT. Optional.
+	Audience string
+	// PrivateClaims allows specifying any custom claims for the JWT. Optional.
+	PrivateClaims map[string]interface{}
+
+	// Client is the client to be used to make the underlying token requests.
+	// Optional.
+	Client *http.Client
+	// UseIDToken requests that the token returned be an ID token if one is
+	// returned from the server. Optional.
+	UseIDToken bool
+	// Logger is used for debug logging. If provided, logging will be enabled
+	// at the loggers configured level. By default logging is disabled unless
+	// enabled by setting GOOGLE_SDK_GO_LOGGING_LEVEL in which case a default
+	// logger will be used. Optional.
+	Logger *slog.Logger
+}
+
+func (o *Options2LO) client() *http.Client {
+	if o.Client != nil {
+		return o.Client
+	}
+	return internal.DefaultClient()
+}
+
+func (o *Options2LO) validate() error {
+	if o == nil {
+		return errors.New("auth: options must be provided")
+	}
+	if o.Email == "" {
+		return errors.New("auth: email must be provided")
+	}
+	if len(o.PrivateKey) == 0 {
+		return errors.New("auth: private key must be provided")
+	}
+	if o.TokenURL == "" {
+		return errors.New("auth: token URL must be provided")
+	}
+	return nil
+}
+
+// New2LOTokenProvider returns a [TokenProvider] from the provided options.
+func New2LOTokenProvider(opts *Options2LO) (TokenProvider, error) {
+	if err := opts.validate(); err != nil {
+		return nil, err
+	}
+	return tokenProvider2LO{opts: opts, Client: opts.client(), logger: internallog.New(opts.Logger)}, nil
+}
+
+type tokenProvider2LO struct {
+	opts   *Options2LO
+	Client *http.Client
+	logger *slog.Logger
+}
+
+func (tp tokenProvider2LO) Token(ctx context.Context) (*Token, error) {
+	pk, err := internal.ParseKey(tp.opts.PrivateKey)
+	if err != nil {
+		return nil, err
+	}
+	claimSet := &jwt.Claims{
+		Iss:              tp.opts.Email,
+		Scope:            strings.Join(tp.opts.Scopes, " "),
+		Aud:              tp.opts.TokenURL,
+		AdditionalClaims: tp.opts.PrivateClaims,
+		Sub:              tp.opts.Subject,
+	}
+	if t := tp.opts.Expires; t > 0 {
+		claimSet.Exp = time.Now().Add(t).Unix()
+	}
+	if aud := tp.opts.Audience; aud != "" {
+		claimSet.Aud = aud
+	}
+	h := *defaultHeader
+	h.KeyID = tp.opts.PrivateKeyID
+	payload, err := jwt.EncodeJWS(&h, claimSet, pk)
+	if err != nil {
+		return nil, err
+	}
+	v := url.Values{}
+	v.Set("grant_type", defaultGrantType)
+	v.Set("assertion", payload)
+	req, err := http.NewRequestWithContext(ctx, "POST", tp.opts.TokenURL, strings.NewReader(v.Encode()))
+	if err != nil {
+		return nil, err
+	}
+	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
+	tp.logger.DebugContext(ctx, "2LO token request", "request", internallog.HTTPRequest(req, []byte(v.Encode())))
+	resp, body, err := internal.DoRequest(tp.Client, req)
+	if err != nil {
+		return nil, fmt.Errorf("auth: cannot fetch token: %w", err)
+	}
+	tp.logger.DebugContext(ctx, "2LO token response", "response", internallog.HTTPResponse(resp, body))
+	if c := resp.StatusCode; c < http.StatusOK || c >= http.StatusMultipleChoices {
+		return nil, &Error{
+			Response: resp,
+			Body:     body,
+		}
+	}
+	// tokenRes is the JSON response body.
+	var tokenRes struct {
+		AccessToken string `json:"access_token"`
+		TokenType   string `json:"token_type"`
+		IDToken     string `json:"id_token"`
+		ExpiresIn   int64  `json:"expires_in"`
+	}
+	if err := json.Unmarshal(body, &tokenRes); err != nil {
+		return nil, fmt.Errorf("auth: cannot fetch token: %w", err)
+	}
+	token := &Token{
+		Value: tokenRes.AccessToken,
+		Type:  tokenRes.TokenType,
+	}
+	token.Metadata = make(map[string]interface{})
+	json.Unmarshal(body, &token.Metadata) // no error checks for optional fields
+
+	if secs := tokenRes.ExpiresIn; secs > 0 {
+		token.Expiry = time.Now().Add(time.Duration(secs) * time.Second)
+	}
+	if v := tokenRes.IDToken; v != "" {
+		// decode returned id token to get expiry
+		claimSet, err := jwt.DecodeJWS(v)
+		if err != nil {
+			return nil, fmt.Errorf("auth: error decoding JWT token: %w", err)
+		}
+		token.Expiry = time.Unix(claimSet.Exp, 0)
+	}
+	if tp.opts.UseIDToken {
+		if tokenRes.IDToken == "" {
+			return nil, fmt.Errorf("auth: response doesn't have JWT token")
+		}
+		token.Value = tokenRes.IDToken
+	}
+	return token, nil
+}

vendor/cloud.google.com/go/auth/credentials/compute.go 🔗

@@ -0,0 +1,90 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package credentials
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/url"
+	"strings"
+	"time"
+
+	"cloud.google.com/go/auth"
+	"cloud.google.com/go/compute/metadata"
+)
+
+var (
+	computeTokenMetadata = map[string]interface{}{
+		"auth.google.tokenSource":    "compute-metadata",
+		"auth.google.serviceAccount": "default",
+	}
+	computeTokenURI = "instance/service-accounts/default/token"
+)
+
+// computeTokenProvider creates a [cloud.google.com/go/auth.TokenProvider] that
+// uses the metadata service to retrieve tokens.
+func computeTokenProvider(opts *DetectOptions, client *metadata.Client) auth.TokenProvider {
+	return auth.NewCachedTokenProvider(&computeProvider{
+		scopes: opts.Scopes,
+		client: client,
+	}, &auth.CachedTokenProviderOptions{
+		ExpireEarly:         opts.EarlyTokenRefresh,
+		DisableAsyncRefresh: opts.DisableAsyncRefresh,
+	})
+}
+
+// computeProvider fetches tokens from the Google Cloud metadata service.
+type computeProvider struct {
+	scopes []string
+	client *metadata.Client
+}
+
+type metadataTokenResp struct {
+	AccessToken  string `json:"access_token"`
+	ExpiresInSec int    `json:"expires_in"`
+	TokenType    string `json:"token_type"`
+}
+
+func (cs *computeProvider) Token(ctx context.Context) (*auth.Token, error) {
+	tokenURI, err := url.Parse(computeTokenURI)
+	if err != nil {
+		return nil, err
+	}
+	if len(cs.scopes) > 0 {
+		v := url.Values{}
+		v.Set("scopes", strings.Join(cs.scopes, ","))
+		tokenURI.RawQuery = v.Encode()
+	}
+	tokenJSON, err := cs.client.GetWithContext(ctx, tokenURI.String())
+	if err != nil {
+		return nil, fmt.Errorf("credentials: cannot fetch token: %w", err)
+	}
+	var res metadataTokenResp
+	if err := json.NewDecoder(strings.NewReader(tokenJSON)).Decode(&res); err != nil {
+		return nil, fmt.Errorf("credentials: invalid token JSON from metadata: %w", err)
+	}
+	if res.ExpiresInSec == 0 || res.AccessToken == "" {
+		return nil, errors.New("credentials: incomplete token received from metadata")
+	}
+	return &auth.Token{
+		Value:    res.AccessToken,
+		Type:     res.TokenType,
+		Expiry:   time.Now().Add(time.Duration(res.ExpiresInSec) * time.Second),
+		Metadata: computeTokenMetadata,
+	}, nil
+
+}

vendor/cloud.google.com/go/auth/credentials/detect.go 🔗

@@ -0,0 +1,279 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package credentials
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"log/slog"
+	"net/http"
+	"os"
+	"time"
+
+	"cloud.google.com/go/auth"
+	"cloud.google.com/go/auth/internal"
+	"cloud.google.com/go/auth/internal/credsfile"
+	"cloud.google.com/go/compute/metadata"
+	"github.com/googleapis/gax-go/v2/internallog"
+)
+
+const (
+	// jwtTokenURL is Google's OAuth 2.0 token URL to use with the JWT(2LO) flow.
+	jwtTokenURL = "https://oauth2.googleapis.com/token"
+
+	// Google's OAuth 2.0 default endpoints.
+	googleAuthURL  = "https://accounts.google.com/o/oauth2/auth"
+	googleTokenURL = "https://oauth2.googleapis.com/token"
+
+	// GoogleMTLSTokenURL is Google's default OAuth2.0 mTLS endpoint.
+	GoogleMTLSTokenURL = "https://oauth2.mtls.googleapis.com/token"
+
+	// Help on default credentials
+	adcSetupURL = "https://cloud.google.com/docs/authentication/external/set-up-adc"
+)
+
+var (
+	// for testing
+	allowOnGCECheck = true
+)
+
+// OnGCE reports whether this process is running in Google Cloud.
+func OnGCE() bool {
+	// TODO(codyoss): once all libs use this auth lib move metadata check here
+	return allowOnGCECheck && metadata.OnGCE()
+}
+
+// DetectDefault searches for "Application Default Credentials" and returns
+// a credential based on the [DetectOptions] provided.
+//
+// It looks for credentials in the following places, preferring the first
+// location found:
+//
+//   - A JSON file whose path is specified by the GOOGLE_APPLICATION_CREDENTIALS
+//     environment variable. For workload identity federation, refer to
+//     https://cloud.google.com/iam/docs/how-to#using-workload-identity-federation
+//     on how to generate the JSON configuration file for on-prem/non-Google
+//     cloud platforms.
+//   - A JSON file in a location known to the gcloud command-line tool. On
+//     Windows, this is %APPDATA%/gcloud/application_default_credentials.json. On
+//     other systems, $HOME/.config/gcloud/application_default_credentials.json.
+//   - On Google Compute Engine, Google App Engine standard second generation
+//     runtimes, and Google App Engine flexible environment, it fetches
+//     credentials from the metadata server.
+func DetectDefault(opts *DetectOptions) (*auth.Credentials, error) {
+	if err := opts.validate(); err != nil {
+		return nil, err
+	}
+	if len(opts.CredentialsJSON) > 0 {
+		return readCredentialsFileJSON(opts.CredentialsJSON, opts)
+	}
+	if opts.CredentialsFile != "" {
+		return readCredentialsFile(opts.CredentialsFile, opts)
+	}
+	if filename := os.Getenv(credsfile.GoogleAppCredsEnvVar); filename != "" {
+		creds, err := readCredentialsFile(filename, opts)
+		if err != nil {
+			return nil, err
+		}
+		return creds, nil
+	}
+
+	fileName := credsfile.GetWellKnownFileName()
+	if b, err := os.ReadFile(fileName); err == nil {
+		return readCredentialsFileJSON(b, opts)
+	}
+
+	if OnGCE() {
+		metadataClient := metadata.NewWithOptions(&metadata.Options{
+			Logger: opts.logger(),
+		})
+		return auth.NewCredentials(&auth.CredentialsOptions{
+			TokenProvider: computeTokenProvider(opts, metadataClient),
+			ProjectIDProvider: auth.CredentialsPropertyFunc(func(ctx context.Context) (string, error) {
+				return metadataClient.ProjectIDWithContext(ctx)
+			}),
+			UniverseDomainProvider: &internal.ComputeUniverseDomainProvider{
+				MetadataClient: metadataClient,
+			},
+		}), nil
+	}
+
+	return nil, fmt.Errorf("credentials: could not find default credentials. See %v for more information", adcSetupURL)
+}
+
+// DetectOptions provides configuration for [DetectDefault].
+type DetectOptions struct {
+	// Scopes that credentials tokens should have. Example:
+	// https://www.googleapis.com/auth/cloud-platform. Required if Audience is
+	// not provided.
+	Scopes []string
+	// Audience that credentials tokens should have. Only applicable for 2LO
+	// flows with service accounts. If specified, scopes should not be provided.
+	Audience string
+	// Subject is the user email used for [domain wide delegation](https://developers.google.com/identity/protocols/oauth2/service-account#delegatingauthority).
+	// Optional.
+	Subject string
+	// EarlyTokenRefresh configures how early before a token expires that it
+	// should be refreshed. Once the token’s time until expiration has entered
+	// this refresh window the token is considered valid but stale. If unset,
+	// the default value is 3 minutes and 45 seconds. Optional.
+	EarlyTokenRefresh time.Duration
+	// DisableAsyncRefresh configures a synchronous workflow that refreshes
+	// stale tokens while blocking. The default is false. Optional.
+	DisableAsyncRefresh bool
+	// AuthHandlerOptions configures an authorization handler and other options
+	// for 3LO flows. It is required, and only used, for client credential
+	// flows.
+	AuthHandlerOptions *auth.AuthorizationHandlerOptions
+	// TokenURL allows setting the token endpoint for user credential flows. If
+	// unset, the default value is https://oauth2.googleapis.com/token.
+	// Optional.
+	TokenURL string
+	// STSAudience is the audience sent when retrieving an STS token. Currently
+	// this is only used for the GDCH auth flow, for which it is required.
+	STSAudience string
+	// CredentialsFile overrides detection logic and sources a credential file
+	// from the provided filepath. If provided, CredentialsJSON must not be.
+	// Optional.
+	CredentialsFile string
+	// CredentialsJSON overrides detection logic and uses the JSON bytes as the
+	// source for the credential. If provided, CredentialsFile must not be.
+	// Optional.
+	CredentialsJSON []byte
+	// UseSelfSignedJWT directs service account based credentials to create a
+	// self-signed JWT with the private key found in the file, skipping any
+	// network requests that would normally be made. Optional.
+	UseSelfSignedJWT bool
+	// Client configures the underlying client used to make network requests
+	// when fetching tokens. Optional.
+	Client *http.Client
+	// UniverseDomain is the default service domain for a given Cloud universe.
+	// The default value is "googleapis.com". This option is ignored for
+	// authentication flows that do not support universe domain. Optional.
+	UniverseDomain string
+	// Logger is used for debug logging. If provided, logging will be enabled
+	// at the loggers configured level. By default logging is disabled unless
+	// enabled by setting GOOGLE_SDK_GO_LOGGING_LEVEL in which case a default
+	// logger will be used. Optional.
+	Logger *slog.Logger
+}
+
+func (o *DetectOptions) validate() error {
+	if o == nil {
+		return errors.New("credentials: options must be provided")
+	}
+	if len(o.Scopes) > 0 && o.Audience != "" {
+		return errors.New("credentials: both scopes and audience were provided")
+	}
+	if len(o.CredentialsJSON) > 0 && o.CredentialsFile != "" {
+		return errors.New("credentials: both credentials file and JSON were provided")
+	}
+	return nil
+}
+
+func (o *DetectOptions) tokenURL() string {
+	if o.TokenURL != "" {
+		return o.TokenURL
+	}
+	return googleTokenURL
+}
+
+func (o *DetectOptions) scopes() []string {
+	scopes := make([]string, len(o.Scopes))
+	copy(scopes, o.Scopes)
+	return scopes
+}
+
+func (o *DetectOptions) client() *http.Client {
+	if o.Client != nil {
+		return o.Client
+	}
+	return internal.DefaultClient()
+}
+
+func (o *DetectOptions) logger() *slog.Logger {
+	return internallog.New(o.Logger)
+}
+
+func readCredentialsFile(filename string, opts *DetectOptions) (*auth.Credentials, error) {
+	b, err := os.ReadFile(filename)
+	if err != nil {
+		return nil, err
+	}
+	return readCredentialsFileJSON(b, opts)
+}
+
+func readCredentialsFileJSON(b []byte, opts *DetectOptions) (*auth.Credentials, error) {
+	// Attempt to parse b as a Google Developers Console client_credentials.json.
+	config := clientCredConfigFromJSON(b, opts)
+	if config != nil {
+		if config.AuthHandlerOpts == nil {
+			return nil, errors.New("credentials: auth handler must be specified for this credential filetype")
+		}
+		tp, err := auth.New3LOTokenProvider(config)
+		if err != nil {
+			return nil, err
+		}
+		return auth.NewCredentials(&auth.CredentialsOptions{
+			TokenProvider: tp,
+			JSON:          b,
+		}), nil
+	}
+	return fileCredentials(b, opts)
+}
+
+func clientCredConfigFromJSON(b []byte, opts *DetectOptions) *auth.Options3LO {
+	var creds credsfile.ClientCredentialsFile
+	var c *credsfile.Config3LO
+	if err := json.Unmarshal(b, &creds); err != nil {
+		return nil
+	}
+	switch {
+	case creds.Web != nil:
+		c = creds.Web
+	case creds.Installed != nil:
+		c = creds.Installed
+	default:
+		return nil
+	}
+	if len(c.RedirectURIs) < 1 {
+		return nil
+	}
+	var handleOpts *auth.AuthorizationHandlerOptions
+	if opts.AuthHandlerOptions != nil {
+		handleOpts = &auth.AuthorizationHandlerOptions{
+			Handler:  opts.AuthHandlerOptions.Handler,
+			State:    opts.AuthHandlerOptions.State,
+			PKCEOpts: opts.AuthHandlerOptions.PKCEOpts,
+		}
+	}
+	return &auth.Options3LO{
+		ClientID:         c.ClientID,
+		ClientSecret:     c.ClientSecret,
+		RedirectURL:      c.RedirectURIs[0],
+		Scopes:           opts.scopes(),
+		AuthURL:          c.AuthURI,
+		TokenURL:         c.TokenURI,
+		Client:           opts.client(),
+		Logger:           opts.logger(),
+		EarlyTokenExpiry: opts.EarlyTokenRefresh,
+		AuthHandlerOpts:  handleOpts,
+		// TODO(codyoss): refactor this out. We need to add in auto-detection
+		// for this use case.
+		AuthStyle: auth.StyleInParams,
+	}
+}

vendor/cloud.google.com/go/auth/credentials/doc.go 🔗

@@ -0,0 +1,45 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Package credentials provides support for making OAuth2 authorized and
+// authenticated HTTP requests to Google APIs. It supports the Web server flow,
+// client-side credentials, service accounts, Google Compute Engine service
+// accounts, Google App Engine service accounts and workload identity federation
+// from non-Google cloud platforms.
+//
+// A brief overview of the package follows. For more information, please read
+// https://developers.google.com/accounts/docs/OAuth2
+// and
+// https://developers.google.com/accounts/docs/application-default-credentials.
+// For more information on using workload identity federation, refer to
+// https://cloud.google.com/iam/docs/how-to#using-workload-identity-federation.
+//
+// # Credentials
+//
+// The [cloud.google.com/go/auth.Credentials] type represents Google
+// credentials, including Application Default Credentials.
+//
+// Use [DetectDefault] to obtain Application Default Credentials.
+//
+// Application Default Credentials support workload identity federation to
+// access Google Cloud resources from non-Google Cloud platforms including Amazon
+// Web Services (AWS), Microsoft Azure or any identity provider that supports
+// OpenID Connect (OIDC). Workload identity federation is recommended for
+// non-Google Cloud environments as it avoids the need to download, manage, and
+// store service account private keys locally.
+//
+// # Workforce Identity Federation
+//
+// For more information on this feature see [cloud.google.com/go/auth/credentials/externalaccount].
+package credentials

vendor/cloud.google.com/go/auth/credentials/filetypes.go 🔗

@@ -0,0 +1,231 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package credentials
+
+import (
+	"errors"
+	"fmt"
+
+	"cloud.google.com/go/auth"
+	"cloud.google.com/go/auth/credentials/internal/externalaccount"
+	"cloud.google.com/go/auth/credentials/internal/externalaccountuser"
+	"cloud.google.com/go/auth/credentials/internal/gdch"
+	"cloud.google.com/go/auth/credentials/internal/impersonate"
+	internalauth "cloud.google.com/go/auth/internal"
+	"cloud.google.com/go/auth/internal/credsfile"
+)
+
+func fileCredentials(b []byte, opts *DetectOptions) (*auth.Credentials, error) {
+	fileType, err := credsfile.ParseFileType(b)
+	if err != nil {
+		return nil, err
+	}
+
+	var projectID, universeDomain string
+	var tp auth.TokenProvider
+	switch fileType {
+	case credsfile.ServiceAccountKey:
+		f, err := credsfile.ParseServiceAccount(b)
+		if err != nil {
+			return nil, err
+		}
+		tp, err = handleServiceAccount(f, opts)
+		if err != nil {
+			return nil, err
+		}
+		projectID = f.ProjectID
+		universeDomain = resolveUniverseDomain(opts.UniverseDomain, f.UniverseDomain)
+	case credsfile.UserCredentialsKey:
+		f, err := credsfile.ParseUserCredentials(b)
+		if err != nil {
+			return nil, err
+		}
+		tp, err = handleUserCredential(f, opts)
+		if err != nil {
+			return nil, err
+		}
+		universeDomain = f.UniverseDomain
+	case credsfile.ExternalAccountKey:
+		f, err := credsfile.ParseExternalAccount(b)
+		if err != nil {
+			return nil, err
+		}
+		tp, err = handleExternalAccount(f, opts)
+		if err != nil {
+			return nil, err
+		}
+		universeDomain = resolveUniverseDomain(opts.UniverseDomain, f.UniverseDomain)
+	case credsfile.ExternalAccountAuthorizedUserKey:
+		f, err := credsfile.ParseExternalAccountAuthorizedUser(b)
+		if err != nil {
+			return nil, err
+		}
+		tp, err = handleExternalAccountAuthorizedUser(f, opts)
+		if err != nil {
+			return nil, err
+		}
+		universeDomain = f.UniverseDomain
+	case credsfile.ImpersonatedServiceAccountKey:
+		f, err := credsfile.ParseImpersonatedServiceAccount(b)
+		if err != nil {
+			return nil, err
+		}
+		tp, err = handleImpersonatedServiceAccount(f, opts)
+		if err != nil {
+			return nil, err
+		}
+		universeDomain = resolveUniverseDomain(opts.UniverseDomain, f.UniverseDomain)
+	case credsfile.GDCHServiceAccountKey:
+		f, err := credsfile.ParseGDCHServiceAccount(b)
+		if err != nil {
+			return nil, err
+		}
+		tp, err = handleGDCHServiceAccount(f, opts)
+		if err != nil {
+			return nil, err
+		}
+		projectID = f.Project
+		universeDomain = f.UniverseDomain
+	default:
+		return nil, fmt.Errorf("credentials: unsupported filetype %q", fileType)
+	}
+	return auth.NewCredentials(&auth.CredentialsOptions{
+		TokenProvider: auth.NewCachedTokenProvider(tp, &auth.CachedTokenProviderOptions{
+			ExpireEarly: opts.EarlyTokenRefresh,
+		}),
+		JSON:              b,
+		ProjectIDProvider: internalauth.StaticCredentialsProperty(projectID),
+		// TODO(codyoss): only set quota project here if there was a user override
+		UniverseDomainProvider: internalauth.StaticCredentialsProperty(universeDomain),
+	}), nil
+}
+
+// resolveUniverseDomain returns optsUniverseDomain if non-empty, so that
+// universe-specific credentials can be configured in code. Auth flows that do
+// not support universe domains should not use this func; they should instead
+// simply set the file's universe domain on the credentials.
+func resolveUniverseDomain(optsUniverseDomain, fileUniverseDomain string) string {
+	if optsUniverseDomain != "" {
+		return optsUniverseDomain
+	}
+	return fileUniverseDomain
+}
+
+func handleServiceAccount(f *credsfile.ServiceAccountFile, opts *DetectOptions) (auth.TokenProvider, error) {
+	ud := resolveUniverseDomain(opts.UniverseDomain, f.UniverseDomain)
+	if opts.UseSelfSignedJWT {
+		return configureSelfSignedJWT(f, opts)
+	} else if ud != "" && ud != internalauth.DefaultUniverseDomain {
+		// For non-GDU universe domains, token exchange is impossible and services
+		// must support self-signed JWTs.
+		opts.UseSelfSignedJWT = true
+		return configureSelfSignedJWT(f, opts)
+	}
+	opts2LO := &auth.Options2LO{
+		Email:        f.ClientEmail,
+		PrivateKey:   []byte(f.PrivateKey),
+		PrivateKeyID: f.PrivateKeyID,
+		Scopes:       opts.scopes(),
+		TokenURL:     f.TokenURL,
+		Subject:      opts.Subject,
+		Client:       opts.client(),
+		Logger:       opts.logger(),
+	}
+	if opts2LO.TokenURL == "" {
+		opts2LO.TokenURL = jwtTokenURL
+	}
+	return auth.New2LOTokenProvider(opts2LO)
+}
+
+func handleUserCredential(f *credsfile.UserCredentialsFile, opts *DetectOptions) (auth.TokenProvider, error) {
+	opts3LO := &auth.Options3LO{
+		ClientID:         f.ClientID,
+		ClientSecret:     f.ClientSecret,
+		Scopes:           opts.scopes(),
+		AuthURL:          googleAuthURL,
+		TokenURL:         opts.tokenURL(),
+		AuthStyle:        auth.StyleInParams,
+		EarlyTokenExpiry: opts.EarlyTokenRefresh,
+		RefreshToken:     f.RefreshToken,
+		Client:           opts.client(),
+		Logger:           opts.logger(),
+	}
+	return auth.New3LOTokenProvider(opts3LO)
+}
+
+func handleExternalAccount(f *credsfile.ExternalAccountFile, opts *DetectOptions) (auth.TokenProvider, error) {
+	externalOpts := &externalaccount.Options{
+		Audience:                       f.Audience,
+		SubjectTokenType:               f.SubjectTokenType,
+		TokenURL:                       f.TokenURL,
+		TokenInfoURL:                   f.TokenInfoURL,
+		ServiceAccountImpersonationURL: f.ServiceAccountImpersonationURL,
+		ClientSecret:                   f.ClientSecret,
+		ClientID:                       f.ClientID,
+		CredentialSource:               f.CredentialSource,
+		QuotaProjectID:                 f.QuotaProjectID,
+		Scopes:                         opts.scopes(),
+		WorkforcePoolUserProject:       f.WorkforcePoolUserProject,
+		Client:                         opts.client(),
+		Logger:                         opts.logger(),
+		IsDefaultClient:                opts.Client == nil,
+	}
+	if f.ServiceAccountImpersonation != nil {
+		externalOpts.ServiceAccountImpersonationLifetimeSeconds = f.ServiceAccountImpersonation.TokenLifetimeSeconds
+	}
+	return externalaccount.NewTokenProvider(externalOpts)
+}
+
+func handleExternalAccountAuthorizedUser(f *credsfile.ExternalAccountAuthorizedUserFile, opts *DetectOptions) (auth.TokenProvider, error) {
+	externalOpts := &externalaccountuser.Options{
+		Audience:     f.Audience,
+		RefreshToken: f.RefreshToken,
+		TokenURL:     f.TokenURL,
+		TokenInfoURL: f.TokenInfoURL,
+		ClientID:     f.ClientID,
+		ClientSecret: f.ClientSecret,
+		Scopes:       opts.scopes(),
+		Client:       opts.client(),
+		Logger:       opts.logger(),
+	}
+	return externalaccountuser.NewTokenProvider(externalOpts)
+}
+
+func handleImpersonatedServiceAccount(f *credsfile.ImpersonatedServiceAccountFile, opts *DetectOptions) (auth.TokenProvider, error) {
+	if f.ServiceAccountImpersonationURL == "" || f.CredSource == nil {
+		return nil, errors.New("missing 'source_credentials' field or 'service_account_impersonation_url' in credentials")
+	}
+
+	tp, err := fileCredentials(f.CredSource, opts)
+	if err != nil {
+		return nil, err
+	}
+	return impersonate.NewTokenProvider(&impersonate.Options{
+		URL:       f.ServiceAccountImpersonationURL,
+		Scopes:    opts.scopes(),
+		Tp:        tp,
+		Delegates: f.Delegates,
+		Client:    opts.client(),
+		Logger:    opts.logger(),
+	})
+}
+
+func handleGDCHServiceAccount(f *credsfile.GDCHServiceAccountFile, opts *DetectOptions) (auth.TokenProvider, error) {
+	return gdch.NewTokenProvider(f, &gdch.Options{
+		STSAudience: opts.STSAudience,
+		Client:      opts.client(),
+		Logger:      opts.logger(),
+	})
+}

vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/aws_provider.go 🔗

@@ -0,0 +1,531 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package externalaccount
+
+import (
+	"bytes"
+	"context"
+	"crypto/hmac"
+	"crypto/sha256"
+	"encoding/hex"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"log/slog"
+	"net/http"
+	"net/url"
+	"os"
+	"path"
+	"sort"
+	"strings"
+	"time"
+
+	"cloud.google.com/go/auth/internal"
+	"github.com/googleapis/gax-go/v2/internallog"
+)
+
+var (
+	// getenv aliases os.Getenv for testing
+	getenv = os.Getenv
+)
+
+const (
+	// AWS Signature Version 4 signing algorithm identifier.
+	awsAlgorithm = "AWS4-HMAC-SHA256"
+
+	// The termination string for the AWS credential scope value as defined in
+	// https://docs.aws.amazon.com/general/latest/gr/sigv4-create-string-to-sign.html
+	awsRequestType = "aws4_request"
+
+	// The AWS authorization header name for the security session token if available.
+	awsSecurityTokenHeader = "x-amz-security-token"
+
+	// The name of the header containing the session token for metadata endpoint calls
+	awsIMDSv2SessionTokenHeader = "X-aws-ec2-metadata-token"
+
+	awsIMDSv2SessionTTLHeader = "X-aws-ec2-metadata-token-ttl-seconds"
+
+	awsIMDSv2SessionTTL = "300"
+
+	// The AWS authorization header name for the auto-generated date.
+	awsDateHeader = "x-amz-date"
+
+	defaultRegionalCredentialVerificationURL = "https://sts.{region}.amazonaws.com?Action=GetCallerIdentity&Version=2011-06-15"
+
+	// Supported AWS configuration environment variables.
+	awsAccessKeyIDEnvVar     = "AWS_ACCESS_KEY_ID"
+	awsDefaultRegionEnvVar   = "AWS_DEFAULT_REGION"
+	awsRegionEnvVar          = "AWS_REGION"
+	awsSecretAccessKeyEnvVar = "AWS_SECRET_ACCESS_KEY"
+	awsSessionTokenEnvVar    = "AWS_SESSION_TOKEN"
+
+	awsTimeFormatLong  = "20060102T150405Z"
+	awsTimeFormatShort = "20060102"
+	awsProviderType    = "aws"
+)
+
+type awsSubjectProvider struct {
+	EnvironmentID               string
+	RegionURL                   string
+	RegionalCredVerificationURL string
+	CredVerificationURL         string
+	IMDSv2SessionTokenURL       string
+	TargetResource              string
+	requestSigner               *awsRequestSigner
+	region                      string
+	securityCredentialsProvider AwsSecurityCredentialsProvider
+	reqOpts                     *RequestOptions
+
+	Client *http.Client
+	logger *slog.Logger
+}
+
+func (sp *awsSubjectProvider) subjectToken(ctx context.Context) (string, error) {
+	// Set Defaults
+	if sp.RegionalCredVerificationURL == "" {
+		sp.RegionalCredVerificationURL = defaultRegionalCredentialVerificationURL
+	}
+	headers := make(map[string]string)
+	if sp.shouldUseMetadataServer() {
+		awsSessionToken, err := sp.getAWSSessionToken(ctx)
+		if err != nil {
+			return "", err
+		}
+
+		if awsSessionToken != "" {
+			headers[awsIMDSv2SessionTokenHeader] = awsSessionToken
+		}
+	}
+
+	awsSecurityCredentials, err := sp.getSecurityCredentials(ctx, headers)
+	if err != nil {
+		return "", err
+	}
+	if sp.region, err = sp.getRegion(ctx, headers); err != nil {
+		return "", err
+	}
+	sp.requestSigner = &awsRequestSigner{
+		RegionName:             sp.region,
+		AwsSecurityCredentials: awsSecurityCredentials,
+	}
+
+	// Generate the signed request to AWS STS GetCallerIdentity API.
+	// Use the required regional endpoint. Otherwise, the request will fail.
+	req, err := http.NewRequestWithContext(ctx, "POST", strings.Replace(sp.RegionalCredVerificationURL, "{region}", sp.region, 1), nil)
+	if err != nil {
+		return "", err
+	}
+	// The full, canonical resource name of the workload identity pool
+	// provider, with or without the HTTPS prefix.
+	// Including this header as part of the signature is recommended to
+	// ensure data integrity.
+	if sp.TargetResource != "" {
+		req.Header.Set("x-goog-cloud-target-resource", sp.TargetResource)
+	}
+	sp.requestSigner.signRequest(req)
+
+	/*
+	   The GCP STS endpoint expects the headers to be formatted as:
+	   # [
+	   #   {key: 'x-amz-date', value: '...'},
+	   #   {key: 'Authorization', value: '...'},
+	   #   ...
+	   # ]
+	   # And then serialized as:
+	   # quote(json.dumps({
+	   #   url: '...',
+	   #   method: 'POST',
+	   #   headers: [{key: 'x-amz-date', value: '...'}, ...]
+	   # }))
+	*/
+
+	awsSignedReq := awsRequest{
+		URL:    req.URL.String(),
+		Method: "POST",
+	}
+	for headerKey, headerList := range req.Header {
+		for _, headerValue := range headerList {
+			awsSignedReq.Headers = append(awsSignedReq.Headers, awsRequestHeader{
+				Key:   headerKey,
+				Value: headerValue,
+			})
+		}
+	}
+	sort.Slice(awsSignedReq.Headers, func(i, j int) bool {
+		headerCompare := strings.Compare(awsSignedReq.Headers[i].Key, awsSignedReq.Headers[j].Key)
+		if headerCompare == 0 {
+			return strings.Compare(awsSignedReq.Headers[i].Value, awsSignedReq.Headers[j].Value) < 0
+		}
+		return headerCompare < 0
+	})
+
+	result, err := json.Marshal(awsSignedReq)
+	if err != nil {
+		return "", err
+	}
+	return url.QueryEscape(string(result)), nil
+}
+
+func (sp *awsSubjectProvider) providerType() string {
+	if sp.securityCredentialsProvider != nil {
+		return programmaticProviderType
+	}
+	return awsProviderType
+}
+
+func (sp *awsSubjectProvider) getAWSSessionToken(ctx context.Context) (string, error) {
+	if sp.IMDSv2SessionTokenURL == "" {
+		return "", nil
+	}
+	req, err := http.NewRequestWithContext(ctx, "PUT", sp.IMDSv2SessionTokenURL, nil)
+	if err != nil {
+		return "", err
+	}
+	req.Header.Set(awsIMDSv2SessionTTLHeader, awsIMDSv2SessionTTL)
+
+	sp.logger.DebugContext(ctx, "aws session token request", "request", internallog.HTTPRequest(req, nil))
+	resp, body, err := internal.DoRequest(sp.Client, req)
+	if err != nil {
+		return "", err
+	}
+	sp.logger.DebugContext(ctx, "aws session token response", "response", internallog.HTTPResponse(resp, body))
+	if resp.StatusCode != http.StatusOK {
+		return "", fmt.Errorf("credentials: unable to retrieve AWS session token: %s", body)
+	}
+	return string(body), nil
+}
+
+func (sp *awsSubjectProvider) getRegion(ctx context.Context, headers map[string]string) (string, error) {
+	if sp.securityCredentialsProvider != nil {
+		return sp.securityCredentialsProvider.AwsRegion(ctx, sp.reqOpts)
+	}
+	if canRetrieveRegionFromEnvironment() {
+		if envAwsRegion := getenv(awsRegionEnvVar); envAwsRegion != "" {
+			return envAwsRegion, nil
+		}
+		return getenv(awsDefaultRegionEnvVar), nil
+	}
+
+	if sp.RegionURL == "" {
+		return "", errors.New("credentials: unable to determine AWS region")
+	}
+
+	req, err := http.NewRequestWithContext(ctx, "GET", sp.RegionURL, nil)
+	if err != nil {
+		return "", err
+	}
+
+	for name, value := range headers {
+		req.Header.Add(name, value)
+	}
+	sp.logger.DebugContext(ctx, "aws region request", "request", internallog.HTTPRequest(req, nil))
+	resp, body, err := internal.DoRequest(sp.Client, req)
+	if err != nil {
+		return "", err
+	}
+	sp.logger.DebugContext(ctx, "aws region response", "response", internallog.HTTPResponse(resp, body))
+	if resp.StatusCode != http.StatusOK {
+		return "", fmt.Errorf("credentials: unable to retrieve AWS region - %s", body)
+	}
+
+	// The endpoint returns the availability zone (e.g. us-east-2b); strip the
+	// trailing zone letter to recover the region (us-east-2).
+	bodyLen := len(body)
+	if bodyLen == 0 {
+		return "", nil
+	}
+	return string(body[:bodyLen-1]), nil
+}
+
+func (sp *awsSubjectProvider) getSecurityCredentials(ctx context.Context, headers map[string]string) (result *AwsSecurityCredentials, err error) {
+	if sp.securityCredentialsProvider != nil {
+		return sp.securityCredentialsProvider.AwsSecurityCredentials(ctx, sp.reqOpts)
+	}
+	if canRetrieveSecurityCredentialFromEnvironment() {
+		return &AwsSecurityCredentials{
+			AccessKeyID:     getenv(awsAccessKeyIDEnvVar),
+			SecretAccessKey: getenv(awsSecretAccessKeyEnvVar),
+			SessionToken:    getenv(awsSessionTokenEnvVar),
+		}, nil
+	}
+
+	roleName, err := sp.getMetadataRoleName(ctx, headers)
+	if err != nil {
+		return
+	}
+	credentials, err := sp.getMetadataSecurityCredentials(ctx, roleName, headers)
+	if err != nil {
+		return
+	}
+
+	if credentials.AccessKeyID == "" {
+		return result, errors.New("credentials: missing AccessKeyId credential")
+	}
+	if credentials.SecretAccessKey == "" {
+		return result, errors.New("credentials: missing SecretAccessKey credential")
+	}
+
+	return credentials, nil
+}
+
+func (sp *awsSubjectProvider) getMetadataSecurityCredentials(ctx context.Context, roleName string, headers map[string]string) (*AwsSecurityCredentials, error) {
+	var result *AwsSecurityCredentials
+
+	req, err := http.NewRequestWithContext(ctx, "GET", fmt.Sprintf("%s/%s", sp.CredVerificationURL, roleName), nil)
+	if err != nil {
+		return result, err
+	}
+	for name, value := range headers {
+		req.Header.Add(name, value)
+	}
+	sp.logger.DebugContext(ctx, "aws security credential request", "request", internallog.HTTPRequest(req, nil))
+	resp, body, err := internal.DoRequest(sp.Client, req)
+	if err != nil {
+		return result, err
+	}
+	sp.logger.DebugContext(ctx, "aws security credential response", "response", internallog.HTTPResponse(resp, body))
+	if resp.StatusCode != http.StatusOK {
+		return result, fmt.Errorf("credentials: unable to retrieve AWS security credentials - %s", body)
+	}
+	if err := json.Unmarshal(body, &result); err != nil {
+		return nil, err
+	}
+	return result, nil
+}
+
+func (sp *awsSubjectProvider) getMetadataRoleName(ctx context.Context, headers map[string]string) (string, error) {
+	if sp.CredVerificationURL == "" {
+		return "", errors.New("credentials: unable to determine the AWS metadata server security credentials endpoint")
+	}
+	req, err := http.NewRequestWithContext(ctx, "GET", sp.CredVerificationURL, nil)
+	if err != nil {
+		return "", err
+	}
+	for name, value := range headers {
+		req.Header.Add(name, value)
+	}
+
+	sp.logger.DebugContext(ctx, "aws metadata role request", "request", internallog.HTTPRequest(req, nil))
+	resp, body, err := internal.DoRequest(sp.Client, req)
+	if err != nil {
+		return "", err
+	}
+	sp.logger.DebugContext(ctx, "aws metadata role response", "response", internallog.HTTPResponse(resp, body))
+	if resp.StatusCode != http.StatusOK {
+		return "", fmt.Errorf("credentials: unable to retrieve AWS role name - %s", body)
+	}
+	return string(body), nil
+}
+
+// awsRequestSigner is a utility type for signing HTTP requests with an AWS Signature Version 4 signature.
+type awsRequestSigner struct {
+	RegionName             string
+	AwsSecurityCredentials *AwsSecurityCredentials
+}
+
+// signRequest adds the appropriate headers to an http.Request
+// or returns an error if something prevented this.
+func (rs *awsRequestSigner) signRequest(req *http.Request) error {
+	// req is assumed non-nil
+	signedRequest := cloneRequest(req)
+	timestamp := Now()
+	signedRequest.Header.Set("host", requestHost(req))
+	if rs.AwsSecurityCredentials.SessionToken != "" {
+		signedRequest.Header.Set(awsSecurityTokenHeader, rs.AwsSecurityCredentials.SessionToken)
+	}
+	if signedRequest.Header.Get("date") == "" {
+		signedRequest.Header.Set(awsDateHeader, timestamp.Format(awsTimeFormatLong))
+	}
+	authorizationCode, err := rs.generateAuthentication(signedRequest, timestamp)
+	if err != nil {
+		return err
+	}
+	signedRequest.Header.Set("Authorization", authorizationCode)
+	req.Header = signedRequest.Header
+	return nil
+}
+
+func (rs *awsRequestSigner) generateAuthentication(req *http.Request, timestamp time.Time) (string, error) {
+	canonicalHeaderColumns, canonicalHeaderData := canonicalHeaders(req)
+	dateStamp := timestamp.Format(awsTimeFormatShort)
+	serviceName := ""
+
+	if splitHost := strings.Split(requestHost(req), "."); len(splitHost) > 0 {
+		serviceName = splitHost[0]
+	}
+	credentialScope := strings.Join([]string{dateStamp, rs.RegionName, serviceName, awsRequestType}, "/")
+	requestString, err := canonicalRequest(req, canonicalHeaderColumns, canonicalHeaderData)
+	if err != nil {
+		return "", err
+	}
+	requestHash, err := getSha256([]byte(requestString))
+	if err != nil {
+		return "", err
+	}
+
+	stringToSign := strings.Join([]string{awsAlgorithm, timestamp.Format(awsTimeFormatLong), credentialScope, requestHash}, "\n")
+	signingKey := []byte("AWS4" + rs.AwsSecurityCredentials.SecretAccessKey)
+	for _, signingInput := range []string{
+		dateStamp, rs.RegionName, serviceName, awsRequestType, stringToSign,
+	} {
+		signingKey, err = getHmacSha256(signingKey, []byte(signingInput))
+		if err != nil {
+			return "", err
+		}
+	}
+
+	return fmt.Sprintf("%s Credential=%s/%s, SignedHeaders=%s, Signature=%s", awsAlgorithm, rs.AwsSecurityCredentials.AccessKeyID, credentialScope, canonicalHeaderColumns, hex.EncodeToString(signingKey)), nil
+}
+
+func getSha256(input []byte) (string, error) {
+	hash := sha256.New()
+	if _, err := hash.Write(input); err != nil {
+		return "", err
+	}
+	return hex.EncodeToString(hash.Sum(nil)), nil
+}
+
+func getHmacSha256(key, input []byte) ([]byte, error) {
+	hash := hmac.New(sha256.New, key)
+	if _, err := hash.Write(input); err != nil {
+		return nil, err
+	}
+	return hash.Sum(nil), nil
+}
+
+func cloneRequest(r *http.Request) *http.Request {
+	r2 := new(http.Request)
+	*r2 = *r
+	if r.Header != nil {
+		r2.Header = make(http.Header, len(r.Header))
+
+		// Find total number of values.
+		headerCount := 0
+		for _, headerValues := range r.Header {
+			headerCount += len(headerValues)
+		}
+		copiedHeaders := make([]string, headerCount) // shared backing array for headers' values
+
+		for headerKey, headerValues := range r.Header {
+			headerCount = copy(copiedHeaders, headerValues)
+			r2.Header[headerKey] = copiedHeaders[:headerCount:headerCount]
+			copiedHeaders = copiedHeaders[headerCount:]
+		}
+	}
+	return r2
+}
+
+func canonicalPath(req *http.Request) string {
+	result := req.URL.EscapedPath()
+	if result == "" {
+		return "/"
+	}
+	return path.Clean(result)
+}
+
+func canonicalQuery(req *http.Request) string {
+	queryValues := req.URL.Query()
+	for queryKey := range queryValues {
+		sort.Strings(queryValues[queryKey])
+	}
+	return queryValues.Encode()
+}
+
+func canonicalHeaders(req *http.Request) (string, string) {
+	// Header keys need to be sorted alphabetically.
+	var headers []string
+	lowerCaseHeaders := make(http.Header)
+	for k, v := range req.Header {
+		k := strings.ToLower(k)
+		if _, ok := lowerCaseHeaders[k]; ok {
+			// include additional values
+			lowerCaseHeaders[k] = append(lowerCaseHeaders[k], v...)
+		} else {
+			headers = append(headers, k)
+			lowerCaseHeaders[k] = v
+		}
+	}
+	sort.Strings(headers)
+
+	var fullHeaders bytes.Buffer
+	for _, header := range headers {
+		headerValue := strings.Join(lowerCaseHeaders[header], ",")
+		fullHeaders.WriteString(header)
+		fullHeaders.WriteRune(':')
+		fullHeaders.WriteString(headerValue)
+		fullHeaders.WriteRune('\n')
+	}
+
+	return strings.Join(headers, ";"), fullHeaders.String()
+}
+
+func requestDataHash(req *http.Request) (string, error) {
+	var requestData []byte
+	if req.Body != nil {
+		requestBody, err := req.GetBody()
+		if err != nil {
+			return "", err
+		}
+		defer requestBody.Close()
+
+		requestData, err = internal.ReadAll(requestBody)
+		if err != nil {
+			return "", err
+		}
+	}
+
+	return getSha256(requestData)
+}
+
+func requestHost(req *http.Request) string {
+	if req.Host != "" {
+		return req.Host
+	}
+	return req.URL.Host
+}
+
+func canonicalRequest(req *http.Request, canonicalHeaderColumns, canonicalHeaderData string) (string, error) {
+	dataHash, err := requestDataHash(req)
+	if err != nil {
+		return "", err
+	}
+	return fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s", req.Method, canonicalPath(req), canonicalQuery(req), canonicalHeaderData, canonicalHeaderColumns, dataHash), nil
+}
+
+type awsRequestHeader struct {
+	Key   string `json:"key"`
+	Value string `json:"value"`
+}
+
+type awsRequest struct {
+	URL     string             `json:"url"`
+	Method  string             `json:"method"`
+	Headers []awsRequestHeader `json:"headers"`
+}
+
+// The AWS region can be provided through AWS_REGION or AWS_DEFAULT_REGION. Only one is
+// required.
+func canRetrieveRegionFromEnvironment() bool {
+	return getenv(awsRegionEnvVar) != "" || getenv(awsDefaultRegionEnvVar) != ""
+}
+
+// Check if both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are available.
+func canRetrieveSecurityCredentialFromEnvironment() bool {
+	return getenv(awsAccessKeyIDEnvVar) != "" && getenv(awsSecretAccessKeyEnvVar) != ""
+}
+
+func (sp *awsSubjectProvider) shouldUseMetadataServer() bool {
+	return sp.securityCredentialsProvider == nil && (!canRetrieveRegionFromEnvironment() || !canRetrieveSecurityCredentialFromEnvironment())
+}

vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/executable_provider.go 🔗

@@ -0,0 +1,284 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package externalaccount
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"os"
+	"os/exec"
+	"regexp"
+	"strings"
+	"time"
+
+	"cloud.google.com/go/auth/internal"
+)
+
+const (
+	executableSupportedMaxVersion = 1
+	executableDefaultTimeout      = 30 * time.Second
+	executableSource              = "response"
+	executableProviderType        = "executable"
+	outputFileSource              = "output file"
+
+	allowExecutablesEnvVar = "GOOGLE_EXTERNAL_ACCOUNT_ALLOW_EXECUTABLES"
+
+	jwtTokenType   = "urn:ietf:params:oauth:token-type:jwt"
+	idTokenType    = "urn:ietf:params:oauth:token-type:id_token"
+	saml2TokenType = "urn:ietf:params:oauth:token-type:saml2"
+)
+
+var (
+	serviceAccountImpersonationRE = regexp.MustCompile(`https://iamcredentials..+/v1/projects/-/serviceAccounts/(.*@.*):generateAccessToken`)
+)
+
+type nonCacheableError struct {
+	message string
+}
+
+func (nce nonCacheableError) Error() string {
+	return nce.message
+}
+
+// environment is a contract for testing
+type environment interface {
+	existingEnv() []string
+	getenv(string) string
+	run(ctx context.Context, command string, env []string) ([]byte, error)
+	now() time.Time
+}
+
+type runtimeEnvironment struct{}
+
+func (r runtimeEnvironment) existingEnv() []string {
+	return os.Environ()
+}
+func (r runtimeEnvironment) getenv(key string) string {
+	return os.Getenv(key)
+}
+func (r runtimeEnvironment) now() time.Time {
+	return time.Now().UTC()
+}
+
+func (r runtimeEnvironment) run(ctx context.Context, command string, env []string) ([]byte, error) {
+	splitCommand := strings.Fields(command)
+	cmd := exec.CommandContext(ctx, splitCommand[0], splitCommand[1:]...)
+	cmd.Env = env
+
+	var stdout, stderr bytes.Buffer
+	cmd.Stdout = &stdout
+	cmd.Stderr = &stderr
+
+	if err := cmd.Run(); err != nil {
+		if ctx.Err() == context.DeadlineExceeded {
+			return nil, context.DeadlineExceeded
+		}
+		if exitError, ok := err.(*exec.ExitError); ok {
+			return nil, exitCodeError(exitError)
+		}
+		return nil, executableError(err)
+	}
+
+	bytesStdout := bytes.TrimSpace(stdout.Bytes())
+	if len(bytesStdout) > 0 {
+		return bytesStdout, nil
+	}
+	return bytes.TrimSpace(stderr.Bytes()), nil
+}
+
+type executableSubjectProvider struct {
+	Command    string
+	Timeout    time.Duration
+	OutputFile string
+	client     *http.Client
+	opts       *Options
+	env        environment
+}
+
+type executableResponse struct {
+	Version        int    `json:"version,omitempty"`
+	Success        *bool  `json:"success,omitempty"`
+	TokenType      string `json:"token_type,omitempty"`
+	ExpirationTime int64  `json:"expiration_time,omitempty"`
+	IDToken        string `json:"id_token,omitempty"`
+	SamlResponse   string `json:"saml_response,omitempty"`
+	Code           string `json:"code,omitempty"`
+	Message        string `json:"message,omitempty"`
+}
+
+func (sp *executableSubjectProvider) parseSubjectTokenFromSource(response []byte, source string, now int64) (string, error) {
+	var result executableResponse
+	if err := json.Unmarshal(response, &result); err != nil {
+		return "", jsonParsingError(source, string(response))
+	}
+	// Validate
+	if result.Version == 0 {
+		return "", missingFieldError(source, "version")
+	}
+	if result.Success == nil {
+		return "", missingFieldError(source, "success")
+	}
+	if !*result.Success {
+		if result.Code == "" || result.Message == "" {
+			return "", malformedFailureError()
+		}
+		return "", userDefinedError(result.Code, result.Message)
+	}
+	if result.Version > executableSupportedMaxVersion || result.Version < 0 {
+		return "", unsupportedVersionError(source, result.Version)
+	}
+	if result.ExpirationTime == 0 && sp.OutputFile != "" {
+		return "", missingFieldError(source, "expiration_time")
+	}
+	if result.TokenType == "" {
+		return "", missingFieldError(source, "token_type")
+	}
+	if result.ExpirationTime != 0 && result.ExpirationTime < now {
+		return "", tokenExpiredError()
+	}
+
+	switch result.TokenType {
+	case jwtTokenType, idTokenType:
+		if result.IDToken == "" {
+			return "", missingFieldError(source, "id_token")
+		}
+		return result.IDToken, nil
+	case saml2TokenType:
+		if result.SamlResponse == "" {
+			return "", missingFieldError(source, "saml_response")
+		}
+		return result.SamlResponse, nil
+	default:
+		return "", tokenTypeError(source)
+	}
+}
+
+func (sp *executableSubjectProvider) subjectToken(ctx context.Context) (string, error) {
+	if token, err := sp.getTokenFromOutputFile(); token != "" || err != nil {
+		return token, err
+	}
+	return sp.getTokenFromExecutableCommand(ctx)
+}
+
+func (sp *executableSubjectProvider) providerType() string {
+	return executableProviderType
+}
+
+func (sp *executableSubjectProvider) getTokenFromOutputFile() (token string, err error) {
+	if sp.OutputFile == "" {
+		// This ExecutableCredentialSource doesn't use an OutputFile.
+		return "", nil
+	}
+
+	file, err := os.Open(sp.OutputFile)
+	if err != nil {
+		// No OutputFile found. Hasn't been created yet, so skip it.
+		return "", nil
+	}
+	defer file.Close()
+
+	data, err := internal.ReadAll(file)
+	if err != nil || len(data) == 0 {
+		// Cachefile exists, but no data found. Get new credential.
+		return "", nil
+	}
+
+	token, err = sp.parseSubjectTokenFromSource(data, outputFileSource, sp.env.now().Unix())
+	if err != nil {
+		if _, ok := err.(nonCacheableError); ok {
+			// If the cached token is expired we need a new token,
+			// and if the cache contains a failure, we need to try again.
+			return "", nil
+		}
+
+		// There was an error in the cached token, and the developer should be aware of it.
+		return "", err
+	}
+	// Token parsing succeeded; use the found token.
+	return token, nil
+}
+
+func (sp *executableSubjectProvider) executableEnvironment() []string {
+	result := sp.env.existingEnv()
+	result = append(result, fmt.Sprintf("GOOGLE_EXTERNAL_ACCOUNT_AUDIENCE=%v", sp.opts.Audience))
+	result = append(result, fmt.Sprintf("GOOGLE_EXTERNAL_ACCOUNT_TOKEN_TYPE=%v", sp.opts.SubjectTokenType))
+	result = append(result, "GOOGLE_EXTERNAL_ACCOUNT_INTERACTIVE=0")
+	if sp.opts.ServiceAccountImpersonationURL != "" {
+		matches := serviceAccountImpersonationRE.FindStringSubmatch(sp.opts.ServiceAccountImpersonationURL)
+		if matches != nil {
+			result = append(result, fmt.Sprintf("GOOGLE_EXTERNAL_ACCOUNT_IMPERSONATED_EMAIL=%v", matches[1]))
+		}
+	}
+	if sp.OutputFile != "" {
+		result = append(result, fmt.Sprintf("GOOGLE_EXTERNAL_ACCOUNT_OUTPUT_FILE=%v", sp.OutputFile))
+	}
+	return result
+}
+
+func (sp *executableSubjectProvider) getTokenFromExecutableCommand(ctx context.Context) (string, error) {
+	// For security reasons, we need our consumers to set this environment variable to allow executables to be run.
+	if sp.env.getenv(allowExecutablesEnvVar) != "1" {
+		return "", errors.New("credentials: executables need to be explicitly allowed (set GOOGLE_EXTERNAL_ACCOUNT_ALLOW_EXECUTABLES to '1') to run")
+	}
+
+	ctx, cancel := context.WithDeadline(ctx, sp.env.now().Add(sp.Timeout))
+	defer cancel()
+
+	output, err := sp.env.run(ctx, sp.Command, sp.executableEnvironment())
+	if err != nil {
+		return "", err
+	}
+	return sp.parseSubjectTokenFromSource(output, executableSource, sp.env.now().Unix())
+}
+
+func missingFieldError(source, field string) error {
+	return fmt.Errorf("credentials: %q missing %q field", source, field)
+}
+
+func jsonParsingError(source, data string) error {
+	return fmt.Errorf("credentials: unable to parse %q: %v", source, data)
+}
+
+func malformedFailureError() error {
+	return nonCacheableError{"credentials: response must include `code` and `message` fields when unsuccessful"}
+}
+
+func userDefinedError(code, message string) error {
+	return nonCacheableError{fmt.Sprintf("credentials: response contains unsuccessful response: (%v) %v", code, message)}
+}
+
+func unsupportedVersionError(source string, version int) error {
+	return fmt.Errorf("credentials: %v contains unsupported version: %v", source, version)
+}
+
+func tokenExpiredError() error {
+	return nonCacheableError{"credentials: the token returned by the executable is expired"}
+}
+
+func tokenTypeError(source string) error {
+	return fmt.Errorf("credentials: %v contains unsupported token type", source)
+}
+
+func exitCodeError(err *exec.ExitError) error {
+	return fmt.Errorf("credentials: executable command failed with exit code %v: %w", err.ExitCode(), err)
+}
+
+func executableError(err error) error {
+	return fmt.Errorf("credentials: executable command failed: %w", err)
+}

vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/externalaccount.go 🔗

@@ -0,0 +1,428 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package externalaccount
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"log/slog"
+	"net/http"
+	"regexp"
+	"strconv"
+	"strings"
+	"time"
+
+	"cloud.google.com/go/auth"
+	"cloud.google.com/go/auth/credentials/internal/impersonate"
+	"cloud.google.com/go/auth/credentials/internal/stsexchange"
+	"cloud.google.com/go/auth/internal/credsfile"
+	"github.com/googleapis/gax-go/v2/internallog"
+)
+
+const (
+	timeoutMinimum = 5 * time.Second
+	timeoutMaximum = 120 * time.Second
+
+	universeDomainPlaceholder = "UNIVERSE_DOMAIN"
+	defaultTokenURL           = "https://sts.UNIVERSE_DOMAIN/v1/token"
+	defaultUniverseDomain     = "googleapis.com"
+)
+
+var (
+	// Now aliases time.Now for testing
+	Now = func() time.Time {
+		return time.Now().UTC()
+	}
+	validWorkforceAudiencePattern *regexp.Regexp = regexp.MustCompile(`//iam\.googleapis\.com/locations/[^/]+/workforcePools/`)
+)
+
+// Options stores the configuration for fetching tokens with external credentials.
+type Options struct {
+	// Audience is the Secure Token Service (STS) audience which contains the resource name for the workload
+	// identity pool or the workforce pool and the provider identifier in that pool.
+	Audience string
+	// SubjectTokenType is the STS token type based on the OAuth 2.0 token exchange spec
+	// e.g. `urn:ietf:params:oauth:token-type:jwt`.
+	SubjectTokenType string
+	// TokenURL is the STS token exchange endpoint.
+	TokenURL string
+	// TokenInfoURL is the token_info endpoint used to retrieve account-related
+	// information (user attributes like account identifier, e.g. email, username,
+	// uid, etc.). This is needed for gCloud session account identification.
+	TokenInfoURL string
+	// ServiceAccountImpersonationURL is the URL for the service account impersonation request. This is only
+	// required for workload identity pools when APIs to be accessed have not integrated with UberMint.
+	ServiceAccountImpersonationURL string
+	// ServiceAccountImpersonationLifetimeSeconds is the number of seconds the service account impersonation
+	// token will be valid for.
+	ServiceAccountImpersonationLifetimeSeconds int
+	// ClientSecret is currently only required if token_info endpoint also
+	// needs to be called with the generated GCP access token. When provided, STS will be
+	// called with additional basic authentication using client_id as username and client_secret as password.
+	ClientSecret string
+	// ClientID is only required in conjunction with ClientSecret, as described above.
+	ClientID string
+	// CredentialSource contains the necessary information to retrieve the token itself, as well
+	// as some environmental information.
+	CredentialSource *credsfile.CredentialSource
+	// QuotaProjectID is injected by gCloud. If the value is non-empty, the Auth libraries
+	// will set the x-goog-user-project which overrides the project associated with the credentials.
+	QuotaProjectID string
+	// Scopes contains the desired scopes for the returned access token.
+	Scopes []string
+	// WorkforcePoolUserProject should be set when it is a workforce pool and
+	// not a workload identity pool. The underlying principal must still have
+	// serviceusage.services.use IAM permission to use the project for
+	// billing/quota. Optional.
+	WorkforcePoolUserProject string
+	// UniverseDomain is the default service domain for a given Cloud universe.
+	// This value will be used in the default STS token URL. The default value
+	// is "googleapis.com". It will not be used if TokenURL is set. Optional.
+	UniverseDomain string
+	// SubjectTokenProvider is an optional token provider for OIDC/SAML
+	// credentials. One of SubjectTokenProvider, AWSSecurityCredentialProvider
+	// or CredentialSource must be provided. Optional.
+	SubjectTokenProvider SubjectTokenProvider
+	// AwsSecurityCredentialsProvider is an AWS Security Credential provider
+	// for AWS credentials. One of SubjectTokenProvider,
+	// AWSSecurityCredentialProvider or CredentialSource must be provided. Optional.
+	AwsSecurityCredentialsProvider AwsSecurityCredentialsProvider
+	// Client for token request.
+	Client *http.Client
+	// IsDefaultClient marks whether the client passed in is a default client that can be overridden.
+	// This is important for X509 credentials which should create a new client if the default was used
+	// but should respect a client explicitly passed in by the user.
+	IsDefaultClient bool
+	// Logger is used for debug logging. If provided, logging will be enabled
+	// at the loggers configured level. By default logging is disabled unless
+	// enabled by setting GOOGLE_SDK_GO_LOGGING_LEVEL in which case a default
+	// logger will be used. Optional.
+	Logger *slog.Logger
+}
+
+// SubjectTokenProvider can be used to supply a subject token to exchange for a
+// GCP access token.
+type SubjectTokenProvider interface {
+	// SubjectToken should return a valid subject token or an error.
+	// The external account token provider does not cache the returned subject
+	// token, so caching logic should be implemented in the provider to prevent
+	// multiple requests for the same subject token.
+	SubjectToken(ctx context.Context, opts *RequestOptions) (string, error)
+}
+
+// RequestOptions contains information about the requested subject token or AWS
+// security credentials from the Google external account credential.
+type RequestOptions struct {
+	// Audience is the requested audience for the external account credential.
+	Audience string
+	// Subject token type is the requested subject token type for the external
+	// account credential. Expected values include:
+	// “urn:ietf:params:oauth:token-type:jwt”
+	// “urn:ietf:params:oauth:token-type:id-token”
+	// “urn:ietf:params:oauth:token-type:saml2”
+	// “urn:ietf:params:aws:token-type:aws4_request”
+	SubjectTokenType string
+}
+
+// AwsSecurityCredentialsProvider can be used to supply AwsSecurityCredentials
+// and an AWS Region to exchange for a GCP access token.
+type AwsSecurityCredentialsProvider interface {
+	// AwsRegion should return the AWS region or an error.
+	AwsRegion(ctx context.Context, opts *RequestOptions) (string, error)
+	// GetAwsSecurityCredentials should return a valid set of
+	// AwsSecurityCredentials or an error. The external account token provider
+	// does not cache the returned security credentials, so caching logic should
+	// be implemented in the provider to prevent multiple requests for the
+	// same security credentials.
+	AwsSecurityCredentials(ctx context.Context, opts *RequestOptions) (*AwsSecurityCredentials, error)
+}
+
+// AwsSecurityCredentials models AWS security credentials.
+type AwsSecurityCredentials struct {
+	// AccessKeyId is the AWS Access Key ID - Required.
+	AccessKeyID string `json:"AccessKeyID"`
+	// SecretAccessKey is the AWS Secret Access Key - Required.
+	SecretAccessKey string `json:"SecretAccessKey"`
+	// SessionToken is the AWS Session token. This should be provided for
+	// temporary AWS security credentials - Optional.
+	SessionToken string `json:"Token"`
+}
+
+func (o *Options) validate() error {
+	if o.Audience == "" {
+		return fmt.Errorf("externalaccount: Audience must be set")
+	}
+	if o.SubjectTokenType == "" {
+		return fmt.Errorf("externalaccount: Subject token type must be set")
+	}
+	if o.WorkforcePoolUserProject != "" {
+		if valid := validWorkforceAudiencePattern.MatchString(o.Audience); !valid {
+			return fmt.Errorf("externalaccount: workforce_pool_user_project should not be set for non-workforce pool credentials")
+		}
+	}
+	count := 0
+	if o.CredentialSource != nil {
+		count++
+	}
+	if o.SubjectTokenProvider != nil {
+		count++
+	}
+	if o.AwsSecurityCredentialsProvider != nil {
+		count++
+	}
+	if count == 0 {
+		return fmt.Errorf("externalaccount: one of CredentialSource, SubjectTokenProvider, or AwsSecurityCredentialsProvider must be set")
+	}
+	if count > 1 {
+		return fmt.Errorf("externalaccount: only one of CredentialSource, SubjectTokenProvider, or AwsSecurityCredentialsProvider must be set")
+	}
+	return nil
+}
+
+// client returns the http client that should be used for the token exchange. If a non-default client
+// is provided, then the client configured in the options will always be returned. If a default client
+// is provided and the options are configured for X509 credentials, a new client will be created.
+func (o *Options) client() (*http.Client, error) {
+	// If a client was provided and no override certificate config location was provided, use the provided client.
+	if o.CredentialSource == nil || o.CredentialSource.Certificate == nil || (!o.IsDefaultClient && o.CredentialSource.Certificate.CertificateConfigLocation == "") {
+		return o.Client, nil
+	}
+
+	// If a new client should be created, validate and use the certificate source to create a new mTLS client.
+	cert := o.CredentialSource.Certificate
+	if !cert.UseDefaultCertificateConfig && cert.CertificateConfigLocation == "" {
+		return nil, errors.New("credentials: \"certificate\" object must either specify a certificate_config_location or use_default_certificate_config should be true")
+	}
+	if cert.UseDefaultCertificateConfig && cert.CertificateConfigLocation != "" {
+		return nil, errors.New("credentials: \"certificate\" object cannot specify both a certificate_config_location and use_default_certificate_config=true")
+	}
+	return createX509Client(cert.CertificateConfigLocation)
+}
+
+// resolveTokenURL sets the default STS token endpoint with the configured
+// universe domain.
+func (o *Options) resolveTokenURL() {
+	if o.TokenURL != "" {
+		return
+	} else if o.UniverseDomain != "" {
+		o.TokenURL = strings.Replace(defaultTokenURL, universeDomainPlaceholder, o.UniverseDomain, 1)
+	} else {
+		o.TokenURL = strings.Replace(defaultTokenURL, universeDomainPlaceholder, defaultUniverseDomain, 1)
+	}
+}
+
+// NewTokenProvider returns a [cloud.google.com/go/auth.TokenProvider]
+// configured with the provided options.
+func NewTokenProvider(opts *Options) (auth.TokenProvider, error) {
+	if err := opts.validate(); err != nil {
+		return nil, err
+	}
+	opts.resolveTokenURL()
+	logger := internallog.New(opts.Logger)
+	stp, err := newSubjectTokenProvider(opts)
+	if err != nil {
+		return nil, err
+	}
+
+	client, err := opts.client()
+	if err != nil {
+		return nil, err
+	}
+
+	tp := &tokenProvider{
+		client: client,
+		opts:   opts,
+		stp:    stp,
+		logger: logger,
+	}
+
+	if opts.ServiceAccountImpersonationURL == "" {
+		return auth.NewCachedTokenProvider(tp, nil), nil
+	}
+
+	scopes := make([]string, len(opts.Scopes))
+	copy(scopes, opts.Scopes)
+	// needed for impersonation
+	tp.opts.Scopes = []string{"https://www.googleapis.com/auth/cloud-platform"}
+	imp, err := impersonate.NewTokenProvider(&impersonate.Options{
+		Client:               client,
+		URL:                  opts.ServiceAccountImpersonationURL,
+		Scopes:               scopes,
+		Tp:                   auth.NewCachedTokenProvider(tp, nil),
+		TokenLifetimeSeconds: opts.ServiceAccountImpersonationLifetimeSeconds,
+		Logger:               logger,
+	})
+	if err != nil {
+		return nil, err
+	}
+	return auth.NewCachedTokenProvider(imp, nil), nil
+}
+
+type subjectTokenProvider interface {
+	subjectToken(ctx context.Context) (string, error)
+	providerType() string
+}
+
+// tokenProvider is the provider that handles external credentials. It is used to retrieve Tokens.
+type tokenProvider struct {
+	client *http.Client
+	logger *slog.Logger
+	opts   *Options
+	stp    subjectTokenProvider
+}
+
+func (tp *tokenProvider) Token(ctx context.Context) (*auth.Token, error) {
+	subjectToken, err := tp.stp.subjectToken(ctx)
+	if err != nil {
+		return nil, err
+	}
+
+	stsRequest := &stsexchange.TokenRequest{
+		GrantType:          stsexchange.GrantType,
+		Audience:           tp.opts.Audience,
+		Scope:              tp.opts.Scopes,
+		RequestedTokenType: stsexchange.TokenType,
+		SubjectToken:       subjectToken,
+		SubjectTokenType:   tp.opts.SubjectTokenType,
+	}
+	header := make(http.Header)
+	header.Set("Content-Type", "application/x-www-form-urlencoded")
+	header.Add("x-goog-api-client", getGoogHeaderValue(tp.opts, tp.stp))
+	clientAuth := stsexchange.ClientAuthentication{
+		AuthStyle:    auth.StyleInHeader,
+		ClientID:     tp.opts.ClientID,
+		ClientSecret: tp.opts.ClientSecret,
+	}
+	var options map[string]interface{}
+	// Do not pass workforce_pool_user_project when client authentication is used.
+	// The client ID is sufficient for determining the user project.
+	if tp.opts.WorkforcePoolUserProject != "" && tp.opts.ClientID == "" {
+		options = map[string]interface{}{
+			"userProject": tp.opts.WorkforcePoolUserProject,
+		}
+	}
+	stsResp, err := stsexchange.ExchangeToken(ctx, &stsexchange.Options{
+		Client:         tp.client,
+		Endpoint:       tp.opts.TokenURL,
+		Request:        stsRequest,
+		Authentication: clientAuth,
+		Headers:        header,
+		ExtraOpts:      options,
+		Logger:         tp.logger,
+	})
+	if err != nil {
+		return nil, err
+	}
+
+	tok := &auth.Token{
+		Value: stsResp.AccessToken,
+		Type:  stsResp.TokenType,
+	}
+	// RFC 8693 does not define behavior for an explicit 0 in the "expires_in" field.
+	if stsResp.ExpiresIn <= 0 {
+		return nil, fmt.Errorf("credentials: got invalid expiry from security token service")
+	}
+	tok.Expiry = Now().Add(time.Duration(stsResp.ExpiresIn) * time.Second)
+	return tok, nil
+}
+
+// newSubjectTokenProvider determines the type of credsfile.CredentialSource needed to create a
+// subjectTokenProvider
+func newSubjectTokenProvider(o *Options) (subjectTokenProvider, error) {
+	logger := internallog.New(o.Logger)
+	reqOpts := &RequestOptions{Audience: o.Audience, SubjectTokenType: o.SubjectTokenType}
+	if o.AwsSecurityCredentialsProvider != nil {
+		return &awsSubjectProvider{
+			securityCredentialsProvider: o.AwsSecurityCredentialsProvider,
+			TargetResource:              o.Audience,
+			reqOpts:                     reqOpts,
+			logger:                      logger,
+		}, nil
+	} else if o.SubjectTokenProvider != nil {
+		return &programmaticProvider{stp: o.SubjectTokenProvider, opts: reqOpts}, nil
+	} else if len(o.CredentialSource.EnvironmentID) > 3 && o.CredentialSource.EnvironmentID[:3] == "aws" {
+		if awsVersion, err := strconv.Atoi(o.CredentialSource.EnvironmentID[3:]); err == nil {
+			if awsVersion != 1 {
+				return nil, fmt.Errorf("credentials: aws version '%d' is not supported in the current build", awsVersion)
+			}
+
+			awsProvider := &awsSubjectProvider{
+				EnvironmentID:               o.CredentialSource.EnvironmentID,
+				RegionURL:                   o.CredentialSource.RegionURL,
+				RegionalCredVerificationURL: o.CredentialSource.RegionalCredVerificationURL,
+				CredVerificationURL:         o.CredentialSource.URL,
+				TargetResource:              o.Audience,
+				Client:                      o.Client,
+				logger:                      logger,
+			}
+			if o.CredentialSource.IMDSv2SessionTokenURL != "" {
+				awsProvider.IMDSv2SessionTokenURL = o.CredentialSource.IMDSv2SessionTokenURL
+			}
+
+			return awsProvider, nil
+		}
+	} else if o.CredentialSource.File != "" {
+		return &fileSubjectProvider{File: o.CredentialSource.File, Format: o.CredentialSource.Format}, nil
+	} else if o.CredentialSource.URL != "" {
+		return &urlSubjectProvider{
+			URL:     o.CredentialSource.URL,
+			Headers: o.CredentialSource.Headers,
+			Format:  o.CredentialSource.Format,
+			Client:  o.Client,
+			Logger:  logger,
+		}, nil
+	} else if o.CredentialSource.Executable != nil {
+		ec := o.CredentialSource.Executable
+		if ec.Command == "" {
+			return nil, errors.New("credentials: missing `command` field — executable command must be provided")
+		}
+
+		execProvider := &executableSubjectProvider{}
+		execProvider.Command = ec.Command
+		if ec.TimeoutMillis == 0 {
+			execProvider.Timeout = executableDefaultTimeout
+		} else {
+			execProvider.Timeout = time.Duration(ec.TimeoutMillis) * time.Millisecond
+			if execProvider.Timeout < timeoutMinimum || execProvider.Timeout > timeoutMaximum {
+				return nil, fmt.Errorf("credentials: invalid `timeout_millis` field — executable timeout must be between %v and %v seconds", timeoutMinimum.Seconds(), timeoutMaximum.Seconds())
+			}
+		}
+		execProvider.OutputFile = ec.OutputFile
+		execProvider.client = o.Client
+		execProvider.opts = o
+		execProvider.env = runtimeEnvironment{}
+		return execProvider, nil
+	} else if o.CredentialSource.Certificate != nil {
+		cert := o.CredentialSource.Certificate
+		if !cert.UseDefaultCertificateConfig && cert.CertificateConfigLocation == "" {
+			return nil, errors.New("credentials: \"certificate\" object must either specify a certificate_config_location or use_default_certificate_config should be true")
+		}
+		if cert.UseDefaultCertificateConfig && cert.CertificateConfigLocation != "" {
+			return nil, errors.New("credentials: \"certificate\" object cannot specify both a certificate_config_location and use_default_certificate_config=true")
+		}
+		return &x509Provider{}, nil
+	}
+	return nil, errors.New("credentials: unable to parse credential source")
+}
+
+func getGoogHeaderValue(conf *Options, p subjectTokenProvider) string {
+	return fmt.Sprintf("gl-go/%s auth/%s google-byoid-sdk source/%s sa-impersonation/%t config-lifetime/%t",
+		goVersion(),
+		"unknown",
+		p.providerType(),
+		conf.ServiceAccountImpersonationURL != "",
+		conf.ServiceAccountImpersonationLifetimeSeconds != 0)
+}

vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/file_provider.go 🔗

@@ -0,0 +1,78 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package externalaccount
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"os"
+
+	"cloud.google.com/go/auth/internal"
+	"cloud.google.com/go/auth/internal/credsfile"
+)
+
+const (
+	fileProviderType = "file"
+)
+
+type fileSubjectProvider struct {
+	File   string
+	Format *credsfile.Format
+}
+
+func (sp *fileSubjectProvider) subjectToken(context.Context) (string, error) {
+	tokenFile, err := os.Open(sp.File)
+	if err != nil {
+		return "", fmt.Errorf("credentials: failed to open credential file %q: %w", sp.File, err)
+	}
+	defer tokenFile.Close()
+	tokenBytes, err := internal.ReadAll(tokenFile)
+	if err != nil {
+		return "", fmt.Errorf("credentials: failed to read credential file: %w", err)
+	}
+	tokenBytes = bytes.TrimSpace(tokenBytes)
+
+	if sp.Format == nil {
+		return string(tokenBytes), nil
+	}
+	switch sp.Format.Type {
+	case fileTypeJSON:
+		jsonData := make(map[string]interface{})
+		err = json.Unmarshal(tokenBytes, &jsonData)
+		if err != nil {
+			return "", fmt.Errorf("credentials: failed to unmarshal subject token file: %w", err)
+		}
+		val, ok := jsonData[sp.Format.SubjectTokenFieldName]
+		if !ok {
+			return "", errors.New("credentials: provided subject_token_field_name not found in credentials")
+		}
+		token, ok := val.(string)
+		if !ok {
+			return "", errors.New("credentials: improperly formatted subject token")
+		}
+		return token, nil
+	case fileTypeText:
+		return string(tokenBytes), nil
+	default:
+		return "", errors.New("credentials: invalid credential_source file format type: " + sp.Format.Type)
+	}
+}
+
+func (sp *fileSubjectProvider) providerType() string {
+	return fileProviderType
+}
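
The JSON branch of `fileSubjectProvider.subjectToken` above reduces to a small, self-contained helper. This sketch (standalone, since the vendored package is `internal`) shows the same unmarshal-and-extract behavior for a hypothetical `id_token` field name:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// extractSubjectToken mirrors the fileTypeJSON case above: unmarshal the
// file contents and pull the configured subject_token_field_name as a string.
func extractSubjectToken(data []byte, field string) (string, error) {
	jsonData := make(map[string]interface{})
	if err := json.Unmarshal(data, &jsonData); err != nil {
		return "", fmt.Errorf("failed to unmarshal subject token file: %w", err)
	}
	val, ok := jsonData[field]
	if !ok {
		return "", errors.New("provided subject_token_field_name not found in credentials")
	}
	token, ok := val.(string)
	if !ok {
		return "", errors.New("improperly formatted subject token")
	}
	return token, nil
}

func main() {
	tok, err := extractSubjectToken([]byte(`{"id_token":"abc123"}`), "id_token")
	fmt.Println(tok, err) // abc123 <nil>
}
```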

vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/info.go 🔗

@@ -0,0 +1,74 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package externalaccount
+
+import (
+	"runtime"
+	"strings"
+	"unicode"
+)
+
+var (
+	// version is a package internal global variable for testing purposes.
+	version = runtime.Version
+)
+
+// versionUnknown is only used when the runtime version cannot be determined.
+const versionUnknown = "UNKNOWN"
+
+// goVersion returns a Go runtime version derived from the runtime environment
+// that is modified to be suitable for reporting in a header, meaning it has no
+// whitespace. If it is unable to determine the Go runtime version, it returns
+// versionUnknown.
+func goVersion() string {
+	const develPrefix = "devel +"
+
+	s := version()
+	if strings.HasPrefix(s, develPrefix) {
+		s = s[len(develPrefix):]
+		if p := strings.IndexFunc(s, unicode.IsSpace); p >= 0 {
+			s = s[:p]
+		}
+		return s
+	} else if p := strings.IndexFunc(s, unicode.IsSpace); p >= 0 {
+		s = s[:p]
+	}
+
+	notSemverRune := func(r rune) bool {
+		return !strings.ContainsRune("0123456789.", r)
+	}
+
+	if strings.HasPrefix(s, "go1") {
+		s = s[2:]
+		var prerelease string
+		if p := strings.IndexFunc(s, notSemverRune); p >= 0 {
+			s, prerelease = s[:p], s[p:]
+		}
+		if strings.HasSuffix(s, ".") {
+			s += "0"
+		} else if strings.Count(s, ".") < 2 {
+			s += ".0"
+		}
+		if prerelease != "" {
+			// Some release candidates already have a dash in them.
+			if !strings.HasPrefix(prerelease, "-") {
+				prerelease = "-" + prerelease
+			}
+			s += prerelease
+		}
+		return s
+	}
+	return versionUnknown
+}
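
The semver normalization in `goVersion` above is easiest to see with concrete inputs. This standalone sketch re-states the release-version path (the `"devel +"` branch and whitespace trimming are omitted for brevity): strip the `go1` prefix, split off any prerelease suffix, pad to three components, and re-attach the prerelease with a leading dash.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeGoVersion mirrors the release path of goVersion above.
func normalizeGoVersion(s string) string {
	notSemverRune := func(r rune) bool {
		return !strings.ContainsRune("0123456789.", r)
	}
	if !strings.HasPrefix(s, "go1") {
		return "UNKNOWN"
	}
	s = s[2:] // drop "go", keeping the leading "1"
	var prerelease string
	if p := strings.IndexFunc(s, notSemverRune); p >= 0 {
		s, prerelease = s[:p], s[p:]
	}
	if strings.HasSuffix(s, ".") {
		s += "0"
	} else if strings.Count(s, ".") < 2 {
		s += ".0" // pad "1.22" to "1.22.0"
	}
	if prerelease != "" {
		if !strings.HasPrefix(prerelease, "-") {
			prerelease = "-" + prerelease
		}
		s += prerelease
	}
	return s
}

func main() {
	fmt.Println(normalizeGoVersion("go1.21.3"))  // 1.21.3
	fmt.Println(normalizeGoVersion("go1.22"))    // 1.22.0
	fmt.Println(normalizeGoVersion("go1.23rc1")) // 1.23.0-rc1
}
```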

vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/programmatic_provider.go 🔗

@@ -0,0 +1,30 @@
+// Copyright 2024 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package externalaccount
+
+import "context"
+
+type programmaticProvider struct {
+	opts *RequestOptions
+	stp  SubjectTokenProvider
+}
+
+func (pp *programmaticProvider) providerType() string {
+	return programmaticProviderType
+}
+
+func (pp *programmaticProvider) subjectToken(ctx context.Context) (string, error) {
+	return pp.stp.SubjectToken(ctx, pp.opts)
+}

vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/url_provider.go 🔗

@@ -0,0 +1,93 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package externalaccount
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"log/slog"
+	"net/http"
+
+	"cloud.google.com/go/auth/internal"
+	"cloud.google.com/go/auth/internal/credsfile"
+	"github.com/googleapis/gax-go/v2/internallog"
+)
+
+const (
+	fileTypeText             = "text"
+	fileTypeJSON             = "json"
+	urlProviderType          = "url"
+	programmaticProviderType = "programmatic"
+	x509ProviderType         = "x509"
+)
+
+type urlSubjectProvider struct {
+	URL     string
+	Headers map[string]string
+	Format  *credsfile.Format
+	Client  *http.Client
+	Logger  *slog.Logger
+}
+
+func (sp *urlSubjectProvider) subjectToken(ctx context.Context) (string, error) {
+	req, err := http.NewRequestWithContext(ctx, "GET", sp.URL, nil)
+	if err != nil {
+		return "", fmt.Errorf("credentials: HTTP request for URL-sourced credential failed: %w", err)
+	}
+
+	for key, val := range sp.Headers {
+		req.Header.Add(key, val)
+	}
+	sp.Logger.DebugContext(ctx, "url subject token request", "request", internallog.HTTPRequest(req, nil))
+	resp, body, err := internal.DoRequest(sp.Client, req)
+	if err != nil {
+		return "", fmt.Errorf("credentials: invalid response when retrieving subject token: %w", err)
+	}
+	sp.Logger.DebugContext(ctx, "url subject token response", "response", internallog.HTTPResponse(resp, body))
+	if c := resp.StatusCode; c < http.StatusOK || c >= http.StatusMultipleChoices {
+		return "", fmt.Errorf("credentials: status code %d: %s", c, body)
+	}
+
+	if sp.Format == nil {
+		return string(body), nil
+	}
+	switch sp.Format.Type {
+	case fileTypeJSON:
+		jsonData := make(map[string]interface{})
+		err = json.Unmarshal(body, &jsonData)
+		if err != nil {
+			return "", fmt.Errorf("credentials: failed to unmarshal subject token file: %w", err)
+		}
+		val, ok := jsonData[sp.Format.SubjectTokenFieldName]
+		if !ok {
+			return "", errors.New("credentials: provided subject_token_field_name not found in credentials")
+		}
+		token, ok := val.(string)
+		if !ok {
+			return "", errors.New("credentials: improperly formatted subject token")
+		}
+		return token, nil
+	case fileTypeText:
+		return string(body), nil
+	default:
+		return "", errors.New("credentials: invalid credential_source file format type: " + sp.Format.Type)
+	}
+}
+
+func (sp *urlSubjectProvider) providerType() string {
+	return urlProviderType
+}

vendor/cloud.google.com/go/auth/credentials/internal/externalaccount/x509_provider.go 🔗

@@ -0,0 +1,63 @@
+// Copyright 2024 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package externalaccount
+
+import (
+	"context"
+	"crypto/tls"
+	"net/http"
+	"time"
+
+	"cloud.google.com/go/auth/internal/transport/cert"
+)
+
+// x509Provider implements the subjectTokenProvider type for
+// x509 workload identity credentials. Because x509 credentials
+// rely on an mTLS connection to represent the 3rd party identity
+// rather than a subject token, this provider will always return
+// an empty string when a subject token is requested by the external account
+// token provider.
+type x509Provider struct {
+}
+
+func (xp *x509Provider) providerType() string {
+	return x509ProviderType
+}
+
+func (xp *x509Provider) subjectToken(ctx context.Context) (string, error) {
+	return "", nil
+}
+
+// createX509Client creates a new client that is configured with mTLS, using the
+// certificate configuration specified in the credential source.
+func createX509Client(certificateConfigLocation string) (*http.Client, error) {
+	certProvider, err := cert.NewWorkloadX509CertProvider(certificateConfigLocation)
+	if err != nil {
+		return nil, err
+	}
+	trans := http.DefaultTransport.(*http.Transport).Clone()
+
+	trans.TLSClientConfig = &tls.Config{
+		GetClientCertificate: certProvider,
+	}
+
+	// Create a client with default settings plus the X509 workload cert and key.
+	client := &http.Client{
+		Transport: trans,
+		Timeout:   30 * time.Second,
+	}
+
+	return client, nil
+}

vendor/cloud.google.com/go/auth/credentials/internal/externalaccountuser/externalaccountuser.go 🔗

@@ -0,0 +1,115 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package externalaccountuser
+
+import (
+	"context"
+	"errors"
+	"log/slog"
+	"net/http"
+	"time"
+
+	"cloud.google.com/go/auth"
+	"cloud.google.com/go/auth/credentials/internal/stsexchange"
+	"cloud.google.com/go/auth/internal"
+	"github.com/googleapis/gax-go/v2/internallog"
+)
+
+// Options stores the configuration for fetching tokens with external authorized
+// user credentials.
+type Options struct {
+	// Audience is the Secure Token Service (STS) audience which contains the
+	// resource name for the workforce pool and the provider identifier in that
+	// pool.
+	Audience string
+	// RefreshToken is the OAuth 2.0 refresh token.
+	RefreshToken string
+	// TokenURL is the STS token exchange endpoint for refresh.
+	TokenURL string
+	// TokenInfoURL is the STS endpoint URL for token introspection. Optional.
+	TokenInfoURL string
+	// ClientID is only required in conjunction with ClientSecret, as described
+	// below.
+	ClientID string
+	// ClientSecret is currently only required if token_info endpoint also needs
+	// to be called with the generated cloud access token. When provided, STS
+	// will be called with additional basic authentication using client_id as
+	// username and client_secret as password.
+	ClientSecret string
+	// Scopes contains the desired scopes for the returned access token.
+	Scopes []string
+
+	// Client for token request.
+	Client *http.Client
+	// Logger for logging.
+	Logger *slog.Logger
+}
+
+func (c *Options) validate() bool {
+	return c.ClientID != "" && c.ClientSecret != "" && c.RefreshToken != "" && c.TokenURL != ""
+}
+
+// NewTokenProvider returns a [cloud.google.com/go/auth.TokenProvider]
+// configured with the provided options.
+func NewTokenProvider(opts *Options) (auth.TokenProvider, error) {
+	if !opts.validate() {
+		return nil, errors.New("credentials: invalid external_account_authorized_user configuration")
+	}
+
+	tp := &tokenProvider{
+		o: opts,
+	}
+	return auth.NewCachedTokenProvider(tp, nil), nil
+}
+
+type tokenProvider struct {
+	o *Options
+}
+
+func (tp *tokenProvider) Token(ctx context.Context) (*auth.Token, error) {
+	opts := tp.o
+
+	clientAuth := stsexchange.ClientAuthentication{
+		AuthStyle:    auth.StyleInHeader,
+		ClientID:     opts.ClientID,
+		ClientSecret: opts.ClientSecret,
+	}
+	headers := make(http.Header)
+	headers.Set("Content-Type", "application/x-www-form-urlencoded")
+	stsResponse, err := stsexchange.RefreshAccessToken(ctx, &stsexchange.Options{
+		Client:         opts.Client,
+		Endpoint:       opts.TokenURL,
+		RefreshToken:   opts.RefreshToken,
+		Authentication: clientAuth,
+		Headers:        headers,
+		Logger:         internallog.New(tp.o.Logger),
+	})
+	if err != nil {
+		return nil, err
+	}
+	if stsResponse.ExpiresIn < 0 {
+		return nil, errors.New("credentials: invalid expiry from security token service")
+	}
+
+	// Guarded by the wrapping CachedTokenProvider.
+	if stsResponse.RefreshToken != "" {
+		opts.RefreshToken = stsResponse.RefreshToken
+	}
+	return &auth.Token{
+		Value:  stsResponse.AccessToken,
+		Expiry: time.Now().UTC().Add(time.Duration(stsResponse.ExpiresIn) * time.Second),
+		Type:   internal.TokenTypeBearer,
+	}, nil
+}

vendor/cloud.google.com/go/auth/credentials/internal/gdch/gdch.go 🔗

@@ -0,0 +1,191 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package gdch
+
+import (
+	"context"
+	"crypto"
+	"crypto/tls"
+	"crypto/x509"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"log/slog"
+	"net/http"
+	"net/url"
+	"os"
+	"strings"
+	"time"
+
+	"cloud.google.com/go/auth"
+	"cloud.google.com/go/auth/internal"
+	"cloud.google.com/go/auth/internal/credsfile"
+	"cloud.google.com/go/auth/internal/jwt"
+	"github.com/googleapis/gax-go/v2/internallog"
+)
+
+const (
+	// GrantType is the grant type for the token request.
+	GrantType        = "urn:ietf:params:oauth:token-type:token-exchange"
+	requestTokenType = "urn:ietf:params:oauth:token-type:access_token"
+	subjectTokenType = "urn:k8s:params:oauth:token-type:serviceaccount"
+)
+
+var (
+	gdchSupportFormatVersions map[string]bool = map[string]bool{
+		"1": true,
+	}
+)
+
+// Options for [NewTokenProvider].
+type Options struct {
+	STSAudience string
+	Client      *http.Client
+	Logger      *slog.Logger
+}
+
+// NewTokenProvider returns a [cloud.google.com/go/auth.TokenProvider] from a
+// GDCH cred file.
+func NewTokenProvider(f *credsfile.GDCHServiceAccountFile, o *Options) (auth.TokenProvider, error) {
+	if !gdchSupportFormatVersions[f.FormatVersion] {
+		return nil, fmt.Errorf("credentials: unsupported gdch_service_account format %q", f.FormatVersion)
+	}
+	if o.STSAudience == "" {
+		return nil, errors.New("credentials: STSAudience must be set for the GDCH auth flows")
+	}
+	signer, err := internal.ParseKey([]byte(f.PrivateKey))
+	if err != nil {
+		return nil, err
+	}
+	certPool, err := loadCertPool(f.CertPath)
+	if err != nil {
+		return nil, err
+	}
+
+	tp := gdchProvider{
+		serviceIdentity: fmt.Sprintf("system:serviceaccount:%s:%s", f.Project, f.Name),
+		tokenURL:        f.TokenURL,
+		aud:             o.STSAudience,
+		signer:          signer,
+		pkID:            f.PrivateKeyID,
+		certPool:        certPool,
+		client:          o.Client,
+		logger:          internallog.New(o.Logger),
+	}
+	return tp, nil
+}
+
+func loadCertPool(path string) (*x509.CertPool, error) {
+	pool := x509.NewCertPool()
+	pem, err := os.ReadFile(path)
+	if err != nil {
+		return nil, fmt.Errorf("credentials: failed to read certificate: %w", err)
+	}
+	pool.AppendCertsFromPEM(pem)
+	return pool, nil
+}
+
+type gdchProvider struct {
+	serviceIdentity string
+	tokenURL        string
+	aud             string
+	signer          crypto.Signer
+	pkID            string
+	certPool        *x509.CertPool
+
+	client *http.Client
+	logger *slog.Logger
+}
+
+func (g gdchProvider) Token(ctx context.Context) (*auth.Token, error) {
+	addCertToTransport(g.client, g.certPool)
+	iat := time.Now()
+	exp := iat.Add(time.Hour)
+	claims := jwt.Claims{
+		Iss: g.serviceIdentity,
+		Sub: g.serviceIdentity,
+		Aud: g.tokenURL,
+		Iat: iat.Unix(),
+		Exp: exp.Unix(),
+	}
+	h := jwt.Header{
+		Algorithm: jwt.HeaderAlgRSA256,
+		Type:      jwt.HeaderType,
+		KeyID:     string(g.pkID),
+	}
+	payload, err := jwt.EncodeJWS(&h, &claims, g.signer)
+	if err != nil {
+		return nil, err
+	}
+	v := url.Values{}
+	v.Set("grant_type", GrantType)
+	v.Set("audience", g.aud)
+	v.Set("requested_token_type", requestTokenType)
+	v.Set("subject_token", payload)
+	v.Set("subject_token_type", subjectTokenType)
+
+	req, err := http.NewRequestWithContext(ctx, "POST", g.tokenURL, strings.NewReader(v.Encode()))
+	if err != nil {
+		return nil, err
+	}
+	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
+	g.logger.DebugContext(ctx, "gdch token request", "request", internallog.HTTPRequest(req, []byte(v.Encode())))
+	resp, body, err := internal.DoRequest(g.client, req)
+	if err != nil {
+		return nil, fmt.Errorf("credentials: cannot fetch token: %w", err)
+	}
+	g.logger.DebugContext(ctx, "gdch token response", "response", internallog.HTTPResponse(resp, body))
+	if c := resp.StatusCode; c < http.StatusOK || c > http.StatusMultipleChoices {
+		return nil, &auth.Error{
+			Response: resp,
+			Body:     body,
+		}
+	}
+
+	var tokenRes struct {
+		AccessToken string `json:"access_token"`
+		TokenType   string `json:"token_type"`
+		ExpiresIn   int64  `json:"expires_in"` // relative seconds from now
+	}
+	if err := json.Unmarshal(body, &tokenRes); err != nil {
+		return nil, fmt.Errorf("credentials: cannot fetch token: %w", err)
+	}
+	token := &auth.Token{
+		Value: tokenRes.AccessToken,
+		Type:  tokenRes.TokenType,
+	}
+	raw := make(map[string]interface{})
+	json.Unmarshal(body, &raw) // no error checks for optional fields
+	token.Metadata = raw
+
+	if secs := tokenRes.ExpiresIn; secs > 0 {
+		token.Expiry = time.Now().Add(time.Duration(secs) * time.Second)
+	}
+	return token, nil
+}
+
+// addCertToTransport makes a best-effort attempt at adding the cert info to
+// the client. It keeps all configured transport settings if the underlying
+// transport is an http.Transport; otherwise it replaces the transport with
+// defaults that add in the certs.
+func addCertToTransport(hc *http.Client, certPool *x509.CertPool) {
+	trans, ok := hc.Transport.(*http.Transport)
+	if !ok {
+		trans = http.DefaultTransport.(*http.Transport).Clone()
+	}
+	trans.TLSClientConfig = &tls.Config{
+		RootCAs: certPool,
+	}
+}

vendor/cloud.google.com/go/auth/credentials/internal/impersonate/impersonate.go 🔗

@@ -0,0 +1,156 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package impersonate
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"log/slog"
+	"net/http"
+	"time"
+
+	"cloud.google.com/go/auth"
+	"cloud.google.com/go/auth/internal"
+	"github.com/googleapis/gax-go/v2/internallog"
+)
+
+const (
+	defaultTokenLifetime = "3600s"
+	authHeaderKey        = "Authorization"
+)
+
+// generateAccessTokenReq is used for service account impersonation.
+type generateAccessTokenReq struct {
+	Delegates []string `json:"delegates,omitempty"`
+	Lifetime  string   `json:"lifetime,omitempty"`
+	Scope     []string `json:"scope,omitempty"`
+}
+
+type impersonateTokenResponse struct {
+	AccessToken string `json:"accessToken"`
+	ExpireTime  string `json:"expireTime"`
+}
+
+// NewTokenProvider uses a source credential, stored in Tp, to request an access token from the provided URL.
+// Scopes can be defined when the access token is requested.
+func NewTokenProvider(opts *Options) (auth.TokenProvider, error) {
+	if err := opts.validate(); err != nil {
+		return nil, err
+	}
+	return opts, nil
+}
+
+// Options for [NewTokenProvider].
+type Options struct {
+	// Tp is the source credential used to generate a token on the
+	// impersonated service account. Required.
+	Tp auth.TokenProvider
+
+	// URL is the endpoint to call to generate a token
+	// on behalf of the service account. Required.
+	URL string
+	// Scopes that the impersonated credential should have. Required.
+	Scopes []string
+	// Delegates are the service account email addresses in a delegation chain.
+	// Each service account must be granted roles/iam.serviceAccountTokenCreator
+	// on the next service account in the chain. Optional.
+	Delegates []string
+	// TokenLifetimeSeconds is the number of seconds the impersonation token will
+	// be valid for. Defaults to 1 hour if unset. Optional.
+	TokenLifetimeSeconds int
+	// Client configures the underlying client used to make network requests
+	// when fetching tokens. Required.
+	Client *http.Client
+	// Logger is used for debug logging. If provided, logging will be enabled
+	// at the loggers configured level. By default logging is disabled unless
+	// enabled by setting GOOGLE_SDK_GO_LOGGING_LEVEL in which case a default
+	// logger will be used. Optional.
+	Logger *slog.Logger
+}
+
+func (o *Options) validate() error {
+	if o.Tp == nil {
+		return errors.New("credentials: missing required 'source_credentials' field in impersonated credentials")
+	}
+	if o.URL == "" {
+		return errors.New("credentials: missing required 'service_account_impersonation_url' field in impersonated credentials")
+	}
+	return nil
+}
+
+// Token performs the exchange to get a temporary service account token to allow access to GCP.
+func (o *Options) Token(ctx context.Context) (*auth.Token, error) {
+	logger := internallog.New(o.Logger)
+	lifetime := defaultTokenLifetime
+	if o.TokenLifetimeSeconds != 0 {
+		lifetime = fmt.Sprintf("%ds", o.TokenLifetimeSeconds)
+	}
+	reqBody := generateAccessTokenReq{
+		Lifetime:  lifetime,
+		Scope:     o.Scopes,
+		Delegates: o.Delegates,
+	}
+	b, err := json.Marshal(reqBody)
+	if err != nil {
+		return nil, fmt.Errorf("credentials: unable to marshal request: %w", err)
+	}
+	req, err := http.NewRequestWithContext(ctx, "POST", o.URL, bytes.NewReader(b))
+	if err != nil {
+		return nil, fmt.Errorf("credentials: unable to create impersonation request: %w", err)
+	}
+	req.Header.Set("Content-Type", "application/json")
+	if err := setAuthHeader(ctx, o.Tp, req); err != nil {
+		return nil, err
+	}
+	logger.DebugContext(ctx, "impersonated token request", "request", internallog.HTTPRequest(req, b))
+	resp, body, err := internal.DoRequest(o.Client, req)
+	if err != nil {
+		return nil, fmt.Errorf("credentials: unable to generate access token: %w", err)
+	}
+	logger.DebugContext(ctx, "impersonated token response", "response", internallog.HTTPResponse(resp, body))
+	if c := resp.StatusCode; c < http.StatusOK || c >= http.StatusMultipleChoices {
+		return nil, fmt.Errorf("credentials: status code %d: %s", c, body)
+	}
+
+	var accessTokenResp impersonateTokenResponse
+	if err := json.Unmarshal(body, &accessTokenResp); err != nil {
+		return nil, fmt.Errorf("credentials: unable to parse response: %w", err)
+	}
+	expiry, err := time.Parse(time.RFC3339, accessTokenResp.ExpireTime)
+	if err != nil {
+		return nil, fmt.Errorf("credentials: unable to parse expiry: %w", err)
+	}
+	return &auth.Token{
+		Value:  accessTokenResp.AccessToken,
+		Expiry: expiry,
+		Type:   internal.TokenTypeBearer,
+	}, nil
+}
+
+func setAuthHeader(ctx context.Context, tp auth.TokenProvider, r *http.Request) error {
+	t, err := tp.Token(ctx)
+	if err != nil {
+		return err
+	}
+	typ := t.Type
+	if typ == "" {
+		typ = internal.TokenTypeBearer
+	}
+	r.Header.Set(authHeaderKey, typ+" "+t.Value)
+	return nil
+}

vendor/cloud.google.com/go/auth/credentials/internal/stsexchange/sts_exchange.go 🔗

@@ -0,0 +1,167 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package stsexchange
+
+import (
+	"context"
+	"encoding/base64"
+	"encoding/json"
+	"fmt"
+	"log/slog"
+	"net/http"
+	"net/url"
+	"strconv"
+	"strings"
+
+	"cloud.google.com/go/auth"
+	"cloud.google.com/go/auth/internal"
+	"github.com/googleapis/gax-go/v2/internallog"
+)
+
+const (
+	// GrantType for an STS exchange.
+	GrantType = "urn:ietf:params:oauth:grant-type:token-exchange"
+	// TokenType for an STS exchange.
+	TokenType = "urn:ietf:params:oauth:token-type:access_token"
+
+	jwtTokenType = "urn:ietf:params:oauth:token-type:jwt"
+)
+
+// Options stores the configuration for making an STS exchange request.
+type Options struct {
+	Client         *http.Client
+	Logger         *slog.Logger
+	Endpoint       string
+	Request        *TokenRequest
+	Authentication ClientAuthentication
+	Headers        http.Header
+	// ExtraOpts are optional fields marshalled into the `options` field of the
+	// request body.
+	ExtraOpts    map[string]interface{}
+	RefreshToken string
+}
+
+// RefreshAccessToken performs the token exchange using a refresh token flow.
+func RefreshAccessToken(ctx context.Context, opts *Options) (*TokenResponse, error) {
+	data := url.Values{}
+	data.Set("grant_type", "refresh_token")
+	data.Set("refresh_token", opts.RefreshToken)
+	return doRequest(ctx, opts, data)
+}
+
+// ExchangeToken performs an oauth2 token exchange with the provided endpoint.
+func ExchangeToken(ctx context.Context, opts *Options) (*TokenResponse, error) {
+	data := url.Values{}
+	data.Set("audience", opts.Request.Audience)
+	data.Set("grant_type", GrantType)
+	data.Set("requested_token_type", TokenType)
+	data.Set("subject_token_type", opts.Request.SubjectTokenType)
+	data.Set("subject_token", opts.Request.SubjectToken)
+	data.Set("scope", strings.Join(opts.Request.Scope, " "))
+	if opts.ExtraOpts != nil {
+		opts, err := json.Marshal(opts.ExtraOpts)
+		if err != nil {
+			return nil, fmt.Errorf("credentials: failed to marshal additional options: %w", err)
+		}
+		data.Set("options", string(opts))
+	}
+	return doRequest(ctx, opts, data)
+}
+
+func doRequest(ctx context.Context, opts *Options, data url.Values) (*TokenResponse, error) {
+	opts.Authentication.InjectAuthentication(data, opts.Headers)
+	encodedData := data.Encode()
+	logger := internallog.New(opts.Logger)
+
+	req, err := http.NewRequestWithContext(ctx, "POST", opts.Endpoint, strings.NewReader(encodedData))
+	if err != nil {
+		return nil, fmt.Errorf("credentials: failed to properly build http request: %w", err)
+
+	}
+	for key, list := range opts.Headers {
+		for _, val := range list {
+			req.Header.Add(key, val)
+		}
+	}
+	req.Header.Set("Content-Length", strconv.Itoa(len(encodedData)))
+
+	logger.DebugContext(ctx, "sts token request", "request", internallog.HTTPRequest(req, []byte(encodedData)))
+	resp, body, err := internal.DoRequest(opts.Client, req)
+	if err != nil {
+		return nil, fmt.Errorf("credentials: invalid response from Secure Token Server: %w", err)
+	}
+	logger.DebugContext(ctx, "sts token response", "response", internallog.HTTPResponse(resp, body))
+	if c := resp.StatusCode; c < http.StatusOK || c > http.StatusMultipleChoices {
+		return nil, fmt.Errorf("credentials: status code %d: %s", c, body)
+	}
+	var stsResp TokenResponse
+	if err := json.Unmarshal(body, &stsResp); err != nil {
+		return nil, fmt.Errorf("credentials: failed to unmarshal response body from Secure Token Server: %w", err)
+	}
+
+	return &stsResp, nil
+}
+
+// TokenRequest contains fields necessary to make an oauth2 token
+// exchange.
+type TokenRequest struct {
+	ActingParty struct {
+		ActorToken     string
+		ActorTokenType string
+	}
+	GrantType          string
+	Resource           string
+	Audience           string
+	Scope              []string
+	RequestedTokenType string
+	SubjectToken       string
+	SubjectTokenType   string
+}
+
+// TokenResponse is used to decode the remote server response during
+// an oauth2 token exchange.
+type TokenResponse struct {
+	AccessToken     string `json:"access_token"`
+	IssuedTokenType string `json:"issued_token_type"`
+	TokenType       string `json:"token_type"`
+	ExpiresIn       int    `json:"expires_in"`
+	Scope           string `json:"scope"`
+	RefreshToken    string `json:"refresh_token"`
+}
+
+// ClientAuthentication represents an OAuth client ID and secret and the
+// mechanism for passing these credentials as stated in rfc6749#2.3.1.
+type ClientAuthentication struct {
+	AuthStyle    auth.Style
+	ClientID     string
+	ClientSecret string
+}
+
+// InjectAuthentication is used to add authentication to a Secure Token Service
+// exchange request.  It modifies either the passed url.Values or http.Header
+// depending on the desired authentication format.
+func (c *ClientAuthentication) InjectAuthentication(values url.Values, headers http.Header) {
+	if c.ClientID == "" || c.ClientSecret == "" || values == nil || headers == nil {
+		return
+	}
+	switch c.AuthStyle {
+	case auth.StyleInHeader:
+		plainHeader := c.ClientID + ":" + c.ClientSecret
+		headers.Set("Authorization", "Basic "+base64.StdEncoding.EncodeToString([]byte(plainHeader)))
+	default:
+		values.Set("client_id", c.ClientID)
+		values.Set("client_secret", c.ClientSecret)
+	}
+}

vendor/cloud.google.com/go/auth/credentials/selfsignedjwt.go 🔗

@@ -0,0 +1,89 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package credentials
+
+import (
+	"context"
+	"crypto"
+	"errors"
+	"fmt"
+	"log/slog"
+	"strings"
+	"time"
+
+	"cloud.google.com/go/auth"
+	"cloud.google.com/go/auth/internal"
+	"cloud.google.com/go/auth/internal/credsfile"
+	"cloud.google.com/go/auth/internal/jwt"
+)
+
+var (
+	// for testing
+	now func() time.Time = time.Now
+)
+
+// configureSelfSignedJWT uses the private key in the service account to create
+// a JWT without making a network call.
+func configureSelfSignedJWT(f *credsfile.ServiceAccountFile, opts *DetectOptions) (auth.TokenProvider, error) {
+	if len(opts.scopes()) == 0 && opts.Audience == "" {
+		return nil, errors.New("credentials: both scopes and audience are empty")
+	}
+	signer, err := internal.ParseKey([]byte(f.PrivateKey))
+	if err != nil {
+		return nil, fmt.Errorf("credentials: could not parse key: %w", err)
+	}
+	return &selfSignedTokenProvider{
+		email:    f.ClientEmail,
+		audience: opts.Audience,
+		scopes:   opts.scopes(),
+		signer:   signer,
+		pkID:     f.PrivateKeyID,
+		logger:   opts.logger(),
+	}, nil
+}
+
+type selfSignedTokenProvider struct {
+	email    string
+	audience string
+	scopes   []string
+	signer   crypto.Signer
+	pkID     string
+	logger   *slog.Logger
+}
+
+func (tp *selfSignedTokenProvider) Token(context.Context) (*auth.Token, error) {
+	iat := now()
+	exp := iat.Add(time.Hour)
+	scope := strings.Join(tp.scopes, " ")
+	c := &jwt.Claims{
+		Iss:   tp.email,
+		Sub:   tp.email,
+		Aud:   tp.audience,
+		Scope: scope,
+		Iat:   iat.Unix(),
+		Exp:   exp.Unix(),
+	}
+	h := &jwt.Header{
+		Algorithm: jwt.HeaderAlgRSA256,
+		Type:      jwt.HeaderType,
+		KeyID:     string(tp.pkID),
+	}
+	tok, err := jwt.EncodeJWS(h, c, tp.signer)
+	if err != nil {
+		return nil, fmt.Errorf("credentials: could not encode JWT: %w", err)
+	}
+	tp.logger.Debug("created self-signed JWT", "token", tok)
+	return &auth.Token{Value: tok, Type: internal.TokenTypeBearer, Expiry: exp}, nil
+}

vendor/cloud.google.com/go/auth/httptransport/httptransport.go 🔗

@@ -0,0 +1,247 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Package httptransport provides functionality for managing HTTP client
+// connections to Google Cloud services.
+package httptransport
+
+import (
+	"crypto/tls"
+	"errors"
+	"fmt"
+	"log/slog"
+	"net/http"
+
+	"cloud.google.com/go/auth"
+	detect "cloud.google.com/go/auth/credentials"
+	"cloud.google.com/go/auth/internal"
+	"cloud.google.com/go/auth/internal/transport"
+	"github.com/googleapis/gax-go/v2/internallog"
+)
+
+// ClientCertProvider is a function that returns a TLS client certificate to be
+// used when opening TLS connections. It follows the same semantics as
+// [crypto/tls.Config.GetClientCertificate].
+type ClientCertProvider = func(*tls.CertificateRequestInfo) (*tls.Certificate, error)
+
+// Options used to configure a [net/http.Client] from [NewClient].
+type Options struct {
+	// DisableTelemetry disables default telemetry (OpenTelemetry). An example
+	// reason to do so would be to bind custom telemetry that overrides the
+	// defaults.
+	DisableTelemetry bool
+	// DisableAuthentication specifies that no authentication should be used. It
+	// is suitable only for testing and for accessing public resources, like
+	// public Google Cloud Storage buckets.
+	DisableAuthentication bool
+	// Headers are extra HTTP headers that will be appended to every outgoing
+	// request.
+	Headers http.Header
+	// BaseRoundTripper overrides the base transport used for serving requests.
+	// If specified, ClientCertProvider is ignored.
+	BaseRoundTripper http.RoundTripper
+	// Endpoint overrides the default endpoint to be used for a service.
+	Endpoint string
+	// APIKey specifies an API key to be used as the basis for authentication.
+	// If set, DetectOpts are ignored.
+	APIKey string
+	// Credentials used to add an Authorization header to all requests. If set,
+	// DetectOpts are ignored.
+	Credentials *auth.Credentials
+	// ClientCertProvider is a function that returns a TLS client certificate to
+	// be used when opening TLS connections. It follows the same semantics as
+	// crypto/tls.Config.GetClientCertificate.
+	ClientCertProvider ClientCertProvider
+	// DetectOpts configures settings for detecting Application Default
+	// Credentials.
+	DetectOpts *detect.DetectOptions
+	// UniverseDomain is the default service domain for a given Cloud universe.
+	// The default value is "googleapis.com". This is the universe domain
+	// configured for the client, which will be compared to the universe domain
+	// that is separately configured for the credentials.
+	UniverseDomain string
+	// Logger is used for debug logging. If provided, logging will be enabled
+	// at the loggers configured level. By default logging is disabled unless
+	// enabled by setting GOOGLE_SDK_GO_LOGGING_LEVEL in which case a default
+	// logger will be used. Optional.
+	Logger *slog.Logger
+
+	// InternalOptions are NOT meant to be set directly by consumers of this
+	// package, they should only be set by generated client code.
+	InternalOptions *InternalOptions
+}
+
+func (o *Options) validate() error {
+	if o == nil {
+		return errors.New("httptransport: opts required to be non-nil")
+	}
+	if o.InternalOptions != nil && o.InternalOptions.SkipValidation {
+		return nil
+	}
+	hasCreds := o.APIKey != "" ||
+		o.Credentials != nil ||
+		(o.DetectOpts != nil && len(o.DetectOpts.CredentialsJSON) > 0) ||
+		(o.DetectOpts != nil && o.DetectOpts.CredentialsFile != "")
+	if o.DisableAuthentication && hasCreds {
+		return errors.New("httptransport: DisableAuthentication is incompatible with options that set or detect credentials")
+	}
+	return nil
+}
+
+// client returns the client a user set for the detect options or nil if one was
+// not set.
+func (o *Options) client() *http.Client {
+	if o.DetectOpts != nil && o.DetectOpts.Client != nil {
+		return o.DetectOpts.Client
+	}
+	return nil
+}
+
+func (o *Options) logger() *slog.Logger {
+	return internallog.New(o.Logger)
+}
+
+func (o *Options) resolveDetectOptions() *detect.DetectOptions {
+	io := o.InternalOptions
+	// soft-clone these so we are not updating a ref the user holds and may reuse
+	do := transport.CloneDetectOptions(o.DetectOpts)
+
+	// If scoped JWTs are enabled or the user provided an aud, allow self-signed JWT.
+	if (io != nil && io.EnableJWTWithScope) || do.Audience != "" {
+		do.UseSelfSignedJWT = true
+	}
+	// Only default scopes if user did not also set an audience.
+	if len(do.Scopes) == 0 && do.Audience == "" && io != nil && len(io.DefaultScopes) > 0 {
+		do.Scopes = make([]string, len(io.DefaultScopes))
+		copy(do.Scopes, io.DefaultScopes)
+	}
+	if len(do.Scopes) == 0 && do.Audience == "" && io != nil {
+		do.Audience = o.InternalOptions.DefaultAudience
+	}
+	if o.ClientCertProvider != nil {
+		tlsConfig := &tls.Config{
+			GetClientCertificate: o.ClientCertProvider,
+		}
+		do.Client = transport.DefaultHTTPClientWithTLS(tlsConfig)
+		do.TokenURL = detect.GoogleMTLSTokenURL
+	}
+	if do.Logger == nil {
+		do.Logger = o.logger()
+	}
+	return do
+}
+
+// InternalOptions are only meant to be set by generated client code. These are
+// not meant to be set directly by consumers of this package. Configuration in
+// this type is considered EXPERIMENTAL and may be removed at any time in the
+// future without warning.
+type InternalOptions struct {
+	// EnableJWTWithScope specifies if scope can be used with self-signed JWT.
+	EnableJWTWithScope bool
+	// DefaultAudience specifies a default audience to be used as the audience
+	// field ("aud") for the JWT token authentication.
+	DefaultAudience string
+	// DefaultEndpointTemplate combined with UniverseDomain specifies the
+	// default endpoint.
+	DefaultEndpointTemplate string
+	// DefaultMTLSEndpoint specifies the default mTLS endpoint.
+	DefaultMTLSEndpoint string
+	// DefaultScopes specifies the default OAuth2 scopes to be used for a
+	// service.
+	DefaultScopes []string
+	// SkipValidation bypasses validation on Options. It should only be used
+	// internally for clients that need more control over their transport.
+	SkipValidation bool
+	// SkipUniverseDomainValidation skips the verification that the universe
+	// domain configured for the client matches the universe domain configured
+	// for the credentials. It should only be used internally for clients that
+	// need more control over their transport. The default is false.
+	SkipUniverseDomainValidation bool
+}
+
+// AddAuthorizationMiddleware adds a middleware to the provided client's
+// transport that sets the Authorization header with the value produced by the
+// provided [cloud.google.com/go/auth.Credentials]. An error is returned only
+// if client or creds is nil.
+//
+// This function does not support setting a universe domain value on the client.
+func AddAuthorizationMiddleware(client *http.Client, creds *auth.Credentials) error {
+	if client == nil || creds == nil {
+		return fmt.Errorf("httptransport: client and creds must not be nil")
+	}
+	base := client.Transport
+	if base == nil {
+		if dt, ok := http.DefaultTransport.(*http.Transport); ok {
+			base = dt.Clone()
+		} else {
+			// Directly reuse the DefaultTransport if the application has
+			// replaced it with an implementation of RoundTripper other than
+			// http.Transport.
+			base = http.DefaultTransport
+		}
+	}
+	client.Transport = &authTransport{
+		creds: creds,
+		base:  base,
+	}
+	return nil
+}
+
+// NewClient returns a [net/http.Client] that can be used to communicate with a
+// Google cloud service, configured with the provided [Options]. It
+// automatically appends Authorization headers to all outgoing requests.
+func NewClient(opts *Options) (*http.Client, error) {
+	if err := opts.validate(); err != nil {
+		return nil, err
+	}
+
+	tOpts := &transport.Options{
+		Endpoint:           opts.Endpoint,
+		ClientCertProvider: opts.ClientCertProvider,
+		Client:             opts.client(),
+		UniverseDomain:     opts.UniverseDomain,
+		Logger:             opts.logger(),
+	}
+	if io := opts.InternalOptions; io != nil {
+		tOpts.DefaultEndpointTemplate = io.DefaultEndpointTemplate
+		tOpts.DefaultMTLSEndpoint = io.DefaultMTLSEndpoint
+	}
+	clientCertProvider, dialTLSContext, err := transport.GetHTTPTransportConfig(tOpts)
+	if err != nil {
+		return nil, err
+	}
+	baseRoundTripper := opts.BaseRoundTripper
+	if baseRoundTripper == nil {
+		baseRoundTripper = defaultBaseTransport(clientCertProvider, dialTLSContext)
+	}
+	// Ensure the token exchange transport uses the same ClientCertProvider as the API transport.
+	opts.ClientCertProvider = clientCertProvider
+	trans, err := newTransport(baseRoundTripper, opts)
+	if err != nil {
+		return nil, err
+	}
+	return &http.Client{
+		Transport: trans,
+	}, nil
+}
+
+// SetAuthHeader uses the provided token to set the Authorization header on a
+// request. If the token.Type is empty, the type is assumed to be Bearer.
+func SetAuthHeader(token *auth.Token, req *http.Request) {
+	typ := token.Type
+	if typ == "" {
+		typ = internal.TokenTypeBearer
+	}
+	req.Header.Set("Authorization", typ+" "+token.Value)
+}

vendor/cloud.google.com/go/auth/httptransport/transport.go 🔗

@@ -0,0 +1,234 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package httptransport
+
+import (
+	"context"
+	"crypto/tls"
+	"net"
+	"net/http"
+	"os"
+	"time"
+
+	"cloud.google.com/go/auth"
+	"cloud.google.com/go/auth/credentials"
+	"cloud.google.com/go/auth/internal"
+	"cloud.google.com/go/auth/internal/transport"
+	"cloud.google.com/go/auth/internal/transport/cert"
+	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
+	"golang.org/x/net/http2"
+)
+
+const (
+	quotaProjectHeaderKey = "X-goog-user-project"
+)
+
+func newTransport(base http.RoundTripper, opts *Options) (http.RoundTripper, error) {
+	var headers = opts.Headers
+	ht := &headerTransport{
+		base:    base,
+		headers: headers,
+	}
+	var trans http.RoundTripper = ht
+	trans = addOpenTelemetryTransport(trans, opts)
+	switch {
+	case opts.DisableAuthentication:
+		// Do nothing.
+	case opts.APIKey != "":
+		qp := internal.GetQuotaProject(nil, opts.Headers.Get(quotaProjectHeaderKey))
+		if qp != "" {
+			if headers == nil {
+				headers = make(map[string][]string, 1)
+			}
+			headers.Set(quotaProjectHeaderKey, qp)
+		}
+		trans = &apiKeyTransport{
+			Transport: trans,
+			Key:       opts.APIKey,
+		}
+	default:
+		var creds *auth.Credentials
+		if opts.Credentials != nil {
+			creds = opts.Credentials
+		} else {
+			var err error
+			creds, err = credentials.DetectDefault(opts.resolveDetectOptions())
+			if err != nil {
+				return nil, err
+			}
+		}
+		qp, err := creds.QuotaProjectID(context.Background())
+		if err != nil {
+			return nil, err
+		}
+		if qp != "" {
+			if headers == nil {
+				headers = make(map[string][]string, 1)
+			}
+			// Don't overwrite user specified quota
+			if v := headers.Get(quotaProjectHeaderKey); v == "" {
+				headers.Set(quotaProjectHeaderKey, qp)
+			}
+		}
+		var skipUD bool
+		if iOpts := opts.InternalOptions; iOpts != nil {
+			skipUD = iOpts.SkipUniverseDomainValidation
+		}
+		creds.TokenProvider = auth.NewCachedTokenProvider(creds.TokenProvider, nil)
+		trans = &authTransport{
+			base:                         trans,
+			creds:                        creds,
+			clientUniverseDomain:         opts.UniverseDomain,
+			skipUniverseDomainValidation: skipUD,
+		}
+	}
+	return trans, nil
+}
+
+// defaultBaseTransport returns the base HTTP transport, taking most
+// defaults from http.DefaultTransport.
+// If a client certificate source is available, TLSClientConfig is set as well.
+func defaultBaseTransport(clientCertSource cert.Provider, dialTLSContext func(context.Context, string, string) (net.Conn, error)) http.RoundTripper {
+	defaultTransport, ok := http.DefaultTransport.(*http.Transport)
+	if !ok {
+		defaultTransport = transport.BaseTransport()
+	}
+	trans := defaultTransport.Clone()
+	trans.MaxIdleConnsPerHost = 100
+
+	if clientCertSource != nil {
+		trans.TLSClientConfig = &tls.Config{
+			GetClientCertificate: clientCertSource,
+		}
+	}
+	if dialTLSContext != nil {
+		// If DialTLSContext is set, TLSClientConfig will be ignored.
+		trans.DialTLSContext = dialTLSContext
+	}
+
+	// Configures the ReadIdleTimeout HTTP/2 option for the
+	// transport. This allows broken idle connections to be pruned more quickly,
+	// preventing the client from attempting to re-use connections that will no
+	// longer work.
+	http2Trans, err := http2.ConfigureTransports(trans)
+	if err == nil {
+		http2Trans.ReadIdleTimeout = time.Second * 31
+	}
+
+	return trans
+}
+
+type apiKeyTransport struct {
+	// Key is the API Key to set on requests.
+	Key string
+	// Transport is the underlying HTTP transport.
+	// If nil, http.DefaultTransport is used.
+	Transport http.RoundTripper
+}
+
+func (t *apiKeyTransport) RoundTrip(req *http.Request) (*http.Response, error) {
+	newReq := *req
+	args := newReq.URL.Query()
+	args.Set("key", t.Key)
+	newReq.URL.RawQuery = args.Encode()
+	return t.Transport.RoundTrip(&newReq)
+}
+
+type headerTransport struct {
+	headers http.Header
+	base    http.RoundTripper
+}
+
+func (t *headerTransport) RoundTrip(req *http.Request) (*http.Response, error) {
+	rt := t.base
+	newReq := *req
+	newReq.Header = make(http.Header)
+	for k, vv := range req.Header {
+		newReq.Header[k] = vv
+	}
+
+	for k, v := range t.headers {
+		newReq.Header[k] = v
+	}
+
+	return rt.RoundTrip(&newReq)
+}
+
+func addOpenTelemetryTransport(trans http.RoundTripper, opts *Options) http.RoundTripper {
+	if opts.DisableTelemetry {
+		return trans
+	}
+	return otelhttp.NewTransport(trans)
+}
+
+type authTransport struct {
+	creds                        *auth.Credentials
+	base                         http.RoundTripper
+	clientUniverseDomain         string
+	skipUniverseDomainValidation bool
+}
+
+// getClientUniverseDomain returns the default service domain for a given Cloud
+// universe, with the following precedence:
+//
+// 1. A non-empty option.WithUniverseDomain or similar client option.
+// 2. A non-empty environment variable GOOGLE_CLOUD_UNIVERSE_DOMAIN.
+// 3. The default value "googleapis.com".
+//
+// This is the universe domain configured for the client, which will be compared
+// to the universe domain that is separately configured for the credentials.
+func (t *authTransport) getClientUniverseDomain() string {
+	if t.clientUniverseDomain != "" {
+		return t.clientUniverseDomain
+	}
+	if envUD := os.Getenv(internal.UniverseDomainEnvVar); envUD != "" {
+		return envUD
+	}
+	return internal.DefaultUniverseDomain
+}
+
+// RoundTrip authorizes and authenticates the request with an
+// access token from Transport's Source. Per the RoundTripper contract we must
+// not modify the initial request, so we clone it, and we must close the body
+// on any errors that happen during our token logic.
+func (t *authTransport) RoundTrip(req *http.Request) (*http.Response, error) {
+	reqBodyClosed := false
+	if req.Body != nil {
+		defer func() {
+			if !reqBodyClosed {
+				req.Body.Close()
+			}
+		}()
+	}
+	token, err := t.creds.Token(req.Context())
+	if err != nil {
+		return nil, err
+	}
+	if !t.skipUniverseDomainValidation && token.MetadataString("auth.google.tokenSource") != "compute-metadata" {
+		credentialsUniverseDomain, err := t.creds.UniverseDomain(req.Context())
+		if err != nil {
+			return nil, err
+		}
+		if err := transport.ValidateUniverseDomain(t.getClientUniverseDomain(), credentialsUniverseDomain); err != nil {
+			return nil, err
+		}
+	}
+	req2 := req.Clone(req.Context())
+	SetAuthHeader(token, req2)
+	reqBodyClosed = true
+	return t.base.RoundTrip(req2)
+}

vendor/cloud.google.com/go/auth/internal/credsfile/credsfile.go 🔗

@@ -0,0 +1,107 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Package credsfile is meant to hide implementation details from the public
+// surface of the detect package. It should not import any other packages in
+// this module. It is located under the main internal package so other
+// sub-packages can use these parsed types as well.
+package credsfile
+
+import (
+	"os"
+	"os/user"
+	"path/filepath"
+	"runtime"
+)
+
+const (
+	// GoogleAppCredsEnvVar is the environment variable for setting the
+	// application default credentials.
+	GoogleAppCredsEnvVar = "GOOGLE_APPLICATION_CREDENTIALS"
+	userCredsFilename    = "application_default_credentials.json"
+)
+
+// CredentialType represents different credential filetypes Google credentials
+// can be.
+type CredentialType int
+
+const (
+	// UnknownCredType is an unidentified file type.
+	UnknownCredType CredentialType = iota
+	// UserCredentialsKey represents a user creds file type.
+	UserCredentialsKey
+	// ServiceAccountKey represents a service account file type.
+	ServiceAccountKey
+	// ImpersonatedServiceAccountKey represents an impersonated service account
+	// file type.
+	ImpersonatedServiceAccountKey
+	// ExternalAccountKey represents an external account file type.
+	ExternalAccountKey
+	// GDCHServiceAccountKey represents a GDCH file type.
+	GDCHServiceAccountKey
+	// ExternalAccountAuthorizedUserKey represents an external account authorized
+	// user file type.
+	ExternalAccountAuthorizedUserKey
+)
+
+// parseCredentialType returns the associated filetype based on the parsed
+// typeString provided.
+func parseCredentialType(typeString string) CredentialType {
+	switch typeString {
+	case "service_account":
+		return ServiceAccountKey
+	case "authorized_user":
+		return UserCredentialsKey
+	case "impersonated_service_account":
+		return ImpersonatedServiceAccountKey
+	case "external_account":
+		return ExternalAccountKey
+	case "external_account_authorized_user":
+		return ExternalAccountAuthorizedUserKey
+	case "gdch_service_account":
+		return GDCHServiceAccountKey
+	default:
+		return UnknownCredType
+	}
+}
+
+// GetFileNameFromEnv returns the override if provided or detects a filename
+// from the environment.
+func GetFileNameFromEnv(override string) string {
+	if override != "" {
+		return override
+	}
+	return os.Getenv(GoogleAppCredsEnvVar)
+}
+
+// GetWellKnownFileName tries to locate the filepath for the user credential
+// file based on the environment.
+func GetWellKnownFileName() string {
+	if runtime.GOOS == "windows" {
+		return filepath.Join(os.Getenv("APPDATA"), "gcloud", userCredsFilename)
+	}
+	return filepath.Join(guessUnixHomeDir(), ".config", "gcloud", userCredsFilename)
+}
+
+// guessUnixHomeDir defaults to checking HOME, but not all unix systems have
+// this set, so it falls back to the current user's home directory.
+func guessUnixHomeDir() string {
+	if v := os.Getenv("HOME"); v != "" {
+		return v
+	}
+	if u, err := user.Current(); err == nil {
+		return u.HomeDir
+	}
+	return ""
+}

vendor/cloud.google.com/go/auth/internal/credsfile/filetype.go 🔗

@@ -0,0 +1,157 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package credsfile
+
+import (
+	"encoding/json"
+)
+
+// Config3LO is the internals of a client creds file.
+type Config3LO struct {
+	ClientID     string   `json:"client_id"`
+	ClientSecret string   `json:"client_secret"`
+	RedirectURIs []string `json:"redirect_uris"`
+	AuthURI      string   `json:"auth_uri"`
+	TokenURI     string   `json:"token_uri"`
+}
+
+// ClientCredentialsFile representation.
+type ClientCredentialsFile struct {
+	Web            *Config3LO `json:"web"`
+	Installed      *Config3LO `json:"installed"`
+	UniverseDomain string     `json:"universe_domain"`
+}
+
+// ServiceAccountFile representation.
+type ServiceAccountFile struct {
+	Type           string `json:"type"`
+	ProjectID      string `json:"project_id"`
+	PrivateKeyID   string `json:"private_key_id"`
+	PrivateKey     string `json:"private_key"`
+	ClientEmail    string `json:"client_email"`
+	ClientID       string `json:"client_id"`
+	AuthURL        string `json:"auth_uri"`
+	TokenURL       string `json:"token_uri"`
+	UniverseDomain string `json:"universe_domain"`
+}
+
+// UserCredentialsFile representation.
+type UserCredentialsFile struct {
+	Type           string `json:"type"`
+	ClientID       string `json:"client_id"`
+	ClientSecret   string `json:"client_secret"`
+	QuotaProjectID string `json:"quota_project_id"`
+	RefreshToken   string `json:"refresh_token"`
+	UniverseDomain string `json:"universe_domain"`
+}
+
+// ExternalAccountFile representation.
+type ExternalAccountFile struct {
+	Type                           string                           `json:"type"`
+	ClientID                       string                           `json:"client_id"`
+	ClientSecret                   string                           `json:"client_secret"`
+	Audience                       string                           `json:"audience"`
+	SubjectTokenType               string                           `json:"subject_token_type"`
+	ServiceAccountImpersonationURL string                           `json:"service_account_impersonation_url"`
+	TokenURL                       string                           `json:"token_url"`
+	CredentialSource               *CredentialSource                `json:"credential_source,omitempty"`
+	TokenInfoURL                   string                           `json:"token_info_url"`
+	ServiceAccountImpersonation    *ServiceAccountImpersonationInfo `json:"service_account_impersonation,omitempty"`
+	QuotaProjectID                 string                           `json:"quota_project_id"`
+	WorkforcePoolUserProject       string                           `json:"workforce_pool_user_project"`
+	UniverseDomain                 string                           `json:"universe_domain"`
+}
+
+// ExternalAccountAuthorizedUserFile representation.
+type ExternalAccountAuthorizedUserFile struct {
+	Type           string `json:"type"`
+	Audience       string `json:"audience"`
+	ClientID       string `json:"client_id"`
+	ClientSecret   string `json:"client_secret"`
+	RefreshToken   string `json:"refresh_token"`
+	TokenURL       string `json:"token_url"`
+	TokenInfoURL   string `json:"token_info_url"`
+	RevokeURL      string `json:"revoke_url"`
+	QuotaProjectID string `json:"quota_project_id"`
+	UniverseDomain string `json:"universe_domain"`
+}
+
+// CredentialSource stores the information necessary to retrieve the credentials for the STS exchange.
+//
+// One field amongst File, URL, Certificate, and Executable should be filled, depending on the kind of credential in question.
+// The EnvironmentID should start with AWS if being used for an AWS credential.
+type CredentialSource struct {
+	File                        string             `json:"file"`
+	URL                         string             `json:"url"`
+	Headers                     map[string]string  `json:"headers"`
+	Executable                  *ExecutableConfig  `json:"executable,omitempty"`
+	Certificate                 *CertificateConfig `json:"certificate"`
+	EnvironmentID               string             `json:"environment_id"` // TODO: Make type for this
+	RegionURL                   string             `json:"region_url"`
+	RegionalCredVerificationURL string             `json:"regional_cred_verification_url"`
+	CredVerificationURL         string             `json:"cred_verification_url"`
+	IMDSv2SessionTokenURL       string             `json:"imdsv2_session_token_url"`
+	Format                      *Format            `json:"format,omitempty"`
+}
+
+// Format describes the format of a [CredentialSource].
+type Format struct {
+	// Type is either "text" or "json". When not provided "text" type is assumed.
+	Type string `json:"type"`
+	// SubjectTokenFieldName is only required for JSON format. This would be "access_token" for azure.
+	SubjectTokenFieldName string `json:"subject_token_field_name"`
+}
+
+// ExecutableConfig represents the command to run for an executable
+// [CredentialSource].
+type ExecutableConfig struct {
+	Command       string `json:"command"`
+	TimeoutMillis int    `json:"timeout_millis"`
+	OutputFile    string `json:"output_file"`
+}
+
+// CertificateConfig represents the options used to set up an X509-based
+// workload [CredentialSource].
+type CertificateConfig struct {
+	UseDefaultCertificateConfig bool   `json:"use_default_certificate_config"`
+	CertificateConfigLocation   string `json:"certificate_config_location"`
+}
+
+// ServiceAccountImpersonationInfo has impersonation configuration.
+type ServiceAccountImpersonationInfo struct {
+	TokenLifetimeSeconds int `json:"token_lifetime_seconds"`
+}
+
+// ImpersonatedServiceAccountFile representation.
+type ImpersonatedServiceAccountFile struct {
+	Type                           string          `json:"type"`
+	ServiceAccountImpersonationURL string          `json:"service_account_impersonation_url"`
+	Delegates                      []string        `json:"delegates"`
+	CredSource                     json.RawMessage `json:"source_credentials"`
+	UniverseDomain                 string          `json:"universe_domain"`
+}
+
+// GDCHServiceAccountFile represents the Google Distributed Cloud Hosted (GDCH) service identity file.
+type GDCHServiceAccountFile struct {
+	Type           string `json:"type"`
+	FormatVersion  string `json:"format_version"`
+	Project        string `json:"project"`
+	Name           string `json:"name"`
+	CertPath       string `json:"ca_cert_path"`
+	PrivateKeyID   string `json:"private_key_id"`
+	PrivateKey     string `json:"private_key"`
+	TokenURL       string `json:"token_uri"`
+	UniverseDomain string `json:"universe_domain"`
+}

vendor/cloud.google.com/go/auth/internal/credsfile/parse.go 🔗

@@ -0,0 +1,98 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package credsfile
+
+import (
+	"encoding/json"
+)
+
+// ParseServiceAccount parses bytes into a [ServiceAccountFile].
+func ParseServiceAccount(b []byte) (*ServiceAccountFile, error) {
+	var f *ServiceAccountFile
+	if err := json.Unmarshal(b, &f); err != nil {
+		return nil, err
+	}
+	return f, nil
+}
+
+// ParseClientCredentials parses bytes into a
+// [credsfile.ClientCredentialsFile].
+func ParseClientCredentials(b []byte) (*ClientCredentialsFile, error) {
+	var f *ClientCredentialsFile
+	if err := json.Unmarshal(b, &f); err != nil {
+		return nil, err
+	}
+	return f, nil
+}
+
+// ParseUserCredentials parses bytes into a [UserCredentialsFile].
+func ParseUserCredentials(b []byte) (*UserCredentialsFile, error) {
+	var f *UserCredentialsFile
+	if err := json.Unmarshal(b, &f); err != nil {
+		return nil, err
+	}
+	return f, nil
+}
+
+// ParseExternalAccount parses bytes into a [ExternalAccountFile].
+func ParseExternalAccount(b []byte) (*ExternalAccountFile, error) {
+	var f *ExternalAccountFile
+	if err := json.Unmarshal(b, &f); err != nil {
+		return nil, err
+	}
+	return f, nil
+}
+
+// ParseExternalAccountAuthorizedUser parses bytes into a
+// [ExternalAccountAuthorizedUserFile].
+func ParseExternalAccountAuthorizedUser(b []byte) (*ExternalAccountAuthorizedUserFile, error) {
+	var f *ExternalAccountAuthorizedUserFile
+	if err := json.Unmarshal(b, &f); err != nil {
+		return nil, err
+	}
+	return f, nil
+}
+
+// ParseImpersonatedServiceAccount parses bytes into a
+// [ImpersonatedServiceAccountFile].
+func ParseImpersonatedServiceAccount(b []byte) (*ImpersonatedServiceAccountFile, error) {
+	var f *ImpersonatedServiceAccountFile
+	if err := json.Unmarshal(b, &f); err != nil {
+		return nil, err
+	}
+	return f, nil
+}
+
+// ParseGDCHServiceAccount parses bytes into a [GDCHServiceAccountFile].
+func ParseGDCHServiceAccount(b []byte) (*GDCHServiceAccountFile, error) {
+	var f *GDCHServiceAccountFile
+	if err := json.Unmarshal(b, &f); err != nil {
+		return nil, err
+	}
+	return f, nil
+}
+
+type fileTypeChecker struct {
+	Type string `json:"type"`
+}
+
+// ParseFileType determines the [CredentialType] based on bytes provided.
+func ParseFileType(b []byte) (CredentialType, error) {
+	var f fileTypeChecker
+	if err := json.Unmarshal(b, &f); err != nil {
+		return 0, err
+	}
+	return parseCredentialType(f.Type), nil
+}
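`ParseFileType` works by probing only the top-level `"type"` field before committing to one of the full parsers above. A self-contained sketch of that two-phase approach (illustrative names, and only a subset of the real type strings):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// detectCredType mimics the vendored ParseFileType: unmarshal just the
// "type" field of a credentials JSON blob to classify it, ignoring all
// other fields until the caller picks the matching full parser.
func detectCredType(b []byte) (string, error) {
	var probe struct {
		Type string `json:"type"`
	}
	if err := json.Unmarshal(b, &probe); err != nil {
		return "", err
	}
	switch probe.Type {
	case "service_account", "authorized_user", "external_account",
		"impersonated_service_account", "gdch_service_account":
		return probe.Type, nil
	default:
		return "unknown", nil
	}
}

func main() {
	t, err := detectCredType([]byte(`{"type":"service_account","project_id":"demo"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(t) // service_account
}
```

Decoding into a one-field struct keeps the probe cheap and tolerant of unknown fields, which is why the real package can route any of the seven file types through the same entry point.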

vendor/cloud.google.com/go/auth/internal/internal.go 🔗

@@ -0,0 +1,219 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package internal
+
+import (
+	"context"
+	"crypto"
+	"crypto/x509"
+	"encoding/json"
+	"encoding/pem"
+	"errors"
+	"fmt"
+	"io"
+	"net/http"
+	"os"
+	"sync"
+	"time"
+
+	"cloud.google.com/go/compute/metadata"
+)
+
+const (
+	// TokenTypeBearer is the auth header prefix for bearer tokens.
+	TokenTypeBearer = "Bearer"
+
+	// QuotaProjectEnvVar is the environment variable for setting the quota
+	// project.
+	QuotaProjectEnvVar = "GOOGLE_CLOUD_QUOTA_PROJECT"
+	// UniverseDomainEnvVar is the environment variable for setting the default
+	// service domain for a given Cloud universe.
+	UniverseDomainEnvVar = "GOOGLE_CLOUD_UNIVERSE_DOMAIN"
+	projectEnvVar        = "GOOGLE_CLOUD_PROJECT"
+	maxBodySize          = 1 << 20
+
+	// DefaultUniverseDomain is the default value for universe domain.
+	// Universe domain is the default service domain for a given Cloud universe.
+	DefaultUniverseDomain = "googleapis.com"
+)
+
+type clonableTransport interface {
+	Clone() *http.Transport
+}
+
+// DefaultClient returns an [http.Client] with some defaults set. If
+// the current [http.DefaultTransport] is a [clonableTransport], as
+// is the case for an [*http.Transport], the clone will be used.
+// Otherwise the [http.DefaultTransport] is used directly.
+func DefaultClient() *http.Client {
+	if transport, ok := http.DefaultTransport.(clonableTransport); ok {
+		return &http.Client{
+			Transport: transport.Clone(),
+			Timeout:   30 * time.Second,
+		}
+	}
+
+	return &http.Client{
+		Transport: http.DefaultTransport,
+		Timeout:   30 * time.Second,
+	}
+}
+
+// ParseKey converts the binary contents of a private key file
+// to a crypto.Signer. It detects whether the private key is in a
+// PEM container or not. If so, it extracts the private key
+// from the PEM container before conversion. It only supports PEM
+// containers with no passphrase.
+func ParseKey(key []byte) (crypto.Signer, error) {
+	block, _ := pem.Decode(key)
+	if block != nil {
+		key = block.Bytes
+	}
+	var parsedKey crypto.PrivateKey
+	var err error
+	parsedKey, err = x509.ParsePKCS8PrivateKey(key)
+	if err != nil {
+		parsedKey, err = x509.ParsePKCS1PrivateKey(key)
+		if err != nil {
+			return nil, fmt.Errorf("private key should be a PEM or plain PKCS1 or PKCS8: %w", err)
+		}
+	}
+	parsed, ok := parsedKey.(crypto.Signer)
+	if !ok {
+		return nil, errors.New("private key is not a signer")
+	}
+	return parsed, nil
+}
+
+// GetQuotaProject retrieves quota project with precedence being: override,
+// environment variable, creds json file.
+func GetQuotaProject(b []byte, override string) string {
+	if override != "" {
+		return override
+	}
+	if env := os.Getenv(QuotaProjectEnvVar); env != "" {
+		return env
+	}
+	if b == nil {
+		return ""
+	}
+	var v struct {
+		QuotaProject string `json:"quota_project_id"`
+	}
+	if err := json.Unmarshal(b, &v); err != nil {
+		return ""
+	}
+	return v.QuotaProject
+}
+
+// GetProjectID retrieves project with precedence being: override,
+// environment variable, creds json file.
+func GetProjectID(b []byte, override string) string {
+	if override != "" {
+		return override
+	}
+	if env := os.Getenv(projectEnvVar); env != "" {
+		return env
+	}
+	if b == nil {
+		return ""
+	}
+	var v struct {
+		ProjectID string `json:"project_id"` // standard service account key
+		Project   string `json:"project"`    // gdch key
+	}
+	if err := json.Unmarshal(b, &v); err != nil {
+		return ""
+	}
+	if v.ProjectID != "" {
+		return v.ProjectID
+	}
+	return v.Project
+}
+
+// DoRequest executes the provided req with the client. It reads and closes
+// the response body, returning the response along with the body bytes.
+func DoRequest(client *http.Client, req *http.Request) (*http.Response, []byte, error) {
+	resp, err := client.Do(req)
+	if err != nil {
+		return nil, nil, err
+	}
+	defer resp.Body.Close()
+	body, err := ReadAll(io.LimitReader(resp.Body, maxBodySize))
+	if err != nil {
+		return nil, nil, err
+	}
+	return resp, body, nil
+}
+
+// ReadAll consumes the whole reader, capping the number of bytes read as
+// overflow protection.
+func ReadAll(r io.Reader) ([]byte, error) {
+	return io.ReadAll(io.LimitReader(r, maxBodySize))
+}
+
+// StaticCredentialsProperty is a helper for creating static credentials
+// properties.
+func StaticCredentialsProperty(s string) StaticProperty {
+	return StaticProperty(s)
+}
+
+// StaticProperty always returns the value of the underlying string.
+type StaticProperty string
+
+// GetProperty returns the property value for the given context.
+func (p StaticProperty) GetProperty(context.Context) (string, error) {
+	return string(p), nil
+}
+
+// ComputeUniverseDomainProvider fetches the credentials universe domain from
+// the google cloud metadata service.
+type ComputeUniverseDomainProvider struct {
+	MetadataClient     *metadata.Client
+	universeDomainOnce sync.Once
+	universeDomain     string
+	universeDomainErr  error
+}
+
+// GetProperty fetches the credentials universe domain from the google cloud
+// metadata service.
+func (c *ComputeUniverseDomainProvider) GetProperty(ctx context.Context) (string, error) {
+	c.universeDomainOnce.Do(func() {
+		c.universeDomain, c.universeDomainErr = getMetadataUniverseDomain(ctx, c.MetadataClient)
+	})
+	if c.universeDomainErr != nil {
+		return "", c.universeDomainErr
+	}
+	return c.universeDomain, nil
+}
+
+// httpGetMetadataUniverseDomain is a package var for unit test substitution.
+var httpGetMetadataUniverseDomain = func(ctx context.Context, client *metadata.Client) (string, error) {
+	ctx, cancel := context.WithTimeout(ctx, 1*time.Second)
+	defer cancel()
+	return client.GetWithContext(ctx, "universe/universe-domain")
+}
+
+func getMetadataUniverseDomain(ctx context.Context, client *metadata.Client) (string, error) {
+	universeDomain, err := httpGetMetadataUniverseDomain(ctx, client)
+	if err == nil {
+		return universeDomain, nil
+	}
+	if _, ok := err.(metadata.NotDefinedError); ok {
+		// http.StatusNotFound (404)
+		return DefaultUniverseDomain, nil
+	}
+	return "", err
+}

vendor/cloud.google.com/go/auth/internal/jwt/jwt.go 🔗

@@ -0,0 +1,171 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package jwt
+
+import (
+	"bytes"
+	"crypto"
+	"crypto/rand"
+	"crypto/rsa"
+	"crypto/sha256"
+	"encoding/base64"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"strings"
+	"time"
+)
+
+const (
+	// HeaderAlgRSA256 is the RS256 [Header.Algorithm].
+	HeaderAlgRSA256 = "RS256"
+	// HeaderAlgES256 is the ES256 [Header.Algorithm].
+	HeaderAlgES256 = "ES256"
+	// HeaderType is the standard [Header.Type].
+	HeaderType = "JWT"
+)
+
+// Header represents a JWT header.
+type Header struct {
+	Algorithm string `json:"alg"`
+	Type      string `json:"typ"`
+	KeyID     string `json:"kid"`
+}
+
+func (h *Header) encode() (string, error) {
+	b, err := json.Marshal(h)
+	if err != nil {
+		return "", err
+	}
+	return base64.RawURLEncoding.EncodeToString(b), nil
+}
+
+// Claims represents the claims set of a JWT.
+type Claims struct {
+	// Iss is the issuer JWT claim.
+	Iss string `json:"iss"`
+	// Scope is the scope JWT claim.
+	Scope string `json:"scope,omitempty"`
+	// Exp is the expiry JWT claim. If unset, default is in one hour from now.
+	Exp int64 `json:"exp"`
+	// Iat is the subject issued at claim. If unset, default is now.
+	Iat int64 `json:"iat"`
+	// Aud is the audience JWT claim. Optional.
+	Aud string `json:"aud"`
+	// Sub is the subject JWT claim. Optional.
+	Sub string `json:"sub,omitempty"`
+	// AdditionalClaims contains any additional non-standard JWT claims. Optional.
+	AdditionalClaims map[string]interface{} `json:"-"`
+}
+
+func (c *Claims) encode() (string, error) {
+	// Compensate for skew
+	now := time.Now().Add(-10 * time.Second)
+	if c.Iat == 0 {
+		c.Iat = now.Unix()
+	}
+	if c.Exp == 0 {
+		c.Exp = now.Add(time.Hour).Unix()
+	}
+	if c.Exp < c.Iat {
+		return "", fmt.Errorf("jwt: invalid Exp = %d; must be later than Iat = %d", c.Exp, c.Iat)
+	}
+
+	b, err := json.Marshal(c)
+	if err != nil {
+		return "", err
+	}
+
+	if len(c.AdditionalClaims) == 0 {
+		return base64.RawURLEncoding.EncodeToString(b), nil
+	}
+
+	// Marshal private claim set and then append it to b.
+	prv, err := json.Marshal(c.AdditionalClaims)
+	if err != nil {
+		return "", fmt.Errorf("invalid map of additional claims %v: %w", c.AdditionalClaims, err)
+	}
+
+	// Concatenate public and private claim JSON objects.
+	if !bytes.HasSuffix(b, []byte{'}'}) {
+		return "", fmt.Errorf("invalid JSON %s", b)
+	}
+	if !bytes.HasPrefix(prv, []byte{'{'}) {
+		return "", fmt.Errorf("invalid JSON %s", prv)
+	}
+	b[len(b)-1] = ','         // Replace closing curly brace with a comma.
+	b = append(b, prv[1:]...) // Append private claims.
+	return base64.RawURLEncoding.EncodeToString(b), nil
+}
+
+// EncodeJWS encodes the data using the provided key as a JSON web signature.
+func EncodeJWS(header *Header, c *Claims, signer crypto.Signer) (string, error) {
+	head, err := header.encode()
+	if err != nil {
+		return "", err
+	}
+	claims, err := c.encode()
+	if err != nil {
+		return "", err
+	}
+	ss := fmt.Sprintf("%s.%s", head, claims)
+	h := sha256.New()
+	h.Write([]byte(ss))
+	sig, err := signer.Sign(rand.Reader, h.Sum(nil), crypto.SHA256)
+	if err != nil {
+		return "", err
+	}
+	return fmt.Sprintf("%s.%s", ss, base64.RawURLEncoding.EncodeToString(sig)), nil
+}
+
+// DecodeJWS decodes a claim set from a JWS payload.
+func DecodeJWS(payload string) (*Claims, error) {
+	// decode returned id token to get expiry
+	s := strings.Split(payload, ".")
+	if len(s) < 2 {
+		return nil, errors.New("invalid token received")
+	}
+	decoded, err := base64.RawURLEncoding.DecodeString(s[1])
+	if err != nil {
+		return nil, err
+	}
+	c := &Claims{}
+	if err := json.NewDecoder(bytes.NewBuffer(decoded)).Decode(c); err != nil {
+		return nil, err
+	}
+	if err := json.NewDecoder(bytes.NewBuffer(decoded)).Decode(&c.AdditionalClaims); err != nil {
+		return nil, err
+	}
+	return c, err
+}
+
+// VerifyJWS tests whether the provided JWT token's signature was produced by
+// the private key associated with the provided public key.
+func VerifyJWS(token string, key *rsa.PublicKey) error {
+	parts := strings.Split(token, ".")
+	if len(parts) != 3 {
+		return errors.New("jwt: invalid token received, token must have 3 parts")
+	}
+
+	signedContent := parts[0] + "." + parts[1]
+	signatureString, err := base64.RawURLEncoding.DecodeString(parts[2])
+	if err != nil {
+		return err
+	}
+
+	h := sha256.New()
+	h.Write([]byte(signedContent))
+	return rsa.VerifyPKCS1v15(key, crypto.SHA256, h.Sum(nil), signatureString)
+}

vendor/cloud.google.com/go/auth/internal/transport/cba.go 🔗

@@ -0,0 +1,368 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package transport
+
+import (
+	"context"
+	"crypto/tls"
+	"crypto/x509"
+	"errors"
+	"log"
+	"log/slog"
+	"net"
+	"net/http"
+	"net/url"
+	"os"
+	"strconv"
+	"strings"
+
+	"cloud.google.com/go/auth/internal"
+	"cloud.google.com/go/auth/internal/transport/cert"
+	"github.com/google/s2a-go"
+	"github.com/google/s2a-go/fallback"
+	"google.golang.org/grpc/credentials"
+)
+
+const (
+	mTLSModeAlways = "always"
+	mTLSModeNever  = "never"
+	mTLSModeAuto   = "auto"
+
+	// Experimental: if true, the code will try MTLS with S2A as the default for transport security. Default value is false.
+	googleAPIUseS2AEnv     = "EXPERIMENTAL_GOOGLE_API_USE_S2A"
+	googleAPIUseCertSource = "GOOGLE_API_USE_CLIENT_CERTIFICATE"
+	googleAPIUseMTLS       = "GOOGLE_API_USE_MTLS_ENDPOINT"
+	googleAPIUseMTLSOld    = "GOOGLE_API_USE_MTLS"
+
+	universeDomainPlaceholder = "UNIVERSE_DOMAIN"
+
+	mtlsMDSRoot = "/run/google-mds-mtls/root.crt"
+	mtlsMDSKey  = "/run/google-mds-mtls/client.key"
+)
+
+// Options is a struct that duplicates information from the individual
+// transport packages in order to avoid cyclic deps. It correlates 1:1 with
+// fields on httptransport.Options and grpctransport.Options.
+type Options struct {
+	Endpoint                string
+	DefaultEndpointTemplate string
+	DefaultMTLSEndpoint     string
+	ClientCertProvider      cert.Provider
+	Client                  *http.Client
+	UniverseDomain          string
+	EnableDirectPath        bool
+	EnableDirectPathXds     bool
+	Logger                  *slog.Logger
+}
+
+// getUniverseDomain returns the default service domain for a given Cloud
+// universe.
+func (o *Options) getUniverseDomain() string {
+	if o.UniverseDomain == "" {
+		return internal.DefaultUniverseDomain
+	}
+	return o.UniverseDomain
+}
+
+// isUniverseDomainGDU returns true if the universe domain is the default Google
+// universe.
+func (o *Options) isUniverseDomainGDU() bool {
+	return o.getUniverseDomain() == internal.DefaultUniverseDomain
+}
+
+// defaultEndpoint returns the DefaultEndpointTemplate merged with the
+// universe domain if the DefaultEndpointTemplate is set, otherwise returns an
+// empty string.
+func (o *Options) defaultEndpoint() string {
+	if o.DefaultEndpointTemplate == "" {
+		return ""
+	}
+	return strings.Replace(o.DefaultEndpointTemplate, universeDomainPlaceholder, o.getUniverseDomain(), 1)
+}
+
+// defaultMTLSEndpoint returns the DefaultMTLSEndpoint merged with the
+// universe domain if the DefaultMTLSEndpoint is set, otherwise returns an
+// empty string.
+func (o *Options) defaultMTLSEndpoint() string {
+	if o.DefaultMTLSEndpoint == "" {
+		return ""
+	}
+	return strings.Replace(o.DefaultMTLSEndpoint, universeDomainPlaceholder, o.getUniverseDomain(), 1)
+}
+
+// mergedEndpoint merges a user-provided Endpoint of format host[:port] with the
+// default endpoint.
+func (o *Options) mergedEndpoint() (string, error) {
+	defaultEndpoint := o.defaultEndpoint()
+	u, err := url.Parse(fixScheme(defaultEndpoint))
+	if err != nil {
+		return "", err
+	}
+	return strings.Replace(defaultEndpoint, u.Host, o.Endpoint, 1), nil
+}
+
+func fixScheme(baseURL string) string {
+	if !strings.Contains(baseURL, "://") {
+		baseURL = "https://" + baseURL
+	}
+	return baseURL
+}
+
+// GetGRPCTransportCredsAndEndpoint returns an instance of
+// [google.golang.org/grpc/credentials.TransportCredentials], and the
+// corresponding endpoint to use for GRPC client.
+func GetGRPCTransportCredsAndEndpoint(opts *Options) (credentials.TransportCredentials, string, error) {
+	config, err := getTransportConfig(opts)
+	if err != nil {
+		return nil, "", err
+	}
+
+	defaultTransportCreds := credentials.NewTLS(&tls.Config{
+		GetClientCertificate: config.clientCertSource,
+	})
+
+	var s2aAddr string
+	var transportCredsForS2A credentials.TransportCredentials
+
+	if config.mtlsS2AAddress != "" {
+		s2aAddr = config.mtlsS2AAddress
+		transportCredsForS2A, err = loadMTLSMDSTransportCreds(mtlsMDSRoot, mtlsMDSKey)
+		if err != nil {
+			log.Printf("Loading MTLS MDS credentials failed: %v", err)
+			if config.s2aAddress != "" {
+				s2aAddr = config.s2aAddress
+			} else {
+				return defaultTransportCreds, config.endpoint, nil
+			}
+		}
+	} else if config.s2aAddress != "" {
+		s2aAddr = config.s2aAddress
+	} else {
+		return defaultTransportCreds, config.endpoint, nil
+	}
+
+	var fallbackOpts *s2a.FallbackOptions
+	// In case of S2A failure, fall back to the endpoint that would've been used without S2A.
+	if fallbackHandshake, err := fallback.DefaultFallbackClientHandshakeFunc(config.endpoint); err == nil {
+		fallbackOpts = &s2a.FallbackOptions{
+			FallbackClientHandshakeFunc: fallbackHandshake,
+		}
+	}
+
+	s2aTransportCreds, err := s2a.NewClientCreds(&s2a.ClientOptions{
+		S2AAddress:     s2aAddr,
+		TransportCreds: transportCredsForS2A,
+		FallbackOpts:   fallbackOpts,
+	})
+	if err != nil {
+		// Use default if we cannot initialize S2A client transport credentials.
+		return defaultTransportCreds, config.endpoint, nil
+	}
+	return s2aTransportCreds, config.s2aMTLSEndpoint, nil
+}
+
+// GetHTTPTransportConfig returns a client certificate source and a function for
+// dialing MTLS with S2A.
+func GetHTTPTransportConfig(opts *Options) (cert.Provider, func(context.Context, string, string) (net.Conn, error), error) {
+	config, err := getTransportConfig(opts)
+	if err != nil {
+		return nil, nil, err
+	}
+
+	var s2aAddr string
+	var transportCredsForS2A credentials.TransportCredentials
+
+	if config.mtlsS2AAddress != "" {
+		s2aAddr = config.mtlsS2AAddress
+		transportCredsForS2A, err = loadMTLSMDSTransportCreds(mtlsMDSRoot, mtlsMDSKey)
+		if err != nil {
+			log.Printf("Loading MTLS MDS credentials failed: %v", err)
+			if config.s2aAddress != "" {
+				s2aAddr = config.s2aAddress
+			} else {
+				return config.clientCertSource, nil, nil
+			}
+		}
+	} else if config.s2aAddress != "" {
+		s2aAddr = config.s2aAddress
+	} else {
+		return config.clientCertSource, nil, nil
+	}
+
+	var fallbackOpts *s2a.FallbackOptions
+	// In case of S2A failure, fall back to the endpoint that would've been used without S2A.
+	if fallbackURL, err := url.Parse(config.endpoint); err == nil {
+		if fallbackDialer, fallbackServerAddr, err := fallback.DefaultFallbackDialerAndAddress(fallbackURL.Hostname()); err == nil {
+			fallbackOpts = &s2a.FallbackOptions{
+				FallbackDialer: &s2a.FallbackDialer{
+					Dialer:     fallbackDialer,
+					ServerAddr: fallbackServerAddr,
+				},
+			}
+		}
+	}
+
+	dialTLSContextFunc := s2a.NewS2ADialTLSContextFunc(&s2a.ClientOptions{
+		S2AAddress:     s2aAddr,
+		TransportCreds: transportCredsForS2A,
+		FallbackOpts:   fallbackOpts,
+	})
+	return nil, dialTLSContextFunc, nil
+}
+
+func loadMTLSMDSTransportCreds(mtlsMDSRootFile, mtlsMDSKeyFile string) (credentials.TransportCredentials, error) {
+	rootPEM, err := os.ReadFile(mtlsMDSRootFile)
+	if err != nil {
+		return nil, err
+	}
+	caCertPool := x509.NewCertPool()
+	ok := caCertPool.AppendCertsFromPEM(rootPEM)
+	if !ok {
+		return nil, errors.New("failed to load MTLS MDS root certificate")
+	}
+	// The mTLS MDS credentials are formatted as the concatenation of a PEM-encoded certificate chain
+	// followed by a PEM-encoded private key. For this reason, the concatenation is passed in to the
+	// tls.X509KeyPair function as both the certificate chain and private key arguments.
+	cert, err := tls.LoadX509KeyPair(mtlsMDSKeyFile, mtlsMDSKeyFile)
+	if err != nil {
+		return nil, err
+	}
+	tlsConfig := tls.Config{
+		RootCAs:      caCertPool,
+		Certificates: []tls.Certificate{cert},
+		MinVersion:   tls.VersionTLS13,
+	}
+	return credentials.NewTLS(&tlsConfig), nil
+}
+
+func getTransportConfig(opts *Options) (*transportConfig, error) {
+	clientCertSource, err := GetClientCertificateProvider(opts)
+	if err != nil {
+		return nil, err
+	}
+	endpoint, err := getEndpoint(opts, clientCertSource)
+	if err != nil {
+		return nil, err
+	}
+	defaultTransportConfig := transportConfig{
+		clientCertSource: clientCertSource,
+		endpoint:         endpoint,
+	}
+
+	if !shouldUseS2A(clientCertSource, opts) {
+		return &defaultTransportConfig, nil
+	}
+
+	s2aAddress := GetS2AAddress(opts.Logger)
+	mtlsS2AAddress := GetMTLSS2AAddress(opts.Logger)
+	if s2aAddress == "" && mtlsS2AAddress == "" {
+		return &defaultTransportConfig, nil
+	}
+	return &transportConfig{
+		clientCertSource: clientCertSource,
+		endpoint:         endpoint,
+		s2aAddress:       s2aAddress,
+		mtlsS2AAddress:   mtlsS2AAddress,
+		s2aMTLSEndpoint:  opts.defaultMTLSEndpoint(),
+	}, nil
+}
+
+// GetClientCertificateProvider returns a default client certificate source, if
+// not provided by the user.
+//
+// A nil default source can be returned if the source does not exist. Any errors
+// encountered while initializing the default source will be reported as a client
+// error (e.g. a corrupt metadata file).
+func GetClientCertificateProvider(opts *Options) (cert.Provider, error) {
+	if !isClientCertificateEnabled(opts) {
+		return nil, nil
+	} else if opts.ClientCertProvider != nil {
+		return opts.ClientCertProvider, nil
+	}
+	return cert.DefaultProvider()
+}
+
+// isClientCertificateEnabled returns true by default for the GDU universe domain, unless explicitly overridden by env var.
+func isClientCertificateEnabled(opts *Options) bool {
+	if value, ok := os.LookupEnv(googleAPIUseCertSource); ok {
+		// error as false is OK
+		b, _ := strconv.ParseBool(value)
+		return b
+	}
+	return opts.isUniverseDomainGDU()
+}
+
+type transportConfig struct {
+	// The client certificate source.
+	clientCertSource cert.Provider
+	// The corresponding endpoint to use based on client certificate source.
+	endpoint string
+	// The plaintext S2A address if it can be used, otherwise an empty string.
+	s2aAddress string
+	// The MTLS S2A address if it can be used, otherwise an empty string.
+	mtlsS2AAddress string
+	// The MTLS endpoint to use with S2A.
+	s2aMTLSEndpoint string
+}
+
+// getEndpoint returns the endpoint for the service, taking into account the
+// user-provided endpoint override "settings.Endpoint".
+//
+// If no endpoint override is specified, we will either return the default
+// endpoint or the default mTLS endpoint if a client certificate is available.
+//
+// You can override the default endpoint choice (mTLS vs. regular) by setting
+// the GOOGLE_API_USE_MTLS_ENDPOINT environment variable.
+//
+// If the endpoint override is an address (host:port) rather than full base
+// URL (ex. https://...), then the user-provided address will be merged into
+// the default endpoint. For example, WithEndpoint("myhost:8000") and
+// DefaultEndpointTemplate("https://UNIVERSE_DOMAIN/bar/baz") will return
+// "https://myhost:8080/bar/baz". Note that this does not apply to the mTLS
+// endpoint.
+func getEndpoint(opts *Options, clientCertSource cert.Provider) (string, error) {
+	if opts.Endpoint == "" {
+		mtlsMode := getMTLSMode()
+		if mtlsMode == mTLSModeAlways || (clientCertSource != nil && mtlsMode == mTLSModeAuto) {
+			return opts.defaultMTLSEndpoint(), nil
+		}
+		return opts.defaultEndpoint(), nil
+	}
+	if strings.Contains(opts.Endpoint, "://") {
+		// User passed in a full URL path, use it verbatim.
+		return opts.Endpoint, nil
+	}
+	if opts.defaultEndpoint() == "" {
+		// If DefaultEndpointTemplate is not configured,
+		// use the user provided endpoint verbatim. This allows a naked
+		// "host[:port]" URL to be used with GRPC Direct Path.
+		return opts.Endpoint, nil
+	}
+
+	// Assume user-provided endpoint is host[:port], merge it with the default endpoint.
+	return opts.mergedEndpoint()
+}
+
+func getMTLSMode() string {
+	mode := os.Getenv(googleAPIUseMTLS)
+	if mode == "" {
+		mode = os.Getenv(googleAPIUseMTLSOld) // Deprecated.
+	}
+	if mode == "" {
+		return mTLSModeAuto
+	}
+	return strings.ToLower(mode)
+}
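
The endpoint-merging behavior described in the `getEndpoint` doc comment can be sketched in isolation. `mergedEndpoint` below is a hypothetical free-function version of `Options.mergedEndpoint` (the vendored code is a method on `Options`); it swaps the default endpoint's host for a user-supplied `host[:port]`.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// mergedEndpoint mirrors Options.mergedEndpoint: parse the default
// endpoint (prefixing "https://" if no scheme is present) and replace
// its host with the user-provided host[:port].
func mergedEndpoint(defaultEndpoint, userEndpoint string) (string, error) {
	base := defaultEndpoint
	if !strings.Contains(base, "://") {
		base = "https://" + base
	}
	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	return strings.Replace(defaultEndpoint, u.Host, userEndpoint, 1), nil
}

func main() {
	got, err := mergedEndpoint("https://foo.googleapis.com/bar/baz", "myhost:8000")
	if err != nil {
		panic(err)
	}
	fmt.Println(got) // prints "https://myhost:8000/bar/baz"
}
```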

vendor/cloud.google.com/go/auth/internal/transport/cert/default_cert.go 🔗

@@ -0,0 +1,65 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cert
+
+import (
+	"crypto/tls"
+	"errors"
+	"sync"
+)
+
+// defaultCertData holds all the variables pertaining to
+// the default certificate provider created by [DefaultProvider].
+//
+// A singleton model is used to allow the provider to be reused
+// by the transport layer. As mentioned in [DefaultProvider], (nil, nil) may
+// be returned to indicate a default provider could not be found, which will
+// skip extra TLS config in the transport layer.
+type defaultCertData struct {
+	once     sync.Once
+	provider Provider
+	err      error
+}
+
+var (
+	defaultCert defaultCertData
+)
+
+// Provider is a function that can be passed into crypto/tls.Config.GetClientCertificate.
+type Provider func(*tls.CertificateRequestInfo) (*tls.Certificate, error)
+
+// errSourceUnavailable is a sentinel error to indicate certificate source is unavailable.
+var errSourceUnavailable = errors.New("certificate source is unavailable")
+
+// DefaultProvider returns a certificate source, trying the workload X509
+// source first, then the EnterpriseCertificateProxySource, and finally
+// falling back to the legacy SecureConnectSource.
+//
+// If neither source is available (due to missing configurations), a nil Source and a nil Error are
+// returned to indicate that a default certificate source is unavailable.
+func DefaultProvider() (Provider, error) {
+	defaultCert.once.Do(func() {
+		defaultCert.provider, defaultCert.err = NewWorkloadX509CertProvider("")
+		if errors.Is(defaultCert.err, errSourceUnavailable) {
+			defaultCert.provider, defaultCert.err = NewEnterpriseCertificateProxyProvider("")
+			if errors.Is(defaultCert.err, errSourceUnavailable) {
+				defaultCert.provider, defaultCert.err = NewSecureConnectProvider("")
+				if errors.Is(defaultCert.err, errSourceUnavailable) {
+					defaultCert.provider, defaultCert.err = nil, nil
+				}
+			}
+		}
+	})
+	return defaultCert.provider, defaultCert.err
+}
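
The `sync.Once`-guarded fallback chain in `DefaultProvider` can be sketched generically. `certCache` below is a hypothetical simplification of `defaultCertData` (a string-returning `provider` stands in for the real TLS callback): sources are tried in order, `errSourceUnavailable` means "keep going", and the final outcome, including the `(nil, nil)` "no provider found" case, is computed exactly once and cached.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// errSourceUnavailable plays the role of the vendored sentinel error.
var errSourceUnavailable = errors.New("certificate source is unavailable")

type provider func() string

// certCache mirrors the defaultCertData pattern: a once-guarded
// provider plus the error from the one-time initialization.
type certCache struct {
	once sync.Once
	p    provider
	err  error
}

func (c *certCache) get(sources ...func() (provider, error)) (provider, error) {
	c.once.Do(func() {
		for _, source := range sources {
			c.p, c.err = source()
			if !errors.Is(c.err, errSourceUnavailable) {
				return // found a provider, or hit a real error
			}
		}
		c.p, c.err = nil, nil // every source unavailable: report (nil, nil)
	})
	return c.p, c.err
}

func main() {
	unavailable := func() (provider, error) { return nil, errSourceUnavailable }
	fallbackSource := func() (provider, error) { return func() string { return "cert" }, nil }

	var cache certCache
	p, err := cache.get(unavailable, fallbackSource)
	fmt.Println(err == nil, p()) // prints "true cert"
}
```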

vendor/cloud.google.com/go/auth/internal/transport/cert/enterprise_cert.go 🔗

@@ -0,0 +1,54 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cert
+
+import (
+	"crypto/tls"
+
+	"github.com/googleapis/enterprise-certificate-proxy/client"
+)
+
+type ecpSource struct {
+	key *client.Key
+}
+
+// NewEnterpriseCertificateProxyProvider creates a certificate source
+// using the Enterprise Certificate Proxy client, which delegates
+// certificate related operations to an OS-specific "signer binary"
+// that communicates with the native keystore (ex. keychain on MacOS).
+//
+// The configFilePath points to a config file containing relevant parameters
+// such as the certificate issuer and the location of the signer binary.
+// If configFilePath is empty, the client will attempt to load the config from
+// a well-known gcloud location.
+func NewEnterpriseCertificateProxyProvider(configFilePath string) (Provider, error) {
+	key, err := client.Cred(configFilePath)
+	if err != nil {
+		// TODO(codyoss): once this is fixed upstream we can handle this error
+		// a little better here. But be safe for now and assume unavailable.
+		return nil, errSourceUnavailable
+	}
+
+	return (&ecpSource{
+		key: key,
+	}).getClientCertificate, nil
+}
+
+func (s *ecpSource) getClientCertificate(info *tls.CertificateRequestInfo) (*tls.Certificate, error) {
+	var cert tls.Certificate
+	cert.PrivateKey = s.key
+	cert.Certificate = s.key.CertificateChain()
+	return &cert, nil
+}

vendor/cloud.google.com/go/auth/internal/transport/cert/secureconnect_cert.go 🔗

@@ -0,0 +1,124 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cert
+
+import (
+	"crypto/tls"
+	"crypto/x509"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"os"
+	"os/exec"
+	"os/user"
+	"path/filepath"
+	"sync"
+	"time"
+)
+
+const (
+	metadataPath = ".secureConnect"
+	metadataFile = "context_aware_metadata.json"
+)
+
+type secureConnectSource struct {
+	metadata secureConnectMetadata
+
+	// Cache the cert to avoid executing helper command repeatedly.
+	cachedCertMutex sync.Mutex
+	cachedCert      *tls.Certificate
+}
+
+type secureConnectMetadata struct {
+	Cmd []string `json:"cert_provider_command"`
+}
+
+// NewSecureConnectProvider creates a certificate source using
+// the Secure Connect Helper and its associated metadata file.
+//
+// The configFilePath points to the location of the context aware metadata file.
+// If configFilePath is empty, use the default context aware metadata location.
+func NewSecureConnectProvider(configFilePath string) (Provider, error) {
+	if configFilePath == "" {
+		user, err := user.Current()
+		if err != nil {
+			// Error locating the default config means Secure Connect is not supported.
+			return nil, errSourceUnavailable
+		}
+		configFilePath = filepath.Join(user.HomeDir, metadataPath, metadataFile)
+	}
+
+	file, err := os.ReadFile(configFilePath)
+	if err != nil {
+		// Config file missing means Secure Connect is not supported.
+		// There are non-os.ErrNotExist errors that may be returned.
+		// (e.g. if the home directory is /dev/null, *nix systems will
+		// return ENOTDIR instead of ENOENT)
+		return nil, errSourceUnavailable
+	}
+
+	var metadata secureConnectMetadata
+	if err := json.Unmarshal(file, &metadata); err != nil {
+		return nil, fmt.Errorf("cert: could not parse JSON in %q: %w", configFilePath, err)
+	}
+	if err := validateMetadata(metadata); err != nil {
+		return nil, fmt.Errorf("cert: invalid config in %q: %w", configFilePath, err)
+	}
+	return (&secureConnectSource{
+		metadata: metadata,
+	}).getClientCertificate, nil
+}
+
+func validateMetadata(metadata secureConnectMetadata) error {
+	if len(metadata.Cmd) == 0 {
+		return errors.New("empty cert_provider_command")
+	}
+	return nil
+}
+
+func (s *secureConnectSource) getClientCertificate(info *tls.CertificateRequestInfo) (*tls.Certificate, error) {
+	s.cachedCertMutex.Lock()
+	defer s.cachedCertMutex.Unlock()
+	if s.cachedCert != nil && !isCertificateExpired(s.cachedCert) {
+		return s.cachedCert, nil
+	}
+	// Expand OS environment variables in the cert provider command such as "$HOME".
+	for i := 0; i < len(s.metadata.Cmd); i++ {
+		s.metadata.Cmd[i] = os.ExpandEnv(s.metadata.Cmd[i])
+	}
+	command := s.metadata.Cmd
+	data, err := exec.Command(command[0], command[1:]...).Output()
+	if err != nil {
+		return nil, err
+	}
+	cert, err := tls.X509KeyPair(data, data)
+	if err != nil {
+		return nil, err
+	}
+	s.cachedCert = &cert
+	return &cert, nil
+}
+
+// isCertificateExpired returns true if the given cert is expired or invalid.
+func isCertificateExpired(cert *tls.Certificate) bool {
+	if len(cert.Certificate) == 0 {
+		return true
+	}
+	parsed, err := x509.ParseCertificate(cert.Certificate[0])
+	if err != nil {
+		return true
+	}
+	return time.Now().After(parsed.NotAfter)
+}

vendor/cloud.google.com/go/auth/internal/transport/cert/workload_cert.go 🔗

@@ -0,0 +1,114 @@
+// Copyright 2024 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package cert
+
+import (
+	"crypto/tls"
+	"encoding/json"
+	"errors"
+	"io"
+	"os"
+
+	"github.com/googleapis/enterprise-certificate-proxy/client/util"
+)
+
+type certConfigs struct {
+	Workload *workloadSource `json:"workload"`
+}
+
+type workloadSource struct {
+	CertPath string `json:"cert_path"`
+	KeyPath  string `json:"key_path"`
+}
+
+type certificateConfig struct {
+	CertConfigs certConfigs `json:"cert_configs"`
+}
+
+// NewWorkloadX509CertProvider creates a certificate source
+// that reads a certificate and private key file from the local file system.
+// This is intended to be used for workload identity federation.
+//
+// The configFilePath points to a config file containing relevant parameters
+// such as the certificate and key file paths.
+// If configFilePath is empty, the client will attempt to load the config from
+// a well-known gcloud location.
+func NewWorkloadX509CertProvider(configFilePath string) (Provider, error) {
+	if configFilePath == "" {
+		envFilePath := util.GetConfigFilePathFromEnv()
+		if envFilePath != "" {
+			configFilePath = envFilePath
+		} else {
+			configFilePath = util.GetDefaultConfigFilePath()
+		}
+	}
+
+	certFile, keyFile, err := getCertAndKeyFiles(configFilePath)
+	if err != nil {
+		return nil, err
+	}
+
+	source := &workloadSource{
+		CertPath: certFile,
+		KeyPath:  keyFile,
+	}
+	return source.getClientCertificate, nil
+}
+
+// getClientCertificate attempts to load the certificate and key from the files specified in the
+// certificate config.
+func (s *workloadSource) getClientCertificate(info *tls.CertificateRequestInfo) (*tls.Certificate, error) {
+	cert, err := tls.LoadX509KeyPair(s.CertPath, s.KeyPath)
+	if err != nil {
+		return nil, err
+	}
+	return &cert, nil
+}
+
+// getCertAndKeyFiles attempts to read the provided config file and return the certificate and private
+// key file paths.
+func getCertAndKeyFiles(configFilePath string) (string, string, error) {
+	jsonFile, err := os.Open(configFilePath)
+	if err != nil {
+		return "", "", errSourceUnavailable
+	}
+
+	byteValue, err := io.ReadAll(jsonFile)
+	if err != nil {
+		return "", "", err
+	}
+
+	var config certificateConfig
+	if err := json.Unmarshal(byteValue, &config); err != nil {
+		return "", "", err
+	}
+
+	if config.CertConfigs.Workload == nil {
+		return "", "", errSourceUnavailable
+	}
+
+	certFile := config.CertConfigs.Workload.CertPath
+	keyFile := config.CertConfigs.Workload.KeyPath
+
+	if certFile == "" {
+		return "", "", errors.New("certificate configuration is missing the certificate file location")
+	}
+
+	if keyFile == "" {
+		return "", "", errors.New("certificate configuration is missing the key file location")
+	}
+
+	return certFile, keyFile, nil
+}

vendor/cloud.google.com/go/auth/internal/transport/s2a.go 🔗

@@ -0,0 +1,138 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package transport
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"log"
+	"log/slog"
+	"os"
+	"strconv"
+	"sync"
+
+	"cloud.google.com/go/auth/internal/transport/cert"
+	"cloud.google.com/go/compute/metadata"
+)
+
+const (
+	configEndpointSuffix = "instance/platform-security/auto-mtls-configuration"
+)
+
+var (
+	mtlsConfiguration *mtlsConfig
+
+	mtlsOnce sync.Once
+)
+
+// GetS2AAddress returns the S2A address to be reached via plaintext connection.
+// Returns empty string if not set or invalid.
+func GetS2AAddress(logger *slog.Logger) string {
+	getMetadataMTLSAutoConfig(logger)
+	if !mtlsConfiguration.valid() {
+		return ""
+	}
+	return mtlsConfiguration.S2A.PlaintextAddress
+}
+
+// GetMTLSS2AAddress returns the S2A address to be reached via MTLS connection.
+// Returns empty string if not set or invalid.
+func GetMTLSS2AAddress(logger *slog.Logger) string {
+	getMetadataMTLSAutoConfig(logger)
+	if !mtlsConfiguration.valid() {
+		return ""
+	}
+	return mtlsConfiguration.S2A.MTLSAddress
+}
+
+// mtlsConfig contains the configuration for establishing MTLS connections with Google APIs.
+type mtlsConfig struct {
+	S2A *s2aAddresses `json:"s2a"`
+}
+
+func (c *mtlsConfig) valid() bool {
+	return c != nil && c.S2A != nil
+}
+
+// s2aAddresses contains the plaintext and/or MTLS S2A addresses.
+type s2aAddresses struct {
+	// PlaintextAddress is the plaintext address to reach S2A
+	PlaintextAddress string `json:"plaintext_address"`
+	// MTLSAddress is the MTLS address to reach S2A
+	MTLSAddress string `json:"mtls_address"`
+}
+
+func getMetadataMTLSAutoConfig(logger *slog.Logger) {
+	var err error
+	mtlsOnce.Do(func() {
+		mtlsConfiguration, err = queryConfig(logger)
+		if err != nil {
+			log.Printf("Getting MTLS config failed: %v", err)
+		}
+	})
+}
+
+var httpGetMetadataMTLSConfig = func(logger *slog.Logger) (string, error) {
+	metadataClient := metadata.NewWithOptions(&metadata.Options{
+		Logger: logger,
+	})
+	return metadataClient.GetWithContext(context.Background(), configEndpointSuffix)
+}
+
+func queryConfig(logger *slog.Logger) (*mtlsConfig, error) {
+	resp, err := httpGetMetadataMTLSConfig(logger)
+	if err != nil {
+		return nil, fmt.Errorf("querying MTLS config from MDS endpoint failed: %w", err)
+	}
+	var config mtlsConfig
+	err = json.Unmarshal([]byte(resp), &config)
+	if err != nil {
+		return nil, fmt.Errorf("unmarshalling MTLS config from MDS endpoint failed: %w", err)
+	}
+	if config.S2A == nil {
+		return nil, fmt.Errorf("returned MTLS config from MDS endpoint is invalid: %v", config)
+	}
+	return &config, nil
+}
+
+func shouldUseS2A(clientCertSource cert.Provider, opts *Options) bool {
+	// If client cert is found, use that over S2A.
+	if clientCertSource != nil {
+		return false
+	}
+	// If EXPERIMENTAL_GOOGLE_API_USE_S2A is not set to true, skip S2A.
+	if !isGoogleS2AEnabled() {
+		return false
+	}
+	// If DefaultMTLSEndpoint is not set or has endpoint override, skip S2A.
+	if opts.DefaultMTLSEndpoint == "" || opts.Endpoint != "" {
+		return false
+	}
+	// If custom HTTP client is provided, skip S2A.
+	if opts.Client != nil {
+		return false
+	}
+	// If directPath is enabled, skip S2A.
+	return !opts.EnableDirectPath && !opts.EnableDirectPathXds
+}
+
+func isGoogleS2AEnabled() bool {
+	b, err := strconv.ParseBool(os.Getenv(googleAPIUseS2AEnv))
+	if err != nil {
+		return false
+	}
+	return b
+}
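
The decision sequence in `shouldUseS2A` reduces to a small predicate over a handful of flags. The sketch below restates it with a hypothetical flattened `opts` struct (field names are illustrative, not the vendored types): a client certificate wins over S2A, the experimental env var must be on, and endpoint overrides, custom HTTP clients, and DirectPath all disable S2A.

```go
package main

import "fmt"

// opts flattens the handful of inputs the vendored shouldUseS2A consults.
type opts struct {
	hasClientCert       bool   // a client cert source was found
	s2aEnvEnabled       bool   // EXPERIMENTAL_GOOGLE_API_USE_S2A parsed as true
	defaultMTLSEndpoint string // the service's default mTLS endpoint, if any
	endpoint            string // user endpoint override, if any
	hasCustomClient     bool   // user supplied their own *http.Client
	directPath          bool   // DirectPath or DirectPath-xDS is enabled
}

// shouldUseS2A mirrors the vendored decision order.
func shouldUseS2A(o opts) bool {
	switch {
	case o.hasClientCert: // client cert is preferred over S2A
		return false
	case !o.s2aEnvEnabled: // experiment not opted into
		return false
	case o.defaultMTLSEndpoint == "" || o.endpoint != "": // no mTLS target, or override
		return false
	case o.hasCustomClient: // custom client: skip S2A
		return false
	}
	return !o.directPath // DirectPath also skips S2A
}

func main() {
	eligible := opts{s2aEnvEnabled: true, defaultMTLSEndpoint: "example.mtls.googleapis.com"}
	fmt.Println(shouldUseS2A(eligible)) // prints "true"

	eligible.hasClientCert = true
	fmt.Println(shouldUseS2A(eligible)) // prints "false"
}
```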

vendor/cloud.google.com/go/auth/internal/transport/transport.go 🔗

@@ -0,0 +1,106 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Package transport provides internal helpers for the two transport packages
+// (grpctransport and httptransport).
+package transport
+
+import (
+	"crypto/tls"
+	"fmt"
+	"net"
+	"net/http"
+	"time"
+
+	"cloud.google.com/go/auth/credentials"
+)
+
+// CloneDetectOptions clones a user set detect option into some new memory that
+// we can internally manipulate before sending onto the detect package.
+func CloneDetectOptions(oldDo *credentials.DetectOptions) *credentials.DetectOptions {
+	if oldDo == nil {
+	// it is valid for users not to set this, but we will need to default
+		// some options for them in this case so return some initialized memory
+		// to work with.
+		return &credentials.DetectOptions{}
+	}
+	newDo := &credentials.DetectOptions{
+		// Simple types
+		Audience:          oldDo.Audience,
+		Subject:           oldDo.Subject,
+		EarlyTokenRefresh: oldDo.EarlyTokenRefresh,
+		TokenURL:          oldDo.TokenURL,
+		STSAudience:       oldDo.STSAudience,
+		CredentialsFile:   oldDo.CredentialsFile,
+		UseSelfSignedJWT:  oldDo.UseSelfSignedJWT,
+		UniverseDomain:    oldDo.UniverseDomain,
+
+	// These fields are pointer types that we just want to use exactly
+	// as the user set them, so copy the reference.
+		Client:             oldDo.Client,
+		Logger:             oldDo.Logger,
+		AuthHandlerOptions: oldDo.AuthHandlerOptions,
+	}
+
+	// Smartly size this memory and copy below.
+	if len(oldDo.CredentialsJSON) > 0 {
+		newDo.CredentialsJSON = make([]byte, len(oldDo.CredentialsJSON))
+		copy(newDo.CredentialsJSON, oldDo.CredentialsJSON)
+	}
+	if len(oldDo.Scopes) > 0 {
+		newDo.Scopes = make([]string, len(oldDo.Scopes))
+		copy(newDo.Scopes, oldDo.Scopes)
+	}
+
+	return newDo
+}
+
+// ValidateUniverseDomain verifies that the universe domain configured for the
+// client matches the universe domain configured for the credentials.
+func ValidateUniverseDomain(clientUniverseDomain, credentialsUniverseDomain string) error {
+	if clientUniverseDomain != credentialsUniverseDomain {
+		return fmt.Errorf(
+			"the configured universe domain (%q) does not match the universe "+
+				"domain found in the credentials (%q). If you haven't configured "+
+				"the universe domain explicitly, \"googleapis.com\" is the default",
+			clientUniverseDomain,
+			credentialsUniverseDomain)
+	}
+	return nil
+}
+
+// DefaultHTTPClientWithTLS constructs an HTTPClient using the provided tlsConfig, to support mTLS.
+func DefaultHTTPClientWithTLS(tlsConfig *tls.Config) *http.Client {
+	trans := BaseTransport()
+	trans.TLSClientConfig = tlsConfig
+	return &http.Client{Transport: trans}
+}
+
+// BaseTransport returns a default [http.Transport] which can be used if
+// [http.DefaultTransport] has been overwritten.
+func BaseTransport() *http.Transport {
+	return &http.Transport{
+		Proxy: http.ProxyFromEnvironment,
+		DialContext: (&net.Dialer{
+			Timeout:   30 * time.Second,
+			KeepAlive: 30 * time.Second,
+			DualStack: true,
+		}).DialContext,
+		MaxIdleConns:          100,
+		MaxIdleConnsPerHost:   100,
+		IdleConnTimeout:       90 * time.Second,
+		TLSHandshakeTimeout:   10 * time.Second,
+		ExpectContinueTimeout: 1 * time.Second,
+	}
+}

vendor/cloud.google.com/go/auth/threelegged.go 🔗

@@ -0,0 +1,382 @@
+// Copyright 2023 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package auth
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"log/slog"
+	"mime"
+	"net/http"
+	"net/url"
+	"strconv"
+	"strings"
+	"time"
+
+	"cloud.google.com/go/auth/internal"
+	"github.com/googleapis/gax-go/v2/internallog"
+)
+
+// AuthorizationHandler is a 3-legged-OAuth helper that prompts the user for
+// OAuth consent at the specified auth code URL and returns an auth code and
+// state upon approval.
+type AuthorizationHandler func(authCodeURL string) (code string, state string, err error)
+
+// Options3LO are the options for doing a 3-legged OAuth2 flow.
+type Options3LO struct {
+	// ClientID is the application's ID.
+	ClientID string
+	// ClientSecret is the application's secret. Not required if AuthHandlerOpts
+	// is set.
+	ClientSecret string
+	// AuthURL is the URL for authenticating.
+	AuthURL string
+	// TokenURL is the URL for retrieving a token.
+	TokenURL string
+	// AuthStyle is used to describe how to send client info in the token request.
+	AuthStyle Style
+	// RefreshToken is the token used to refresh the credential. Not required
+	// if AuthHandlerOpts is set.
+	RefreshToken string
+	// RedirectURL is the URL to redirect users to. Optional.
+	RedirectURL string
+	// Scopes specifies requested permissions for the Token. Optional.
+	Scopes []string
+
+	// URLParams are the set of values to apply to the token exchange. Optional.
+	URLParams url.Values
+	// Client is the client to be used to make the underlying token requests.
+	// Optional.
+	Client *http.Client
+	// EarlyTokenExpiry is the time before the token expires that it should be
+	// refreshed. If not set, the default value is 3 minutes and 45 seconds.
+	// Optional.
+	EarlyTokenExpiry time.Duration
+
+	// AuthHandlerOpts provides a set of options for doing a
+	// 3-legged OAuth2 flow with a custom [AuthorizationHandler]. Optional.
+	AuthHandlerOpts *AuthorizationHandlerOptions
+	// Logger is used for debug logging. If provided, logging will be enabled
+	// at the loggers configured level. By default logging is disabled unless
+	// enabled by setting GOOGLE_SDK_GO_LOGGING_LEVEL in which case a default
+	// logger will be used. Optional.
+	Logger *slog.Logger
+}
+
+func (o *Options3LO) validate() error {
+	if o == nil {
+		return errors.New("auth: options must be provided")
+	}
+	if o.ClientID == "" {
+		return errors.New("auth: client ID must be provided")
+	}
+	if o.AuthHandlerOpts == nil && o.ClientSecret == "" {
+		return errors.New("auth: client secret must be provided")
+	}
+	if o.AuthURL == "" {
+		return errors.New("auth: auth URL must be provided")
+	}
+	if o.TokenURL == "" {
+		return errors.New("auth: token URL must be provided")
+	}
+	if o.AuthStyle == StyleUnknown {
+		return errors.New("auth: auth style must be provided")
+	}
+	if o.AuthHandlerOpts == nil && o.RefreshToken == "" {
+		return errors.New("auth: refresh token must be provided")
+	}
+	return nil
+}
+
+func (o *Options3LO) logger() *slog.Logger {
+	return internallog.New(o.Logger)
+}
+
+// PKCEOptions holds parameters to support PKCE.
+type PKCEOptions struct {
+	// Challenge is the un-padded, base64-url-encoded string of the encrypted code verifier.
+	Challenge string
+	// ChallengeMethod is the encryption method (ex. S256).
+	ChallengeMethod string
+	// Verifier is the original, non-encrypted secret.
+	Verifier string
+}
+
+type tokenJSON struct {
+	AccessToken  string `json:"access_token"`
+	TokenType    string `json:"token_type"`
+	RefreshToken string `json:"refresh_token"`
+	ExpiresIn    int    `json:"expires_in"`
+	// error fields
+	ErrorCode        string `json:"error"`
+	ErrorDescription string `json:"error_description"`
+	ErrorURI         string `json:"error_uri"`
+}
+
+func (e *tokenJSON) expiry() (t time.Time) {
+	if v := e.ExpiresIn; v != 0 {
+		return time.Now().Add(time.Duration(v) * time.Second)
+	}
+	return
+}
+
+func (o *Options3LO) client() *http.Client {
+	if o.Client != nil {
+		return o.Client
+	}
+	return internal.DefaultClient()
+}
+
+// authCodeURL returns a URL that points to an OAuth2 consent page.
+func (o *Options3LO) authCodeURL(state string, values url.Values) string {
+	var buf bytes.Buffer
+	buf.WriteString(o.AuthURL)
+	v := url.Values{
+		"response_type": {"code"},
+		"client_id":     {o.ClientID},
+	}
+	if o.RedirectURL != "" {
+		v.Set("redirect_uri", o.RedirectURL)
+	}
+	if len(o.Scopes) > 0 {
+		v.Set("scope", strings.Join(o.Scopes, " "))
+	}
+	if state != "" {
+		v.Set("state", state)
+	}
+	if o.AuthHandlerOpts != nil {
+		if o.AuthHandlerOpts.PKCEOpts != nil &&
+			o.AuthHandlerOpts.PKCEOpts.Challenge != "" {
+			v.Set(codeChallengeKey, o.AuthHandlerOpts.PKCEOpts.Challenge)
+		}
+		if o.AuthHandlerOpts.PKCEOpts != nil &&
+			o.AuthHandlerOpts.PKCEOpts.ChallengeMethod != "" {
+			v.Set(codeChallengeMethodKey, o.AuthHandlerOpts.PKCEOpts.ChallengeMethod)
+		}
+	}
+	for k := range values {
+		v.Set(k, values.Get(k))
+	}
+	if strings.Contains(o.AuthURL, "?") {
+		buf.WriteByte('&')
+	} else {
+		buf.WriteByte('?')
+	}
+	buf.WriteString(v.Encode())
+	return buf.String()
+}
+
+// New3LOTokenProvider returns a [TokenProvider] based on the 3-legged OAuth2
+// configuration. The TokenProvider caches and auto-refreshes tokens by
+// default.
+func New3LOTokenProvider(opts *Options3LO) (TokenProvider, error) {
+	if err := opts.validate(); err != nil {
+		return nil, err
+	}
+	if opts.AuthHandlerOpts != nil {
+		return new3LOTokenProviderWithAuthHandler(opts), nil
+	}
+	return NewCachedTokenProvider(&tokenProvider3LO{opts: opts, refreshToken: opts.RefreshToken, client: opts.client()}, &CachedTokenProviderOptions{
+		ExpireEarly: opts.EarlyTokenExpiry,
+	}), nil
+}
+
+// AuthorizationHandlerOptions provides a set of options to specify for doing a
+// 3-legged OAuth2 flow with a custom [AuthorizationHandler].
+type AuthorizationHandlerOptions struct {
+	// Handler specifies the handler used for the authorization
+	// part of the flow.
+	Handler AuthorizationHandler
+	// State is used to verify that the "state" is identical in the request and
+	// response before exchanging the auth code for OAuth2 token.
+	State string
+	// PKCEOpts allows setting configurations for PKCE. Optional.
+	PKCEOpts *PKCEOptions
+}
+
+func new3LOTokenProviderWithAuthHandler(opts *Options3LO) TokenProvider {
+	return NewCachedTokenProvider(&tokenProviderWithHandler{opts: opts, state: opts.AuthHandlerOpts.State}, &CachedTokenProviderOptions{
+		ExpireEarly: opts.EarlyTokenExpiry,
+	})
+}
+
+// exchange handles the final exchange portion of the 3lo flow. Returns a Token,
+// refreshToken, and error.
+func (o *Options3LO) exchange(ctx context.Context, code string) (*Token, string, error) {
+	// Build request
+	v := url.Values{
+		"grant_type": {"authorization_code"},
+		"code":       {code},
+	}
+	if o.RedirectURL != "" {
+		v.Set("redirect_uri", o.RedirectURL)
+	}
+	if o.AuthHandlerOpts != nil &&
+		o.AuthHandlerOpts.PKCEOpts != nil &&
+		o.AuthHandlerOpts.PKCEOpts.Verifier != "" {
+		v.Set(codeVerifierKey, o.AuthHandlerOpts.PKCEOpts.Verifier)
+	}
+	for k := range o.URLParams {
+		v.Set(k, o.URLParams.Get(k))
+	}
+	return fetchToken(ctx, o, v)
+}
+
+// This struct is not safe for concurrent access alone, but the way it is used
+// in this package by wrapping it with a cachedTokenProvider makes it so.
+type tokenProvider3LO struct {
+	opts         *Options3LO
+	client       *http.Client
+	refreshToken string
+}
+
+func (tp *tokenProvider3LO) Token(ctx context.Context) (*Token, error) {
+	if tp.refreshToken == "" {
+		return nil, errors.New("auth: token expired and refresh token is not set")
+	}
+	v := url.Values{
+		"grant_type":    {"refresh_token"},
+		"refresh_token": {tp.refreshToken},
+	}
+	for k := range tp.opts.URLParams {
+		v.Set(k, tp.opts.URLParams.Get(k))
+	}
+
+	tk, rt, err := fetchToken(ctx, tp.opts, v)
+	if err != nil {
+		return nil, err
+	}
+	if tp.refreshToken != rt && rt != "" {
+		tp.refreshToken = rt
+	}
+	return tk, err
+}
+
+type tokenProviderWithHandler struct {
+	opts  *Options3LO
+	state string
+}
+
+func (tp tokenProviderWithHandler) Token(ctx context.Context) (*Token, error) {
+	url := tp.opts.authCodeURL(tp.state, nil)
+	code, state, err := tp.opts.AuthHandlerOpts.Handler(url)
+	if err != nil {
+		return nil, err
+	}
+	if state != tp.state {
+		return nil, errors.New("auth: state mismatch in 3-legged-OAuth flow")
+	}
+	tok, _, err := tp.opts.exchange(ctx, code)
+	return tok, err
+}
+
+// fetchToken returns a Token, refresh token, and/or an error.
+func fetchToken(ctx context.Context, o *Options3LO, v url.Values) (*Token, string, error) {
+	var refreshToken string
+	if o.AuthStyle == StyleInParams {
+		if o.ClientID != "" {
+			v.Set("client_id", o.ClientID)
+		}
+		if o.ClientSecret != "" {
+			v.Set("client_secret", o.ClientSecret)
+		}
+	}
+	req, err := http.NewRequestWithContext(ctx, "POST", o.TokenURL, strings.NewReader(v.Encode()))
+	if err != nil {
+		return nil, refreshToken, err
+	}
+	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
+	if o.AuthStyle == StyleInHeader {
+		req.SetBasicAuth(url.QueryEscape(o.ClientID), url.QueryEscape(o.ClientSecret))
+	}
+	logger := o.logger()
+
+	logger.DebugContext(ctx, "3LO token request", "request", internallog.HTTPRequest(req, []byte(v.Encode())))
+	// Make request
+	resp, body, err := internal.DoRequest(o.client(), req)
+	if err != nil {
+		return nil, refreshToken, err
+	}
+	logger.DebugContext(ctx, "3LO token response", "response", internallog.HTTPResponse(resp, body))
+	failureStatus := resp.StatusCode < 200 || resp.StatusCode > 299
+	tokError := &Error{
+		Response: resp,
+		Body:     body,
+	}
+
+	var token *Token
+	// errors ignored because of default switch on content
+	content, _, _ := mime.ParseMediaType(resp.Header.Get("Content-Type"))
+	switch content {
+	case "application/x-www-form-urlencoded", "text/plain":
+		// some endpoints return a query string
+		vals, err := url.ParseQuery(string(body))
+		if err != nil {
+			if failureStatus {
+				return nil, refreshToken, tokError
+			}
+			return nil, refreshToken, fmt.Errorf("auth: cannot parse response: %w", err)
+		}
+		tokError.code = vals.Get("error")
+		tokError.description = vals.Get("error_description")
+		tokError.uri = vals.Get("error_uri")
+		token = &Token{
+			Value:    vals.Get("access_token"),
+			Type:     vals.Get("token_type"),
+			Metadata: make(map[string]interface{}, len(vals)),
+		}
+		for k, v := range vals {
+			token.Metadata[k] = v
+		}
+		refreshToken = vals.Get("refresh_token")
+		e := vals.Get("expires_in")
+		expires, _ := strconv.Atoi(e)
+		if expires != 0 {
+			token.Expiry = time.Now().Add(time.Duration(expires) * time.Second)
+		}
+	default:
+		var tj tokenJSON
+		if err = json.Unmarshal(body, &tj); err != nil {
+			if failureStatus {
+				return nil, refreshToken, tokError
+			}
+			return nil, refreshToken, fmt.Errorf("auth: cannot parse json: %w", err)
+		}
+		tokError.code = tj.ErrorCode
+		tokError.description = tj.ErrorDescription
+		tokError.uri = tj.ErrorURI
+		token = &Token{
+			Value:    tj.AccessToken,
+			Type:     tj.TokenType,
+			Expiry:   tj.expiry(),
+			Metadata: make(map[string]interface{}),
+		}
+		json.Unmarshal(body, &token.Metadata) // optional field, skip err check
+		refreshToken = tj.RefreshToken
+	}
+	// according to spec, servers should respond status 400 in error case
+	// https://www.rfc-editor.org/rfc/rfc6749#section-5.2
+	// but some unorthodox servers respond 200 in error case
+	if failureStatus || tokError.code != "" {
+		return nil, refreshToken, tokError
+	}
+	if token.Value == "" {
+		return nil, refreshToken, errors.New("auth: server response missing access_token")
+	}
+	return token, refreshToken, nil
+}

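The refresh path above (`tokenProvider3LO.Token` feeding `fetchToken` with `StyleInParams`) boils down to assembling a form-encoded POST body in which the client credentials travel alongside the grant parameters. A stdlib-only sketch of that assembly, with `buildRefreshForm` as a hypothetical helper name:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// buildRefreshForm mirrors the form body assembled by tokenProvider3LO.Token
// plus the StyleInParams branch of fetchToken: client credentials go in the
// POST body rather than an Authorization header.
func buildRefreshForm(clientID, clientSecret, refreshToken string, extra url.Values) string {
	v := url.Values{
		"grant_type":    {"refresh_token"},
		"refresh_token": {refreshToken},
	}
	// Caller-supplied URL params are applied first, matching o.URLParams.
	for k := range extra {
		v.Set(k, extra.Get(k))
	}
	v.Set("client_id", clientID)
	v.Set("client_secret", clientSecret)
	return v.Encode()
}

func main() {
	body := buildRefreshForm("id-123", "secret-456", "rt-789", nil)
	// url.Values.Encode sorts keys, so the body is deterministic.
	fmt.Println(strings.Contains(body, "grant_type=refresh_token"))
}
```

With `StyleInHeader`, the same credentials would instead be sent via `req.SetBasicAuth`, as the vendored code shows.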
vendor/cloud.google.com/go/civil/civil.go 🔗

@@ -0,0 +1,350 @@
+// Copyright 2016 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Package civil implements types for civil time, a time-zone-independent
+// representation of time that follows the rules of the proleptic
+// Gregorian calendar with exactly 24-hour days, 60-minute hours, and 60-second
+// minutes.
+//
+// Because they lack location information, these types do not represent unique
+// moments or intervals of time. Use time.Time for that purpose.
+package civil
+
+import (
+	"fmt"
+	"time"
+)
+
+// A Date represents a date (year, month, day).
+//
+// This type does not include location information, and therefore does not
+// describe a unique 24-hour timespan.
+type Date struct {
+	Year  int        // Year (e.g., 2014).
+	Month time.Month // Month of the year (January = 1, ...).
+	Day   int        // Day of the month, starting at 1.
+}
+
+// DateOf returns the Date in which a time occurs in that time's location.
+func DateOf(t time.Time) Date {
+	var d Date
+	d.Year, d.Month, d.Day = t.Date()
+	return d
+}
+
+// ParseDate parses a string in RFC3339 full-date format and returns the date value it represents.
+func ParseDate(s string) (Date, error) {
+	t, err := time.Parse("2006-01-02", s)
+	if err != nil {
+		return Date{}, err
+	}
+	return DateOf(t), nil
+}
+
+// String returns the date in RFC3339 full-date format.
+func (d Date) String() string {
+	return fmt.Sprintf("%04d-%02d-%02d", d.Year, d.Month, d.Day)
+}
+
+// IsValid reports whether the date is valid.
+func (d Date) IsValid() bool {
+	return DateOf(d.In(time.UTC)) == d
+}
+
+// In returns the time corresponding to time 00:00:00 of the date in the location.
+//
+// In is always consistent with time.Date, even when time.Date returns a time
+// on a different day. For example, if loc is America/Indiana/Vincennes, then both
+//
+//	time.Date(1955, time.May, 1, 0, 0, 0, 0, loc)
+//
+// and
+//
+//	civil.Date{Year: 1955, Month: time.May, Day: 1}.In(loc)
+//
+// return 23:00:00 on April 30, 1955.
+//
+// In panics if loc is nil.
+func (d Date) In(loc *time.Location) time.Time {
+	return time.Date(d.Year, d.Month, d.Day, 0, 0, 0, 0, loc)
+}
+
+// AddDays returns the date that is n days in the future.
+// n can also be negative to go into the past.
+func (d Date) AddDays(n int) Date {
+	return DateOf(d.In(time.UTC).AddDate(0, 0, n))
+}
+
+// DaysSince returns the signed number of days between the date and s, not including the end day.
+// This is the inverse operation to AddDays.
+func (d Date) DaysSince(s Date) (days int) {
+	// We convert to Unix time so we do not have to worry about leap seconds:
+	// Unix time increases by exactly 86400 seconds per day.
+	deltaUnix := d.In(time.UTC).Unix() - s.In(time.UTC).Unix()
+	return int(deltaUnix / 86400)
+}
+
+// Before reports whether d occurs before d2.
+func (d Date) Before(d2 Date) bool {
+	if d.Year != d2.Year {
+		return d.Year < d2.Year
+	}
+	if d.Month != d2.Month {
+		return d.Month < d2.Month
+	}
+	return d.Day < d2.Day
+}
+
+// After reports whether d occurs after d2.
+func (d Date) After(d2 Date) bool {
+	return d2.Before(d)
+}
+
+// Compare compares d and d2. If d is before d2, it returns -1;
+// if d is after d2, it returns +1; otherwise it returns 0.
+func (d Date) Compare(d2 Date) int {
+	if d.Before(d2) {
+		return -1
+	} else if d.After(d2) {
+		return +1
+	}
+	return 0
+}
+
+// IsZero reports whether date fields are set to their default value.
+func (d Date) IsZero() bool {
+	return (d.Year == 0) && (int(d.Month) == 0) && (d.Day == 0)
+}
+
+// MarshalText implements the encoding.TextMarshaler interface.
+// The output is the result of d.String().
+func (d Date) MarshalText() ([]byte, error) {
+	return []byte(d.String()), nil
+}
+
+// UnmarshalText implements the encoding.TextUnmarshaler interface.
+// The date is expected to be a string in a format accepted by ParseDate.
+func (d *Date) UnmarshalText(data []byte) error {
+	var err error
+	*d, err = ParseDate(string(data))
+	return err
+}
+
+// A Time represents a time with nanosecond precision.
+//
+// This type does not include location information, and therefore does not
+// describe a unique moment in time.
+//
+// This type exists to represent the TIME type in storage-based APIs like BigQuery.
+// Most operations on Times are unlikely to be meaningful. Prefer the DateTime type.
+type Time struct {
+	Hour       int // The hour of the day in 24-hour format; range [0-23]
+	Minute     int // The minute of the hour; range [0-59]
+	Second     int // The second of the minute; range [0-59]
+	Nanosecond int // The nanosecond of the second; range [0-999999999]
+}
+
+// TimeOf returns the Time representing the time of day in which a time occurs
+// in that time's location. It ignores the date.
+func TimeOf(t time.Time) Time {
+	var tm Time
+	tm.Hour, tm.Minute, tm.Second = t.Clock()
+	tm.Nanosecond = t.Nanosecond()
+	return tm
+}
+
+// ParseTime parses a string and returns the time value it represents.
+// ParseTime accepts an extended form of the RFC3339 partial-time format. After
+// the HH:MM:SS part of the string, an optional fractional part may appear,
+// consisting of a decimal point followed by one to nine decimal digits.
+// (RFC3339 admits only one digit after the decimal point).
+func ParseTime(s string) (Time, error) {
+	t, err := time.Parse("15:04:05.999999999", s)
+	if err != nil {
+		return Time{}, err
+	}
+	return TimeOf(t), nil
+}
+
+// String returns the time in the format described in ParseTime. If Nanosecond
+// is zero, no fractional part will be generated. Otherwise, the result will
+// end with a fractional part consisting of a decimal point and nine digits.
+func (t Time) String() string {
+	s := fmt.Sprintf("%02d:%02d:%02d", t.Hour, t.Minute, t.Second)
+	if t.Nanosecond == 0 {
+		return s
+	}
+	return s + fmt.Sprintf(".%09d", t.Nanosecond)
+}
+
+// IsValid reports whether the time is valid.
+func (t Time) IsValid() bool {
+	// Construct a non-zero time.
+	tm := time.Date(2, 2, 2, t.Hour, t.Minute, t.Second, t.Nanosecond, time.UTC)
+	return TimeOf(tm) == t
+}
+
+// IsZero reports whether time fields are set to their default value.
+func (t Time) IsZero() bool {
+	return (t.Hour == 0) && (t.Minute == 0) && (t.Second == 0) && (t.Nanosecond == 0)
+}
+
+// Before reports whether t occurs before t2.
+func (t Time) Before(t2 Time) bool {
+	if t.Hour != t2.Hour {
+		return t.Hour < t2.Hour
+	}
+	if t.Minute != t2.Minute {
+		return t.Minute < t2.Minute
+	}
+	if t.Second != t2.Second {
+		return t.Second < t2.Second
+	}
+
+	return t.Nanosecond < t2.Nanosecond
+}
+
+// After reports whether t occurs after t2.
+func (t Time) After(t2 Time) bool {
+	return t2.Before(t)
+}
+
+// Compare compares t and t2. If t is before t2, it returns -1;
+// if t is after t2, it returns +1; otherwise it returns 0.
+func (t Time) Compare(t2 Time) int {
+	if t.Before(t2) {
+		return -1
+	} else if t.After(t2) {
+		return +1
+	}
+	return 0
+}
+
+// MarshalText implements the encoding.TextMarshaler interface.
+// The output is the result of t.String().
+func (t Time) MarshalText() ([]byte, error) {
+	return []byte(t.String()), nil
+}
+
+// UnmarshalText implements the encoding.TextUnmarshaler interface.
+// The time is expected to be a string in a format accepted by ParseTime.
+func (t *Time) UnmarshalText(data []byte) error {
+	var err error
+	*t, err = ParseTime(string(data))
+	return err
+}
+
+// A DateTime represents a date and time.
+//
+// This type does not include location information, and therefore does not
+// describe a unique moment in time.
+type DateTime struct {
+	Date Date
+	Time Time
+}
+
+// Note: We deliberately do not embed Date into DateTime, to avoid promoting AddDays and Sub.
+
+// DateTimeOf returns the DateTime in which a time occurs in that time's location.
+func DateTimeOf(t time.Time) DateTime {
+	return DateTime{
+		Date: DateOf(t),
+		Time: TimeOf(t),
+	}
+}
+
+// ParseDateTime parses a string and returns the DateTime it represents.
+// ParseDateTime accepts a variant of the RFC3339 date-time format that omits
+// the time offset but includes an optional fractional time, as described in
+// ParseTime. Informally, the accepted format is
+//
+//	YYYY-MM-DDTHH:MM:SS[.FFFFFFFFF]
+//
+// where the 'T' may be a lower-case 't'.
+func ParseDateTime(s string) (DateTime, error) {
+	t, err := time.Parse("2006-01-02T15:04:05.999999999", s)
+	if err != nil {
+		t, err = time.Parse("2006-01-02t15:04:05.999999999", s)
+		if err != nil {
+			return DateTime{}, err
+		}
+	}
+	return DateTimeOf(t), nil
+}
+
+// String returns the date and time in the format described in ParseDateTime.
+func (dt DateTime) String() string {
+	return dt.Date.String() + "T" + dt.Time.String()
+}
+
+// IsValid reports whether the datetime is valid.
+func (dt DateTime) IsValid() bool {
+	return dt.Date.IsValid() && dt.Time.IsValid()
+}
+
+// In returns the time corresponding to the DateTime in the given location.
+//
+// If the time is missing or ambiguous at the location, In returns the same
+// result as time.Date. For example, if loc is America/Indiana/Vincennes, then
+// both
+//
+//	time.Date(1955, time.May, 1, 0, 30, 0, 0, loc)
+//
+// and
+//
+//	civil.DateTime{
+//	    Date: civil.Date{Year: 1955, Month: time.May, Day: 1},
+//	    Time: civil.Time{Minute: 30}}.In(loc)
+//
+// return 23:30:00 on April 30, 1955.
+//
+// In panics if loc is nil.
+func (dt DateTime) In(loc *time.Location) time.Time {
+	return time.Date(dt.Date.Year, dt.Date.Month, dt.Date.Day, dt.Time.Hour, dt.Time.Minute, dt.Time.Second, dt.Time.Nanosecond, loc)
+}
+
+// Before reports whether dt occurs before dt2.
+func (dt DateTime) Before(dt2 DateTime) bool {
+	return dt.In(time.UTC).Before(dt2.In(time.UTC))
+}
+
+// After reports whether dt occurs after dt2.
+func (dt DateTime) After(dt2 DateTime) bool {
+	return dt2.Before(dt)
+}
+
+// Compare compares dt and dt2. If dt is before dt2, it returns -1;
+// if dt is after dt2, it returns +1; otherwise it returns 0.
+func (dt DateTime) Compare(dt2 DateTime) int {
+	return dt.In(time.UTC).Compare(dt2.In(time.UTC))
+}
+
+// IsZero reports whether datetime fields are set to their default value.
+func (dt DateTime) IsZero() bool {
+	return dt.Date.IsZero() && dt.Time.IsZero()
+}
+
+// MarshalText implements the encoding.TextMarshaler interface.
+// The output is the result of dt.String().
+func (dt DateTime) MarshalText() ([]byte, error) {
+	return []byte(dt.String()), nil
+}
+
+// UnmarshalText implements the encoding.TextUnmarshaler interface.
+// The datetime is expected to be a string in a format accepted by ParseDateTime.
+func (dt *DateTime) UnmarshalText(data []byte) error {
+	var err error
+	*dt, err = ParseDateTime(string(data))
+	return err
+}

vendor/cloud.google.com/go/compute/metadata/CHANGES.md 🔗

@@ -0,0 +1,66 @@
+# Changes
+
+## [0.6.0](https://github.com/googleapis/google-cloud-go/compare/compute/metadata/v0.5.2...compute/metadata/v0.6.0) (2024-12-13)
+
+
+### Features
+
+* **compute/metadata:** Add debug logging ([#11078](https://github.com/googleapis/google-cloud-go/issues/11078)) ([a816814](https://github.com/googleapis/google-cloud-go/commit/a81681463906e4473570a2f426eb0dc2de64e53f))
+
+## [0.5.2](https://github.com/googleapis/google-cloud-go/compare/compute/metadata/v0.5.1...compute/metadata/v0.5.2) (2024-09-20)
+
+
+### Bug Fixes
+
+* **compute/metadata:** Close Response Body for failed request ([#10891](https://github.com/googleapis/google-cloud-go/issues/10891)) ([e91d45e](https://github.com/googleapis/google-cloud-go/commit/e91d45e4757a9e354114509ba9800085d9e0ff1f))
+
+## [0.5.1](https://github.com/googleapis/google-cloud-go/compare/compute/metadata/v0.5.0...compute/metadata/v0.5.1) (2024-09-12)
+
+
+### Bug Fixes
+
+* **compute/metadata:** Check error chain for retryable error ([#10840](https://github.com/googleapis/google-cloud-go/issues/10840)) ([2bdedef](https://github.com/googleapis/google-cloud-go/commit/2bdedeff621b223d63cebc4355fcf83bc68412cd))
+
+## [0.5.0](https://github.com/googleapis/google-cloud-go/compare/compute/metadata/v0.4.0...compute/metadata/v0.5.0) (2024-07-10)
+
+
+### Features
+
+* **compute/metadata:** Add sys check for windows OnGCE ([#10521](https://github.com/googleapis/google-cloud-go/issues/10521)) ([3b9a830](https://github.com/googleapis/google-cloud-go/commit/3b9a83063960d2a2ac20beb47cc15818a68bd302))
+
+## [0.4.0](https://github.com/googleapis/google-cloud-go/compare/compute/metadata/v0.3.0...compute/metadata/v0.4.0) (2024-07-01)
+
+
+### Features
+
+* **compute/metadata:** Add context for all functions/methods ([#10370](https://github.com/googleapis/google-cloud-go/issues/10370)) ([66b8efe](https://github.com/googleapis/google-cloud-go/commit/66b8efe7ad877e052b2987bb4475477e38c67bb3))
+
+
+### Documentation
+
+* **compute/metadata:** Update OnGCE description ([#10408](https://github.com/googleapis/google-cloud-go/issues/10408)) ([6a46dca](https://github.com/googleapis/google-cloud-go/commit/6a46dca4eae4f88ec6f88822e01e5bf8aeca787f))
+
+## [0.3.0](https://github.com/googleapis/google-cloud-go/compare/compute/metadata/v0.2.3...compute/metadata/v0.3.0) (2024-04-15)
+
+
+### Features
+
+* **compute/metadata:** Add context aware functions  ([#9733](https://github.com/googleapis/google-cloud-go/issues/9733)) ([e4eb5b4](https://github.com/googleapis/google-cloud-go/commit/e4eb5b46ee2aec9d2fc18300bfd66015e25a0510))
+
+## [0.2.3](https://github.com/googleapis/google-cloud-go/compare/compute/metadata/v0.2.2...compute/metadata/v0.2.3) (2022-12-15)
+
+
+### Bug Fixes
+
+* **compute/metadata:** Switch DNS lookup to an absolute lookup ([119b410](https://github.com/googleapis/google-cloud-go/commit/119b41060c7895e45e48aee5621ad35607c4d021)), refs [#7165](https://github.com/googleapis/google-cloud-go/issues/7165)
+
+## [0.2.2](https://github.com/googleapis/google-cloud-go/compare/compute/metadata/v0.2.1...compute/metadata/v0.2.2) (2022-12-01)
+
+
+### Bug Fixes
+
+* **compute/metadata:** Set IdleConnTimeout for http.Client ([#7084](https://github.com/googleapis/google-cloud-go/issues/7084)) ([766516a](https://github.com/googleapis/google-cloud-go/commit/766516aaf3816bfb3159efeea65aa3d1d205a3e2)), refs [#5430](https://github.com/googleapis/google-cloud-go/issues/5430)
+
+## [0.1.0] (2022-10-26)
+
+Initial release of metadata becoming its own module.

vendor/cloud.google.com/go/compute/metadata/LICENSE 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/cloud.google.com/go/compute/metadata/README.md 🔗

@@ -0,0 +1,27 @@
+# Compute API
+
+[![Go Reference](https://pkg.go.dev/badge/cloud.google.com/go/compute.svg)](https://pkg.go.dev/cloud.google.com/go/compute/metadata)
+
+This is a utility library for communicating with the Google Cloud metadata
+service from within Google Cloud.
+
+## Install
+
+```bash
+go get cloud.google.com/go/compute/metadata
+```
+
+## Go Version Support
+
+See the [Go Versions Supported](https://github.com/googleapis/google-cloud-go#go-versions-supported)
+section in the root directory's README.
+
+## Contributing
+
+Contributions are welcome. Please see the [CONTRIBUTING](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/main/CONTRIBUTING.md)
+document for details.
+
+Please note that this project is released with a Contributor Code of Conduct.
+By participating in this project you agree to abide by its terms. See
+[Contributor Code of Conduct](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/main/CONTRIBUTING.md#contributor-code-of-conduct)
+for more information.

vendor/cloud.google.com/go/compute/metadata/log.go 🔗

@@ -0,0 +1,149 @@
+// Copyright 2024 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package metadata
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"fmt"
+	"log/slog"
+	"net/http"
+	"strings"
+)
+
+// Code below this point is copied from github.com/googleapis/gax-go/v2/internallog
+// to avoid the dependency. The compute/metadata module is used by too many
+// non-client library modules that can't justify the dependency.
+
+// The handler returned if logging is not enabled.
+type noOpHandler struct{}
+
+func (h noOpHandler) Enabled(_ context.Context, _ slog.Level) bool {
+	return false
+}
+
+func (h noOpHandler) Handle(_ context.Context, _ slog.Record) error {
+	return nil
+}
+
+func (h noOpHandler) WithAttrs(_ []slog.Attr) slog.Handler {
+	return h
+}
+
+func (h noOpHandler) WithGroup(_ string) slog.Handler {
+	return h
+}
+
+// httpRequest returns a lazily evaluated [slog.LogValuer] for a
+// [http.Request] and the associated body.
+func httpRequest(req *http.Request, body []byte) slog.LogValuer {
+	return &request{
+		req:     req,
+		payload: body,
+	}
+}
+
+type request struct {
+	req     *http.Request
+	payload []byte
+}
+
+func (r *request) LogValue() slog.Value {
+	if r == nil || r.req == nil {
+		return slog.Value{}
+	}
+	var groupValueAttrs []slog.Attr
+	groupValueAttrs = append(groupValueAttrs, slog.String("method", r.req.Method))
+	groupValueAttrs = append(groupValueAttrs, slog.String("url", r.req.URL.String()))
+
+	var headerAttr []slog.Attr
+	for k, val := range r.req.Header {
+		headerAttr = append(headerAttr, slog.String(k, strings.Join(val, ",")))
+	}
+	if len(headerAttr) > 0 {
+		groupValueAttrs = append(groupValueAttrs, slog.Any("headers", headerAttr))
+	}
+
+	if len(r.payload) > 0 {
+		if attr, ok := processPayload(r.payload); ok {
+			groupValueAttrs = append(groupValueAttrs, attr)
+		}
+	}
+	return slog.GroupValue(groupValueAttrs...)
+}
+
+// httpResponse returns a lazily evaluated [slog.LogValuer] for a
+// [http.Response] and the associated body.
+func httpResponse(resp *http.Response, body []byte) slog.LogValuer {
+	return &response{
+		resp:    resp,
+		payload: body,
+	}
+}
+
+type response struct {
+	resp    *http.Response
+	payload []byte
+}
+
+func (r *response) LogValue() slog.Value {
+	if r == nil {
+		return slog.Value{}
+	}
+	var groupValueAttrs []slog.Attr
+	groupValueAttrs = append(groupValueAttrs, slog.String("status", fmt.Sprint(r.resp.StatusCode)))
+
+	var headerAttr []slog.Attr
+	for k, val := range r.resp.Header {
+		headerAttr = append(headerAttr, slog.String(k, strings.Join(val, ",")))
+	}
+	if len(headerAttr) > 0 {
+		groupValueAttrs = append(groupValueAttrs, slog.Any("headers", headerAttr))
+	}
+
+	if len(r.payload) > 0 {
+		if attr, ok := processPayload(r.payload); ok {
+			groupValueAttrs = append(groupValueAttrs, attr)
+		}
+	}
+	return slog.GroupValue(groupValueAttrs...)
+}
+
+func processPayload(payload []byte) (slog.Attr, bool) {
+	peekChar := payload[0]
+	if peekChar == '{' {
+		// JSON object
+		var m map[string]any
+		if err := json.Unmarshal(payload, &m); err == nil {
+			return slog.Any("payload", m), true
+		}
+	} else if peekChar == '[' {
+		// JSON array
+		var m []any
+		if err := json.Unmarshal(payload, &m); err == nil {
+			return slog.Any("payload", m), true
+		}
+	} else {
+		// Everything else
+		buf := &bytes.Buffer{}
+		if err := json.Compact(buf, payload); err != nil {
+			// Write the raw payload in case of error
+			buf.Write(payload)
+		}
+		return slog.String("payload", buf.String()), true
+	}
+	return slog.Attr{}, false
+}
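
The peek-character dispatch in `processPayload` can be sketched standalone: structured JSON (`{` or `[`) is unmarshaled so it can be logged as a structured attribute, and everything else is compacted (or kept raw) as a plain string. `classifyPayload` below is a hypothetical helper illustrating only the branching, not the vendored function itself:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// classifyPayload reports how a log payload would be treated: as a JSON
// object, a JSON array, or an opaque string.
func classifyPayload(payload []byte) string {
	switch payload[0] {
	case '{':
		var m map[string]any
		if json.Unmarshal(payload, &m) == nil {
			return "object"
		}
	case '[':
		var a []any
		if json.Unmarshal(payload, &a) == nil {
			return "array"
		}
	default:
		buf := &bytes.Buffer{}
		if err := json.Compact(buf, payload); err != nil {
			buf.Write(payload) // keep the raw bytes when compaction fails
		}
		return "string: " + buf.String()
	}
	return "unrecognized"
}

func main() {
	fmt.Println(classifyPayload([]byte(`{"a":1}`)))
	fmt.Println(classifyPayload([]byte(`[1,2]`)))
	fmt.Println(classifyPayload([]byte(`plain text`)))
}
```

Note the caller must guarantee a non-empty payload, as the vendored code does by checking `len(r.payload) > 0` before calling.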

vendor/cloud.google.com/go/compute/metadata/metadata.go 🔗

@@ -0,0 +1,872 @@
+// Copyright 2014 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Package metadata provides access to Google Compute Engine (GCE)
+// metadata and API service accounts.
+//
+// This package is a wrapper around the GCE metadata service,
+// as documented at https://cloud.google.com/compute/docs/metadata/overview.
+package metadata // import "cloud.google.com/go/compute/metadata"
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"io"
+	"log/slog"
+	"net"
+	"net/http"
+	"net/url"
+	"os"
+	"strings"
+	"sync"
+	"time"
+)
+
+const (
+	// metadataIP is the documented metadata server IP address.
+	metadataIP = "169.254.169.254"
+
+	// metadataHostEnv is the environment variable specifying the
+	// GCE metadata hostname.  If empty, the default value of
+	// metadataIP ("169.254.169.254") is used instead.
+	// This variable name is not defined by any spec, as far as
+	// I know; it was made up for the Go package.
+	metadataHostEnv = "GCE_METADATA_HOST"
+
+	userAgent = "gcloud-golang/0.1"
+)
+
+type cachedValue struct {
+	k    string
+	trim bool
+	mu   sync.Mutex
+	v    string
+}
+
+var (
+	projID  = &cachedValue{k: "project/project-id", trim: true}
+	projNum = &cachedValue{k: "project/numeric-project-id", trim: true}
+	instID  = &cachedValue{k: "instance/id", trim: true}
+)
+
+var defaultClient = &Client{
+	hc:     newDefaultHTTPClient(),
+	logger: slog.New(noOpHandler{}),
+}
+
+func newDefaultHTTPClient() *http.Client {
+	return &http.Client{
+		Transport: &http.Transport{
+			Dial: (&net.Dialer{
+				Timeout:   2 * time.Second,
+				KeepAlive: 30 * time.Second,
+			}).Dial,
+			IdleConnTimeout: 60 * time.Second,
+		},
+		Timeout: 5 * time.Second,
+	}
+}
+
+// NotDefinedError is returned when requested metadata is not defined.
+//
+// The underlying string is the suffix after "/computeMetadata/v1/".
+//
+// This error is not returned if the value is defined to be the empty
+// string.
+type NotDefinedError string
+
+func (suffix NotDefinedError) Error() string {
+	return fmt.Sprintf("metadata: GCE metadata %q not defined", string(suffix))
+}
+
+func (c *cachedValue) get(ctx context.Context, cl *Client) (v string, err error) {
+	defer c.mu.Unlock()
+	c.mu.Lock()
+	if c.v != "" {
+		return c.v, nil
+	}
+	if c.trim {
+		v, err = cl.getTrimmed(ctx, c.k)
+	} else {
+		v, err = cl.GetWithContext(ctx, c.k)
+	}
+	if err == nil {
+		c.v = v
+	}
+	return
+}
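
`cachedValue.get` is mutex-guarded memoization: the first successful fetch is stored, and later calls return the cached value without touching the metadata server. A minimal standalone sketch of the pattern (names here are illustrative; `fetchCount` exists only to demonstrate that the fetch runs once):

```go
package main

import (
	"fmt"
	"sync"
)

// cached stores the first successfully fetched value under a mutex.
type cached struct {
	mu sync.Mutex
	v  string
}

var fetchCount int

func (c *cached) get(fetch func() (string, error)) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.v != "" {
		return c.v, nil // cache hit: no network call
	}
	v, err := fetch()
	if err == nil {
		c.v = v // only cache successful results
	}
	return v, err
}

func main() {
	c := &cached{}
	fetch := func() (string, error) { fetchCount++; return "my-project", nil }
	v1, _ := c.get(fetch)
	v2, _ := c.get(fetch)
	fmt.Println(v1, v2, fetchCount)
}
```

Like the vendored version, this treats the empty string as "not yet cached", which is why `NotDefinedError` is documented as never being returned for values defined to be empty.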
+
+var (
+	onGCEOnce sync.Once
+	onGCE     bool
+)
+
+// OnGCE reports whether this process is running on Google Compute Engine.
+// NOTE: a true result from `OnGCE` does not guarantee that the metadata server
+// is accessible from this process or that all of the metadata is defined.
+func OnGCE() bool {
+	onGCEOnce.Do(initOnGCE)
+	return onGCE
+}
+
+func initOnGCE() {
+	onGCE = testOnGCE()
+}
+
+func testOnGCE() bool {
+	// The user explicitly said they're on GCE, so trust them.
+	if os.Getenv(metadataHostEnv) != "" {
+		return true
+	}
+
+	ctx, cancel := context.WithCancel(context.Background())
+	defer cancel()
+
+	resc := make(chan bool, 2)
+
+	// Try two strategies in parallel.
+	// See https://github.com/googleapis/google-cloud-go/issues/194
+	go func() {
+		req, _ := http.NewRequest("GET", "http://"+metadataIP, nil)
+		req.Header.Set("User-Agent", userAgent)
+		res, err := newDefaultHTTPClient().Do(req.WithContext(ctx))
+		if err != nil {
+			resc <- false
+			return
+		}
+		defer res.Body.Close()
+		resc <- res.Header.Get("Metadata-Flavor") == "Google"
+	}()
+
+	go func() {
+		resolver := &net.Resolver{}
+		addrs, err := resolver.LookupHost(ctx, "metadata.google.internal.")
+		if err != nil || len(addrs) == 0 {
+			resc <- false
+			return
+		}
+		resc <- strsContains(addrs, metadataIP)
+	}()
+
+	tryHarder := systemInfoSuggestsGCE()
+	if tryHarder {
+		res := <-resc
+		if res {
+			// The first strategy succeeded, so let's use it.
+			return true
+		}
+		// Wait for either the DNS or metadata server probe to
+		// contradict the other one and say we are running on
+		// GCE. Give it a lot of time to do so, since the system
+		// info already suggests we're running on a GCE BIOS.
+		timer := time.NewTimer(5 * time.Second)
+		defer timer.Stop()
+		select {
+		case res = <-resc:
+			return res
+		case <-timer.C:
+			// Too slow. Who knows what this system is.
+			return false
+		}
+	}
+
+	// There's no hint from the system info that we're running on
+	// GCE, so use the first probe's result as truth, whether it's
+	// true or false. The goal here is to optimize for speed for
+	// users who are NOT running on GCE. We can't assume that
+	// either a DNS lookup or an HTTP request to a blackholed IP
+	// address is fast. Worst case this should return when the
+	// metaClient's Transport.ResponseHeaderTimeout or
+	// Transport.Dial.Timeout fires (in two seconds).
+	return <-resc
+}
+
+// Subscribe calls Client.SubscribeWithContext on the default client.
+//
+// Deprecated: Please use the context aware variant [SubscribeWithContext].
+func Subscribe(suffix string, fn func(v string, ok bool) error) error {
+	return defaultClient.SubscribeWithContext(context.Background(), suffix, func(ctx context.Context, v string, ok bool) error { return fn(v, ok) })
+}
+
+// SubscribeWithContext calls Client.SubscribeWithContext on the default client.
+func SubscribeWithContext(ctx context.Context, suffix string, fn func(ctx context.Context, v string, ok bool) error) error {
+	return defaultClient.SubscribeWithContext(ctx, suffix, fn)
+}
+
+// Get calls Client.GetWithContext on the default client.
+//
+// Deprecated: Please use the context aware variant [GetWithContext].
+func Get(suffix string) (string, error) {
+	return defaultClient.GetWithContext(context.Background(), suffix)
+}
+
+// GetWithContext calls Client.GetWithContext on the default client.
+func GetWithContext(ctx context.Context, suffix string) (string, error) {
+	return defaultClient.GetWithContext(ctx, suffix)
+}
+
+// ProjectID returns the current instance's project ID string.
+//
+// Deprecated: Please use the context aware variant [ProjectIDWithContext].
+func ProjectID() (string, error) {
+	return defaultClient.ProjectIDWithContext(context.Background())
+}
+
+// ProjectIDWithContext returns the current instance's project ID string.
+func ProjectIDWithContext(ctx context.Context) (string, error) {
+	return defaultClient.ProjectIDWithContext(ctx)
+}
+
+// NumericProjectID returns the current instance's numeric project ID.
+//
+// Deprecated: Please use the context aware variant [NumericProjectIDWithContext].
+func NumericProjectID() (string, error) {
+	return defaultClient.NumericProjectIDWithContext(context.Background())
+}
+
+// NumericProjectIDWithContext returns the current instance's numeric project ID.
+func NumericProjectIDWithContext(ctx context.Context) (string, error) {
+	return defaultClient.NumericProjectIDWithContext(ctx)
+}
+
+// InternalIP returns the instance's primary internal IP address.
+//
+// Deprecated: Please use the context aware variant [InternalIPWithContext].
+func InternalIP() (string, error) {
+	return defaultClient.InternalIPWithContext(context.Background())
+}
+
+// InternalIPWithContext returns the instance's primary internal IP address.
+func InternalIPWithContext(ctx context.Context) (string, error) {
+	return defaultClient.InternalIPWithContext(ctx)
+}
+
+// ExternalIP returns the instance's primary external (public) IP address.
+//
+// Deprecated: Please use the context aware variant [ExternalIPWithContext].
+func ExternalIP() (string, error) {
+	return defaultClient.ExternalIPWithContext(context.Background())
+}
+
+// ExternalIPWithContext returns the instance's primary external (public) IP address.
+func ExternalIPWithContext(ctx context.Context) (string, error) {
+	return defaultClient.ExternalIPWithContext(ctx)
+}
+
+// Email calls Client.EmailWithContext on the default client.
+//
+// Deprecated: Please use the context aware variant [EmailWithContext].
+func Email(serviceAccount string) (string, error) {
+	return defaultClient.EmailWithContext(context.Background(), serviceAccount)
+}
+
+// EmailWithContext calls Client.EmailWithContext on the default client.
+func EmailWithContext(ctx context.Context, serviceAccount string) (string, error) {
+	return defaultClient.EmailWithContext(ctx, serviceAccount)
+}
+
+// Hostname returns the instance's hostname. This will be of the form
+// "<instanceID>.c.<projID>.internal".
+//
+// Deprecated: Please use the context aware variant [HostnameWithContext].
+func Hostname() (string, error) {
+	return defaultClient.HostnameWithContext(context.Background())
+}
+
+// HostnameWithContext returns the instance's hostname. This will be of the form
+// "<instanceID>.c.<projID>.internal".
+func HostnameWithContext(ctx context.Context) (string, error) {
+	return defaultClient.HostnameWithContext(ctx)
+}
+
+// InstanceTags returns the list of user-defined instance tags,
+// assigned when initially creating a GCE instance.
+//
+// Deprecated: Please use the context aware variant [InstanceTagsWithContext].
+func InstanceTags() ([]string, error) {
+	return defaultClient.InstanceTagsWithContext(context.Background())
+}
+
+// InstanceTagsWithContext returns the list of user-defined instance tags,
+// assigned when initially creating a GCE instance.
+func InstanceTagsWithContext(ctx context.Context) ([]string, error) {
+	return defaultClient.InstanceTagsWithContext(ctx)
+}
+
+// InstanceID returns the current VM's numeric instance ID.
+//
+// Deprecated: Please use the context aware variant [InstanceIDWithContext].
+func InstanceID() (string, error) {
+	return defaultClient.InstanceIDWithContext(context.Background())
+}
+
+// InstanceIDWithContext returns the current VM's numeric instance ID.
+func InstanceIDWithContext(ctx context.Context) (string, error) {
+	return defaultClient.InstanceIDWithContext(ctx)
+}
+
+// InstanceName returns the current VM's instance ID string.
+//
+// Deprecated: Please use the context aware variant [InstanceNameWithContext].
+func InstanceName() (string, error) {
+	return defaultClient.InstanceNameWithContext(context.Background())
+}
+
+// InstanceNameWithContext returns the current VM's instance ID string.
+func InstanceNameWithContext(ctx context.Context) (string, error) {
+	return defaultClient.InstanceNameWithContext(ctx)
+}
+
+// Zone returns the current VM's zone, such as "us-central1-b".
+//
+// Deprecated: Please use the context aware variant [ZoneWithContext].
+func Zone() (string, error) {
+	return defaultClient.ZoneWithContext(context.Background())
+}
+
+// ZoneWithContext returns the current VM's zone, such as "us-central1-b".
+func ZoneWithContext(ctx context.Context) (string, error) {
+	return defaultClient.ZoneWithContext(ctx)
+}
+
+// InstanceAttributes calls Client.InstanceAttributesWithContext on the default client.
+//
+// Deprecated: Please use the context aware variant [InstanceAttributesWithContext].
+func InstanceAttributes() ([]string, error) {
+	return defaultClient.InstanceAttributesWithContext(context.Background())
+}
+
+// InstanceAttributesWithContext calls Client.InstanceAttributesWithContext on the default client.
+func InstanceAttributesWithContext(ctx context.Context) ([]string, error) {
+	return defaultClient.InstanceAttributesWithContext(ctx)
+}
+
+// ProjectAttributes calls Client.ProjectAttributesWithContext on the default client.
+//
+// Deprecated: Please use the context aware variant [ProjectAttributesWithContext].
+func ProjectAttributes() ([]string, error) {
+	return defaultClient.ProjectAttributesWithContext(context.Background())
+}
+
+// ProjectAttributesWithContext calls Client.ProjectAttributesWithContext on the default client.
+func ProjectAttributesWithContext(ctx context.Context) ([]string, error) {
+	return defaultClient.ProjectAttributesWithContext(ctx)
+}
+
+// InstanceAttributeValue calls Client.InstanceAttributeValueWithContext on the default client.
+//
+// Deprecated: Please use the context aware variant [InstanceAttributeValueWithContext].
+func InstanceAttributeValue(attr string) (string, error) {
+	return defaultClient.InstanceAttributeValueWithContext(context.Background(), attr)
+}
+
+// InstanceAttributeValueWithContext calls Client.InstanceAttributeValueWithContext on the default client.
+func InstanceAttributeValueWithContext(ctx context.Context, attr string) (string, error) {
+	return defaultClient.InstanceAttributeValueWithContext(ctx, attr)
+}
+
+// ProjectAttributeValue calls Client.ProjectAttributeValueWithContext on the default client.
+//
+// Deprecated: Please use the context aware variant [ProjectAttributeValueWithContext].
+func ProjectAttributeValue(attr string) (string, error) {
+	return defaultClient.ProjectAttributeValueWithContext(context.Background(), attr)
+}
+
+// ProjectAttributeValueWithContext calls Client.ProjectAttributeValueWithContext on the default client.
+func ProjectAttributeValueWithContext(ctx context.Context, attr string) (string, error) {
+	return defaultClient.ProjectAttributeValueWithContext(ctx, attr)
+}
+
+// Scopes calls Client.ScopesWithContext on the default client.
+//
+// Deprecated: Please use the context aware variant [ScopesWithContext].
+func Scopes(serviceAccount string) ([]string, error) {
+	return defaultClient.ScopesWithContext(context.Background(), serviceAccount)
+}
+
+// ScopesWithContext calls Client.ScopesWithContext on the default client.
+func ScopesWithContext(ctx context.Context, serviceAccount string) ([]string, error) {
+	return defaultClient.ScopesWithContext(ctx, serviceAccount)
+}
+
+func strsContains(ss []string, s string) bool {
+	for _, v := range ss {
+		if v == s {
+			return true
+		}
+	}
+	return false
+}
+
+// A Client provides metadata.
+type Client struct {
+	hc     *http.Client
+	logger *slog.Logger
+}
+
+// Options for configuring a [Client].
+type Options struct {
+	// Client is the HTTP client used to make requests. Optional.
+	Client *http.Client
+	// Logger is used to log information about HTTP request and responses.
+	// If not provided, nothing will be logged. Optional.
+	Logger *slog.Logger
+}
+
+// NewClient returns a Client that can be used to fetch metadata.
+// The returned client uses the specified http.Client for HTTP requests.
+// If c is nil, the default client is returned.
+func NewClient(c *http.Client) *Client {
+	return NewWithOptions(&Options{
+		Client: c,
+	})
+}
+
+// NewWithOptions returns a Client that is configured with the provided Options.
+func NewWithOptions(opts *Options) *Client {
+	if opts == nil {
+		return defaultClient
+	}
+	client := opts.Client
+	if client == nil {
+		client = newDefaultHTTPClient()
+	}
+	logger := opts.Logger
+	if logger == nil {
+		logger = slog.New(noOpHandler{})
+	}
+	return &Client{hc: client, logger: logger}
+}
+
+// getETag returns a value from the metadata service as well as the associated ETag.
+// This func is otherwise equivalent to Get.
+func (c *Client) getETag(ctx context.Context, suffix string) (value, etag string, err error) {
+	// Using a fixed IP makes it very difficult to spoof the metadata service in
+	// a container, which is an important use-case for local testing of cloud
+	// deployments. To enable spoofing of the metadata service, the environment
+	// variable GCE_METADATA_HOST is first inspected to decide where metadata
+	// requests shall go.
+	host := os.Getenv(metadataHostEnv)
+	if host == "" {
+		// Using 169.254.169.254 instead of "metadata" here because Go
+		// binaries built with the "netgo" tag and without cgo won't
+		// know the search suffix for "metadata" is
+		// ".google.internal", and this IP address is documented as
+		// being stable anyway.
+		host = metadataIP
+	}
+	suffix = strings.TrimLeft(suffix, "/")
+	u := "http://" + host + "/computeMetadata/v1/" + suffix
+	req, err := http.NewRequestWithContext(ctx, "GET", u, nil)
+	if err != nil {
+		return "", "", err
+	}
+	req.Header.Set("Metadata-Flavor", "Google")
+	req.Header.Set("User-Agent", userAgent)
+	var res *http.Response
+	var reqErr error
+	var body []byte
+	retryer := newRetryer()
+	for {
+		c.logger.DebugContext(ctx, "metadata request", "request", httpRequest(req, nil))
+		res, reqErr = c.hc.Do(req)
+		var code int
+		if res != nil {
+			code = res.StatusCode
+			body, err = io.ReadAll(res.Body)
+			if err != nil {
+				res.Body.Close()
+				return "", "", err
+			}
+			c.logger.DebugContext(ctx, "metadata response", "response", httpResponse(res, body))
+			res.Body.Close()
+		}
+		if delay, shouldRetry := retryer.Retry(code, reqErr); shouldRetry {
+			if res != nil && res.Body != nil {
+				res.Body.Close()
+			}
+			if err := sleep(ctx, delay); err != nil {
+				return "", "", err
+			}
+			continue
+		}
+		break
+	}
+	if reqErr != nil {
+		return "", "", reqErr
+	}
+	if res.StatusCode == http.StatusNotFound {
+		return "", "", NotDefinedError(suffix)
+	}
+	if res.StatusCode != 200 {
+		return "", "", &Error{Code: res.StatusCode, Message: string(body)}
+	}
+	return string(body), res.Header.Get("Etag"), nil
+}
+
+// Get returns a value from the metadata service.
+// The suffix is appended to "http://${GCE_METADATA_HOST}/computeMetadata/v1/".
+//
+// If the GCE_METADATA_HOST environment variable is not defined, a default of
+// 169.254.169.254 will be used instead.
+//
+// If the requested metadata is not defined, the returned error will
+// be of type NotDefinedError.
+//
+// Deprecated: Please use the context aware variant [Client.GetWithContext].
+func (c *Client) Get(suffix string) (string, error) {
+	return c.GetWithContext(context.Background(), suffix)
+}
+
+// GetWithContext returns a value from the metadata service.
+// The suffix is appended to "http://${GCE_METADATA_HOST}/computeMetadata/v1/".
+//
+// If the GCE_METADATA_HOST environment variable is not defined, a default of
+// 169.254.169.254 will be used instead.
+//
+// If the requested metadata is not defined, the returned error will
+// be of type NotDefinedError.
+//
+// NOTE: Without an extra deadline in the context, this call can, with internal
+// backoff retries, take up to 15 seconds in the worst case (e.g. when the
+// server is responding slowly). Pass a context with an additional timeout when needed.
+func (c *Client) GetWithContext(ctx context.Context, suffix string) (string, error) {
+	val, _, err := c.getETag(ctx, suffix)
+	return val, err
+}
+
+func (c *Client) getTrimmed(ctx context.Context, suffix string) (s string, err error) {
+	s, err = c.GetWithContext(ctx, suffix)
+	s = strings.TrimSpace(s)
+	return
+}
+
+func (c *Client) lines(ctx context.Context, suffix string) ([]string, error) {
+	j, err := c.GetWithContext(ctx, suffix)
+	if err != nil {
+		return nil, err
+	}
+	s := strings.Split(strings.TrimSpace(j), "\n")
+	for i := range s {
+		s[i] = strings.TrimSpace(s[i])
+	}
+	return s, nil
+}
+
+// ProjectID returns the current instance's project ID string.
+//
+// Deprecated: Please use the context aware variant [Client.ProjectIDWithContext].
+func (c *Client) ProjectID() (string, error) { return c.ProjectIDWithContext(context.Background()) }
+
+// ProjectIDWithContext returns the current instance's project ID string.
+func (c *Client) ProjectIDWithContext(ctx context.Context) (string, error) { return projID.get(ctx, c) }
+
+// NumericProjectID returns the current instance's numeric project ID.
+//
+// Deprecated: Please use the context aware variant [Client.NumericProjectIDWithContext].
+func (c *Client) NumericProjectID() (string, error) {
+	return c.NumericProjectIDWithContext(context.Background())
+}
+
+// NumericProjectIDWithContext returns the current instance's numeric project ID.
+func (c *Client) NumericProjectIDWithContext(ctx context.Context) (string, error) {
+	return projNum.get(ctx, c)
+}
+
+// InstanceID returns the current VM's numeric instance ID.
+//
+// Deprecated: Please use the context aware variant [Client.InstanceIDWithContext].
+func (c *Client) InstanceID() (string, error) {
+	return c.InstanceIDWithContext(context.Background())
+}
+
+// InstanceIDWithContext returns the current VM's numeric instance ID.
+func (c *Client) InstanceIDWithContext(ctx context.Context) (string, error) {
+	return instID.get(ctx, c)
+}
+
+// InternalIP returns the instance's primary internal IP address.
+//
+// Deprecated: Please use the context aware variant [Client.InternalIPWithContext].
+func (c *Client) InternalIP() (string, error) {
+	return c.InternalIPWithContext(context.Background())
+}
+
+// InternalIPWithContext returns the instance's primary internal IP address.
+func (c *Client) InternalIPWithContext(ctx context.Context) (string, error) {
+	return c.getTrimmed(ctx, "instance/network-interfaces/0/ip")
+}
+
+// Email returns the email address associated with the service account.
+//
+// Deprecated: Please use the context aware variant [Client.EmailWithContext].
+func (c *Client) Email(serviceAccount string) (string, error) {
+	return c.EmailWithContext(context.Background(), serviceAccount)
+}
+
+// EmailWithContext returns the email address associated with the service account.
+// If serviceAccount is empty or "default", the instance's main account is used.
+func (c *Client) EmailWithContext(ctx context.Context, serviceAccount string) (string, error) {
+	if serviceAccount == "" {
+		serviceAccount = "default"
+	}
+	return c.getTrimmed(ctx, "instance/service-accounts/"+serviceAccount+"/email")
+}
+
+// ExternalIP returns the instance's primary external (public) IP address.
+//
+// Deprecated: Please use the context aware variant [Client.ExternalIPWithContext].
+func (c *Client) ExternalIP() (string, error) {
+	return c.ExternalIPWithContext(context.Background())
+}
+
+// ExternalIPWithContext returns the instance's primary external (public) IP address.
+func (c *Client) ExternalIPWithContext(ctx context.Context) (string, error) {
+	return c.getTrimmed(ctx, "instance/network-interfaces/0/access-configs/0/external-ip")
+}
+
+// Hostname returns the instance's hostname. This will be of the form
+// "<instanceID>.c.<projID>.internal".
+//
+// Deprecated: Please use the context aware variant [Client.HostnameWithContext].
+func (c *Client) Hostname() (string, error) {
+	return c.HostnameWithContext(context.Background())
+}
+
+// HostnameWithContext returns the instance's hostname. This will be of the form
+// "<instanceID>.c.<projID>.internal".
+func (c *Client) HostnameWithContext(ctx context.Context) (string, error) {
+	return c.getTrimmed(ctx, "instance/hostname")
+}
+
+// InstanceTags returns the list of user-defined instance tags.
+//
+// Deprecated: Please use the context aware variant [Client.InstanceTagsWithContext].
+func (c *Client) InstanceTags() ([]string, error) {
+	return c.InstanceTagsWithContext(context.Background())
+}
+
+// InstanceTagsWithContext returns the list of user-defined instance tags,
+// assigned when initially creating a GCE instance.
+func (c *Client) InstanceTagsWithContext(ctx context.Context) ([]string, error) {
+	var s []string
+	j, err := c.GetWithContext(ctx, "instance/tags")
+	if err != nil {
+		return nil, err
+	}
+	if err := json.NewDecoder(strings.NewReader(j)).Decode(&s); err != nil {
+		return nil, err
+	}
+	return s, nil
+}
+
+// InstanceName returns the current VM's instance name.
+//
+// Deprecated: Please use the context aware variant [Client.InstanceNameWithContext].
+func (c *Client) InstanceName() (string, error) {
+	return c.InstanceNameWithContext(context.Background())
+}
+
+// InstanceNameWithContext returns the current VM's instance name.
+func (c *Client) InstanceNameWithContext(ctx context.Context) (string, error) {
+	return c.getTrimmed(ctx, "instance/name")
+}
+
+// Zone returns the current VM's zone, such as "us-central1-b".
+//
+// Deprecated: Please use the context aware variant [Client.ZoneWithContext].
+func (c *Client) Zone() (string, error) {
+	return c.ZoneWithContext(context.Background())
+}
+
+// ZoneWithContext returns the current VM's zone, such as "us-central1-b".
+func (c *Client) ZoneWithContext(ctx context.Context) (string, error) {
+	zone, err := c.getTrimmed(ctx, "instance/zone")
+	// zone is of the form "projects/<projNum>/zones/<zoneName>".
+	if err != nil {
+		return "", err
+	}
+	return zone[strings.LastIndex(zone, "/")+1:], nil
+}
+
+// InstanceAttributes returns the list of user-defined attributes,
+// assigned when initially creating a GCE VM instance. The value of an
+// attribute can be obtained with InstanceAttributeValue.
+//
+// Deprecated: Please use the context aware variant [Client.InstanceAttributesWithContext].
+func (c *Client) InstanceAttributes() ([]string, error) {
+	return c.InstanceAttributesWithContext(context.Background())
+}
+
+// InstanceAttributesWithContext returns the list of user-defined attributes,
+// assigned when initially creating a GCE VM instance. The value of an
+// attribute can be obtained with InstanceAttributeValue.
+func (c *Client) InstanceAttributesWithContext(ctx context.Context) ([]string, error) {
+	return c.lines(ctx, "instance/attributes/")
+}
+
+// ProjectAttributes returns the list of user-defined attributes
+// applying to the project as a whole, not just this VM.  The value of
+// an attribute can be obtained with ProjectAttributeValue.
+//
+// Deprecated: Please use the context aware variant [Client.ProjectAttributesWithContext].
+func (c *Client) ProjectAttributes() ([]string, error) {
+	return c.ProjectAttributesWithContext(context.Background())
+}
+
+// ProjectAttributesWithContext returns the list of user-defined attributes
+// applying to the project as a whole, not just this VM.  The value of
+// an attribute can be obtained with ProjectAttributeValue.
+func (c *Client) ProjectAttributesWithContext(ctx context.Context) ([]string, error) {
+	return c.lines(ctx, "project/attributes/")
+}
+
+// InstanceAttributeValue returns the value of the provided VM
+// instance attribute.
+//
+// If the requested attribute is not defined, the returned error will
+// be of type NotDefinedError.
+//
+// InstanceAttributeValue may return ("", nil) if the attribute was
+// defined to be the empty string.
+//
+// Deprecated: Please use the context aware variant [Client.InstanceAttributeValueWithContext].
+func (c *Client) InstanceAttributeValue(attr string) (string, error) {
+	return c.InstanceAttributeValueWithContext(context.Background(), attr)
+}
+
+// InstanceAttributeValueWithContext returns the value of the provided VM
+// instance attribute.
+//
+// If the requested attribute is not defined, the returned error will
+// be of type NotDefinedError.
+//
+// InstanceAttributeValueWithContext may return ("", nil) if the attribute was
+// defined to be the empty string.
+func (c *Client) InstanceAttributeValueWithContext(ctx context.Context, attr string) (string, error) {
+	return c.GetWithContext(ctx, "instance/attributes/"+attr)
+}
+
+// ProjectAttributeValue returns the value of the provided
+// project attribute.
+//
+// If the requested attribute is not defined, the returned error will
+// be of type NotDefinedError.
+//
+// ProjectAttributeValue may return ("", nil) if the attribute was
+// defined to be the empty string.
+//
+// Deprecated: Please use the context aware variant [Client.ProjectAttributeValueWithContext].
+func (c *Client) ProjectAttributeValue(attr string) (string, error) {
+	return c.ProjectAttributeValueWithContext(context.Background(), attr)
+}
+
+// ProjectAttributeValueWithContext returns the value of the provided
+// project attribute.
+//
+// If the requested attribute is not defined, the returned error will
+// be of type NotDefinedError.
+//
+// ProjectAttributeValueWithContext may return ("", nil) if the attribute was
+// defined to be the empty string.
+func (c *Client) ProjectAttributeValueWithContext(ctx context.Context, attr string) (string, error) {
+	return c.GetWithContext(ctx, "project/attributes/"+attr)
+}
+
+// Scopes returns the service account scopes for the given account.
+// The account may be empty or the string "default" to use the instance's
+// main account.
+//
+// Deprecated: Please use the context aware variant [Client.ScopesWithContext].
+func (c *Client) Scopes(serviceAccount string) ([]string, error) {
+	return c.ScopesWithContext(context.Background(), serviceAccount)
+}
+
+// ScopesWithContext returns the service account scopes for the given account.
+// The account may be empty or the string "default" to use the instance's
+// main account.
+func (c *Client) ScopesWithContext(ctx context.Context, serviceAccount string) ([]string, error) {
+	if serviceAccount == "" {
+		serviceAccount = "default"
+	}
+	return c.lines(ctx, "instance/service-accounts/"+serviceAccount+"/scopes")
+}
+
+// Subscribe subscribes to a value from the metadata service.
+// The suffix is appended to "http://${GCE_METADATA_HOST}/computeMetadata/v1/".
+// The suffix may contain query parameters.
+//
+// Deprecated: Please use the context aware variant [Client.SubscribeWithContext].
+func (c *Client) Subscribe(suffix string, fn func(v string, ok bool) error) error {
+	return c.SubscribeWithContext(context.Background(), suffix, func(ctx context.Context, v string, ok bool) error { return fn(v, ok) })
+}
+
+// SubscribeWithContext subscribes to a value from the metadata service.
+// The suffix is appended to "http://${GCE_METADATA_HOST}/computeMetadata/v1/".
+// The suffix may contain query parameters.
+//
+// SubscribeWithContext calls fn with the latest metadata value indicated by the
+// provided suffix. If the metadata value is deleted, fn is called with the
+// empty string and ok false. SubscribeWithContext blocks until fn returns a
+// non-nil error or the value is deleted. It returns the error value returned
+// from the last call to fn, which may be nil when ok == false.
+func (c *Client) SubscribeWithContext(ctx context.Context, suffix string, fn func(ctx context.Context, v string, ok bool) error) error {
+	const failedSubscribeSleep = time.Second * 5
+
+	// First check to see if the metadata value exists at all.
+	val, lastETag, err := c.getETag(ctx, suffix)
+	if err != nil {
+		return err
+	}
+
+	if err := fn(ctx, val, true); err != nil {
+		return err
+	}
+
+	ok := true
+	if strings.ContainsRune(suffix, '?') {
+		suffix += "&wait_for_change=true&last_etag="
+	} else {
+		suffix += "?wait_for_change=true&last_etag="
+	}
+	for {
+		val, etag, err := c.getETag(ctx, suffix+url.QueryEscape(lastETag))
+		if err != nil {
+			if _, deleted := err.(NotDefinedError); !deleted {
+				time.Sleep(failedSubscribeSleep)
+				continue // Retry on other errors.
+			}
+			ok = false
+		}
+		lastETag = etag
+
+		if err := fn(ctx, val, ok); err != nil || !ok {
+			return err
+		}
+	}
+}
+
+// Error contains an error response from the server.
+type Error struct {
+	// Code is the HTTP response status code.
+	Code int
+	// Message is the server response message.
+	Message string
+}
+
+func (e *Error) Error() string {
+	return fmt.Sprintf("compute: Received %d `%s`", e.Code, e.Message)
+}

vendor/cloud.google.com/go/compute/metadata/retry.go

@@ -0,0 +1,114 @@
+// Copyright 2021 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package metadata
+
+import (
+	"context"
+	"io"
+	"math/rand"
+	"net/http"
+	"time"
+)
+
+const (
+	maxRetryAttempts = 5
+)
+
+var (
+	syscallRetryable = func(error) bool { return false }
+)
+
+// defaultBackoff is basically equivalent to gax.Backoff without the need for
+// the dependency.
+type defaultBackoff struct {
+	max time.Duration
+	mul float64
+	cur time.Duration
+}
+
+func (b *defaultBackoff) Pause() time.Duration {
+	d := time.Duration(1 + rand.Int63n(int64(b.cur)))
+	b.cur = time.Duration(float64(b.cur) * b.mul)
+	if b.cur > b.max {
+		b.cur = b.max
+	}
+	return d
+}
+
+// sleep is the equivalent of gax.Sleep without the need for the dependency.
+func sleep(ctx context.Context, d time.Duration) error {
+	t := time.NewTimer(d)
+	select {
+	case <-ctx.Done():
+		t.Stop()
+		return ctx.Err()
+	case <-t.C:
+		return nil
+	}
+}
+
+func newRetryer() *metadataRetryer {
+	return &metadataRetryer{bo: &defaultBackoff{
+		cur: 100 * time.Millisecond,
+		max: 30 * time.Second,
+		mul: 2,
+	}}
+}
+
+type backoff interface {
+	Pause() time.Duration
+}
+
+type metadataRetryer struct {
+	bo       backoff
+	attempts int
+}
+
+func (r *metadataRetryer) Retry(status int, err error) (time.Duration, bool) {
+	if status == http.StatusOK {
+		return 0, false
+	}
+	retryOk := shouldRetry(status, err)
+	if !retryOk {
+		return 0, false
+	}
+	if r.attempts == maxRetryAttempts {
+		return 0, false
+	}
+	r.attempts++
+	return r.bo.Pause(), true
+}
+
+func shouldRetry(status int, err error) bool {
+	if 500 <= status && status <= 599 {
+		return true
+	}
+	if err == io.ErrUnexpectedEOF {
+		return true
+	}
+	// Transient network errors should be retried.
+	if syscallRetryable(err) {
+		return true
+	}
+	if err, ok := err.(interface{ Temporary() bool }); ok {
+		if err.Temporary() {
+			return true
+		}
+	}
+	if err, ok := err.(interface{ Unwrap() error }); ok {
+		return shouldRetry(status, err.Unwrap())
+	}
+	return false
+}

vendor/cloud.google.com/go/compute/metadata/retry_linux.go

@@ -0,0 +1,31 @@
+// Copyright 2021 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+// +build linux
+
+package metadata
+
+import (
+	"errors"
+	"syscall"
+)
+
+func init() {
+	// Initialize syscallRetryable to return true on transient socket-level
+	// errors. These errors are specific to Linux.
+	syscallRetryable = func(err error) bool {
+		return errors.Is(err, syscall.ECONNRESET) || errors.Is(err, syscall.ECONNREFUSED)
+	}
+}

vendor/cloud.google.com/go/compute/metadata/syscheck.go

@@ -0,0 +1,26 @@
+// Copyright 2024 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build !windows && !linux
+
+package metadata
+
+// systemInfoSuggestsGCE reports whether the local system (without
+// doing network requests) suggests that we're running on GCE. If this
+// returns true, testOnGCE tries a bit harder to reach its metadata
+// server.
+func systemInfoSuggestsGCE() bool {
+	// We don't currently have checks for other GOOS
+	return false
+}

vendor/cloud.google.com/go/compute/metadata/syscheck_linux.go

@@ -0,0 +1,28 @@
+// Copyright 2024 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build linux
+
+package metadata
+
+import (
+	"os"
+	"strings"
+)
+
+func systemInfoSuggestsGCE() bool {
+	b, _ := os.ReadFile("/sys/class/dmi/id/product_name")
+	name := strings.TrimSpace(string(b))
+	return name == "Google" || name == "Google Compute Engine"
+}

vendor/cloud.google.com/go/compute/metadata/syscheck_windows.go

@@ -0,0 +1,38 @@
+// Copyright 2024 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//      http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//go:build windows
+
+package metadata
+
+import (
+	"strings"
+
+	"golang.org/x/sys/windows/registry"
+)
+
+func systemInfoSuggestsGCE() bool {
+	k, err := registry.OpenKey(registry.LOCAL_MACHINE, `SYSTEM\HardwareConfig\Current`, registry.QUERY_VALUE)
+	if err != nil {
+		return false
+	}
+	defer k.Close()
+
+	s, _, err := k.GetStringValue("SystemProductName")
+	if err != nil {
+		return false
+	}
+	s = strings.TrimSpace(s)
+	return strings.HasPrefix(s, "Google")
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/CHANGELOG.md

@@ -0,0 +1,849 @@
+# Release History
+
+## 1.17.0 (2025-01-07)
+
+### Features Added
+
+* Added field `OperationLocationResultPath` to `runtime.NewPollerOptions[T]` for LROs that use the `Operation-Location` pattern.
+* Support `encoding.TextMarshaler` and `encoding.TextUnmarshaler` interfaces in `arm.ResourceID`.
+
+## 1.16.0 (2024-10-17)
+
+### Features Added
+
+* Added field `Kind` to `runtime.StartSpanOptions` to allow a kind to be set when starting a span.
+
+### Bugs Fixed
+
+* `BearerTokenPolicy` now rewinds request bodies before retrying
+
+## 1.15.0 (2024-10-14)
+
+### Features Added
+
+* `BearerTokenPolicy` handles CAE claims challenges
+
+### Bugs Fixed
+
+* Omit the `ResponseError.RawResponse` field from JSON marshaling so instances can be marshaled.
+* Fixed an integer overflow in the retry policy.
+
+### Other Changes
+
+* Update dependencies.
+
+## 1.14.0 (2024-08-07)
+
+### Features Added
+
+* Added field `Attributes` to `runtime.StartSpanOptions` to simplify creating spans with attributes.
+
+### Other Changes
+
+* Include the HTTP verb and URL in `log.EventRetryPolicy` log entries so it's clear which operation is being retried.
+
+## 1.13.0 (2024-07-16)
+
+### Features Added
+
+- Added runtime.NewRequestFromRequest(), allowing for a policy.Request to be created from an existing *http.Request.
+
+## 1.12.0 (2024-06-06)
+
+### Features Added
+
+* Added field `StatusCodes` to `runtime.FetcherForNextLinkOptions` allowing for additional HTTP status codes indicating success.
+* Added func `NewUUID` to the `runtime` package for generating UUIDs.
+
+### Bugs Fixed
+
+* Fixed an issue that prevented pollers using the `Operation-Location` strategy from unmarshaling the final result in some cases.
+
+### Other Changes
+
+* Updated dependencies.
+
+## 1.11.1 (2024-04-02)
+
+### Bugs Fixed
+
+* Pollers that use the `Location` header won't consider `http.StatusRequestTimeout` a terminal failure.
+* `runtime.Poller[T].Result` won't consider non-terminal error responses as terminal.
+
+## 1.11.0 (2024-04-01)
+
+### Features Added
+
+* Added `StatusCodes` to `arm/policy.RegistrationOptions` to allow supporting non-standard HTTP status codes during registration.
+* Added field `InsecureAllowCredentialWithHTTP` to `azcore.ClientOptions` and dependent authentication pipeline policies.
+* Added type `MultipartContent` to the `streaming` package to support multipart/form payloads with custom Content-Type and file name.
+
+### Bugs Fixed
+
+* `runtime.SetMultipartFormData` won't try to stringify `[]byte` values.
+* Pollers that use the `Location` header won't consider `http.StatusTooManyRequests` a terminal failure.
+
+### Other Changes
+
+* Update dependencies.
+
+## 1.10.0 (2024-02-29)
+
+### Features Added
+
+* Added logging event `log.EventResponseError` that will contain the contents of `ResponseError.Error()` whenever an `azcore.ResponseError` is created.
+* Added `runtime.NewResponseErrorWithErrorCode` for creating an `azcore.ResponseError` with a caller-supplied error code.
+* Added type `MatchConditions` for use in conditional requests.
+
+### Bugs Fixed
+
+* Fixed a potential race condition between `NullValue` and `IsNullValue`.
+* `runtime.EncodeQueryParams` will escape semicolons before calling `url.ParseQuery`.
+
+### Other Changes
+
+* Update dependencies.
+
+## 1.9.2 (2024-02-06)
+
+### Bugs Fixed
+
+* `runtime.MarshalAsByteArray` and `runtime.MarshalAsJSON` will preserve the preexisting value of the `Content-Type` header.
+
+### Other Changes
+
+* Update to latest version of `internal`.
+
+## 1.9.1 (2023-12-11)
+
+### Bugs Fixed
+
+* The `retry-after-ms` and `x-ms-retry-after-ms` headers weren't being checked during retries.
+
+### Other Changes
+
+* Update dependencies.
+
+## 1.9.0 (2023-11-06)
+
+### Breaking Changes
+> These changes affect only code written against previous beta versions of `v1.7.0` and `v1.8.0`
+* The function `NewTokenCredential` has been removed from the `fake` package. Use a literal `&fake.TokenCredential{}` instead.
+* The field `TracingNamespace` in `runtime.PipelineOptions` has been replaced by `TracingOptions`.
+
+### Bugs Fixed
+
+* Fixed an issue that could cause some allowed HTTP header values to not show up in logs.
+* Include error text instead of error type in traces when the transport returns an error.
+* Fixed an issue that could cause an HTTP/2 request to hang when the TCP connection becomes unresponsive.
+* Block key and SAS authentication for non TLS protected endpoints.
+* Passing a `nil` credential value will no longer cause a panic. Instead, the authentication is skipped.
+* Calling `Error` on a zero-value `azcore.ResponseError` will no longer panic.
+* Fixed an issue in `fake.PagerResponder[T]` that would cause a trailing error to be omitted when iterating over pages.
+* Context values created by `azcore` will no longer flow across disjoint HTTP requests.
+
+### Other Changes
+
+* Skip generating trace info for no-op tracers.
+* The `clientName` parameter in client constructors has been renamed to `moduleName`.
+
+## 1.9.0-beta.1 (2023-10-05)
+
+### Other Changes
+
+* The beta features for tracing and fakes have been reinstated.
+
+## 1.8.0 (2023-10-05)
+
+### Features Added
+
+* This includes the following features from `v1.8.0-beta.N` releases.
+  * Claims and CAE for authentication.
+  * New `messaging` package.
+  * Various helpers in the `runtime` package.
+  * Deprecation of `runtime.With*` funcs and their replacements in the `policy` package.
+* Added types `KeyCredential` and `SASCredential` to the `azcore` package.
+  * Includes their respective constructor functions.
+* Added types `KeyCredentialPolicy` and `SASCredentialPolicy` to the `azcore/runtime` package.
+  * Includes their respective constructor functions and options types.
+
+### Breaking Changes
+> These changes affect only code written against beta versions of `v1.8.0`
+* The beta features for tracing and fakes have been omitted for this release.
+
+### Bugs Fixed
+
+* Fixed an issue that could cause some ARM RPs to not be automatically registered.
+* Block bearer token authentication for non TLS protected endpoints.
+
+### Other Changes
+
+* Updated dependencies.
+
+## 1.8.0-beta.3 (2023-09-07)
+
+### Features Added
+
+* Added function `FetcherForNextLink` and `FetcherForNextLinkOptions` to the `runtime` package to centralize creation of `Pager[T].Fetcher` from a next link URL.
+
+### Bugs Fixed
+
+* Suppress creating spans for nested SDK API calls. The HTTP span will be a child of the outer API span.
+
+### Other Changes
+
+* The following functions in the `runtime` package are now exposed from the `policy` package, and the `runtime` versions have been deprecated.
+  * `WithCaptureResponse`
+  * `WithHTTPHeader`
+  * `WithRetryOptions`
+
+## 1.7.2 (2023-09-06)
+
+### Bugs Fixed
+
+* Fix default HTTP transport to work in WASM modules.
+
+## 1.8.0-beta.2 (2023-08-14)
+
+### Features Added
+
+* Added function `SanitizePagerPollerPath` to the `server` package to centralize sanitization and formalize the contract.
+* Added `TokenRequestOptions.EnableCAE` to indicate whether to request a CAE token.
+
+### Breaking Changes
+
+> This change affects only code written against beta version `v1.8.0-beta.1`.
+* `messaging.CloudEvent` deserializes JSON objects as `[]byte`, instead of `json.RawMessage`. See the documentation for CloudEvent.Data for more information.
+
+> This change affects only code written against beta versions `v1.7.0-beta.2` and `v1.8.0-beta.1`.
+* Removed parameter from method `Span.End()` and its type `tracing.SpanEndOptions`. This API GA'ed in `v1.2.0` so we cannot change it.
+
+### Bugs Fixed
+
+* Propagate any query parameters when constructing a fake poller and/or injecting next links.
+
+## 1.7.1 (2023-08-14)
+
+### Bugs Fixed
+
+* Enable TLS renegotiation in the default transport policy.
+
+## 1.8.0-beta.1 (2023-07-12)
+
+### Features Added
+
+- `messaging/CloudEvent` allows you to serialize/deserialize CloudEvents, as described in the CloudEvents 1.0 specification: [link](https://github.com/cloudevents/spec)
+
+### Other Changes
+
+* The beta features for CAE, tracing, and fakes have been reinstated.
+
+## 1.7.0 (2023-07-12)
+
+### Features Added
+* Added method `WithClientName()` to type `azcore.Client` to support shallow cloning of a client with a new name used for tracing.
+
+### Breaking Changes
+> These changes affect only code written against beta versions v1.7.0-beta.1 or v1.7.0-beta.2
+* The beta features for CAE, tracing, and fakes have been omitted for this release.
+
+## 1.7.0-beta.2 (2023-06-06)
+
+### Breaking Changes
+> These changes affect only code written against beta version v1.7.0-beta.1
+* Method `SpanFromContext()` on type `tracing.Tracer` had the `bool` return value removed.
+  * This includes the field `SpanFromContext` in supporting type `tracing.TracerOptions`.
+* Method `AddError()` has been removed from type `tracing.Span`.
+* Method `Span.End()` now requires an argument of type `*tracing.SpanEndOptions`.
+
+## 1.6.1 (2023-06-06)
+
+### Bugs Fixed
+* Fixed an issue in `azcore.NewClient()` and `arm.NewClient()` that could cause an incorrect module name to be used in telemetry.
+
+### Other Changes
+* This version contains all bug fixes from `v1.7.0-beta.1`
+
+## 1.7.0-beta.1 (2023-05-24)
+
+### Features Added
+* Restored CAE support for ARM clients.
+* Added supporting features to enable distributed tracing.
+  * Added func `runtime.StartSpan()` for use by SDKs to start spans.
+  * Added method `WithContext()` to `runtime.Request` to support shallow cloning with a new context.
+  * Added field `TracingNamespace` to `runtime.PipelineOptions`.
+  * Added field `Tracer` to `runtime.NewPollerOptions` and `runtime.NewPollerFromResumeTokenOptions` types.
+  * Added field `SpanFromContext` to `tracing.TracerOptions`.
+  * Added methods `Enabled()`, `SetAttributes()`, and `SpanFromContext()` to `tracing.Tracer`.
+  * Added supporting pipeline policies to include HTTP spans when creating clients.
+* Added package `fake` to support generated fakes packages in SDKs.
+  * The package contains public surface area exposed by fake servers and supporting APIs intended only for use by the fake server implementations.
+  * Added an internal fake poller implementation.
+
+### Bugs Fixed
+* Retry policy always clones the underlying `*http.Request` before invoking the next policy.
+* Added some non-standard error codes to the list of error codes for unregistered resource providers.
+
+## 1.6.0 (2023-05-04)
+
+### Features Added
+* Added support for ARM cross-tenant authentication. Set the `AuxiliaryTenants` field of `arm.ClientOptions` to enable.
+* Added `TenantID` field to `policy.TokenRequestOptions`.
+
+## 1.5.0 (2023-04-06)
+
+### Features Added
+* Added `ShouldRetry` to `policy.RetryOptions` for finer-grained control over when to retry.
+
+### Breaking Changes
+> These changes affect only code written against a beta version such as v1.5.0-beta.1
+> These features will return in v1.6.0-beta.1.
+* Removed `TokenRequestOptions.Claims` and `.TenantID`
+* Removed ARM client support for CAE and cross-tenant auth.
+
+### Bugs Fixed
+* Added non-conformant LRO terminal states `Cancelled` and `Completed`.
+
+### Other Changes
+* Updated to latest `internal` module.
+
+## 1.5.0-beta.1 (2023-03-02)
+
+### Features Added
+* This release includes the features added in v1.4.0-beta.1
+
+## 1.4.0 (2023-03-02)
+> This release doesn't include features added in v1.4.0-beta.1. They will return in v1.5.0-beta.1.
+
+### Features Added
+* Add `Clone()` method for `arm/policy.ClientOptions`.
+
+### Bugs Fixed
+* ARM's RP registration policy will no longer swallow unrecognized errors.
+* Fixed an issue in `runtime.NewPollerFromResumeToken()` when resuming a `Poller` with a custom `PollingHandler`.
+* Fixed wrong policy copy in `arm/runtime.NewPipeline()`.
+
+## 1.4.0-beta.1 (2023-02-02)
+
+### Features Added
+* Added support for ARM cross-tenant authentication. Set the `AuxiliaryTenants` field of `arm.ClientOptions` to enable.
+* Added `Claims` and `TenantID` fields to `policy.TokenRequestOptions`.
+* ARM bearer token policy handles CAE challenges.
+
+## 1.3.1 (2023-02-02)
+
+### Other Changes
+* Update dependencies to latest versions.
+
+## 1.3.0 (2023-01-06)
+
+### Features Added
+* Added `BearerTokenOptions.AuthorizationHandler` to enable extending `runtime.BearerTokenPolicy`
+  with custom authorization logic
+* Added `Client` types and matching constructors to the `azcore` and `arm` packages.  These represent a basic client for HTTP and ARM respectively.
+
+### Other Changes
+* Updated `internal` module to latest version.
+* `policy/Request.SetBody()` allows replacing a request's body with an empty one
+
+## 1.2.0 (2022-11-04)
+
+### Features Added
+* Added `ClientOptions.APIVersion` field, which overrides the default version a client
+  requests of the service, if the client supports this (all ARM clients do).
+* Added package `tracing` that contains the building blocks for distributed tracing.
+* Added field `TracingProvider` to type `policy.ClientOptions` that will be used to set the per-client tracing implementation.
+
+### Bugs Fixed
+* Fixed an issue in `runtime.SetMultipartFormData` to properly handle slices of `io.ReadSeekCloser`.
+* Fixed the MaxRetryDelay default to be 60s.
+* Failure to poll the state of an LRO will now return an `*azcore.ResponseError` for poller types that require this behavior.
+* Fixed a bug in `runtime.NewPipeline` that would cause pipeline-specified allowed headers and query parameters to be lost.
+
+### Other Changes
+* Retain contents of read-only fields when sending requests.
+
+## 1.1.4 (2022-10-06)
+
+### Bugs Fixed
+* Don't retry a request if the `Retry-After` delay is greater than the configured `RetryOptions.MaxRetryDelay`.
+* `runtime.JoinPaths`: do not unconditionally add a forward slash before the query string
+
+### Other Changes
+* Removed logging URL from retry policy as it's redundant.
+* Retry policy logs when it exits due to a non-retriable status code.
+
+## 1.1.3 (2022-09-01)
+
+### Bugs Fixed
+* Adjusted the initial retry delay to 800ms per the Azure SDK guidelines.
+
+## 1.1.2 (2022-08-09)
+
+### Other Changes
+* Fixed various doc bugs.
+
+## 1.1.1 (2022-06-30)
+
+### Bugs Fixed
+* Avoid polling when a RELO LRO synchronously terminates.
+
+## 1.1.0 (2022-06-03)
+
+### Other Changes
+* The one-second floor for `Frequency` when calling `PollUntilDone()` has been removed when running tests.
+
+## 1.0.0 (2022-05-12)
+
+### Features Added
+* Added interface `runtime.PollingHandler` to support custom poller implementations.
+  * Added field `PollingHandler` of this type to `runtime.NewPollerOptions[T]` and `runtime.NewPollerFromResumeTokenOptions[T]`.
+
+### Breaking Changes
+* Renamed `cloud.Configuration.LoginEndpoint` to `.ActiveDirectoryAuthorityHost`
+* Renamed `cloud.AzurePublicCloud` to `cloud.AzurePublic`
+* Removed `AuxiliaryTenants` field from `arm/ClientOptions` and `arm/policy/BearerTokenOptions`
+* Removed `TokenRequestOptions.TenantID`
+* `Poller[T].PollUntilDone()` now takes an `options *PollUntilDoneOptions` param instead of `freq time.Duration`
+* Removed `arm/runtime.Poller[T]`, `arm/runtime.NewPoller[T]()` and `arm/runtime.NewPollerFromResumeToken[T]()`
+* Removed `arm/runtime.FinalStateVia` and related `const` values
+* Renamed `runtime.PageProcessor` to `runtime.PagingHandler`
+* The `arm/runtime.ProviderRepsonse` and `arm/runtime.Provider` types are no longer exported.
+* Renamed `NewRequestIdPolicy()` to `NewRequestIDPolicy()`
+* `TokenCredential.GetToken` now returns `AccessToken` by value.
+
+### Bugs Fixed
+* When per-try timeouts are enabled, only cancel the context after the body has been read and closed.
+* The `Operation-Location` poller now properly handles `final-state-via` values.
+* Improvements in `runtime.Poller[T]`
+  * `Poll()` shouldn't cache errors, allowing for additional retries when in a non-terminal state.
+  * `Result()` will cache the terminal result or error but not transient errors, allowing for additional retries.
+
+### Other Changes
+* Updated to latest `internal` module and absorbed breaking changes.
+  * Use `temporal.Resource` and deleted copy.
+* The internal poller implementation has been refactored.
+  * The implementation in `internal/pollers/poller.go` has been merged into `runtime/poller.go` with some slight modification.
+  * The internal poller types had their methods updated to conform to the `runtime.PollingHandler` interface.
+  * The creation of resume tokens has been refactored so that implementers of `runtime.PollingHandler` don't need to know about it.
+* `NewPipeline()` places policies from `ClientOptions` after policies from `PipelineOptions`
+* Default User-Agent headers no longer include `azcore` version information
+
+## 0.23.1 (2022-04-14)
+
+### Bugs Fixed
+* Include XML header when marshalling XML content.
+* Handle XML namespaces when searching for error code.
+* Handle `odata.error` when searching for error code.
+
+## 0.23.0 (2022-04-04)
+
+### Features Added
+* Added `runtime.Pager[T any]` and `runtime.Poller[T any]` supporting types for central, generic, implementations.
+* Added `cloud` package with a new API for cloud configuration
+* Added `FinalStateVia` field to `runtime.NewPollerOptions[T any]` type.
+
+### Breaking Changes
+* Removed the `Poller` type-alias to the internal poller implementation.
+* Added `Ptr[T any]` and `SliceOfPtrs[T any]` in the `to` package and removed all non-generic implementations.
+* `NullValue` and `IsNullValue` now take a generic type parameter instead of an interface func parameter.
+* Replaced `arm.Endpoint` with `cloud` API
+  * Removed the `endpoint` parameter from `NewRPRegistrationPolicy()`
+  * `arm/runtime.NewPipeline()` and `.NewRPRegistrationPolicy()` now return an `error`
+* Refactored `NewPoller` and `NewPollerFromResumeToken` funcs in `arm/runtime` and `runtime` packages.
+  * Removed the `pollerID` parameter as it's no longer required.
+  * Created optional parameter structs and moved optional parameters into them.
+* Changed `FinalStateVia` field to a `const` type.
+
+### Other Changes
+* Converted expiring resource and dependent types to use generics.
+
+## 0.22.0 (2022-03-03)
+
+### Features Added
+* Added header `WWW-Authenticate` to the default allow-list of headers for logging.
+* Added a pipeline policy that enables the retrieval of HTTP responses from API calls.
+  * Added `runtime.WithCaptureResponse` to enable the policy at the API level (off by default).
+
+### Breaking Changes
+* Moved `WithHTTPHeader` and `WithRetryOptions` from the `policy` package to the `runtime` package.
+
+## 0.21.1 (2022-02-04)
+
+### Bugs Fixed
+* Restore response body after reading in `Poller.FinalResponse()`. (#16911)
+* Fixed bug in `NullValue` that could lead to incorrect comparisons for empty maps/slices (#16969)
+
+### Other Changes
+* `BearerTokenPolicy` is more resilient to transient authentication failures. (#16789)
+
+## 0.21.0 (2022-01-11)
+
+### Features Added
+* Added `AllowedHeaders` and `AllowedQueryParams` to `policy.LogOptions` to control which headers and query parameters are written to the logger.
+* Added `azcore.ResponseError` type which is returned from APIs when a non-success HTTP status code is received.
+
+### Breaking Changes
+* Moved `[]policy.Policy` parameters of `arm/runtime.NewPipeline` and `runtime.NewPipeline` into a new struct, `runtime.PipelineOptions`
+* Renamed `arm/ClientOptions.Host` to `.Endpoint`
+* Moved `Request.SkipBodyDownload` method to function `runtime.SkipBodyDownload`
+* Removed `azcore.HTTPResponse` interface type
+* `arm.NewPoller()` and `runtime.NewPoller()` no longer require an `eu` parameter
+* `runtime.NewResponseError()` no longer requires an `error` parameter
+
+## 0.20.0 (2021-10-22)
+
+### Breaking Changes
+* Removed `arm.Connection`
+* Removed `azcore.Credential` and `.NewAnonymousCredential()`
+  * `NewRPRegistrationPolicy` now requires an `azcore.TokenCredential`
+* `runtime.NewPipeline` has a new signature that simplifies implementing custom authentication
+* `arm/runtime.RegistrationOptions` embeds `policy.ClientOptions`
+* Contents in the `log` package have been slightly renamed.
+* Removed `AuthenticationOptions` in favor of `policy.BearerTokenOptions`
+* Changed parameters for `NewBearerTokenPolicy()`
+* Moved policy config options out of `arm/runtime` and into `arm/policy`
+
+### Features Added
+* Updated documentation
+* Added string typedef `arm.Endpoint` to provide a hint toward expected ARM client endpoints
+* `azcore.ClientOptions` contains common pipeline configuration settings
+* Added support for multi-tenant authorization in `arm/runtime`
+* Require one second minimum when calling `PollUntilDone()`
+
+### Bugs Fixed
+* Fixed a potential panic when creating the default Transporter.
+* Close LRO initial response body when creating a poller.
+* Fixed a panic when recursively cloning structs that contain time.Time.
+
+## 0.19.0 (2021-08-25)
+
+### Breaking Changes
+* Split content out of `azcore` into various packages.  The intent is to separate content based on its usage (common, uncommon, SDK authors).
+  * `azcore` has all core functionality.
+  * `log` contains facilities for configuring in-box logging.
+  * `policy` is used for configuring pipeline options and creating custom pipeline policies.
+  * `runtime` contains various helpers used by SDK authors and generated content.
+  * `streaming` has helpers for streaming IO operations.
+* `NewTelemetryPolicy()` now requires module and version parameters and the `Value` option has been removed.
+  * As a result, the `Request.Telemetry()` method has been removed.
+* The telemetry policy now includes the SDK prefix `azsdk-go-` so callers no longer need to provide it.
+* The `*http.Request` in `runtime.Request` is no longer anonymously embedded.  Use the `Raw()` method to access it.
+* The `UserAgent` and `Version` constants have been made internal, `Module` and `Version` respectively.
+
+### Bugs Fixed
+* Fixed an issue in the retry policy where the request body could be overwritten after a rewind.
+
+### Other Changes
+* Moved modules `armcore` and `to` content into `arm` and `to` packages respectively.
+  * The `Pipeline()` method on `armcore.Connection` has been replaced by `NewPipeline()` in `arm.Connection`.  It takes module and version parameters used by the telemetry policy.
+* Poller logic has been consolidated across ARM and core implementations.
+  * This required some changes to the internal interfaces for core pollers.
+* The core poller types have been improved, including more logging and test coverage.
+
+## 0.18.1 (2021-08-20)
+
+### Features Added
+* Adds an `ETag` type for comparing etags and handling etags on requests
+* Simplifies the `requestBodyProgess` and `responseBodyProgress` into a single `progress` object
+
+### Bugs Fixed
+* `JoinPaths` will preserve query parameters encoded in the `root` url.
+
+### Other Changes
+* Bumps dependency on `internal` module to the latest version (v0.7.0)
+
+## 0.18.0 (2021-07-29)
+### Features Added
+* Replaces methods from Logger type with two package methods for interacting with the logging functionality.
+* `azcore.SetClassifications` replaces `azcore.Logger().SetClassifications`
+* `azcore.SetListener` replaces `azcore.Logger().SetListener`
+
+### Breaking Changes
+* Removes `Logger` type from `azcore`
+
+
+## 0.17.0 (2021-07-27)
+### Features Added
+* Adding TenantID to TokenRequestOptions (https://github.com/Azure/azure-sdk-for-go/pull/14879)
+* Adding AuxiliaryTenants to AuthenticationOptions (https://github.com/Azure/azure-sdk-for-go/pull/15123)
+
+### Breaking Changes
+* Rename `AnonymousCredential` to `NewAnonymousCredential` (https://github.com/Azure/azure-sdk-for-go/pull/15104)
+* rename `AuthenticationPolicyOptions` to `AuthenticationOptions` (https://github.com/Azure/azure-sdk-for-go/pull/15103)
+* Make Header constants private (https://github.com/Azure/azure-sdk-for-go/pull/15038)
+
+
+## 0.16.2 (2021-05-26)
+### Features Added
+* Improved support for byte arrays [#14715](https://github.com/Azure/azure-sdk-for-go/pull/14715)
+
+
+## 0.16.1 (2021-05-19)
+### Features Added
+* Add license.txt to azcore module [#14682](https://github.com/Azure/azure-sdk-for-go/pull/14682)
+
+
+## 0.16.0 (2021-05-07)
+### Features Added
+* Remove extra `*` in UnmarshalAsByteArray() [#14642](https://github.com/Azure/azure-sdk-for-go/pull/14642)
+
+
+## 0.15.1 (2021-05-06)
+### Features Added
+* Cache the original request body on Request [#14634](https://github.com/Azure/azure-sdk-for-go/pull/14634)
+
+
+## 0.15.0 (2021-05-05)
+### Features Added
+* Add support for null map and slice
+* Export `Response.Payload` method
+
+### Breaking Changes
+* remove `Response.UnmarshalError` as it's no longer required
+
+
+## 0.14.5 (2021-04-23)
+### Features Added
+* Add `UnmarshalError()` on `azcore.Response`
+
+
+## 0.14.4 (2021-04-22)
+### Features Added
+* Support for basic LRO polling
+* Added type `LROPoller` and supporting types for basic polling on long running operations.
+* Renamed the poller parameter and added a doc comment
+
+### Bugs Fixed
+* Fixed content type detection bug in logging.
+
+
+## 0.14.3 (2021-03-29)
+### Features Added
+* Add support for multi-part form data
+* Added method `WriteMultipartFormData()` to Request.
+
+
+## 0.14.2 (2021-03-17)
+### Features Added
+* Add support for encoding JSON null values
+* Adds `NullValue()` and `IsNullValue()` functions for setting and detecting sentinel values used for encoding a JSON null.
+* Documentation fixes
+
+### Bugs Fixed
+* Fixed improper error wrapping
+
+
+## 0.14.1 (2021-02-08)
+### Features Added
+* Add `Pager` and `Poller` interfaces to azcore
+
+
+## 0.14.0 (2021-01-12)
+### Features Added
+* Accept zero-value options for default values
+* Specify zero-value options structs to accept default values.
+* Remove `DefaultXxxOptions()` methods.
+* Do not silently change TryTimeout on negative values
+* make per-try timeout opt-in
+
+
+## 0.13.4 (2020-11-20)
+### Features Added
+* Include telemetry string in User Agent
+
+
+## 0.13.3 (2020-11-20)
+### Features Added
+* Updating response body handling on `azcore.Response`
+
+
+## 0.13.2 (2020-11-13)
+### Features Added
+* Remove implementation of stateless policies as first-class functions.
+
+
+## 0.13.1 (2020-11-05)
+### Features Added
+* Add `Telemetry()` method to `azcore.Request()`
+
+
+## 0.13.0 (2020-10-14)
+### Features Added
+* Rename `log` to `logger` to avoid name collision with the log package.
+* Documentation improvements
+* Simplified `DefaultHTTPClientTransport()` implementation
+
+
+## 0.12.1 (2020-10-13)
+### Features Added
+* Update `internal` module dependence to `v0.5.0`
+
+
+## 0.12.0 (2020-10-08)
+### Features Added
+* Removed storage specific content
+* Removed internal content to prevent API clutter
+* Refactored various policy options to conform with our options pattern
+
+
+## 0.11.0 (2020-09-22)
+### Features Added
+
+* Removed `LogError` and `LogSlowResponse`.
+* Renamed `options` in `RequestLogOptions`.
+* Updated `NewRequestLogPolicy()` to follow standard pattern for options.
+* Refactored `requestLogPolicy.Do()` per above changes.
+* Cleaned up/added logging in retry policy.
+* Export `NewResponseError()`
+* Fix `RequestLogOptions` comment
+
+
+## 0.10.1 (2020-09-17)
+### Features Added
+* Add default console logger
+* Default console logger writes to stderr. To enable it, set env var `AZURE_SDK_GO_LOGGING` to the value 'all'.
+* Added `Logger.Writef()` to reduce the need for `ShouldLog()` checks.
+* Add `LogLongRunningOperation`
+
+
+## 0.10.0 (2020-09-10)
+### Features Added
+* The `request` and `transport` interfaces have been refactored to align with the patterns in the standard library.
+* `NewRequest()` now uses `http.NewRequestWithContext()` and performs additional validation, it also requires a context parameter.
+* The `Policy` and `Transport` interfaces have had their context parameter removed as the context is associated with the underlying `http.Request`.
+* `Pipeline.Do()` will validate the HTTP request before sending it through the pipeline, avoiding retries on a malformed request.
+* The `Retrier` interface has been replaced with the `NonRetriableError` interface, and the retry policy updated to test for this.
+* `Request.SetBody()` now requires a content type parameter for setting the request's MIME type.
+* moved path concatenation into `JoinPaths()` func
+
+
+## 0.9.6 (2020-08-18)
+### Features Added
+* Improvements to body download policy
+* Always download the response body for error responses, i.e. HTTP status codes >= 400.
+* Simplify variable declarations
+
+
+## 0.9.5 (2020-08-11)
+### Features Added
+* Set the Content-Length header in `Request.SetBody`
+
+
+## 0.9.4 (2020-08-03)
+### Features Added
+* Fix cancellation of per try timeout
+* Per try timeout is used to ensure that an HTTP operation doesn't take too long, e.g. that a GET on some URL doesn't take an inordinate amount of time.
+* Once the HTTP request returns, the per try timeout should be cancelled, not when the response has been read to completion.
+* Do not drain response body if there are no more retries
+* Do not retry non-idempotent operations when body download fails
+
+
+## 0.9.3 (2020-07-28)
+### Features Added
+* Add support for custom HTTP request headers
+* Inserts an internal policy into the pipeline that can extract HTTP header values from the caller's context, adding them to the request.
+* Use `azcore.WithHTTPHeader` to add HTTP headers to a context.
+* Remove method specific to Go 1.14
+
+
+## 0.9.2 (2020-07-28)
+### Features Added
+* Omit read-only content from request payloads
+* If any field in a payload's object graph contains `azure:"ro"`, make a clone of the object graph, omitting all fields with this annotation.
+* Verify no fields were dropped
+* Handle embedded struct types
+* Added test for cloning by value
+* Add messages to failures
+
+
+## 0.9.1 (2020-07-22)
+### Features Added
+* Updated dependency on internal module to fix race condition.
+
+
+## 0.9.0 (2020-07-09)
+### Features Added
+* Add `HTTPResponse` interface to be used by callers to access the raw HTTP response from an error in the event of an API call failure.
+* Updated `sdk/internal` dependency to latest version.
+* Rename package alias
+
+
+## 0.8.2 (2020-06-29)
+### Features Added
+* Added missing documentation comments
+
+### Bugs Fixed
+* Fixed a bug in body download policy.
+
+
+## 0.8.1 (2020-06-26)
+### Features Added
+* Miscellaneous clean-up reported by linters
+
+
+## 0.8.0 (2020-06-01)
+### Features Added
+* Differentiate between standard and URL encoding.
+
+
+## 0.7.1 (2020-05-27)
+### Features Added
+* Add support for base64 encoding and decoding of payloads.
+
+
+## 0.7.0 (2020-05-12)
+### Features Added
+* Change `RetryAfter()` to a function.
+
+
+## 0.6.0 (2020-04-29)
+### Features Added
+* Updating `RetryAfter` to only return the duration in the RetryAfter header
+
+
+## 0.5.0 (2020-03-23)
+### Features Added
+* Export `TransportFunc`
+
+### Breaking Changes
+* Removed `IterationDone`
+
+
+## 0.4.1 (2020-02-25)
+### Features Added
+* Ensure per-try timeout is properly cancelled
+* Explicitly call cancel the per-try timeout when the response body has been read/closed by the body download policy.
+* When the response body is returned to the caller for reading/closing, wrap it in a `responseBodyReader` that will cancel the timeout when the body is closed.
+* `Logger.Should()` will return false if no listener is set.
+
+
+## 0.4.0 (2020-02-18)
+### Features Added
+* Enable custom `RetryOptions` to be specified per API call
+* Added `WithRetryOptions()` that adds a custom `RetryOptions` to the provided context, allowing custom settings per API call.
+* Remove 429 from the list of default HTTP status codes for retry.
+* Change StatusCodesForRetry to a slice so consumers can append to it.
+* Added support for retry-after in HTTP-date format.
+* Cleaned up some comments specific to storage.
+* Remove `Request.SetQueryParam()`
+* Renamed `MaxTries` to `MaxRetries`
+
+## 0.3.0 (2020-01-16)
+### Features Added
+* Added `DefaultRetryOptions` to create initialized default options.
+
+### Breaking Changes
+* Removed `Response.CheckStatusCode()`
+
+
+## 0.2.0 (2020-01-15)
+### Features Added
+* Add support for marshalling and unmarshalling JSON
+* Removed `Response.Payload` field
+* Exit early when unmarshalling if there is no payload
+
+
+## 0.1.0 (2020-01-10)
+### Features Added
+* Initial release

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/LICENSE.txt 🔗

@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) Microsoft Corporation.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/README.md 🔗

@@ -0,0 +1,39 @@
+# Azure Core Client Module for Go
+
+[![PkgGoDev](https://pkg.go.dev/badge/github.com/Azure/azure-sdk-for-go/sdk/azcore)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore)
+[![Build Status](https://dev.azure.com/azure-sdk/public/_apis/build/status/go/go%20-%20azcore%20-%20ci?branchName=main)](https://dev.azure.com/azure-sdk/public/_build/latest?definitionId=1843&branchName=main)
+[![Code Coverage](https://img.shields.io/azure-devops/coverage/azure-sdk/public/1843/main)](https://img.shields.io/azure-devops/coverage/azure-sdk/public/1843/main)
+
+The `azcore` module provides a set of common interfaces and types for Go SDK client modules.
+These modules follow the [Azure SDK Design Guidelines for Go](https://azure.github.io/azure-sdk/golang_introduction.html).
+
+## Getting started
+
+This project uses [Go modules](https://github.com/golang/go/wiki/Modules) for versioning and dependency management.
+
+Typically, you will not need to explicitly install `azcore` as it will be installed as a client module dependency.
+To add the latest version to your `go.mod` file, execute the following command.
+
+```bash
+go get github.com/Azure/azure-sdk-for-go/sdk/azcore
+```
+
+General documentation and examples can be found on [pkg.go.dev](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore).
+
+## Contributing
+This project welcomes contributions and suggestions. Most contributions require
+you to agree to a Contributor License Agreement (CLA) declaring that you have
+the right to, and actually do, grant us the rights to use your contribution.
+For details, visit [https://cla.microsoft.com](https://cla.microsoft.com).
+
+When you submit a pull request, a CLA-bot will automatically determine whether
+you need to provide a CLA and decorate the PR appropriately (e.g., label,
+comment). Simply follow the instructions provided by the bot. You will only
+need to do this once across all repos using our CLA.
+
+This project has adopted the
+[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+For more information, see the
+[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any
+additional questions or comments.

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource/resource_identifier.go 🔗

@@ -0,0 +1,239 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package resource
+
+import (
+	"fmt"
+	"strings"
+)
+
+const (
+	providersKey             = "providers"
+	subscriptionsKey         = "subscriptions"
+	resourceGroupsLowerKey   = "resourcegroups"
+	locationsKey             = "locations"
+	builtInResourceNamespace = "Microsoft.Resources"
+)
+
+// RootResourceID defines the tenant as the root parent of all other ResourceID.
+var RootResourceID = &ResourceID{
+	Parent:       nil,
+	ResourceType: TenantResourceType,
+	Name:         "",
+}
+
+// ResourceID represents a resource ID such as `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRg`.
+// Don't create this type directly, use ParseResourceID instead.
+type ResourceID struct {
+	// Parent is the parent ResourceID of this instance.
+	// Can be nil if there is no parent.
+	Parent *ResourceID
+
+	// SubscriptionID is the subscription ID in this resource ID.
+	// The value can be empty if the resource ID does not contain a subscription ID.
+	SubscriptionID string
+
+	// ResourceGroupName is the resource group name in this resource ID.
+	// The value can be empty if the resource ID does not contain a resource group name.
+	ResourceGroupName string
+
+	// Provider represents the provider name in this resource ID.
+	// This is only valid when the resource ID represents a resource provider.
+	// Example: `/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Insights`
+	Provider string
+
+	// Location is the location in this resource ID.
+	// The value can be empty if the resource ID does not contain a location name.
+	Location string
+
+	// ResourceType represents the type of this resource ID.
+	ResourceType ResourceType
+
+	// Name is the resource name of this resource ID.
+	Name string
+
+	isChild     bool
+	stringValue string
+}
+
+// ParseResourceID parses a string to an instance of ResourceID
+func ParseResourceID(id string) (*ResourceID, error) {
+	if len(id) == 0 {
+		return nil, fmt.Errorf("invalid resource ID: id cannot be empty")
+	}
+
+	if !strings.HasPrefix(id, "/") {
+		return nil, fmt.Errorf("invalid resource ID: resource id '%s' must start with '/'", id)
+	}
+
+	parts := splitStringAndOmitEmpty(id, "/")
+
+	if len(parts) < 2 {
+		return nil, fmt.Errorf("invalid resource ID: %s", id)
+	}
+
+	if !strings.EqualFold(parts[0], subscriptionsKey) && !strings.EqualFold(parts[0], providersKey) {
+		return nil, fmt.Errorf("invalid resource ID: %s", id)
+	}
+
+	return appendNext(RootResourceID, parts, id)
+}
+
+// String returns the string of the ResourceID
+func (id *ResourceID) String() string {
+	if len(id.stringValue) > 0 {
+		return id.stringValue
+	}
+
+	if id.Parent == nil {
+		return ""
+	}
+
+	builder := strings.Builder{}
+	builder.WriteString(id.Parent.String())
+
+	if id.isChild {
+		builder.WriteString(fmt.Sprintf("/%s", id.ResourceType.lastType()))
+		if len(id.Name) > 0 {
+			builder.WriteString(fmt.Sprintf("/%s", id.Name))
+		}
+	} else {
+		builder.WriteString(fmt.Sprintf("/providers/%s/%s/%s", id.ResourceType.Namespace, id.ResourceType.Type, id.Name))
+	}
+
+	id.stringValue = builder.String()
+
+	return id.stringValue
+}
+
+// MarshalText returns a textual representation of the ResourceID
+func (id *ResourceID) MarshalText() ([]byte, error) {
+	return []byte(id.String()), nil
+}
+
+// UnmarshalText decodes the textual representation of a ResourceID
+func (id *ResourceID) UnmarshalText(text []byte) error {
+	newId, err := ParseResourceID(string(text))
+	if err != nil {
+		return err
+	}
+	*id = *newId
+	return nil
+}
+
+func newResourceID(parent *ResourceID, resourceTypeName string, resourceName string) *ResourceID {
+	id := &ResourceID{}
+	id.init(parent, chooseResourceType(resourceTypeName, parent), resourceName, true)
+	return id
+}
+
+func newResourceIDWithResourceType(parent *ResourceID, resourceType ResourceType, resourceName string) *ResourceID {
+	id := &ResourceID{}
+	id.init(parent, resourceType, resourceName, true)
+	return id
+}
+
+func newResourceIDWithProvider(parent *ResourceID, providerNamespace, resourceTypeName, resourceName string) *ResourceID {
+	id := &ResourceID{}
+	id.init(parent, NewResourceType(providerNamespace, resourceTypeName), resourceName, false)
+	return id
+}
+
+func chooseResourceType(resourceTypeName string, parent *ResourceID) ResourceType {
+	if strings.EqualFold(resourceTypeName, resourceGroupsLowerKey) {
+		return ResourceGroupResourceType
+	} else if strings.EqualFold(resourceTypeName, subscriptionsKey) && parent != nil && parent.ResourceType.String() == TenantResourceType.String() {
+		return SubscriptionResourceType
+	}
+
+	return parent.ResourceType.AppendChild(resourceTypeName)
+}
+
+func (id *ResourceID) init(parent *ResourceID, resourceType ResourceType, name string, isChild bool) {
+	if parent != nil {
+		id.Provider = parent.Provider
+		id.SubscriptionID = parent.SubscriptionID
+		id.ResourceGroupName = parent.ResourceGroupName
+		id.Location = parent.Location
+	}
+
+	if resourceType.String() == SubscriptionResourceType.String() {
+		id.SubscriptionID = name
+	}
+
+	if resourceType.lastType() == locationsKey {
+		id.Location = name
+	}
+
+	if resourceType.String() == ResourceGroupResourceType.String() {
+		id.ResourceGroupName = name
+	}
+
+	if resourceType.String() == ProviderResourceType.String() {
+		id.Provider = name
+	}
+
+	if parent == nil {
+		id.Parent = RootResourceID
+	} else {
+		id.Parent = parent
+	}
+	id.isChild = isChild
+	id.ResourceType = resourceType
+	id.Name = name
+}
+
+func appendNext(parent *ResourceID, parts []string, id string) (*ResourceID, error) {
+	if len(parts) == 0 {
+		return parent, nil
+	}
+
+	if len(parts) == 1 {
+		// subscriptions and resourceGroups are not valid ids without their names
+		if strings.EqualFold(parts[0], subscriptionsKey) || strings.EqualFold(parts[0], resourceGroupsLowerKey) {
+			return nil, fmt.Errorf("invalid resource ID: %s", id)
+		}
+
+		// resourceGroup must contain either child or provider resource type
+		if parent.ResourceType.String() == ResourceGroupResourceType.String() {
+			return nil, fmt.Errorf("invalid resource ID: %s", id)
+		}
+
+		return newResourceID(parent, parts[0], ""), nil
+	}
+
+	if strings.EqualFold(parts[0], providersKey) && (len(parts) == 2 || strings.EqualFold(parts[2], providersKey)) {
+		// provider resource can only be on a tenant or a subscription parent
+		if parent.ResourceType.String() != SubscriptionResourceType.String() && parent.ResourceType.String() != TenantResourceType.String() {
+			return nil, fmt.Errorf("invalid resource ID: %s", id)
+		}
+
+		return appendNext(newResourceIDWithResourceType(parent, ProviderResourceType, parts[1]), parts[2:], id)
+	}
+
+	if len(parts) > 3 && strings.EqualFold(parts[0], providersKey) {
+		return appendNext(newResourceIDWithProvider(parent, parts[1], parts[2], parts[3]), parts[4:], id)
+	}
+
+	if len(parts) > 1 && !strings.EqualFold(parts[0], providersKey) {
+		return appendNext(newResourceID(parent, parts[0], parts[1]), parts[2:], id)
+	}
+
+	return nil, fmt.Errorf("invalid resource ID: %s", id)
+}
+
+func splitStringAndOmitEmpty(v, sep string) []string {
+	r := make([]string, 0)
+	for _, s := range strings.Split(v, sep) {
+		if len(s) == 0 {
+			continue
+		}
+		r = append(r, s)
+	}
+
+	return r
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource/resource_type.go 🔗

@@ -0,0 +1,114 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package resource
+
+import (
+	"fmt"
+	"strings"
+)
+
+// SubscriptionResourceType is the ResourceType of a subscription
+var SubscriptionResourceType = NewResourceType(builtInResourceNamespace, "subscriptions")
+
+// ResourceGroupResourceType is the ResourceType of a resource group
+var ResourceGroupResourceType = NewResourceType(builtInResourceNamespace, "resourceGroups")
+
+// TenantResourceType is the ResourceType of a tenant
+var TenantResourceType = NewResourceType(builtInResourceNamespace, "tenants")
+
+// ProviderResourceType is the ResourceType of a provider
+var ProviderResourceType = NewResourceType(builtInResourceNamespace, "providers")
+
+// ResourceType represents an Azure resource type, e.g. "Microsoft.Network/virtualNetworks/subnets".
+// Don't create this type directly, use ParseResourceType or NewResourceType instead.
+type ResourceType struct {
+	// Namespace is the namespace of the resource type.
+	// e.g. "Microsoft.Network" in resource type "Microsoft.Network/virtualNetworks/subnets"
+	Namespace string
+
+	// Type is the full type name of the resource type.
+	// e.g. "virtualNetworks/subnets" in resource type "Microsoft.Network/virtualNetworks/subnets"
+	Type string
+
+	// Types is the slice of all the sub-types of this resource type.
+	// e.g. ["virtualNetworks", "subnets"] in resource type "Microsoft.Network/virtualNetworks/subnets"
+	Types []string
+
+	stringValue string
+}
+
+// String returns the string of the ResourceType
+func (t ResourceType) String() string {
+	return t.stringValue
+}
+
+// IsParentOf returns true when the receiver is the parent resource type of the child.
+func (t ResourceType) IsParentOf(child ResourceType) bool {
+	if !strings.EqualFold(t.Namespace, child.Namespace) {
+		return false
+	}
+	if len(t.Types) >= len(child.Types) {
+		return false
+	}
+	for i := range t.Types {
+		if !strings.EqualFold(t.Types[i], child.Types[i]) {
+			return false
+		}
+	}
+
+	return true
+}
+
+// AppendChild creates an instance of ResourceType using the receiver as the parent with childType appended to it.
+func (t ResourceType) AppendChild(childType string) ResourceType {
+	return NewResourceType(t.Namespace, fmt.Sprintf("%s/%s", t.Type, childType))
+}
+
+// NewResourceType creates an instance of ResourceType using a provider namespace
+// such as "Microsoft.Network" and type such as "virtualNetworks/subnets".
+func NewResourceType(providerNamespace, typeName string) ResourceType {
+	return ResourceType{
+		Namespace:   providerNamespace,
+		Type:        typeName,
+		Types:       splitStringAndOmitEmpty(typeName, "/"),
+		stringValue: fmt.Sprintf("%s/%s", providerNamespace, typeName),
+	}
+}
+
+// ParseResourceType parses a ResourceType from a resource type string (e.g. Microsoft.Network/virtualNetworks/subnets)
+// or a resource identifier string
+// (e.g. /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRg/providers/Microsoft.Network/virtualNetworks/vnet/subnets/mySubnet).
+func ParseResourceType(resourceIDOrType string) (ResourceType, error) {
+	// split the path into segments
+	parts := splitStringAndOmitEmpty(resourceIDOrType, "/")
+
+	// there must be at least one segment
+	if len(parts) < 1 {
+		return ResourceType{}, fmt.Errorf("invalid resource ID or type: %s", resourceIDOrType)
+	}
+
+	// a single segment is a built-in type in the Microsoft.Resources namespace
+	if len(parts) == 1 {
+		// Simple resource type
+		return NewResourceType(builtInResourceNamespace, parts[0]), nil
+	} else if strings.Contains(parts[0], ".") {
+		// Handle resource types (Microsoft.Compute/virtualMachines, Microsoft.Network/virtualNetworks/subnets)
+		// it is a full type name
+		return NewResourceType(parts[0], strings.Join(parts[1:], "/")), nil
+	} else {
+		// Check if ResourceID
+		id, err := ParseResourceID(resourceIDOrType)
+		if err != nil {
+			return ResourceType{}, err
+		}
+		return NewResourceType(id.ResourceType.Namespace, id.ResourceType.Type), nil
+	}
+}
+
+func (t ResourceType) lastType() string {
+	return t.Types[len(t.Types)-1]
+}

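The ResourceType helpers above (ParseResourceType, IsParentOf, AppendChild) live in an internal package, so they can't be imported directly. As a minimal standalone sketch (a re-implementation for illustration, not the SDK API itself), the parent/child check reduces to a case-insensitive namespace match plus a strict prefix match over the sub-type segments:

```go
package main

import (
	"fmt"
	"strings"
)

// splitOmitEmpty mirrors the SDK's splitStringAndOmitEmpty helper.
func splitOmitEmpty(s, sep string) []string {
	var out []string
	for _, p := range strings.Split(s, sep) {
		if p != "" {
			out = append(out, p)
		}
	}
	return out
}

// isParentOf mirrors ResourceType.IsParentOf: same namespace (case-insensitive)
// and the parent's sub-types are a strict, case-insensitive prefix of the child's.
func isParentOf(parentNS, parentType, childNS, childType string) bool {
	if !strings.EqualFold(parentNS, childNS) {
		return false
	}
	p := splitOmitEmpty(parentType, "/")
	c := splitOmitEmpty(childType, "/")
	if len(p) >= len(c) {
		return false
	}
	for i := range p {
		if !strings.EqualFold(p[i], c[i]) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(isParentOf("Microsoft.Network", "virtualNetworks",
		"Microsoft.Network", "virtualNetworks/subnets")) // true
	fmt.Println(isParentOf("Microsoft.Network", "virtualNetworks/subnets",
		"Microsoft.Network", "virtualNetworks")) // false: child has fewer segments
}
```

Note the `len(p) >= len(c)` guard: a type is never its own parent, which is why IsParentOf returns false for equal types.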
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy/policy.go 🔗

@@ -0,0 +1,108 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package policy
+
+import (
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// BearerTokenOptions configures the bearer token policy's behavior.
+type BearerTokenOptions struct {
+	// AuxiliaryTenants are additional tenant IDs for authenticating cross-tenant requests.
+	// The policy will add a token from each of these tenants to every request. The
+	// authenticating user or service principal must be a guest in these tenants, and the
+	// policy's credential must support multitenant authentication.
+	AuxiliaryTenants []string
+
+	// InsecureAllowCredentialWithHTTP enables authenticated requests over HTTP.
+	// By default, authenticated requests to an HTTP endpoint are rejected by the client.
+	// WARNING: setting this to true will allow sending the authentication key in clear text. Use with caution.
+	InsecureAllowCredentialWithHTTP bool
+
+	// Scopes contains the list of permission scopes required for the token.
+	Scopes []string
+}
+
+// RegistrationOptions configures the registration policy's behavior.
+// All zero-value fields will be initialized with their default values.
+type RegistrationOptions struct {
+	policy.ClientOptions
+
+	// MaxAttempts is the total number of times to attempt automatic registration
+	// in the event that an attempt fails.
+	// The default value is 3.
+	// Set to a value less than zero to disable the policy.
+	MaxAttempts int
+
+	// PollingDelay is the amount of time to sleep between polling intervals.
+	// The default value is 15 seconds.
+	// A value less than zero means no delay between polling intervals (not recommended).
+	PollingDelay time.Duration
+
+	// PollingDuration is the amount of time to wait before abandoning polling.
+	// The default value is 5 minutes.
+	// NOTE: Setting this to a small value might cause the policy to prematurely fail.
+	PollingDuration time.Duration
+
+	// StatusCodes contains the slice of custom HTTP status codes to use instead
+	// of the default http.StatusConflict. This should only be set if a service
+	// returns a non-standard HTTP status code when unregistered.
+	StatusCodes []int
+}
+
+// ClientOptions contains configuration settings for a client's pipeline.
+type ClientOptions struct {
+	policy.ClientOptions
+
+	// AuxiliaryTenants are additional tenant IDs for authenticating cross-tenant requests.
+	// The client will add a token from each of these tenants to every request. The
+	// authenticating user or service principal must be a guest in these tenants, and the
+	// client's credential must support multitenant authentication.
+	AuxiliaryTenants []string
+
+	// DisableRPRegistration disables the auto-RP registration policy. Defaults to false.
+	DisableRPRegistration bool
+}
+
+// Clone returns a deep copy of the current options.
+func (o *ClientOptions) Clone() *ClientOptions {
+	if o == nil {
+		return nil
+	}
+	copiedOptions := *o
+	copiedOptions.Cloud.Services = copyMap(copiedOptions.Cloud.Services)
+	copiedOptions.Logging.AllowedHeaders = copyArray(copiedOptions.Logging.AllowedHeaders)
+	copiedOptions.Logging.AllowedQueryParams = copyArray(copiedOptions.Logging.AllowedQueryParams)
+	copiedOptions.Retry.StatusCodes = copyArray(copiedOptions.Retry.StatusCodes)
+	copiedOptions.PerRetryPolicies = copyArray(copiedOptions.PerRetryPolicies)
+	copiedOptions.PerCallPolicies = copyArray(copiedOptions.PerCallPolicies)
+	return &copiedOptions
+}
+
+// copyMap returns a new map containing all the key/value pairs of the src map.
+func copyMap[K comparable, V any](src map[K]V) map[K]V {
+	if src == nil {
+		return nil
+	}
+	copiedMap := make(map[K]V)
+	for k, v := range src {
+		copiedMap[k] = v
+	}
+	return copiedMap
+}
+
+// copyArray returns a new slice containing all the elements of the src slice.
+func copyArray[T any](src []T) []T {
+	if src == nil {
+		return nil
+	}
+	copiedArray := make([]T, len(src))
+	copy(copiedArray, src)
+	return copiedArray
+}

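Clone's deep-copy behavior rests on the generic copyMap/copyArray helpers above: each map and slice field gets fresh backing storage so mutating the clone never touches the original. A quick standalone sketch of the same pattern (helper names here are illustrative, not the SDK's internal ones):

```go
package main

import "fmt"

// copyMap returns a new map containing the key/value pairs of src
// (nil in, nil out), matching the helper in the policy package above.
func copyMap[K comparable, V any](src map[K]V) map[K]V {
	if src == nil {
		return nil
	}
	dst := make(map[K]V, len(src))
	for k, v := range src {
		dst[k] = v
	}
	return dst
}

// copySlice returns a new slice with the elements of src (nil in, nil out).
func copySlice[T any](src []T) []T {
	if src == nil {
		return nil
	}
	dst := make([]T, len(src))
	copy(dst, src)
	return dst
}

func main() {
	orig := map[string]int{"retries": 3}
	cp := copyMap(orig)
	cp["retries"] = 99           // mutating the copy...
	fmt.Println(orig["retries"]) // ...leaves the original untouched: 3

	codes := []int{409}
	codesCopy := copySlice(codes)
	codesCopy[0] = 500
	fmt.Println(codes[0]) // 409
}
```

Preserving nil (rather than returning an empty map/slice) matters because callers may distinguish "unset" from "set but empty".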
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/pipeline.go 🔗

@@ -0,0 +1,70 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"errors"
+	"reflect"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	armpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	azpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	azruntime "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+)
+
+// NewPipeline creates a pipeline from connection options. Policies from ClientOptions are
+// placed after policies from PipelineOptions. The telemetry policy, when enabled, will
+// use the specified module and version info.
+func NewPipeline(module, version string, cred azcore.TokenCredential, plOpts azruntime.PipelineOptions, options *armpolicy.ClientOptions) (azruntime.Pipeline, error) {
+	if options == nil {
+		options = &armpolicy.ClientOptions{}
+	}
+	conf, err := getConfiguration(&options.ClientOptions)
+	if err != nil {
+		return azruntime.Pipeline{}, err
+	}
+	authPolicy := NewBearerTokenPolicy(cred, &armpolicy.BearerTokenOptions{
+		AuxiliaryTenants:                options.AuxiliaryTenants,
+		InsecureAllowCredentialWithHTTP: options.InsecureAllowCredentialWithHTTP,
+		Scopes:                          []string{conf.Audience + "/.default"},
+	})
+	// we don't want to modify the underlying array in plOpts.PerRetry
+	perRetry := make([]azpolicy.Policy, len(plOpts.PerRetry), len(plOpts.PerRetry)+1)
+	copy(perRetry, plOpts.PerRetry)
+	perRetry = append(perRetry, authPolicy, exported.PolicyFunc(httpTraceNamespacePolicy))
+	plOpts.PerRetry = perRetry
+	if !options.DisableRPRegistration {
+		regRPOpts := armpolicy.RegistrationOptions{ClientOptions: options.ClientOptions}
+		regPolicy, err := NewRPRegistrationPolicy(cred, &regRPOpts)
+		if err != nil {
+			return azruntime.Pipeline{}, err
+		}
+		// we don't want to modify the underlying array in plOpts.PerCall
+		perCall := make([]azpolicy.Policy, len(plOpts.PerCall), len(plOpts.PerCall)+1)
+		copy(perCall, plOpts.PerCall)
+		perCall = append(perCall, regPolicy)
+		plOpts.PerCall = perCall
+	}
+	if plOpts.APIVersion.Name == "" {
+		plOpts.APIVersion.Name = "api-version"
+	}
+	return azruntime.NewPipeline(module, version, plOpts, &options.ClientOptions), nil
+}
+
+func getConfiguration(o *azpolicy.ClientOptions) (cloud.ServiceConfiguration, error) {
+	c := cloud.AzurePublic
+	if !reflect.ValueOf(o.Cloud).IsZero() {
+		c = o.Cloud
+	}
+	if conf, ok := c.Services[cloud.ResourceManager]; ok && conf.Endpoint != "" && conf.Audience != "" {
+		return conf, nil
+	} else {
+		return conf, errors.New("provided Cloud field is missing Azure Resource Manager configuration")
+	}
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_bearer_token.go 🔗

@@ -0,0 +1,102 @@
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+	"strings"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	armpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	azpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	azruntime "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/temporal"
+)
+
+const headerAuxiliaryAuthorization = "x-ms-authorization-auxiliary"
+
+// acquiringResourceState holds data for an auxiliary token request
+type acquiringResourceState struct {
+	ctx    context.Context
+	p      *BearerTokenPolicy
+	tenant string
+}
+
+// acquireAuxToken acquires a token from an auxiliary tenant. Only one thread/goroutine at a time ever calls this function.
+func acquireAuxToken(state acquiringResourceState) (newResource azcore.AccessToken, newExpiration time.Time, err error) {
+	tk, err := state.p.cred.GetToken(state.ctx, azpolicy.TokenRequestOptions{
+		EnableCAE: true,
+		Scopes:    state.p.scopes,
+		TenantID:  state.tenant,
+	})
+	if err != nil {
+		return azcore.AccessToken{}, time.Time{}, err
+	}
+	return tk, tk.ExpiresOn, nil
+}
+
+// BearerTokenPolicy authorizes requests with bearer tokens acquired from a TokenCredential.
+type BearerTokenPolicy struct {
+	auxResources map[string]*temporal.Resource[azcore.AccessToken, acquiringResourceState]
+	btp          *azruntime.BearerTokenPolicy
+	cred         azcore.TokenCredential
+	scopes       []string
+}
+
+// NewBearerTokenPolicy creates a policy object that authorizes requests with bearer tokens.
+// cred: an azcore.TokenCredential implementation such as a credential object from azidentity
+// opts: optional settings. Pass nil to accept default values; this is the same as passing a zero-value options.
+func NewBearerTokenPolicy(cred azcore.TokenCredential, opts *armpolicy.BearerTokenOptions) *BearerTokenPolicy {
+	if opts == nil {
+		opts = &armpolicy.BearerTokenOptions{}
+	}
+	p := &BearerTokenPolicy{cred: cred}
+	p.auxResources = make(map[string]*temporal.Resource[azcore.AccessToken, acquiringResourceState], len(opts.AuxiliaryTenants))
+	for _, t := range opts.AuxiliaryTenants {
+		p.auxResources[t] = temporal.NewResource(acquireAuxToken)
+	}
+	p.scopes = make([]string, len(opts.Scopes))
+	copy(p.scopes, opts.Scopes)
+	p.btp = azruntime.NewBearerTokenPolicy(cred, opts.Scopes, &azpolicy.BearerTokenOptions{
+		InsecureAllowCredentialWithHTTP: opts.InsecureAllowCredentialWithHTTP,
+		AuthorizationHandler: azpolicy.AuthorizationHandler{
+			OnRequest: p.onRequest,
+		},
+	})
+	return p
+}
+
+// onRequest authorizes requests with one or more bearer tokens
+func (b *BearerTokenPolicy) onRequest(req *azpolicy.Request, authNZ func(azpolicy.TokenRequestOptions) error) error {
+	// authorize the request with a token for the primary tenant
+	err := authNZ(azpolicy.TokenRequestOptions{Scopes: b.scopes})
+	if err != nil || len(b.auxResources) == 0 {
+		return err
+	}
+	// add tokens for auxiliary tenants
+	as := acquiringResourceState{
+		ctx: req.Raw().Context(),
+		p:   b,
+	}
+	auxTokens := make([]string, 0, len(b.auxResources))
+	for tenant, er := range b.auxResources {
+		as.tenant = tenant
+		auxTk, err := er.Get(as)
+		if err != nil {
+			return err
+		}
+		auxTokens = append(auxTokens, fmt.Sprintf("%s%s", shared.BearerTokenPrefix, auxTk.Token))
+	}
+	req.Raw().Header.Set(headerAuxiliaryAuthorization, strings.Join(auxTokens, ", "))
+	return nil
+}
+
+// Do authorizes a request with a bearer token
+func (b *BearerTokenPolicy) Do(req *azpolicy.Request) (*http.Response, error) {
+	return b.btp.Do(req)
+}

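onRequest above assembles one x-ms-authorization-auxiliary header value from the auxiliary-tenant tokens: each token is prefixed with the bearer scheme and the results are comma-joined. The header format it produces can be sketched standalone (the prefix constant here is an assumption mirroring the SDK's internal shared.BearerTokenPrefix):

```go
package main

import (
	"fmt"
	"strings"
)

// Assumed bearer prefix; the SDK takes this from its internal shared package.
const bearerTokenPrefix = "Bearer "

// auxHeaderValue joins one bearer token per auxiliary tenant into the
// comma-separated value set on x-ms-authorization-auxiliary.
func auxHeaderValue(tokens []string) string {
	vals := make([]string, 0, len(tokens))
	for _, t := range tokens {
		vals = append(vals, bearerTokenPrefix+t)
	}
	return strings.Join(vals, ", ")
}

func main() {
	fmt.Println(auxHeaderValue([]string{"tokenA", "tokenB"}))
	// Bearer tokenA, Bearer tokenB
}
```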
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_register_rp.go 🔗

@@ -0,0 +1,322 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+	"net/url"
+	"strings"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource"
+	armpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	azpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+)
+
+const (
+	// LogRPRegistration entries contain information specific to the automatic registration of an RP.
+	// Entries of this classification are written IFF the policy needs to take any action.
+	LogRPRegistration log.Event = "RPRegistration"
+)
+
+// setDefaults applies default values to any zero-value fields
+func setDefaults(r *armpolicy.RegistrationOptions) {
+	if r.MaxAttempts == 0 {
+		r.MaxAttempts = 3
+	} else if r.MaxAttempts < 0 {
+		r.MaxAttempts = 0
+	}
+	if r.PollingDelay == 0 {
+		r.PollingDelay = 15 * time.Second
+	} else if r.PollingDelay < 0 {
+		r.PollingDelay = 0
+	}
+	if r.PollingDuration == 0 {
+		r.PollingDuration = 5 * time.Minute
+	}
+	if len(r.StatusCodes) == 0 {
+		r.StatusCodes = []int{http.StatusConflict}
+	}
+}
+
+// NewRPRegistrationPolicy creates a policy object configured using the specified options.
+// The policy controls whether an unregistered resource provider should automatically be
+// registered. See https://aka.ms/rps-not-found for more information.
+func NewRPRegistrationPolicy(cred azcore.TokenCredential, o *armpolicy.RegistrationOptions) (azpolicy.Policy, error) {
+	if o == nil {
+		o = &armpolicy.RegistrationOptions{}
+	}
+	conf, err := getConfiguration(&o.ClientOptions)
+	if err != nil {
+		return nil, err
+	}
+	authPolicy := NewBearerTokenPolicy(cred, &armpolicy.BearerTokenOptions{Scopes: []string{conf.Audience + "/.default"}})
+	p := &rpRegistrationPolicy{
+		endpoint: conf.Endpoint,
+		pipeline: runtime.NewPipeline(shared.Module, shared.Version, runtime.PipelineOptions{PerRetry: []azpolicy.Policy{authPolicy}}, &o.ClientOptions),
+		options:  *o,
+	}
+	// init the copy
+	setDefaults(&p.options)
+	return p, nil
+}
+
+type rpRegistrationPolicy struct {
+	endpoint string
+	pipeline runtime.Pipeline
+	options  armpolicy.RegistrationOptions
+}
+
+func (r *rpRegistrationPolicy) Do(req *azpolicy.Request) (*http.Response, error) {
+	if r.options.MaxAttempts == 0 {
+		// policy is disabled
+		return req.Next()
+	}
+	const registeredState = "Registered"
+	var rp string
+	var resp *http.Response
+	for attempts := 0; attempts < r.options.MaxAttempts; attempts++ {
+		var err error
+		// make the original request
+		resp, err = req.Next()
+		// getting a 409 is the first indication that the RP might need to be registered, check error response
+		if err != nil || !runtime.HasStatusCode(resp, r.options.StatusCodes...) {
+			return resp, err
+		}
+		var reqErr requestError
+		if err = runtime.UnmarshalAsJSON(resp, &reqErr); err != nil {
+			return resp, err
+		}
+		if reqErr.ServiceError == nil {
+			// missing service error info. just return the response
+			// to the caller so its error unmarshalling will kick in
+			return resp, err
+		}
+		if !isUnregisteredRPCode(reqErr.ServiceError.Code) {
+			// not a 409 due to unregistered RP. just return the response
+			// to the caller so its error unmarshalling will kick in
+			return resp, err
+		}
+		res, err := resource.ParseResourceID(req.Raw().URL.Path)
+		if err != nil {
+			return resp, err
+		}
+		rp = res.ResourceType.Namespace
+		logRegistrationExit := func(v any) {
+			log.Writef(LogRPRegistration, "END registration for %s: %v", rp, v)
+		}
+		log.Writef(LogRPRegistration, "BEGIN registration for %s", rp)
+		// create client and make the registration request
+		// we use the scheme and host from the original request
+		rpOps := &providersOperations{
+			p:     r.pipeline,
+			u:     r.endpoint,
+			subID: res.SubscriptionID,
+		}
+		if _, err = rpOps.Register(&shared.ContextWithDeniedValues{Context: req.Raw().Context()}, rp); err != nil {
+			logRegistrationExit(err)
+			return resp, err
+		}
+
+		// RP was registered, however we need to wait for the registration to complete
+		pollCtx, pollCancel := context.WithTimeout(&shared.ContextWithDeniedValues{Context: req.Raw().Context()}, r.options.PollingDuration)
+		var lastRegState string
+		for {
+			// get the current registration state
+			getResp, err := rpOps.Get(pollCtx, rp)
+			if err != nil {
+				pollCancel()
+				logRegistrationExit(err)
+				return resp, err
+			}
+			if getResp.Provider.RegistrationState != nil && !strings.EqualFold(*getResp.Provider.RegistrationState, lastRegState) {
+				// registration state has changed, or was updated for the first time
+				lastRegState = *getResp.Provider.RegistrationState
+				log.Writef(LogRPRegistration, "registration state is %s", lastRegState)
+			}
+			if strings.EqualFold(lastRegState, registeredState) {
+				// registration complete
+				pollCancel()
+				logRegistrationExit(lastRegState)
+				break
+			}
+			// wait before trying again
+			select {
+			case <-time.After(r.options.PollingDelay):
+				// continue polling
+			case <-pollCtx.Done():
+				pollCancel()
+				logRegistrationExit(pollCtx.Err())
+				return resp, pollCtx.Err()
+			}
+		}
+		// RP was successfully registered, retry the original request
+		err = req.RewindBody()
+		if err != nil {
+			return resp, err
+		}
+	}
+	// if we get here it means we exceeded the number of attempts
+	return resp, fmt.Errorf("exceeded attempts to register %s", rp)
+}
+
+var unregisteredRPCodes = []string{
+	"MissingSubscriptionRegistration",
+	"MissingRegistrationForResourceProvider",
+	"Subscription Not Registered",
+	"SubscriptionNotRegistered",
+}
+
+func isUnregisteredRPCode(errorCode string) bool {
+	for _, code := range unregisteredRPCodes {
+		if strings.EqualFold(errorCode, code) {
+			return true
+		}
+	}
+	return false
+}
+
+// minimal error definitions to simplify detection
+type requestError struct {
+	ServiceError *serviceError `json:"error"`
+}
+
+type serviceError struct {
+	Code string `json:"code"`
+}
+
+///////////////////////////////////////////////////////////////////////////////////////////////
+// the following code was copied from module armresources, providers.go and models.go
+// only the minimum amount of code was copied to get this working and some edits were made.
+///////////////////////////////////////////////////////////////////////////////////////////////
+
+type providersOperations struct {
+	p     runtime.Pipeline
+	u     string
+	subID string
+}
+
+// Get - Gets the specified resource provider.
+func (client *providersOperations) Get(ctx context.Context, resourceProviderNamespace string) (providerResponse, error) {
+	req, err := client.getCreateRequest(ctx, resourceProviderNamespace)
+	if err != nil {
+		return providerResponse{}, err
+	}
+	resp, err := client.p.Do(req)
+	if err != nil {
+		return providerResponse{}, err
+	}
+	result, err := client.getHandleResponse(resp)
+	if err != nil {
+		return providerResponse{}, err
+	}
+	return result, nil
+}
+
+// getCreateRequest creates the Get request.
+func (client *providersOperations) getCreateRequest(ctx context.Context, resourceProviderNamespace string) (*azpolicy.Request, error) {
+	urlPath := "/subscriptions/{subscriptionId}/providers/{resourceProviderNamespace}"
+	urlPath = strings.ReplaceAll(urlPath, "{resourceProviderNamespace}", url.PathEscape(resourceProviderNamespace))
+	urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subID))
+	req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.u, urlPath))
+	if err != nil {
+		return nil, err
+	}
+	query := req.Raw().URL.Query()
+	query.Set("api-version", "2019-05-01")
+	req.Raw().URL.RawQuery = query.Encode()
+	return req, nil
+}
+
+// getHandleResponse handles the Get response.
+func (client *providersOperations) getHandleResponse(resp *http.Response) (providerResponse, error) {
+	if !runtime.HasStatusCode(resp, http.StatusOK) {
+		return providerResponse{}, exported.NewResponseError(resp)
+	}
+	result := providerResponse{RawResponse: resp}
+	err := runtime.UnmarshalAsJSON(resp, &result.Provider)
+	if err != nil {
+		return providerResponse{}, err
+	}
+	return result, err
+}
+
+// Register - Registers a subscription with a resource provider.
+func (client *providersOperations) Register(ctx context.Context, resourceProviderNamespace string) (providerResponse, error) {
+	req, err := client.registerCreateRequest(ctx, resourceProviderNamespace)
+	if err != nil {
+		return providerResponse{}, err
+	}
+	resp, err := client.p.Do(req)
+	if err != nil {
+		return providerResponse{}, err
+	}
+	result, err := client.registerHandleResponse(resp)
+	if err != nil {
+		return providerResponse{}, err
+	}
+	return result, nil
+}
+
+// registerCreateRequest creates the Register request.
+func (client *providersOperations) registerCreateRequest(ctx context.Context, resourceProviderNamespace string) (*azpolicy.Request, error) {
+	urlPath := "/subscriptions/{subscriptionId}/providers/{resourceProviderNamespace}/register"
+	urlPath = strings.ReplaceAll(urlPath, "{resourceProviderNamespace}", url.PathEscape(resourceProviderNamespace))
+	urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subID))
+	req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.u, urlPath))
+	if err != nil {
+		return nil, err
+	}
+	query := req.Raw().URL.Query()
+	query.Set("api-version", "2019-05-01")
+	req.Raw().URL.RawQuery = query.Encode()
+	return req, nil
+}
+
+// registerHandleResponse handles the Register response.
+func (client *providersOperations) registerHandleResponse(resp *http.Response) (providerResponse, error) {
+	if !runtime.HasStatusCode(resp, http.StatusOK) {
+		return providerResponse{}, exported.NewResponseError(resp)
+	}
+	result := providerResponse{RawResponse: resp}
+	err := runtime.UnmarshalAsJSON(resp, &result.Provider)
+	if err != nil {
+		return providerResponse{}, err
+	}
+	return result, err
+}
+
+// providerResponse is the response envelope for operations that return a provider type.
+type providerResponse struct {
+	// Resource provider information.
+	Provider *provider
+
+	// RawResponse contains the underlying HTTP response.
+	RawResponse *http.Response
+}
+
+// provider contains resource provider information.
+type provider struct {
+	// The provider ID.
+	ID *string `json:"id,omitempty"`
+
+	// The namespace of the resource provider.
+	Namespace *string `json:"namespace,omitempty"`
+
+	// The registration policy of the resource provider.
+	RegistrationPolicy *string `json:"registrationPolicy,omitempty"`
+
+	// The registration state of the resource provider.
+	RegistrationState *string `json:"registrationState,omitempty"`
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_trace_namespace.go 🔗

@@ -0,0 +1,30 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing"
+)
+
+// httpTraceNamespacePolicy is a policy that adds the az.namespace attribute to the current Span
+func httpTraceNamespacePolicy(req *policy.Request) (resp *http.Response, err error) {
+	rawTracer := req.Raw().Context().Value(shared.CtxWithTracingTracer{})
+	if tracer, ok := rawTracer.(tracing.Tracer); ok && tracer.Enabled() {
+		rt, err := resource.ParseResourceType(req.Raw().URL.Path)
+		if err == nil {
+			// add the namespace attribute to the current span
+			span := tracer.SpanFromContext(req.Raw().Context())
+			span.SetAttributes(tracing.Attribute{Key: shared.TracingNamespaceAttrName, Value: rt.Namespace})
+		}
+	}
+	return req.Next()
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/runtime.go 🔗

@@ -0,0 +1,24 @@
+//go:build go1.16
+// +build go1.16
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import "github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud"
+
+func init() {
+	cloud.AzureChina.Services[cloud.ResourceManager] = cloud.ServiceConfiguration{
+		Audience: "https://management.core.chinacloudapi.cn",
+		Endpoint: "https://management.chinacloudapi.cn",
+	}
+	cloud.AzureGovernment.Services[cloud.ResourceManager] = cloud.ServiceConfiguration{
+		Audience: "https://management.core.usgovcloudapi.net",
+		Endpoint: "https://management.usgovcloudapi.net",
+	}
+	cloud.AzurePublic.Services[cloud.ResourceManager] = cloud.ServiceConfiguration{
+		Audience: "https://management.core.windows.net/",
+		Endpoint: "https://management.azure.com",
+	}
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/ci.yml 🔗

@@ -0,0 +1,29 @@
+# NOTE: Please refer to https://aka.ms/azsdk/engsys/ci-yaml before editing this file.
+trigger:
+  branches:
+    include:
+      - main
+      - feature/*
+      - hotfix/*
+      - release/*
+  paths:
+    include:
+    - sdk/azcore/
+    - eng/
+
+pr:
+  branches:
+    include:
+      - main
+      - feature/*
+      - hotfix/*
+      - release/*
+  paths:
+    include:
+    - sdk/azcore/
+    - eng/
+
+extends:
+  template: /eng/pipelines/templates/jobs/archetype-sdk-client.yml
+  parameters:
+    ServiceDirectory: azcore

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud/cloud.go 🔗

@@ -0,0 +1,44 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package cloud
+
+var (
+	// AzureChina contains configuration for Azure China.
+	AzureChina = Configuration{
+		ActiveDirectoryAuthorityHost: "https://login.chinacloudapi.cn/", Services: map[ServiceName]ServiceConfiguration{},
+	}
+	// AzureGovernment contains configuration for Azure Government.
+	AzureGovernment = Configuration{
+		ActiveDirectoryAuthorityHost: "https://login.microsoftonline.us/", Services: map[ServiceName]ServiceConfiguration{},
+	}
+	// AzurePublic contains configuration for Azure Public Cloud.
+	AzurePublic = Configuration{
+		ActiveDirectoryAuthorityHost: "https://login.microsoftonline.com/", Services: map[ServiceName]ServiceConfiguration{},
+	}
+)
+
+// ServiceName identifies a cloud service.
+type ServiceName string
+
+// ResourceManager is a global constant identifying Azure Resource Manager.
+const ResourceManager ServiceName = "resourceManager"
+
+// ServiceConfiguration configures a specific cloud service such as Azure Resource Manager.
+type ServiceConfiguration struct {
+	// Audience is the audience the client will request for its access tokens.
+	Audience string
+	// Endpoint is the service's base URL.
+	Endpoint string
+}
+
+// Configuration configures a cloud.
+type Configuration struct {
+	// ActiveDirectoryAuthorityHost is the base URL of the cloud's Azure Active Directory.
+	ActiveDirectoryAuthorityHost string
+	// Services contains configuration for the cloud's services.
+	Services map[ServiceName]ServiceConfiguration
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud/doc.go 🔗

@@ -0,0 +1,53 @@
+//go:build go1.16
+// +build go1.16
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+/*
+Package cloud implements a configuration API for applications deployed to sovereign or private Azure clouds.
+
+Azure SDK client configuration defaults are appropriate for Azure Public Cloud (sometimes referred to as
+"Azure Commercial" or simply "Microsoft Azure"). This package enables applications deployed to other
+Azure Clouds to configure clients appropriately.
+
+This package contains predefined configuration for well-known sovereign clouds such as Azure Government and
+Azure China. Azure SDK clients accept this configuration via the Cloud field of azcore.ClientOptions. For
+example, configuring a credential and ARM client for Azure Government:
+
+	opts := azcore.ClientOptions{Cloud: cloud.AzureGovernment}
+	cred, err := azidentity.NewDefaultAzureCredential(
+		&azidentity.DefaultAzureCredentialOptions{ClientOptions: opts},
+	)
+	handle(err)
+
+	client, err := armsubscription.NewClient(
+		cred, &arm.ClientOptions{ClientOptions: opts},
+	)
+	handle(err)
+
+Applications deployed to a private cloud such as Azure Stack create a Configuration object with
+appropriate values:
+
+	c := cloud.Configuration{
+		ActiveDirectoryAuthorityHost: "https://...",
+		Services: map[cloud.ServiceName]cloud.ServiceConfiguration{
+			cloud.ResourceManager: {
+				Audience: "...",
+				Endpoint: "https://...",
+			},
+		},
+	}
+	opts := azcore.ClientOptions{Cloud: c}
+
+	cred, err := azidentity.NewDefaultAzureCredential(
+		&azidentity.DefaultAzureCredentialOptions{ClientOptions: opts},
+	)
+	handle(err)
+
+	client, err := armsubscription.NewClient(
+		cred, &arm.ClientOptions{ClientOptions: opts},
+	)
+	handle(err)
+*/
+package cloud

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/core.go 🔗

@@ -0,0 +1,173 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azcore
+
+import (
+	"reflect"
+	"sync"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing"
+)
+
+// AccessToken represents an Azure service bearer access token with expiry information.
+type AccessToken = exported.AccessToken
+
+// TokenCredential represents a credential capable of providing an OAuth token.
+type TokenCredential = exported.TokenCredential
+
+// KeyCredential contains an authentication key used to authenticate to an Azure service.
+type KeyCredential = exported.KeyCredential
+
+// NewKeyCredential creates a new instance of [KeyCredential] with the specified values.
+//   - key is the authentication key
+func NewKeyCredential(key string) *KeyCredential {
+	return exported.NewKeyCredential(key)
+}
+
+// SASCredential contains a shared access signature used to authenticate to an Azure service.
+type SASCredential = exported.SASCredential
+
+// NewSASCredential creates a new instance of [SASCredential] with the specified values.
+//   - sas is the shared access signature
+func NewSASCredential(sas string) *SASCredential {
+	return exported.NewSASCredential(sas)
+}
+
+// holds sentinel values used to send nulls
+var nullables map[reflect.Type]any = map[reflect.Type]any{}
+var nullablesMu sync.RWMutex
+
+// NullValue is used to send an explicit 'null' within a request.
+// This is typically used in JSON-MERGE-PATCH operations to delete a value.
+func NullValue[T any]() T {
+	t := shared.TypeOfT[T]()
+
+	nullablesMu.RLock()
+	v, found := nullables[t]
+	nullablesMu.RUnlock()
+
+	if found {
+		// return the sentinel object
+		return v.(T)
+	}
+
+	// promote to exclusive lock and check again (double-checked locking pattern)
+	nullablesMu.Lock()
+	defer nullablesMu.Unlock()
+	v, found = nullables[t]
+
+	if !found {
+		var o reflect.Value
+		if k := t.Kind(); k == reflect.Map {
+			o = reflect.MakeMap(t)
+		} else if k == reflect.Slice {
+			// empty slices appear to all point to the same data block
+			// which causes comparisons to become ambiguous.  so we create
+			// a slice with len/cap of one which ensures a unique address.
+			o = reflect.MakeSlice(t, 1, 1)
+		} else {
+			o = reflect.New(t.Elem())
+		}
+		v = o.Interface()
+		nullables[t] = v
+	}
+	// return the sentinel object
+	return v.(T)
+}
+
+// IsNullValue returns true if the field contains a null sentinel value.
+// This is used by custom marshallers to properly encode a null value.
+func IsNullValue[T any](v T) bool {
+	// see if our map has a sentinel object for this *T
+	t := reflect.TypeOf(v)
+	nullablesMu.RLock()
+	defer nullablesMu.RUnlock()
+
+	if o, found := nullables[t]; found {
+		o1 := reflect.ValueOf(o)
+		v1 := reflect.ValueOf(v)
+		// we found it; return true if v points to the sentinel object.
+		// NOTE: maps and slices can only be compared to nil, else you get
+		// a runtime panic.  so we compare addresses instead.
+		return o1.Pointer() == v1.Pointer()
+	}
+	// no sentinel object for this *t
+	return false
+}
+
+// ClientOptions contains optional settings for a client's pipeline.
+// Instances can be shared across calls to SDK client constructors when uniform configuration is desired.
+// Zero-value fields will have their specified default values applied during use.
+type ClientOptions = policy.ClientOptions
+
+// Client is a basic HTTP client.  It consists of a pipeline and tracing provider.
+type Client struct {
+	pl runtime.Pipeline
+	tr tracing.Tracer
+
+	// cached on the client to support shallow copying with new values
+	tp        tracing.Provider
+	modVer    string
+	namespace string
+}
+
+// NewClient creates a new Client instance with the provided values.
+//   - moduleName - the fully qualified name of the module where the client is defined; used by the telemetry policy and tracing provider.
+//   - moduleVersion - the semantic version of the module; used by the telemetry policy and tracing provider.
+//   - plOpts - pipeline configuration options; can be the zero-value
+//   - options - optional client configurations; pass nil to accept the default values
+func NewClient(moduleName, moduleVersion string, plOpts runtime.PipelineOptions, options *ClientOptions) (*Client, error) {
+	if options == nil {
+		options = &ClientOptions{}
+	}
+
+	if !options.Telemetry.Disabled {
+		if err := shared.ValidateModVer(moduleVersion); err != nil {
+			return nil, err
+		}
+	}
+
+	pl := runtime.NewPipeline(moduleName, moduleVersion, plOpts, options)
+
+	tr := options.TracingProvider.NewTracer(moduleName, moduleVersion)
+	if tr.Enabled() && plOpts.Tracing.Namespace != "" {
+		tr.SetAttributes(tracing.Attribute{Key: shared.TracingNamespaceAttrName, Value: plOpts.Tracing.Namespace})
+	}
+
+	return &Client{
+		pl:        pl,
+		tr:        tr,
+		tp:        options.TracingProvider,
+		modVer:    moduleVersion,
+		namespace: plOpts.Tracing.Namespace,
+	}, nil
+}
+
+// Pipeline returns the pipeline for this client.
+func (c *Client) Pipeline() runtime.Pipeline {
+	return c.pl
+}
+
+// Tracer returns the tracer for this client.
+func (c *Client) Tracer() tracing.Tracer {
+	return c.tr
+}
+
+// WithClientName returns a shallow copy of the Client with its tracing client name changed to clientName.
+// Note that the values for module name and version will be preserved from the source Client.
+//   - clientName - the fully qualified name of the client ("package.Client"); this is used by the tracing provider when creating spans
+func (c *Client) WithClientName(clientName string) *Client {
+	tr := c.tp.NewTracer(clientName, c.modVer)
+	if tr.Enabled() && c.namespace != "" {
+		tr.SetAttributes(tracing.Attribute{Key: shared.TracingNamespaceAttrName, Value: c.namespace})
+	}
+	return &Client{pl: c.pl, tr: tr, tp: c.tp, modVer: c.modVer, namespace: c.namespace}
+}
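NullValue and IsNullValue in the file above implement "send an explicit JSON null" via per-type sentinel objects that are later recognized by address. A stdlib-only sketch of the same technique, outside azcore (the names `Null`, `IsNull`, and the demo `main` are illustrative, not SDK API):

```go
package main

import (
	"fmt"
	"reflect"
	"sync"
)

// per-type sentinel objects; a field holding its type's sentinel means "send null"
var (
	sentinels   = map[reflect.Type]any{}
	sentinelsMu sync.Mutex
)

// Null returns the sentinel value for type T (pointer, map, or slice kinds).
func Null[T any]() T {
	t := reflect.TypeOf((*T)(nil)).Elem()
	sentinelsMu.Lock()
	defer sentinelsMu.Unlock()
	if v, ok := sentinels[t]; ok {
		return v.(T)
	}
	var o reflect.Value
	switch t.Kind() {
	case reflect.Map:
		o = reflect.MakeMap(t)
	case reflect.Slice:
		// len/cap of one guarantees a unique backing-array address,
		// mirroring the comment in azcore's NullValue
		o = reflect.MakeSlice(t, 1, 1)
	default:
		o = reflect.New(t.Elem()) // pointer kinds
	}
	sentinels[t] = o.Interface()
	return o.Interface().(T)
}

// IsNull reports whether v is its type's sentinel. Addresses are compared
// because maps and slices can only be compared with == against nil.
func IsNull[T any](v T) bool {
	sentinelsMu.Lock()
	defer sentinelsMu.Unlock()
	if o, ok := sentinels[reflect.TypeOf(v)]; ok {
		return reflect.ValueOf(o).Pointer() == reflect.ValueOf(v).Pointer()
	}
	return false
}

func main() {
	n := Null[*int]()
	zero := new(int)
	fmt.Println(IsNull(n), IsNull(zero)) // true false
}
```

A custom marshaller would call `IsNull` on each field and emit a literal `null` for sentinels, while `omitempty` handling drops ordinary nil fields.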

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/doc.go 🔗

@@ -0,0 +1,264 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright 2017 Microsoft Corporation. All rights reserved.
+// Use of this source code is governed by an MIT
+// license that can be found in the LICENSE file.
+
+/*
+Package azcore implements an HTTP request/response middleware pipeline used by Azure SDK clients.
+
+The middleware consists of three components.
+
+  - One or more Policy instances.
+  - A Transporter instance.
+  - A Pipeline instance that combines the Policy and Transporter instances.
+
+# Implementing the Policy Interface
+
+A Policy can be implemented in two ways; as a first-class function for a stateless Policy, or as
+a method on a type for a stateful Policy.  Note that HTTP requests made via the same pipeline share
+the same Policy instances, so if a Policy mutates its state it MUST be properly synchronized to
+avoid race conditions.
+
+A Policy's Do method is called when an HTTP request wants to be sent over the network. The Do method can
+perform any operation(s) it desires. For example, it can log the outgoing request, mutate the URL, headers,
+and/or query parameters, inject a failure, etc.  Once the Policy has successfully completed its request
+work, it must call the Next() method on the *policy.Request instance in order to pass the request to the
+next Policy in the chain.
+
+When an HTTP response comes back, the Policy then gets a chance to process the response/error.  The Policy instance
+can log the response, retry the operation if it failed due to a transient error or timeout, unmarshal the response
+body, etc.  Once the Policy has successfully completed its response work, it must return the *http.Response
+and error instances to its caller.
+
+Template for implementing a stateless Policy:
+
+	type policyFunc func(*policy.Request) (*http.Response, error)
+
+	// Do implements the Policy interface on policyFunc.
+	func (pf policyFunc) Do(req *policy.Request) (*http.Response, error) {
+		return pf(req)
+	}
+
+	func NewMyStatelessPolicy() policy.Policy {
+		return policyFunc(func(req *policy.Request) (*http.Response, error) {
+			// TODO: mutate/process Request here
+
+			// forward Request to next Policy & get Response/error
+			resp, err := req.Next()
+
+			// TODO: mutate/process Response/error here
+
+			// return Response/error to previous Policy
+			return resp, err
+		})
+	}
+
+Template for implementing a stateful Policy:
+
+	type MyStatefulPolicy struct {
+		// TODO: add configuration/setting fields here
+	}
+
+	// TODO: add initialization args to NewMyStatefulPolicy()
+	func NewMyStatefulPolicy() policy.Policy {
+		return &MyStatefulPolicy{
+			// TODO: initialize configuration/setting fields here
+		}
+	}
+
+	func (p *MyStatefulPolicy) Do(req *policy.Request) (resp *http.Response, err error) {
+		// TODO: mutate/process Request here
+
+		// forward Request to next Policy & get Response/error
+		resp, err := req.Next()
+
+		// TODO: mutate/process Response/error here
+
+		// return Response/error to previous Policy
+		return resp, err
+	}
+
+# Implementing the Transporter Interface
+
+The Transporter interface is responsible for sending the HTTP request and returning the corresponding
+HTTP response or error.  The Transporter is invoked by the last Policy in the chain.  The default Transporter
+implementation uses a shared http.Client from the standard library.
+
+The same stateful/stateless rules for Policy implementations apply to Transporter implementations.
+
+# Using Policy and Transporter Instances Via a Pipeline
+
+To use the Policy and Transporter instances, an application passes them to the runtime.NewPipeline function.
+
+	func NewPipeline(transport Transporter, policies ...Policy) Pipeline
+
+The specified Policy instances form a chain and are invoked in the order provided to NewPipeline
+followed by the Transporter.
+
+Once the Pipeline has been created, create a runtime.Request instance and pass it to Pipeline's Do method.
+
+	func NewRequest(ctx context.Context, httpMethod string, endpoint string) (*Request, error)
+
+	func (p Pipeline) Do(req *Request) (*http.Response, error)
+
+The Pipeline.Do method sends the specified Request through the chain of Policy and Transporter
+instances.  The response/error is then sent through the same chain of Policy instances in reverse
+order.  For example, assuming there are Policy types PolicyA, PolicyB, and PolicyC along with
+TransportA.
+
+	pipeline := NewPipeline(TransportA, PolicyA, PolicyB, PolicyC)
+
+The flow of Request and Response looks like the following:
+
+	policy.Request -> PolicyA -> PolicyB -> PolicyC -> TransportA -----+
+	                                                                   |
+	                                                            HTTP(S) endpoint
+	                                                                   |
+	caller <--------- PolicyA <- PolicyB <- PolicyC <- http.Response-+
+
+# Creating a Request Instance
+
+The Request instance passed to Pipeline's Do method is a wrapper around an *http.Request.  It also
+contains some internal state and provides various convenience methods.  You create a Request instance
+by calling the runtime.NewRequest function:
+
+	func NewRequest(ctx context.Context, httpMethod string, endpoint string) (*Request, error)
+
+If the Request should contain a body, call the SetBody method.
+
+	func (req *Request) SetBody(body ReadSeekCloser, contentType string) error
+
+A seekable stream is required so that upon retry, the retry Policy instance can seek the stream
+back to the beginning before retrying the network request and re-uploading the body.
+
+# Sending an Explicit Null
+
+Operations like JSON-MERGE-PATCH send a JSON null to indicate a value should be deleted.
+
+	{
+		"delete-me": null
+	}
+
+This requirement conflicts with the SDK's default marshalling that specifies "omitempty" as
+a means to resolve the ambiguity between a field to be excluded and its zero-value.
+
+	type Widget struct {
+		Name  *string `json:",omitempty"`
+		Count *int    `json:",omitempty"`
+	}
+
+In the above example, Name and Count are defined as pointer-to-type to disambiguate between
+a missing value (nil) and a zero-value (0) which might have semantic differences.
+
+In a PATCH operation, any fields left as nil are to have their values preserved.  When updating
+a Widget's count, one simply specifies the new value for Count, leaving Name nil.
+
+To fulfill the requirement for sending a JSON null, the NullValue() function can be used.
+
+	w := Widget{
+		Count: azcore.NullValue[*int](),
+	}
+
+This sends an explicit "null" for Count, indicating that any current value for Count should be deleted.
+
+# Processing the Response
+
+When the HTTP response is received, the *http.Response is returned directly. Each Policy instance
+can inspect/mutate the *http.Response.
+
+# Built-in Logging
+
+To enable logging, set environment variable AZURE_SDK_GO_LOGGING to "all" before executing your program.
+
+By default the logger writes to stderr.  This can be customized by calling log.SetListener, providing
+a callback that writes to the desired location.  Any custom logging implementation MUST provide its
+own synchronization to handle concurrent invocations.
+
+See the docs for the log package for further details.
+
+# Pageable Operations
+
+Pageable operations return potentially large data sets spread over multiple GET requests.  The result of
+each GET is a "page" of data consisting of a slice of items.
+
+Pageable operations can be identified by their New*Pager naming convention and return type of *runtime.Pager[T].
+
+	func (c *WidgetClient) NewListWidgetsPager(o *Options) *runtime.Pager[PageResponse]
+
+The call to WidgetClient.NewListWidgetsPager() returns an instance of *runtime.Pager[T] for fetching pages
+and determining if there are more pages to fetch.  No IO calls are made until the NextPage() method is invoked.
+
+	pager := widgetClient.NewListWidgetsPager(nil)
+	for pager.More() {
+		page, err := pager.NextPage(context.TODO())
+		// handle err
+		for _, widget := range page.Values {
+			// process widget
+		}
+	}
+
+# Long-Running Operations
+
+Long-running operations (LROs) are operations consisting of an initial request to start the operation followed
+by polling to determine when the operation has reached a terminal state.  An LRO's terminal state is one
+of the following values.
+
+  - Succeeded - the LRO completed successfully
+  - Failed - the LRO failed to complete
+  - Canceled - the LRO was canceled
+
+LROs can be identified by their Begin* prefix and their return type of *runtime.Poller[T].
+
+	func (c *WidgetClient) BeginCreateOrUpdate(ctx context.Context, w Widget, o *Options) (*runtime.Poller[Response], error)
+
+When a call to WidgetClient.BeginCreateOrUpdate() returns a nil error, it means that the LRO has started.
+It does _not_ mean that the widget has been created or updated (or failed to be created/updated).
+
+The *runtime.Poller[T] provides APIs for determining the state of the LRO.  To wait for the LRO to complete,
+call the PollUntilDone() method.
+
+	poller, err := widgetClient.BeginCreateOrUpdate(context.TODO(), Widget{}, nil)
+	// handle err
+	result, err := poller.PollUntilDone(context.TODO(), nil)
+	// handle err
+	// use result
+
+The call to PollUntilDone() will block the current goroutine until the LRO has reached a terminal state or the
+context is canceled/timed out.
+
+Note that LROs can take anywhere from several seconds to several minutes.  The duration is operation-dependent.  Due to
+this variant behavior, pollers do _not_ have a preconfigured time-out.  Use a context with the appropriate cancellation
+mechanism as required.
+
+# Resume Tokens
+
+Pollers provide the ability to serialize their state into a "resume token" which can be used by another process to
+recreate the poller.  This is achieved via the runtime.Poller[T].ResumeToken() method.
+
+	token, err := poller.ResumeToken()
+	// handle error
+
+Note that a token can only be obtained for a poller that's in a non-terminal state.  Also note that any subsequent calls
+to poller.Poll() might change the poller's state.  In this case, a new token should be created.
+
+After the token has been obtained, it can be used to recreate an instance of the originating poller.
+
+	poller, err := widgetClient.BeginCreateOrUpdate(nil, Widget{}, &Options{
+		ResumeToken: token,
+	})
+
+When resuming a poller, no IO is performed, and zero-value arguments can be used for everything but the Options.ResumeToken.
+
+Resume tokens are unique per service client and operation.  Attempting to resume a poller for LRO BeginB() with a token from LRO
+BeginA() will result in an error.
+
+# Fakes
+
+The fake package contains types used for constructing in-memory fake servers used in unit tests.
+This allows writing tests to cover various success/error conditions without the need for connecting to a live service.
+
+Please see https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/samples/fakes for details and examples on how to use fakes.
+*/
+package azcore

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/errors.go 🔗

@@ -0,0 +1,17 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azcore
+
+import "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+
+// ResponseError is returned when a request is made to a service and
+// the service returns a non-success HTTP status code.
+// Use errors.As() to access this type in the error chain.
+//
+// When marshaling instances, the RawResponse field will be omitted.
+// However, the contents returned by Error() will be preserved.
+type ResponseError = exported.ResponseError

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/etag.go 🔗

@@ -0,0 +1,57 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azcore
+
+import (
+	"strings"
+)
+
+// ETag is a property used for optimistic concurrency during updates.
+// It is a validator based on https://tools.ietf.org/html/rfc7232#section-2.3.2.
+// An ETag can be empty ("").
+type ETag string
+
+// ETagAny is an ETag that represents everything; its value is "*".
+const ETagAny ETag = "*"
+
+// Equals does a strong comparison of two ETags. Equals returns true when both
+// ETags are not weak and the values of the underlying strings are equal.
+func (e ETag) Equals(other ETag) bool {
+	return !e.IsWeak() && !other.IsWeak() && e == other
+}
+
+// WeakEquals does a weak comparison of two ETags. Two ETags are equivalent if their opaque-tags match
+// character-by-character, regardless of either or both being tagged as "weak".
+func (e ETag) WeakEquals(other ETag) bool {
+	getStart := func(e1 ETag) int {
+		if e1.IsWeak() {
+			return 2
+		}
+		return 0
+	}
+	aStart := getStart(e)
+	bStart := getStart(other)
+
+	aVal := e[aStart:]
+	bVal := other[bStart:]
+
+	return aVal == bVal
+}
+
+// IsWeak specifies whether the ETag is strong or weak.
+func (e ETag) IsWeak() bool {
+	return len(e) >= 4 && strings.HasPrefix(string(e), "W/\"") && strings.HasSuffix(string(e), "\"")
+}
+
+// MatchConditions specifies HTTP options for conditional requests.
+type MatchConditions struct {
+	// Optionally limit requests to resources that have a matching ETag.
+	IfMatch *ETag
+
+	// Optionally limit requests to resources that do not match the ETag.
+	IfNoneMatch *ETag
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/exported.go 🔗

@@ -0,0 +1,175 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package exported
+
+import (
+	"context"
+	"encoding/base64"
+	"fmt"
+	"io"
+	"net/http"
+	"sync/atomic"
+	"time"
+)
+
+type nopCloser struct {
+	io.ReadSeeker
+}
+
+func (n nopCloser) Close() error {
+	return nil
+}
+
+// NopCloser returns a ReadSeekCloser with a no-op close method wrapping the provided io.ReadSeeker.
+// Exported as streaming.NopCloser().
+func NopCloser(rs io.ReadSeeker) io.ReadSeekCloser {
+	return nopCloser{rs}
+}
+
+// HasStatusCode returns true if the Response's status code is one of the specified values.
+// Exported as runtime.HasStatusCode().
+func HasStatusCode(resp *http.Response, statusCodes ...int) bool {
+	if resp == nil {
+		return false
+	}
+	for _, sc := range statusCodes {
+		if resp.StatusCode == sc {
+			return true
+		}
+	}
+	return false
+}
+
+// AccessToken represents an Azure service bearer access token with expiry information.
+// Exported as azcore.AccessToken.
+type AccessToken struct {
+	Token     string
+	ExpiresOn time.Time
+}
+
+// TokenRequestOptions contains specific parameters that may be used by credential types when attempting to get a token.
+// Exported as policy.TokenRequestOptions.
+type TokenRequestOptions struct {
+	// Claims are any additional claims required for the token to satisfy a conditional access policy, such as a
+	// service may return in a claims challenge following an authorization failure. If a service returned the
+	// claims value base64 encoded, it must be decoded before setting this field.
+	Claims string
+
+	// EnableCAE indicates whether to enable Continuous Access Evaluation (CAE) for the requested token. When true,
+	// azidentity credentials request CAE tokens for resource APIs supporting CAE. Clients are responsible for
+	// handling CAE challenges. If a client that doesn't handle CAE challenges receives a CAE token, it may end up
+	// in a loop retrying an API call with a token that has been revoked due to CAE.
+	EnableCAE bool
+
+	// Scopes contains the list of permission scopes required for the token.
+	Scopes []string
+
+	// TenantID identifies the tenant from which to request the token. azidentity credentials authenticate in
+	// their configured default tenants when this field isn't set.
+	TenantID string
+}
+
+// TokenCredential represents a credential capable of providing an OAuth token.
+// Exported as azcore.TokenCredential.
+type TokenCredential interface {
+	// GetToken requests an access token for the specified set of scopes.
+	GetToken(ctx context.Context, options TokenRequestOptions) (AccessToken, error)
+}
+
+// DecodeByteArray will base-64 decode the provided string into v.
+// Exported as runtime.DecodeByteArray()
+func DecodeByteArray(s string, v *[]byte, format Base64Encoding) error {
+	if len(s) == 0 {
+		return nil
+	}
+	payload := string(s)
+	if payload[0] == '"' {
+		// remove surrounding quotes
+		payload = payload[1 : len(payload)-1]
+	}
+	switch format {
+	case Base64StdFormat:
+		decoded, err := base64.StdEncoding.DecodeString(payload)
+		if err == nil {
+			*v = decoded
+			return nil
+		}
+		return err
+	case Base64URLFormat:
+		// use raw encoding as URL format should not contain any '=' characters
+		decoded, err := base64.RawURLEncoding.DecodeString(payload)
+		if err == nil {
+			*v = decoded
+			return nil
+		}
+		return err
+	default:
+		return fmt.Errorf("unrecognized byte array format: %d", format)
+	}
+}
+
+// KeyCredential contains an authentication key used to authenticate to an Azure service.
+// Exported as azcore.KeyCredential.
+type KeyCredential struct {
+	cred *keyCredential
+}
+
+// NewKeyCredential creates a new instance of [KeyCredential] with the specified values.
+//   - key is the authentication key
+func NewKeyCredential(key string) *KeyCredential {
+	return &KeyCredential{cred: newKeyCredential(key)}
+}
+
+// Update replaces the existing key with the specified value.
+func (k *KeyCredential) Update(key string) {
+	k.cred.Update(key)
+}
+
+// SASCredential contains a shared access signature used to authenticate to an Azure service.
+// Exported as azcore.SASCredential.
+type SASCredential struct {
+	cred *keyCredential
+}
+
+// NewSASCredential creates a new instance of [SASCredential] with the specified values.
+//   - sas is the shared access signature
+func NewSASCredential(sas string) *SASCredential {
+	return &SASCredential{cred: newKeyCredential(sas)}
+}
+
+// Update replaces the existing shared access signature with the specified value.
+func (k *SASCredential) Update(sas string) {
+	k.cred.Update(sas)
+}
+
+// KeyCredentialGet returns the key for cred.
+func KeyCredentialGet(cred *KeyCredential) string {
+	return cred.cred.Get()
+}
+
+// SASCredentialGet returns the shared access sig for cred.
+func SASCredentialGet(cred *SASCredential) string {
+	return cred.cred.Get()
+}
+
+type keyCredential struct {
+	key atomic.Value // string
+}
+
+func newKeyCredential(key string) *keyCredential {
+	keyCred := keyCredential{}
+	keyCred.key.Store(key)
+	return &keyCred
+}
+
+func (k *keyCredential) Get() string {
+	return k.key.Load().(string)
+}
+
+func (k *keyCredential) Update(key string) {
+	k.key.Store(key)
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/pipeline.go 🔗

@@ -0,0 +1,77 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package exported
+
+import (
+	"errors"
+	"net/http"
+)
+
+// Policy represents an extensibility point for the Pipeline that can mutate the specified
+// Request and react to the received Response.
+// Exported as policy.Policy.
+type Policy interface {
+	// Do applies the policy to the specified Request.  When implementing a Policy, mutate the
+	// request before calling req.Next() to move on to the next policy, and respond to the result
+	// before returning to the caller.
+	Do(req *Request) (*http.Response, error)
+}
+
+// Pipeline represents a primitive for sending HTTP requests and receiving responses.
+// Its behavior can be extended by specifying policies during construction.
+// Exported as runtime.Pipeline.
+type Pipeline struct {
+	policies []Policy
+}
+
+// Transporter represents an HTTP pipeline transport used to send HTTP requests and receive responses.
+// Exported as policy.Transporter.
+type Transporter interface {
+	// Do sends the HTTP request and returns the HTTP response or error.
+	Do(req *http.Request) (*http.Response, error)
+}
+
+// used to adapt a Transporter to a Policy
+type transportPolicy struct {
+	trans Transporter
+}
+
+func (tp transportPolicy) Do(req *Request) (*http.Response, error) {
+	if tp.trans == nil {
+		return nil, errors.New("missing transporter")
+	}
+	resp, err := tp.trans.Do(req.Raw())
+	if err != nil {
+		return nil, err
+	} else if resp == nil {
+		// there was no response and no error (rare but can happen)
+		// this ensures the retry policy will retry the request
+		return nil, errors.New("received nil response")
+	}
+	return resp, nil
+}
+
+// NewPipeline creates a new Pipeline object from the specified Policies.
+// Not directly exported, but used as part of runtime.NewPipeline().
+func NewPipeline(transport Transporter, policies ...Policy) Pipeline {
+	// transport policy must always be the last in the slice
+	policies = append(policies, transportPolicy{trans: transport})
+	return Pipeline{
+		policies: policies,
+	}
+}
+
+// Do is called for each and every HTTP request. It passes the request through all
+// the Policy objects (which can transform the Request's URL/query parameters/headers)
+// and ultimately sends the transformed HTTP request over the network.
+func (p Pipeline) Do(req *Request) (*http.Response, error) {
+	if req == nil {
+		return nil, errors.New("request cannot be nil")
+	}
+	req.policies = p.policies
+	return req.Next()
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/request.go 🔗

@@ -0,0 +1,260 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package exported
+
+import (
+	"bytes"
+	"context"
+	"encoding/base64"
+	"errors"
+	"fmt"
+	"io"
+	"net/http"
+	"reflect"
+	"strconv"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+)
+
+// Base64Encoding is used to specify which base-64 encoder/decoder to use when
+// encoding/decoding a slice of bytes to/from a string.
+// Exported as runtime.Base64Encoding
+type Base64Encoding int
+
+const (
+	// Base64StdFormat uses base64.StdEncoding for encoding and decoding payloads.
+	Base64StdFormat Base64Encoding = 0
+
+	// Base64URLFormat uses base64.RawURLEncoding for encoding and decoding payloads.
+	Base64URLFormat Base64Encoding = 1
+)
+
+// EncodeByteArray will base-64 encode the byte slice v.
+// Exported as runtime.EncodeByteArray()
+func EncodeByteArray(v []byte, format Base64Encoding) string {
+	if format == Base64URLFormat {
+		return base64.RawURLEncoding.EncodeToString(v)
+	}
+	return base64.StdEncoding.EncodeToString(v)
+}
+
+// Request is an abstraction over the creation of an HTTP request as it passes through the pipeline.
+// Don't use this type directly, use NewRequest() instead.
+// Exported as policy.Request.
+type Request struct {
+	req      *http.Request
+	body     io.ReadSeekCloser
+	policies []Policy
+	values   opValues
+}
+
+type opValues map[reflect.Type]any
+
+// Set adds/changes a value
+func (ov opValues) set(value any) {
+	ov[reflect.TypeOf(value)] = value
+}
+
+// Get looks for a value set by SetValue first
+func (ov opValues) get(value any) bool {
+	v, ok := ov[reflect.ValueOf(value).Elem().Type()]
+	if ok {
+		reflect.ValueOf(value).Elem().Set(reflect.ValueOf(v))
+	}
+	return ok
+}
+
+// NewRequestFromRequest creates a new policy.Request with an existing *http.Request
+// Exported as runtime.NewRequestFromRequest().
+func NewRequestFromRequest(req *http.Request) (*Request, error) {
+	policyReq := &Request{req: req}
+
+	if req.Body != nil {
+		// we can avoid a body copy here if the underlying stream is already a
+		// ReadSeekCloser.
+		readSeekCloser, isReadSeekCloser := req.Body.(io.ReadSeekCloser)
+
+		if !isReadSeekCloser {
+			// since this is an already populated http.Request we want to copy
+			// over its body, if it has one.
+			bodyBytes, err := io.ReadAll(req.Body)
+
+			if err != nil {
+				return nil, err
+			}
+
+			if err := req.Body.Close(); err != nil {
+				return nil, err
+			}
+
+			readSeekCloser = NopCloser(bytes.NewReader(bodyBytes))
+		}
+
+		// SetBody also takes care of updating the http.Request's body
+		// as well, so they should stay in-sync from this point.
+		if err := policyReq.SetBody(readSeekCloser, req.Header.Get("Content-Type")); err != nil {
+			return nil, err
+		}
+	}
+
+	return policyReq, nil
+}
+
+// NewRequest creates a new Request with the specified input.
+// Exported as runtime.NewRequest().
+func NewRequest(ctx context.Context, httpMethod string, endpoint string) (*Request, error) {
+	req, err := http.NewRequestWithContext(ctx, httpMethod, endpoint, nil)
+	if err != nil {
+		return nil, err
+	}
+	if req.URL.Host == "" {
+		return nil, errors.New("no Host in request URL")
+	}
+	if !(req.URL.Scheme == "http" || req.URL.Scheme == "https") {
+		return nil, fmt.Errorf("unsupported protocol scheme %s", req.URL.Scheme)
+	}
+	return &Request{req: req}, nil
+}
+
+// Body returns the original body specified when the Request was created.
+func (req *Request) Body() io.ReadSeekCloser {
+	return req.body
+}
+
+// Raw returns the underlying HTTP request.
+func (req *Request) Raw() *http.Request {
+	return req.req
+}
+
+// Next calls the next policy in the pipeline.
+// If there are no more policies, nil and an error are returned.
+// This method is intended to be called from pipeline policies.
+// To send a request through a pipeline call Pipeline.Do().
+func (req *Request) Next() (*http.Response, error) {
+	if len(req.policies) == 0 {
+		return nil, errors.New("no more policies")
+	}
+	nextPolicy := req.policies[0]
+	nextReq := *req
+	nextReq.policies = nextReq.policies[1:]
+	return nextPolicy.Do(&nextReq)
+}
+
+// SetOperationValue adds/changes a mutable key/value associated with a single operation.
+func (req *Request) SetOperationValue(value any) {
+	if req.values == nil {
+		req.values = opValues{}
+	}
+	req.values.set(value)
+}
+
+// OperationValue looks for a value set by SetOperationValue().
+func (req *Request) OperationValue(value any) bool {
+	if req.values == nil {
+		return false
+	}
+	return req.values.get(value)
+}
+
+// SetBody sets the specified ReadSeekCloser as the HTTP request body, and sets Content-Type and Content-Length
+// accordingly. If the ReadSeekCloser is nil or empty, Content-Length won't be set. If contentType is "",
+// Content-Type won't be set, and if it was set, will be deleted.
+// Use streaming.NopCloser to turn an io.ReadSeeker into an io.ReadSeekCloser.
+func (req *Request) SetBody(body io.ReadSeekCloser, contentType string) error {
+	// clobber the existing Content-Type to preserve behavior
+	return SetBody(req, body, contentType, true)
+}
+
+// RewindBody seeks the request's Body stream back to the beginning so it can be resent when retrying an operation.
+func (req *Request) RewindBody() error {
+	if req.body != nil {
+		// Reset the stream back to the beginning and restore the body
+		_, err := req.body.Seek(0, io.SeekStart)
+		req.req.Body = req.body
+		return err
+	}
+	return nil
+}
+
+// Close closes the request body.
+func (req *Request) Close() error {
+	if req.body == nil {
+		return nil
+	}
+	return req.body.Close()
+}
+
+// Clone returns a deep copy of the request with its context changed to ctx.
+func (req *Request) Clone(ctx context.Context) *Request {
+	r2 := *req
+	r2.req = req.req.Clone(ctx)
+	return &r2
+}
+
+// WithContext returns a shallow copy of the request with its context changed to ctx.
+func (req *Request) WithContext(ctx context.Context) *Request {
+	r2 := new(Request)
+	*r2 = *req
+	r2.req = r2.req.WithContext(ctx)
+	return r2
+}
+
+// not exported but dependent on Request
+
+// PolicyFunc is a type that implements the Policy interface.
+// Use this type when implementing a stateless policy as a first-class function.
+type PolicyFunc func(*Request) (*http.Response, error)
+
+// Do implements the Policy interface on PolicyFunc.
+func (pf PolicyFunc) Do(req *Request) (*http.Response, error) {
+	return pf(req)
+}
+
+// SetBody sets the specified ReadSeekCloser as the HTTP request body, and sets Content-Type and Content-Length accordingly.
+//   - req is the request to modify
+//   - body is the request body; if nil or empty, Content-Length won't be set
+//   - contentType is the value for the Content-Type header; if empty, Content-Type will be deleted
+//   - clobberContentType when true, will overwrite the existing value of Content-Type with contentType
+func SetBody(req *Request, body io.ReadSeekCloser, contentType string, clobberContentType bool) error {
+	var err error
+	var size int64
+	if body != nil {
+		size, err = body.Seek(0, io.SeekEnd) // Seek to the end to get the stream's size
+		if err != nil {
+			return err
+		}
+	}
+	if size == 0 {
+		// treat an empty stream the same as a nil one: assign req a nil body
+		body = nil
+		// RFC 9110 specifies a client shouldn't set Content-Length on a request containing no content
+		// (Del is a no-op when the header has no value)
+		req.req.Header.Del(shared.HeaderContentLength)
+	} else {
+		_, err = body.Seek(0, io.SeekStart)
+		if err != nil {
+			return err
+		}
+		req.req.Header.Set(shared.HeaderContentLength, strconv.FormatInt(size, 10))
+		req.Raw().GetBody = func() (io.ReadCloser, error) {
+			_, err := body.Seek(0, io.SeekStart) // Seek back to the beginning of the stream
+			return body, err
+		}
+	}
+	// keep a copy of the body argument.  this is to handle cases
+	// where req.Body is replaced, e.g. httputil.DumpRequest and friends.
+	req.body = body
+	req.req.Body = body
+	req.req.ContentLength = size
+	if contentType == "" {
+		// Del is a no-op when the header has no value
+		req.req.Header.Del(shared.HeaderContentType)
+	} else if req.req.Header.Get(shared.HeaderContentType) == "" || clobberContentType {
+		req.req.Header.Set(shared.HeaderContentType, contentType)
+	}
+	return nil
+}
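Request.Next advances the pipeline by copying the request and slicing off the head policy, so each policy sees only the chain that remains after it. The pattern is easier to see in isolation; this is a stdlib-only sketch with illustrative names (`policy`, `request`, a string in place of `*http.Response`), not the azcore API:

```go
package main

import (
	"errors"
	"fmt"
)

// policy mirrors the shape of azcore's Policy interface: a policy does
// its work, then calls next() on the request to continue the chain.
type policy func(req *request) (string, error)

// request carries the remaining policy chain, like Request.policies above.
type request struct {
	policies []policy
	url      string
}

// next pops the head policy and invokes it on a shallow copy of the
// request, so the caller's view of the chain is never mutated.
func (r *request) next() (string, error) {
	if len(r.policies) == 0 {
		return "", errors.New("no more policies")
	}
	head := r.policies[0]
	cp := *r
	cp.policies = cp.policies[1:]
	return head(&cp)
}

func main() {
	logging := func(req *request) (string, error) {
		resp, err := req.next()
		return "logged(" + resp + ")", err
	}
	transport := func(req *request) (string, error) {
		return "GET " + req.url, nil
	}
	req := &request{policies: []policy{logging, transport}, url: "https://example.com"}
	resp, _ := req.next()
	fmt.Println(resp) // logged(GET https://example.com)
}
```

Copying before each hop is what lets a retry policy call Next() repeatedly against the same remaining chain without corrupting it.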

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/response_error.go 🔗

@@ -0,0 +1,201 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package exported
+
+import (
+	"bytes"
+	"encoding/json"
+	"fmt"
+	"net/http"
+	"regexp"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/exported"
+)
+
+// NewResponseError creates a new *ResponseError from the provided HTTP response.
+// Exported as runtime.NewResponseError().
+func NewResponseError(resp *http.Response) error {
+	// prefer the error code in the response header
+	if ec := resp.Header.Get(shared.HeaderXMSErrorCode); ec != "" {
+		return NewResponseErrorWithErrorCode(resp, ec)
+	}
+
+	// if we didn't get x-ms-error-code, check in the response body
+	body, err := exported.Payload(resp, nil)
+	if err != nil {
+		// since we're not returning the ResponseError in this
+		// case we also don't want to write it to the log.
+		return err
+	}
+
+	var errorCode string
+	if len(body) > 0 {
+		if fromJSON := extractErrorCodeJSON(body); fromJSON != "" {
+			errorCode = fromJSON
+		} else if fromXML := extractErrorCodeXML(body); fromXML != "" {
+			errorCode = fromXML
+		}
+	}
+
+	return NewResponseErrorWithErrorCode(resp, errorCode)
+}
+
+// NewResponseErrorWithErrorCode creates an *azcore.ResponseError from the provided HTTP response and errorCode.
+// Exported as runtime.NewResponseErrorWithErrorCode().
+func NewResponseErrorWithErrorCode(resp *http.Response, errorCode string) error {
+	respErr := &ResponseError{
+		ErrorCode:   errorCode,
+		StatusCode:  resp.StatusCode,
+		RawResponse: resp,
+	}
+	log.Write(log.EventResponseError, respErr.Error())
+	return respErr
+}
+
+func extractErrorCodeJSON(body []byte) string {
+	var rawObj map[string]any
+	if err := json.Unmarshal(body, &rawObj); err != nil {
+		// not a JSON object
+		return ""
+	}
+
+	// check if this is a wrapped error, i.e. { "error": { ... } }
+	// if so then unwrap it
+	if wrapped, ok := rawObj["error"]; ok {
+		unwrapped, ok := wrapped.(map[string]any)
+		if !ok {
+			return ""
+		}
+		rawObj = unwrapped
+	} else if wrapped, ok := rawObj["odata.error"]; ok {
+		// check if this is a wrapped odata error, i.e. { "odata.error": { ... } }
+		unwrapped, ok := wrapped.(map[string]any)
+		if !ok {
+			return ""
+		}
+		rawObj = unwrapped
+	}
+
+	// now check for the error code
+	code, ok := rawObj["code"]
+	if !ok {
+		return ""
+	}
+	codeStr, ok := code.(string)
+	if !ok {
+		return ""
+	}
+	return codeStr
+}
+
+func extractErrorCodeXML(body []byte) string {
+	// regular expression is much easier than dealing with the XML parser
+	rx := regexp.MustCompile(`<(?:\w+:)?[c|C]ode>\s*(\w+)\s*<\/(?:\w+:)?[c|C]ode>`)
+	res := rx.FindStringSubmatch(string(body))
+	if len(res) != 2 {
+		return ""
+	}
+	// first submatch is the entire thing, second one is the captured error code
+	return res[1]
+}
+
+// ResponseError is returned when a request is made to a service and
+// the service returns a non-success HTTP status code.
+// Use errors.As() to access this type in the error chain.
+// Exported as azcore.ResponseError.
+type ResponseError struct {
+	// ErrorCode is the error code returned by the resource provider if available.
+	ErrorCode string
+
+	// StatusCode is the HTTP status code as defined in https://pkg.go.dev/net/http#pkg-constants.
+	StatusCode int
+
+	// RawResponse is the underlying HTTP response.
+	RawResponse *http.Response `json:"-"`
+
+	errMsg string
+}
+
+// Error implements the error interface for type ResponseError.
+// Note that the message contents are not contractual and can change over time.
+func (e *ResponseError) Error() string {
+	if e.errMsg != "" {
+		return e.errMsg
+	}
+
+	const separator = "--------------------------------------------------------------------------------"
+	// write the request method and URL with response status code
+	msg := &bytes.Buffer{}
+	if e.RawResponse != nil {
+		if e.RawResponse.Request != nil {
+			fmt.Fprintf(msg, "%s %s://%s%s\n", e.RawResponse.Request.Method, e.RawResponse.Request.URL.Scheme, e.RawResponse.Request.URL.Host, e.RawResponse.Request.URL.Path)
+		} else {
+			fmt.Fprintln(msg, "Request information not available")
+		}
+		fmt.Fprintln(msg, separator)
+		fmt.Fprintf(msg, "RESPONSE %d: %s\n", e.RawResponse.StatusCode, e.RawResponse.Status)
+	} else {
+		fmt.Fprintln(msg, "Missing RawResponse")
+		fmt.Fprintln(msg, separator)
+	}
+	if e.ErrorCode != "" {
+		fmt.Fprintf(msg, "ERROR CODE: %s\n", e.ErrorCode)
+	} else {
+		fmt.Fprintln(msg, "ERROR CODE UNAVAILABLE")
+	}
+	if e.RawResponse != nil {
+		fmt.Fprintln(msg, separator)
+		body, err := exported.Payload(e.RawResponse, nil)
+		if err != nil {
+			// this really shouldn't fail at this point as the response
+			// body is already cached (it was read in NewResponseError)
+			fmt.Fprintf(msg, "Error reading response body: %v", err)
+		} else if len(body) > 0 {
+			if err := json.Indent(msg, body, "", "  "); err != nil {
+				// failed to pretty-print so just dump it verbatim
+				fmt.Fprint(msg, string(body))
+			}
+			// the standard library doesn't have a pretty-printer for XML
+			fmt.Fprintln(msg)
+		} else {
+			fmt.Fprintln(msg, "Response contained no body")
+		}
+	}
+	fmt.Fprintln(msg, separator)
+
+	e.errMsg = msg.String()
+	return e.errMsg
+}
+
+// internal type used for marshaling/unmarshaling
+type responseError struct {
+	ErrorCode    string `json:"errorCode"`
+	StatusCode   int    `json:"statusCode"`
+	ErrorMessage string `json:"errorMessage"`
+}
+
+func (e ResponseError) MarshalJSON() ([]byte, error) {
+	return json.Marshal(responseError{
+		ErrorCode:    e.ErrorCode,
+		StatusCode:   e.StatusCode,
+		ErrorMessage: e.Error(),
+	})
+}
+
+func (e *ResponseError) UnmarshalJSON(data []byte) error {
+	re := responseError{}
+	if err := json.Unmarshal(data, &re); err != nil {
+		return err
+	}
+
+	e.ErrorCode = re.ErrorCode
+	e.StatusCode = re.StatusCode
+	e.errMsg = re.ErrorMessage
+	return nil
+}
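extractErrorCodeJSON's unwrapping rules (a bare object, an `{"error":{...}}` wrapper, or an `{"odata.error":{...}}` wrapper, each with a string `code` field) can be condensed into a few lines. A hypothetical stand-alone version of the same logic:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// errorCode mirrors the unwrapping above: a service error body may
// arrive bare, wrapped as {"error":{...}}, or as {"odata.error":{...}}.
// It returns "" when no string code can be found.
func errorCode(body []byte) string {
	var obj map[string]any
	if err := json.Unmarshal(body, &obj); err != nil {
		return "" // not a JSON object
	}
	for _, key := range []string{"error", "odata.error"} {
		if wrapped, ok := obj[key].(map[string]any); ok {
			obj = wrapped
			break
		}
	}
	code, _ := obj["code"].(string)
	return code
}

func main() {
	fmt.Println(errorCode([]byte(`{"error":{"code":"ResourceNotFound"}}`))) // ResourceNotFound
	fmt.Println(errorCode([]byte(`{"code":"Throttled"}`)))                  // Throttled
}
```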

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log/log.go 🔗

@@ -0,0 +1,50 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+// This is an internal helper package to combine the complete logging APIs.
+package log
+
+import (
+	azlog "github.com/Azure/azure-sdk-for-go/sdk/azcore/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+)
+
+type Event = log.Event
+
+const (
+	EventRequest       = azlog.EventRequest
+	EventResponse      = azlog.EventResponse
+	EventResponseError = azlog.EventResponseError
+	EventRetryPolicy   = azlog.EventRetryPolicy
+	EventLRO           = azlog.EventLRO
+)
+
+// Write invokes the underlying listener with the specified event and message.
+// If the event shouldn't be logged or there is no listener then Write does nothing.
+func Write(cls log.Event, msg string) {
+	log.Write(cls, msg)
+}
+
+// Writef invokes the underlying listener with the specified event and formatted message.
+// If the event shouldn't be logged or there is no listener then Writef does nothing.
+func Writef(cls log.Event, format string, a ...any) {
+	log.Writef(cls, format, a...)
+}
+
+// SetListener will set the Logger to write to the specified listener.
+func SetListener(lst func(Event, string)) {
+	log.SetListener(lst)
+}
+
+// Should returns true if the specified log event should be written to the log.
+// By default all log events will be logged.  Call SetEvents() to limit
+// the log events for logging.
+// If no listener has been set this will return false.
+// Calling this method is useful when the message to log is computationally expensive
+// and you want to avoid the overhead if its log event is not enabled.
+func Should(cls log.Event) bool {
+	return log.Should(cls)
+}
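This package is a thin forwarding layer over sdk/internal/log. The listener/Should gate it forwards to works roughly like the following simplified sketch (the real implementation additionally filters by the event set configured via SetEvents; everything here is illustrative, not the SDK's API):

```go
package main

import "fmt"

// event classifies a log message, like log.Event in the SDK.
type event string

// listener is nil until setListener is called; write drops messages
// until then, and should lets callers skip building expensive messages.
var listener func(event, string)

func setListener(l func(event, string)) { listener = l }

func should(e event) bool { return listener != nil }

func write(e event, msg string) {
	if listener != nil {
		listener(e, msg)
	}
}

func main() {
	write("Request", "silently dropped: no listener registered")
	setListener(func(e event, msg string) { fmt.Printf("[%s] %s\n", e, msg) })
	if should("Retry") { // guard a computationally expensive message
		write("Retry", "attempt 2 of 3")
	}
}
```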

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/async/async.go 🔗

@@ -0,0 +1,159 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package async
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/poller"
+)
+
+// see https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/async-api-reference.md
+
+// Applicable returns true if the LRO is using Azure-AsyncOperation.
+func Applicable(resp *http.Response) bool {
+	return resp.Header.Get(shared.HeaderAzureAsync) != ""
+}
+
+// CanResume returns true if the token can rehydrate this poller type.
+func CanResume(token map[string]any) bool {
+	_, ok := token["asyncURL"]
+	return ok
+}
+
+// Poller is an LRO poller that uses the Azure-AsyncOperation pattern.
+type Poller[T any] struct {
+	pl exported.Pipeline
+
+	resp *http.Response
+
+	// The URL from Azure-AsyncOperation header.
+	AsyncURL string `json:"asyncURL"`
+
+	// The URL from Location header.
+	LocURL string `json:"locURL"`
+
+	// The URL from the initial LRO request.
+	OrigURL string `json:"origURL"`
+
+	// The HTTP method from the initial LRO request.
+	Method string `json:"method"`
+
+	// The value of final-state-via from swagger, can be the empty string.
+	FinalState pollers.FinalStateVia `json:"finalState"`
+
+	// The LRO's current state.
+	CurState string `json:"state"`
+}
+
+// New creates a new Poller from the provided initial response and final-state type.
+// Pass nil for response to create an empty Poller for rehydration.
+func New[T any](pl exported.Pipeline, resp *http.Response, finalState pollers.FinalStateVia) (*Poller[T], error) {
+	if resp == nil {
+		log.Write(log.EventLRO, "Resuming Azure-AsyncOperation poller.")
+		return &Poller[T]{pl: pl}, nil
+	}
+	log.Write(log.EventLRO, "Using Azure-AsyncOperation poller.")
+	asyncURL := resp.Header.Get(shared.HeaderAzureAsync)
+	if asyncURL == "" {
+		return nil, errors.New("response is missing Azure-AsyncOperation header")
+	}
+	if !poller.IsValidURL(asyncURL) {
+		return nil, fmt.Errorf("invalid polling URL %s", asyncURL)
+	}
+	// check for provisioning state.  if the operation is a RELO
+	// and terminates synchronously this will prevent extra polling.
+	// it's ok if there's no provisioning state.
+	state, _ := poller.GetProvisioningState(resp)
+	if state == "" {
+		state = poller.StatusInProgress
+	}
+	p := &Poller[T]{
+		pl:         pl,
+		resp:       resp,
+		AsyncURL:   asyncURL,
+		LocURL:     resp.Header.Get(shared.HeaderLocation),
+		OrigURL:    resp.Request.URL.String(),
+		Method:     resp.Request.Method,
+		FinalState: finalState,
+		CurState:   state,
+	}
+	return p, nil
+}
+
+// Done returns true if the LRO is in a terminal state.
+func (p *Poller[T]) Done() bool {
+	return poller.IsTerminalState(p.CurState)
+}
+
+// Poll retrieves the current state of the LRO.
+func (p *Poller[T]) Poll(ctx context.Context) (*http.Response, error) {
+	err := pollers.PollHelper(ctx, p.AsyncURL, p.pl, func(resp *http.Response) (string, error) {
+		if !poller.StatusCodeValid(resp) {
+			p.resp = resp
+			return "", exported.NewResponseError(resp)
+		}
+		state, err := poller.GetStatus(resp)
+		if err != nil {
+			return "", err
+		} else if state == "" {
+			return "", errors.New("the response did not contain a status")
+		}
+		p.resp = resp
+		p.CurState = state
+		return p.CurState, nil
+	})
+	if err != nil {
+		return nil, err
+	}
+	return p.resp, nil
+}
+
+func (p *Poller[T]) Result(ctx context.Context, out *T) error {
+	if p.resp.StatusCode == http.StatusNoContent {
+		return nil
+	} else if poller.Failed(p.CurState) {
+		return exported.NewResponseError(p.resp)
+	}
+	var req *exported.Request
+	var err error
+	if p.Method == http.MethodPatch || p.Method == http.MethodPut {
+		// for PATCH and PUT, the final GET is on the original resource URL
+		req, err = exported.NewRequest(ctx, http.MethodGet, p.OrigURL)
+	} else if p.Method == http.MethodPost {
+		if p.FinalState == pollers.FinalStateViaAzureAsyncOp {
+			// no final GET required
+		} else if p.FinalState == pollers.FinalStateViaOriginalURI {
+			req, err = exported.NewRequest(ctx, http.MethodGet, p.OrigURL)
+		} else if p.LocURL != "" {
+			// ideally FinalState would be set to "location" but it isn't always.
+			// must check last due to more permissive condition.
+			req, err = exported.NewRequest(ctx, http.MethodGet, p.LocURL)
+		}
+	}
+	if err != nil {
+		return err
+	}
+
+	// if a final GET request has been created, execute it
+	if req != nil {
+		resp, err := p.pl.Do(req)
+		if err != nil {
+			return err
+		}
+		p.resp = resp
+	}
+
+	return pollers.ResultHelper(p.resp, poller.Failed(p.CurState), "", out)
+}
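Callers don't usually invoke Done/Poll directly; the runtime drives a poller like this in a poll-until-terminal loop. A minimal sketch of that loop under assumed state names (the real runtime also honors Retry-After delays and resume tokens):

```go
package main

import (
	"errors"
	"fmt"
)

// terminal mirrors poller.IsTerminalState: Succeeded, Failed and
// Canceled end the loop; anything else means keep polling.
func terminal(state string) bool {
	return state == "Succeeded" || state == "Failed" || state == "Canceled"
}

// pollUntilDone repeatedly invokes poll until the reported state is
// terminal, returning the final state. maxPolls guards the sketch
// against a service that never terminates.
func pollUntilDone(poll func() string, maxPolls int) (string, error) {
	for i := 0; i < maxPolls; i++ {
		if state := poll(); terminal(state) {
			return state, nil
		}
	}
	return "", errors.New("exceeded poll budget")
}

func main() {
	states := []string{"InProgress", "InProgress", "Succeeded"}
	i := 0
	final, err := pollUntilDone(func() string { s := states[i]; i++; return s }, 10)
	fmt.Println(final, err)
}
```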

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/body/body.go 🔗

@@ -0,0 +1,135 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package body
+
+import (
+	"context"
+	"errors"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/poller"
+)
+
+// Kind is the identifier of this type in a resume token.
+const kind = "body"
+
+// Applicable returns true if the LRO is using no headers, just provisioning state.
+// This is only applicable to PATCH and PUT methods and assumes no polling headers.
+func Applicable(resp *http.Response) bool {
+	// we can't check for absence of headers due to some misbehaving services
+	// like redis that return a Location header but don't actually use that protocol
+	return resp.Request.Method == http.MethodPatch || resp.Request.Method == http.MethodPut
+}
+
+// CanResume returns true if the token can rehydrate this poller type.
+func CanResume(token map[string]any) bool {
+	t, ok := token["type"]
+	if !ok {
+		return false
+	}
+	tt, ok := t.(string)
+	if !ok {
+		return false
+	}
+	return tt == kind
+}
+
+// Poller is an LRO poller that uses the Body pattern.
+type Poller[T any] struct {
+	pl exported.Pipeline
+
+	resp *http.Response
+
+	// The poller's type, used for resume token processing.
+	Type string `json:"type"`
+
+	// The URL for polling.
+	PollURL string `json:"pollURL"`
+
+	// The LRO's current state.
+	CurState string `json:"state"`
+}
+
+// New creates a new Poller from the provided initial response.
+// Pass nil for response to create an empty Poller for rehydration.
+func New[T any](pl exported.Pipeline, resp *http.Response) (*Poller[T], error) {
+	if resp == nil {
+		log.Write(log.EventLRO, "Resuming Body poller.")
+		return &Poller[T]{pl: pl}, nil
+	}
+	log.Write(log.EventLRO, "Using Body poller.")
+	p := &Poller[T]{
+		pl:      pl,
+		resp:    resp,
+		Type:    kind,
+		PollURL: resp.Request.URL.String(),
+	}
+	// default initial state to InProgress.  depending on the HTTP
+	// status code and provisioning state, we might change the value.
+	curState := poller.StatusInProgress
+	provState, err := poller.GetProvisioningState(resp)
+	if err != nil && !errors.Is(err, poller.ErrNoBody) {
+		return nil, err
+	}
+	if resp.StatusCode == http.StatusCreated && provState != "" {
+		// absence of provisioning state is ok for a 201; it means the operation is in progress
+		curState = provState
+	} else if resp.StatusCode == http.StatusOK {
+		if provState != "" {
+			curState = provState
+		} else if provState == "" {
+			// for a 200, absence of provisioning state indicates success
+			curState = poller.StatusSucceeded
+		}
+	} else if resp.StatusCode == http.StatusNoContent {
+		curState = poller.StatusSucceeded
+	}
+	p.CurState = curState
+	return p, nil
+}
+
+func (p *Poller[T]) Done() bool {
+	return poller.IsTerminalState(p.CurState)
+}
+
+func (p *Poller[T]) Poll(ctx context.Context) (*http.Response, error) {
+	err := pollers.PollHelper(ctx, p.PollURL, p.pl, func(resp *http.Response) (string, error) {
+		if !poller.StatusCodeValid(resp) {
+			p.resp = resp
+			return "", exported.NewResponseError(resp)
+		}
+		if resp.StatusCode == http.StatusNoContent {
+			p.resp = resp
+			p.CurState = poller.StatusSucceeded
+			return p.CurState, nil
+		}
+		state, err := poller.GetProvisioningState(resp)
+		if errors.Is(err, poller.ErrNoBody) {
+			// a missing response body in the non-204 case is an error
+			return "", err
+		} else if state == "" {
+			// a response body without provisioning state is considered terminal success
+			state = poller.StatusSucceeded
+		} else if err != nil {
+			return "", err
+		}
+		p.resp = resp
+		p.CurState = state
+		return p.CurState, nil
+	})
+	if err != nil {
+		return nil, err
+	}
+	return p.resp, nil
+}
+
+func (p *Poller[T]) Result(ctx context.Context, out *T) error {
+	return pollers.ResultHelper(p.resp, poller.Failed(p.CurState), "", out)
+}
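The initial-state rules in New above reduce to a small table: a 201 with provisioning state uses it, a 200 uses provisioning state when present and otherwise means success, a 204 is success, and everything else starts InProgress. A hypothetical distillation, using plain string literals in place of the `poller.Status*` constants:

```go
package main

import (
	"fmt"
	"net/http"
)

// initialState reproduces New's decision tree: the HTTP status code and
// optional provisioning state together pick the LRO's starting state.
func initialState(statusCode int, provState string) string {
	switch statusCode {
	case http.StatusCreated:
		if provState != "" {
			return provState
		}
	case http.StatusOK:
		if provState != "" {
			return provState
		}
		return "Succeeded" // 200 with no provisioning state is terminal success
	case http.StatusNoContent:
		return "Succeeded"
	}
	return "InProgress"
}

func main() {
	fmt.Println(initialState(http.StatusOK, ""))         // Succeeded
	fmt.Println(initialState(http.StatusCreated, ""))    // InProgress
	fmt.Println(initialState(http.StatusOK, "Updating")) // Updating
}
```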

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/fake/fake.go 🔗

@@ -0,0 +1,133 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package fake
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+	"strings"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/poller"
+)
+
+// Applicable returns true if the LRO is a fake.
+func Applicable(resp *http.Response) bool {
+	return resp.Header.Get(shared.HeaderFakePollerStatus) != ""
+}
+
+// CanResume returns true if the token can rehydrate this poller type.
+func CanResume(token map[string]any) bool {
+	_, ok := token["fakeURL"]
+	return ok
+}
+
+// Poller is an LRO poller that uses the Core-Fake-Poller pattern.
+type Poller[T any] struct {
+	pl exported.Pipeline
+
+	resp *http.Response
+
+	// The API name from CtxAPINameKey
+	APIName string `json:"apiName"`
+
+	// The URL from Core-Fake-Poller header.
+	FakeURL string `json:"fakeURL"`
+
+	// The LRO's current state.
+	FakeStatus string `json:"status"`
+}
+
+// lroStatusURLSuffix is the URL path suffix for a faked LRO.
+const lroStatusURLSuffix = "/get/fake/status"
+
+// New creates a new Poller from the provided initial response.
+// Pass nil for response to create an empty Poller for rehydration.
+func New[T any](pl exported.Pipeline, resp *http.Response) (*Poller[T], error) {
+	if resp == nil {
+		log.Write(log.EventLRO, "Resuming Core-Fake-Poller poller.")
+		return &Poller[T]{pl: pl}, nil
+	}
+
+	log.Write(log.EventLRO, "Using Core-Fake-Poller poller.")
+	fakeStatus := resp.Header.Get(shared.HeaderFakePollerStatus)
+	if fakeStatus == "" {
+		return nil, errors.New("response is missing Fake-Poller-Status header")
+	}
+
+	ctxVal := resp.Request.Context().Value(shared.CtxAPINameKey{})
+	if ctxVal == nil {
+		return nil, errors.New("missing value for CtxAPINameKey")
+	}
+
+	apiName, ok := ctxVal.(string)
+	if !ok {
+		return nil, fmt.Errorf("expected string for CtxAPINameKey, the type was %T", ctxVal)
+	}
+
+	qp := ""
+	if resp.Request.URL.RawQuery != "" {
+		qp = "?" + resp.Request.URL.RawQuery
+	}
+
+	p := &Poller[T]{
+		pl:      pl,
+		resp:    resp,
+		APIName: apiName,
+		// NOTE: any changes to this path format MUST be reflected in SanitizePollerPath()
+		FakeURL:    fmt.Sprintf("%s://%s%s%s%s", resp.Request.URL.Scheme, resp.Request.URL.Host, resp.Request.URL.Path, lroStatusURLSuffix, qp),
+		FakeStatus: fakeStatus,
+	}
+	return p, nil
+}
+
+// Done returns true if the LRO is in a terminal state.
+func (p *Poller[T]) Done() bool {
+	return poller.IsTerminalState(p.FakeStatus)
+}
+
+// Poll retrieves the current state of the LRO.
+func (p *Poller[T]) Poll(ctx context.Context) (*http.Response, error) {
+	ctx = context.WithValue(ctx, shared.CtxAPINameKey{}, p.APIName)
+	err := pollers.PollHelper(ctx, p.FakeURL, p.pl, func(resp *http.Response) (string, error) {
+		if !poller.StatusCodeValid(resp) {
+			p.resp = resp
+			return "", exported.NewResponseError(resp)
+		}
+		fakeStatus := resp.Header.Get(shared.HeaderFakePollerStatus)
+		if fakeStatus == "" {
+			return "", errors.New("response is missing Fake-Poller-Status header")
+		}
+		p.resp = resp
+		p.FakeStatus = fakeStatus
+		return p.FakeStatus, nil
+	})
+	if err != nil {
+		return nil, err
+	}
+	return p.resp, nil
+}
+
+func (p *Poller[T]) Result(ctx context.Context, out *T) error {
+	if p.resp.StatusCode == http.StatusNoContent {
+		return nil
+	} else if poller.Failed(p.FakeStatus) {
+		return exported.NewResponseError(p.resp)
+	}
+
+	return pollers.ResultHelper(p.resp, poller.Failed(p.FakeStatus), "", out)
+}
+
+// SanitizePollerPath removes any fake-appended suffix from a URL's path.
+func SanitizePollerPath(path string) string {
+	return strings.TrimSuffix(path, lroStatusURLSuffix)
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/loc/loc.go 🔗

@@ -0,0 +1,123 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package loc
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/poller"
+)
+
+// Kind is the identifier of this type in a resume token.
+const kind = "loc"
+
+// Applicable returns true if the LRO is using Location.
+func Applicable(resp *http.Response) bool {
+	return resp.Header.Get(shared.HeaderLocation) != ""
+}
+
+// CanResume returns true if the token can rehydrate this poller type.
+func CanResume(token map[string]any) bool {
+	t, ok := token["type"]
+	if !ok {
+		return false
+	}
+	tt, ok := t.(string)
+	if !ok {
+		return false
+	}
+	return tt == kind
+}
+
+// Poller is an LRO poller that uses the Location pattern.
+type Poller[T any] struct {
+	pl   exported.Pipeline
+	resp *http.Response
+
+	Type     string `json:"type"`
+	PollURL  string `json:"pollURL"`
+	CurState string `json:"state"`
+}
+
+// New creates a new Poller from the provided initial response.
+// Pass nil for response to create an empty Poller for rehydration.
+func New[T any](pl exported.Pipeline, resp *http.Response) (*Poller[T], error) {
+	if resp == nil {
+		log.Write(log.EventLRO, "Resuming Location poller.")
+		return &Poller[T]{pl: pl}, nil
+	}
+	log.Write(log.EventLRO, "Using Location poller.")
+	locURL := resp.Header.Get(shared.HeaderLocation)
+	if locURL == "" {
+		return nil, errors.New("response is missing Location header")
+	}
+	if !poller.IsValidURL(locURL) {
+		return nil, fmt.Errorf("invalid polling URL %s", locURL)
+	}
+	// check for provisioning state.  if the operation is a RELO
+	// and terminates synchronously this will prevent extra polling.
+	// it's ok if there's no provisioning state.
+	state, _ := poller.GetProvisioningState(resp)
+	if state == "" {
+		state = poller.StatusInProgress
+	}
+	return &Poller[T]{
+		pl:       pl,
+		resp:     resp,
+		Type:     kind,
+		PollURL:  locURL,
+		CurState: state,
+	}, nil
+}
+
+func (p *Poller[T]) Done() bool {
+	return poller.IsTerminalState(p.CurState)
+}
+
+func (p *Poller[T]) Poll(ctx context.Context) (*http.Response, error) {
+	err := pollers.PollHelper(ctx, p.PollURL, p.pl, func(resp *http.Response) (string, error) {
+		// location polling can return an updated polling URL
+		if h := resp.Header.Get(shared.HeaderLocation); h != "" {
+			p.PollURL = h
+		}
+		// if provisioning state is available, use that.  this is only
+		// for some ARM LRO scenarios (e.g. DELETE with a Location header)
+		// so if it's missing then use HTTP status code.
+		provState, _ := poller.GetProvisioningState(resp)
+		p.resp = resp
+		if provState != "" {
+			p.CurState = provState
+		} else if resp.StatusCode == http.StatusAccepted {
+			p.CurState = poller.StatusInProgress
+		} else if resp.StatusCode > 199 && resp.StatusCode < 300 {
+			// any 2xx other than a 202 indicates success
+			p.CurState = poller.StatusSucceeded
+		} else if pollers.IsNonTerminalHTTPStatusCode(resp) {
+			// the request timed out or is being throttled.
+			// DO NOT include this as a terminal failure. preserve
+			// the existing state and return the response.
+		} else {
+			p.CurState = poller.StatusFailed
+		}
+		return p.CurState, nil
+	})
+	if err != nil {
+		return nil, err
+	}
+	return p.resp, nil
+}
+
+func (p *Poller[T]) Result(ctx context.Context, out *T) error {
+	return pollers.ResultHelper(p.resp, poller.Failed(p.CurState), "", out)
+}
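Poll's state machine above maps each polling response to a state: a provisioning state wins, 202 means in progress, any other 2xx means success, retryable transport failures keep the previous state, and anything else is failure. A hypothetical distillation as a pure function; the set of non-terminal status codes (timeout, throttling, 5xx) is an assumption standing in for pollers.IsNonTerminalHTTPStatusCode:

```go
package main

import "fmt"

// locState condenses the Location poller's per-response decision.
// prev is carried forward for non-terminal HTTP failures.
func locState(statusCode int, provState, prev string) string {
	switch {
	case provState != "":
		return provState
	case statusCode == 202:
		return "InProgress"
	case statusCode >= 200 && statusCode < 300:
		return "Succeeded" // any 2xx other than 202 indicates success
	case statusCode == 408 || statusCode == 429 || statusCode >= 500:
		return prev // assumed non-terminal set: timed out or throttled
	default:
		return "Failed"
	}
}

func main() {
	fmt.Println(locState(202, "", "InProgress")) // InProgress
	fmt.Println(locState(200, "", "InProgress")) // Succeeded
	fmt.Println(locState(429, "", "InProgress")) // InProgress
}
```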

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/op/op.go 🔗

@@ -0,0 +1,148 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package op
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/poller"
+)
+
+// Applicable returns true if the LRO is using Operation-Location.
+func Applicable(resp *http.Response) bool {
+	return resp.Header.Get(shared.HeaderOperationLocation) != ""
+}
+
+// CanResume returns true if the token can rehydrate this poller type.
+func CanResume(token map[string]any) bool {
+	_, ok := token["oplocURL"]
+	return ok
+}
+
+// Poller is an LRO poller that uses the Operation-Location pattern.
+type Poller[T any] struct {
+	pl   exported.Pipeline
+	resp *http.Response
+
+	OpLocURL   string                `json:"oplocURL"`
+	LocURL     string                `json:"locURL"`
+	OrigURL    string                `json:"origURL"`
+	Method     string                `json:"method"`
+	FinalState pollers.FinalStateVia `json:"finalState"`
+	ResultPath string                `json:"resultPath"`
+	CurState   string                `json:"state"`
+}
+
+// New creates a new Poller from the provided initial response.
+// Pass nil for response to create an empty Poller for rehydration.
+func New[T any](pl exported.Pipeline, resp *http.Response, finalState pollers.FinalStateVia, resultPath string) (*Poller[T], error) {
+	if resp == nil {
+		log.Write(log.EventLRO, "Resuming Operation-Location poller.")
+		return &Poller[T]{pl: pl}, nil
+	}
+	log.Write(log.EventLRO, "Using Operation-Location poller.")
+	opURL := resp.Header.Get(shared.HeaderOperationLocation)
+	if opURL == "" {
+		return nil, errors.New("response is missing Operation-Location header")
+	}
+	if !poller.IsValidURL(opURL) {
+		return nil, fmt.Errorf("invalid Operation-Location URL %s", opURL)
+	}
+	locURL := resp.Header.Get(shared.HeaderLocation)
+	// Location header is optional
+	if locURL != "" && !poller.IsValidURL(locURL) {
+		return nil, fmt.Errorf("invalid Location URL %s", locURL)
+	}
+	// default initial state to InProgress.  if the
+	// service sent us a status then use that instead.
+	curState := poller.StatusInProgress
+	status, err := poller.GetStatus(resp)
+	if err != nil && !errors.Is(err, poller.ErrNoBody) {
+		return nil, err
+	}
+	if status != "" {
+		curState = status
+	}
+
+	return &Poller[T]{
+		pl:         pl,
+		resp:       resp,
+		OpLocURL:   opURL,
+		LocURL:     locURL,
+		OrigURL:    resp.Request.URL.String(),
+		Method:     resp.Request.Method,
+		FinalState: finalState,
+		ResultPath: resultPath,
+		CurState:   curState,
+	}, nil
+}
+
+func (p *Poller[T]) Done() bool {
+	return poller.IsTerminalState(p.CurState)
+}
+
+func (p *Poller[T]) Poll(ctx context.Context) (*http.Response, error) {
+	err := pollers.PollHelper(ctx, p.OpLocURL, p.pl, func(resp *http.Response) (string, error) {
+		if !poller.StatusCodeValid(resp) {
+			p.resp = resp
+			return "", exported.NewResponseError(resp)
+		}
+		state, err := poller.GetStatus(resp)
+		if err != nil {
+			return "", err
+		} else if state == "" {
+			return "", errors.New("the response did not contain a status")
+		}
+		p.resp = resp
+		p.CurState = state
+		return p.CurState, nil
+	})
+	if err != nil {
+		return nil, err
+	}
+	return p.resp, nil
+}
+
+func (p *Poller[T]) Result(ctx context.Context, out *T) error {
+	var req *exported.Request
+	var err error
+
+	if p.FinalState == pollers.FinalStateViaLocation && p.LocURL != "" {
+		req, err = exported.NewRequest(ctx, http.MethodGet, p.LocURL)
+	} else if rl, rlErr := poller.GetResourceLocation(p.resp); rlErr != nil && !errors.Is(rlErr, poller.ErrNoBody) {
+		return rlErr
+	} else if rl != "" {
+		req, err = exported.NewRequest(ctx, http.MethodGet, rl)
+	} else if p.Method == http.MethodPatch || p.Method == http.MethodPut {
+		req, err = exported.NewRequest(ctx, http.MethodGet, p.OrigURL)
+	} else if p.Method == http.MethodPost && p.LocURL != "" {
+		req, err = exported.NewRequest(ctx, http.MethodGet, p.LocURL)
+	}
+	if err != nil {
+		return err
+	}
+
+	// if a final GET request has been created, execute it
+	if req != nil {
+		// no JSON path when making a final GET request
+		p.ResultPath = ""
+		resp, err := p.pl.Do(req)
+		if err != nil {
+			return err
+		}
+		p.resp = resp
+	}
+
+	return pollers.ResultHelper(p.resp, poller.Failed(p.CurState), p.ResultPath, out)
+}

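The `Done`/`Poll`/`Result` triad above is the generic LRO polling contract: poll until the reported status reaches a terminal state, then fetch the result. A minimal standalone sketch of that loop, assuming a hypothetical `fakeOp` in place of a real pipeline and service (all names here are illustrative, not the SDK's API):

```go
package main

import (
	"errors"
	"fmt"
)

// isTerminal mirrors the poller package's notion of completion.
func isTerminal(state string) bool {
	return state == "Succeeded" || state == "Failed" || state == "Canceled"
}

// fakeOp simulates a service that reports InProgress twice, then Succeeded.
type fakeOp struct{ polls int }

func (o *fakeOp) poll() (string, error) {
	o.polls++
	if o.polls < 3 {
		return "InProgress", nil
	}
	return "Succeeded", nil
}

// pollUntilDone loops Poll-style until a terminal state is reached,
// rejecting responses that carry no status, as Poll does above.
func pollUntilDone(o *fakeOp) (string, error) {
	state := "InProgress"
	for !isTerminal(state) {
		s, err := o.poll()
		if err != nil {
			return "", err
		}
		if s == "" {
			return "", errors.New("the response did not contain a status")
		}
		state = s
	}
	return state, nil
}

func main() {
	state, err := pollUntilDone(&fakeOp{})
	if err != nil {
		panic(err)
	}
	fmt.Println(state) // Succeeded
}
```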
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/poller.go


@@ -0,0 +1,24 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package pollers
+
+// FinalStateVia is the enumerated type for the possible final-state-via values.
+type FinalStateVia string
+
+const (
+	// FinalStateViaAzureAsyncOp indicates the final payload comes from the Azure-AsyncOperation URL.
+	FinalStateViaAzureAsyncOp FinalStateVia = "azure-async-operation"
+
+	// FinalStateViaLocation indicates the final payload comes from the Location URL.
+	FinalStateViaLocation FinalStateVia = "location"
+
+	// FinalStateViaOriginalURI indicates the final payload comes from the original URL.
+	FinalStateViaOriginalURI FinalStateVia = "original-uri"
+
+	// FinalStateViaOpLocation indicates the final payload comes from the Operation-Location URL.
+	FinalStateViaOpLocation FinalStateVia = "operation-location"
+)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/util.go

@@ -0,0 +1,212 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package pollers
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"reflect"
+
+	azexported "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/poller"
+)
+
+// getTokenTypeName creates a type name from the type parameter T.
+func getTokenTypeName[T any]() (string, error) {
+	tt := shared.TypeOfT[T]()
+	var n string
+	if tt.Kind() == reflect.Pointer {
+		n = "*"
+		tt = tt.Elem()
+	}
+	n += tt.Name()
+	if n == "" {
+		return "", errors.New("nameless types are not allowed")
+	}
+	return n, nil
+}
+
+type resumeTokenWrapper[T any] struct {
+	Type  string `json:"type"`
+	Token T      `json:"token"`
+}
+
+// NewResumeToken creates a resume token from the specified type.
+// An error is returned if the generic type has no name (e.g. struct{}).
+func NewResumeToken[TResult, TSource any](from TSource) (string, error) {
+	n, err := getTokenTypeName[TResult]()
+	if err != nil {
+		return "", err
+	}
+	b, err := json.Marshal(resumeTokenWrapper[TSource]{
+		Type:  n,
+		Token: from,
+	})
+	if err != nil {
+		return "", err
+	}
+	return string(b), nil
+}
+
+// ExtractToken returns the poller-specific token information from the provided token value.
+func ExtractToken(token string) ([]byte, error) {
+	raw := map[string]json.RawMessage{}
+	if err := json.Unmarshal([]byte(token), &raw); err != nil {
+		return nil, err
+	}
+	// this is dependent on the type resumeTokenWrapper[T]
+	tk, ok := raw["token"]
+	if !ok {
+		return nil, errors.New("missing token value")
+	}
+	return tk, nil
+}
+
+// IsTokenValid returns an error if the specified token isn't applicable for generic type T.
+func IsTokenValid[T any](token string) error {
+	raw := map[string]any{}
+	if err := json.Unmarshal([]byte(token), &raw); err != nil {
+		return err
+	}
+	t, ok := raw["type"]
+	if !ok {
+		return errors.New("missing type value")
+	}
+	tt, ok := t.(string)
+	if !ok {
+		return fmt.Errorf("invalid type format %T", t)
+	}
+	n, err := getTokenTypeName[T]()
+	if err != nil {
+		return err
+	}
+	if tt != n {
+		return fmt.Errorf("cannot resume from this poller token. token is for type %s, not %s", tt, n)
+	}
+	return nil
+}
+
+// used if the operation synchronously completed
+type NopPoller[T any] struct {
+	resp   *http.Response
+	result T
+}
+
+// NewNopPoller creates a NopPoller from the provided response.
+// It unmarshals the response body into an instance of T.
+func NewNopPoller[T any](resp *http.Response) (*NopPoller[T], error) {
+	np := &NopPoller[T]{resp: resp}
+	if resp.StatusCode == http.StatusNoContent {
+		return np, nil
+	}
+	payload, err := exported.Payload(resp, nil)
+	if err != nil {
+		return nil, err
+	}
+	if len(payload) == 0 {
+		return np, nil
+	}
+	if err = json.Unmarshal(payload, &np.result); err != nil {
+		return nil, err
+	}
+	return np, nil
+}
+
+func (*NopPoller[T]) Done() bool {
+	return true
+}
+
+func (p *NopPoller[T]) Poll(context.Context) (*http.Response, error) {
+	return p.resp, nil
+}
+
+func (p *NopPoller[T]) Result(ctx context.Context, out *T) error {
+	*out = p.result
+	return nil
+}
+
+// PollHelper creates and executes the request, calling update() with the response.
+// If the request fails, the update func is not called.
+// The update func returns the state of the operation for logging purposes or an error
+// if it fails to extract the required state from the response.
+func PollHelper(ctx context.Context, endpoint string, pl azexported.Pipeline, update func(resp *http.Response) (string, error)) error {
+	req, err := azexported.NewRequest(ctx, http.MethodGet, endpoint)
+	if err != nil {
+		return err
+	}
+	resp, err := pl.Do(req)
+	if err != nil {
+		return err
+	}
+	state, err := update(resp)
+	if err != nil {
+		return err
+	}
+	log.Writef(log.EventLRO, "State %s", state)
+	return nil
+}
+
+// ResultHelper processes the response as success or failure.
+// In the success case, it unmarshals the payload into either a new instance of T or out.
+// In the failure case, it creates an *azcore.Response error from the response.
+func ResultHelper[T any](resp *http.Response, failed bool, jsonPath string, out *T) error {
+	// short-circuit the simple success case with no response body to unmarshal
+	if resp.StatusCode == http.StatusNoContent {
+		return nil
+	}
+
+	defer resp.Body.Close()
+	if !poller.StatusCodeValid(resp) || failed {
+	// the LRO failed.  unmarshal the error and update state
+		return azexported.NewResponseError(resp)
+	}
+
+	// success case
+	payload, err := exported.Payload(resp, nil)
+	if err != nil {
+		return err
+	}
+
+	if jsonPath != "" && len(payload) > 0 {
+		// extract the payload from the specified JSON path.
+		// do this before the zero-length check in case there
+		// is no payload.
+		jsonBody := map[string]json.RawMessage{}
+		if err = json.Unmarshal(payload, &jsonBody); err != nil {
+			return err
+		}
+		payload = jsonBody[jsonPath]
+	}
+
+	if len(payload) == 0 {
+		return nil
+	}
+
+	if err = json.Unmarshal(payload, out); err != nil {
+		return err
+	}
+	return nil
+}
+
+// IsNonTerminalHTTPStatusCode returns true if the HTTP status code should be
+// considered non-terminal thus eligible for retry.
+func IsNonTerminalHTTPStatusCode(resp *http.Response) bool {
+	return exported.HasStatusCode(resp,
+		http.StatusRequestTimeout,      // 408
+		http.StatusTooManyRequests,     // 429
+		http.StatusInternalServerError, // 500
+		http.StatusBadGateway,          // 502
+		http.StatusServiceUnavailable,  // 503
+		http.StatusGatewayTimeout,      // 504
+	)
+}

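The resume-token helpers above embed a type name alongside the token payload so `IsTokenValid` can reject a resume attempt against the wrong poller type. A standalone sketch of the same wrapper shape (the `tokenWrapper`, `newToken`, and `extractToken` names are illustrative, not the SDK's API):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// tokenWrapper mirrors resumeTokenWrapper: a type tag plus the payload.
type tokenWrapper struct {
	Type  string          `json:"type"`
	Token json.RawMessage `json:"token"`
}

// newToken tags the payload with a type name, as NewResumeToken does.
func newToken(typeName string, payload any) (string, error) {
	b, err := json.Marshal(payload)
	if err != nil {
		return "", err
	}
	w, err := json.Marshal(tokenWrapper{Type: typeName, Token: b})
	return string(w), err
}

// extractToken validates the type tag and returns the embedded payload,
// combining the roles of IsTokenValid and ExtractToken.
func extractToken(token, wantType string) ([]byte, error) {
	var w tokenWrapper
	if err := json.Unmarshal([]byte(token), &w); err != nil {
		return nil, err
	}
	if w.Type != wantType {
		return nil, errors.New("token is for a different type")
	}
	if len(w.Token) == 0 {
		return nil, errors.New("missing token value")
	}
	return w.Token, nil
}

func main() {
	tk, _ := newToken("WidgetResponse", map[string]string{"id": "42"})
	payload, err := extractToken(tk, "WidgetResponse")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(payload)) // {"id":"42"}
	if _, err := extractToken(tk, "GadgetResponse"); err == nil {
		panic("type mismatch should fail")
	}
}
```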
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared/constants.go

@@ -0,0 +1,44 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package shared
+
+const (
+	ContentTypeAppJSON   = "application/json"
+	ContentTypeAppXML    = "application/xml"
+	ContentTypeTextPlain = "text/plain"
+)
+
+const (
+	HeaderAuthorization          = "Authorization"
+	HeaderAuxiliaryAuthorization = "x-ms-authorization-auxiliary"
+	HeaderAzureAsync             = "Azure-AsyncOperation"
+	HeaderContentLength          = "Content-Length"
+	HeaderContentType            = "Content-Type"
+	HeaderFakePollerStatus       = "Fake-Poller-Status"
+	HeaderLocation               = "Location"
+	HeaderOperationLocation      = "Operation-Location"
+	HeaderRetryAfter             = "Retry-After"
+	HeaderRetryAfterMS           = "Retry-After-Ms"
+	HeaderUserAgent              = "User-Agent"
+	HeaderWWWAuthenticate        = "WWW-Authenticate"
+	HeaderXMSClientRequestID     = "x-ms-client-request-id"
+	HeaderXMSRequestID           = "x-ms-request-id"
+	HeaderXMSErrorCode           = "x-ms-error-code"
+	HeaderXMSRetryAfterMS        = "x-ms-retry-after-ms"
+)
+
+const BearerTokenPrefix = "Bearer "
+
+const TracingNamespaceAttrName = "az.namespace"
+
+const (
+	// Module is the name of the calling module used in telemetry data.
+	Module = "azcore"
+
+	// Version is the semantic version (see http://semver.org) of this module.
+	Version = "v1.17.0"
+)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared/shared.go

@@ -0,0 +1,149 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package shared
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+	"reflect"
+	"regexp"
+	"strconv"
+	"time"
+)
+
+// NOTE: when adding a new context key type, it likely needs to be
+// added to the deny-list of key types in ContextWithDeniedValues
+
+// CtxWithHTTPHeaderKey is used as a context key for adding/retrieving http.Header.
+type CtxWithHTTPHeaderKey struct{}
+
+// CtxWithRetryOptionsKey is used as a context key for adding/retrieving RetryOptions.
+type CtxWithRetryOptionsKey struct{}
+
+// CtxWithCaptureResponse is used as a context key for retrieving the raw response.
+type CtxWithCaptureResponse struct{}
+
+// CtxWithTracingTracer is used as a context key for adding/retrieving tracing.Tracer.
+type CtxWithTracingTracer struct{}
+
+// CtxAPINameKey is used as a context key for adding/retrieving the API name.
+type CtxAPINameKey struct{}
+
+// Delay waits for the duration to elapse or the context to be cancelled.
+func Delay(ctx context.Context, delay time.Duration) error {
+	select {
+	case <-time.After(delay):
+		return nil
+	case <-ctx.Done():
+		return ctx.Err()
+	}
+}
+
+// RetryAfter returns non-zero if the response contains one of the headers with a "retry after" value.
+// Headers are checked in the following order: retry-after-ms, x-ms-retry-after-ms, retry-after
+func RetryAfter(resp *http.Response) time.Duration {
+	if resp == nil {
+		return 0
+	}
+
+	type retryData struct {
+		header string
+		units  time.Duration
+
+		// custom is used when the regular algorithm failed and is optional.
+		// the returned duration is used verbatim (units is not applied).
+		custom func(string) time.Duration
+	}
+
+	nop := func(string) time.Duration { return 0 }
+
+	// the headers are listed in order of preference
+	retries := []retryData{
+		{
+			header: HeaderRetryAfterMS,
+			units:  time.Millisecond,
+			custom: nop,
+		},
+		{
+			header: HeaderXMSRetryAfterMS,
+			units:  time.Millisecond,
+			custom: nop,
+		},
+		{
+			header: HeaderRetryAfter,
+			units:  time.Second,
+
+			// retry-after values are expressed in either number of
+			// seconds or an HTTP-date indicating when to try again
+			custom: func(ra string) time.Duration {
+				t, err := time.Parse(time.RFC1123, ra)
+				if err != nil {
+					return 0
+				}
+				return time.Until(t)
+			},
+		},
+	}
+
+	for _, retry := range retries {
+		v := resp.Header.Get(retry.header)
+		if v == "" {
+			continue
+		}
+		if retryAfter, _ := strconv.Atoi(v); retryAfter > 0 {
+			return time.Duration(retryAfter) * retry.units
+		} else if d := retry.custom(v); d > 0 {
+			return d
+		}
+	}
+
+	return 0
+}
+
+// TypeOfT returns the type of the generic type param.
+func TypeOfT[T any]() reflect.Type {
+	// you can't, at present, obtain the type of
+	// a type parameter, so this is the trick
+	return reflect.TypeOf((*T)(nil)).Elem()
+}
+
+// TransportFunc is a helper to use a first-class func to satisfy the Transporter interface.
+type TransportFunc func(*http.Request) (*http.Response, error)
+
+// Do implements the Transporter interface for the TransportFunc type.
+func (pf TransportFunc) Do(req *http.Request) (*http.Response, error) {
+	return pf(req)
+}
+
+// ValidateModVer verifies that moduleVersion is a valid semver 2.0 string.
+func ValidateModVer(moduleVersion string) error {
+	modVerRegx := regexp.MustCompile(`^v\d+\.\d+\.\d+(?:-[a-zA-Z0-9_.-]+)?$`)
+	if !modVerRegx.MatchString(moduleVersion) {
+		return fmt.Errorf("malformed moduleVersion param value %s", moduleVersion)
+	}
+	return nil
+}
+
+// ContextWithDeniedValues wraps an existing [context.Context], denying access to certain context values.
+// Pipeline policies that create new requests to be sent down their own pipeline MUST wrap the caller's
+// context with an instance of this type. This is to prevent context values from flowing across disjoint
+// requests which can have unintended side-effects.
+type ContextWithDeniedValues struct {
+	context.Context
+}
+
+// Value implements part of the [context.Context] interface.
+// It acts as a deny-list for certain context keys.
+func (c *ContextWithDeniedValues) Value(key any) any {
+	switch key.(type) {
+	case CtxAPINameKey, CtxWithCaptureResponse, CtxWithHTTPHeaderKey, CtxWithRetryOptionsKey, CtxWithTracingTracer:
+		return nil
+	default:
+		return c.Context.Value(key)
+	}
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/log/doc.go

@@ -0,0 +1,10 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright 2017 Microsoft Corporation. All rights reserved.
+// Use of this source code is governed by an MIT
+// license that can be found in the LICENSE file.
+
+// Package log contains functionality for configuring logging behavior.
+// Default logging to stderr can be enabled by setting environment variable AZURE_SDK_GO_LOGGING to "all".
+package log

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/log/log.go

@@ -0,0 +1,55 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+// Package log provides functionality for configuring logging facilities.
+package log
+
+import (
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+)
+
+// Event is used to group entries.  Each group can be toggled on or off.
+type Event = log.Event
+
+const (
+	// EventRequest entries contain information about HTTP requests.
+	// This includes information like the URL, query parameters, and headers.
+	EventRequest Event = "Request"
+
+	// EventResponse entries contain information about HTTP responses.
+	// This includes information like the HTTP status code, headers, and request URL.
+	EventResponse Event = "Response"
+
+	// EventResponseError entries contain information about HTTP responses that returned
+	// an *azcore.ResponseError (i.e. responses with a non 2xx HTTP status code).
+	// This includes the contents of ResponseError.Error().
+	EventResponseError Event = "ResponseError"
+
+	// EventRetryPolicy entries contain information specific to the retry policy in use.
+	EventRetryPolicy Event = "Retry"
+
+	// EventLRO entries contain information specific to long-running operations.
+	// This includes information like polling location, operation state, and sleep intervals.
+	EventLRO Event = "LongRunningOperation"
+)
+
+// SetEvents is used to control which events are written to
+// the log.  By default all log events are written.
+// NOTE: this is not goroutine safe and should be called before using SDK clients.
+func SetEvents(cls ...Event) {
+	log.SetEvents(cls...)
+}
+
+// SetListener will set the Logger to write to the specified Listener.
+// NOTE: this is not goroutine safe and should be called before using SDK clients.
+func SetListener(lst func(Event, string)) {
+	log.SetListener(lst)
+}
+
+// for testing purposes
+func resetEvents() {
+	log.TestResetEvents()
+}

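The `SetEvents`/`SetListener` pair above implements a simple event-class filter: a listener receives only the event groups that have been enabled. A standalone sketch of that pattern, with illustrative names rather than the SDK's package-level state:

```go
package main

import "fmt"

type event string

// logger holds a listener plus an optional allow-list of event classes.
type logger struct {
	allowed  map[event]bool // nil means all events are enabled
	listener func(event, string)
}

// setEvents restricts output to the given event classes, like SetEvents.
func (l *logger) setEvents(evts ...event) {
	l.allowed = map[event]bool{}
	for _, e := range evts {
		l.allowed[e] = true
	}
}

// write delivers a message only if a listener is set and the event passes.
func (l *logger) write(e event, msg string) {
	if l.listener == nil {
		return
	}
	if l.allowed != nil && !l.allowed[e] {
		return
	}
	l.listener(e, msg)
}

func main() {
	var got []string
	l := &logger{listener: func(e event, msg string) {
		got = append(got, string(e)+": "+msg)
	}}
	l.setEvents("Retry") // only retry events pass the filter
	l.write("Retry", "attempt 2")
	l.write("Request", "GET /")
	fmt.Println(got) // [Retry: attempt 2]
}
```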
vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy/doc.go

@@ -0,0 +1,10 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright 2017 Microsoft Corporation. All rights reserved.
+// Use of this source code is governed by an MIT
+// license that can be found in the LICENSE file.
+
+// Package policy contains the definitions needed for configuring in-box pipeline policies
+// and creating custom policies.
+package policy

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy/policy.go

@@ -0,0 +1,198 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package policy
+
+import (
+	"context"
+	"net/http"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing"
+)
+
+// Policy represents an extensibility point for the Pipeline that can mutate the specified
+// Request and react to the received Response.
+type Policy = exported.Policy
+
+// Transporter represents an HTTP pipeline transport used to send HTTP requests and receive responses.
+type Transporter = exported.Transporter
+
+// Request is an abstraction over the creation of an HTTP request as it passes through the pipeline.
+// Don't use this type directly, use runtime.NewRequest() instead.
+type Request = exported.Request
+
+// ClientOptions contains optional settings for a client's pipeline.
+// Instances can be shared across calls to SDK client constructors when uniform configuration is desired.
+// Zero-value fields will have their specified default values applied during use.
+type ClientOptions struct {
+	// APIVersion overrides the default version requested of the service.
+	// Set with caution as this package version has not been tested with arbitrary service versions.
+	APIVersion string
+
+	// Cloud specifies a cloud for the client. The default is Azure Public Cloud.
+	Cloud cloud.Configuration
+
+	// InsecureAllowCredentialWithHTTP enables authenticated requests over HTTP.
+	// By default, authenticated requests to an HTTP endpoint are rejected by the client.
+	// WARNING: setting this to true will allow sending the credential in clear text. Use with caution.
+	InsecureAllowCredentialWithHTTP bool
+
+	// Logging configures the built-in logging policy.
+	Logging LogOptions
+
+	// Retry configures the built-in retry policy.
+	Retry RetryOptions
+
+	// Telemetry configures the built-in telemetry policy.
+	Telemetry TelemetryOptions
+
+	// TracingProvider configures the tracing provider.
+	// It defaults to a no-op tracer.
+	TracingProvider tracing.Provider
+
+	// Transport sets the transport for HTTP requests.
+	Transport Transporter
+
+	// PerCallPolicies contains custom policies to inject into the pipeline.
+	// Each policy is executed once per request.
+	PerCallPolicies []Policy
+
+	// PerRetryPolicies contains custom policies to inject into the pipeline.
+	// Each policy is executed once per request, and for each retry of that request.
+	PerRetryPolicies []Policy
+}
+
+// LogOptions configures the logging policy's behavior.
+type LogOptions struct {
+	// IncludeBody indicates if request and response bodies should be included in logging.
+	// The default value is false.
+	// NOTE: enabling this can lead to disclosure of sensitive information, use with care.
+	IncludeBody bool
+
+	// AllowedHeaders is the slice of headers to log with their values intact.
+	// All headers not in the slice will have their values REDACTED.
+	// Applies to request and response headers.
+	AllowedHeaders []string
+
+	// AllowedQueryParams is the slice of query parameters to log with their values intact.
+	// All query parameters not in the slice will have their values REDACTED.
+	AllowedQueryParams []string
+}
+
+// RetryOptions configures the retry policy's behavior.
+// Zero-value fields will have their specified default values applied during use.
+// This allows for modification of a subset of fields.
+type RetryOptions struct {
+	// MaxRetries specifies the maximum number of attempts a failed operation will be retried
+	// before producing an error.
+	// The default value is three.  A value less than zero means one try and no retries.
+	MaxRetries int32
+
+	// TryTimeout indicates the maximum time allowed for any single try of an HTTP request.
+	// This is disabled by default.  Specify a value greater than zero to enable.
+	// NOTE: Setting this to a small value might cause premature HTTP request time-outs.
+	TryTimeout time.Duration
+
+	// RetryDelay specifies the initial amount of delay to use before retrying an operation.
+	// The value is used only if the HTTP response does not contain a Retry-After header.
+	// The delay increases exponentially with each retry up to the maximum specified by MaxRetryDelay.
+	// The default value is four seconds.  A value less than zero means no delay between retries.
+	RetryDelay time.Duration
+
+	// MaxRetryDelay specifies the maximum delay allowed before retrying an operation.
+	// Typically the value is greater than or equal to the value specified in RetryDelay.
+	// The default value is 60 seconds.  A value less than zero means there is no cap.
+	MaxRetryDelay time.Duration
+
+	// StatusCodes specifies the HTTP status codes that indicate the operation should be retried.
+	// A nil slice will use the following values.
+	//   http.StatusRequestTimeout      408
+	//   http.StatusTooManyRequests     429
+	//   http.StatusInternalServerError 500
+	//   http.StatusBadGateway          502
+	//   http.StatusServiceUnavailable  503
+	//   http.StatusGatewayTimeout      504
+	// Specifying values will replace the default values.
+	// Specifying an empty slice will disable retries for HTTP status codes.
+	StatusCodes []int
+
+	// ShouldRetry evaluates if the retry policy should retry the request.
+	// When specified, the function overrides comparison against the list of
+	// HTTP status codes and error checking within the retry policy. Context
+	// and NonRetriable errors remain evaluated before calling ShouldRetry.
+	// The *http.Response and error parameters are mutually exclusive, i.e.
+	// if one is nil, the other is not nil.
+	// A return value of true means the retry policy should retry.
+	ShouldRetry func(*http.Response, error) bool
+}
+
+// TelemetryOptions configures the telemetry policy's behavior.
+type TelemetryOptions struct {
+	// ApplicationID is an application-specific identification string to add to the User-Agent.
+	// It has a maximum length of 24 characters and must not contain any spaces.
+	ApplicationID string
+
+	// Disabled will prevent the addition of any telemetry data to the User-Agent.
+	Disabled bool
+}
+
+// TokenRequestOptions contains specific parameters that may be used by credential types when attempting to get a token.
+type TokenRequestOptions = exported.TokenRequestOptions
+
+// BearerTokenOptions configures the bearer token policy's behavior.
+type BearerTokenOptions struct {
+	// AuthorizationHandler allows SDK developers to run client-specific logic when BearerTokenPolicy must authorize a request.
+	// When this field isn't set, the policy follows its default behavior of authorizing every request with a bearer token from
+	// its given credential.
+	AuthorizationHandler AuthorizationHandler
+
+	// InsecureAllowCredentialWithHTTP enables authenticated requests over HTTP.
+	// By default, authenticated requests to an HTTP endpoint are rejected by the client.
+	// WARNING: setting this to true will allow sending the bearer token in clear text. Use with caution.
+	InsecureAllowCredentialWithHTTP bool
+}
+
+// AuthorizationHandler allows SDK developers to insert custom logic that runs when BearerTokenPolicy must authorize a request.
+type AuthorizationHandler struct {
+	// OnRequest provides TokenRequestOptions the policy can use to acquire a token for a request. The policy calls OnRequest
+	// whenever it needs a token and may call it multiple times for the same request. Its func parameter authorizes the request
+	// with a token from the policy's credential. Implementations that need to perform I/O should use the Request's context,
+	// available from Request.Raw().Context(). When OnRequest returns an error, the policy propagates that error and doesn't send
+	// the request. When OnRequest is nil, the policy follows its default behavior, which is to authorize the request with a token
+	// from its credential according to its configuration.
+	OnRequest func(*Request, func(TokenRequestOptions) error) error
+
+	// OnChallenge allows clients to implement custom HTTP authentication challenge handling. BearerTokenPolicy calls it upon
+	// receiving a 401 response containing multiple Bearer challenges or a challenge BearerTokenPolicy itself can't handle.
+	// OnChallenge is responsible for parsing challenge(s) (the Response's WWW-Authenticate header) and reauthorizing the
+	// Request accordingly. Its func argument authorizes the Request with a token from the policy's credential using the given
+	// TokenRequestOptions. OnChallenge should honor the Request's context, available from Request.Raw().Context(). When
+	// OnChallenge returns nil, the policy will send the Request again.
+	OnChallenge func(*Request, *http.Response, func(TokenRequestOptions) error) error
+}
+
+// WithCaptureResponse applies the HTTP response retrieval annotation to the parent context.
+// The resp parameter will contain the HTTP response after the request has completed.
+func WithCaptureResponse(parent context.Context, resp **http.Response) context.Context {
+	return context.WithValue(parent, shared.CtxWithCaptureResponse{}, resp)
+}
+
+// WithHTTPHeader adds the specified http.Header to the parent context.
+// Use this to specify custom HTTP headers at the API-call level.
+// Any overlapping headers will have their values replaced with the values specified here.
+func WithHTTPHeader(parent context.Context, header http.Header) context.Context {
+	return context.WithValue(parent, shared.CtxWithHTTPHeaderKey{}, header)
+}
+
+// WithRetryOptions adds the specified RetryOptions to the parent context.
+// Use this to specify custom RetryOptions at the API-call level.
+func WithRetryOptions(parent context.Context, options RetryOptions) context.Context {
+	return context.WithValue(parent, shared.CtxWithRetryOptionsKey{}, options)
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/doc.go

@@ -0,0 +1,10 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright 2017 Microsoft Corporation. All rights reserved.
+// Use of this source code is governed by an MIT
+// license that can be found in the LICENSE file.
+
+// Package runtime contains various facilities for creating requests and handling responses.
+// The content is intended for SDK authors.
+package runtime

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/errors.go

@@ -0,0 +1,27 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+)
+
+// NewResponseError creates an *azcore.ResponseError from the provided HTTP response.
+// Call this when a service request returns a non-successful status code.
+// The error code will be extracted from the *http.Response, either from the x-ms-error-code
+// header (preferred) or attempted to be parsed from the response body.
+func NewResponseError(resp *http.Response) error {
+	return exported.NewResponseError(resp)
+}
+
+// NewResponseErrorWithErrorCode creates an *azcore.ResponseError from the provided HTTP response and errorCode.
+// Use this variant when the error code is in a non-standard location.
+func NewResponseErrorWithErrorCode(resp *http.Response, errorCode string) error {
+	return exported.NewResponseErrorWithErrorCode(resp, errorCode)
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/pager.go

@@ -0,0 +1,138 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"reflect"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing"
+)
+
+// PagingHandler contains the required data for constructing a Pager.
+type PagingHandler[T any] struct {
+	// More returns a boolean indicating if there are more pages to fetch.
+	// It uses the provided page to make the determination.
+	More func(T) bool
+
+	// Fetcher fetches the first and subsequent pages.
+	Fetcher func(context.Context, *T) (T, error)
+
+	// Tracer contains the Tracer from the client that's creating the Pager.
+	Tracer tracing.Tracer
+}
+
+// Pager provides operations for iterating over paged responses.
+// Methods on this type are not safe for concurrent use.
+type Pager[T any] struct {
+	current   *T
+	handler   PagingHandler[T]
+	tracer    tracing.Tracer
+	firstPage bool
+}
+
+// NewPager creates an instance of Pager using the specified PagingHandler.
+// Pass a non-nil T for firstPage if the first page has already been retrieved.
+func NewPager[T any](handler PagingHandler[T]) *Pager[T] {
+	return &Pager[T]{
+		handler:   handler,
+		tracer:    handler.Tracer,
+		firstPage: true,
+	}
+}
+
+// More returns true if there are more pages to retrieve.
+func (p *Pager[T]) More() bool {
+	if p.current != nil {
+		return p.handler.More(*p.current)
+	}
+	return true
+}
+
+// NextPage advances the pager to the next page.
+func (p *Pager[T]) NextPage(ctx context.Context) (T, error) {
+	if p.current != nil {
+		if p.firstPage {
+			// we get here if it's an LRO-pager, we already have the first page
+			p.firstPage = false
+			return *p.current, nil
+		} else if !p.handler.More(*p.current) {
+			return *new(T), errors.New("no more pages")
+		}
+	} else {
+		// non-LRO case, first page
+		p.firstPage = false
+	}
+
+	var err error
+	ctx, endSpan := StartSpan(ctx, fmt.Sprintf("%s.NextPage", shortenTypeName(reflect.TypeOf(*p).Name())), p.tracer, nil)
+	defer func() { endSpan(err) }()
+
+	resp, err := p.handler.Fetcher(ctx, p.current)
+	if err != nil {
+		return *new(T), err
+	}
+	p.current = &resp
+	return *p.current, nil
+}
+
+// UnmarshalJSON implements the json.Unmarshaler interface for Pager[T].
+func (p *Pager[T]) UnmarshalJSON(data []byte) error {
+	return json.Unmarshal(data, &p.current)
+}
+
+// FetcherForNextLinkOptions contains the optional values for [FetcherForNextLink].
+type FetcherForNextLinkOptions struct {
+	// NextReq is the func to be called when requesting subsequent pages.
+	// Used for paged operations that have a custom next link operation.
+	NextReq func(context.Context, string) (*policy.Request, error)
+
+	// StatusCodes contains additional HTTP status codes indicating success.
+	// The default value is http.StatusOK.
+	StatusCodes []int
+}
+
+// FetcherForNextLink is a helper containing boilerplate code to simplify creating a PagingHandler[T].Fetcher from a next link URL.
+//   - ctx is the [context.Context] controlling the lifetime of the HTTP operation
+//   - pl is the [Pipeline] used to dispatch the HTTP request
+//   - nextLink is the URL used to fetch the next page. the empty string indicates the first page is to be requested
+//   - firstReq is the func to be called when creating the request for the first page
+//   - options contains any optional parameters, pass nil to accept the default values
+func FetcherForNextLink(ctx context.Context, pl Pipeline, nextLink string, firstReq func(context.Context) (*policy.Request, error), options *FetcherForNextLinkOptions) (*http.Response, error) {
+	var req *policy.Request
+	var err error
+	if options == nil {
+		options = &FetcherForNextLinkOptions{}
+	}
+	if nextLink == "" {
+		req, err = firstReq(ctx)
+	} else if nextLink, err = EncodeQueryParams(nextLink); err == nil {
+		if options.NextReq != nil {
+			req, err = options.NextReq(ctx, nextLink)
+		} else {
+			req, err = NewRequest(ctx, http.MethodGet, nextLink)
+		}
+	}
+	if err != nil {
+		return nil, err
+	}
+	resp, err := pl.Do(req)
+	if err != nil {
+		return nil, err
+	}
+	successCodes := []int{http.StatusOK}
+	successCodes = append(successCodes, options.StatusCodes...)
+	if !HasStatusCode(resp, successCodes...) {
+		return nil, NewResponseError(resp)
+	}
+	return resp, nil
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/pipeline.go 🔗

@@ -0,0 +1,94 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// PipelineOptions contains Pipeline options for SDK developers
+type PipelineOptions struct {
+	// AllowedHeaders is the slice of headers to log with their values intact.
+	// All headers not in the slice will have their values REDACTED.
+	// Applies to request and response headers.
+	AllowedHeaders []string
+
+	// AllowedQueryParameters is the slice of query parameters to log with their values intact.
+	// All query parameters not in the slice will have their values REDACTED.
+	AllowedQueryParameters []string
+
+	// APIVersion overrides the default version requested of the service.
+	// Set with caution as this package version has not been tested with arbitrary service versions.
+	APIVersion APIVersionOptions
+
+	// PerCall contains custom policies to inject into the pipeline.
+	// Each policy is executed once per request.
+	PerCall []policy.Policy
+
+	// PerRetry contains custom policies to inject into the pipeline.
+	// Each policy is executed once per request, and for each retry of that request.
+	PerRetry []policy.Policy
+
+	// Tracing contains options used to configure distributed tracing.
+	Tracing TracingOptions
+}
+
+// TracingOptions contains tracing options for SDK developers.
+type TracingOptions struct {
+	// Namespace contains the value to use for the az.namespace span attribute.
+	Namespace string
+}
+
+// Pipeline represents a primitive for sending HTTP requests and receiving responses.
+// Its behavior can be extended by specifying policies during construction.
+type Pipeline = exported.Pipeline
+
+// NewPipeline creates a pipeline from connection options, with any additional policies as specified.
+// Policies from ClientOptions are placed after policies from PipelineOptions.
+// The module and version parameters are used by the telemetry policy, when enabled.
+func NewPipeline(module, version string, plOpts PipelineOptions, options *policy.ClientOptions) Pipeline {
+	cp := policy.ClientOptions{}
+	if options != nil {
+		cp = *options
+	}
+	if len(plOpts.AllowedHeaders) > 0 {
+		headers := make([]string, len(plOpts.AllowedHeaders)+len(cp.Logging.AllowedHeaders))
+		copy(headers, plOpts.AllowedHeaders)
+		headers = append(headers, cp.Logging.AllowedHeaders...)
+		cp.Logging.AllowedHeaders = headers
+	}
+	if len(plOpts.AllowedQueryParameters) > 0 {
+		qp := make([]string, len(plOpts.AllowedQueryParameters)+len(cp.Logging.AllowedQueryParams))
+		copy(qp, plOpts.AllowedQueryParameters)
+		qp = append(qp, cp.Logging.AllowedQueryParams...)
+		cp.Logging.AllowedQueryParams = qp
+	}
+	// we put the includeResponsePolicy at the very beginning so that the raw response
+	// is populated with the final response (some policies might mutate the response)
+	policies := []policy.Policy{exported.PolicyFunc(includeResponsePolicy)}
+	if cp.APIVersion != "" {
+		policies = append(policies, newAPIVersionPolicy(cp.APIVersion, &plOpts.APIVersion))
+	}
+	if !cp.Telemetry.Disabled {
+		policies = append(policies, NewTelemetryPolicy(module, version, &cp.Telemetry))
+	}
+	policies = append(policies, plOpts.PerCall...)
+	policies = append(policies, cp.PerCallPolicies...)
+	policies = append(policies, NewRetryPolicy(&cp.Retry))
+	policies = append(policies, plOpts.PerRetry...)
+	policies = append(policies, cp.PerRetryPolicies...)
+	policies = append(policies, exported.PolicyFunc(httpHeaderPolicy))
+	policies = append(policies, newHTTPTracePolicy(cp.Logging.AllowedQueryParams))
+	policies = append(policies, NewLogPolicy(&cp.Logging))
+	policies = append(policies, exported.PolicyFunc(bodyDownloadPolicy))
+	transport := cp.Transport
+	if transport == nil {
+		transport = defaultHTTPClient
+	}
+	return exported.NewPipeline(transport, policies...)
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_api_version.go 🔗

@@ -0,0 +1,75 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"errors"
+	"fmt"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// APIVersionOptions contains options for API versions
+type APIVersionOptions struct {
+	// Location indicates where to set the version on a request, for example in a header or query param
+	Location APIVersionLocation
+	// Name is the name of the header or query parameter, for example "api-version"
+	Name string
+}
+
+// APIVersionLocation indicates which part of a request identifies the service version
+type APIVersionLocation int
+
+const (
+	// APIVersionLocationQueryParam indicates a query parameter
+	APIVersionLocationQueryParam = 0
+	// APIVersionLocationHeader indicates a header
+	APIVersionLocationHeader = 1
+)
+
+// newAPIVersionPolicy constructs an APIVersionPolicy. If version is "", Do will be a no-op. If version
+// isn't empty and opts.Name is empty, Do will return an error.
+func newAPIVersionPolicy(version string, opts *APIVersionOptions) *apiVersionPolicy {
+	if opts == nil {
+		opts = &APIVersionOptions{}
+	}
+	return &apiVersionPolicy{location: opts.Location, name: opts.Name, version: version}
+}
+
+// apiVersionPolicy enables users to set the API version of every request a client sends.
+type apiVersionPolicy struct {
+	// location indicates whether "name" refers to a query parameter or header.
+	location APIVersionLocation
+
+	// name of the query param or header whose value should be overridden; provided by the client.
+	name string
+
+	// version is the value (provided by the user) that replaces the default version value.
+	version string
+}
+
+// Do sets the request's API version, if the policy is configured to do so, replacing any prior value.
+func (a *apiVersionPolicy) Do(req *policy.Request) (*http.Response, error) {
+	if a.version != "" {
+		if a.name == "" {
+			// user set ClientOptions.APIVersion but the client ctor didn't set PipelineOptions.APIVersionOptions
+			return nil, errors.New("this client doesn't support overriding its API version")
+		}
+		switch a.location {
+		case APIVersionLocationHeader:
+			req.Raw().Header.Set(a.name, a.version)
+		case APIVersionLocationQueryParam:
+			q := req.Raw().URL.Query()
+			q.Set(a.name, a.version)
+			req.Raw().URL.RawQuery = q.Encode()
+		default:
+			return nil, fmt.Errorf("unknown APIVersionLocation %d", a.location)
+		}
+	}
+	return req.Next()
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_bearer_token.go 🔗

@@ -0,0 +1,236 @@
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"encoding/base64"
+	"errors"
+	"net/http"
+	"regexp"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/temporal"
+)
+
+// BearerTokenPolicy authorizes requests with bearer tokens acquired from a TokenCredential.
+// It handles [Continuous Access Evaluation] (CAE) challenges. Clients needing to handle
+// additional authentication challenges, or needing more control over authorization, should
+// provide a [policy.AuthorizationHandler] in [policy.BearerTokenOptions].
+//
+// [Continuous Access Evaluation]: https://learn.microsoft.com/entra/identity/conditional-access/concept-continuous-access-evaluation
+type BearerTokenPolicy struct {
+	// mainResource is the resource to be retrieved using the tenant specified in the credential
+	mainResource *temporal.Resource[exported.AccessToken, acquiringResourceState]
+	// the following fields are read-only
+	authzHandler policy.AuthorizationHandler
+	cred         exported.TokenCredential
+	scopes       []string
+	allowHTTP    bool
+}
+
+type acquiringResourceState struct {
+	req *policy.Request
+	p   *BearerTokenPolicy
+	tro policy.TokenRequestOptions
+}
+
+// acquire acquires or updates the resource; only one
+// thread/goroutine at a time ever calls this function
+func acquire(state acquiringResourceState) (newResource exported.AccessToken, newExpiration time.Time, err error) {
+	tk, err := state.p.cred.GetToken(&shared.ContextWithDeniedValues{Context: state.req.Raw().Context()}, state.tro)
+	if err != nil {
+		return exported.AccessToken{}, time.Time{}, err
+	}
+	return tk, tk.ExpiresOn, nil
+}
+
+// NewBearerTokenPolicy creates a policy object that authorizes requests with bearer tokens.
+// cred: an azcore.TokenCredential implementation such as a credential object from azidentity
+// scopes: the list of permission scopes required for the token.
+// opts: optional settings. Pass nil to accept default values; this is the same as passing a zero-value options.
+func NewBearerTokenPolicy(cred exported.TokenCredential, scopes []string, opts *policy.BearerTokenOptions) *BearerTokenPolicy {
+	if opts == nil {
+		opts = &policy.BearerTokenOptions{}
+	}
+	ah := opts.AuthorizationHandler
+	if ah.OnRequest == nil {
+		// Set a default OnRequest that simply requests a token with the given scopes. OnChallenge
+		// doesn't get a default so the policy can use a nil check to determine whether the caller
+		// provided an implementation.
+		ah.OnRequest = func(_ *policy.Request, authNZ func(policy.TokenRequestOptions) error) error {
+			// authNZ sets EnableCAE: true in all cases, no need to duplicate that here
+			return authNZ(policy.TokenRequestOptions{Scopes: scopes})
+		}
+	}
+	return &BearerTokenPolicy{
+		authzHandler: ah,
+		cred:         cred,
+		scopes:       scopes,
+		mainResource: temporal.NewResource(acquire),
+		allowHTTP:    opts.InsecureAllowCredentialWithHTTP,
+	}
+}
+
+// authenticateAndAuthorize returns a function which authorizes req with a token from the policy's credential
+func (b *BearerTokenPolicy) authenticateAndAuthorize(req *policy.Request) func(policy.TokenRequestOptions) error {
+	return func(tro policy.TokenRequestOptions) error {
+		tro.EnableCAE = true
+		as := acquiringResourceState{p: b, req: req, tro: tro}
+		tk, err := b.mainResource.Get(as)
+		if err != nil {
+			return err
+		}
+		req.Raw().Header.Set(shared.HeaderAuthorization, shared.BearerTokenPrefix+tk.Token)
+		return nil
+	}
+}
+
+// Do authorizes a request with a bearer token
+func (b *BearerTokenPolicy) Do(req *policy.Request) (*http.Response, error) {
+	// skip adding the authorization header if no TokenCredential was provided.
+	// this prevents a panic that might be hard to diagnose and allows testing
+	// against http endpoints that don't require authentication.
+	if b.cred == nil {
+		return req.Next()
+	}
+
+	if err := checkHTTPSForAuth(req, b.allowHTTP); err != nil {
+		return nil, err
+	}
+
+	err := b.authzHandler.OnRequest(req, b.authenticateAndAuthorize(req))
+	if err != nil {
+		return nil, errorinfo.NonRetriableError(err)
+	}
+
+	res, err := req.Next()
+	if err != nil {
+		return nil, err
+	}
+
+	res, err = b.handleChallenge(req, res, false)
+	return res, err
+}
+
+// handleChallenge handles authentication challenges either directly (for CAE challenges) or by calling
+// the AuthorizationHandler. It's a no-op when the response doesn't include an authentication challenge.
+// It will recurse at most once, to handle a CAE challenge following a non-CAE challenge handled by the
+// AuthorizationHandler.
+func (b *BearerTokenPolicy) handleChallenge(req *policy.Request, res *http.Response, recursed bool) (*http.Response, error) {
+	var err error
+	if res.StatusCode == http.StatusUnauthorized {
+		b.mainResource.Expire()
+		if res.Header.Get(shared.HeaderWWWAuthenticate) != "" {
+			caeChallenge, parseErr := parseCAEChallenge(res)
+			if parseErr != nil {
+				return res, parseErr
+			}
+			switch {
+			case caeChallenge != nil:
+				authNZ := func(tro policy.TokenRequestOptions) error {
+					// Take the TokenRequestOptions provided by OnRequest and add the challenge claims. The value
+					// will be empty at time of writing because CAE is the only feature involving claims. If in
+					// the future some client needs to specify unrelated claims, this function may need to merge
+					// them with the challenge claims.
+					tro.Claims = caeChallenge.params["claims"]
+					return b.authenticateAndAuthorize(req)(tro)
+				}
+				if err = b.authzHandler.OnRequest(req, authNZ); err == nil {
+					if err = req.RewindBody(); err == nil {
+						res, err = req.Next()
+					}
+				}
+			case b.authzHandler.OnChallenge != nil && !recursed:
+				if err = b.authzHandler.OnChallenge(req, res, b.authenticateAndAuthorize(req)); err == nil {
+					if err = req.RewindBody(); err == nil {
+						if res, err = req.Next(); err == nil {
+							res, err = b.handleChallenge(req, res, true)
+						}
+					}
+				} else {
+					// don't retry challenge handling errors
+					err = errorinfo.NonRetriableError(err)
+				}
+			default:
+				// return the response to the pipeline
+			}
+		}
+	}
+	return res, err
+}
+
+func checkHTTPSForAuth(req *policy.Request, allowHTTP bool) error {
+	if strings.ToLower(req.Raw().URL.Scheme) != "https" && !allowHTTP {
+		return errorinfo.NonRetriableError(errors.New("authenticated requests are not permitted for non TLS protected (https) endpoints"))
+	}
+	return nil
+}
+
+// parseCAEChallenge returns a *authChallenge representing Response's CAE challenge (nil when Response has none).
+// If Response includes a CAE challenge having invalid claims, it returns a NonRetriableError.
+func parseCAEChallenge(res *http.Response) (*authChallenge, error) {
+	var (
+		caeChallenge *authChallenge
+		err          error
+	)
+	for _, c := range parseChallenges(res) {
+		if c.scheme == "Bearer" {
+			if claims := c.params["claims"]; claims != "" && c.params["error"] == "insufficient_claims" {
+				if b, de := base64.StdEncoding.DecodeString(claims); de == nil {
+					c.params["claims"] = string(b)
+					caeChallenge = &c
+				} else {
+					// don't include the decoding error because it's something
+					// unhelpful like "illegal base64 data at input byte 42"
+					err = errorinfo.NonRetriableError(errors.New("authentication challenge contains invalid claims: " + claims))
+				}
+				break
+			}
+		}
+	}
+	return caeChallenge, err
+}
+
+var (
+	challenge, challengeParams *regexp.Regexp
+	once                       = &sync.Once{}
+)
+
+type authChallenge struct {
+	scheme string
+	params map[string]string
+}
+
+// parseChallenges assumes authentication challenges have quoted parameter values
+func parseChallenges(res *http.Response) []authChallenge {
+	once.Do(func() {
+		// matches challenges having quoted parameters, capturing scheme and parameters
+		challenge = regexp.MustCompile(`(?:(\w+) ((?:\w+="[^"]*",?\s*)+))`)
+		// captures parameter names and values in a match of the above expression
+		challengeParams = regexp.MustCompile(`(\w+)="([^"]*)"`)
+	})
+	parsed := []authChallenge{}
+	// WWW-Authenticate can have multiple values, each containing multiple challenges
+	for _, h := range res.Header.Values(shared.HeaderWWWAuthenticate) {
+		for _, sm := range challenge.FindAllStringSubmatch(h, -1) {
+			// sm is [challenge, scheme, params] (see regexp documentation on submatches)
+			c := authChallenge{
+				params: make(map[string]string),
+				scheme: sm[1],
+			}
+			for _, sm := range challengeParams.FindAllStringSubmatch(sm[2], -1) {
+				// sm is [key="value", key, value] (see regexp documentation on submatches)
+				c.params[sm[1]] = sm[2]
+			}
+			parsed = append(parsed, c)
+		}
+	}
+	return parsed
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_body_download.go 🔗

@@ -0,0 +1,72 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"fmt"
+	"net/http"
+	"strings"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo"
+)
+
+// bodyDownloadPolicy creates a policy object that downloads the response's body to a []byte.
+func bodyDownloadPolicy(req *policy.Request) (*http.Response, error) {
+	resp, err := req.Next()
+	if err != nil {
+		return resp, err
+	}
+	var opValues bodyDownloadPolicyOpValues
+	// don't skip downloading error response bodies
+	if req.OperationValue(&opValues); opValues.Skip && resp.StatusCode < 400 {
+		return resp, err
+	}
+	// Either bodyDownloadPolicyOpValues was not specified (so skip is false)
+	// or it was specified and skip is false: don't skip downloading the body
+	_, err = Payload(resp)
+	if err != nil {
+		return resp, newBodyDownloadError(err, req)
+	}
+	return resp, err
+}
+
+// bodyDownloadPolicyOpValues is the struct containing the per-operation values
+type bodyDownloadPolicyOpValues struct {
+	Skip bool
+}
+
+type bodyDownloadError struct {
+	err error
+}
+
+func newBodyDownloadError(err error, req *policy.Request) error {
+	// on failure, only retry the request for idempotent operations.
+	// we currently identify them as DELETE, GET, and PUT requests.
+	if m := strings.ToUpper(req.Raw().Method); m == http.MethodDelete || m == http.MethodGet || m == http.MethodPut {
+		// error is safe for retry
+		return err
+	}
+	// wrap error to avoid retries
+	return &bodyDownloadError{
+		err: err,
+	}
+}
+
+func (b *bodyDownloadError) Error() string {
+	return fmt.Sprintf("body download policy: %s", b.err.Error())
+}
+
+func (b *bodyDownloadError) NonRetriable() {
+	// marker method
+}
+
+func (b *bodyDownloadError) Unwrap() error {
+	return b.err
+}
+
+var _ errorinfo.NonRetriable = (*bodyDownloadError)(nil)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_http_header.go 🔗

@@ -0,0 +1,40 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"context"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// httpHeaderPolicy adds custom HTTP headers to a request
+func httpHeaderPolicy(req *policy.Request) (*http.Response, error) {
+	// check if any custom HTTP headers have been specified
+	if header := req.Raw().Context().Value(shared.CtxWithHTTPHeaderKey{}); header != nil {
+		for k, v := range header.(http.Header) {
+			// use Set to replace any existing value
+			// it also canonicalizes the header key
+			req.Raw().Header.Set(k, v[0])
+			// add any remaining values
+			for i := 1; i < len(v); i++ {
+				req.Raw().Header.Add(k, v[i])
+			}
+		}
+	}
+	return req.Next()
+}
+
+// WithHTTPHeader adds the specified http.Header to the parent context.
+// Use this to specify custom HTTP headers at the API-call level.
+// Any overlapping headers will have their values replaced with the values specified here.
+// Deprecated: use [policy.WithHTTPHeader] instead.
+func WithHTTPHeader(parent context.Context, header http.Header) context.Context {
+	return policy.WithHTTPHeader(parent, header)
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_http_trace.go 🔗

@@ -0,0 +1,154 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+	"strings"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing"
+)
+
+const (
+	attrHTTPMethod     = "http.method"
+	attrHTTPURL        = "http.url"
+	attrHTTPUserAgent  = "http.user_agent"
+	attrHTTPStatusCode = "http.status_code"
+
+	attrAZClientReqID  = "az.client_request_id"
+	attrAZServiceReqID = "az.service_request_id"
+
+	attrNetPeerName = "net.peer.name"
+)
+
+// newHTTPTracePolicy creates a new instance of the httpTracePolicy.
+//   - allowedQueryParams contains the user-specified query parameters that don't need to be redacted from the trace
+func newHTTPTracePolicy(allowedQueryParams []string) exported.Policy {
+	return &httpTracePolicy{allowedQP: getAllowedQueryParams(allowedQueryParams)}
+}
+
+// httpTracePolicy is a policy that creates a trace for the HTTP request and its response
+type httpTracePolicy struct {
+	allowedQP map[string]struct{}
+}
+
+// Do implements the pipeline.Policy interfaces for the httpTracePolicy type.
+func (h *httpTracePolicy) Do(req *policy.Request) (resp *http.Response, err error) {
+	rawTracer := req.Raw().Context().Value(shared.CtxWithTracingTracer{})
+	if tracer, ok := rawTracer.(tracing.Tracer); ok && tracer.Enabled() {
+		attributes := []tracing.Attribute{
+			{Key: attrHTTPMethod, Value: req.Raw().Method},
+			{Key: attrHTTPURL, Value: getSanitizedURL(*req.Raw().URL, h.allowedQP)},
+			{Key: attrNetPeerName, Value: req.Raw().URL.Host},
+		}
+
+		if ua := req.Raw().Header.Get(shared.HeaderUserAgent); ua != "" {
+			attributes = append(attributes, tracing.Attribute{Key: attrHTTPUserAgent, Value: ua})
+		}
+		if reqID := req.Raw().Header.Get(shared.HeaderXMSClientRequestID); reqID != "" {
+			attributes = append(attributes, tracing.Attribute{Key: attrAZClientReqID, Value: reqID})
+		}
+
+		ctx := req.Raw().Context()
+		ctx, span := tracer.Start(ctx, "HTTP "+req.Raw().Method, &tracing.SpanOptions{
+			Kind:       tracing.SpanKindClient,
+			Attributes: attributes,
+		})
+
+		defer func() {
+			if resp != nil {
+				span.SetAttributes(tracing.Attribute{Key: attrHTTPStatusCode, Value: resp.StatusCode})
+				if resp.StatusCode > 399 {
+					span.SetStatus(tracing.SpanStatusError, resp.Status)
+				}
+				if reqID := resp.Header.Get(shared.HeaderXMSRequestID); reqID != "" {
+					span.SetAttributes(tracing.Attribute{Key: attrAZServiceReqID, Value: reqID})
+				}
+			} else if err != nil {
+				var urlErr *url.Error
+				if errors.As(err, &urlErr) {
+					// calling *url.Error.Error() will include the unsanitized URL
+					// which we don't want. in addition, we already have the HTTP verb
+					// and sanitized URL in the trace so we aren't losing any info
+					err = urlErr.Err
+				}
+				span.SetStatus(tracing.SpanStatusError, err.Error())
+			}
+			span.End()
+		}()
+
+		req = req.WithContext(ctx)
+	}
+	resp, err = req.Next()
+	return
+}
+
+// StartSpanOptions contains the optional values for StartSpan.
+type StartSpanOptions struct {
+	// Kind indicates the kind of Span.
+	Kind tracing.SpanKind
+	// Attributes contains key-value pairs of attributes for the span.
+	Attributes []tracing.Attribute
+}
+
+// StartSpan starts a new tracing span.
+// You must call the returned func to terminate the span. Pass the applicable error
+// if the span will exit with an error condition.
+//   - ctx is the parent context of the newly created context
+//   - name is the name of the span. this is typically the fully qualified name of an API ("Client.Method")
+//   - tracer is the client's Tracer for creating spans
+//   - options contains optional values. pass nil to accept any default values
+func StartSpan(ctx context.Context, name string, tracer tracing.Tracer, options *StartSpanOptions) (context.Context, func(error)) {
+	if !tracer.Enabled() {
+		return ctx, func(err error) {}
+	}
+
+	// we MUST propagate the active tracer before returning so that the trace policy can access it
+	ctx = context.WithValue(ctx, shared.CtxWithTracingTracer{}, tracer)
+
+	if activeSpan := ctx.Value(ctxActiveSpan{}); activeSpan != nil {
+		// per the design guidelines, if an SDK method Foo() calls SDK method Bar(),
+		// then the span for Bar() must be suppressed. however, if Bar() makes a REST
+		// call, then Bar's HTTP span must be a child of Foo's span.
+		// however, there is an exception to this rule. if the SDK method Foo() is a
+		// messaging producer/consumer, and it takes a callback that's an SDK method
+		// Bar(), then the span for Bar() must _not_ be suppressed.
+		if kind := activeSpan.(tracing.SpanKind); kind == tracing.SpanKindClient || kind == tracing.SpanKindInternal {
+			return ctx, func(err error) {}
+		}
+	}
+
+	if options == nil {
+		options = &StartSpanOptions{}
+	}
+	if options.Kind == 0 {
+		options.Kind = tracing.SpanKindInternal
+	}
+
+	ctx, span := tracer.Start(ctx, name, &tracing.SpanOptions{
+		Kind:       options.Kind,
+		Attributes: options.Attributes,
+	})
+	ctx = context.WithValue(ctx, ctxActiveSpan{}, options.Kind)
+	return ctx, func(err error) {
+		if err != nil {
+			errType := strings.Replace(fmt.Sprintf("%T", err), "*exported.", "*azcore.", 1)
+			span.SetStatus(tracing.SpanStatusError, fmt.Sprintf("%s:\n%s", errType, err.Error()))
+		}
+		span.End()
+	}
+}
+
+// ctxActiveSpan is used as a context key for indicating a SDK client span is in progress.
+type ctxActiveSpan struct{}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_include_response.go 🔗

@@ -0,0 +1,35 @@
+//go:build go1.16
+// +build go1.16
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"context"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// includeResponsePolicy creates a policy that retrieves the raw HTTP response upon request
+func includeResponsePolicy(req *policy.Request) (*http.Response, error) {
+	resp, err := req.Next()
+	if resp == nil {
+		return resp, err
+	}
+	if httpOutRaw := req.Raw().Context().Value(shared.CtxWithCaptureResponse{}); httpOutRaw != nil {
+		httpOut := httpOutRaw.(**http.Response)
+		*httpOut = resp
+	}
+	return resp, err
+}
+
+// WithCaptureResponse applies the HTTP response retrieval annotation to the parent context.
+// The resp parameter will contain the HTTP response after the request has completed.
+// Deprecated: use [policy.WithCaptureResponse] instead.
+func WithCaptureResponse(parent context.Context, resp **http.Response) context.Context {
+	return policy.WithCaptureResponse(parent, resp)
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_key_credential.go 🔗

@@ -0,0 +1,64 @@
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// KeyCredentialPolicy authorizes requests with a [azcore.KeyCredential].
+type KeyCredentialPolicy struct {
+	cred      *exported.KeyCredential
+	header    string
+	prefix    string
+	allowHTTP bool
+}
+
+// KeyCredentialPolicyOptions contains the optional values configuring [KeyCredentialPolicy].
+type KeyCredentialPolicyOptions struct {
+	// InsecureAllowCredentialWithHTTP enables authenticated requests over HTTP.
+	// By default, authenticated requests to an HTTP endpoint are rejected by the client.
+	// WARNING: setting this to true will allow sending the authentication key in clear text. Use with caution.
+	InsecureAllowCredentialWithHTTP bool
+
+	// Prefix is used if the key requires a prefix before it's inserted into the HTTP request.
+	Prefix string
+}
+
+// NewKeyCredentialPolicy creates a new instance of [KeyCredentialPolicy].
+//   - cred is the [azcore.KeyCredential] used to authenticate with the service
+//   - header is the name of the HTTP request header in which the key is placed
+//   - options contains optional configuration, pass nil to accept the default values
+func NewKeyCredentialPolicy(cred *exported.KeyCredential, header string, options *KeyCredentialPolicyOptions) *KeyCredentialPolicy {
+	if options == nil {
+		options = &KeyCredentialPolicyOptions{}
+	}
+	return &KeyCredentialPolicy{
+		cred:      cred,
+		header:    header,
+		prefix:    options.Prefix,
+		allowHTTP: options.InsecureAllowCredentialWithHTTP,
+	}
+}
+
+// Do implements the Do method on the [policy.Policy] interface.
+func (k *KeyCredentialPolicy) Do(req *policy.Request) (*http.Response, error) {
+	// skip adding the authorization header if no KeyCredential was provided.
+	// this prevents a panic that might be hard to diagnose and allows testing
+	// against http endpoints that don't require authentication.
+	if k.cred != nil {
+		if err := checkHTTPSForAuth(req, k.allowHTTP); err != nil {
+			return nil, err
+		}
+		val := exported.KeyCredentialGet(k.cred)
+		if k.prefix != "" {
+			val = k.prefix + val
+		}
+		req.Raw().Header.Add(k.header, val)
+	}
+	return req.Next()
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_logging.go 🔗

@@ -0,0 +1,264 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"bytes"
+	"fmt"
+	"io"
+	"net/http"
+	"net/url"
+	"sort"
+	"strings"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/diag"
+)
+
+type logPolicy struct {
+	includeBody    bool
+	allowedHeaders map[string]struct{}
+	allowedQP      map[string]struct{}
+}
+
+// NewLogPolicy creates a request/response logging policy object configured using the specified options.
+// Pass nil to accept the default values; this is the same as passing a zero-value options.
+func NewLogPolicy(o *policy.LogOptions) policy.Policy {
+	if o == nil {
+		o = &policy.LogOptions{}
+	}
+	// construct default hash set of allowed headers
+	allowedHeaders := map[string]struct{}{
+		"accept":                        {},
+		"cache-control":                 {},
+		"connection":                    {},
+		"content-length":                {},
+		"content-type":                  {},
+		"date":                          {},
+		"etag":                          {},
+		"expires":                       {},
+		"if-match":                      {},
+		"if-modified-since":             {},
+		"if-none-match":                 {},
+		"if-unmodified-since":           {},
+		"last-modified":                 {},
+		"ms-cv":                         {},
+		"pragma":                        {},
+		"request-id":                    {},
+		"retry-after":                   {},
+		"server":                        {},
+		"traceparent":                   {},
+		"transfer-encoding":             {},
+		"user-agent":                    {},
+		"www-authenticate":              {},
+		"x-ms-request-id":               {},
+		"x-ms-client-request-id":        {},
+		"x-ms-return-client-request-id": {},
+	}
+	// add any caller-specified allowed headers to the set
+	for _, ah := range o.AllowedHeaders {
+		allowedHeaders[strings.ToLower(ah)] = struct{}{}
+	}
+	// now do the same thing for query params
+	allowedQP := getAllowedQueryParams(o.AllowedQueryParams)
+	return &logPolicy{
+		includeBody:    o.IncludeBody,
+		allowedHeaders: allowedHeaders,
+		allowedQP:      allowedQP,
+	}
+}
+
+// getAllowedQueryParams merges the default set of allowed query parameters
+// with a custom set (usually comes from client options).
+func getAllowedQueryParams(customAllowedQP []string) map[string]struct{} {
+	allowedQP := map[string]struct{}{
+		"api-version": {},
+	}
+	for _, qp := range customAllowedQP {
+		allowedQP[strings.ToLower(qp)] = struct{}{}
+	}
+	return allowedQP
+}
+
+// logPolicyOpValues is the struct containing the per-operation values
+type logPolicyOpValues struct {
+	try   int32
+	start time.Time
+}
+
+func (p *logPolicy) Do(req *policy.Request) (*http.Response, error) {
+	// Get the per-operation values. These are saved in the Message's map so that they persist across each retry calling into this policy object.
+	var opValues logPolicyOpValues
+	if req.OperationValue(&opValues); opValues.start.IsZero() {
+		opValues.start = time.Now() // If this is the 1st try, record this operation's start time
+	}
+	opValues.try++ // The first try is #1 (not #0)
+	req.SetOperationValue(opValues)
+
+	// Log the outgoing request as informational
+	if log.Should(log.EventRequest) {
+		b := &bytes.Buffer{}
+		fmt.Fprintf(b, "==> OUTGOING REQUEST (Try=%d)\n", opValues.try)
+		p.writeRequestWithResponse(b, req, nil, nil)
+		var err error
+		if p.includeBody {
+			err = writeReqBody(req, b)
+		}
+		log.Write(log.EventRequest, b.String())
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	// Set the time for this particular retry operation and then Do the operation.
+	tryStart := time.Now()
+	response, err := req.Next() // Make the request
+	tryEnd := time.Now()
+	tryDuration := tryEnd.Sub(tryStart)
+	opDuration := tryEnd.Sub(opValues.start)
+
+	if log.Should(log.EventResponse) {
+		// We're going to log this; build the string to log
+		b := &bytes.Buffer{}
+		fmt.Fprintf(b, "==> REQUEST/RESPONSE (Try=%d/%v, OpTime=%v) -- ", opValues.try, tryDuration, opDuration)
+		if err != nil { // This HTTP request did not get a response from the service
+			fmt.Fprint(b, "REQUEST ERROR\n")
+		} else {
+			fmt.Fprint(b, "RESPONSE RECEIVED\n")
+		}
+
+		p.writeRequestWithResponse(b, req, response, err)
+		if err != nil {
+			// skip frames runtime.Callers() and runtime.StackTrace()
+			b.WriteString(diag.StackTrace(2, 32))
+		} else if p.includeBody {
+			err = writeRespBody(response, b)
+		}
+		log.Write(log.EventResponse, b.String())
+	}
+	return response, err
+}
+
+const redactedValue = "REDACTED"
+
+// getSanitizedURL returns a sanitized string for the provided url.URL
+func getSanitizedURL(u url.URL, allowedQueryParams map[string]struct{}) string {
+	// redact applicable query params
+	qp := u.Query()
+	for k := range qp {
+		if _, ok := allowedQueryParams[strings.ToLower(k)]; !ok {
+			qp.Set(k, redactedValue)
+		}
+	}
+	u.RawQuery = qp.Encode()
+	return u.String()
+}
+
+// writeRequestWithResponse appends a formatted HTTP request into a Buffer. If the response and/or err are
+// not nil, then these are also written into the Buffer.
+func (p *logPolicy) writeRequestWithResponse(b *bytes.Buffer, req *policy.Request, resp *http.Response, err error) {
+	// Write the request into the buffer.
+	fmt.Fprint(b, "   "+req.Raw().Method+" "+getSanitizedURL(*req.Raw().URL, p.allowedQP)+"\n")
+	p.writeHeader(b, req.Raw().Header)
+	if resp != nil {
+		fmt.Fprintln(b, "   --------------------------------------------------------------------------------")
+		fmt.Fprint(b, "   RESPONSE Status: "+resp.Status+"\n")
+		p.writeHeader(b, resp.Header)
+	}
+	if err != nil {
+		fmt.Fprintln(b, "   --------------------------------------------------------------------------------")
+		fmt.Fprint(b, "   ERROR:\n"+err.Error()+"\n")
+	}
+}
+
+// writeHeader appends an HTTP request's or response's header into a Buffer.
+func (p *logPolicy) writeHeader(b *bytes.Buffer, header http.Header) {
+	if len(header) == 0 {
+		b.WriteString("   (no headers)\n")
+		return
+	}
+	keys := make([]string, 0, len(header))
+	// Alphabetize the headers
+	for k := range header {
+		keys = append(keys, k)
+	}
+	sort.Strings(keys)
+	for _, k := range keys {
+		// don't use Get() as it will canonicalize k which might cause a mismatch
+		value := header[k][0]
+		// redact all header values not in the allow-list
+		if _, ok := p.allowedHeaders[strings.ToLower(k)]; !ok {
+			value = redactedValue
+		}
+		fmt.Fprintf(b, "   %s: %+v\n", k, value)
+	}
+}
+
+// returns true if the request/response body should be logged.
+// this is determined by looking at the content-type header value.
+func shouldLogBody(b *bytes.Buffer, contentType string) bool {
+	contentType = strings.ToLower(contentType)
+	if strings.HasPrefix(contentType, "text") ||
+		strings.Contains(contentType, "json") ||
+		strings.Contains(contentType, "xml") {
+		return true
+	}
+	fmt.Fprintf(b, "   Skip logging body for %s\n", contentType)
+	return false
+}
+
+// writes to a buffer, used for logging purposes
+func writeReqBody(req *policy.Request, b *bytes.Buffer) error {
+	if req.Raw().Body == nil {
+		fmt.Fprint(b, "   Request contained no body\n")
+		return nil
+	}
+	if ct := req.Raw().Header.Get(shared.HeaderContentType); !shouldLogBody(b, ct) {
+		return nil
+	}
+	body, err := io.ReadAll(req.Raw().Body)
+	if err != nil {
+		fmt.Fprintf(b, "   Failed to read request body: %s\n", err.Error())
+		return err
+	}
+	if err := req.RewindBody(); err != nil {
+		return err
+	}
+	logBody(b, body)
+	return nil
+}
+
+// writes to a buffer, used for logging purposes
+func writeRespBody(resp *http.Response, b *bytes.Buffer) error {
+	ct := resp.Header.Get(shared.HeaderContentType)
+	if ct == "" {
+		fmt.Fprint(b, "   Response contained no body\n")
+		return nil
+	} else if !shouldLogBody(b, ct) {
+		return nil
+	}
+	body, err := Payload(resp)
+	if err != nil {
+		fmt.Fprintf(b, "   Failed to read response body: %s\n", err.Error())
+		return err
+	}
+	if len(body) > 0 {
+		logBody(b, body)
+	} else {
+		fmt.Fprint(b, "   Response contained no body\n")
+	}
+	return nil
+}
+
+func logBody(b *bytes.Buffer, body []byte) {
+	fmt.Fprintln(b, "   --------------------------------------------------------------------------------")
+	fmt.Fprintln(b, string(body))
+	fmt.Fprintln(b, "   --------------------------------------------------------------------------------")
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_request_id.go 🔗

@@ -0,0 +1,34 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/uuid"
+)
+
+type requestIDPolicy struct{}
+
+// NewRequestIDPolicy returns a policy that adds the x-ms-client-request-id header if it's not already set
+func NewRequestIDPolicy() policy.Policy {
+	return &requestIDPolicy{}
+}
+
+func (r *requestIDPolicy) Do(req *policy.Request) (*http.Response, error) {
+	if req.Raw().Header.Get(shared.HeaderXMSClientRequestID) == "" {
+		id, err := uuid.New()
+		if err != nil {
+			return nil, err
+		}
+		req.Raw().Header.Set(shared.HeaderXMSClientRequestID, id.String())
+	}
+
+	return req.Next()
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_retry.go 🔗

@@ -0,0 +1,276 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"context"
+	"errors"
+	"io"
+	"math"
+	"math/rand"
+	"net/http"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/exported"
+)
+
+const (
+	defaultMaxRetries = 3
+)
+
+func setDefaults(o *policy.RetryOptions) {
+	if o.MaxRetries == 0 {
+		o.MaxRetries = defaultMaxRetries
+	} else if o.MaxRetries < 0 {
+		o.MaxRetries = 0
+	}
+
+	// SDK guidelines specify the default MaxRetryDelay is 60 seconds
+	if o.MaxRetryDelay == 0 {
+		o.MaxRetryDelay = 60 * time.Second
+	} else if o.MaxRetryDelay < 0 {
+		// not really an unlimited cap, but sufficiently large to be considered as such
+		o.MaxRetryDelay = math.MaxInt64
+	}
+	if o.RetryDelay == 0 {
+		o.RetryDelay = 800 * time.Millisecond
+	} else if o.RetryDelay < 0 {
+		o.RetryDelay = 0
+	}
+	if o.StatusCodes == nil {
+		// NOTE: if you change this list, you MUST update the docs in policy/policy.go
+		o.StatusCodes = []int{
+			http.StatusRequestTimeout,      // 408
+			http.StatusTooManyRequests,     // 429
+			http.StatusInternalServerError, // 500
+			http.StatusBadGateway,          // 502
+			http.StatusServiceUnavailable,  // 503
+			http.StatusGatewayTimeout,      // 504
+		}
+	}
+}
+
+func calcDelay(o policy.RetryOptions, try int32) time.Duration { // try is >=1; never 0
+	// avoid overflow when shifting left
+	factor := time.Duration(math.MaxInt64)
+	if try < 63 {
+		factor = time.Duration(int64(1<<try) - 1)
+	}
+
+	delay := factor * o.RetryDelay
+	if delay < factor {
+		// overflow has happened so set to max value
+		delay = time.Duration(math.MaxInt64)
+	}
+
+	// Introduce jitter:  [0.0, 1.0) / 2 = [0.0, 0.5) + 0.8 = [0.8, 1.3)
+	jitterMultiplier := rand.Float64()/2 + 0.8 // NOTE: We want math/rand; not crypto/rand
+
+	delayFloat := float64(delay) * jitterMultiplier
+	if delayFloat > float64(math.MaxInt64) {
+		// the jitter pushed us over MaxInt64, so just use MaxInt64
+		delay = time.Duration(math.MaxInt64)
+	} else {
+		delay = time.Duration(delayFloat)
+	}
+
+	if delay > o.MaxRetryDelay { // MaxRetryDelay is backfilled with non-negative value
+		delay = o.MaxRetryDelay
+	}
+
+	return delay
+}
+
+// NewRetryPolicy creates a policy object configured using the specified options.
+// Pass nil to accept the default values; this is the same as passing a zero-value options.
+func NewRetryPolicy(o *policy.RetryOptions) policy.Policy {
+	if o == nil {
+		o = &policy.RetryOptions{}
+	}
+	p := &retryPolicy{options: *o}
+	return p
+}
+
+type retryPolicy struct {
+	options policy.RetryOptions
+}
+
+func (p *retryPolicy) Do(req *policy.Request) (resp *http.Response, err error) {
+	options := p.options
+	// check if the retry options have been overridden for this call
+	if override := req.Raw().Context().Value(shared.CtxWithRetryOptionsKey{}); override != nil {
+		options = override.(policy.RetryOptions)
+	}
+	setDefaults(&options)
+	// Exponential retry algorithm: ((2 ^ attempt) - 1) * delay * random(0.8, 1.3)
+	// When to retry: connection failure or temporary/timeout.
+	var rwbody *retryableRequestBody
+	if req.Body() != nil {
+		// wrap the body so we control when it's actually closed.
+		// do this outside the for loop so defers don't accumulate.
+		rwbody = &retryableRequestBody{body: req.Body()}
+		defer rwbody.realClose()
+	}
+	try := int32(1)
+	for {
+		resp = nil // reset
+		// unfortunately we don't have access to the custom allow-list of query params, so we'll redact everything but the default allowed QPs
+		log.Writef(log.EventRetryPolicy, "=====> Try=%d for %s %s", try, req.Raw().Method, getSanitizedURL(*req.Raw().URL, getAllowedQueryParams(nil)))
+
+		// For each try, seek to the beginning of the Body stream. We do this even for the 1st try because
+		// the stream may not be at offset 0 when we first get it and we want the same behavior for the
+		// 1st try as for additional tries.
+		err = req.RewindBody()
+		if err != nil {
+			return
+		}
+		// RewindBody() restores Raw().Body to its original state, so set our rewindable after
+		if rwbody != nil {
+			req.Raw().Body = rwbody
+		}
+
+		if options.TryTimeout == 0 {
+			clone := req.Clone(req.Raw().Context())
+			resp, err = clone.Next()
+		} else {
+			// Set the per-try time for this particular retry operation and then Do the operation.
+			tryCtx, tryCancel := context.WithTimeout(req.Raw().Context(), options.TryTimeout)
+			clone := req.Clone(tryCtx)
+			resp, err = clone.Next() // Make the request
+			// if the body was already downloaded or there was an error it's safe to cancel the context now
+			if err != nil {
+				tryCancel()
+			} else if exported.PayloadDownloaded(resp) {
+				tryCancel()
+			} else {
+				// must cancel the context after the body has been read and closed
+				resp.Body = &contextCancelReadCloser{cf: tryCancel, body: resp.Body}
+			}
+		}
+		if err == nil {
+			log.Writef(log.EventRetryPolicy, "response %d", resp.StatusCode)
+		} else {
+			log.Writef(log.EventRetryPolicy, "error %v", err)
+		}
+
+		if ctxErr := req.Raw().Context().Err(); ctxErr != nil {
+			// don't retry if the parent context has been cancelled or its deadline exceeded
+			err = ctxErr
+			log.Writef(log.EventRetryPolicy, "abort due to %v", err)
+			return
+		}
+
+		// check if the error is not retriable
+		var nre errorinfo.NonRetriable
+		if errors.As(err, &nre) {
+			// the error says it's not retriable so don't retry
+			log.Writef(log.EventRetryPolicy, "non-retriable error %T", nre)
+			return
+		}
+
+		if options.ShouldRetry != nil {
+			// a non-nil ShouldRetry overrides our HTTP status code check
+			if !options.ShouldRetry(resp, err) {
+				// predicate says we shouldn't retry
+				log.Write(log.EventRetryPolicy, "exit due to ShouldRetry")
+				return
+			}
+		} else if err == nil && !HasStatusCode(resp, options.StatusCodes...) {
+			// if there is no error and the response code isn't in the list of retry codes then we're done.
+			log.Write(log.EventRetryPolicy, "exit due to non-retriable status code")
+			return
+		}
+
+		if try == options.MaxRetries+1 {
+			// max number of tries has been reached, don't sleep again
+			log.Writef(log.EventRetryPolicy, "MaxRetries %d exceeded", options.MaxRetries)
+			return
+		}
+
+		// use the delay from retry-after if available
+		delay := shared.RetryAfter(resp)
+		if delay <= 0 {
+			delay = calcDelay(options, try)
+		} else if delay > options.MaxRetryDelay {
+			// the retry-after delay exceeds the cap so don't retry
+			log.Writef(log.EventRetryPolicy, "Retry-After delay %s exceeds MaxRetryDelay of %s", delay, options.MaxRetryDelay)
+			return
+		}
+
+		// drain before retrying so nothing is leaked
+		Drain(resp)
+
+		log.Writef(log.EventRetryPolicy, "End Try #%d, Delay=%v", try, delay)
+		select {
+		case <-time.After(delay):
+			try++
+		case <-req.Raw().Context().Done():
+			err = req.Raw().Context().Err()
+			log.Writef(log.EventRetryPolicy, "abort due to %v", err)
+			return
+		}
+	}
+}
+
+// WithRetryOptions adds the specified RetryOptions to the parent context.
+// Use this to specify custom RetryOptions at the API-call level.
+// Deprecated: use [policy.WithRetryOptions] instead.
+func WithRetryOptions(parent context.Context, options policy.RetryOptions) context.Context {
+	return policy.WithRetryOptions(parent, options)
+}
+
+// ********** The following type/methods implement the retryableRequestBody (a ReadSeekCloser)
+
+// This struct is used when sending a body to the network
+type retryableRequestBody struct {
+	body io.ReadSeeker // Seeking is required to support retries
+}
+
+// Read reads a block of data from the inner stream
+func (b *retryableRequestBody) Read(p []byte) (n int, err error) {
+	return b.body.Read(p)
+}
+
+func (b *retryableRequestBody) Seek(offset int64, whence int) (offsetFromStart int64, err error) {
+	return b.body.Seek(offset, whence)
+}
+
+func (b *retryableRequestBody) Close() error {
+	// We don't want the underlying transport to close the request body on transient failures so this is a nop.
+	// The retry policy closes the request body upon success.
+	return nil
+}
+
+func (b *retryableRequestBody) realClose() error {
+	if c, ok := b.body.(io.Closer); ok {
+		return c.Close()
+	}
+	return nil
+}
+
+// ********** The following type/methods implement the contextCancelReadCloser
+
+// contextCancelReadCloser combines an io.ReadCloser with a cancel func.
+// it ensures the cancel func is invoked once the body has been read and closed.
+type contextCancelReadCloser struct {
+	cf   context.CancelFunc
+	body io.ReadCloser
+}
+
+func (rc *contextCancelReadCloser) Read(p []byte) (n int, err error) {
+	return rc.body.Read(p)
+}
+
+func (rc *contextCancelReadCloser) Close() error {
+	err := rc.body.Close()
+	rc.cf()
+	return err
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_sas_credential.go 🔗

@@ -0,0 +1,55 @@
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// SASCredentialPolicy authorizes requests with a [azcore.SASCredential].
+type SASCredentialPolicy struct {
+	cred      *exported.SASCredential
+	header    string
+	allowHTTP bool
+}
+
+// SASCredentialPolicyOptions contains the optional values configuring [SASCredentialPolicy].
+type SASCredentialPolicyOptions struct {
+	// InsecureAllowCredentialWithHTTP enables authenticated requests over HTTP.
+	// By default, authenticated requests to an HTTP endpoint are rejected by the client.
+	// WARNING: setting this to true will allow sending the authentication key in clear text. Use with caution.
+	InsecureAllowCredentialWithHTTP bool
+}
+
+// NewSASCredentialPolicy creates a new instance of [SASCredentialPolicy].
+//   - cred is the [azcore.SASCredential] used to authenticate with the service
+//   - header is the name of the HTTP request header in which the shared access signature is placed
+//   - options contains optional configuration, pass nil to accept the default values
+func NewSASCredentialPolicy(cred *exported.SASCredential, header string, options *SASCredentialPolicyOptions) *SASCredentialPolicy {
+	if options == nil {
+		options = &SASCredentialPolicyOptions{}
+	}
+	return &SASCredentialPolicy{
+		cred:      cred,
+		header:    header,
+		allowHTTP: options.InsecureAllowCredentialWithHTTP,
+	}
+}
+
+// Do implements the Do method on the [policy.Policy] interface.
+func (k *SASCredentialPolicy) Do(req *policy.Request) (*http.Response, error) {
+	// skip adding the authorization header if no SASCredential was provided.
+	// this prevents a panic that might be hard to diagnose and allows testing
+	// against http endpoints that don't require authentication.
+	if k.cred != nil {
+		if err := checkHTTPSForAuth(req, k.allowHTTP); err != nil {
+			return nil, err
+		}
+		req.Raw().Header.Add(k.header, exported.SASCredentialGet(k.cred))
+	}
+	return req.Next()
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_telemetry.go 🔗

@@ -0,0 +1,83 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"bytes"
+	"fmt"
+	"net/http"
+	"os"
+	"runtime"
+	"strings"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+type telemetryPolicy struct {
+	telemetryValue string
+}
+
+// NewTelemetryPolicy creates a telemetry policy object that adds telemetry information to outgoing HTTP requests.
+// The format is [<application_id> ]azsdk-go-<mod>/<ver> <platform_info>.
+// Pass nil to accept the default values; this is the same as passing a zero-value options.
+func NewTelemetryPolicy(mod, ver string, o *policy.TelemetryOptions) policy.Policy {
+	if o == nil {
+		o = &policy.TelemetryOptions{}
+	}
+	tp := telemetryPolicy{}
+	if o.Disabled {
+		return &tp
+	}
+	b := &bytes.Buffer{}
+	// normalize ApplicationID
+	if o.ApplicationID != "" {
+		o.ApplicationID = strings.ReplaceAll(o.ApplicationID, " ", "/")
+		if len(o.ApplicationID) > 24 {
+			o.ApplicationID = o.ApplicationID[:24]
+		}
+		b.WriteString(o.ApplicationID)
+		b.WriteRune(' ')
+	}
+	// mod might be the fully qualified name. in that case, we just want the package name
+	if i := strings.LastIndex(mod, "/"); i > -1 {
+		mod = mod[i+1:]
+	}
+	b.WriteString(formatTelemetry(mod, ver))
+	b.WriteRune(' ')
+	b.WriteString(platformInfo)
+	tp.telemetryValue = b.String()
+	return &tp
+}
+
+func formatTelemetry(comp, ver string) string {
+	return fmt.Sprintf("azsdk-go-%s/%s", comp, ver)
+}
+
+func (p telemetryPolicy) Do(req *policy.Request) (*http.Response, error) {
+	if p.telemetryValue == "" {
+		return req.Next()
+	}
+	// preserve the existing User-Agent string
+	if ua := req.Raw().Header.Get(shared.HeaderUserAgent); ua != "" {
+		p.telemetryValue = fmt.Sprintf("%s %s", p.telemetryValue, ua)
+	}
+	req.Raw().Header.Set(shared.HeaderUserAgent, p.telemetryValue)
+	return req.Next()
+}
+
+// NOTE: the ONLY function that should write to this variable is this func
+var platformInfo = func() string {
+	operatingSystem := runtime.GOOS // Default OS string
+	switch operatingSystem {
+	case "windows":
+		operatingSystem = os.Getenv("OS") // Get more specific OS information
+	case "linux": // accept default OS info
+	case "freebsd": //  accept default OS info
+	}
+	return fmt.Sprintf("(%s; %s)", runtime.Version(), operatingSystem)
+}()

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/poller.go 🔗

@@ -0,0 +1,396 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"flag"
+	"fmt"
+	"net/http"
+	"reflect"
+	"strings"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/async"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/body"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/fake"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/loc"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/op"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/poller"
+)
+
+// FinalStateVia is the enumerated type for the possible final-state-via values.
+type FinalStateVia = pollers.FinalStateVia
+
+const (
+	// FinalStateViaAzureAsyncOp indicates the final payload comes from the Azure-AsyncOperation URL.
+	FinalStateViaAzureAsyncOp = pollers.FinalStateViaAzureAsyncOp
+
+	// FinalStateViaLocation indicates the final payload comes from the Location URL.
+	FinalStateViaLocation = pollers.FinalStateViaLocation
+
+	// FinalStateViaOriginalURI indicates the final payload comes from the original URL.
+	FinalStateViaOriginalURI = pollers.FinalStateViaOriginalURI
+
+	// FinalStateViaOpLocation indicates the final payload comes from the Operation-Location URL.
+	FinalStateViaOpLocation = pollers.FinalStateViaOpLocation
+)
+
+// NewPollerOptions contains the optional parameters for NewPoller.
+type NewPollerOptions[T any] struct {
+	// FinalStateVia contains the final-state-via value for the LRO.
+	// NOTE: used only for Azure-AsyncOperation and Operation-Location LROs.
+	FinalStateVia FinalStateVia
+
+	// OperationLocationResultPath contains the JSON path to the result's
+	// payload when it's included with the terminal success response.
+	// NOTE: only used for Operation-Location LROs.
+	OperationLocationResultPath string
+
+	// Response contains a preconstructed response type.
+	// The final payload will be unmarshaled into it and returned.
+	Response *T
+
+	// Handler[T] contains a custom polling implementation.
+	Handler PollingHandler[T]
+
+	// Tracer contains the Tracer from the client that's creating the Poller.
+	Tracer tracing.Tracer
+}
+
+// NewPoller creates a Poller based on the provided initial response.
+func NewPoller[T any](resp *http.Response, pl exported.Pipeline, options *NewPollerOptions[T]) (*Poller[T], error) {
+	if options == nil {
+		options = &NewPollerOptions[T]{}
+	}
+	result := options.Response
+	if result == nil {
+		result = new(T)
+	}
+	if options.Handler != nil {
+		return &Poller[T]{
+			op:     options.Handler,
+			resp:   resp,
+			result: result,
+			tracer: options.Tracer,
+		}, nil
+	}
+
+	defer resp.Body.Close()
+	// this is a back-stop in case the swagger is incorrect (i.e. missing one or more status codes for success).
+	// ideally the codegen should return an error if the initial response failed and not even create a poller.
+	if !poller.StatusCodeValid(resp) {
+		return nil, errors.New("the operation failed or was cancelled")
+	}
+
+	// determine the polling method
+	var opr PollingHandler[T]
+	var err error
+	if fake.Applicable(resp) {
+		opr, err = fake.New[T](pl, resp)
+	} else if async.Applicable(resp) {
+		// async poller must be checked first as it can also have a location header
+		opr, err = async.New[T](pl, resp, options.FinalStateVia)
+	} else if op.Applicable(resp) {
+		// op poller must be checked before loc as it can also have a location header
+		opr, err = op.New[T](pl, resp, options.FinalStateVia, options.OperationLocationResultPath)
+	} else if loc.Applicable(resp) {
+		opr, err = loc.New[T](pl, resp)
+	} else if body.Applicable(resp) {
+		// must test body poller last as it's a subset of the other pollers.
+		// TODO: this is ambiguous for PATCH/PUT if it returns a 200 with no polling headers (sync completion)
+		opr, err = body.New[T](pl, resp)
+	} else if m := resp.Request.Method; resp.StatusCode == http.StatusAccepted && (m == http.MethodDelete || m == http.MethodPost) {
+		// if we get here it means we have a 202 with no polling headers.
+		// for DELETE and POST this is a hard error per ARM RPC spec.
+		return nil, errors.New("response is missing polling URL")
+	} else {
+		opr, err = pollers.NewNopPoller[T](resp)
+	}
+
+	if err != nil {
+		return nil, err
+	}
+	return &Poller[T]{
+		op:     opr,
+		resp:   resp,
+		result: result,
+		tracer: options.Tracer,
+	}, nil
+}
+
+// NewPollerFromResumeTokenOptions contains the optional parameters for NewPollerFromResumeToken.
+type NewPollerFromResumeTokenOptions[T any] struct {
+	// Response contains a preconstructed response type.
+	// The final payload will be unmarshaled into it and returned.
+	Response *T
+
+	// Handler[T] contains a custom polling implementation.
+	Handler PollingHandler[T]
+
+	// Tracer contains the Tracer from the client that's creating the Poller.
+	Tracer tracing.Tracer
+}
+
+// NewPollerFromResumeToken creates a Poller from a resume token string.
+func NewPollerFromResumeToken[T any](token string, pl exported.Pipeline, options *NewPollerFromResumeTokenOptions[T]) (*Poller[T], error) {
+	if options == nil {
+		options = &NewPollerFromResumeTokenOptions[T]{}
+	}
+	result := options.Response
+	if result == nil {
+		result = new(T)
+	}
+
+	if err := pollers.IsTokenValid[T](token); err != nil {
+		return nil, err
+	}
+	raw, err := pollers.ExtractToken(token)
+	if err != nil {
+		return nil, err
+	}
+	var asJSON map[string]any
+	if err := json.Unmarshal(raw, &asJSON); err != nil {
+		return nil, err
+	}
+
+	opr := options.Handler
+	// now rehydrate the poller based on the encoded poller type
+	if fake.CanResume(asJSON) {
+		opr, _ = fake.New[T](pl, nil)
+	} else if opr != nil {
+		log.Writef(log.EventLRO, "Resuming custom poller %T.", opr)
+	} else if async.CanResume(asJSON) {
+		opr, _ = async.New[T](pl, nil, "")
+	} else if body.CanResume(asJSON) {
+		opr, _ = body.New[T](pl, nil)
+	} else if loc.CanResume(asJSON) {
+		opr, _ = loc.New[T](pl, nil)
+	} else if op.CanResume(asJSON) {
+		opr, _ = op.New[T](pl, nil, "", "")
+	} else {
+		return nil, fmt.Errorf("unhandled poller token %s", string(raw))
+	}
+	if err := json.Unmarshal(raw, &opr); err != nil {
+		return nil, err
+	}
+	return &Poller[T]{
+		op:     opr,
+		result: result,
+		tracer: options.Tracer,
+	}, nil
+}
+
+// PollingHandler[T] abstracts the differences among poller implementations.
+type PollingHandler[T any] interface {
+	// Done returns true if the LRO has reached a terminal state.
+	Done() bool
+
+	// Poll fetches the latest state of the LRO.
+	Poll(context.Context) (*http.Response, error)
+
+	// Result is called once the LRO has reached a terminal state. It populates the out parameter
+	// with the result of the operation.
+	Result(ctx context.Context, out *T) error
+}
+
+// Poller encapsulates a long-running operation, providing polling facilities until the operation reaches a terminal state.
+// Methods on this type are not safe for concurrent use.
+type Poller[T any] struct {
+	op     PollingHandler[T]
+	resp   *http.Response
+	err    error
+	result *T
+	tracer tracing.Tracer
+	done   bool
+}
+
+// PollUntilDoneOptions contains the optional values for the Poller[T].PollUntilDone() method.
+type PollUntilDoneOptions struct {
+	// Frequency is the time to wait between polling intervals in absence of a Retry-After header. Allowed minimum is one second.
+	// Pass zero to accept the default value (30s).
+	Frequency time.Duration
+}
+
+// PollUntilDone will poll the service endpoint until a terminal state is reached, an error is received, or the context expires.
+// It internally uses Poll(), Done(), and Result() in its polling loop, sleeping for the specified duration between intervals.
+// options: pass nil to accept the default values.
+// NOTE: the default polling frequency is 30 seconds which works well for most operations.  However, some operations might
+// benefit from a shorter or longer duration.
+func (p *Poller[T]) PollUntilDone(ctx context.Context, options *PollUntilDoneOptions) (res T, err error) {
+	if options == nil {
+		options = &PollUntilDoneOptions{}
+	}
+	cp := *options
+	if cp.Frequency == 0 {
+		cp.Frequency = 30 * time.Second
+	}
+
+	ctx, endSpan := StartSpan(ctx, fmt.Sprintf("%s.PollUntilDone", shortenTypeName(reflect.TypeOf(*p).Name())), p.tracer, nil)
+	defer func() { endSpan(err) }()
+
+	// skip the floor check when executing tests so they don't take so long
+	if isTest := flag.Lookup("test.v"); isTest == nil && cp.Frequency < time.Second {
+		err = errors.New("polling frequency minimum is one second")
+		return
+	}
+
+	start := time.Now()
+	logPollUntilDoneExit := func(v any) {
+		log.Writef(log.EventLRO, "END PollUntilDone() for %T: %v, total time: %s", p.op, v, time.Since(start))
+	}
+	log.Writef(log.EventLRO, "BEGIN PollUntilDone() for %T", p.op)
+	if p.resp != nil {
+		// initial check for a retry-after header existing on the initial response
+		if retryAfter := shared.RetryAfter(p.resp); retryAfter > 0 {
+			log.Writef(log.EventLRO, "initial Retry-After delay for %s", retryAfter.String())
+			if err = shared.Delay(ctx, retryAfter); err != nil {
+				logPollUntilDoneExit(err)
+				return
+			}
+		}
+	}
+	// begin polling the endpoint until a terminal state is reached
+	for {
+		var resp *http.Response
+		resp, err = p.Poll(ctx)
+		if err != nil {
+			logPollUntilDoneExit(err)
+			return
+		}
+		if p.Done() {
+			logPollUntilDoneExit("succeeded")
+			res, err = p.Result(ctx)
+			return
+		}
+		d := cp.Frequency
+		if retryAfter := shared.RetryAfter(resp); retryAfter > 0 {
+			log.Writef(log.EventLRO, "Retry-After delay for %s", retryAfter.String())
+			d = retryAfter
+		} else {
+			log.Writef(log.EventLRO, "delay for %s", d.String())
+		}
+		if err = shared.Delay(ctx, d); err != nil {
+			logPollUntilDoneExit(err)
+			return
+		}
+	}
+}
+
+// Poll fetches the latest state of the LRO.  It returns an HTTP response or error.
+// If Poll succeeds, the poller's state is updated and the HTTP response is returned.
+// If Poll fails, the poller's state is unmodified and the error is returned.
+// Calling Poll on an LRO that has reached a terminal state will return the last HTTP response.
+func (p *Poller[T]) Poll(ctx context.Context) (resp *http.Response, err error) {
+	if p.Done() {
+		// the LRO has reached a terminal state, don't poll again
+		resp = p.resp
+		return
+	}
+
+	ctx, endSpan := StartSpan(ctx, fmt.Sprintf("%s.Poll", shortenTypeName(reflect.TypeOf(*p).Name())), p.tracer, nil)
+	defer func() { endSpan(err) }()
+
+	resp, err = p.op.Poll(ctx)
+	if err != nil {
+		return
+	}
+	p.resp = resp
+	return
+}
+
+// Done returns true if the LRO has reached a terminal state.
+// Once a terminal state is reached, call Result().
+func (p *Poller[T]) Done() bool {
+	return p.op.Done()
+}
+
+// Result returns the result of the LRO and is meant to be used in conjunction with Poll and Done.
+// If the LRO completed successfully, a populated instance of T is returned.
+// If the LRO failed or was canceled, an *azcore.ResponseError error is returned.
+// Calling this on an LRO in a non-terminal state will return an error.
+func (p *Poller[T]) Result(ctx context.Context) (res T, err error) {
+	if !p.Done() {
+		err = errors.New("poller is in a non-terminal state")
+		return
+	}
+	if p.done {
+		// the result has already been retrieved, return the cached value
+		if p.err != nil {
+			err = p.err
+			return
+		}
+		res = *p.result
+		return
+	}
+
+	ctx, endSpan := StartSpan(ctx, fmt.Sprintf("%s.Result", shortenTypeName(reflect.TypeOf(*p).Name())), p.tracer, nil)
+	defer func() { endSpan(err) }()
+
+	err = p.op.Result(ctx, p.result)
+	var respErr *exported.ResponseError
+	if errors.As(err, &respErr) {
+		if pollers.IsNonTerminalHTTPStatusCode(respErr.RawResponse) {
+			// the request failed in a non-terminal way.
+			// don't cache the error or mark the Poller as done
+			return
+		}
+		// the LRO failed. record the error
+		p.err = err
+	} else if err != nil {
+		// the call to Result failed, don't cache anything in this case
+		return
+	}
+	p.done = true
+	if p.err != nil {
+		err = p.err
+		return
+	}
+	res = *p.result
+	return
+}
+
+// ResumeToken returns a value representing the poller that can be used to resume
+// the LRO at a later time. ResumeTokens are unique per service operation.
+// The token's format should be considered opaque and is subject to change.
+// Calling this on an LRO in a terminal state will return an error.
+func (p *Poller[T]) ResumeToken() (string, error) {
+	if p.Done() {
+		return "", errors.New("poller is in a terminal state")
+	}
+	tk, err := pollers.NewResumeToken[T](p.op)
+	if err != nil {
+		return "", err
+	}
+	return tk, err
+}
+
+// extracts the type name from the string returned from reflect.Type.Name()
+func shortenTypeName(s string) string {
+	// the value is formatted as follows
+	// Poller[module/Package.Type].Method
+	// we want to shorten the generic type parameter string to Type
+	// anything we don't recognize will be left as-is
+	begin := strings.Index(s, "[")
+	end := strings.Index(s, "]")
+	if begin == -1 || end == -1 {
+		return s
+	}
+
+	typeName := s[begin+1 : end]
+	if i := strings.LastIndex(typeName, "."); i > -1 {
+		typeName = typeName[i+1:]
+	}
+	return s[:begin+1] + typeName + s[end:]
+}
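The name-trimming helper above is small enough to exercise in isolation. Below is a stdlib-only copy, reproduced outside the SDK purely for illustration, with sample inputs:

```go
package main

import (
	"fmt"
	"strings"
)

// shortenTypeName trims the generic type parameter in a string like
// "Poller[module/Package.Type].Method" down to "Poller[Type].Method".
// Input it doesn't recognize is returned unchanged.
func shortenTypeName(s string) string {
	begin := strings.Index(s, "[")
	end := strings.Index(s, "]")
	if begin == -1 || end == -1 {
		return s
	}
	typeName := s[begin+1 : end]
	if i := strings.LastIndex(typeName, "."); i > -1 {
		typeName = typeName[i+1:]
	}
	return s[:begin+1] + typeName + s[end:]
}

func main() {
	fmt.Println(shortenTypeName("Poller[module/Package.Type].Method")) // Poller[Type].Method
	fmt.Println(shortenTypeName("NoGenerics"))                         // NoGenerics
}
```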

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/request.go 🔗

@@ -0,0 +1,281 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"encoding/xml"
+	"errors"
+	"fmt"
+	"io"
+	"mime/multipart"
+	"net/http"
+	"net/textproto"
+	"net/url"
+	"path"
+	"strings"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/uuid"
+)
+
+// Base64Encoding is used to specify which base-64 encoder/decoder to use when
+// encoding/decoding a slice of bytes to/from a string.
+type Base64Encoding = exported.Base64Encoding
+
+const (
+	// Base64StdFormat uses base64.StdEncoding for encoding and decoding payloads.
+	Base64StdFormat Base64Encoding = exported.Base64StdFormat
+
+	// Base64URLFormat uses base64.RawURLEncoding for encoding and decoding payloads.
+	Base64URLFormat Base64Encoding = exported.Base64URLFormat
+)
+
+// NewRequest creates a new policy.Request with the specified input.
+// The endpoint MUST be properly encoded before calling this function.
+func NewRequest(ctx context.Context, httpMethod string, endpoint string) (*policy.Request, error) {
+	return exported.NewRequest(ctx, httpMethod, endpoint)
+}
+
+// NewRequestFromRequest creates a new policy.Request with an existing *http.Request
+func NewRequestFromRequest(req *http.Request) (*policy.Request, error) {
+	return exported.NewRequestFromRequest(req)
+}
+
+// EncodeQueryParams will parse and encode any query parameters in the specified URL.
+// Any semicolons will automatically be escaped.
+func EncodeQueryParams(u string) (string, error) {
+	before, after, found := strings.Cut(u, "?")
+	if !found {
+		return u, nil
+	}
+	// starting in Go 1.17, url.ParseQuery will reject semicolons in query params.
+	// so, we must escape them first. note that this assumes that semicolons aren't
+	// being used as query param separators which is per the current RFC.
+	// for more info:
+	// https://github.com/golang/go/issues/25192
+	// https://github.com/golang/go/issues/50034
+	qp, err := url.ParseQuery(strings.ReplaceAll(after, ";", "%3B"))
+	if err != nil {
+		return "", err
+	}
+	return before + "?" + qp.Encode(), nil
+}
+
+// JoinPaths concatenates multiple URL path segments into one path,
+// inserting path separation characters as required. JoinPaths will preserve
+// query parameters in the root path
+func JoinPaths(root string, paths ...string) string {
+	if len(paths) == 0 {
+		return root
+	}
+
+	qps := ""
+	if strings.Contains(root, "?") {
+		splitPath := strings.Split(root, "?")
+		root, qps = splitPath[0], splitPath[1]
+	}
+
+	p := path.Join(paths...)
+	// path.Join will remove any trailing slashes.
+	// if one was provided, preserve it.
+	if strings.HasSuffix(paths[len(paths)-1], "/") && !strings.HasSuffix(p, "/") {
+		p += "/"
+	}
+
+	if qps != "" {
+		p = p + "?" + qps
+	}
+
+	if strings.HasSuffix(root, "/") && strings.HasPrefix(p, "/") {
+		root = root[:len(root)-1]
+	} else if !strings.HasSuffix(root, "/") && !strings.HasPrefix(p, "/") {
+		p = "/" + p
+	}
+	return root + p
+}
+
+// EncodeByteArray will base-64 encode the byte slice v.
+func EncodeByteArray(v []byte, format Base64Encoding) string {
+	return exported.EncodeByteArray(v, format)
+}
+
+// MarshalAsByteArray will base-64 encode the byte slice v, then calls SetBody.
+// The encoded value is treated as a JSON string.
+func MarshalAsByteArray(req *policy.Request, v []byte, format Base64Encoding) error {
+	// send as a JSON string
+	encode := fmt.Sprintf("\"%s\"", EncodeByteArray(v, format))
+	// tsp generated code can set Content-Type so we must prefer that
+	return exported.SetBody(req, exported.NopCloser(strings.NewReader(encode)), shared.ContentTypeAppJSON, false)
+}
+
+// MarshalAsJSON calls json.Marshal() to get the JSON encoding of v then calls SetBody.
+func MarshalAsJSON(req *policy.Request, v any) error {
+	b, err := json.Marshal(v)
+	if err != nil {
+		return fmt.Errorf("error marshalling type %T: %s", v, err)
+	}
+	// tsp generated code can set Content-Type so we must prefer that
+	return exported.SetBody(req, exported.NopCloser(bytes.NewReader(b)), shared.ContentTypeAppJSON, false)
+}
+
+// MarshalAsXML calls xml.Marshal() to get the XML encoding of v then calls SetBody.
+func MarshalAsXML(req *policy.Request, v any) error {
+	b, err := xml.Marshal(v)
+	if err != nil {
+		return fmt.Errorf("error marshalling type %T: %s", v, err)
+	}
+	// include the XML header as some services require it
+	b = []byte(xml.Header + string(b))
+	return req.SetBody(exported.NopCloser(bytes.NewReader(b)), shared.ContentTypeAppXML)
+}
+
+// SetMultipartFormData writes the specified keys/values as multi-part form fields with the specified value.
+// File content must be specified as an [io.ReadSeekCloser] or [streaming.MultipartContent].
+// Byte slices will be treated as JSON. All other values are treated as string values.
+func SetMultipartFormData(req *policy.Request, formData map[string]any) error {
+	body := bytes.Buffer{}
+	writer := multipart.NewWriter(&body)
+
+	writeContent := func(fieldname, filename string, src io.Reader) error {
+		fd, err := writer.CreateFormFile(fieldname, filename)
+		if err != nil {
+			return err
+		}
+		// copy the data to the form file
+		if _, err = io.Copy(fd, src); err != nil {
+			return err
+		}
+		return nil
+	}
+
+	quoteEscaper := strings.NewReplacer("\\", "\\\\", `"`, "\\\"")
+
+	writeMultipartContent := func(fieldname string, mpc streaming.MultipartContent) error {
+		if mpc.Body == nil {
+			return errors.New("streaming.MultipartContent.Body cannot be nil")
+		}
+
+		// use fieldname for the file name when unspecified
+		filename := fieldname
+
+		if mpc.ContentType == "" && mpc.Filename == "" {
+			return writeContent(fieldname, filename, mpc.Body)
+		}
+		if mpc.Filename != "" {
+			filename = mpc.Filename
+		}
+		// this is pretty much copied from multipart.Writer.CreateFormFile
+		// but lets us set the caller provided Content-Type and filename
+		h := make(textproto.MIMEHeader)
+		h.Set("Content-Disposition",
+			fmt.Sprintf(`form-data; name="%s"; filename="%s"`,
+				quoteEscaper.Replace(fieldname), quoteEscaper.Replace(filename)))
+		contentType := "application/octet-stream"
+		if mpc.ContentType != "" {
+			contentType = mpc.ContentType
+		}
+		h.Set("Content-Type", contentType)
+		fd, err := writer.CreatePart(h)
+		if err != nil {
+			return err
+		}
+		// copy the data to the form file
+		if _, err = io.Copy(fd, mpc.Body); err != nil {
+			return err
+		}
+		return nil
+	}
+
+	// the same as multipart.Writer.WriteField but lets us specify the Content-Type
+	writeField := func(fieldname, contentType string, value string) error {
+		h := make(textproto.MIMEHeader)
+		h.Set("Content-Disposition",
+			fmt.Sprintf(`form-data; name="%s"`, quoteEscaper.Replace(fieldname)))
+		h.Set("Content-Type", contentType)
+		fd, err := writer.CreatePart(h)
+		if err != nil {
+			return err
+		}
+		if _, err = fd.Write([]byte(value)); err != nil {
+			return err
+		}
+		return nil
+	}
+
+	for k, v := range formData {
+		if rsc, ok := v.(io.ReadSeekCloser); ok {
+			if err := writeContent(k, k, rsc); err != nil {
+				return err
+			}
+			continue
+		} else if rscs, ok := v.([]io.ReadSeekCloser); ok {
+			for _, rsc := range rscs {
+				if err := writeContent(k, k, rsc); err != nil {
+					return err
+				}
+			}
+			continue
+		} else if mpc, ok := v.(streaming.MultipartContent); ok {
+			if err := writeMultipartContent(k, mpc); err != nil {
+				return err
+			}
+			continue
+		} else if mpcs, ok := v.([]streaming.MultipartContent); ok {
+			for _, mpc := range mpcs {
+				if err := writeMultipartContent(k, mpc); err != nil {
+					return err
+				}
+			}
+			continue
+		}
+
+		var content string
+		contentType := shared.ContentTypeTextPlain
+		switch tt := v.(type) {
+		case []byte:
+			// JSON, don't quote it
+			content = string(tt)
+			contentType = shared.ContentTypeAppJSON
+		case string:
+			content = tt
+		default:
+			// ensure the value is in string format
+			content = fmt.Sprintf("%v", v)
+		}
+
+		if err := writeField(k, contentType, content); err != nil {
+			return err
+		}
+	}
+	if err := writer.Close(); err != nil {
+		return err
+	}
+	return req.SetBody(exported.NopCloser(bytes.NewReader(body.Bytes())), writer.FormDataContentType())
+}
+
+// SkipBodyDownload will disable automatic downloading of the response body.
+func SkipBodyDownload(req *policy.Request) {
+	req.SetOperationValue(bodyDownloadPolicyOpValues{Skip: true})
+}
+
+// CtxAPINameKey is used as a context key for adding/retrieving the API name.
+type CtxAPINameKey = shared.CtxAPINameKey
+
+// NewUUID returns a new UUID using the RFC4122 algorithm.
+func NewUUID() (string, error) {
+	u, err := uuid.New()
+	if err != nil {
+		return "", err
+	}
+	return u.String(), nil
+}
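The query-preserving concatenation performed by `JoinPaths` above can be exercised with a stdlib-only copy of its logic (reproduced here for illustration; the real entry point is `runtime.JoinPaths`):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// joinPaths mirrors the logic above: split any query string off the
// root, join the segments with path.Join, restore a trailing slash if
// the last segment had one, then reattach the query parameters.
func joinPaths(root string, paths ...string) string {
	if len(paths) == 0 {
		return root
	}
	qps := ""
	if strings.Contains(root, "?") {
		splitPath := strings.Split(root, "?")
		root, qps = splitPath[0], splitPath[1]
	}
	p := path.Join(paths...)
	// path.Join removes trailing slashes; preserve one if provided
	if strings.HasSuffix(paths[len(paths)-1], "/") && !strings.HasSuffix(p, "/") {
		p += "/"
	}
	if qps != "" {
		p = p + "?" + qps
	}
	if strings.HasSuffix(root, "/") && strings.HasPrefix(p, "/") {
		root = root[:len(root)-1]
	} else if !strings.HasSuffix(root, "/") && !strings.HasPrefix(p, "/") {
		p = "/" + p
	}
	return root + p
}

func main() {
	fmt.Println(joinPaths("https://host?api-version=1", "a", "b/")) // https://host/a/b/?api-version=1
}
```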

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/response.go 🔗

@@ -0,0 +1,109 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"bytes"
+	"encoding/json"
+	"encoding/xml"
+	"fmt"
+	"io"
+	"net/http"
+
+	azexported "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/exported"
+)
+
+// Payload reads and returns the response body or an error.
+// On a successful read, the response body is cached.
+// Subsequent reads will access the cached value.
+func Payload(resp *http.Response) ([]byte, error) {
+	return exported.Payload(resp, nil)
+}
+
+// HasStatusCode returns true if the Response's status code is one of the specified values.
+func HasStatusCode(resp *http.Response, statusCodes ...int) bool {
+	return exported.HasStatusCode(resp, statusCodes...)
+}
+
+// UnmarshalAsByteArray will base-64 decode the received payload and place the result into the value pointed to by v.
+func UnmarshalAsByteArray(resp *http.Response, v *[]byte, format Base64Encoding) error {
+	p, err := Payload(resp)
+	if err != nil {
+		return err
+	}
+	return DecodeByteArray(string(p), v, format)
+}
+
+// UnmarshalAsJSON calls json.Unmarshal() to unmarshal the received payload into the value pointed to by v.
+func UnmarshalAsJSON(resp *http.Response, v any) error {
+	payload, err := Payload(resp)
+	if err != nil {
+		return err
+	}
+	// TODO: verify early exit is correct
+	if len(payload) == 0 {
+		return nil
+	}
+	err = removeBOM(resp)
+	if err != nil {
+		return err
+	}
+	err = json.Unmarshal(payload, v)
+	if err != nil {
+		err = fmt.Errorf("unmarshalling type %T: %s", v, err)
+	}
+	return err
+}
+
+// UnmarshalAsXML calls xml.Unmarshal() to unmarshal the received payload into the value pointed to by v.
+func UnmarshalAsXML(resp *http.Response, v any) error {
+	payload, err := Payload(resp)
+	if err != nil {
+		return err
+	}
+	// TODO: verify early exit is correct
+	if len(payload) == 0 {
+		return nil
+	}
+	err = removeBOM(resp)
+	if err != nil {
+		return err
+	}
+	err = xml.Unmarshal(payload, v)
+	if err != nil {
+		err = fmt.Errorf("unmarshalling type %T: %s", v, err)
+	}
+	return err
+}
+
+// Drain reads the response body to completion then closes it.  The bytes read are discarded.
+func Drain(resp *http.Response) {
+	if resp != nil && resp.Body != nil {
+		_, _ = io.Copy(io.Discard, resp.Body)
+		resp.Body.Close()
+	}
+}
+
+// removeBOM removes any byte-order mark prefix from the payload if present.
+func removeBOM(resp *http.Response) error {
+	_, err := exported.Payload(resp, &exported.PayloadOptions{
+		BytesModifier: func(b []byte) []byte {
+			// UTF8
+			return bytes.TrimPrefix(b, []byte("\xef\xbb\xbf"))
+		},
+	})
+	if err != nil {
+		return err
+	}
+	return nil
+}
+
+// DecodeByteArray will base-64 decode the provided string into v.
+func DecodeByteArray(s string, v *[]byte, format Base64Encoding) error {
+	return azexported.DecodeByteArray(s, v, format)
+}
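The BOM handling used by `removeBOM` above amounts to a single `bytes.TrimPrefix` call before the payload reaches `json.Unmarshal` or `xml.Unmarshal`; a stdlib-only sketch:

```go
package main

import (
	"bytes"
	"fmt"
)

// utf8BOM is the UTF-8 byte-order mark that removeBOM strips.
var utf8BOM = []byte("\xef\xbb\xbf")

// stripBOM removes a leading UTF-8 BOM if present; payloads without
// one pass through unchanged.
func stripBOM(b []byte) []byte {
	return bytes.TrimPrefix(b, utf8BOM)
}

func main() {
	fmt.Println(string(stripBOM([]byte("\xef\xbb\xbf{\"ok\":true}")))) // {"ok":true}
	fmt.Println(string(stripBOM([]byte("plain"))))                     // plain
}
```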

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_http_client.go 🔗

@@ -0,0 +1,48 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"crypto/tls"
+	"net"
+	"net/http"
+	"time"
+
+	"golang.org/x/net/http2"
+)
+
+var defaultHTTPClient *http.Client
+
+func init() {
+	defaultTransport := &http.Transport{
+		Proxy: http.ProxyFromEnvironment,
+		DialContext: defaultTransportDialContext(&net.Dialer{
+			Timeout:   30 * time.Second,
+			KeepAlive: 30 * time.Second,
+		}),
+		ForceAttemptHTTP2:     true,
+		MaxIdleConns:          100,
+		MaxIdleConnsPerHost:   10,
+		IdleConnTimeout:       90 * time.Second,
+		TLSHandshakeTimeout:   10 * time.Second,
+		ExpectContinueTimeout: 1 * time.Second,
+		TLSClientConfig: &tls.Config{
+			MinVersion:    tls.VersionTLS12,
+			Renegotiation: tls.RenegotiateFreelyAsClient,
+		},
+	}
+	// TODO: evaluate removing this once https://github.com/golang/go/issues/59690 has been fixed
+	if http2Transport, err := http2.ConfigureTransports(defaultTransport); err == nil {
+		// if the connection has been idle for 10 seconds, send a ping frame for a health check
+		http2Transport.ReadIdleTimeout = 10 * time.Second
+		// if there's no response to the ping within the timeout, the connection will be closed
+		http2Transport.PingTimeout = 5 * time.Second
+	}
+	defaultHTTPClient = &http.Client{
+		Transport: defaultTransport,
+	}
+}
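The defaults above can be assembled with the standard library alone; this sketch mirrors them, omitting only the `golang.org/x/net/http2` ping tuning (`ReadIdleTimeout`/`PingTimeout`), which requires the external module:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

// newDefaultClient builds a client with the same transport defaults as
// the vendored init above (minus the x/net/http2 health-check tuning).
func newDefaultClient() *http.Client {
	transport := &http.Transport{
		Proxy: http.ProxyFromEnvironment,
		DialContext: (&net.Dialer{
			Timeout:   30 * time.Second,
			KeepAlive: 30 * time.Second,
		}).DialContext,
		ForceAttemptHTTP2:     true,
		MaxIdleConns:          100,
		MaxIdleConnsPerHost:   10,
		IdleConnTimeout:       90 * time.Second,
		TLSHandshakeTimeout:   10 * time.Second,
		ExpectContinueTimeout: 1 * time.Second,
		TLSClientConfig: &tls.Config{
			MinVersion:    tls.VersionTLS12,
			Renegotiation: tls.RenegotiateFreelyAsClient,
		},
	}
	return &http.Client{Transport: transport}
}

// minTLSVersion reports the configured TLS floor (0x0303 is TLS 1.2).
func minTLSVersion() uint16 {
	return newDefaultClient().Transport.(*http.Transport).TLSClientConfig.MinVersion
}

func main() {
	fmt.Printf("min TLS version: %#x\n", minTLSVersion())
}
```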

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming/doc.go 🔗

@@ -0,0 +1,9 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright 2017 Microsoft Corporation. All rights reserved.
+// Use of this source code is governed by an MIT
+// license that can be found in the LICENSE file.
+
+// Package streaming contains helpers for streaming IO operations and progress reporting.
+package streaming

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming/progress.go 🔗

@@ -0,0 +1,89 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package streaming
+
+import (
+	"io"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+)
+
+type progress struct {
+	rc     io.ReadCloser
+	rsc    io.ReadSeekCloser
+	pr     func(bytesTransferred int64)
+	offset int64
+}
+
+// NopCloser returns a ReadSeekCloser with a no-op close method wrapping the provided io.ReadSeeker.
+// In addition to adding a Close method to an io.ReadSeeker, this can also be used to wrap an
+// io.ReadSeekCloser with a no-op Close method to allow explicit control of when the io.ReadSeekCloser
+// has its underlying stream closed.
+func NopCloser(rs io.ReadSeeker) io.ReadSeekCloser {
+	return exported.NopCloser(rs)
+}
+
+// NewRequestProgress adds progress reporting to an HTTP request's body stream.
+func NewRequestProgress(body io.ReadSeekCloser, pr func(bytesTransferred int64)) io.ReadSeekCloser {
+	return &progress{
+		rc:     body,
+		rsc:    body,
+		pr:     pr,
+		offset: 0,
+	}
+}
+
+// NewResponseProgress adds progress reporting to an HTTP response's body stream.
+func NewResponseProgress(body io.ReadCloser, pr func(bytesTransferred int64)) io.ReadCloser {
+	return &progress{
+		rc:     body,
+		rsc:    nil,
+		pr:     pr,
+		offset: 0,
+	}
+}
+
+// Read reads a block of data from an inner stream and reports progress
+func (p *progress) Read(b []byte) (n int, err error) {
+	n, err = p.rc.Read(b)
+	if err != nil && err != io.EOF {
+		return
+	}
+	p.offset += int64(n)
+	// Invokes the user's callback method to report progress
+	p.pr(p.offset)
+	return
+}
+
+// Seek is only expected to be called with a zero offset from the beginning of the stream.
+func (p *progress) Seek(offset int64, whence int) (int64, error) {
+	// This should only ever be called with offset = 0 and whence = io.SeekStart
+	n, err := p.rsc.Seek(offset, whence)
+	if err == nil {
+		p.offset = int64(n)
+	}
+	return n, err
+}
+
+// requestBodyProgress supports Close but the underlying stream may not; if it does, Close will close it.
+func (p *progress) Close() error {
+	return p.rc.Close()
+}
+
+// MultipartContent contains streaming content used in multipart/form payloads.
+type MultipartContent struct {
+	// Body contains the required content body.
+	Body io.ReadSeekCloser
+
+	// ContentType optionally specifies the HTTP Content-Type for this Body.
+	// The default value is application/octet-stream.
+	ContentType string
+
+	// Filename optionally specifies the filename for this Body.
+	// The default value is the field name for the multipart/form section.
+	Filename string
+}
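The Read path above (count the bytes, then invoke the callback with the running offset) can be sketched with only the standard library:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// progressReader is a stdlib-only sketch of the progress wrapper above:
// after each Read it reports the running byte offset to a callback.
type progressReader struct {
	r      io.Reader
	offset int64
	report func(bytesTransferred int64)
}

func (p *progressReader) Read(b []byte) (int, error) {
	n, err := p.r.Read(b)
	if err != nil && err != io.EOF {
		return n, err
	}
	p.offset += int64(n)
	p.report(p.offset)
	return n, err
}

// trackedLength drains s through a progressReader and returns the last
// offset the callback observed.
func trackedLength(s string) int64 {
	var last int64
	pr := &progressReader{
		r:      strings.NewReader(s),
		report: func(n int64) { last = n },
	}
	_, _ = io.Copy(io.Discard, pr)
	return last
}

func main() {
	fmt.Println(trackedLength("hello world")) // 11
}
```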

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing/constants.go 🔗

@@ -0,0 +1,41 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package tracing
+
+// SpanKind represents the role of a Span inside a Trace. Often, this defines how a Span will be processed and visualized by various backends.
+type SpanKind int
+
+const (
+	// SpanKindInternal indicates the span represents an internal operation within an application.
+	SpanKindInternal SpanKind = 1
+
+	// SpanKindServer indicates the span covers server-side handling of a request.
+	SpanKindServer SpanKind = 2
+
+	// SpanKindClient indicates the span describes a request to a remote service.
+	SpanKindClient SpanKind = 3
+
+	// SpanKindProducer indicates the span was created by a messaging producer.
+	SpanKindProducer SpanKind = 4
+
+	// SpanKindConsumer indicates the span was created by a messaging consumer.
+	SpanKindConsumer SpanKind = 5
+)
+
+// SpanStatus represents the status of a span.
+type SpanStatus int
+
+const (
+	// SpanStatusUnset is the default status code.
+	SpanStatusUnset SpanStatus = 0
+
+	// SpanStatusError indicates the operation contains an error.
+	SpanStatusError SpanStatus = 1
+
+	// SpanStatusOK indicates the operation completed successfully.
+	SpanStatusOK SpanStatus = 2
+)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing/tracing.go 🔗

@@ -0,0 +1,191 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+// Package tracing contains the definitions needed to support distributed tracing.
+package tracing
+
+import (
+	"context"
+)
+
+// ProviderOptions contains the optional values when creating a Provider.
+type ProviderOptions struct {
+	// for future expansion
+}
+
+// NewProvider creates a new Provider with the specified values.
+//   - newTracerFn is the underlying implementation for creating Tracer instances
+//   - options contains optional values; pass nil to accept the default value
+func NewProvider(newTracerFn func(name, version string) Tracer, options *ProviderOptions) Provider {
+	return Provider{
+		newTracerFn: newTracerFn,
+	}
+}
+
+// Provider is the factory that creates Tracer instances.
+// It defaults to a no-op provider.
+type Provider struct {
+	newTracerFn func(name, version string) Tracer
+}
+
+// NewTracer creates a new Tracer for the specified module name and version.
+//   - module - the fully qualified name of the module
+//   - version - the version of the module
+func (p Provider) NewTracer(module, version string) (tracer Tracer) {
+	if p.newTracerFn != nil {
+		tracer = p.newTracerFn(module, version)
+	}
+	return
+}
+
+/////////////////////////////////////////////////////////////////////////////////////////////////////////////
+
+// TracerOptions contains the optional values when creating a Tracer.
+type TracerOptions struct {
+	// SpanFromContext contains the implementation for the Tracer.SpanFromContext method.
+	SpanFromContext func(context.Context) Span
+}
+
+// NewTracer creates a Tracer with the specified values.
+//   - newSpanFn is the underlying implementation for creating Span instances
+//   - options contains optional values; pass nil to accept the default value
+func NewTracer(newSpanFn func(ctx context.Context, spanName string, options *SpanOptions) (context.Context, Span), options *TracerOptions) Tracer {
+	if options == nil {
+		options = &TracerOptions{}
+	}
+	return Tracer{
+		newSpanFn:         newSpanFn,
+		spanFromContextFn: options.SpanFromContext,
+	}
+}
+
+// Tracer is the factory that creates Span instances.
+type Tracer struct {
+	attrs             []Attribute
+	newSpanFn         func(ctx context.Context, spanName string, options *SpanOptions) (context.Context, Span)
+	spanFromContextFn func(ctx context.Context) Span
+}
+
+// Start creates a new span and a context.Context that contains it.
+//   - ctx is the parent context for this span. If it contains a Span, the newly created span will be a child of that span, else it will be a root span
+//   - spanName identifies the span within a trace, it's typically the fully qualified API name
+//   - options contains optional values for the span, pass nil to accept any defaults
+func (t Tracer) Start(ctx context.Context, spanName string, options *SpanOptions) (context.Context, Span) {
+	if t.newSpanFn != nil {
+		opts := SpanOptions{}
+		if options != nil {
+			opts = *options
+		}
+		opts.Attributes = append(opts.Attributes, t.attrs...)
+		return t.newSpanFn(ctx, spanName, &opts)
+	}
+	return ctx, Span{}
+}
+
+// SetAttributes sets attrs to be applied to each Span. If a key from attrs
+// already exists for an attribute of the Span it will be overwritten with
+// the value contained in attrs.
+func (t *Tracer) SetAttributes(attrs ...Attribute) {
+	t.attrs = append(t.attrs, attrs...)
+}
+
+// Enabled returns true if this Tracer is capable of creating Spans.
+func (t Tracer) Enabled() bool {
+	return t.newSpanFn != nil
+}
+
+// SpanFromContext returns the Span associated with the current context.
+// If the provided context has no Span, a zero-value Span is returned.
+func (t Tracer) SpanFromContext(ctx context.Context) Span {
+	if t.spanFromContextFn != nil {
+		return t.spanFromContextFn(ctx)
+	}
+	return Span{}
+}
+
+// SpanOptions contains optional settings for creating a span.
+type SpanOptions struct {
+	// Kind indicates the kind of Span.
+	Kind SpanKind
+
+	// Attributes contains key-value pairs of attributes for the span.
+	Attributes []Attribute
+}
+
+/////////////////////////////////////////////////////////////////////////////////////////////////////////////
+
+// SpanImpl abstracts the underlying implementation for Span,
+// allowing it to work with various tracing implementations.
+// Any zero-values will have their default, no-op behavior.
+type SpanImpl struct {
+	// End contains the implementation for the Span.End method.
+	End func()
+
+	// SetAttributes contains the implementation for the Span.SetAttributes method.
+	SetAttributes func(...Attribute)
+
+	// AddEvent contains the implementation for the Span.AddEvent method.
+	AddEvent func(string, ...Attribute)
+
+	// SetStatus contains the implementation for the Span.SetStatus method.
+	SetStatus func(SpanStatus, string)
+}
+
+// NewSpan creates a Span with the specified implementation.
+func NewSpan(impl SpanImpl) Span {
+	return Span{
+		impl: impl,
+	}
+}
+
+// Span is a single unit of a trace.  A trace can contain multiple spans.
+// A zero-value Span provides a no-op implementation.
+type Span struct {
+	impl SpanImpl
+}
+
+// End terminates the span and MUST be called before the span leaves scope.
+// Any further updates to the span will be ignored after End is called.
+func (s Span) End() {
+	if s.impl.End != nil {
+		s.impl.End()
+	}
+}
+
+// SetAttributes sets the specified attributes on the Span.
+// Any existing attributes with the same keys will have their values overwritten.
+func (s Span) SetAttributes(attrs ...Attribute) {
+	if s.impl.SetAttributes != nil {
+		s.impl.SetAttributes(attrs...)
+	}
+}
+
+// AddEvent adds a named event with an optional set of attributes to the span.
+func (s Span) AddEvent(name string, attrs ...Attribute) {
+	if s.impl.AddEvent != nil {
+		s.impl.AddEvent(name, attrs...)
+	}
+}
+
+// SetStatus sets the status on the span along with a description.
+func (s Span) SetStatus(code SpanStatus, desc string) {
+	if s.impl.SetStatus != nil {
+		s.impl.SetStatus(code, desc)
+	}
+}
+
+/////////////////////////////////////////////////////////////////////////////////////////////////////////////
+
+// Attribute is a key-value pair.
+type Attribute struct {
+	// Key is the name of the attribute.
+	Key string
+
+	// Value is the attribute's value.
+	// Types that are natively supported include int64, float64, int, bool, string.
+	// Any other type will be formatted per rules of fmt.Sprintf("%v").
+	Value any
+}
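The zero-value no-op behavior that `Span` relies on boils down to a nil check before each callback; a minimal stdlib-only sketch of the pattern:

```go
package main

import "fmt"

// spanImpl and span sketch the pattern tracing.Span uses above: every
// method guards its callback with a nil check, so a zero-value span is
// a safe no-op.
type spanImpl struct {
	end func()
}

type span struct {
	impl spanImpl
}

func (s span) End() {
	if s.impl.end != nil {
		s.impl.end()
	}
}

func main() {
	var zero span
	zero.End() // zero value: safe no-op

	calls := 0
	wired := span{impl: spanImpl{end: func() { calls++ }}}
	wired.End()
	fmt.Println(calls) // 1
}
```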

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/CHANGELOG.md 🔗

@@ -0,0 +1,575 @@
+# Release History
+
+## 1.7.0 (2024-06-20)
+
+### Features Added
+* `AzurePipelinesCredential` authenticates an Azure Pipelines service connection with
+  workload identity federation
+
+### Breaking Changes
+> These changes affect only code written against a beta version such as v1.7.0-beta.1
+* Removed the persistent token caching API. It will return in v1.8.0-beta.1
+
+## 1.7.0-beta.1 (2024-06-10)
+
+### Features Added
+* Restored `AzurePipelinesCredential` and persistent token caching API
+
+### Breaking Changes
+> These changes affect only code written against a beta version such as v1.6.0-beta.4
+* Values which `NewAzurePipelinesCredential` read from environment variables in
+  prior versions are now parameters
+* Renamed `AzurePipelinesServiceConnectionCredentialOptions` to `AzurePipelinesCredentialOptions`
+
+### Bugs Fixed
+* Managed identity bug fixes
+
+## 1.6.0 (2024-06-10)
+
+### Features Added
+* `NewOnBehalfOfCredentialWithClientAssertions` creates an on-behalf-of credential
+  that authenticates with client assertions such as federated credentials
+
+### Breaking Changes
+> These changes affect only code written against a beta version such as v1.6.0-beta.4
+* Removed `AzurePipelinesCredential` and the persistent token caching API.
+  They will return in v1.7.0-beta.1
+
+### Bugs Fixed
+* Managed identity bug fixes
+
+## 1.6.0-beta.4 (2024-05-14)
+
+### Features Added
+* `AzurePipelinesCredential` authenticates an Azure Pipeline service connection with
+  workload identity federation
+
+## 1.6.0-beta.3 (2024-04-09)
+
+### Breaking Changes
+* `DefaultAzureCredential` now sends a probe request with no retries for IMDS managed identity
+  environments to avoid excessive retry delays when the IMDS endpoint is not available. This
+  should improve credential chain resolution for local development scenarios.
+
+### Bugs Fixed
+* `ManagedIdentityCredential` now specifies resource IDs correctly for Azure Container Instances
+
+## 1.5.2 (2024-04-09)
+
+### Bugs Fixed
+* `ManagedIdentityCredential` now specifies resource IDs correctly for Azure Container Instances
+
+### Other Changes
+* Restored v1.4.0 error behavior for empty tenant IDs
+* Upgraded dependencies
+
+## 1.6.0-beta.2 (2024-02-06)
+
+### Breaking Changes
+> These changes affect only code written against a beta version such as v1.6.0-beta.1
+* Replaced `ErrAuthenticationRequired` with `AuthenticationRequiredError`, a struct
+  type that carries the `TokenRequestOptions` passed to the `GetToken` call which
+  returned the error.
+
+### Bugs Fixed
+* Fixed more cases in which credential chains like `DefaultAzureCredential`
+  should try their next credential after attempting managed identity
+  authentication in a Docker Desktop container
+
+### Other Changes
+* `AzureCLICredential` uses the CLI's `expires_on` value for token expiration
+
+## 1.6.0-beta.1 (2024-01-17)
+
+### Features Added
+* Restored persistent token caching API first added in v1.5.0-beta.1
+* Added `AzureCLICredentialOptions.Subscription`
+
+## 1.5.1 (2024-01-17)
+
+### Bugs Fixed
+* `InteractiveBrowserCredential` handles `AdditionallyAllowedTenants` correctly
+
+## 1.5.0 (2024-01-16)
+
+### Breaking Changes
+> These changes affect only code written against a beta version such as v1.5.0-beta.1
+* Removed persistent token caching. It will return in v1.6.0-beta.1
+
+### Bugs Fixed
+* Credentials now preserve MSAL headers e.g. X-Client-Sku
+
+### Other Changes
+* Upgraded dependencies
+
+## 1.5.0-beta.2 (2023-11-07)
+
+### Features Added
+* `DefaultAzureCredential` and `ManagedIdentityCredential` support Azure ML managed identity
+* Added spans for distributed tracing.
+
+## 1.5.0-beta.1 (2023-10-10)
+
+### Features Added
+* Optional persistent token caching for most credentials. Set `TokenCachePersistenceOptions`
+  on a credential's options to enable and configure this. See the package documentation for
+  this version and [TOKEN_CACHING.md](https://aka.ms/azsdk/go/identity/caching) for more
+  details.
+* `AzureDeveloperCLICredential` authenticates with the Azure Developer CLI (`azd`). This
+  credential is also part of the `DefaultAzureCredential` authentication flow.
+
+## 1.4.0 (2023-10-10)
+
+### Bugs Fixed
+* `ManagedIdentityCredential` will now retry when IMDS responds 410 or 503
+
+## 1.4.0-beta.5 (2023-09-12)
+
+### Features Added
+* Service principal credentials can request CAE tokens
+
+### Breaking Changes
+> These changes affect only code written against a beta version such as v1.4.0-beta.4
+* Whether `GetToken` requests a CAE token is now determined by `TokenRequestOptions.EnableCAE`. Azure
+  SDK clients which support CAE will set this option automatically. Credentials no longer request CAE
+  tokens by default or observe the environment variable "AZURE_IDENTITY_DISABLE_CP1".
+
+### Bugs Fixed
+* Credential chains such as `DefaultAzureCredential` now try their next credential, if any, when
+  managed identity authentication fails in a Docker Desktop container
+  ([#21417](https://github.com/Azure/azure-sdk-for-go/issues/21417))
+
+## 1.4.0-beta.4 (2023-08-16)
+
+### Other Changes
+* Upgraded dependencies
+
+## 1.3.1 (2023-08-16)
+
+### Other Changes
+* Upgraded dependencies
+
+## 1.4.0-beta.3 (2023-08-08)
+
+### Bugs Fixed
+* One invocation of `AzureCLICredential.GetToken()` and `OnBehalfOfCredential.GetToken()`
+  can no longer make two authentication attempts
+
+## 1.4.0-beta.2 (2023-07-14)
+
+### Other Changes
+* `DefaultAzureCredentialOptions.TenantID` applies to workload identity authentication
+* Upgraded dependencies
+
+## 1.4.0-beta.1 (2023-06-06)
+
+### Other Changes
+* Re-enabled CAE support as in v1.3.0-beta.3
+
+## 1.3.0 (2023-05-09)
+
+### Breaking Changes
+> These changes affect only code written against a beta version such as v1.3.0-beta.5
+* Renamed `NewOnBehalfOfCredentialFromCertificate` to `NewOnBehalfOfCredentialWithCertificate`
+* Renamed `NewOnBehalfOfCredentialFromSecret` to `NewOnBehalfOfCredentialWithSecret`
+
+### Other Changes
+* Upgraded to MSAL v1.0.0
+
+## 1.3.0-beta.5 (2023-04-11)
+
+### Breaking Changes
+> These changes affect only code written against a beta version such as v1.3.0-beta.4
+* Moved `NewWorkloadIdentityCredential()` parameters into `WorkloadIdentityCredentialOptions`.
+  The constructor now reads default configuration from environment variables set by the Azure
+  workload identity webhook by default.
+  ([#20478](https://github.com/Azure/azure-sdk-for-go/pull/20478))
+* Removed CAE support. It will return in v1.4.0-beta.1
+  ([#20479](https://github.com/Azure/azure-sdk-for-go/pull/20479))
+
+### Bugs Fixed
+* Fixed an issue in `DefaultAzureCredential` that could cause the managed identity endpoint check to fail in rare circumstances.
+
+## 1.3.0-beta.4 (2023-03-08)
+
+### Features Added
+* Added `WorkloadIdentityCredentialOptions.AdditionallyAllowedTenants` and `.DisableInstanceDiscovery`
+
+### Bugs Fixed
+* Credentials now synchronize within `GetToken()` so a single instance can be shared among goroutines
+  ([#20044](https://github.com/Azure/azure-sdk-for-go/issues/20044))
+
+### Other Changes
+* Upgraded dependencies
+
+## 1.2.2 (2023-03-07)
+
+### Other Changes
+* Upgraded dependencies
+
+## 1.3.0-beta.3 (2023-02-07)
+
+### Features Added
+* By default, credentials set client capability "CP1" to enable support for
+  [Continuous Access Evaluation (CAE)](https://learn.microsoft.com/entra/identity-platform/app-resilience-continuous-access-evaluation).
+  This indicates to Microsoft Entra ID that your application can handle CAE claims challenges.
+  You can disable this behavior by setting the environment variable "AZURE_IDENTITY_DISABLE_CP1" to "true".
+* `InteractiveBrowserCredentialOptions.LoginHint` enables pre-populating the login
+  prompt with a username ([#15599](https://github.com/Azure/azure-sdk-for-go/pull/15599))
+* Service principal and user credentials support ADFS authentication on Azure Stack.
+  Specify "adfs" as the credential's tenant.
+* Applications running in private or disconnected clouds can prevent credentials from
+  requesting Microsoft Entra instance metadata by setting the `DisableInstanceDiscovery`
+  field on credential options.
+* Many credentials can now be configured to authenticate in multiple tenants. The
+  options types for these credentials have an `AdditionallyAllowedTenants` field
+  that specifies additional tenants in which the credential may authenticate.
+
+## 1.3.0-beta.2 (2023-01-10)
+
+### Features Added
+* Added `OnBehalfOfCredential` to support the on-behalf-of flow
+  ([#16642](https://github.com/Azure/azure-sdk-for-go/issues/16642))
+
+### Bugs Fixed
+* `AzureCLICredential` reports token expiration in local time (should be UTC)
+
+### Other Changes
+* `AzureCLICredential` imposes its default timeout only when the `Context`
+  passed to `GetToken()` has no deadline
+* Added `NewCredentialUnavailableError()`. This function constructs an error indicating
+  a credential can't authenticate and an encompassing `ChainedTokenCredential` should
+  try its next credential, if any.
+
+## 1.3.0-beta.1 (2022-12-13)
+
+### Features Added
+* `WorkloadIdentityCredential` and `DefaultAzureCredential` support
+  Workload Identity Federation on Kubernetes. `DefaultAzureCredential`
+  support requires environment variable configuration as set by the
+  Workload Identity webhook.
+  ([#15615](https://github.com/Azure/azure-sdk-for-go/issues/15615))
+
+## 1.2.0 (2022-11-08)
+
+### Other Changes
+* This version includes all fixes and features from 1.2.0-beta.*
+
+## 1.2.0-beta.3 (2022-10-11)
+
+### Features Added
+* `ManagedIdentityCredential` caches tokens in memory
+
+### Bugs Fixed
+* `ClientCertificateCredential` sends only the leaf cert for SNI authentication
+
+## 1.2.0-beta.2 (2022-08-10)
+
+### Features Added
+* Added `ClientAssertionCredential` to enable applications to authenticate
+  with custom client assertions
+
+### Other Changes
+* Updated AuthenticationFailedError with links to TROUBLESHOOTING.md for relevant errors
+* Upgraded `microsoft-authentication-library-for-go` requirement to v0.6.0
+
+## 1.2.0-beta.1 (2022-06-07)
+
+### Features Added
+* `EnvironmentCredential` reads certificate passwords from `AZURE_CLIENT_CERTIFICATE_PASSWORD`
+  ([#17099](https://github.com/Azure/azure-sdk-for-go/pull/17099))
+
+## 1.1.0 (2022-06-07)
+
+### Features Added
+* `ClientCertificateCredential` and `ClientSecretCredential` support ESTS-R. First-party
+  applications can set environment variable `AZURE_REGIONAL_AUTHORITY_NAME` with a
+  region name.
+  ([#15605](https://github.com/Azure/azure-sdk-for-go/issues/15605))
+
+## 1.0.1 (2022-06-07)
+
+### Other Changes
+* Upgrade `microsoft-authentication-library-for-go` requirement to v0.5.1
+  ([#18176](https://github.com/Azure/azure-sdk-for-go/issues/18176))
+
+## 1.0.0 (2022-05-12)
+
+### Features Added
+* `DefaultAzureCredential` reads environment variable `AZURE_CLIENT_ID` for the
+  client ID of a user-assigned managed identity
+  ([#17293](https://github.com/Azure/azure-sdk-for-go/pull/17293))
+
+### Breaking Changes
+* Removed `AuthorizationCodeCredential`. Use `InteractiveBrowserCredential` instead
+  to authenticate a user with the authorization code flow.
+* Instances of `AuthenticationFailedError` are now returned by pointer.
+* `GetToken()` returns `azcore.AccessToken` by value
+
+### Bugs Fixed
+* `AzureCLICredential` panics after receiving an unexpected error type
+  ([#17490](https://github.com/Azure/azure-sdk-for-go/issues/17490))
+
+### Other Changes
+* `GetToken()` returns an error when the caller specifies no scope
+* Updated to the latest versions of `golang.org/x/crypto`, `azcore` and `internal`
+
+## 0.14.0 (2022-04-05)
+
+### Breaking Changes
+* This module now requires Go 1.18
+* Removed `AuthorityHost`. Credentials are now configured for sovereign or private
+  clouds with the API in `azcore/cloud`, for example:
+  ```go
+  // before
+  opts := azidentity.ClientSecretCredentialOptions{AuthorityHost: azidentity.AzureGovernment}
+  cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, secret, &opts)
+
+  // after
+  import "github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud"
+
+  opts := azidentity.ClientSecretCredentialOptions{}
+  opts.Cloud = cloud.AzureGovernment
+  cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, secret, &opts)
+  ```
+
+## 0.13.2 (2022-03-08)
+
+### Bugs Fixed
+* Prevented a data race in `DefaultAzureCredential` and `ChainedTokenCredential`
+  ([#17144](https://github.com/Azure/azure-sdk-for-go/issues/17144))
+
+### Other Changes
+* Upgraded App Service managed identity version from 2017-09-01 to 2019-08-01
+  ([#17086](https://github.com/Azure/azure-sdk-for-go/pull/17086))
+
+## 0.13.1 (2022-02-08)
+
+### Features Added
+* `EnvironmentCredential` supports certificate SNI authentication when
+  `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN` is "true".
+  ([#16851](https://github.com/Azure/azure-sdk-for-go/pull/16851))
+
+### Bugs Fixed
+* `ManagedIdentityCredential.GetToken()` now returns an error when configured for
+  a user-assigned identity in Azure Cloud Shell (which doesn't support such identities)
+  ([#16946](https://github.com/Azure/azure-sdk-for-go/pull/16946))
+
+### Other Changes
+* `NewDefaultAzureCredential()` logs non-fatal errors. These errors are also included in the
+  error returned by `DefaultAzureCredential.GetToken()` when it's unable to acquire a token
+  from any source. ([#15923](https://github.com/Azure/azure-sdk-for-go/issues/15923))
+
+## 0.13.0 (2022-01-11)
+
+### Breaking Changes
+* Replaced `AuthenticationFailedError.RawResponse()` with a field having the same name
+* Unexported `CredentialUnavailableError`
+* Instances of `ChainedTokenCredential` will now skip looping through the list of source credentials and re-use the first successful credential on subsequent calls to `GetToken`.
+  * If `ChainedTokenCredentialOptions.RetrySources` is true, `ChainedTokenCredential` will continue to try all of the originally provided credentials each time the `GetToken` method is called.
+  * `ChainedTokenCredential.successfulCredential` will contain a reference to the last successful credential.
+  * `DefaultAzureCredential` will also re-use the first successful credential on subsequent calls to `GetToken`.
+  * `DefaultAzureCredential.chain.successfulCredential` will also contain a reference to the last successful credential.
+
+### Other Changes
+* `ManagedIdentityCredential` no longer probes IMDS before requesting a token
+  from it. Also, an error response from IMDS no longer disables a credential
+  instance. Following an error, a credential instance will continue to send
+  requests to IMDS as necessary.
+* Adopted MSAL for user and service principal authentication
+* Updated `azcore` requirement to 0.21.0
+
+## 0.12.0 (2021-11-02)
+### Breaking Changes
+* Raised minimum go version to 1.16
+* Removed `NewAuthenticationPolicy()` from credentials. Clients should instead use azcore's
+  `runtime.NewBearerTokenPolicy()` to construct a bearer token authorization policy.
+* The `AuthorityHost` field in credential options structs is now a custom type,
+  `AuthorityHost`, with underlying type `string`
+* `NewChainedTokenCredential` has a new signature to accommodate a placeholder
+  options struct:
+  ```go
+  // before
+  cred, err := NewChainedTokenCredential(credA, credB)
+
+  // after
+  cred, err := NewChainedTokenCredential([]azcore.TokenCredential{credA, credB}, nil)
+  ```
+* Removed `ExcludeAzureCLICredential`, `ExcludeEnvironmentCredential`, and `ExcludeMSICredential`
+  from `DefaultAzureCredentialOptions`
+* `NewClientCertificateCredential` requires a `[]*x509.Certificate` and `crypto.PrivateKey` instead of
+  a path to a certificate file. Added `ParseCertificates` to simplify getting these in common cases:
+  ```go
+  // before
+  cred, err := NewClientCertificateCredential("tenant", "client-id", "/cert.pem", nil)
+
+  // after
+  certData, err := os.ReadFile("/cert.pem")
+  certs, key, err := ParseCertificates(certData, password)
+  cred, err := NewClientCertificateCredential(tenantID, clientID, certs, key, nil)
+  ```
+* Removed `InteractiveBrowserCredentialOptions.ClientSecret` and `.Port`
+* Removed `AADAuthenticationFailedError`
+* Removed `id` parameter of `NewManagedIdentityCredential()`. User-assigned identities are now
+  specified by `ManagedIdentityCredentialOptions.ID`:
+  ```go
+  // before
+  cred, err := NewManagedIdentityCredential("client-id", nil)
+  // or, for a resource ID
+  opts := &ManagedIdentityCredentialOptions{ID: ResourceID}
+  cred, err := NewManagedIdentityCredential("/subscriptions/...", opts)
+
+  // after
+  clientID := ClientID("7cf7db0d-...")
+  opts := &ManagedIdentityCredentialOptions{ID: clientID}
+  // or, for a resource ID
+  resID := ResourceID("/subscriptions/...")
+  opts := &ManagedIdentityCredentialOptions{ID: resID}
+  cred, err := NewManagedIdentityCredential(opts)
+  ```
+* `DeviceCodeCredentialOptions.UserPrompt` has a new type: `func(context.Context, DeviceCodeMessage) error`
+* Credential options structs now embed `azcore.ClientOptions`. In addition to changing literal initialization
+  syntax, this change renames `HTTPClient` fields to `Transport`.
+* Renamed `LogCredential` to `EventCredential`
+* `AzureCLICredential` no longer reads the environment variable `AZURE_CLI_PATH`
+* `NewManagedIdentityCredential` no longer reads environment variables `AZURE_CLIENT_ID` and
+  `AZURE_RESOURCE_ID`. Use `ManagedIdentityCredentialOptions.ID` instead.
+* Unexported `AuthenticationFailedError` and `CredentialUnavailableError` structs. In their place are two
+  interfaces having the same names.
+
+### Bugs Fixed
+* `AzureCLICredential.GetToken` no longer mutates its `opts.Scopes`
+
+### Features Added
+* Added connection configuration options to `DefaultAzureCredentialOptions`
+* `AuthenticationFailedError.RawResponse()` returns the HTTP response motivating the error,
+  if available
+
+### Other Changes
+* `NewDefaultAzureCredential()` returns `*DefaultAzureCredential` instead of `*ChainedTokenCredential`
+* Added `TenantID` field to `DefaultAzureCredentialOptions` and `AzureCLICredentialOptions`
+
+## 0.11.0 (2021-09-08)
+### Breaking Changes
+* Unexported `AzureCLICredentialOptions.TokenProvider` and its type,
+  `AzureCLITokenProvider`
+
+### Bugs Fixed
+* `ManagedIdentityCredential.GetToken` returns `CredentialUnavailableError`
+  when IMDS has no assigned identity, signaling `DefaultAzureCredential` to
+  try other credentials
+
+
+## 0.10.0 (2021-08-30)
+### Breaking Changes
+* Update based on `azcore` refactor [#15383](https://github.com/Azure/azure-sdk-for-go/pull/15383)
+
+## 0.9.3 (2021-08-20)
+
+### Bugs Fixed
+* `ManagedIdentityCredential.GetToken` no longer mutates its `opts.Scopes`
+
+### Other Changes
+* Bumps version of `azcore` to `v0.18.1`
+
+
+## 0.9.2 (2021-07-23)
+### Features Added
+* Adding support for Service Fabric environment in `ManagedIdentityCredential`
+* Adding an option for using a resource ID instead of client ID in `ManagedIdentityCredential`
+
+
+## 0.9.1 (2021-05-24)
+### Features Added
+* Add LICENSE.txt and bump version information
+
+
+## 0.9.0 (2021-05-21)
+### Features Added
+* Add support for authenticating in Azure Stack environments
+* Enable user assigned identities for the IMDS scenario in `ManagedIdentityCredential`
+* Add scope to resource conversion in `GetToken()` on `ManagedIdentityCredential`
+
+
+## 0.8.0 (2021-01-20)
+### Features Added
+* Updating documentation
+
+
+## 0.7.1 (2021-01-04)
+### Features Added
+* Adding port option to `InteractiveBrowserCredential`
+
+
+## 0.7.0 (2020-12-11)
+### Features Added
+* Add `redirectURI` parameter back to authentication code flow
+
+
+## 0.6.1 (2020-12-09)
+### Features Added
+* Updating query parameter in `ManagedIdentityCredential` and updating datetime string for parsing managed identity access tokens.
+
+
+## 0.6.0 (2020-11-16)
+### Features Added
+* Remove `RedirectURL` parameter from auth code flow to align with the MSAL implementation which relies on the native client redirect URL.
+
+
+## 0.5.0 (2020-10-30)
+### Features Added
+* Flattening credential options
+
+
+## 0.4.3 (2020-10-21)
+### Features Added
+* Adding Azure Arc support in `ManagedIdentityCredential`
+
+
+## 0.4.2 (2020-10-16)
+### Features Added
+* Typo fixes
+
+
+## 0.4.1 (2020-10-16)
+### Features Added
+* Ensure authority hosts are only HTTPs
+
+
+## 0.4.0 (2020-10-16)
+### Features Added
+* Adding options structs for credentials
+
+
+## 0.3.0 (2020-10-09)
+### Features Added
+* Update `DeviceCodeCredential` callback
+
+
+## 0.2.2 (2020-10-09)
+### Features Added
+* Add `AuthorizationCodeCredential`
+
+
+## 0.2.1 (2020-10-06)
+### Features Added
+* Add `InteractiveBrowserCredential`
+
+
+## 0.2.0 (2020-09-11)
+### Features Added
+* Refactor `azidentity` on top of `azcore` refactor
+* Updated policies to conform to `policy.Policy` interface changes.
+* Updated non-retriable errors to conform to `azcore.NonRetriableError`.
+* Fixed calls to `Request.SetBody()` to include content type.
+* Switched endpoints to string types and removed extra parsing code.
+
+
+## 0.1.1 (2020-09-02)
+### Features Added
+* Add `AzureCLICredential` to `DefaultAzureCredential` chain
+
+
+## 0.1.0 (2020-07-23)
+### Features Added
+* Initial Release. Azure Identity library that provides Microsoft Entra token authentication support for the SDK.

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/LICENSE.txt 🔗

@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) Microsoft Corporation.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/MIGRATION.md 🔗

@@ -0,0 +1,307 @@
+# Migrating from autorest/adal to azidentity
+
+`azidentity` provides Microsoft Entra ID ([formerly Azure Active Directory](https://learn.microsoft.com/entra/fundamentals/new-name)) authentication for the newest Azure SDK modules (`github.com/azure-sdk-for-go/sdk/...`). Older Azure SDK packages (`github.com/azure-sdk-for-go/services/...`) use types from `github.com/go-autorest/autorest/adal` instead.
+
+This guide shows common authentication code using `autorest/adal` and its equivalent using `azidentity`.
+
+## Table of contents
+
+- [Acquire a token](#acquire-a-token)
+- [Client certificate authentication](#client-certificate-authentication)
+- [Client secret authentication](#client-secret-authentication)
+- [Configuration](#configuration)
+- [Device code authentication](#device-code-authentication)
+- [Managed identity](#managed-identity)
+- [Use azidentity credentials with older packages](#use-azidentity-credentials-with-older-packages)
+
+## Configuration
+
+### `autorest/adal`
+
+Token providers require a token audience (resource identifier) and an instance of `adal.OAuthConfig`, which requires a Microsoft Entra endpoint and tenant:
+
+```go
+import "github.com/Azure/go-autorest/autorest/adal"
+
+oauthCfg, err := adal.NewOAuthConfig("https://login.chinacloudapi.cn", tenantID)
+handle(err)
+
+spt, err := adal.NewServicePrincipalTokenWithSecret(
+    *oauthCfg, clientID, "https://management.chinacloudapi.cn/", &adal.ServicePrincipalTokenSecret{ClientSecret: secret},
+)
+```
+
+### `azidentity`
+
+A credential instance can acquire tokens for any audience. The audience for each token is determined by the client requesting it. Credentials require endpoint configuration only for sovereign or private clouds. The `azcore/cloud` package has predefined configuration for sovereign clouds such as Azure China:
+
+```go
+import (
+    "github.com/Azure/azure-sdk-for-go/sdk/azcore"
+    "github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud"
+    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+)
+
+clientOpts := azcore.ClientOptions{Cloud: cloud.AzureChina}
+
+cred, err := azidentity.NewClientSecretCredential(
+    tenantID, clientID, secret, &azidentity.ClientSecretCredentialOptions{ClientOptions: clientOpts},
+)
+handle(err)
+```
+
+## Client secret authentication
+
+### `autorest/adal`
+
+```go
+import (
+    "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-06-01/subscriptions"
+    "github.com/Azure/go-autorest/autorest"
+    "github.com/Azure/go-autorest/autorest/adal"
+)
+
+oauthCfg, err := adal.NewOAuthConfig("https://login.microsoftonline.com", tenantID)
+handle(err)
+spt, err := adal.NewServicePrincipalTokenWithSecret(
+    *oauthCfg, clientID, "https://management.azure.com/", &adal.ServicePrincipalTokenSecret{ClientSecret: secret},
+)
+handle(err)
+
+client := subscriptions.NewClient()
+client.Authorizer = autorest.NewBearerAuthorizer(spt)
+```
+
+### `azidentity`
+
+```go
+import (
+    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armsubscriptions"
+)
+
+cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, secret, nil)
+handle(err)
+
+client, err := armsubscriptions.NewClient(cred, nil)
+handle(err)
+```
+
+## Client certificate authentication
+
+### `autorest/adal`
+
+```go
+import (
+    "os"
+
+    "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-06-01/subscriptions"
+    "github.com/Azure/go-autorest/autorest"
+    "github.com/Azure/go-autorest/autorest/adal"
+)
+certData, err := os.ReadFile("./example.pfx")
+handle(err)
+
+certificate, rsaPrivateKey, err := decodePkcs12(certData, "")
+handle(err)
+
+oauthCfg, err := adal.NewOAuthConfig("https://login.microsoftonline.com", tenantID)
+handle(err)
+
+spt, err := adal.NewServicePrincipalTokenFromCertificate(
+    *oauthCfg, clientID, certificate, rsaPrivateKey, "https://management.azure.com/",
+)
+handle(err)
+
+client := subscriptions.NewClient()
+client.Authorizer = autorest.NewBearerAuthorizer(spt)
+```
+
+### `azidentity`
+
+```go
+import (
+    "os"
+
+    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armsubscriptions"
+)
+
+certData, err := os.ReadFile("./example.pfx")
+handle(err)
+
+certs, key, err := azidentity.ParseCertificates(certData, nil)
+handle(err)
+
+cred, err := azidentity.NewClientCertificateCredential(tenantID, clientID, certs, key, nil)
+handle(err)
+
+client, err := armsubscriptions.NewClient(cred, nil)
+handle(err)
+```
+
+## Managed identity
+
+### `autorest/adal`
+
+```go
+import (
+    "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-06-01/subscriptions"
+    "github.com/Azure/go-autorest/autorest"
+    "github.com/Azure/go-autorest/autorest/adal"
+)
+
+spt, err := adal.NewServicePrincipalTokenFromManagedIdentity("https://management.azure.com/", nil)
+handle(err)
+
+client := subscriptions.NewClient()
+client.Authorizer = autorest.NewBearerAuthorizer(spt)
+```
+
+### `azidentity`
+
+```go
+import (
+    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armsubscriptions"
+)
+
+cred, err := azidentity.NewManagedIdentityCredential(nil)
+handle(err)
+
+client, err := armsubscriptions.NewClient(cred, nil)
+handle(err)
+```
+
+### User-assigned identities
+
+`autorest/adal`:
+
+```go
+import "github.com/Azure/go-autorest/autorest/adal"
+
+opts := &adal.ManagedIdentityOptions{ClientID: "..."}
+spt, err := adal.NewServicePrincipalTokenFromManagedIdentity("https://management.azure.com/", opts)
+handle(err)
+```
+
+`azidentity`:
+
+```go
+import "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+
+opts := azidentity.ManagedIdentityCredentialOptions{ID: azidentity.ClientID("...")}
+cred, err := azidentity.NewManagedIdentityCredential(&opts)
+handle(err)
+```
+
+## Device code authentication
+
+### `autorest/adal`
+
+```go
+import (
+    "fmt"
+    "net/http"
+
+    "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-06-01/subscriptions"
+    "github.com/Azure/go-autorest/autorest"
+    "github.com/Azure/go-autorest/autorest/adal"
+)
+
+oauthClient := &http.Client{}
+oauthCfg, err := adal.NewOAuthConfig("https://login.microsoftonline.com", tenantID)
+handle(err)
+resource := "https://management.azure.com/"
+deviceCode, err := adal.InitiateDeviceAuth(oauthClient, *oauthCfg, clientID, resource)
+handle(err)
+
+// display instructions, wait for the user to authenticate
+fmt.Println(*deviceCode.Message)
+token, err := adal.WaitForUserCompletion(oauthClient, deviceCode)
+handle(err)
+
+spt, err := adal.NewServicePrincipalTokenFromManualToken(*oauthCfg, clientID, resource, *token)
+handle(err)
+
+client := subscriptions.NewClient()
+client.Authorizer = autorest.NewBearerAuthorizer(spt)
+```
+
+### `azidentity`
+
+```go
+import (
+    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+    "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armsubscriptions"
+)
+
+cred, err := azidentity.NewDeviceCodeCredential(nil)
+handle(err)
+
+client, err := armsubscriptions.NewClient(cred, nil)
+handle(err)
+```
+
+`azidentity.DeviceCodeCredential` will guide a user through authentication, printing instructions to the console by default. The user prompt is customizable. For more information, see the [package documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#DeviceCodeCredential).
+
+## Acquire a token
+
+### `autorest/adal`
+
+```go
+import "github.com/Azure/go-autorest/autorest/adal"
+
+oauthCfg, err := adal.NewOAuthConfig("https://login.microsoftonline.com", tenantID)
+handle(err)
+
+spt, err := adal.NewServicePrincipalTokenWithSecret(
+    *oauthCfg, clientID, "https://vault.azure.net", &adal.ServicePrincipalTokenSecret{ClientSecret: secret},
+)
+
+err = spt.Refresh()
+if err == nil {
+    token := spt.Token
+}
+```
+
+### `azidentity`
+
+In ordinary usage, application code doesn't need to request tokens from credentials directly. Azure SDK clients handle token acquisition and refreshing internally. However, applications may call `GetToken()` to do so. All credential types have this method.
+
+```go
+import (
+    "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+)
+
+cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, secret, nil)
+handle(err)
+
+tk, err := cred.GetToken(
+    context.TODO(), policy.TokenRequestOptions{Scopes: []string{"https://vault.azure.net/.default"}},
+)
+if err == nil {
+    token := tk.Token
+}
+```
+
+Note that `azidentity` credentials use the Microsoft Entra endpoint, which requires OAuth 2 scopes instead of the resource identifiers `autorest/adal` expects. For more information, see [Microsoft Entra ID documentation](https://learn.microsoft.com/entra/identity-platform/permissions-consent-overview).
+
+## Use azidentity credentials with older packages
+
+The [azidext module](https://pkg.go.dev/github.com/jongio/azidext/go/azidext) provides an adapter for `azidentity` credential types. The adapter enables using the credential types with older Azure SDK clients. For example:
+
+```go
+import (
+    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+    "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-06-01/subscriptions"
+    "github.com/jongio/azidext/go/azidext"
+)
+
+cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, secret, nil)
+handle(err)
+
+client := subscriptions.NewClient()
+client.Authorizer = azidext.NewTokenCredentialAdapter(cred, []string{"https://management.azure.com//.default"})
+```
+
+![Impressions](https://azure-sdk-impressions.azurewebsites.net/api/impressions/azure-sdk-for-go%2Fsdk%2Fazidentity%2FMIGRATION.png)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/README.md 🔗

@@ -0,0 +1,258 @@
+# Azure Identity Client Module for Go
+
+The Azure Identity module provides Microsoft Entra ID ([formerly Azure Active Directory](https://learn.microsoft.com/entra/fundamentals/new-name)) token authentication support across the Azure SDK. It includes a set of `TokenCredential` implementations, which can be used with Azure SDK clients supporting token authentication.
+
+[![PkgGoDev](https://pkg.go.dev/badge/github.com/Azure/azure-sdk-for-go/sdk/azidentity)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity)
+| [Microsoft Entra ID documentation](https://learn.microsoft.com/entra/identity/)
+| [Source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/azidentity)
+
+# Getting started
+
+## Install the module
+
+This project uses [Go modules](https://github.com/golang/go/wiki/Modules) for versioning and dependency management.
+
+Install the Azure Identity module:
+
+```sh
+go get -u github.com/Azure/azure-sdk-for-go/sdk/azidentity
+```
+
+## Prerequisites
+
+- an [Azure subscription](https://azure.microsoft.com/free/)
+- Go 1.18
+
+### Authenticating during local development
+
+When debugging and executing code locally, developers typically use their own accounts to authenticate calls to Azure services. The `azidentity` module supports authenticating through developer tools to simplify local development.
+
+#### Authenticating via the Azure CLI
+
+`DefaultAzureCredential` and `AzureCLICredential` can authenticate as the user
+signed in to the [Azure CLI](https://learn.microsoft.com/cli/azure). To sign in to the Azure CLI, run `az login`. On a system with a default web browser, the Azure CLI will launch the browser to authenticate a user.
+
+When no default browser is available, `az login` will use the device code
+authentication flow. This can also be selected manually by running `az login --use-device-code`.
+
+#### Authenticate via the Azure Developer CLI
+
+Developers coding outside of an IDE can also use the [Azure Developer CLI](https://aka.ms/azure-dev) to authenticate. Applications using the `DefaultAzureCredential` or the `AzureDeveloperCLICredential` can use the account logged in to the Azure Developer CLI to authenticate calls in their application when running locally.
+
+To authenticate with the Azure Developer CLI, run `azd auth login`. On a system with a default web browser, `azd` will launch the browser to authenticate. On systems without a default web browser, run `azd auth login --use-device-code` to use the device code authentication flow.
+
+## Key concepts
+
+### Credentials
+
+A credential is a type which contains or can obtain the data needed for a
+service client to authenticate requests. Service clients across the Azure SDK
+accept a credential instance when they are constructed, and use that credential
+to authenticate requests.
+
+The `azidentity` module focuses on OAuth authentication with Microsoft Entra ID. It offers a variety of credential types capable of acquiring a Microsoft Entra access token. See [Credential Types](#credential-types "Credential Types") for a list of this module's credential types.
+
+### DefaultAzureCredential
+
+`DefaultAzureCredential` is appropriate for most apps that will be deployed to Azure. It combines common production credentials with development credentials. It attempts to authenticate via the following mechanisms in this order, stopping when one succeeds:
+
+![DefaultAzureCredential authentication flow](img/mermaidjs/DefaultAzureCredentialAuthFlow.svg)
+
+1. **Environment** - `DefaultAzureCredential` will read account information specified via [environment variables](#environment-variables) and use it to authenticate.
+1. **Workload Identity** - If the app is deployed on Kubernetes with environment variables set by the workload identity webhook, `DefaultAzureCredential` will authenticate the configured identity.
+1. **Managed Identity** - If the app is deployed to an Azure host with managed identity enabled, `DefaultAzureCredential` will authenticate with it.
+1. **Azure CLI** - If a user or service principal has authenticated via the Azure CLI `az login` command, `DefaultAzureCredential` will authenticate that identity.
+1. **Azure Developer CLI** - If the developer has authenticated via the Azure Developer CLI `azd auth login` command, `DefaultAzureCredential` will authenticate with that account.
+
+> Note: `DefaultAzureCredential` is intended to simplify getting started with the SDK by handling common scenarios with reasonable default behaviors. Developers who want more control or whose scenario isn't served by the default settings should use other credential types.
+
+## Managed Identity
+
+`DefaultAzureCredential` and `ManagedIdentityCredential` support
+[managed identity authentication](https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/overview)
+in any hosting environment which supports managed identities, such as (this list is not exhaustive):
+* [Azure App Service](https://learn.microsoft.com/azure/app-service/overview-managed-identity)
+* [Azure Arc](https://learn.microsoft.com/azure/azure-arc/servers/managed-identity-authentication)
+* [Azure Cloud Shell](https://learn.microsoft.com/azure/cloud-shell/msi-authorization)
+* [Azure Kubernetes Service](https://learn.microsoft.com/azure/aks/use-managed-identity)
+* [Azure Service Fabric](https://learn.microsoft.com/azure/service-fabric/concepts-managed-identity)
+* [Azure Virtual Machines](https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/how-to-use-vm-token)
+
+## Examples
+
+- [Authenticate with DefaultAzureCredential](#authenticate-with-defaultazurecredential "Authenticate with DefaultAzureCredential")
+- [Define a custom authentication flow with ChainedTokenCredential](#define-a-custom-authentication-flow-with-chainedtokencredential "Define a custom authentication flow with ChainedTokenCredential")
+- [Specify a user-assigned managed identity for DefaultAzureCredential](#specify-a-user-assigned-managed-identity-for-defaultazurecredential)
+
+### Authenticate with DefaultAzureCredential
+
+This example demonstrates authenticating a client from the `armresources` module with `DefaultAzureCredential`.
+
+```go
+cred, err := azidentity.NewDefaultAzureCredential(nil)
+if err != nil {
+  // handle error
+}
+
+client := armresources.NewResourceGroupsClient("subscription ID", cred, nil)
+```
+
+### Specify a user-assigned managed identity for DefaultAzureCredential
+
+To configure `DefaultAzureCredential` to authenticate a user-assigned managed identity, set the environment variable `AZURE_CLIENT_ID` to the identity's client ID.
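+
+Where setting environment variables isn't convenient, the identity can also be selected in code by constructing `ManagedIdentityCredential` directly. A minimal sketch, assuming a user-assigned identity (the "client ID" string is a placeholder, as in the other examples):
+
+```go
+// Authenticate a specific user-assigned managed identity by its client ID.
+mi, err := azidentity.NewManagedIdentityCredential(&azidentity.ManagedIdentityCredentialOptions{
+  ID: azidentity.ClientID("client ID"),
+})
+if err != nil {
+  // handle error
+}
+```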
+
+### Define a custom authentication flow with `ChainedTokenCredential`
+
+`DefaultAzureCredential` is generally the quickest way to get started developing apps for Azure. For more advanced scenarios, `ChainedTokenCredential` links multiple credential instances to be tried sequentially when authenticating. It will try each chained credential in turn until one provides a token or fails to authenticate due to an error.
+
+The following example demonstrates creating a credential that will attempt to authenticate using managed identity, falling back to the Azure CLI when a managed identity is unavailable.
+
+```go
+managed, err := azidentity.NewManagedIdentityCredential(nil)
+if err != nil {
+  // handle error
+}
+azCLI, err := azidentity.NewAzureCLICredential(nil)
+if err != nil {
+  // handle error
+}
+chain, err := azidentity.NewChainedTokenCredential([]azcore.TokenCredential{managed, azCLI}, nil)
+if err != nil {
+  // handle error
+}
+
+client := armresources.NewResourceGroupsClient("subscription ID", chain, nil)
+```
+
+## Credential Types
+
+### Authenticating Azure Hosted Applications
+
+|Credential|Usage
+|-|-
+|[DefaultAzureCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#DefaultAzureCredential)|Simplified authentication experience for getting started developing Azure apps
+|[ChainedTokenCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ChainedTokenCredential)|Define custom authentication flows, composing multiple credentials
+|[EnvironmentCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#EnvironmentCredential)|Authenticate a service principal or user configured by environment variables
+|[ManagedIdentityCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ManagedIdentityCredential)|Authenticate the managed identity of an Azure resource
+|[WorkloadIdentityCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#WorkloadIdentityCredential)|Authenticate a workload identity on Kubernetes
+
+### Authenticating Service Principals
+
+|Credential|Usage
+|-|-
+|[AzurePipelinesCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#AzurePipelinesCredential)|Authenticate an Azure Pipelines [service connection](https://learn.microsoft.com/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml)
+|[ClientAssertionCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ClientAssertionCredential)|Authenticate a service principal with a signed client assertion
+|[ClientCertificateCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ClientCertificateCredential)|Authenticate a service principal with a certificate
+|[ClientSecretCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ClientSecretCredential)|Authenticate a service principal with a secret
+
+### Authenticating Users
+
+|Credential|Usage
+|-|-
+|[InteractiveBrowserCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#InteractiveBrowserCredential)|Interactively authenticate a user with the default web browser
+|[DeviceCodeCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#DeviceCodeCredential)|Interactively authenticate a user on a device with limited UI
+|[UsernamePasswordCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#UsernamePasswordCredential)|Authenticate a user with a username and password
+
+### Authenticating via Development Tools
+
+|Credential|Usage
+|-|-
+|[AzureCLICredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#AzureCLICredential)|Authenticate as the user signed in to the Azure CLI
+|[AzureDeveloperCLICredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#AzureDeveloperCLICredential)|Authenticate as the user signed in to the Azure Developer CLI
+
+## Environment Variables
+
+`DefaultAzureCredential` and `EnvironmentCredential` can be configured with environment variables. Each type of authentication requires values for specific variables:
+
+#### Service principal with secret
+
+|variable name|value
+|-|-
+|`AZURE_CLIENT_ID`|ID of a Microsoft Entra application
+|`AZURE_TENANT_ID`|ID of the application's Microsoft Entra tenant
+|`AZURE_CLIENT_SECRET`|one of the application's client secrets
+
+#### Service principal with certificate
+
+|variable name|value
+|-|-
+|`AZURE_CLIENT_ID`|ID of a Microsoft Entra application
+|`AZURE_TENANT_ID`|ID of the application's Microsoft Entra tenant
+|`AZURE_CLIENT_CERTIFICATE_PATH`|path to a certificate file including private key
+|`AZURE_CLIENT_CERTIFICATE_PASSWORD`|password of the certificate file, if any
+
+#### Username and password
+
+|variable name|value
+|-|-
+|`AZURE_CLIENT_ID`|ID of a Microsoft Entra application
+|`AZURE_USERNAME`|a username (usually an email address)
+|`AZURE_PASSWORD`|that user's password
+
+Configuration is attempted in the above order. For example, if values for a
+client secret and certificate are both present, the client secret will be used.
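+
+Once a complete set of variables is in place, a sketch of picking them up with `EnvironmentCredential` (the `armresources` client line mirrors the earlier examples):
+
+```go
+// EnvironmentCredential reads the variables above at construction time and
+// returns an error when no valid combination is set.
+cred, err := azidentity.NewEnvironmentCredential(nil)
+if err != nil {
+  // handle error
+}
+
+client := armresources.NewResourceGroupsClient("subscription ID", cred, nil)
+```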
+
+## Token caching
+
+Token caching is an `azidentity` feature that allows apps to:
+
+* Cache tokens in memory (default) or on disk (opt-in).
+* Improve resilience and performance.
+* Reduce the number of requests made to Microsoft Entra ID to obtain access tokens.
+
+For more details, see the [token caching documentation](https://aka.ms/azsdk/go/identity/caching).
+
+## Troubleshooting
+
+### Error Handling
+
+Credentials return an `error` when they fail to authenticate or lack data they require to authenticate. For guidance on resolving errors from specific credential types, see the [troubleshooting guide](https://aka.ms/azsdk/go/identity/troubleshoot).
+
+For more details on handling specific Microsoft Entra errors, see the Microsoft Entra [error code documentation](https://learn.microsoft.com/entra/identity-platform/reference-error-codes).
+
+### Logging
+
+This module uses the classification-based logging implementation in `azcore`. To enable console logging for all SDK modules, set `AZURE_SDK_GO_LOGGING` to `all`. Use the `azcore/log` package to control log event output or to enable logs for `azidentity` only. For example:
+```go
+import azlog "github.com/Azure/azure-sdk-for-go/sdk/azcore/log"
+
+// print log output to stdout
+azlog.SetListener(func(event azlog.Event, s string) {
+    fmt.Println(s)
+})
+
+// include only azidentity credential logs
+azlog.SetEvents(azidentity.EventAuthentication)
+```
+
+Credentials log basic information only, such as `GetToken` success or failure and errors. These log entries don't contain authentication secrets but may contain sensitive information.
+
+## Next steps
+
+Client and management modules listed on the [Azure SDK releases page](https://azure.github.io/azure-sdk/releases/latest/go.html) support authenticating with `azidentity` credential types. You can learn more about using these libraries in their documentation, which is linked from the release page.
+
+## Provide Feedback
+
+If you encounter bugs or have suggestions, please
+[open an issue](https://github.com/Azure/azure-sdk-for-go/issues).
+
+## Contributing
+
+This project welcomes contributions and suggestions. Most contributions require
+you to agree to a Contributor License Agreement (CLA) declaring that you have
+the right to, and actually do, grant us the rights to use your contribution.
+For details, visit [https://cla.microsoft.com](https://cla.microsoft.com).
+
+When you submit a pull request, a CLA-bot will automatically determine whether
+you need to provide a CLA and decorate the PR appropriately (e.g., label,
+comment). Simply follow the instructions provided by the bot. You will only
+need to do this once across all repos using our CLA.
+
+This project has adopted the
+[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+For more information, see the
+[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any
+additional questions or comments.
+
+![Impressions](https://azure-sdk-impressions.azurewebsites.net/api/impressions/azure-sdk-for-go%2Fsdk%2Fazidentity%2FREADME.png)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/TOKEN_CACHING.MD 🔗

@@ -0,0 +1,71 @@
+## Token caching in the Azure Identity client module
+
+*Token caching* is a feature provided by the Azure Identity library that allows apps to:
+
+- Improve their resilience and performance.
+- Reduce the number of requests made to Microsoft Entra ID to obtain access tokens.
+- Reduce the number of times the user is prompted to authenticate.
+
+When an app needs to access a protected Azure resource, it typically needs to obtain an access token from Entra ID. Obtaining that token involves sending a request to Entra ID and may also involve prompting the user. Entra ID then validates the credentials provided in the request and issues an access token.
+
+Token caching, via the Azure Identity library, allows the app to store this access token [in memory](#in-memory-token-caching), where it's accessible to the current process, or [on disk](#persistent-token-caching) where it can be accessed across application or process invocations. The token can then be retrieved quickly and easily the next time the app needs to access the same resource. The app can avoid making another request to Entra ID, which reduces network traffic and improves resilience. Additionally, in scenarios where the app is authenticating users, token caching also avoids prompting the user each time new tokens are requested.
+
+### In-memory token caching
+
+*In-memory token caching* is the default option provided by the Azure Identity library. This caching approach allows apps to store access tokens in memory. With in-memory token caching, the library first determines if a valid access token for the requested resource is already stored in memory. If a valid token is found, it's returned to the app without the need to make another request to Entra ID. If a valid token isn't found, the library will automatically acquire a token by sending a request to Entra ID. The in-memory token cache provided by the Azure Identity library is thread-safe.
+
+**Note:** When Azure Identity library credentials are used with Azure service libraries (for example, Azure Blob Storage), the in-memory token caching is active in the `Pipeline` layer as well. All `TokenCredential` implementations are supported there, including custom implementations external to the Azure Identity library.
+
+#### Caching cannot be disabled
+
+As there are many levels of caching, it's not possible to disable in-memory caching. However, the in-memory cache may be cleared by creating a new credential instance.
+
+### Persistent token caching
+
+> Only azidentity v1.5.0-beta versions support persistent token caching
+
+*Persistent disk token caching* is an opt-in feature in the Azure Identity library. The feature allows apps to cache access tokens in an encrypted, persistent storage mechanism. As indicated in the following table, the storage mechanism differs across operating systems.
+
+| Operating system | Storage mechanism                     |
+|------------------|---------------------------------------|
+| Linux            | kernel key retention service (keyctl) |
+| macOS            | Keychain                              |
+| Windows          | DPAPI                                 |
+
+By default, the token cache protects any persisted data using the user data protection APIs available on the current platform.
+However, there are cases where no data protection is available, and applications may choose to allow storing the token cache in an unencrypted state by setting `TokenCachePersistenceOptions.AllowUnencryptedStorage` to `true`. This allows a credential to fall back to unencrypted storage if it can't encrypt the cache. However, we do not recommend using this storage method due to its significantly lower security measures. In addition, tokens are not encrypted solely to the current user, which could potentially allow unauthorized access to the cache by individuals with machine access.
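+
+As a sketch of the opt-in (field names follow the v1.5.0-beta API and may change between beta releases):
+
+```go
+// Opt in to persistent caching for an interactive credential. Leaving
+// AllowUnencryptedStorage false (the default) keeps the cache encrypted.
+cred, err := azidentity.NewInteractiveBrowserCredential(&azidentity.InteractiveBrowserCredentialOptions{
+  TokenCachePersistenceOptions: &azidentity.TokenCachePersistenceOptions{},
+})
+if err != nil {
+  // handle error
+}
+```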
+
+With persistent disk token caching enabled, the library first determines if a valid access token for the requested resource is already stored in the persistent cache. If a valid token is found, it's returned to the app without the need to make another request to Entra ID. Additionally, the tokens are preserved across app runs, which:
+
+- Makes the app more resilient to failures.
+- Ensures the app can continue to function during an Entra ID outage or disruption.
+- Avoids having to prompt users to authenticate each time the process is restarted.
+
+>IMPORTANT! The token cache contains sensitive data and **MUST** be protected to prevent compromising accounts. All application decisions regarding the persistence of the token cache must consider that a breach of its content will fully compromise all the accounts it contains.
+
+#### Example code
+
+See the [package documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity@v1.6.0-beta.2#pkg-overview) for example code demonstrating how to configure persistent caching and access cached data.
+
+### Credentials supporting token caching
+
+The following table indicates the state of in-memory and persistent caching in each credential type.
+
+**Note:** In-memory caching is activated by default. Persistent token caching needs to be enabled as shown in [this example](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity@v1.5.0-beta.1#example-package-PersistentCache).
+
+| Credential                     | In-memory token caching                                             | Persistent token caching |
+|--------------------------------|---------------------------------------------------------------------|--------------------------|
+| `AzureCLICredential`           | Not Supported                                                       | Not Supported            |
+| `AzureDeveloperCLICredential`  | Not Supported                                                       | Not Supported            |
+| `AzurePipelinesCredential`     | Supported                                                           | Supported                |
+| `ClientAssertionCredential`    | Supported                                                           | Supported                |
+| `ClientCertificateCredential`  | Supported                                                           | Supported                |
+| `ClientSecretCredential`       | Supported                                                           | Supported                |
+| `DefaultAzureCredential`       | Supported if the target credential in the default chain supports it | Not Supported            |
+| `DeviceCodeCredential`         | Supported                                                           | Supported                |
+| `EnvironmentCredential`        | Supported                                                           | Not Supported            |
+| `InteractiveBrowserCredential` | Supported                                                           | Supported                |
+| `ManagedIdentityCredential`    | Supported                                                           | Not Supported            |
+| `OnBehalfOfCredential`         | Supported                                                           | Supported                |
+| `UsernamePasswordCredential`   | Supported                                                           | Supported                |
+| `WorkloadIdentityCredential`   | Supported                                                           | Supported                |

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/TROUBLESHOOTING.md 🔗

@@ -0,0 +1,241 @@
+# Troubleshoot Azure Identity authentication issues
+
+This troubleshooting guide covers failure investigation techniques, common errors for the credential types in the `azidentity` module, and mitigation steps to resolve these errors.
+
+## Table of contents
+
+- [Handle azidentity errors](#handle-azidentity-errors)
+  - [Permission issues](#permission-issues)
+- [Find relevant information in errors](#find-relevant-information-in-errors)
+- [Enable and configure logging](#enable-and-configure-logging)
+- [Troubleshoot AzureCLICredential authentication issues](#troubleshoot-azureclicredential-authentication-issues)
+- [Troubleshoot AzureDeveloperCLICredential authentication issues](#troubleshoot-azuredeveloperclicredential-authentication-issues)
+- [Troubleshoot AzurePipelinesCredential authentication issues](#troubleshoot-azurepipelinescredential-authentication-issues)
+- [Troubleshoot ClientCertificateCredential authentication issues](#troubleshoot-clientcertificatecredential-authentication-issues)
+- [Troubleshoot ClientSecretCredential authentication issues](#troubleshoot-clientsecretcredential-authentication-issues)
+- [Troubleshoot DefaultAzureCredential authentication issues](#troubleshoot-defaultazurecredential-authentication-issues)
+- [Troubleshoot EnvironmentCredential authentication issues](#troubleshoot-environmentcredential-authentication-issues)
+- [Troubleshoot ManagedIdentityCredential authentication issues](#troubleshoot-managedidentitycredential-authentication-issues)
+  - [Azure App Service and Azure Functions managed identity](#azure-app-service-and-azure-functions-managed-identity)
+  - [Azure Kubernetes Service managed identity](#azure-kubernetes-service-managed-identity)
+  - [Azure Virtual Machine managed identity](#azure-virtual-machine-managed-identity)
+- [Troubleshoot UsernamePasswordCredential authentication issues](#troubleshoot-usernamepasswordcredential-authentication-issues)
+- [Troubleshoot WorkloadIdentityCredential authentication issues](#troubleshoot-workloadidentitycredential-authentication-issues)
+- [Get additional help](#get-additional-help)
+
+## Handle azidentity errors
+
+Any service client method that makes a request to the service may return an error due to authentication failure. This is because the credential authenticates on the first call to the service and on any subsequent call that needs to refresh an access token. Authentication errors include a description of the failure and possibly an error message from Microsoft Entra ID. Depending on the application, these errors may or may not be recoverable.
+
+### Permission issues
+
+Service client errors with a status code of 401 or 403 often indicate that authentication succeeded but the caller doesn't have permission to access the specified API. Check the service documentation to determine which RBAC roles are needed for the request, and ensure the authenticated user or service principal has the appropriate role assignments.
+
+## Find relevant information in errors
+
+Authentication errors can include responses from Microsoft Entra ID and often contain information helpful in diagnosis. Consider the following error message:
+
+```
+ClientSecretCredential authentication failed
+POST https://login.microsoftonline.com/3c631bb7-a9f7-4343-a5ba-a615913/oauth2/v2.0/token
+--------------------------------------------------------------------------------
+RESPONSE 401 Unauthorized
+--------------------------------------------------------------------------------
+{
+  "error": "invalid_client",
+  "error_description": "AADSTS7000215: Invalid client secret provided. Ensure the secret being sent in the request is the client secret value, not the client secret ID, for a secret added to app '86be4c01-505b-45e9-bfc0-9b825fd84'.\r\nTrace ID: 03da4b8e-5ffe-48ca-9754-aff4276f0100\r\nCorrelation ID: 7b12f9bb-2eef-42e3-ad75-eee69ec9088d\r\nTimestamp: 2022-03-02 18:25:26Z",
+  "error_codes": [
+    7000215
+  ],
+  "timestamp": "2022-03-02 18:25:26Z",
+  "trace_id": "03da4b8e-5ffe-48ca-9754-aff4276f0100",
+  "correlation_id": "7b12f9bb-2eef-42e3-ad75-eee69ec9088d",
+  "error_uri": "https://login.microsoftonline.com/error?code=7000215"
+}
+--------------------------------------------------------------------------------
+```
+
+This error contains several pieces of information:
+
+- __Failing Credential Type__: The type of credential that failed to authenticate. This can be helpful when diagnosing issues with chained credential types such as `DefaultAzureCredential` or `ChainedTokenCredential`.
+
+- __Microsoft Entra ID Error Code and Message__: The error code and message returned by Microsoft Entra ID. This can give insight into the specific reason the request failed. For instance, in this case authentication failed because the provided client secret is incorrect. [Microsoft Entra ID documentation](https://learn.microsoft.com/entra/identity-platform/reference-error-codes#aadsts-error-codes) has more information on AADSTS error codes.
+
+- __Correlation ID and Timestamp__: The correlation ID and timestamp identify the request in server-side logs. This information can be useful to support engineers diagnosing unexpected Microsoft Entra failures.
+
+## Enable and configure logging
+
+`azidentity` provides the same logging capabilities as the rest of the Azure SDK. The simplest way to see logs that help debug authentication issues is to print credential logs to the console.
+```go
+import azlog "github.com/Azure/azure-sdk-for-go/sdk/azcore/log"
+
+// print log output to stdout
+azlog.SetListener(func(event azlog.Event, s string) {
+    fmt.Println(s)
+})
+
+// include only azidentity credential logs
+azlog.SetEvents(azidentity.EventAuthentication)
+```
+
+<a id="dac"></a>
+## Troubleshoot DefaultAzureCredential authentication issues
+
+| Error |Description| Mitigation |
+|---|---|---|
+|"DefaultAzureCredential failed to acquire a token"|No credential in the `DefaultAzureCredential` chain provided a token|<ul><li>[Enable logging](#enable-and-configure-logging) to get further diagnostic information.</li><li>Consult the troubleshooting guide for underlying credential types for more information.</li><ul><li>[EnvironmentCredential](#troubleshoot-environmentcredential-authentication-issues)</li><li>[ManagedIdentityCredential](#troubleshoot-managedidentitycredential-authentication-issues)</li><li>[AzureCLICredential](#troubleshoot-azureclicredential-authentication-issues)</li></ul>|
+|Error from the client with a status code of 401 or 403|Authentication succeeded but the authorizing Azure service responded with a 401 (Unauthorized), or 403 (Forbidden) status code|<ul><li>[Enable logging](#enable-and-configure-logging) to determine which credential in the chain returned the authenticating token.</li><li>If an unexpected credential is returning a token, check application configuration such as environment variables.</li><li>Ensure the correct role is assigned to the authenticated identity. For example, a service specific role rather than the subscription Owner role.</li></ul>|
+|"managed identity timed out"|`DefaultAzureCredential` sets a short timeout on its first managed identity authentication attempt to prevent very long timeouts during local development when no managed identity is available. That timeout causes this error in production when an application requests a token before the hosting environment is ready to provide one.|Use [ManagedIdentityCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ManagedIdentityCredential) directly, at least in production. It doesn't set a timeout on its authentication attempts.|
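+
+Per the last mitigation above, a sketch of constructing `ManagedIdentityCredential` directly, which applies no first-attempt timeout:
+
+```go
+// In production, prefer ManagedIdentityCredential over DefaultAzureCredential
+// to avoid the short timeout on the first managed identity attempt.
+cred, err := azidentity.NewManagedIdentityCredential(nil)
+if err != nil {
+  // handle error
+}
+```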
+
+## Troubleshoot EnvironmentCredential authentication issues
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|Missing or incomplete environment variable configuration|A valid combination of environment variables wasn't set|Ensure the appropriate environment variables are set for the intended authentication method as described in the [module documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#EnvironmentCredential)|
+
+<a id="client-secret"></a>
+## Troubleshoot ClientSecretCredential authentication issues
+
+| Error Code | Issue | Mitigation |
+|---|---|---|
+|AADSTS7000215|An invalid client secret was provided.|Ensure the secret provided to the credential constructor is valid. If unsure, create a new client secret using the Azure portal. Details on creating a new client secret are in [Microsoft Entra ID documentation](https://learn.microsoft.com/entra/identity-platform/howto-create-service-principal-portal#option-2-create-a-new-application-secret).|
+|AADSTS7000222|An expired client secret was provided.|Create a new client secret using the Azure portal. Details on creating a new client secret are in [Microsoft Entra ID documentation](https://learn.microsoft.com/entra/identity-platform/howto-create-service-principal-portal#option-2-create-a-new-application-secret).|
+|AADSTS700016|The specified application wasn't found in the specified tenant.|Ensure the client and tenant IDs provided to the credential constructor are correct for your application registration. For multi-tenant apps, ensure the application has been added to the desired tenant by a tenant admin. To add a new application in the desired tenant, follow the [Microsoft Entra ID instructions](https://learn.microsoft.com/entra/identity-platform/howto-create-service-principal-portal).|
+
+<a id="client-cert"></a>
+## Troubleshoot ClientCertificateCredential authentication issues
+
+| Error Code | Description | Mitigation |
+|---|---|---|
+|AADSTS700027|Client assertion contains an invalid signature.|Ensure the specified certificate has been uploaded to the application registration as described in [Microsoft Entra ID documentation](https://learn.microsoft.com/entra/identity-platform/howto-create-service-principal-portal#option-1-upload-a-certificate).|
+|AADSTS700016|The specified application wasn't found in the specified tenant.|Ensure the client and tenant IDs provided to the credential constructor are correct for your application registration. For multi-tenant apps, ensure the application has been added to the desired tenant by a tenant admin. To add a new application in the desired tenant, follow the [Microsoft Entra ID instructions](https://learn.microsoft.com/entra/identity-platform/howto-create-service-principal-portal).|
+
+<a id="username-password"></a>
+## Troubleshoot UsernamePasswordCredential authentication issues
+
+| Error Code | Issue | Mitigation |
+|---|---|---|
+|AADSTS50126|The provided username or password is invalid.|Ensure the username and password provided to the credential constructor are valid.|
+
+<a id="managed-id"></a>
+## Troubleshoot ManagedIdentityCredential authentication issues
+
+`ManagedIdentityCredential` is designed to work on a variety of Azure hosts that support managed identity. Configuration and troubleshooting vary from host to host. The following table lists the Azure hosts that can be assigned a managed identity and are supported by `ManagedIdentityCredential`.
+
+|Host Environment| | |
+|---|---|---|
+|Azure Virtual Machines and Scale Sets|[Configuration](https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/qs-configure-portal-windows-vm)|[Troubleshooting](#azure-virtual-machine-managed-identity)|
+|Azure App Service and Azure Functions|[Configuration](https://learn.microsoft.com/azure/app-service/overview-managed-identity)|[Troubleshooting](#azure-app-service-and-azure-functions-managed-identity)|
+|Azure Kubernetes Service|[Configuration](https://azure.github.io/aad-pod-identity/docs/)|[Troubleshooting](#azure-kubernetes-service-managed-identity)|
+|Azure Arc|[Configuration](https://learn.microsoft.com/azure/azure-arc/servers/managed-identity-authentication)||
+|Azure Service Fabric|[Configuration](https://learn.microsoft.com/azure/service-fabric/concepts-managed-identity)||
+
+### Azure Virtual Machine managed identity
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|The requested identity hasn’t been assigned to this resource.|The IMDS endpoint responded with a status code of 400, indicating the requested identity isn’t assigned to the VM.|If using a user-assigned identity, ensure the specified ID is correct.<p/><p/>If using a system-assigned identity, make sure it has been enabled as described in [managed identity documentation](https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/qs-configure-portal-windows-vm#enable-system-assigned-managed-identity-on-an-existing-vm).|
+|The request failed due to a gateway error.|The request to the IMDS endpoint failed due to a gateway error (502 or 504 status code).|IMDS doesn't support requests via proxy or gateway. Disable proxies or gateways running on the VM for requests to the IMDS endpoint `http://169.254.169.254`.|
+|No response received from the managed identity endpoint.|No response was received for the request to IMDS or the request timed out.|<ul><li>Ensure the VM is configured for managed identity as described in [managed identity documentation](https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/qs-configure-portal-windows-vm).</li><li>Verify the IMDS endpoint is reachable on the VM. See [below](#verify-imds-is-available-on-the-vm) for instructions.</li></ul>|
+|Multiple attempts failed to obtain a token from the managed identity endpoint.|The credential has exhausted its retries for a token request.|<ul><li>Refer to the error message for more details on specific failures.</li><li>Ensure the VM is configured for managed identity as described in [managed identity documentation](https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/qs-configure-portal-windows-vm).</li><li>Verify the IMDS endpoint is reachable on the VM. See [below](#verify-imds-is-available-on-the-vm) for instructions.</li></ul>|
+
+#### Verify IMDS is available on the VM
+
+If you have access to the VM, you can use `curl` to verify the managed identity endpoint is available.
+
+```sh
+curl 'http://169.254.169.254/metadata/identity/oauth2/token?resource=https://management.core.windows.net&api-version=2018-02-01' -H "Metadata: true"
+```
+
+> This command's output will contain an access token and SHOULD NOT BE SHARED, to avoid compromising account security.
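The same probe can be built in Go. The sketch below only constructs the request the curl command above sends (the IMDS endpoint, API version, and `Metadata` header are fixed by IMDS; the helper name `imdsProbeRequest` is ours, not part of the SDK). Executing the request succeeds only on an Azure VM.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// imdsProbeRequest builds the same token request the curl command sends to IMDS.
// It only constructs the request; sending it succeeds only on an Azure VM.
func imdsProbeRequest(resource string) (*http.Request, error) {
	q := url.Values{}
	q.Set("resource", resource)
	q.Set("api-version", "2018-02-01")
	req, err := http.NewRequest(http.MethodGet,
		"http://169.254.169.254/metadata/identity/oauth2/token?"+q.Encode(), nil)
	if err != nil {
		return nil, err
	}
	// IMDS rejects any request that lacks this header.
	req.Header.Set("Metadata", "true")
	return req, nil
}

func main() {
	req, err := imdsProbeRequest("https://management.core.windows.net")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```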
+
+### Azure App Service and Azure Functions managed identity
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|Get "`http://169.254.169.254/...`" i/o timeout|The App Service host hasn't set environment variables for managed identity configuration.|<ul><li>Ensure the App Service is configured for managed identity as described in [App Service documentation](https://learn.microsoft.com/azure/app-service/overview-managed-identity).</li><li>Verify the App Service environment is properly configured and the managed identity endpoint is available. See [below](#verify-the-app-service-managed-identity-endpoint-is-available) for instructions.</li></ul>|
+
+#### Verify the App Service managed identity endpoint is available
+
+If you can SSH into the App Service, you can verify managed identity is available in the environment. First, ensure the environment variables `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` are set. Then you can verify the managed identity endpoint is available using `curl`.
+
+```sh
+curl "$IDENTITY_ENDPOINT?resource=https://management.core.windows.net&api-version=2019-08-01" -H "X-IDENTITY-HEADER: $IDENTITY_HEADER"
+```
+
+> This command's output will contain an access token and SHOULD NOT BE SHARED, to avoid compromising account security.
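The equivalent request can be sketched in Go, reading the same environment variables App Service sets. The helper name `appServiceTokenRequest` is ours; only building the request is shown, since sending it requires the App Service environment.

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

// appServiceTokenRequest builds the managed identity token request the curl
// command above sends. endpoint and header come from the IDENTITY_ENDPOINT
// and IDENTITY_HEADER environment variables App Service sets.
func appServiceTokenRequest(endpoint, header, resource string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, endpoint, nil)
	if err != nil {
		return nil, err
	}
	q := req.URL.Query()
	q.Set("resource", resource)
	q.Set("api-version", "2019-08-01")
	req.URL.RawQuery = q.Encode()
	req.Header.Set("X-IDENTITY-HEADER", header)
	return req, nil
}

func main() {
	endpoint := os.Getenv("IDENTITY_ENDPOINT")
	header := os.Getenv("IDENTITY_HEADER")
	if endpoint == "" || header == "" {
		fmt.Println("managed identity environment variables are not set")
		return
	}
	req, err := appServiceTokenRequest(endpoint, header, "https://management.core.windows.net")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.URL.String())
}
```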
+
+### Azure Kubernetes Service managed identity
+
+#### Pod Identity
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|"no azure identity found for request clientID"|The application attempted to authenticate before an identity was assigned to its pod|Verify the pod is labeled correctly. This also occurs when a correctly labeled pod authenticates before the identity is ready. To prevent initialization races, configure NMI to set the Retry-After header in its responses as described in [Pod Identity documentation](https://azure.github.io/aad-pod-identity/docs/configure/feature_flags/#set-retry-after-header-in-nmi-response).
+
+<a id="azure-cli"></a>
+## Troubleshoot AzureCLICredential authentication issues
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|Azure CLI not found on path|The Azure CLI isn’t installed or isn't on the application's path.|<ul><li>Ensure the Azure CLI is installed as described in [Azure CLI documentation](https://learn.microsoft.com/cli/azure/install-azure-cli).</li><li>Validate the installation location is in the application's `PATH` environment variable.</li></ul>|
+|Please run 'az login' to set up account|No account is currently logged into the Azure CLI, or the login has expired.|<ul><li>Run `az login` to log into the Azure CLI. More information about Azure CLI authentication is available in the [Azure CLI documentation](https://learn.microsoft.com/cli/azure/authenticate-azure-cli).</li><li>Verify that the Azure CLI can obtain tokens. See [below](#verify-the-azure-cli-can-obtain-tokens) for instructions.</li></ul>|
+
+### Verify the Azure CLI can obtain tokens
+
+You can manually verify that the Azure CLI can authenticate and obtain tokens. First, use the `account` command to verify the logged in account.
+
+```azurecli
+az account show
+```
+
+Once you've verified the Azure CLI is using the correct account, you can validate that it's able to obtain tokens for that account.
+
+```azurecli
+az account get-access-token --output json --resource https://management.core.windows.net
+```
+
+> This command's output will contain an access token and SHOULD NOT BE SHARED, to avoid compromising account security.
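`AzureCLICredential` parses the JSON this command prints, reading the `accessToken` and `expiresOn` fields (`expiresOn` is local time without a zone). A minimal sketch of that parsing follows; the helper name `parseCLIToken` and the sample values are ours, not part of the SDK.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// cliToken mirrors the fields AzureCLICredential reads from az's JSON output.
type cliToken struct {
	AccessToken string `json:"accessToken"`
	ExpiresOn   string `json:"expiresOn"`
}

// parseCLIToken decodes az output. expiresOn has no time zone, so it's
// interpreted in the machine's local time, as the credential does.
func parseCLIToken(b []byte) (cliToken, time.Time, error) {
	var t cliToken
	if err := json.Unmarshal(b, &t); err != nil {
		return t, time.Time{}, err
	}
	exp, err := time.ParseInLocation("2006-01-02 15:04:05.999999", t.ExpiresOn, time.Local)
	return t, exp, err
}

func main() {
	// Placeholder output; a real token value is never shown here.
	sample := []byte(`{"accessToken":"<redacted>","expiresOn":"2030-01-02 15:04:05.000000"}`)
	_, exp, err := parseCLIToken(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(exp.Year()) // prints 2030
}
```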
+
+<a id="azd"></a>
+## Troubleshoot AzureDeveloperCLICredential authentication issues
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|Azure Developer CLI not found on path|The Azure Developer CLI isn't installed or couldn't be found.|<ul><li>Ensure the Azure Developer CLI is properly installed. See the installation instructions at [Install or update the Azure Developer CLI](https://learn.microsoft.com/azure/developer/azure-developer-cli/install-azd).</li><li>Validate the installation location has been added to the `PATH` environment variable.</li></ul>|
+|Please run "azd auth login"|No account is logged into the Azure Developer CLI, or the login has expired.|<ul><li>Log in to the Azure Developer CLI using the `azd login` command.</li><li>Validate that the Azure Developer CLI can obtain tokens. For instructions, see [Verify the Azure Developer CLI can obtain tokens](#verify-the-azure-developer-cli-can-obtain-tokens).</li></ul>|
+
+### Verify the Azure Developer CLI can obtain tokens
+
+You can manually verify that the Azure Developer CLI is properly authenticated and can obtain tokens. First, use the `config` command to verify the account that is currently logged in to the Azure Developer CLI.
+
+```sh
+azd config list
+```
+
+Once you've verified the Azure Developer CLI is using the correct account, you can validate that it's able to obtain tokens for this account.
+
+```sh
+azd auth token --output json --scope https://management.core.windows.net/.default
+```
+
+> This command's output will contain an access token and SHOULD NOT BE SHARED, to avoid compromising account security.
+
+<a id="workload"></a>
+## Troubleshoot WorkloadIdentityCredential authentication issues
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|no client ID/tenant ID/token file specified|Incomplete configuration|In most cases these values are provided via environment variables set by Azure Workload Identity.<ul><li>If your application runs on Azure Kubernetes Service (AKS) or a cluster that has deployed the Azure Workload Identity admission webhook, check pod labels and service account configuration. See the [AKS documentation](https://learn.microsoft.com/azure/aks/workload-identity-deploy-cluster#disable-workload-identity) and [Azure Workload Identity troubleshooting guide](https://azure.github.io/azure-workload-identity/docs/troubleshooting.html) for more details.</li><li>If your application isn't running on AKS or your cluster hasn't deployed the Workload Identity admission webhook, set these values in `WorkloadIdentityCredentialOptions`.</li></ul>|
+
+<a id="apc"></a>
+## Troubleshoot AzurePipelinesCredential authentication issues
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+| AADSTS900023: Specified tenant identifier 'some tenant ID' is neither a valid DNS name, nor a valid external domain.|The `tenantID` argument to `NewAzurePipelinesCredential` is incorrect| Verify the tenant ID. It must identify the tenant of the user-assigned managed identity or service principal configured for the service connection.|
+| No service connection found with identifier |The `serviceConnectionID` argument to `NewAzurePipelinesCredential` is incorrect| Verify the service connection ID. This parameter refers to the `resourceId` of the Azure Service Connection. It can also be found in the query string of the service connection's configuration in Azure DevOps. [Azure Pipelines documentation](https://learn.microsoft.com/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml) has more information about service connections.|
+|302 (Found) response from OIDC endpoint|The `systemAccessToken` argument to `NewAzurePipelinesCredential` is incorrect|Check pipeline configuration. This value comes from the predefined variable `System.AccessToken` [as described in Azure Pipelines documentation](https://learn.microsoft.com/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml#systemaccesstoken).|
+
+## Get additional help
+
+Additional information on ways to reach out for support can be found in [SUPPORT.md](https://github.com/Azure/azure-sdk-for-go/blob/main/SUPPORT.md).

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/authentication_record.go 🔗

@@ -0,0 +1,95 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/url"
+	"strings"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/public"
+)
+
+var supportedAuthRecordVersions = []string{"1.0"}
+
+// authenticationRecord is non-secret account information about an authenticated user that user credentials such as
+// [DeviceCodeCredential] and [InteractiveBrowserCredential] can use to access previously cached authentication
+// data. Call these credentials' Authenticate method to get an authenticationRecord for a user.
+type authenticationRecord struct {
+	// Authority is the URL of the authority that issued the token.
+	Authority string `json:"authority"`
+
+	// ClientID is the ID of the application that authenticated the user.
+	ClientID string `json:"clientId"`
+
+	// HomeAccountID uniquely identifies the account.
+	HomeAccountID string `json:"homeAccountId"`
+
+	// TenantID identifies the tenant in which the user authenticated.
+	TenantID string `json:"tenantId"`
+
+	// Username is the user's preferred username.
+	Username string `json:"username"`
+
+	// Version of the AuthenticationRecord.
+	Version string `json:"version"`
+}
+
+// UnmarshalJSON implements json.Unmarshaler for authenticationRecord
+func (a *authenticationRecord) UnmarshalJSON(b []byte) error {
+	// Default unmarshaling is fine but we want to return an error if the record's version isn't supported i.e., we
+	// want to inspect the unmarshalled values before deciding whether to return an error. Unmarshaling a formally
+	// different type enables this by assigning all the fields without recursing into this method.
+	type r authenticationRecord
+	err := json.Unmarshal(b, (*r)(a))
+	if err != nil {
+		return err
+	}
+	if a.Version == "" {
+		return errors.New("AuthenticationRecord must have a version")
+	}
+	for _, v := range supportedAuthRecordVersions {
+		if a.Version == v {
+			return nil
+		}
+	}
+	return fmt.Errorf("unsupported AuthenticationRecord version %q. This module supports %v", a.Version, supportedAuthRecordVersions)
+}
+
+// account returns the AuthenticationRecord as an MSAL Account. The account is zero-valued when the AuthenticationRecord is zero-valued.
+func (a *authenticationRecord) account() public.Account {
+	return public.Account{
+		Environment:       a.Authority,
+		HomeAccountID:     a.HomeAccountID,
+		PreferredUsername: a.Username,
+	}
+}
+
+func newAuthenticationRecord(ar public.AuthResult) (authenticationRecord, error) {
+	u, err := url.Parse(ar.IDToken.Issuer)
+	if err != nil {
+		return authenticationRecord{}, fmt.Errorf("Authenticate expected a URL issuer but got %q", ar.IDToken.Issuer)
+	}
+	tenant := ar.IDToken.TenantID
+	if tenant == "" {
+		tenant = strings.Trim(u.Path, "/")
+	}
+	username := ar.IDToken.PreferredUsername
+	if username == "" {
+		username = ar.IDToken.UPN
+	}
+	return authenticationRecord{
+		Authority:     fmt.Sprintf("%s://%s", u.Scheme, u.Host),
+		ClientID:      ar.IDToken.Audience,
+		HomeAccountID: ar.Account.HomeAccountID,
+		TenantID:      tenant,
+		Username:      username,
+		Version:       "1.0",
+	}, nil
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azidentity.go 🔗

@@ -0,0 +1,190 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"bytes"
+	"context"
+	"errors"
+	"fmt"
+	"io"
+	"net/http"
+	"net/url"
+	"os"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
+	"github.com/Azure/azure-sdk-for-go/sdk/azidentity/internal"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/public"
+)
+
+const (
+	azureAdditionallyAllowedTenants = "AZURE_ADDITIONALLY_ALLOWED_TENANTS"
+	azureAuthorityHost              = "AZURE_AUTHORITY_HOST"
+	azureClientCertificatePassword  = "AZURE_CLIENT_CERTIFICATE_PASSWORD"
+	azureClientCertificatePath      = "AZURE_CLIENT_CERTIFICATE_PATH"
+	azureClientID                   = "AZURE_CLIENT_ID"
+	azureClientSecret               = "AZURE_CLIENT_SECRET"
+	azureFederatedTokenFile         = "AZURE_FEDERATED_TOKEN_FILE"
+	azurePassword                   = "AZURE_PASSWORD"
+	azureRegionalAuthorityName      = "AZURE_REGIONAL_AUTHORITY_NAME"
+	azureTenantID                   = "AZURE_TENANT_ID"
+	azureUsername                   = "AZURE_USERNAME"
+
+	organizationsTenantID   = "organizations"
+	developerSignOnClientID = "04b07795-8ddb-461a-bbee-02f9e1bf7b46"
+	defaultSuffix           = "/.default"
+
+	traceNamespace      = "Microsoft.Entra"
+	traceOpGetToken     = "GetToken"
+	traceOpAuthenticate = "Authenticate"
+)
+
+var (
+	// capability CP1 indicates the client application is capable of handling CAE claims challenges
+	cp1                = []string{"CP1"}
+	errInvalidTenantID = errors.New("invalid tenantID. You can locate your tenantID by following the instructions listed here: https://learn.microsoft.com/partner-center/find-ids-and-domain-names")
+)
+
+// tokenCachePersistenceOptions contains options for persistent token caching
+type tokenCachePersistenceOptions = internal.TokenCachePersistenceOptions
+
+// setAuthorityHost initializes the authority host for credentials. Precedence is:
+//  1. cloud.Configuration.ActiveDirectoryAuthorityHost value set by user
+//  2. value of AZURE_AUTHORITY_HOST
+//  3. default: Azure Public Cloud
+func setAuthorityHost(cc cloud.Configuration) (string, error) {
+	host := cc.ActiveDirectoryAuthorityHost
+	if host == "" {
+		if len(cc.Services) > 0 {
+			return "", errors.New("missing ActiveDirectoryAuthorityHost for specified cloud")
+		}
+		host = cloud.AzurePublic.ActiveDirectoryAuthorityHost
+		if envAuthorityHost := os.Getenv(azureAuthorityHost); envAuthorityHost != "" {
+			host = envAuthorityHost
+		}
+	}
+	u, err := url.Parse(host)
+	if err != nil {
+		return "", err
+	}
+	if u.Scheme != "https" {
+		return "", errors.New("cannot use an authority host without https")
+	}
+	return host, nil
+}
+
+// resolveAdditionalTenants returns a copy of tenants, simplified when tenants contains a wildcard
+func resolveAdditionalTenants(tenants []string) []string {
+	if len(tenants) == 0 {
+		return nil
+	}
+	for _, t := range tenants {
+		// a wildcard makes all other values redundant
+		if t == "*" {
+			return []string{"*"}
+		}
+	}
+	cp := make([]string, len(tenants))
+	copy(cp, tenants)
+	return cp
+}
+
+// resolveTenant returns the correct tenant for a token request
+func resolveTenant(defaultTenant, specified, credName string, additionalTenants []string) (string, error) {
+	if specified == "" || specified == defaultTenant {
+		return defaultTenant, nil
+	}
+	if defaultTenant == "adfs" {
+		return "", errors.New("ADFS doesn't support tenants")
+	}
+	if !validTenantID(specified) {
+		return "", errInvalidTenantID
+	}
+	for _, t := range additionalTenants {
+		if t == "*" || t == specified {
+			return specified, nil
+		}
+	}
+	return "", fmt.Errorf(`%s isn't configured to acquire tokens for tenant %q. To enable acquiring tokens for this tenant add it to the AdditionallyAllowedTenants on the credential options, or add "*" to allow acquiring tokens for any tenant`, credName, specified)
+}
+
+func alphanumeric(r rune) bool {
+	return ('0' <= r && r <= '9') || ('a' <= r && r <= 'z') || ('A' <= r && r <= 'Z')
+}
+
+func validTenantID(tenantID string) bool {
+	if len(tenantID) < 1 {
+		return false
+	}
+	for _, r := range tenantID {
+		if !(alphanumeric(r) || r == '.' || r == '-') {
+			return false
+		}
+	}
+	return true
+}
+
+func doForClient(client *azcore.Client, r *http.Request) (*http.Response, error) {
+	req, err := runtime.NewRequest(r.Context(), r.Method, r.URL.String())
+	if err != nil {
+		return nil, err
+	}
+	if r.Body != nil && r.Body != http.NoBody {
+		// create a rewindable body from the existing body as required
+		var body io.ReadSeekCloser
+		if rsc, ok := r.Body.(io.ReadSeekCloser); ok {
+			body = rsc
+		} else {
+			b, err := io.ReadAll(r.Body)
+			if err != nil {
+				return nil, err
+			}
+			body = streaming.NopCloser(bytes.NewReader(b))
+		}
+		err = req.SetBody(body, r.Header.Get("Content-Type"))
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	// copy headers to the new request, ignoring any for which the new request has a value
+	h := req.Raw().Header
+	for key, vals := range r.Header {
+		if _, has := h[key]; !has {
+			for _, val := range vals {
+				h.Add(key, val)
+			}
+		}
+	}
+
+	resp, err := client.Pipeline().Do(req)
+	if err != nil {
+		return nil, err
+	}
+	return resp, err
+}
+
+// enables fakes for test scenarios
+type msalConfidentialClient interface {
+	AcquireTokenSilent(ctx context.Context, scopes []string, options ...confidential.AcquireSilentOption) (confidential.AuthResult, error)
+	AcquireTokenByAuthCode(ctx context.Context, code string, redirectURI string, scopes []string, options ...confidential.AcquireByAuthCodeOption) (confidential.AuthResult, error)
+	AcquireTokenByCredential(ctx context.Context, scopes []string, options ...confidential.AcquireByCredentialOption) (confidential.AuthResult, error)
+	AcquireTokenOnBehalfOf(ctx context.Context, userAssertion string, scopes []string, options ...confidential.AcquireOnBehalfOfOption) (confidential.AuthResult, error)
+}
+
+// enables fakes for test scenarios
+type msalPublicClient interface {
+	AcquireTokenSilent(ctx context.Context, scopes []string, options ...public.AcquireSilentOption) (public.AuthResult, error)
+	AcquireTokenByUsernamePassword(ctx context.Context, scopes []string, username string, password string, options ...public.AcquireByUsernamePasswordOption) (public.AuthResult, error)
+	AcquireTokenByDeviceCode(ctx context.Context, scopes []string, options ...public.AcquireByDeviceCodeOption) (public.DeviceCode, error)
+	AcquireTokenByAuthCode(ctx context.Context, code string, redirectURI string, scopes []string, options ...public.AcquireByAuthCodeOption) (public.AuthResult, error)
+	AcquireTokenInteractive(ctx context.Context, scopes []string, options ...public.AcquireInteractiveOption) (public.AuthResult, error)
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azure_cli_credential.go 🔗

@@ -0,0 +1,190 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"os"
+	"os/exec"
+	"runtime"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+)
+
+const credNameAzureCLI = "AzureCLICredential"
+
+type azTokenProvider func(ctx context.Context, scopes []string, tenant, subscription string) ([]byte, error)
+
+// AzureCLICredentialOptions contains optional parameters for AzureCLICredential.
+type AzureCLICredentialOptions struct {
+	// AdditionallyAllowedTenants specifies tenants for which the credential may acquire tokens, in addition
+	// to TenantID. Add the wildcard value "*" to allow the credential to acquire tokens for any tenant the
+	// logged in account can access.
+	AdditionallyAllowedTenants []string
+
+	// Subscription is the name or ID of a subscription. Set this to acquire tokens for an account other
+	// than the Azure CLI's current account.
+	Subscription string
+
+	// TenantID identifies the tenant the credential should authenticate in.
+	// Defaults to the CLI's default tenant, which is typically the home tenant of the logged in user.
+	TenantID string
+
+	// inDefaultChain is true when the credential is part of DefaultAzureCredential
+	inDefaultChain bool
+	// tokenProvider is used by tests to fake invoking az
+	tokenProvider azTokenProvider
+}
+
+// init returns an instance of AzureCLICredentialOptions initialized with default values.
+func (o *AzureCLICredentialOptions) init() {
+	if o.tokenProvider == nil {
+		o.tokenProvider = defaultAzTokenProvider
+	}
+}
+
+// AzureCLICredential authenticates as the identity logged in to the Azure CLI.
+type AzureCLICredential struct {
+	mu   *sync.Mutex
+	opts AzureCLICredentialOptions
+}
+
+// NewAzureCLICredential constructs an AzureCLICredential. Pass nil to accept default options.
+func NewAzureCLICredential(options *AzureCLICredentialOptions) (*AzureCLICredential, error) {
+	cp := AzureCLICredentialOptions{}
+	if options != nil {
+		cp = *options
+	}
+	for _, r := range cp.Subscription {
+		if !(alphanumeric(r) || r == '-' || r == '_' || r == ' ' || r == '.') {
+			return nil, fmt.Errorf("%s: invalid Subscription %q", credNameAzureCLI, cp.Subscription)
+		}
+	}
+	if cp.TenantID != "" && !validTenantID(cp.TenantID) {
+		return nil, errInvalidTenantID
+	}
+	cp.init()
+	cp.AdditionallyAllowedTenants = resolveAdditionalTenants(cp.AdditionallyAllowedTenants)
+	return &AzureCLICredential{mu: &sync.Mutex{}, opts: cp}, nil
+}
+
+// GetToken requests a token from the Azure CLI. This credential doesn't cache tokens, so every call invokes the CLI.
+// This method is called automatically by Azure SDK clients.
+func (c *AzureCLICredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	at := azcore.AccessToken{}
+	if len(opts.Scopes) != 1 {
+		return at, errors.New(credNameAzureCLI + ": GetToken() requires exactly one scope")
+	}
+	if !validScope(opts.Scopes[0]) {
+		return at, fmt.Errorf("%s.GetToken(): invalid scope %q", credNameAzureCLI, opts.Scopes[0])
+	}
+	tenant, err := resolveTenant(c.opts.TenantID, opts.TenantID, credNameAzureCLI, c.opts.AdditionallyAllowedTenants)
+	if err != nil {
+		return at, err
+	}
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	b, err := c.opts.tokenProvider(ctx, opts.Scopes, tenant, c.opts.Subscription)
+	if err == nil {
+		at, err = c.createAccessToken(b)
+	}
+	if err != nil {
+		err = unavailableIfInChain(err, c.opts.inDefaultChain)
+		return at, err
+	}
+	msg := fmt.Sprintf("%s.GetToken() acquired a token for scope %q", credNameAzureCLI, strings.Join(opts.Scopes, ", "))
+	log.Write(EventAuthentication, msg)
+	return at, nil
+}
+
+// defaultAzTokenProvider invokes the Azure CLI to acquire a token. It assumes
+// callers have verified that all string arguments are safe to pass to the CLI.
+var defaultAzTokenProvider azTokenProvider = func(ctx context.Context, scopes []string, tenantID, subscription string) ([]byte, error) {
+	// pass the CLI a Microsoft Entra ID v1 resource because we don't know which CLI version is installed and older ones don't support v2 scopes
+	resource := strings.TrimSuffix(scopes[0], defaultSuffix)
+	// set a default timeout for this authentication iff the application hasn't done so already
+	var cancel context.CancelFunc
+	if _, hasDeadline := ctx.Deadline(); !hasDeadline {
+		ctx, cancel = context.WithTimeout(ctx, cliTimeout)
+		defer cancel()
+	}
+	commandLine := "az account get-access-token -o json --resource " + resource
+	if tenantID != "" {
+		commandLine += " --tenant " + tenantID
+	}
+	if subscription != "" {
+		// subscription needs quotes because it may contain spaces
+		commandLine += ` --subscription "` + subscription + `"`
+	}
+	var cliCmd *exec.Cmd
+	if runtime.GOOS == "windows" {
+		dir := os.Getenv("SYSTEMROOT")
+		if dir == "" {
+			return nil, newCredentialUnavailableError(credNameAzureCLI, "environment variable 'SYSTEMROOT' has no value")
+		}
+		cliCmd = exec.CommandContext(ctx, "cmd.exe", "/c", commandLine)
+		cliCmd.Dir = dir
+	} else {
+		cliCmd = exec.CommandContext(ctx, "/bin/sh", "-c", commandLine)
+		cliCmd.Dir = "/bin"
+	}
+	cliCmd.Env = os.Environ()
+	var stderr bytes.Buffer
+	cliCmd.Stderr = &stderr
+
+	output, err := cliCmd.Output()
+	if err != nil {
+		msg := stderr.String()
+		var exErr *exec.ExitError
+		if errors.As(err, &exErr) && exErr.ExitCode() == 127 || strings.HasPrefix(msg, "'az' is not recognized") {
+			msg = "Azure CLI not found on path"
+		}
+		if msg == "" {
+			msg = err.Error()
+		}
+		return nil, newCredentialUnavailableError(credNameAzureCLI, msg)
+	}
+
+	return output, nil
+}
+
+func (c *AzureCLICredential) createAccessToken(tk []byte) (azcore.AccessToken, error) {
+	t := struct {
+		AccessToken string `json:"accessToken"`
+		Expires_On  int64  `json:"expires_on"`
+		ExpiresOn   string `json:"expiresOn"`
+	}{}
+	err := json.Unmarshal(tk, &t)
+	if err != nil {
+		return azcore.AccessToken{}, err
+	}
+
+	exp := time.Unix(t.Expires_On, 0)
+	if t.Expires_On == 0 {
+		exp, err = time.ParseInLocation("2006-01-02 15:04:05.999999", t.ExpiresOn, time.Local)
+		if err != nil {
+			return azcore.AccessToken{}, fmt.Errorf("%s: error parsing token expiration time %q: %v", credNameAzureCLI, t.ExpiresOn, err)
+		}
+	}
+
+	converted := azcore.AccessToken{
+		Token:     t.AccessToken,
+		ExpiresOn: exp.UTC(),
+	}
+	return converted, nil
+}
+
+var _ azcore.TokenCredential = (*AzureCLICredential)(nil)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azure_developer_cli_credential.go 🔗

@@ -0,0 +1,169 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"os"
+	"os/exec"
+	"runtime"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+)
+
+const credNameAzureDeveloperCLI = "AzureDeveloperCLICredential"
+
+type azdTokenProvider func(ctx context.Context, scopes []string, tenant string) ([]byte, error)
+
+// AzureDeveloperCLICredentialOptions contains optional parameters for AzureDeveloperCLICredential.
+type AzureDeveloperCLICredentialOptions struct {
+	// AdditionallyAllowedTenants specifies tenants for which the credential may acquire tokens, in addition
+	// to TenantID. Add the wildcard value "*" to allow the credential to acquire tokens for any tenant the
+	// logged in account can access.
+	AdditionallyAllowedTenants []string
+
+	// TenantID identifies the tenant the credential should authenticate in. Defaults to the azd environment,
+	// which is the tenant of the selected Azure subscription.
+	TenantID string
+
+	// inDefaultChain is true when the credential is part of DefaultAzureCredential
+	inDefaultChain bool
+	// tokenProvider is used by tests to fake invoking azd
+	tokenProvider azdTokenProvider
+}
+
+// AzureDeveloperCLICredential authenticates as the identity logged in to the [Azure Developer CLI].
+//
+// [Azure Developer CLI]: https://learn.microsoft.com/azure/developer/azure-developer-cli/overview
+type AzureDeveloperCLICredential struct {
+	mu   *sync.Mutex
+	opts AzureDeveloperCLICredentialOptions
+}
+
+// NewAzureDeveloperCLICredential constructs an AzureDeveloperCLICredential. Pass nil to accept default options.
+func NewAzureDeveloperCLICredential(options *AzureDeveloperCLICredentialOptions) (*AzureDeveloperCLICredential, error) {
+	cp := AzureDeveloperCLICredentialOptions{}
+	if options != nil {
+		cp = *options
+	}
+	if cp.TenantID != "" && !validTenantID(cp.TenantID) {
+		return nil, errInvalidTenantID
+	}
+	if cp.tokenProvider == nil {
+		cp.tokenProvider = defaultAzdTokenProvider
+	}
+	return &AzureDeveloperCLICredential{mu: &sync.Mutex{}, opts: cp}, nil
+}
+
+// GetToken requests a token from the Azure Developer CLI. This credential doesn't cache tokens, so every call invokes azd.
+// This method is called automatically by Azure SDK clients.
+func (c *AzureDeveloperCLICredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	at := azcore.AccessToken{}
+	if len(opts.Scopes) == 0 {
+		return at, errors.New(credNameAzureDeveloperCLI + ": GetToken() requires at least one scope")
+	}
+	for _, scope := range opts.Scopes {
+		if !validScope(scope) {
+			return at, fmt.Errorf("%s.GetToken(): invalid scope %q", credNameAzureDeveloperCLI, scope)
+		}
+	}
+	tenant, err := resolveTenant(c.opts.TenantID, opts.TenantID, credNameAzureDeveloperCLI, c.opts.AdditionallyAllowedTenants)
+	if err != nil {
+		return at, err
+	}
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	b, err := c.opts.tokenProvider(ctx, opts.Scopes, tenant)
+	if err == nil {
+		at, err = c.createAccessToken(b)
+	}
+	if err != nil {
+		err = unavailableIfInChain(err, c.opts.inDefaultChain)
+		return at, err
+	}
+	msg := fmt.Sprintf("%s.GetToken() acquired a token for scope %q", credNameAzureDeveloperCLI, strings.Join(opts.Scopes, ", "))
+	log.Write(EventAuthentication, msg)
+	return at, nil
+}
+
+// defaultAzdTokenProvider invokes the Azure Developer CLI to acquire a token. It assumes
+// callers have verified that all string arguments are safe to pass to the CLI.
+var defaultAzdTokenProvider azdTokenProvider = func(ctx context.Context, scopes []string, tenant string) ([]byte, error) {
+	// set a default timeout for this authentication iff the application hasn't done so already
+	var cancel context.CancelFunc
+	if _, hasDeadline := ctx.Deadline(); !hasDeadline {
+		ctx, cancel = context.WithTimeout(ctx, cliTimeout)
+		defer cancel()
+	}
+	commandLine := "azd auth token -o json"
+	if tenant != "" {
+		commandLine += " --tenant-id " + tenant
+	}
+	for _, scope := range scopes {
+		commandLine += " --scope " + scope
+	}
+	var cliCmd *exec.Cmd
+	if runtime.GOOS == "windows" {
+		dir := os.Getenv("SYSTEMROOT")
+		if dir == "" {
+			return nil, newCredentialUnavailableError(credNameAzureDeveloperCLI, "environment variable 'SYSTEMROOT' has no value")
+		}
+		cliCmd = exec.CommandContext(ctx, "cmd.exe", "/c", commandLine)
+		cliCmd.Dir = dir
+	} else {
+		cliCmd = exec.CommandContext(ctx, "/bin/sh", "-c", commandLine)
+		cliCmd.Dir = "/bin"
+	}
+	cliCmd.Env = os.Environ()
+	var stderr bytes.Buffer
+	cliCmd.Stderr = &stderr
+	output, err := cliCmd.Output()
+	if err != nil {
+		msg := stderr.String()
+		var exErr *exec.ExitError
+		if errors.As(err, &exErr) && exErr.ExitCode() == 127 || strings.HasPrefix(msg, "'azd' is not recognized") {
+			msg = "Azure Developer CLI not found on path"
+		} else if strings.Contains(msg, "azd auth login") {
+			msg = `please run "azd auth login" from a command prompt to authenticate before using this credential`
+		}
+		if msg == "" {
+			msg = err.Error()
+		}
+		return nil, newCredentialUnavailableError(credNameAzureDeveloperCLI, msg)
+	}
+	return output, nil
+}
+
+func (c *AzureDeveloperCLICredential) createAccessToken(tk []byte) (azcore.AccessToken, error) {
+	t := struct {
+		AccessToken string `json:"token"`
+		ExpiresOn   string `json:"expiresOn"`
+	}{}
+	err := json.Unmarshal(tk, &t)
+	if err != nil {
+		return azcore.AccessToken{}, err
+	}
+	exp, err := time.Parse("2006-01-02T15:04:05Z", t.ExpiresOn)
+	if err != nil {
+		return azcore.AccessToken{}, fmt.Errorf("error parsing token expiration time %q: %v", t.ExpiresOn, err)
+	}
+	return azcore.AccessToken{
+		ExpiresOn: exp.UTC(),
+		Token:     t.AccessToken,
+	}, nil
+}
+
+var _ azcore.TokenCredential = (*AzureDeveloperCLICredential)(nil)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azure_pipelines_credential.go 🔗

@@ -0,0 +1,140 @@
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"os"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+)
+
+const (
+	credNameAzurePipelines = "AzurePipelinesCredential"
+	oidcAPIVersion         = "7.1"
+	systemOIDCRequestURI   = "SYSTEM_OIDCREQUESTURI"
+)
+
+// AzurePipelinesCredential authenticates with workload identity federation in an Azure Pipeline. See
+// [Azure Pipelines documentation] for more information.
+//
+// [Azure Pipelines documentation]: https://learn.microsoft.com/azure/devops/pipelines/library/connect-to-azure?view=azure-devops#create-an-azure-resource-manager-service-connection-that-uses-workload-identity-federation
+type AzurePipelinesCredential struct {
+	connectionID, oidcURI, systemAccessToken string
+	cred                                     *ClientAssertionCredential
+}
+
+// AzurePipelinesCredentialOptions contains optional parameters for AzurePipelinesCredential.
+type AzurePipelinesCredentialOptions struct {
+	azcore.ClientOptions
+
+	// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens.
+	// Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the
+	// application is registered.
+	AdditionallyAllowedTenants []string
+
+	// DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
+	// private clouds such as Azure Stack. It determines whether the credential requests Microsoft Entra instance metadata
+	// from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
+	// the application responsible for ensuring the configured authority is valid and trustworthy.
+	DisableInstanceDiscovery bool
+}
+
+// NewAzurePipelinesCredential is the constructor for AzurePipelinesCredential.
+//
+//   - tenantID: tenant ID of the service principal federated with the service connection
+//   - clientID: client ID of that service principal
+//   - serviceConnectionID: ID of the service connection to authenticate
+//   - systemAccessToken: security token for the running build. See [Azure Pipelines documentation] for
+//     an example showing how to get this value.
+//
+// [Azure Pipelines documentation]: https://learn.microsoft.com/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml#systemaccesstoken
+func NewAzurePipelinesCredential(tenantID, clientID, serviceConnectionID, systemAccessToken string, options *AzurePipelinesCredentialOptions) (*AzurePipelinesCredential, error) {
+	if !validTenantID(tenantID) {
+		return nil, errInvalidTenantID
+	}
+	if clientID == "" {
+		return nil, errors.New("no client ID specified")
+	}
+	if serviceConnectionID == "" {
+		return nil, errors.New("no service connection ID specified")
+	}
+	if systemAccessToken == "" {
+		return nil, errors.New("no system access token specified")
+	}
+	u := os.Getenv(systemOIDCRequestURI)
+	if u == "" {
+		return nil, fmt.Errorf("no value for environment variable %s. This should be set by Azure Pipelines", systemOIDCRequestURI)
+	}
+	a := AzurePipelinesCredential{
+		connectionID:      serviceConnectionID,
+		oidcURI:           u,
+		systemAccessToken: systemAccessToken,
+	}
+	if options == nil {
+		options = &AzurePipelinesCredentialOptions{}
+	}
+	caco := ClientAssertionCredentialOptions{
+		AdditionallyAllowedTenants: options.AdditionallyAllowedTenants,
+		ClientOptions:              options.ClientOptions,
+		DisableInstanceDiscovery:   options.DisableInstanceDiscovery,
+	}
+	cred, err := NewClientAssertionCredential(tenantID, clientID, a.getAssertion, &caco)
+	if err != nil {
+		return nil, err
+	}
+	cred.client.name = credNameAzurePipelines
+	a.cred = cred
+	return &a, nil
+}
+
+// GetToken requests an access token from Microsoft Entra ID. Azure SDK clients call this method automatically.
+func (a *AzurePipelinesCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameAzurePipelines+"."+traceOpGetToken, a.cred.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	tk, err := a.cred.GetToken(ctx, opts)
+	return tk, err
+}
+
+func (a *AzurePipelinesCredential) getAssertion(ctx context.Context) (string, error) {
+	url := a.oidcURI + "?api-version=" + oidcAPIVersion + "&serviceConnectionId=" + a.connectionID
+	url, err := runtime.EncodeQueryParams(url)
+	if err != nil {
+		return "", newAuthenticationFailedError(credNameAzurePipelines, "couldn't encode OIDC URL: "+err.Error(), nil, nil)
+	}
+	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, nil)
+	if err != nil {
+		return "", newAuthenticationFailedError(credNameAzurePipelines, "couldn't create OIDC token request: "+err.Error(), nil, nil)
+	}
+	req.Header.Set("Authorization", "Bearer "+a.systemAccessToken)
+	res, err := doForClient(a.cred.client.azClient, req)
+	if err != nil {
+		return "", newAuthenticationFailedError(credNameAzurePipelines, "couldn't send OIDC token request: "+err.Error(), nil, nil)
+	}
+	if res.StatusCode != http.StatusOK {
+		msg := res.Status + " response from the OIDC endpoint. Check service connection ID and Pipeline configuration"
+		// include the response because its body, if any, probably contains an error message.
+		// OK responses aren't included with errors because they probably contain secrets
+		return "", newAuthenticationFailedError(credNameAzurePipelines, msg, res, nil)
+	}
+	b, err := runtime.Payload(res)
+	if err != nil {
+		return "", newAuthenticationFailedError(credNameAzurePipelines, "couldn't read OIDC response content: "+err.Error(), nil, nil)
+	}
+	var r struct {
+		OIDCToken string `json:"oidcToken"`
+	}
+	err = json.Unmarshal(b, &r)
+	if err != nil {
+		return "", newAuthenticationFailedError(credNameAzurePipelines, "unexpected response from OIDC endpoint", nil, nil)
+	}
+	return r.OIDCToken, nil
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/chained_token_credential.go 🔗

@@ -0,0 +1,138 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"strings"
+	"sync"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+)
+
+// ChainedTokenCredentialOptions contains optional parameters for ChainedTokenCredential.
+type ChainedTokenCredentialOptions struct {
+	// RetrySources configures how the credential uses its sources. When true, the credential always attempts to
+	// authenticate through each source in turn, stopping when one succeeds. When false, the credential authenticates
+	// only through the first successful source--it never again tries the sources which failed.
+	RetrySources bool
+}
+
+// ChainedTokenCredential links together multiple credentials and tries them sequentially when authenticating. By default,
+// it tries all the credentials until one authenticates, after which it always uses that credential.
+type ChainedTokenCredential struct {
+	cond                 *sync.Cond
+	iterating            bool
+	name                 string
+	retrySources         bool
+	sources              []azcore.TokenCredential
+	successfulCredential azcore.TokenCredential
+}
+
+// NewChainedTokenCredential creates a ChainedTokenCredential. Pass nil for options to accept defaults.
+func NewChainedTokenCredential(sources []azcore.TokenCredential, options *ChainedTokenCredentialOptions) (*ChainedTokenCredential, error) {
+	if len(sources) == 0 {
+		return nil, errors.New("sources must contain at least one TokenCredential")
+	}
+	for _, source := range sources {
+		if source == nil { // cannot have a nil credential in the chain or else the application will panic when GetToken() is called on nil
+			return nil, errors.New("sources cannot contain nil")
+		}
+	}
+	cp := make([]azcore.TokenCredential, len(sources))
+	copy(cp, sources)
+	if options == nil {
+		options = &ChainedTokenCredentialOptions{}
+	}
+	return &ChainedTokenCredential{
+		cond:         sync.NewCond(&sync.Mutex{}),
+		name:         "ChainedTokenCredential",
+		retrySources: options.RetrySources,
+		sources:      cp,
+	}, nil
+}
+
+// GetToken calls GetToken on the chained credentials in turn, stopping when one returns a token.
+// This method is called automatically by Azure SDK clients.
+func (c *ChainedTokenCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	if !c.retrySources {
+		// ensure only one goroutine at a time iterates the sources and perhaps sets c.successfulCredential
+		c.cond.L.Lock()
+		for {
+			if c.successfulCredential != nil {
+				c.cond.L.Unlock()
+				return c.successfulCredential.GetToken(ctx, opts)
+			}
+			if !c.iterating {
+				c.iterating = true
+				// allow other goroutines to wait while this one iterates
+				c.cond.L.Unlock()
+				break
+			}
+			c.cond.Wait()
+		}
+	}
+
+	var (
+		err                  error
+		errs                 []error
+		successfulCredential azcore.TokenCredential
+		token                azcore.AccessToken
+		unavailableErr       credentialUnavailable
+	)
+	for _, cred := range c.sources {
+		token, err = cred.GetToken(ctx, opts)
+		if err == nil {
+			log.Writef(EventAuthentication, "%s authenticated with %s", c.name, extractCredentialName(cred))
+			successfulCredential = cred
+			break
+		}
+		errs = append(errs, err)
+		// continue to the next source iff this one returned credentialUnavailableError
+		if !errors.As(err, &unavailableErr) {
+			break
+		}
+	}
+	if c.iterating {
+		c.cond.L.Lock()
+		// this is nil when all credentials returned an error
+		c.successfulCredential = successfulCredential
+		c.iterating = false
+		c.cond.L.Unlock()
+		c.cond.Broadcast()
+	}
+	// err is the error returned by the last GetToken call. It will be nil when that call succeeds
+	if err != nil {
+		// return credentialUnavailableError iff all sources did so; return AuthenticationFailedError otherwise
+		msg := createChainedErrorMessage(errs)
+		if errors.As(err, &unavailableErr) {
+			err = newCredentialUnavailableError(c.name, msg)
+		} else {
+			res := getResponseFromError(err)
+			err = newAuthenticationFailedError(c.name, msg, res, err)
+		}
+	}
+	return token, err
+}
+
+func createChainedErrorMessage(errs []error) string {
+	msg := "failed to acquire a token.\nAttempted credentials:"
+	for _, err := range errs {
+		msg += fmt.Sprintf("\n\t%s", err.Error())
+	}
+	return msg
+}
+
+func extractCredentialName(credential azcore.TokenCredential) string {
+	return strings.TrimPrefix(fmt.Sprintf("%T", credential), "*azidentity.")
+}
+
+var _ azcore.TokenCredential = (*ChainedTokenCredential)(nil)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/ci.yml 🔗

@@ -0,0 +1,46 @@
+# NOTE: Please refer to https://aka.ms/azsdk/engsys/ci-yaml before editing this file.
+trigger:
+  branches:
+    include:
+      - main
+      - feature/*
+      - hotfix/*
+      - release/*
+  paths:
+    include:
+      - sdk/azidentity/
+
+pr:
+  branches:
+    include:
+      - main
+      - feature/*
+      - hotfix/*
+      - release/*
+  paths:
+    include:
+      - sdk/azidentity/
+
+extends:
+    template: /eng/pipelines/templates/jobs/archetype-sdk-client.yml
+    parameters:
+      CloudConfig:
+        Public:
+          SubscriptionConfigurations:
+            - $(sub-config-azure-cloud-test-resources)
+            - $(sub-config-identity-test-resources)
+      EnvVars:
+        SYSTEM_ACCESSTOKEN: $(System.AccessToken)
+      RunLiveTests: true
+      ServiceDirectory: azidentity
+      UsePipelineProxy: false
+
+      ${{ if endsWith(variables['Build.DefinitionName'], 'weekly') }}:
+        MatrixConfigs:
+          - Name: managed_identity_matrix
+            GenerateVMJobs: true
+            Path: sdk/azidentity/managed-identity-matrix.json
+            Selection: sparse
+        MatrixReplace:
+          - Pool=.*LINUXPOOL.*/azsdk-pool-mms-ubuntu-2204-identitymsi
+          - OSVmImage=.*LINUXNEXTVMIMAGE.*/azsdk-pool-mms-ubuntu-2204-1espt

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_assertion_credential.go 🔗

@@ -0,0 +1,85 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"errors"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential"
+)
+
+const credNameAssertion = "ClientAssertionCredential"
+
+// ClientAssertionCredential authenticates an application with assertions provided by a callback function.
+// This credential is for advanced scenarios. [ClientCertificateCredential] has a more convenient API for
+// the most common assertion scenario, authenticating a service principal with a certificate. See
+// [Microsoft Entra ID documentation] for details of the assertion format.
+//
+// [Microsoft Entra ID documentation]: https://learn.microsoft.com/entra/identity-platform/certificate-credentials#assertion-format
+type ClientAssertionCredential struct {
+	client *confidentialClient
+}
+
+// ClientAssertionCredentialOptions contains optional parameters for ClientAssertionCredential.
+type ClientAssertionCredentialOptions struct {
+	azcore.ClientOptions
+
+	// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens.
+	// Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the
+	// application is registered.
+	AdditionallyAllowedTenants []string
+
+	// DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
+	// private clouds such as Azure Stack. It determines whether the credential requests Microsoft Entra instance metadata
+	// from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
+	// the application responsible for ensuring the configured authority is valid and trustworthy.
+	DisableInstanceDiscovery bool
+
+	// tokenCachePersistenceOptions enables persistent token caching when not nil.
+	tokenCachePersistenceOptions *tokenCachePersistenceOptions
+}
+
+// NewClientAssertionCredential constructs a ClientAssertionCredential. The getAssertion function must be thread safe. Pass nil for options to accept defaults.
+func NewClientAssertionCredential(tenantID, clientID string, getAssertion func(context.Context) (string, error), options *ClientAssertionCredentialOptions) (*ClientAssertionCredential, error) {
+	if getAssertion == nil {
+		return nil, errors.New("getAssertion must be a function that returns assertions")
+	}
+	if options == nil {
+		options = &ClientAssertionCredentialOptions{}
+	}
+	cred := confidential.NewCredFromAssertionCallback(
+		func(ctx context.Context, _ confidential.AssertionRequestOptions) (string, error) {
+			return getAssertion(ctx)
+		},
+	)
+	msalOpts := confidentialClientOptions{
+		AdditionallyAllowedTenants:   options.AdditionallyAllowedTenants,
+		ClientOptions:                options.ClientOptions,
+		DisableInstanceDiscovery:     options.DisableInstanceDiscovery,
+		tokenCachePersistenceOptions: options.tokenCachePersistenceOptions,
+	}
+	c, err := newConfidentialClient(tenantID, clientID, credNameAssertion, cred, msalOpts)
+	if err != nil {
+		return nil, err
+	}
+	return &ClientAssertionCredential{client: c}, nil
+}
+
+// GetToken requests an access token from Microsoft Entra ID. This method is called automatically by Azure SDK clients.
+func (c *ClientAssertionCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameAssertion+"."+traceOpGetToken, c.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	tk, err := c.client.GetToken(ctx, opts)
+	return tk, err
+}
+
+var _ azcore.TokenCredential = (*ClientAssertionCredential)(nil)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_certificate_credential.go 🔗

@@ -0,0 +1,174 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"crypto"
+	"crypto/x509"
+	"encoding/pem"
+	"errors"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential"
+	"golang.org/x/crypto/pkcs12"
+)
+
+const credNameCert = "ClientCertificateCredential"
+
+// ClientCertificateCredentialOptions contains optional parameters for ClientCertificateCredential.
+type ClientCertificateCredentialOptions struct {
+	azcore.ClientOptions
+
+	// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens.
+	// Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the
+	// application is registered.
+	AdditionallyAllowedTenants []string
+
+	// DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
+	// private clouds such as Azure Stack. It determines whether the credential requests Microsoft Entra instance metadata
+	// from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
+	// the application responsible for ensuring the configured authority is valid and trustworthy.
+	DisableInstanceDiscovery bool
+
+	// SendCertificateChain controls whether the credential sends the public certificate chain in the x5c
+	// header of each token request's JWT. This is required for Subject Name/Issuer (SNI) authentication.
+	// Defaults to False.
+	SendCertificateChain bool
+
+	// tokenCachePersistenceOptions enables persistent token caching when not nil.
+	tokenCachePersistenceOptions *tokenCachePersistenceOptions
+}
+
+// ClientCertificateCredential authenticates a service principal with a certificate.
+type ClientCertificateCredential struct {
+	client *confidentialClient
+}
+
+// NewClientCertificateCredential constructs a ClientCertificateCredential. Pass nil for options to accept defaults. See
+// [ParseCertificates] for help loading a certificate.
+func NewClientCertificateCredential(tenantID string, clientID string, certs []*x509.Certificate, key crypto.PrivateKey, options *ClientCertificateCredentialOptions) (*ClientCertificateCredential, error) {
+	if len(certs) == 0 {
+		return nil, errors.New("at least one certificate is required")
+	}
+	if options == nil {
+		options = &ClientCertificateCredentialOptions{}
+	}
+	cred, err := confidential.NewCredFromCert(certs, key)
+	if err != nil {
+		return nil, err
+	}
+	msalOpts := confidentialClientOptions{
+		AdditionallyAllowedTenants:   options.AdditionallyAllowedTenants,
+		ClientOptions:                options.ClientOptions,
+		DisableInstanceDiscovery:     options.DisableInstanceDiscovery,
+		SendX5C:                      options.SendCertificateChain,
+		tokenCachePersistenceOptions: options.tokenCachePersistenceOptions,
+	}
+	c, err := newConfidentialClient(tenantID, clientID, credNameCert, cred, msalOpts)
+	if err != nil {
+		return nil, err
+	}
+	return &ClientCertificateCredential{client: c}, nil
+}
+
+// GetToken requests an access token from Microsoft Entra ID. This method is called automatically by Azure SDK clients.
+func (c *ClientCertificateCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameCert+"."+traceOpGetToken, c.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	tk, err := c.client.GetToken(ctx, opts)
+	return tk, err
+}
+
+// ParseCertificates loads certificates and a private key, in PEM or PKCS#12 format, for use with [NewClientCertificateCredential].
+// Pass nil for password if the private key isn't encrypted. This function has limitations, for example it can't decrypt keys in
+// PEM format or PKCS#12 certificates that use SHA256 for message authentication. If you encounter such limitations, consider
+// using another module to load the certificate and private key.
+func ParseCertificates(certData []byte, password []byte) ([]*x509.Certificate, crypto.PrivateKey, error) {
+	var blocks []*pem.Block
+	var err error
+	if len(password) == 0 {
+		blocks, err = loadPEMCert(certData)
+	}
+	if len(blocks) == 0 || err != nil {
+		blocks, err = loadPKCS12Cert(certData, string(password))
+	}
+	if err != nil {
+		return nil, nil, err
+	}
+	var certs []*x509.Certificate
+	var pk crypto.PrivateKey
+	for _, block := range blocks {
+		switch block.Type {
+		case "CERTIFICATE":
+			c, err := x509.ParseCertificate(block.Bytes)
+			if err != nil {
+				return nil, nil, err
+			}
+			certs = append(certs, c)
+		case "PRIVATE KEY":
+			if pk != nil {
+				return nil, nil, errors.New("certData contains multiple private keys")
+			}
+			pk, err = x509.ParsePKCS8PrivateKey(block.Bytes)
+			if err != nil {
+				pk, err = x509.ParsePKCS1PrivateKey(block.Bytes)
+			}
+			if err != nil {
+				return nil, nil, err
+			}
+		case "RSA PRIVATE KEY":
+			if pk != nil {
+				return nil, nil, errors.New("certData contains multiple private keys")
+			}
+			pk, err = x509.ParsePKCS1PrivateKey(block.Bytes)
+			if err != nil {
+				return nil, nil, err
+			}
+		}
+	}
+	if len(certs) == 0 {
+		return nil, nil, errors.New("found no certificate")
+	}
+	if pk == nil {
+		return nil, nil, errors.New("found no private key")
+	}
+	return certs, pk, nil
+}
+
+func loadPEMCert(certData []byte) ([]*pem.Block, error) {
+	blocks := []*pem.Block{}
+	for {
+		var block *pem.Block
+		block, certData = pem.Decode(certData)
+		if block == nil {
+			break
+		}
+		blocks = append(blocks, block)
+	}
+	if len(blocks) == 0 {
+		return nil, errors.New("didn't find any PEM blocks")
+	}
+	return blocks, nil
+}
+
+func loadPKCS12Cert(certData []byte, password string) ([]*pem.Block, error) {
+	blocks, err := pkcs12.ToPEM(certData, password)
+	if err != nil {
+		return nil, err
+	}
+	if len(blocks) == 0 {
+		// not mentioning PKCS12 in this message because we end up here when certData is garbage
+		return nil, errors.New("didn't find any certificate content")
+	}
+	return blocks, err
+}
+
+var _ azcore.TokenCredential = (*ClientCertificateCredential)(nil)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_secret_credential.go 🔗

@@ -0,0 +1,75 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential"
+)
+
+const credNameSecret = "ClientSecretCredential"
+
+// ClientSecretCredentialOptions contains optional parameters for ClientSecretCredential.
+type ClientSecretCredentialOptions struct {
+	azcore.ClientOptions
+
+	// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens.
+	// Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the
+	// application is registered.
+	AdditionallyAllowedTenants []string
+
+	// DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
+	// private clouds such as Azure Stack. It determines whether the credential requests Microsoft Entra instance metadata
+	// from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
+	// the application responsible for ensuring the configured authority is valid and trustworthy.
+	DisableInstanceDiscovery bool
+
+	// tokenCachePersistenceOptions enables persistent token caching when not nil.
+	tokenCachePersistenceOptions *tokenCachePersistenceOptions
+}
+
+// ClientSecretCredential authenticates an application with a client secret.
+type ClientSecretCredential struct {
+	client *confidentialClient
+}
+
+// NewClientSecretCredential constructs a ClientSecretCredential. Pass nil for options to accept defaults.
+func NewClientSecretCredential(tenantID string, clientID string, clientSecret string, options *ClientSecretCredentialOptions) (*ClientSecretCredential, error) {
+	if options == nil {
+		options = &ClientSecretCredentialOptions{}
+	}
+	cred, err := confidential.NewCredFromSecret(clientSecret)
+	if err != nil {
+		return nil, err
+	}
+	msalOpts := confidentialClientOptions{
+		AdditionallyAllowedTenants:   options.AdditionallyAllowedTenants,
+		ClientOptions:                options.ClientOptions,
+		DisableInstanceDiscovery:     options.DisableInstanceDiscovery,
+		tokenCachePersistenceOptions: options.tokenCachePersistenceOptions,
+	}
+	c, err := newConfidentialClient(tenantID, clientID, credNameSecret, cred, msalOpts)
+	if err != nil {
+		return nil, err
+	}
+	return &ClientSecretCredential{client: c}, nil
+}
+
+// GetToken requests an access token from Microsoft Entra ID. This method is called automatically by Azure SDK clients.
+func (c *ClientSecretCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameSecret+"."+traceOpGetToken, c.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	tk, err := c.client.GetToken(ctx, opts)
+	return tk, err
+}
+
+var _ azcore.TokenCredential = (*ClientSecretCredential)(nil)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/confidential_client.go 🔗

@@ -0,0 +1,184 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+	"os"
+	"strings"
+	"sync"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/azidentity/internal"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential"
+)
+
+type confidentialClientOptions struct {
+	azcore.ClientOptions
+
+	AdditionallyAllowedTenants []string
+	// Assertion for on-behalf-of authentication
+	Assertion                         string
+	DisableInstanceDiscovery, SendX5C bool
+	tokenCachePersistenceOptions      *tokenCachePersistenceOptions
+}
+
+// confidentialClient wraps the MSAL confidential client
+type confidentialClient struct {
+	cae, noCAE               msalConfidentialClient
+	caeMu, noCAEMu, clientMu *sync.Mutex
+	clientID, tenantID       string
+	cred                     confidential.Credential
+	host                     string
+	name                     string
+	opts                     confidentialClientOptions
+	region                   string
+	azClient                 *azcore.Client
+}
+
+func newConfidentialClient(tenantID, clientID, name string, cred confidential.Credential, opts confidentialClientOptions) (*confidentialClient, error) {
+	if !validTenantID(tenantID) {
+		return nil, errInvalidTenantID
+	}
+	host, err := setAuthorityHost(opts.Cloud)
+	if err != nil {
+		return nil, err
+	}
+	client, err := azcore.NewClient(module, version, runtime.PipelineOptions{
+		Tracing: runtime.TracingOptions{
+			Namespace: traceNamespace,
+		},
+	}, &opts.ClientOptions)
+	if err != nil {
+		return nil, err
+	}
+	opts.AdditionallyAllowedTenants = resolveAdditionalTenants(opts.AdditionallyAllowedTenants)
+	return &confidentialClient{
+		caeMu:    &sync.Mutex{},
+		clientID: clientID,
+		clientMu: &sync.Mutex{},
+		cred:     cred,
+		host:     host,
+		name:     name,
+		noCAEMu:  &sync.Mutex{},
+		opts:     opts,
+		region:   os.Getenv(azureRegionalAuthorityName),
+		tenantID: tenantID,
+		azClient: client,
+	}, nil
+}
+
+// GetToken requests an access token from MSAL, checking the cache first.
+func (c *confidentialClient) GetToken(ctx context.Context, tro policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	if len(tro.Scopes) < 1 {
+		return azcore.AccessToken{}, fmt.Errorf("%s.GetToken() requires at least one scope", c.name)
+	}
+	// we don't resolve the tenant for managed identities because they acquire tokens only from their home tenants
+	if c.name != credNameManagedIdentity {
+		tenant, err := c.resolveTenant(tro.TenantID)
+		if err != nil {
+			return azcore.AccessToken{}, err
+		}
+		tro.TenantID = tenant
+	}
+	client, mu, err := c.client(tro)
+	if err != nil {
+		return azcore.AccessToken{}, err
+	}
+	mu.Lock()
+	defer mu.Unlock()
+	var ar confidential.AuthResult
+	if c.opts.Assertion != "" {
+		ar, err = client.AcquireTokenOnBehalfOf(ctx, c.opts.Assertion, tro.Scopes, confidential.WithClaims(tro.Claims), confidential.WithTenantID(tro.TenantID))
+	} else {
+		ar, err = client.AcquireTokenSilent(ctx, tro.Scopes, confidential.WithClaims(tro.Claims), confidential.WithTenantID(tro.TenantID))
+		if err != nil {
+			ar, err = client.AcquireTokenByCredential(ctx, tro.Scopes, confidential.WithClaims(tro.Claims), confidential.WithTenantID(tro.TenantID))
+		}
+	}
+	if err != nil {
+		// We could get a credentialUnavailableError from managed identity authentication because in that case the error comes from our code.
+		// We return it directly because it affects the behavior of credential chains. Otherwise, we return AuthenticationFailedError.
+		var unavailableErr credentialUnavailable
+		if !errors.As(err, &unavailableErr) {
+			res := getResponseFromError(err)
+			err = newAuthenticationFailedError(c.name, err.Error(), res, err)
+		}
+	} else {
+		msg := fmt.Sprintf("%s.GetToken() acquired a token for scope %q", c.name, strings.Join(ar.GrantedScopes, ", "))
+		log.Write(EventAuthentication, msg)
+	}
+	return azcore.AccessToken{Token: ar.AccessToken, ExpiresOn: ar.ExpiresOn.UTC()}, err
+}
+
+func (c *confidentialClient) client(tro policy.TokenRequestOptions) (msalConfidentialClient, *sync.Mutex, error) {
+	c.clientMu.Lock()
+	defer c.clientMu.Unlock()
+	if tro.EnableCAE {
+		if c.cae == nil {
+			client, err := c.newMSALClient(true)
+			if err != nil {
+				return nil, nil, err
+			}
+			c.cae = client
+		}
+		return c.cae, c.caeMu, nil
+	}
+	if c.noCAE == nil {
+		client, err := c.newMSALClient(false)
+		if err != nil {
+			return nil, nil, err
+		}
+		c.noCAE = client
+	}
+	return c.noCAE, c.noCAEMu, nil
+}
+
+func (c *confidentialClient) newMSALClient(enableCAE bool) (msalConfidentialClient, error) {
+	cache, err := internal.NewCache(c.opts.tokenCachePersistenceOptions, enableCAE)
+	if err != nil {
+		return nil, err
+	}
+	authority := runtime.JoinPaths(c.host, c.tenantID)
+	o := []confidential.Option{
+		confidential.WithAzureRegion(c.region),
+		confidential.WithCache(cache),
+		confidential.WithHTTPClient(c),
+	}
+	if enableCAE {
+		o = append(o, confidential.WithClientCapabilities(cp1))
+	}
+	if c.opts.SendX5C {
+		o = append(o, confidential.WithX5C())
+	}
+	if c.opts.DisableInstanceDiscovery || strings.ToLower(c.tenantID) == "adfs" {
+		o = append(o, confidential.WithInstanceDiscovery(false))
+	}
+	return confidential.New(authority, c.clientID, c.cred, o...)
+}
+
+// resolveTenant returns the correct WithTenantID() argument for a token request given the client's
+// configuration, or an error when that configuration doesn't allow the specified tenant
+func (c *confidentialClient) resolveTenant(specified string) (string, error) {
+	return resolveTenant(c.tenantID, specified, c.name, c.opts.AdditionallyAllowedTenants)
+}
+
+// these methods satisfy the MSAL ops.HTTPClient interface
+
+func (c *confidentialClient) CloseIdleConnections() {
+	// do nothing
+}
+
+func (c *confidentialClient) Do(r *http.Request) (*http.Response, error) {
+	return doForClient(c.azClient, r)
+}
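The `client` method above lazily constructs and caches two MSAL clients, one with CAE (continuous access evaluation) enabled and one without, guarding construction with a cache-level mutex and handing each cached client its own mutex for token calls. A stdlib-only sketch of that caching pattern, with illustrative stand-in types (nothing here is the SDK's actual API):

```go
package main

import (
	"fmt"
	"sync"
)

// client is a stand-in for the MSAL confidential client.
type client struct{ cae bool }

// clientCache mirrors confidentialClient.client: a cache-level mutex
// guards lazy construction, and each mode's client gets its own mutex
// so token requests for one mode don't block the other.
type clientCache struct {
	mu             sync.Mutex
	cae, noCAE     *client
	caeMu, noCAEMu sync.Mutex
}

func (c *clientCache) get(enableCAE bool) (*client, *sync.Mutex) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if enableCAE {
		if c.cae == nil {
			c.cae = &client{cae: true} // built once, reused afterwards
		}
		return c.cae, &c.caeMu
	}
	if c.noCAE == nil {
		c.noCAE = &client{cae: false}
	}
	return c.noCAE, &c.noCAEMu
}

func main() {
	cache := &clientCache{}
	first, _ := cache.get(true)
	second, _ := cache.get(true)
	fmt.Println("cached:", first == second)
}
```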

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/default_azure_credential.go 🔗

@@ -0,0 +1,165 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"os"
+	"strings"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+)
+
+// DefaultAzureCredentialOptions contains optional parameters for DefaultAzureCredential.
+// These options may not apply to all credentials in the chain.
+type DefaultAzureCredentialOptions struct {
+	// ClientOptions has additional options for credentials that use an Azure SDK HTTP pipeline. These options don't apply
+	// to credential types that authenticate via external tools such as the Azure CLI.
+	azcore.ClientOptions
+
+	// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens. Add
+	// the wildcard value "*" to allow the credential to acquire tokens for any tenant. This value can also be
+	// set as a semicolon delimited list of tenants in the environment variable AZURE_ADDITIONALLY_ALLOWED_TENANTS.
+	AdditionallyAllowedTenants []string
+	// DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
+	// private clouds such as Azure Stack. It determines whether the credential requests Microsoft Entra instance metadata
+	// from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
+	// the application responsible for ensuring the configured authority is valid and trustworthy.
+	DisableInstanceDiscovery bool
+	// TenantID sets the default tenant for authentication via the Azure CLI and workload identity.
+	TenantID string
+}
+
+// DefaultAzureCredential is a default credential chain for applications that will deploy to Azure.
+// It combines credentials suitable for deployment with credentials suitable for local development.
+// It attempts to authenticate with each of these credential types, in the following order, stopping
+// when one provides a token:
+//
+//   - [EnvironmentCredential]
+//   - [WorkloadIdentityCredential], if environment variable configuration is set by the Azure workload
+//     identity webhook. Use [WorkloadIdentityCredential] directly when not using the webhook or needing
+//     more control over its configuration.
+//   - [ManagedIdentityCredential]
+//   - [AzureCLICredential]
+//   - [AzureDeveloperCLICredential]
+//
+// Consult the documentation for these credential types for more information on how they authenticate.
+// Once a credential has successfully authenticated, DefaultAzureCredential will use that credential for
+// every subsequent authentication.
+type DefaultAzureCredential struct {
+	chain *ChainedTokenCredential
+}
+
+// NewDefaultAzureCredential creates a DefaultAzureCredential. Pass nil for options to accept defaults.
+func NewDefaultAzureCredential(options *DefaultAzureCredentialOptions) (*DefaultAzureCredential, error) {
+	var creds []azcore.TokenCredential
+	var errorMessages []string
+
+	if options == nil {
+		options = &DefaultAzureCredentialOptions{}
+	}
+	additionalTenants := options.AdditionallyAllowedTenants
+	if len(additionalTenants) == 0 {
+		if tenants := os.Getenv(azureAdditionallyAllowedTenants); tenants != "" {
+			additionalTenants = strings.Split(tenants, ";")
+		}
+	}
+
+	envCred, err := NewEnvironmentCredential(&EnvironmentCredentialOptions{
+		ClientOptions:              options.ClientOptions,
+		DisableInstanceDiscovery:   options.DisableInstanceDiscovery,
+		additionallyAllowedTenants: additionalTenants,
+	})
+	if err == nil {
+		creds = append(creds, envCred)
+	} else {
+		errorMessages = append(errorMessages, "EnvironmentCredential: "+err.Error())
+		creds = append(creds, &defaultCredentialErrorReporter{credType: "EnvironmentCredential", err: err})
+	}
+
+	wic, err := NewWorkloadIdentityCredential(&WorkloadIdentityCredentialOptions{
+		AdditionallyAllowedTenants: additionalTenants,
+		ClientOptions:              options.ClientOptions,
+		DisableInstanceDiscovery:   options.DisableInstanceDiscovery,
+		TenantID:                   options.TenantID,
+	})
+	if err == nil {
+		creds = append(creds, wic)
+	} else {
+		errorMessages = append(errorMessages, credNameWorkloadIdentity+": "+err.Error())
+		creds = append(creds, &defaultCredentialErrorReporter{credType: credNameWorkloadIdentity, err: err})
+	}
+
+	o := &ManagedIdentityCredentialOptions{ClientOptions: options.ClientOptions, dac: true}
+	if ID, ok := os.LookupEnv(azureClientID); ok {
+		o.ID = ClientID(ID)
+	}
+	miCred, err := NewManagedIdentityCredential(o)
+	if err == nil {
+		creds = append(creds, miCred)
+	} else {
+		errorMessages = append(errorMessages, credNameManagedIdentity+": "+err.Error())
+		creds = append(creds, &defaultCredentialErrorReporter{credType: credNameManagedIdentity, err: err})
+	}
+
+	cliCred, err := NewAzureCLICredential(&AzureCLICredentialOptions{AdditionallyAllowedTenants: additionalTenants, TenantID: options.TenantID})
+	if err == nil {
+		creds = append(creds, cliCred)
+	} else {
+		errorMessages = append(errorMessages, credNameAzureCLI+": "+err.Error())
+		creds = append(creds, &defaultCredentialErrorReporter{credType: credNameAzureCLI, err: err})
+	}
+
+	azdCred, err := NewAzureDeveloperCLICredential(&AzureDeveloperCLICredentialOptions{
+		AdditionallyAllowedTenants: additionalTenants,
+		TenantID:                   options.TenantID,
+	})
+	if err == nil {
+		creds = append(creds, azdCred)
+	} else {
+		errorMessages = append(errorMessages, credNameAzureDeveloperCLI+": "+err.Error())
+		creds = append(creds, &defaultCredentialErrorReporter{credType: credNameAzureDeveloperCLI, err: err})
+	}
+
+	if len(errorMessages) > 0 {
+		log.Writef(EventAuthentication, "NewDefaultAzureCredential failed to initialize some credentials:\n\t%s", strings.Join(errorMessages, "\n\t"))
+	}
+
+	chain, err := NewChainedTokenCredential(creds, nil)
+	if err != nil {
+		return nil, err
+	}
+	chain.name = "DefaultAzureCredential"
+	return &DefaultAzureCredential{chain: chain}, nil
+}
+
+// GetToken requests an access token from Microsoft Entra ID. This method is called automatically by Azure SDK clients.
+func (c *DefaultAzureCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	return c.chain.GetToken(ctx, opts)
+}
+
+var _ azcore.TokenCredential = (*DefaultAzureCredential)(nil)
+
+// defaultCredentialErrorReporter is a substitute for credentials that couldn't be constructed.
+// Its GetToken method always returns a credentialUnavailableError having the same message as
+// the error that prevented constructing the credential. This ensures the message is present
+// in the error returned by ChainedTokenCredential.GetToken()
+type defaultCredentialErrorReporter struct {
+	credType string
+	err      error
+}
+
+func (d *defaultCredentialErrorReporter) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	if _, ok := d.err.(credentialUnavailable); ok {
+		return azcore.AccessToken{}, d.err
+	}
+	return azcore.AccessToken{}, newCredentialUnavailableError(d.credType, d.err.Error())
+}
+
+var _ azcore.TokenCredential = (*defaultCredentialErrorReporter)(nil)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/developer_credential_util.go 🔗

@@ -0,0 +1,38 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"errors"
+	"time"
+)
+
+// cliTimeout is the default timeout for authentication attempts via CLI tools
+const cliTimeout = 10 * time.Second
+
+// unavailableIfInChain returns err or, if the credential was invoked by DefaultAzureCredential, a
+// credentialUnavailableError having the same message. This ensures DefaultAzureCredential will try
+// the next credential in its chain (another developer credential).
+func unavailableIfInChain(err error, inDefaultChain bool) error {
+	if err != nil && inDefaultChain {
+		var unavailableErr credentialUnavailable
+		if !errors.As(err, &unavailableErr) {
+			err = newCredentialUnavailableError(credNameAzureDeveloperCLI, err.Error())
+		}
+	}
+	return err
+}
+
+// validScope is for credentials authenticating via external tools. The authority validates scopes for all other credentials.
+func validScope(scope string) bool {
+	for _, r := range scope {
+		if !(alphanumeric(r) || r == '.' || r == '-' || r == '_' || r == '/' || r == ':') {
+			return false
+		}
+	}
+	return true
+}
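`validScope` restricts scopes handed to external tools (az, azd) to alphanumerics plus `. - _ / :`, which keeps shell metacharacters out of the command line. A self-contained sketch of the same check (the SDK's `alphanumeric` helper lives elsewhere, so it's reimplemented inline here):

```go
package main

import "fmt"

// validScopeSketch mirrors validScope above: scopes passed to external
// CLI tools are restricted to a safe character set.
func validScopeSketch(scope string) bool {
	for _, r := range scope {
		alnum := (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9')
		if !(alnum || r == '.' || r == '-' || r == '_' || r == '/' || r == ':') {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validScopeSketch("https://vault.azure.net/.default"))
	fmt.Println(validScopeSketch("bad scope; rm -rf /"))
}
```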

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/device_code_credential.go 🔗

@@ -0,0 +1,138 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+)
+
+const credNameDeviceCode = "DeviceCodeCredential"
+
+// DeviceCodeCredentialOptions contains optional parameters for DeviceCodeCredential.
+type DeviceCodeCredentialOptions struct {
+	azcore.ClientOptions
+
+	// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire
+	// tokens. Add the wildcard value "*" to allow the credential to acquire tokens for any tenant.
+	AdditionallyAllowedTenants []string
+
+	// authenticationRecord returned by a call to a credential's Authenticate method. Set this option
+	// to enable the credential to use data from a previous authentication.
+	authenticationRecord authenticationRecord
+
+	// ClientID is the ID of the application users will authenticate to.
+	// Defaults to the ID of an Azure development application.
+	ClientID string
+
+	// disableAutomaticAuthentication prevents the credential from automatically prompting the user to authenticate.
+	// When this option is true, GetToken will return authenticationRequiredError when user interaction is necessary
+	// to acquire a token.
+	disableAutomaticAuthentication bool
+
+	// DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
+	// private clouds such as Azure Stack. It determines whether the credential requests Microsoft Entra instance metadata
+	// from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
+	// the application responsible for ensuring the configured authority is valid and trustworthy.
+	DisableInstanceDiscovery bool
+
+	// TenantID is the Microsoft Entra tenant the credential authenticates in. Defaults to the
+	// "organizations" tenant, which can authenticate work and school accounts. Required for single-tenant
+	// applications.
+	TenantID string
+
+	// tokenCachePersistenceOptions enables persistent token caching when not nil.
+	tokenCachePersistenceOptions *tokenCachePersistenceOptions
+
+	// UserPrompt controls how the credential presents authentication instructions. The credential calls
+	// this function with authentication details when it receives a device code. By default, the credential
+	// prints these details to stdout.
+	UserPrompt func(context.Context, DeviceCodeMessage) error
+}
+
+func (o *DeviceCodeCredentialOptions) init() {
+	if o.TenantID == "" {
+		o.TenantID = organizationsTenantID
+	}
+	if o.ClientID == "" {
+		o.ClientID = developerSignOnClientID
+	}
+	if o.UserPrompt == nil {
+		o.UserPrompt = func(ctx context.Context, dc DeviceCodeMessage) error {
+			fmt.Println(dc.Message)
+			return nil
+		}
+	}
+}
+
+// DeviceCodeMessage contains the information a user needs to complete authentication.
+type DeviceCodeMessage struct {
+	// UserCode is the user code returned by the service.
+	UserCode string `json:"user_code"`
+	// VerificationURL is the URL at which the user must authenticate.
+	VerificationURL string `json:"verification_uri"`
+	// Message is user instruction from Microsoft Entra ID.
+	Message string `json:"message"`
+}
+
+// DeviceCodeCredential acquires tokens for a user via the device code flow, which has the
+// user browse to a Microsoft Entra URL, enter a code, and authenticate. It's useful
+// for authenticating a user in an environment without a web browser, such as an SSH session.
+// If a web browser is available, [InteractiveBrowserCredential] is more convenient because it
+// automatically opens a browser to the login page.
+type DeviceCodeCredential struct {
+	client *publicClient
+}
+
+// NewDeviceCodeCredential creates a DeviceCodeCredential. Pass nil to accept default options.
+func NewDeviceCodeCredential(options *DeviceCodeCredentialOptions) (*DeviceCodeCredential, error) {
+	cp := DeviceCodeCredentialOptions{}
+	if options != nil {
+		cp = *options
+	}
+	cp.init()
+	msalOpts := publicClientOptions{
+		AdditionallyAllowedTenants:     cp.AdditionallyAllowedTenants,
+		ClientOptions:                  cp.ClientOptions,
+		DeviceCodePrompt:               cp.UserPrompt,
+		DisableAutomaticAuthentication: cp.disableAutomaticAuthentication,
+		DisableInstanceDiscovery:       cp.DisableInstanceDiscovery,
+		Record:                         cp.authenticationRecord,
+		TokenCachePersistenceOptions:   cp.tokenCachePersistenceOptions,
+	}
+	c, err := newPublicClient(cp.TenantID, cp.ClientID, credNameDeviceCode, msalOpts)
+	if err != nil {
+		return nil, err
+	}
+	c.name = credNameDeviceCode
+	return &DeviceCodeCredential{client: c}, nil
+}
+
+// Authenticate a user via the device code flow. Subsequent calls to GetToken will automatically use the returned AuthenticationRecord.
+func (c *DeviceCodeCredential) authenticate(ctx context.Context, opts *policy.TokenRequestOptions) (authenticationRecord, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameDeviceCode+"."+traceOpAuthenticate, c.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	tk, err := c.client.Authenticate(ctx, opts)
+	return tk, err
+}
+
+// GetToken requests an access token from Microsoft Entra ID. It will begin the device code flow and poll until the user completes authentication.
+// This method is called automatically by Azure SDK clients.
+func (c *DeviceCodeCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameDeviceCode+"."+traceOpGetToken, c.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	tk, err := c.client.GetToken(ctx, opts)
+	return tk, err
+}
+
+var _ azcore.TokenCredential = (*DeviceCodeCredential)(nil)
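`DeviceCodeCredentialOptions.init` shows the SDK's options-defaulting idiom: zero-valued fields are filled in after copying the caller's struct, including a `UserPrompt` callback that defaults to printing the device code message to stdout. A small sketch of that pattern with illustrative stand-in types (`message`, `options` are not the SDK's names):

```go
package main

import "fmt"

// message is a stand-in for DeviceCodeMessage.
type message struct{ Text string }

// options mirrors the defaulting pattern of DeviceCodeCredentialOptions:
// zero values are replaced in init(), and UserPrompt defaults to printing.
type options struct {
	TenantID   string
	UserPrompt func(message) error
}

func (o *options) init() {
	if o.TenantID == "" {
		o.TenantID = "organizations"
	}
	if o.UserPrompt == nil {
		o.UserPrompt = func(m message) error {
			fmt.Println(m.Text)
			return nil
		}
	}
}

func main() {
	// Callers can override the prompt, e.g. to surface the code in a UI.
	var shown string
	opts := options{UserPrompt: func(m message) error { shown = m.Text; return nil }}
	opts.init()
	opts.UserPrompt(message{Text: "enter the code shown at the verification URL"})
	fmt.Println(opts.TenantID, shown != "")
}
```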

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/environment_credential.go 🔗

@@ -0,0 +1,167 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"os"
+	"strings"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+)
+
+const envVarSendCertChain = "AZURE_CLIENT_SEND_CERTIFICATE_CHAIN"
+
+// EnvironmentCredentialOptions contains optional parameters for EnvironmentCredential
+type EnvironmentCredentialOptions struct {
+	azcore.ClientOptions
+
+	// DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
+	// private clouds such as Azure Stack. It determines whether the credential requests Microsoft Entra instance metadata
+	// from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
+	// the application responsible for ensuring the configured authority is valid and trustworthy.
+	DisableInstanceDiscovery bool
+	// additionallyAllowedTenants is used only by NewDefaultAzureCredential() to enable that constructor's explicit
+	// option to override the value of AZURE_ADDITIONALLY_ALLOWED_TENANTS. Applications using EnvironmentCredential
+	// directly should set that variable instead. This field should remain unexported to preserve this credential's
+	// unambiguous "all configuration from environment variables" design.
+	additionallyAllowedTenants []string
+}
+
+// EnvironmentCredential authenticates a service principal with a secret or certificate, or a user with a password, depending
+// on environment variable configuration. It reads configuration from these variables, in the following order:
+//
+// # Service principal with client secret
+//
+// AZURE_TENANT_ID: ID of the service principal's tenant. Also called its "directory" ID.
+//
+// AZURE_CLIENT_ID: the service principal's client ID
+//
+// AZURE_CLIENT_SECRET: one of the service principal's client secrets
+//
+// # Service principal with certificate
+//
+// AZURE_TENANT_ID: ID of the service principal's tenant. Also called its "directory" ID.
+//
+// AZURE_CLIENT_ID: the service principal's client ID
+//
+// AZURE_CLIENT_CERTIFICATE_PATH: path to a PEM or PKCS12 certificate file including the private key.
+//
+// AZURE_CLIENT_CERTIFICATE_PASSWORD: (optional) password for the certificate file.
+//
+// Note that this credential uses [ParseCertificates] to load the certificate and key from the file. If this
+// function isn't able to parse your certificate, use [ClientCertificateCredential] instead.
+//
+// # User with username and password
+//
+// AZURE_TENANT_ID: (optional) tenant to authenticate in. Defaults to "organizations".
+//
+// AZURE_CLIENT_ID: client ID of the application the user will authenticate to
+//
+// AZURE_USERNAME: a username (usually an email address)
+//
+// AZURE_PASSWORD: the user's password
+//
+// # Configuration for multitenant applications
+//
+// To enable multitenant authentication, set AZURE_ADDITIONALLY_ALLOWED_TENANTS with a semicolon delimited list of tenants
+// the credential may request tokens from in addition to the tenant specified by AZURE_TENANT_ID. Set
+// AZURE_ADDITIONALLY_ALLOWED_TENANTS to "*" to enable the credential to request a token from any tenant.
+type EnvironmentCredential struct {
+	cred azcore.TokenCredential
+}
+
+// NewEnvironmentCredential creates an EnvironmentCredential. Pass nil to accept default options.
+func NewEnvironmentCredential(options *EnvironmentCredentialOptions) (*EnvironmentCredential, error) {
+	if options == nil {
+		options = &EnvironmentCredentialOptions{}
+	}
+	tenantID := os.Getenv(azureTenantID)
+	if tenantID == "" {
+		return nil, errors.New("missing environment variable AZURE_TENANT_ID")
+	}
+	clientID := os.Getenv(azureClientID)
+	if clientID == "" {
+		return nil, errors.New("missing environment variable " + azureClientID)
+	}
+	// tenants set by NewDefaultAzureCredential() override the value of AZURE_ADDITIONALLY_ALLOWED_TENANTS
+	additionalTenants := options.additionallyAllowedTenants
+	if len(additionalTenants) == 0 {
+		if tenants := os.Getenv(azureAdditionallyAllowedTenants); tenants != "" {
+			additionalTenants = strings.Split(tenants, ";")
+		}
+	}
+	if clientSecret := os.Getenv(azureClientSecret); clientSecret != "" {
+		log.Write(EventAuthentication, "EnvironmentCredential will authenticate with ClientSecretCredential")
+		o := &ClientSecretCredentialOptions{
+			AdditionallyAllowedTenants: additionalTenants,
+			ClientOptions:              options.ClientOptions,
+			DisableInstanceDiscovery:   options.DisableInstanceDiscovery,
+		}
+		cred, err := NewClientSecretCredential(tenantID, clientID, clientSecret, o)
+		if err != nil {
+			return nil, err
+		}
+		return &EnvironmentCredential{cred: cred}, nil
+	}
+	if certPath := os.Getenv(azureClientCertificatePath); certPath != "" {
+		log.Write(EventAuthentication, "EnvironmentCredential will authenticate with ClientCertificateCredential")
+		certData, err := os.ReadFile(certPath)
+		if err != nil {
+			return nil, fmt.Errorf(`failed to read certificate file "%s": %v`, certPath, err)
+		}
+		var password []byte
+		if v := os.Getenv(azureClientCertificatePassword); v != "" {
+			password = []byte(v)
+		}
+		certs, key, err := ParseCertificates(certData, password)
+		if err != nil {
+			return nil, fmt.Errorf("failed to parse %q due to error %q. This may be due to a limitation of this module's certificate loader. Consider calling NewClientCertificateCredential instead", certPath, err.Error())
+		}
+		o := &ClientCertificateCredentialOptions{
+			AdditionallyAllowedTenants: additionalTenants,
+			ClientOptions:              options.ClientOptions,
+			DisableInstanceDiscovery:   options.DisableInstanceDiscovery,
+		}
+		if v, ok := os.LookupEnv(envVarSendCertChain); ok {
+			o.SendCertificateChain = v == "1" || strings.ToLower(v) == "true"
+		}
+		cred, err := NewClientCertificateCredential(tenantID, clientID, certs, key, o)
+		if err != nil {
+			return nil, err
+		}
+		return &EnvironmentCredential{cred: cred}, nil
+	}
+	if username := os.Getenv(azureUsername); username != "" {
+		if password := os.Getenv(azurePassword); password != "" {
+			log.Write(EventAuthentication, "EnvironmentCredential will authenticate with UsernamePasswordCredential")
+			o := &UsernamePasswordCredentialOptions{
+				AdditionallyAllowedTenants: additionalTenants,
+				ClientOptions:              options.ClientOptions,
+				DisableInstanceDiscovery:   options.DisableInstanceDiscovery,
+			}
+			cred, err := NewUsernamePasswordCredential(tenantID, clientID, username, password, o)
+			if err != nil {
+				return nil, err
+			}
+			return &EnvironmentCredential{cred: cred}, nil
+		}
+		return nil, errors.New("no value for AZURE_PASSWORD")
+	}
+	return nil, errors.New("incomplete environment variable configuration. Only AZURE_TENANT_ID and AZURE_CLIENT_ID are set")
+}
+
+// GetToken requests an access token from Microsoft Entra ID. This method is called automatically by Azure SDK clients.
+func (c *EnvironmentCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	return c.cred.GetToken(ctx, opts)
+}
+
+var _ azcore.TokenCredential = (*EnvironmentCredential)(nil)
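`NewEnvironmentCredential` selects a concrete credential purely from environment variables, in a fixed order: tenant and client ID are mandatory, then client secret wins over certificate, which wins over username/password. A stdlib sketch of that selection logic, parameterized over a lookup function so it's testable (the function name and return values are illustrative; the real constructor builds and returns actual credential types):

```go
package main

import (
	"errors"
	"fmt"
)

// pickCredential mirrors NewEnvironmentCredential's selection order.
// The return value names the credential type that would be constructed.
func pickCredential(getenv func(string) string) (string, error) {
	if getenv("AZURE_TENANT_ID") == "" {
		return "", errors.New("missing environment variable AZURE_TENANT_ID")
	}
	if getenv("AZURE_CLIENT_ID") == "" {
		return "", errors.New("missing environment variable AZURE_CLIENT_ID")
	}
	switch {
	case getenv("AZURE_CLIENT_SECRET") != "":
		return "ClientSecretCredential", nil
	case getenv("AZURE_CLIENT_CERTIFICATE_PATH") != "":
		return "ClientCertificateCredential", nil
	case getenv("AZURE_USERNAME") != "" && getenv("AZURE_PASSWORD") != "":
		return "UsernamePasswordCredential", nil
	}
	return "", errors.New("incomplete environment variable configuration")
}

func main() {
	vars := map[string]string{
		"AZURE_TENANT_ID":     "tenant",
		"AZURE_CLIENT_ID":     "client",
		"AZURE_CLIENT_SECRET": "s3cret",
	}
	// The real code reads os.Getenv; a lookup func keeps this sketch testable.
	cred, pickErr := pickCredential(func(k string) string { return vars[k] })
	fmt.Println(cred, pickErr)
}
```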

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/errors.go 🔗

@@ -0,0 +1,170 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"bytes"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo"
+	msal "github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors"
+)
+
+// getResponseFromError retrieves the response carried by
+// an AuthenticationFailedError or MSAL CallErr, if any
+func getResponseFromError(err error) *http.Response {
+	var a *AuthenticationFailedError
+	var c msal.CallErr
+	var res *http.Response
+	if errors.As(err, &c) {
+		res = c.Resp
+	} else if errors.As(err, &a) {
+		res = a.RawResponse
+	}
+	return res
+}
+
+// AuthenticationFailedError indicates an authentication request has failed.
+type AuthenticationFailedError struct {
+	// RawResponse is the HTTP response motivating the error, if available.
+	RawResponse *http.Response
+
+	credType string
+	message  string
+	err      error
+}
+
+func newAuthenticationFailedError(credType string, message string, resp *http.Response, err error) error {
+	return &AuthenticationFailedError{credType: credType, message: message, RawResponse: resp, err: err}
+}
+
+// Error implements the error interface. Note that the message contents are not contractual and can change over time.
+func (e *AuthenticationFailedError) Error() string {
+	if e.RawResponse == nil {
+		return e.credType + ": " + e.message
+	}
+	msg := &bytes.Buffer{}
+	fmt.Fprintf(msg, "%s authentication failed. %s\n", e.credType, e.message)
+	if e.RawResponse.Request != nil {
+		fmt.Fprintf(msg, "%s %s://%s%s\n", e.RawResponse.Request.Method, e.RawResponse.Request.URL.Scheme, e.RawResponse.Request.URL.Host, e.RawResponse.Request.URL.Path)
+	} else {
+		// this happens when the response is created from a custom HTTP transporter,
+		// which doesn't guarantee to bind the original request to the response
+		fmt.Fprintln(msg, "Request information not available")
+	}
+	fmt.Fprintln(msg, "--------------------------------------------------------------------------------")
+	fmt.Fprintf(msg, "RESPONSE %s\n", e.RawResponse.Status)
+	fmt.Fprintln(msg, "--------------------------------------------------------------------------------")
+	body, err := runtime.Payload(e.RawResponse)
+	switch {
+	case err != nil:
+		fmt.Fprintf(msg, "Error reading response body: %v", err)
+	case len(body) > 0:
+		if err := json.Indent(msg, body, "", "  "); err != nil {
+			// failed to pretty-print so just dump it verbatim
+			fmt.Fprint(msg, string(body))
+		}
+	default:
+		fmt.Fprint(msg, "Response contained no body")
+	}
+	fmt.Fprintln(msg, "\n--------------------------------------------------------------------------------")
+	var anchor string
+	switch e.credType {
+	case credNameAzureCLI:
+		anchor = "azure-cli"
+	case credNameAzureDeveloperCLI:
+		anchor = "azd"
+	case credNameAzurePipelines:
+		anchor = "apc"
+	case credNameCert:
+		anchor = "client-cert"
+	case credNameSecret:
+		anchor = "client-secret"
+	case credNameManagedIdentity:
+		anchor = "managed-id"
+	case credNameUserPassword:
+		anchor = "username-password"
+	case credNameWorkloadIdentity:
+		anchor = "workload"
+	}
+	if anchor != "" {
+		fmt.Fprintf(msg, "To troubleshoot, visit https://aka.ms/azsdk/go/identity/troubleshoot#%s", anchor)
+	}
+	return msg.String()
+}
+
+// NonRetriable indicates the request which provoked this error shouldn't be retried.
+func (*AuthenticationFailedError) NonRetriable() {
+	// marker method
+}
+
+var _ errorinfo.NonRetriable = (*AuthenticationFailedError)(nil)
+
+// authenticationRequiredError indicates a credential's Authenticate method must be called to acquire a token
+// because the credential requires user interaction and is configured not to request it automatically.
+type authenticationRequiredError struct {
+	credentialUnavailableError
+
+	// TokenRequestOptions for the required token. Pass this to the credential's Authenticate method.
+	TokenRequestOptions policy.TokenRequestOptions
+}
+
+func newauthenticationRequiredError(credType string, tro policy.TokenRequestOptions) error {
+	return &authenticationRequiredError{
+		credentialUnavailableError: credentialUnavailableError{
+			credType + " can't acquire a token without user interaction. Call Authenticate to authenticate a user interactively",
+		},
+		TokenRequestOptions: tro,
+	}
+}
+
+var (
+	_ credentialUnavailable  = (*authenticationRequiredError)(nil)
+	_ errorinfo.NonRetriable = (*authenticationRequiredError)(nil)
+)
+
+type credentialUnavailable interface {
+	error
+	credentialUnavailable()
+}
+
+type credentialUnavailableError struct {
+	message string
+}
+
+// newCredentialUnavailableError is an internal helper that ensures consistent error message formatting
+func newCredentialUnavailableError(credType, message string) error {
+	msg := fmt.Sprintf("%s: %s", credType, message)
+	return &credentialUnavailableError{msg}
+}
+
+// NewCredentialUnavailableError constructs an error indicating a credential can't attempt authentication
+// because it lacks required data or state. When [ChainedTokenCredential] receives this error it will try
+// its next credential, if any.
+func NewCredentialUnavailableError(message string) error {
+	return &credentialUnavailableError{message}
+}
+
+// Error implements the error interface. Note that the message contents are not contractual and can change over time.
+func (e *credentialUnavailableError) Error() string {
+	return e.message
+}
+
+// NonRetriable is a marker method indicating this error should not be retried. It has no implementation.
+func (*credentialUnavailableError) NonRetriable() {}
+
+func (*credentialUnavailableError) credentialUnavailable() {}
+
+var (
+	_ credentialUnavailable  = (*credentialUnavailableError)(nil)
+	_ errorinfo.NonRetriable = (*credentialUnavailableError)(nil)
+)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/go.work.sum 🔗

@@ -0,0 +1,60 @@
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.0-beta.1 h1:ODs3brnqQM99Tq1PffODpAViYv3Bf8zOg464MU7p5ew=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.0-beta.1/go.mod h1:3Ug6Qzto9anB6mGlEdgYMDF5zHQ+wwhEaYR4s17PHMw=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.0 h1:fb8kj/Dh4CSwgsOzHeZY4Xh68cFVbzXx+ONXGMY//4w=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.0/go.mod h1:uReU2sSxZExRPBAg3qKzmAucSi51+SP1OhohieR821Q=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0/go.mod h1:okt5dMMTOFjX/aovMlrjvvXoPMBVSPzk9185BT0+eZM=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.2/go.mod h1:yInRyqWXAuaPrgI7p70+lDDgh3mlBohis29jGMISnmc=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI=
+github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/keybase/dbus v0.0.0-20220506165403-5aa21ea2c23a/go.mod h1:YPNKjjE7Ubp9dTbnWvsP3HT+hYnY6TfXzubYTBeUxc8=
+github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/montanaflynn/stats v0.7.0/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
+github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
+github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
+github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
+github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
+github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
+github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
+github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
+golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
+golang.org/x/crypto v0.16.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
+golang.org/x/crypto v0.17.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4=
+golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
+golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
+golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
+golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
+golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY=
+golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
+golang.org/x/net v0.24.0/go.mod h1:2Q7sJY5mzlzWjKtYUEXSlBWCdyaioyXzRB2RtU8KVE8=
+golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
+golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
+golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
+golang.org/x/term v0.10.0/go.mod h1:lpqdcUyK/oCiQxvxVrppt5ggO2KCZ5QblwqPnfZ6d5o=
+golang.org/x/term v0.11.0/go.mod h1:zC9APTIj3jG3FdV/Ons+XE1riIZXG4aZ4GTHiPZJPIU=
+golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
+golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
+golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0=
+golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
+golang.org/x/term v0.19.0/go.mod h1:2CuTdWZ7KHSQwUzKva0cbMg6q2DMI3Mmxp+gKJbskEk=
+golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
+golang.org/x/term v0.21.0/go.mod h1:ooXLefLobQVslOqselCNF4SxFAaoS6KujMbsGzSDmX0=
+golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
+golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
+golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
+golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
+gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
+gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
+gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/interactive_browser_credential.go 🔗

@@ -0,0 +1,118 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+)
+
+const credNameBrowser = "InteractiveBrowserCredential"
+
+// InteractiveBrowserCredentialOptions contains optional parameters for InteractiveBrowserCredential.
+type InteractiveBrowserCredentialOptions struct {
+	azcore.ClientOptions
+
+	// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire
+	// tokens. Add the wildcard value "*" to allow the credential to acquire tokens for any tenant.
+	AdditionallyAllowedTenants []string
+
+	// authenticationRecord returned by a call to a credential's Authenticate method. Set this option
+	// to enable the credential to use data from a previous authentication.
+	authenticationRecord authenticationRecord
+
+	// ClientID is the ID of the application users will authenticate to.
+	// Defaults to the ID of an Azure development application.
+	ClientID string
+
+	// disableAutomaticAuthentication prevents the credential from automatically prompting the user to authenticate.
+	// When this option is true, GetToken will return authenticationRequiredError when user interaction is necessary
+	// to acquire a token.
+	disableAutomaticAuthentication bool
+
+	// DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
+	// private clouds such as Azure Stack. It determines whether the credential requests Microsoft Entra instance metadata
+	// from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
+	// the application responsible for ensuring the configured authority is valid and trustworthy.
+	DisableInstanceDiscovery bool
+
+	// LoginHint pre-populates the account prompt with a username. Users may choose to authenticate a different account.
+	LoginHint string
+
+	// RedirectURL is the URL Microsoft Entra ID will redirect to with the access token. This is required
+	// only when setting ClientID, and must match a redirect URI in the application's registration.
+	// Applications which have registered "http://localhost" as a redirect URI need not set this option.
+	RedirectURL string
+
+	// TenantID is the Microsoft Entra tenant the credential authenticates in. Defaults to the
+	// "organizations" tenant, which can authenticate work and school accounts.
+	TenantID string
+
+	// tokenCachePersistenceOptions enables persistent token caching when not nil.
+	tokenCachePersistenceOptions *tokenCachePersistenceOptions
+}
+
+func (o *InteractiveBrowserCredentialOptions) init() {
+	if o.TenantID == "" {
+		o.TenantID = organizationsTenantID
+	}
+	if o.ClientID == "" {
+		o.ClientID = developerSignOnClientID
+	}
+}
+
+// InteractiveBrowserCredential opens a browser to interactively authenticate a user.
+type InteractiveBrowserCredential struct {
+	client *publicClient
+}
+
+// NewInteractiveBrowserCredential constructs a new InteractiveBrowserCredential. Pass nil to accept default options.
+func NewInteractiveBrowserCredential(options *InteractiveBrowserCredentialOptions) (*InteractiveBrowserCredential, error) {
+	cp := InteractiveBrowserCredentialOptions{}
+	if options != nil {
+		cp = *options
+	}
+	cp.init()
+	msalOpts := publicClientOptions{
+		AdditionallyAllowedTenants:     cp.AdditionallyAllowedTenants,
+		ClientOptions:                  cp.ClientOptions,
+		DisableAutomaticAuthentication: cp.disableAutomaticAuthentication,
+		DisableInstanceDiscovery:       cp.DisableInstanceDiscovery,
+		LoginHint:                      cp.LoginHint,
+		Record:                         cp.authenticationRecord,
+		RedirectURL:                    cp.RedirectURL,
+		TokenCachePersistenceOptions:   cp.tokenCachePersistenceOptions,
+	}
+	c, err := newPublicClient(cp.TenantID, cp.ClientID, credNameBrowser, msalOpts)
+	if err != nil {
+		return nil, err
+	}
+	return &InteractiveBrowserCredential{client: c}, nil
+}
+
+// Authenticate a user via the default browser. Subsequent calls to GetToken will automatically use the returned AuthenticationRecord.
+func (c *InteractiveBrowserCredential) authenticate(ctx context.Context, opts *policy.TokenRequestOptions) (authenticationRecord, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameBrowser+"."+traceOpAuthenticate, c.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	tk, err := c.client.Authenticate(ctx, opts)
+	return tk, err
+}
+
+// GetToken requests an access token from Microsoft Entra ID. This method is called automatically by Azure SDK clients.
+func (c *InteractiveBrowserCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameBrowser+"."+traceOpGetToken, c.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	tk, err := c.client.GetToken(ctx, opts)
+	return tk, err
+}
+
+var _ azcore.TokenCredential = (*InteractiveBrowserCredential)(nil)
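
The file above closes with `var _ azcore.TokenCredential = (*InteractiveBrowserCredential)(nil)`, a zero-cost compile-time check that the type satisfies the interface. A minimal stdlib sketch of the same pattern (the `Greeter`/`loud` names are illustrative, not from the SDK):

```go
package main

import "fmt"

// Greeter is a stand-in for an interface such as azcore.TokenCredential.
type Greeter interface {
	Greet() string
}

type loud struct{}

func (loud) Greet() string { return "HELLO" }

// Compile-time assertion: the build fails if loud ever stops
// satisfying Greeter. The blank identifier means no runtime cost.
var _ Greeter = (*loud)(nil)

func main() {
	var g Greeter = loud{}
	fmt.Println(g.Greet())
}
```

Placing the assertion next to the type keeps interface drift from surfacing only at a distant call site.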

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/internal/exported.go 🔗

@@ -0,0 +1,18 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package internal
+
+// TokenCachePersistenceOptions contains options for persistent token caching
+type TokenCachePersistenceOptions struct {
+	// AllowUnencryptedStorage controls whether the cache should fall back to storing its data in plain text
+	// when encryption isn't possible. Setting this true doesn't disable encryption. The cache always attempts
+	// encryption before falling back to plaintext storage.
+	AllowUnencryptedStorage bool
+
+	// Name identifies the cache. Set this to isolate data from other applications.
+	Name string
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/internal/internal.go 🔗

@@ -0,0 +1,31 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package internal
+
+import (
+	"errors"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache"
+)
+
+var errMissingImport = errors.New("import github.com/Azure/azure-sdk-for-go/sdk/azidentity/cache to enable persistent caching")
+
+// NewCache constructs a persistent token cache when "o" isn't nil. Applications that intend to
+// use a persistent cache must first import the cache module, which will replace this function
+// with a platform-specific implementation.
+var NewCache = func(o *TokenCachePersistenceOptions, enableCAE bool) (cache.ExportReplace, error) {
+	if o == nil {
+		return nil, nil
+	}
+	return nil, errMissingImport
+}
+
+// CacheFilePath returns the path to the cache file for the given name.
+// Defining it in this package makes it available to azidentity tests.
+var CacheFilePath = func(name string) (string, error) {
+	return "", errMissingImport
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/logging.go 🔗

@@ -0,0 +1,14 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import "github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+
+// EventAuthentication entries contain information about authentication.
+// This includes information like the names of environment variables
+// used when obtaining credentials and the type of credential used.
+const EventAuthentication log.Event = "Authentication"
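
`EventAuthentication` classifies log entries so consumers can opt in per event and producers can skip formatting when nobody listens (the `log.Should` guard used elsewhere in this diff). A self-contained stdlib sketch of that gated-logging shape, with illustrative names:

```go
package main

import "fmt"

type Event string

const EventAuthentication Event = "Authentication"

var enabled = map[Event]bool{}

// Should reports whether an event class is enabled, letting callers
// avoid building expensive messages that would be discarded.
func Should(e Event) bool { return enabled[e] }

// Writef formats and emits a message only when its event is enabled.
func Writef(e Event, format string, a ...any) {
	if Should(e) {
		fmt.Printf(string(e)+": "+format+"\n", a...)
	}
}

func main() {
	Writef(EventAuthentication, "suppressed") // nothing printed yet
	enabled[EventAuthentication] = true
	Writef(EventAuthentication, "ManagedIdentityCredential will use %s", "IMDS")
}
```

In the real SDK the switchboard lives in `sdk/internal/log` and applications configure it through `azcore/log`; the sketch only shows the gating idea.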

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed_identity_client.go 🔗

@@ -0,0 +1,501 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+	"os"
+	"path/filepath"
+	"runtime"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	azruntime "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential"
+)
+
+const (
+	arcIMDSEndpoint          = "IMDS_ENDPOINT"
+	defaultIdentityClientID  = "DEFAULT_IDENTITY_CLIENT_ID"
+	identityEndpoint         = "IDENTITY_ENDPOINT"
+	identityHeader           = "IDENTITY_HEADER"
+	identityServerThumbprint = "IDENTITY_SERVER_THUMBPRINT"
+	headerMetadata           = "Metadata"
+	imdsEndpoint             = "http://169.254.169.254/metadata/identity/oauth2/token"
+	miResID                  = "mi_res_id"
+	msiEndpoint              = "MSI_ENDPOINT"
+	msiResID                 = "msi_res_id"
+	msiSecret                = "MSI_SECRET"
+	imdsAPIVersion           = "2018-02-01"
+	azureArcAPIVersion       = "2019-08-15"
+	qpClientID               = "client_id"
+	serviceFabricAPIVersion  = "2019-07-01-preview"
+)
+
+var imdsProbeTimeout = time.Second
+
+type msiType int
+
+const (
+	msiTypeAppService msiType = iota
+	msiTypeAzureArc
+	msiTypeAzureML
+	msiTypeCloudShell
+	msiTypeIMDS
+	msiTypeServiceFabric
+)
+
+type managedIdentityClient struct {
+	azClient  *azcore.Client
+	endpoint  string
+	id        ManagedIDKind
+	msiType   msiType
+	probeIMDS bool
+}
+
+// arcKeyDirectory returns the directory expected to contain Azure Arc keys
+var arcKeyDirectory = func() (string, error) {
+	switch runtime.GOOS {
+	case "linux":
+		return "/var/opt/azcmagent/tokens", nil
+	case "windows":
+		pd := os.Getenv("ProgramData")
+		if pd == "" {
+			return "", errors.New("environment variable ProgramData has no value")
+		}
+		return filepath.Join(pd, "AzureConnectedMachineAgent", "Tokens"), nil
+	default:
+		return "", fmt.Errorf("unsupported OS %q", runtime.GOOS)
+	}
+}
+
+type wrappedNumber json.Number
+
+func (n *wrappedNumber) UnmarshalJSON(b []byte) error {
+	c := string(b)
+	if c == "\"\"" {
+		return nil
+	}
+	return json.Unmarshal(b, (*json.Number)(n))
+}
+
+// setIMDSRetryOptionDefaults sets zero-valued fields to default values appropriate for IMDS
+func setIMDSRetryOptionDefaults(o *policy.RetryOptions) {
+	if o.MaxRetries == 0 {
+		o.MaxRetries = 5
+	}
+	if o.MaxRetryDelay == 0 {
+		o.MaxRetryDelay = 1 * time.Minute
+	}
+	if o.RetryDelay == 0 {
+		o.RetryDelay = 2 * time.Second
+	}
+	if o.StatusCodes == nil {
+		o.StatusCodes = []int{
+			// IMDS docs recommend retrying 404, 410, 429 and 5xx
+			// https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/how-to-use-vm-token#error-handling
+			http.StatusNotFound,                      // 404
+			http.StatusGone,                          // 410
+			http.StatusTooManyRequests,               // 429
+			http.StatusInternalServerError,           // 500
+			http.StatusNotImplemented,                // 501
+			http.StatusBadGateway,                    // 502
+			http.StatusServiceUnavailable,            // 503
+			http.StatusGatewayTimeout,                // 504
+			http.StatusHTTPVersionNotSupported,       // 505
+			http.StatusVariantAlsoNegotiates,         // 506
+			http.StatusInsufficientStorage,           // 507
+			http.StatusLoopDetected,                  // 508
+			http.StatusNotExtended,                   // 510
+			http.StatusNetworkAuthenticationRequired, // 511
+		}
+	}
+	if o.TryTimeout == 0 {
+		o.TryTimeout = 1 * time.Minute
+	}
+}
+
+// newManagedIdentityClient creates a new instance of the ManagedIdentityClient with the ManagedIdentityCredentialOptions
+// that are passed into it along with a default pipeline.
+// options: ManagedIdentityCredentialOptions configure policies for the pipeline and the authority host that
+// will be used to retrieve tokens and authenticate
+func newManagedIdentityClient(options *ManagedIdentityCredentialOptions) (*managedIdentityClient, error) {
+	if options == nil {
+		options = &ManagedIdentityCredentialOptions{}
+	}
+	cp := options.ClientOptions
+	c := managedIdentityClient{id: options.ID, endpoint: imdsEndpoint, msiType: msiTypeIMDS}
+	env := "IMDS"
+	if endpoint, ok := os.LookupEnv(identityEndpoint); ok {
+		if _, ok := os.LookupEnv(identityHeader); ok {
+			if _, ok := os.LookupEnv(identityServerThumbprint); ok {
+				env = "Service Fabric"
+				c.endpoint = endpoint
+				c.msiType = msiTypeServiceFabric
+			} else {
+				env = "App Service"
+				c.endpoint = endpoint
+				c.msiType = msiTypeAppService
+			}
+		} else if _, ok := os.LookupEnv(arcIMDSEndpoint); ok {
+			env = "Azure Arc"
+			c.endpoint = endpoint
+			c.msiType = msiTypeAzureArc
+		}
+	} else if endpoint, ok := os.LookupEnv(msiEndpoint); ok {
+		c.endpoint = endpoint
+		if _, ok := os.LookupEnv(msiSecret); ok {
+			env = "Azure ML"
+			c.msiType = msiTypeAzureML
+		} else {
+			env = "Cloud Shell"
+			c.msiType = msiTypeCloudShell
+		}
+	} else {
+		c.probeIMDS = options.dac
+		setIMDSRetryOptionDefaults(&cp.Retry)
+	}
+
+	client, err := azcore.NewClient(module, version, azruntime.PipelineOptions{
+		Tracing: azruntime.TracingOptions{
+			Namespace: traceNamespace,
+		},
+	}, &cp)
+	if err != nil {
+		return nil, err
+	}
+	c.azClient = client
+
+	if log.Should(EventAuthentication) {
+		log.Writef(EventAuthentication, "Managed Identity Credential will use %s managed identity", env)
+	}
+
+	return &c, nil
+}
+
+// provideToken acquires a token for MSAL's confidential.Client, which caches the token
+func (c *managedIdentityClient) provideToken(ctx context.Context, params confidential.TokenProviderParameters) (confidential.TokenProviderResult, error) {
+	result := confidential.TokenProviderResult{}
+	tk, err := c.authenticate(ctx, c.id, params.Scopes)
+	if err == nil {
+		result.AccessToken = tk.Token
+		result.ExpiresInSeconds = int(time.Until(tk.ExpiresOn).Seconds())
+	}
+	return result, err
+}
+
+// authenticate acquires an access token
+func (c *managedIdentityClient) authenticate(ctx context.Context, id ManagedIDKind, scopes []string) (azcore.AccessToken, error) {
+	// no need to synchronize around this value because it's true only when DefaultAzureCredential constructed the client,
+	// and in that case ChainedTokenCredential.GetToken synchronizes goroutines that would execute this block
+	if c.probeIMDS {
+		cx, cancel := context.WithTimeout(ctx, imdsProbeTimeout)
+		defer cancel()
+		cx = policy.WithRetryOptions(cx, policy.RetryOptions{MaxRetries: -1})
+		req, err := azruntime.NewRequest(cx, http.MethodGet, c.endpoint)
+		if err == nil {
+			_, err = c.azClient.Pipeline().Do(req)
+		}
+		if err != nil {
+			msg := err.Error()
+			if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
+				msg = "managed identity timed out. See https://aka.ms/azsdk/go/identity/troubleshoot#dac for more information"
+			}
+			return azcore.AccessToken{}, newCredentialUnavailableError(credNameManagedIdentity, msg)
+		}
+		// send normal token requests from now on because something responded
+		c.probeIMDS = false
+	}
+
+	msg, err := c.createAuthRequest(ctx, id, scopes)
+	if err != nil {
+		return azcore.AccessToken{}, err
+	}
+
+	resp, err := c.azClient.Pipeline().Do(msg)
+	if err != nil {
+		return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, err.Error(), nil, err)
+	}
+
+	if azruntime.HasStatusCode(resp, http.StatusOK, http.StatusCreated) {
+		return c.createAccessToken(resp)
+	}
+
+	if c.msiType == msiTypeIMDS {
+		switch resp.StatusCode {
+		case http.StatusBadRequest:
+			if id != nil {
+				return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, "the requested identity isn't assigned to this resource", resp, nil)
+			}
+			msg := "failed to authenticate a system assigned identity"
+			if body, err := azruntime.Payload(resp); err == nil && len(body) > 0 {
+				msg += fmt.Sprintf(". The endpoint responded with %s", body)
+			}
+			return azcore.AccessToken{}, newCredentialUnavailableError(credNameManagedIdentity, msg)
+		case http.StatusForbidden:
+			// Docker Desktop runs a proxy that responds 403 to IMDS token requests. If we get that response,
+			// we return credentialUnavailableError so credential chains continue to their next credential
+			body, err := azruntime.Payload(resp)
+			if err == nil && strings.Contains(string(body), "unreachable") {
+				return azcore.AccessToken{}, newCredentialUnavailableError(credNameManagedIdentity, fmt.Sprintf("unexpected response %q", string(body)))
+			}
+		}
+	}
+
+	return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, "authentication failed", resp, nil)
+}
+
+func (c *managedIdentityClient) createAccessToken(res *http.Response) (azcore.AccessToken, error) {
+	value := struct {
+		// these are the only fields that we use
+		Token        string        `json:"access_token,omitempty"`
+		RefreshToken string        `json:"refresh_token,omitempty"`
+		ExpiresIn    wrappedNumber `json:"expires_in,omitempty"` // this field should always return the number of seconds for which a token is valid
+		ExpiresOn    interface{}   `json:"expires_on,omitempty"` // the value returned in this field varies between a number and a date string
+	}{}
+	if err := azruntime.UnmarshalAsJSON(res, &value); err != nil {
+		return azcore.AccessToken{}, fmt.Errorf("internal AccessToken: %v", err)
+	}
+	if value.ExpiresIn != "" {
+		expiresIn, err := json.Number(value.ExpiresIn).Int64()
+		if err != nil {
+			return azcore.AccessToken{}, err
+		}
+		return azcore.AccessToken{Token: value.Token, ExpiresOn: time.Now().Add(time.Second * time.Duration(expiresIn)).UTC()}, nil
+	}
+	switch v := value.ExpiresOn.(type) {
+	case float64:
+		return azcore.AccessToken{Token: value.Token, ExpiresOn: time.Unix(int64(v), 0).UTC()}, nil
+	case string:
+		if expiresOn, err := strconv.Atoi(v); err == nil {
+			return azcore.AccessToken{Token: value.Token, ExpiresOn: time.Unix(int64(expiresOn), 0).UTC()}, nil
+		}
+		return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, "unexpected expires_on value: "+v, res, nil)
+	default:
+		msg := fmt.Sprintf("unsupported type received in expires_on: %T, %v", v, v)
+		return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, msg, res, nil)
+	}
+}
+
+func (c *managedIdentityClient) createAuthRequest(ctx context.Context, id ManagedIDKind, scopes []string) (*policy.Request, error) {
+	switch c.msiType {
+	case msiTypeIMDS:
+		return c.createIMDSAuthRequest(ctx, id, scopes)
+	case msiTypeAppService:
+		return c.createAppServiceAuthRequest(ctx, id, scopes)
+	case msiTypeAzureArc:
+		// need to perform preliminary request to retrieve the secret key challenge provided by the HIMDS service
+		key, err := c.getAzureArcSecretKey(ctx, scopes)
+		if err != nil {
+			msg := fmt.Sprintf("failed to retrieve secret key from the identity endpoint: %v", err)
+			return nil, newAuthenticationFailedError(credNameManagedIdentity, msg, nil, err)
+		}
+		return c.createAzureArcAuthRequest(ctx, id, scopes, key)
+	case msiTypeAzureML:
+		return c.createAzureMLAuthRequest(ctx, id, scopes)
+	case msiTypeServiceFabric:
+		return c.createServiceFabricAuthRequest(ctx, id, scopes)
+	case msiTypeCloudShell:
+		return c.createCloudShellAuthRequest(ctx, id, scopes)
+	default:
+		return nil, newCredentialUnavailableError(credNameManagedIdentity, "managed identity isn't supported in this environment")
+	}
+}
+
+func (c *managedIdentityClient) createIMDSAuthRequest(ctx context.Context, id ManagedIDKind, scopes []string) (*policy.Request, error) {
+	request, err := azruntime.NewRequest(ctx, http.MethodGet, c.endpoint)
+	if err != nil {
+		return nil, err
+	}
+	request.Raw().Header.Set(headerMetadata, "true")
+	q := request.Raw().URL.Query()
+	q.Add("api-version", imdsAPIVersion)
+	q.Add("resource", strings.Join(scopes, " "))
+	if id != nil {
+		if id.idKind() == miResourceID {
+			q.Add(msiResID, id.String())
+		} else {
+			q.Add(qpClientID, id.String())
+		}
+	}
+	request.Raw().URL.RawQuery = q.Encode()
+	return request, nil
+}
+
+func (c *managedIdentityClient) createAppServiceAuthRequest(ctx context.Context, id ManagedIDKind, scopes []string) (*policy.Request, error) {
+	request, err := azruntime.NewRequest(ctx, http.MethodGet, c.endpoint)
+	if err != nil {
+		return nil, err
+	}
+	request.Raw().Header.Set("X-IDENTITY-HEADER", os.Getenv(identityHeader))
+	q := request.Raw().URL.Query()
+	q.Add("api-version", "2019-08-01")
+	q.Add("resource", scopes[0])
+	if id != nil {
+		if id.idKind() == miResourceID {
+			q.Add(miResID, id.String())
+		} else {
+			q.Add(qpClientID, id.String())
+		}
+	}
+	request.Raw().URL.RawQuery = q.Encode()
+	return request, nil
+}
+
+func (c *managedIdentityClient) createAzureMLAuthRequest(ctx context.Context, id ManagedIDKind, scopes []string) (*policy.Request, error) {
+	request, err := azruntime.NewRequest(ctx, http.MethodGet, c.endpoint)
+	if err != nil {
+		return nil, err
+	}
+	request.Raw().Header.Set("secret", os.Getenv(msiSecret))
+	q := request.Raw().URL.Query()
+	q.Add("api-version", "2017-09-01")
+	q.Add("resource", strings.Join(scopes, " "))
+	q.Add("clientid", os.Getenv(defaultIdentityClientID))
+	if id != nil {
+		if id.idKind() == miResourceID {
+			log.Write(EventAuthentication, "WARNING: Azure ML doesn't support specifying a managed identity by resource ID")
+			q.Set("clientid", "")
+			q.Set(miResID, id.String())
+		} else {
+			q.Set("clientid", id.String())
+		}
+	}
+	request.Raw().URL.RawQuery = q.Encode()
+	return request, nil
+}
+
+func (c *managedIdentityClient) createServiceFabricAuthRequest(ctx context.Context, id ManagedIDKind, scopes []string) (*policy.Request, error) {
+	request, err := azruntime.NewRequest(ctx, http.MethodGet, c.endpoint)
+	if err != nil {
+		return nil, err
+	}
+	q := request.Raw().URL.Query()
+	request.Raw().Header.Set("Accept", "application/json")
+	request.Raw().Header.Set("Secret", os.Getenv(identityHeader))
+	q.Add("api-version", serviceFabricAPIVersion)
+	q.Add("resource", strings.Join(scopes, " "))
+	if id != nil {
+		log.Write(EventAuthentication, "WARNING: Service Fabric doesn't support selecting a user-assigned identity at runtime")
+		if id.idKind() == miResourceID {
+			q.Add(miResID, id.String())
+		} else {
+			q.Add(qpClientID, id.String())
+		}
+	}
+	request.Raw().URL.RawQuery = q.Encode()
+	return request, nil
+}
+
+func (c *managedIdentityClient) getAzureArcSecretKey(ctx context.Context, resources []string) (string, error) {
+	// create the request to retrieve the secret key challenge provided by the HIMDS service
+	request, err := azruntime.NewRequest(ctx, http.MethodGet, c.endpoint)
+	if err != nil {
+		return "", err
+	}
+	request.Raw().Header.Set(headerMetadata, "true")
+	q := request.Raw().URL.Query()
+	q.Add("api-version", azureArcAPIVersion)
+	q.Add("resource", strings.Join(resources, " "))
+	request.Raw().URL.RawQuery = q.Encode()
+	// send the initial request to get the short-lived secret key
+	response, err := c.azClient.Pipeline().Do(request)
+	if err != nil {
+		return "", err
+	}
+	// the endpoint is expected to return a 401 with the WWW-Authenticate header set to the location
+	// of the secret key file. Any other status code indicates an error in the request.
+	if response.StatusCode != 401 {
+		msg := fmt.Sprintf("expected a 401 response, received %d", response.StatusCode)
+		return "", newAuthenticationFailedError(credNameManagedIdentity, msg, response, nil)
+	}
+	header := response.Header.Get("WWW-Authenticate")
+	if len(header) == 0 {
+		return "", newAuthenticationFailedError(credNameManagedIdentity, "HIMDS response has no WWW-Authenticate header", nil, nil)
+	}
+	// the WWW-Authenticate header is expected in the following format: Basic realm=/some/file/path.key
+	_, p, found := strings.Cut(header, "=")
+	if !found {
+		return "", newAuthenticationFailedError(credNameManagedIdentity, "unexpected WWW-Authenticate header from HIMDS: "+header, nil, nil)
+	}
+	expected, err := arcKeyDirectory()
+	if err != nil {
+		return "", err
+	}
+	if filepath.Dir(p) != expected || !strings.HasSuffix(p, ".key") {
+		return "", newAuthenticationFailedError(credNameManagedIdentity, "unexpected file path from HIMDS service: "+p, nil, nil)
+	}
+	f, err := os.Stat(p)
+	if err != nil {
+		return "", newAuthenticationFailedError(credNameManagedIdentity, fmt.Sprintf("could not stat %q: %v", p, err), nil, nil)
+	}
+	if s := f.Size(); s > 4096 {
+		return "", newAuthenticationFailedError(credNameManagedIdentity, fmt.Sprintf("key is too large (%d bytes)", s), nil, nil)
+	}
+	key, err := os.ReadFile(p)
+	if err != nil {
+		return "", newAuthenticationFailedError(credNameManagedIdentity, fmt.Sprintf("could not read %q: %v", p, err), nil, nil)
+	}
+	return string(key), nil
+}
+
+func (c *managedIdentityClient) createAzureArcAuthRequest(ctx context.Context, id ManagedIDKind, resources []string, key string) (*policy.Request, error) {
+	request, err := azruntime.NewRequest(ctx, http.MethodGet, c.endpoint)
+	if err != nil {
+		return nil, err
+	}
+	request.Raw().Header.Set(headerMetadata, "true")
+	request.Raw().Header.Set("Authorization", fmt.Sprintf("Basic %s", key))
+	q := request.Raw().URL.Query()
+	q.Add("api-version", azureArcAPIVersion)
+	q.Add("resource", strings.Join(resources, " "))
+	if id != nil {
+		log.Write(EventAuthentication, "WARNING: Azure Arc doesn't support user-assigned managed identities")
+		if id.idKind() == miResourceID {
+			q.Add(miResID, id.String())
+		} else {
+			q.Add(qpClientID, id.String())
+		}
+	}
+	request.Raw().URL.RawQuery = q.Encode()
+	return request, nil
+}
+
+func (c *managedIdentityClient) createCloudShellAuthRequest(ctx context.Context, id ManagedIDKind, scopes []string) (*policy.Request, error) {
+	request, err := azruntime.NewRequest(ctx, http.MethodPost, c.endpoint)
+	if err != nil {
+		return nil, err
+	}
+	request.Raw().Header.Set(headerMetadata, "true")
+	data := url.Values{}
+	data.Set("resource", strings.Join(scopes, " "))
+	dataEncoded := data.Encode()
+	body := streaming.NopCloser(strings.NewReader(dataEncoded))
+	if err := request.SetBody(body, "application/x-www-form-urlencoded"); err != nil {
+		return nil, err
+	}
+	if id != nil {
+		log.Write(EventAuthentication, "WARNING: Cloud Shell doesn't support user-assigned managed identities")
+		q := request.Raw().URL.Query()
+		if id.idKind() == miResourceID {
+			q.Add(miResID, id.String())
+		} else {
+			q.Add(qpClientID, id.String())
+		}
+	}
+	return request, nil
+}
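
`createAccessToken` above tolerates two shapes for `expires_on`: some managed identity endpoints return a JSON number, others a numeric string. A standalone sketch of that shape-tolerant decoding (the `parseExpiresOn` helper is illustrative, not the SDK's):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
	"time"
)

// parseExpiresOn mirrors the switch in createAccessToken: decode into
// `any`, then accept either a JSON number (float64) or a numeric string.
func parseExpiresOn(raw []byte) (time.Time, error) {
	var v struct {
		ExpiresOn any `json:"expires_on"`
	}
	if err := json.Unmarshal(raw, &v); err != nil {
		return time.Time{}, err
	}
	switch t := v.ExpiresOn.(type) {
	case float64:
		return time.Unix(int64(t), 0).UTC(), nil
	case string:
		if n, err := strconv.Atoi(t); err == nil {
			return time.Unix(int64(n), 0).UTC(), nil
		}
		return time.Time{}, fmt.Errorf("unexpected expires_on value: %s", t)
	default:
		return time.Time{}, fmt.Errorf("unsupported expires_on type %T", t)
	}
}

func main() {
	a, _ := parseExpiresOn([]byte(`{"expires_on": 1700000000}`))
	b, _ := parseExpiresOn([]byte(`{"expires_on": "1700000000"}`))
	fmt.Println(a.Equal(b)) // both forms decode to the same instant
}
```

Decoding into `any` and switching on the concrete type keeps one struct working against every endpoint variant instead of defining a type per service.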

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed_identity_credential.go 🔗

@@ -0,0 +1,128 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"fmt"
+	"strings"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential"
+)
+
+const credNameManagedIdentity = "ManagedIdentityCredential"
+
+type managedIdentityIDKind int
+
+const (
+	miClientID   managedIdentityIDKind = 0
+	miResourceID managedIdentityIDKind = 1
+)
+
+// ManagedIDKind identifies the ID of a managed identity as either a client or resource ID
+type ManagedIDKind interface {
+	fmt.Stringer
+	idKind() managedIdentityIDKind
+}
+
+// ClientID is the client ID of a user-assigned managed identity.
+type ClientID string
+
+func (ClientID) idKind() managedIdentityIDKind {
+	return miClientID
+}
+
+// String returns the string value of the ID.
+func (c ClientID) String() string {
+	return string(c)
+}
+
+// ResourceID is the resource ID of a user-assigned managed identity.
+type ResourceID string
+
+func (ResourceID) idKind() managedIdentityIDKind {
+	return miResourceID
+}
+
+// String returns the string value of the ID.
+func (r ResourceID) String() string {
+	return string(r)
+}
+
+// ManagedIdentityCredentialOptions contains optional parameters for ManagedIdentityCredential.
+type ManagedIdentityCredentialOptions struct {
+	azcore.ClientOptions
+
+	// ID is the ID of a managed identity the credential should authenticate. Set this field to use a specific identity
+	// instead of the hosting environment's default. The value may be the identity's client ID or resource ID, but note that
+	// some platforms don't accept resource IDs.
+	ID ManagedIDKind
+
+	// dac indicates whether the credential is part of DefaultAzureCredential. When true, and the environment doesn't have
+	// configuration for a specific managed identity API, the credential tries to determine whether IMDS is available before
+	// sending its first token request. It does this by sending a malformed request with a short timeout. Any response to that
+	// request is taken to mean IMDS is available, in which case the credential will send ordinary token requests thereafter
+	// with no special timeout. The purpose of this behavior is to prevent a very long timeout when IMDS isn't available.
+	dac bool
+}
+
+// ManagedIdentityCredential authenticates an Azure managed identity in any hosting environment supporting managed identities.
+// This credential authenticates a system-assigned identity by default. Use ManagedIdentityCredentialOptions.ID to specify a
+// user-assigned identity. See Microsoft Entra ID documentation for more information about managed identities:
+// https://learn.microsoft.com/entra/identity/managed-identities-azure-resources/overview
+type ManagedIdentityCredential struct {
+	client *confidentialClient
+	mic    *managedIdentityClient
+}
+
+// NewManagedIdentityCredential creates a ManagedIdentityCredential. Pass nil to accept default options.
+func NewManagedIdentityCredential(options *ManagedIdentityCredentialOptions) (*ManagedIdentityCredential, error) {
+	if options == nil {
+		options = &ManagedIdentityCredentialOptions{}
+	}
+	mic, err := newManagedIdentityClient(options)
+	if err != nil {
+		return nil, err
+	}
+	cred := confidential.NewCredFromTokenProvider(mic.provideToken)
+
+	// It's okay to give MSAL an invalid client ID because MSAL will use it only as part of a cache key.
+	// ManagedIdentityClient handles all the details of authentication and won't receive this value from MSAL.
+	clientID := "SYSTEM-ASSIGNED-MANAGED-IDENTITY"
+	if options.ID != nil {
+		clientID = options.ID.String()
+	}
+	// similarly, it's okay to give MSAL an incorrect tenant because MSAL won't use the value
+	c, err := newConfidentialClient("common", clientID, credNameManagedIdentity, cred, confidentialClientOptions{
+		ClientOptions: options.ClientOptions,
+	})
+	if err != nil {
+		return nil, err
+	}
+	return &ManagedIdentityCredential{client: c, mic: mic}, nil
+}
+
+// GetToken requests an access token from the hosting environment. This method is called automatically by Azure SDK clients.
+func (c *ManagedIdentityCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameManagedIdentity+"."+traceOpGetToken, c.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+
+	if len(opts.Scopes) != 1 {
+		err = fmt.Errorf("%s.GetToken() requires exactly one scope", credNameManagedIdentity)
+		return azcore.AccessToken{}, err
+	}
+	// managed identity endpoints require a Microsoft Entra ID v1 resource (i.e. token audience), not a v2 scope, so we remove "/.default" here
+	opts.Scopes = []string{strings.TrimSuffix(opts.Scopes[0], defaultSuffix)}
+	tk, err := c.client.GetToken(ctx, opts)
+	return tk, err
+}
+
+var _ azcore.TokenCredential = (*ManagedIdentityCredential)(nil)
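The comment in `GetToken` above notes that managed identity endpoints want a Microsoft Entra v1 resource (a token audience) rather than a v2 scope, so the credential strips the `/.default` suffix before requesting a token. A minimal stdlib-only sketch of that conversion (the `toV1Resource` helper name is illustrative, not part of the SDK):

```go
package main

import (
	"fmt"
	"strings"
)

// defaultSuffix mirrors the suffix azidentity trims from v2 scopes.
const defaultSuffix = "/.default"

// toV1Resource converts a Microsoft Entra v2 scope such as
// "https://storage.azure.com/.default" into the v1 resource (audience)
// form that managed identity endpoints expect.
func toV1Resource(scope string) string {
	return strings.TrimSuffix(scope, defaultSuffix)
}

func main() {
	fmt.Println(toV1Resource("https://storage.azure.com/.default"))
	// a value without the suffix passes through unchanged
	fmt.Println(toV1Resource("https://vault.azure.net"))
}
```

Because `TrimSuffix` is a no-op when the suffix is absent, callers can pass either form safely, which is why the SDK applies it unconditionally.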

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/on_behalf_of_credential.go 🔗

@@ -0,0 +1,113 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"crypto"
+	"crypto/x509"
+	"errors"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential"
+)
+
+const credNameOBO = "OnBehalfOfCredential"
+
+// OnBehalfOfCredential authenticates a service principal via the on-behalf-of flow. This is typically used by
+// middle-tier services that authorize requests to other services with a delegated user identity. Because this
+// is not an interactive authentication flow, an application using it must have admin consent for any delegated
+// permissions before requesting tokens for them. See [Microsoft Entra ID documentation] for more details.
+//
+// [Microsoft Entra ID documentation]: https://learn.microsoft.com/entra/identity-platform/v2-oauth2-on-behalf-of-flow
+type OnBehalfOfCredential struct {
+	client *confidentialClient
+}
+
+// OnBehalfOfCredentialOptions contains optional parameters for OnBehalfOfCredential
+type OnBehalfOfCredentialOptions struct {
+	azcore.ClientOptions
+
+	// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens.
+	// Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the
+	// application is registered.
+	AdditionallyAllowedTenants []string
+
+	// DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
+	// private clouds such as Azure Stack. It determines whether the credential requests Microsoft Entra instance metadata
+	// from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
+	// the application responsible for ensuring the configured authority is valid and trustworthy.
+	DisableInstanceDiscovery bool
+
+	// SendCertificateChain applies only when the credential is configured to authenticate with a certificate.
+	// This setting controls whether the credential sends the public certificate chain in the x5c header of each
+	// token request's JWT. This is required for, and only used in, Subject Name/Issuer (SNI) authentication.
+	SendCertificateChain bool
+}
+
+// NewOnBehalfOfCredentialWithCertificate constructs an OnBehalfOfCredential that authenticates with a certificate.
+// See [ParseCertificates] for help loading a certificate.
+func NewOnBehalfOfCredentialWithCertificate(tenantID, clientID, userAssertion string, certs []*x509.Certificate, key crypto.PrivateKey, options *OnBehalfOfCredentialOptions) (*OnBehalfOfCredential, error) {
+	cred, err := confidential.NewCredFromCert(certs, key)
+	if err != nil {
+		return nil, err
+	}
+	return newOnBehalfOfCredential(tenantID, clientID, userAssertion, cred, options)
+}
+
+// NewOnBehalfOfCredentialWithClientAssertions constructs an OnBehalfOfCredential that authenticates with client assertions.
+// userAssertion is the user's access token for the application. The getAssertion function should return client assertions
+// that authenticate the application to Microsoft Entra ID, such as federated credentials.
+func NewOnBehalfOfCredentialWithClientAssertions(tenantID, clientID, userAssertion string, getAssertion func(context.Context) (string, error), options *OnBehalfOfCredentialOptions) (*OnBehalfOfCredential, error) {
+	if getAssertion == nil {
+		return nil, errors.New("getAssertion can't be nil. It must be a function that returns client assertions")
+	}
+	cred := confidential.NewCredFromAssertionCallback(func(ctx context.Context, _ confidential.AssertionRequestOptions) (string, error) {
+		return getAssertion(ctx)
+	})
+	return newOnBehalfOfCredential(tenantID, clientID, userAssertion, cred, options)
+}
+
+// NewOnBehalfOfCredentialWithSecret constructs an OnBehalfOfCredential that authenticates with a client secret.
+func NewOnBehalfOfCredentialWithSecret(tenantID, clientID, userAssertion, clientSecret string, options *OnBehalfOfCredentialOptions) (*OnBehalfOfCredential, error) {
+	cred, err := confidential.NewCredFromSecret(clientSecret)
+	if err != nil {
+		return nil, err
+	}
+	return newOnBehalfOfCredential(tenantID, clientID, userAssertion, cred, options)
+}
+
+func newOnBehalfOfCredential(tenantID, clientID, userAssertion string, cred confidential.Credential, options *OnBehalfOfCredentialOptions) (*OnBehalfOfCredential, error) {
+	if options == nil {
+		options = &OnBehalfOfCredentialOptions{}
+	}
+	opts := confidentialClientOptions{
+		AdditionallyAllowedTenants: options.AdditionallyAllowedTenants,
+		Assertion:                  userAssertion,
+		ClientOptions:              options.ClientOptions,
+		DisableInstanceDiscovery:   options.DisableInstanceDiscovery,
+		SendX5C:                    options.SendCertificateChain,
+	}
+	c, err := newConfidentialClient(tenantID, clientID, credNameOBO, cred, opts)
+	if err != nil {
+		return nil, err
+	}
+	return &OnBehalfOfCredential{c}, nil
+}
+
+// GetToken requests an access token from Microsoft Entra ID. This method is called automatically by Azure SDK clients.
+func (o *OnBehalfOfCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameOBO+"."+traceOpGetToken, o.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	tk, err := o.client.GetToken(ctx, opts)
+	return tk, err
+}
+
+var _ azcore.TokenCredential = (*OnBehalfOfCredential)(nil)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/public_client.go 🔗

@@ -0,0 +1,273 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+	"strings"
+	"sync"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/azidentity/internal"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/public"
+
+	// this import ensures well-known configurations in azcore/cloud have ARM audiences for Authenticate()
+	_ "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime"
+)
+
+type publicClientOptions struct {
+	azcore.ClientOptions
+
+	AdditionallyAllowedTenants     []string
+	DeviceCodePrompt               func(context.Context, DeviceCodeMessage) error
+	DisableAutomaticAuthentication bool
+	DisableInstanceDiscovery       bool
+	LoginHint, RedirectURL         string
+	Record                         authenticationRecord
+	TokenCachePersistenceOptions   *tokenCachePersistenceOptions
+	Username, Password             string
+}
+
+// publicClient wraps the MSAL public client
+type publicClient struct {
+	cae, noCAE               msalPublicClient
+	caeMu, noCAEMu, clientMu *sync.Mutex
+	clientID, tenantID       string
+	defaultScope             []string
+	host                     string
+	name                     string
+	opts                     publicClientOptions
+	record                   authenticationRecord
+	azClient                 *azcore.Client
+}
+
+var errScopeRequired = errors.New("authenticating in this environment requires specifying a scope in TokenRequestOptions")
+
+func newPublicClient(tenantID, clientID, name string, o publicClientOptions) (*publicClient, error) {
+	if !validTenantID(tenantID) {
+		return nil, errInvalidTenantID
+	}
+	host, err := setAuthorityHost(o.Cloud)
+	if err != nil {
+		return nil, err
+	}
+	// if the application specified a cloud configuration, use its ARM audience as the default scope for Authenticate()
+	audience := o.Cloud.Services[cloud.ResourceManager].Audience
+	if audience == "" {
+		// no cloud configuration, or no ARM audience, specified; try to map the host to a well-known one (all of which have a trailing slash)
+		if !strings.HasSuffix(host, "/") {
+			host += "/"
+		}
+		switch host {
+		case cloud.AzureChina.ActiveDirectoryAuthorityHost:
+			audience = cloud.AzureChina.Services[cloud.ResourceManager].Audience
+		case cloud.AzureGovernment.ActiveDirectoryAuthorityHost:
+			audience = cloud.AzureGovernment.Services[cloud.ResourceManager].Audience
+		case cloud.AzurePublic.ActiveDirectoryAuthorityHost:
+			audience = cloud.AzurePublic.Services[cloud.ResourceManager].Audience
+		}
+	}
+	// if we didn't come up with an audience, the application will have to specify a scope for Authenticate()
+	var defaultScope []string
+	if audience != "" {
+		defaultScope = []string{audience + defaultSuffix}
+	}
+	client, err := azcore.NewClient(module, version, runtime.PipelineOptions{
+		Tracing: runtime.TracingOptions{
+			Namespace: traceNamespace,
+		},
+	}, &o.ClientOptions)
+	if err != nil {
+		return nil, err
+	}
+	o.AdditionallyAllowedTenants = resolveAdditionalTenants(o.AdditionallyAllowedTenants)
+	return &publicClient{
+		caeMu:        &sync.Mutex{},
+		clientID:     clientID,
+		clientMu:     &sync.Mutex{},
+		defaultScope: defaultScope,
+		host:         host,
+		name:         name,
+		noCAEMu:      &sync.Mutex{},
+		opts:         o,
+		record:       o.Record,
+		tenantID:     tenantID,
+		azClient:     client,
+	}, nil
+}
+
+func (p *publicClient) Authenticate(ctx context.Context, tro *policy.TokenRequestOptions) (authenticationRecord, error) {
+	if tro == nil {
+		tro = &policy.TokenRequestOptions{}
+	}
+	if len(tro.Scopes) == 0 {
+		if p.defaultScope == nil {
+			return authenticationRecord{}, errScopeRequired
+		}
+		tro.Scopes = p.defaultScope
+	}
+	client, mu, err := p.client(*tro)
+	if err != nil {
+		return authenticationRecord{}, err
+	}
+	mu.Lock()
+	defer mu.Unlock()
+	_, err = p.reqToken(ctx, client, *tro)
+	if err == nil {
+		scope := strings.Join(tro.Scopes, ", ")
+		msg := fmt.Sprintf("%s.Authenticate() acquired a token for scope %q", p.name, scope)
+		log.Write(EventAuthentication, msg)
+	}
+	return p.record, err
+}
+
+// GetToken requests an access token from MSAL, checking the cache first.
+func (p *publicClient) GetToken(ctx context.Context, tro policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	if len(tro.Scopes) < 1 {
+		return azcore.AccessToken{}, fmt.Errorf("%s.GetToken() requires at least one scope", p.name)
+	}
+	tenant, err := p.resolveTenant(tro.TenantID)
+	if err != nil {
+		return azcore.AccessToken{}, err
+	}
+	client, mu, err := p.client(tro)
+	if err != nil {
+		return azcore.AccessToken{}, err
+	}
+	mu.Lock()
+	defer mu.Unlock()
+	ar, err := client.AcquireTokenSilent(ctx, tro.Scopes, public.WithSilentAccount(p.record.account()), public.WithClaims(tro.Claims), public.WithTenantID(tenant))
+	if err == nil {
+		return p.token(ar, err)
+	}
+	if p.opts.DisableAutomaticAuthentication {
+		return azcore.AccessToken{}, newauthenticationRequiredError(p.name, tro)
+	}
+	at, err := p.reqToken(ctx, client, tro)
+	if err == nil {
+		msg := fmt.Sprintf("%s.GetToken() acquired a token for scope %q", p.name, strings.Join(ar.GrantedScopes, ", "))
+		log.Write(EventAuthentication, msg)
+	}
+	return at, err
+}
+
+// reqToken requests a token from the MSAL public client. It's separate from GetToken() to enable Authenticate() to bypass the cache.
+func (p *publicClient) reqToken(ctx context.Context, c msalPublicClient, tro policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	tenant, err := p.resolveTenant(tro.TenantID)
+	if err != nil {
+		return azcore.AccessToken{}, err
+	}
+	var ar public.AuthResult
+	switch p.name {
+	case credNameBrowser:
+		ar, err = c.AcquireTokenInteractive(ctx, tro.Scopes,
+			public.WithClaims(tro.Claims),
+			public.WithLoginHint(p.opts.LoginHint),
+			public.WithRedirectURI(p.opts.RedirectURL),
+			public.WithTenantID(tenant),
+		)
+	case credNameDeviceCode:
+		dc, e := c.AcquireTokenByDeviceCode(ctx, tro.Scopes, public.WithClaims(tro.Claims), public.WithTenantID(tenant))
+		if e != nil {
+			return azcore.AccessToken{}, e
+		}
+		err = p.opts.DeviceCodePrompt(ctx, DeviceCodeMessage{
+			Message:         dc.Result.Message,
+			UserCode:        dc.Result.UserCode,
+			VerificationURL: dc.Result.VerificationURL,
+		})
+		if err == nil {
+			ar, err = dc.AuthenticationResult(ctx)
+		}
+	case credNameUserPassword:
+		ar, err = c.AcquireTokenByUsernamePassword(ctx, tro.Scopes, p.opts.Username, p.opts.Password, public.WithClaims(tro.Claims), public.WithTenantID(tenant))
+	default:
+		return azcore.AccessToken{}, fmt.Errorf("unknown credential %q", p.name)
+	}
+	return p.token(ar, err)
+}
+
+func (p *publicClient) client(tro policy.TokenRequestOptions) (msalPublicClient, *sync.Mutex, error) {
+	p.clientMu.Lock()
+	defer p.clientMu.Unlock()
+	if tro.EnableCAE {
+		if p.cae == nil {
+			client, err := p.newMSALClient(true)
+			if err != nil {
+				return nil, nil, err
+			}
+			p.cae = client
+		}
+		return p.cae, p.caeMu, nil
+	}
+	if p.noCAE == nil {
+		client, err := p.newMSALClient(false)
+		if err != nil {
+			return nil, nil, err
+		}
+		p.noCAE = client
+	}
+	return p.noCAE, p.noCAEMu, nil
+}
+
+func (p *publicClient) newMSALClient(enableCAE bool) (msalPublicClient, error) {
+	cache, err := internal.NewCache(p.opts.TokenCachePersistenceOptions, enableCAE)
+	if err != nil {
+		return nil, err
+	}
+	o := []public.Option{
+		public.WithAuthority(runtime.JoinPaths(p.host, p.tenantID)),
+		public.WithCache(cache),
+		public.WithHTTPClient(p),
+	}
+	if enableCAE {
+		o = append(o, public.WithClientCapabilities(cp1))
+	}
+	if p.opts.DisableInstanceDiscovery || strings.ToLower(p.tenantID) == "adfs" {
+		o = append(o, public.WithInstanceDiscovery(false))
+	}
+	return public.New(p.clientID, o...)
+}
+
+func (p *publicClient) token(ar public.AuthResult, err error) (azcore.AccessToken, error) {
+	if err == nil {
+		p.record, err = newAuthenticationRecord(ar)
+	} else {
+		res := getResponseFromError(err)
+		err = newAuthenticationFailedError(p.name, err.Error(), res, err)
+	}
+	return azcore.AccessToken{Token: ar.AccessToken, ExpiresOn: ar.ExpiresOn.UTC()}, err
+}
+
+// resolveTenant returns the correct WithTenantID() argument for a token request given the client's
+// configuration, or an error when that configuration doesn't allow the specified tenant
+func (p *publicClient) resolveTenant(specified string) (string, error) {
+	t, err := resolveTenant(p.tenantID, specified, p.name, p.opts.AdditionallyAllowedTenants)
+	if t == p.tenantID {
+		// callers pass this value to MSAL's WithTenantID(). There's no need to redundantly specify
+		// the client's default tenant and doing so is an error when that tenant is "organizations"
+		t = ""
+	}
+	return t, err
+}
+
+// these methods satisfy the MSAL ops.HTTPClient interface
+
+func (p *publicClient) CloseIdleConnections() {
+	// do nothing
+}
+
+func (p *publicClient) Do(r *http.Request) (*http.Response, error) {
+	return doForClient(p.azClient, r)
+}
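The `client` method above lazily builds at most two MSAL clients, one with continuous access evaluation (CAE) enabled and one without, constructing each at most once under `clientMu` and handing back a per-client mutex for serializing requests. A stdlib-only sketch of that lazy dual-client pattern, with plain strings standing in for `msalPublicClient` (the `built` counter is added only to make the caching observable):

```go
package main

import (
	"fmt"
	"sync"
)

// lazyClients sketches publicClient.client(): two lazily constructed
// clients, each paired with its own request mutex.
type lazyClients struct {
	cae, noCAE     *string
	caeMu, noCAEMu sync.Mutex
	clientMu       sync.Mutex // guards construction of both clients
	built          int        // counts constructions, to show caching
}

func (l *lazyClients) client(enableCAE bool) (*string, *sync.Mutex) {
	l.clientMu.Lock()
	defer l.clientMu.Unlock()
	if enableCAE {
		if l.cae == nil {
			l.built++
			s := "cae-client"
			l.cae = &s
		}
		return l.cae, &l.caeMu
	}
	if l.noCAE == nil {
		l.built++
		s := "no-cae-client"
		l.noCAE = &s
	}
	return l.noCAE, &l.noCAEMu
}

func main() {
	var l lazyClients
	c1, _ := l.client(true)
	c2, _ := l.client(true)
	fmt.Println(c1 == c2, l.built) // same instance, built once
}
```

Keeping construction under one mutex while giving each client its own request mutex means a slow CAE token request never blocks non-CAE callers, yet neither client is ever built twice.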

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources-post.ps1 🔗

@@ -0,0 +1,112 @@
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License.
+
+# IMPORTANT: Do not invoke this file directly. Please instead run eng/common/TestResources/New-TestResources.ps1 from the repository root.
+
+param (
+  [hashtable] $AdditionalParameters = @{},
+  [hashtable] $DeploymentOutputs
+)
+
+$ErrorActionPreference = 'Stop'
+$PSNativeCommandUseErrorActionPreference = $true
+
+if ($CI) {
+  if (!$AdditionalParameters['deployResources']) {
+    Write-Host "Skipping post-provisioning script because resources weren't deployed"
+    return
+  }
+  az login --service-principal -u $DeploymentOutputs['AZIDENTITY_CLIENT_ID'] -p $DeploymentOutputs['AZIDENTITY_CLIENT_SECRET'] --tenant $DeploymentOutputs['AZIDENTITY_TENANT_ID']
+  az account set --subscription $DeploymentOutputs['AZIDENTITY_SUBSCRIPTION_ID']
+}
+
+Write-Host "Building container"
+$image = "$($DeploymentOutputs['AZIDENTITY_ACR_LOGIN_SERVER'])/azidentity-managed-id-test"
+Set-Content -Path "$PSScriptRoot/Dockerfile" -Value @"
+FROM mcr.microsoft.com/oss/go/microsoft/golang:latest as builder
+ENV GOARCH=amd64 GOWORK=off
+COPY . /azidentity
+WORKDIR /azidentity/testdata/managed-id-test
+RUN go mod tidy
+RUN go build -o /build/managed-id-test .
+RUN GOOS=windows go build -o /build/managed-id-test.exe .
+
+FROM mcr.microsoft.com/mirror/docker/library/alpine:3.16
+RUN apk add gcompat
+COPY --from=builder /build/* .
+RUN chmod +x managed-id-test
+CMD ["./managed-id-test"]
+"@
+# build from sdk/azidentity because we need that dir in the context (because the test app uses local azidentity)
+docker build -t $image "$PSScriptRoot"
+az acr login -n $DeploymentOutputs['AZIDENTITY_ACR_NAME']
+docker push $image
+
+$rg = $DeploymentOutputs['AZIDENTITY_RESOURCE_GROUP']
+
+# ACI is easier to provision here than in the bicep file because the image isn't available before now
+Write-Host "Deploying Azure Container Instance"
+$aciName = "azidentity-test"
+az container create -g $rg -n $aciName --image $image `
+  --acr-identity $($DeploymentOutputs['AZIDENTITY_USER_ASSIGNED_IDENTITY']) `
+  --assign-identity [system] $($DeploymentOutputs['AZIDENTITY_USER_ASSIGNED_IDENTITY']) `
+  --role "Storage Blob Data Reader" `
+  --scope $($DeploymentOutputs['AZIDENTITY_STORAGE_ID']) `
+  -e AZIDENTITY_STORAGE_NAME=$($DeploymentOutputs['AZIDENTITY_STORAGE_NAME']) `
+     AZIDENTITY_STORAGE_NAME_USER_ASSIGNED=$($DeploymentOutputs['AZIDENTITY_STORAGE_NAME_USER_ASSIGNED']) `
+     AZIDENTITY_USER_ASSIGNED_IDENTITY=$($DeploymentOutputs['AZIDENTITY_USER_ASSIGNED_IDENTITY']) `
+     FUNCTIONS_CUSTOMHANDLER_PORT=80
+Write-Host "##vso[task.setvariable variable=AZIDENTITY_ACI_NAME;]$aciName"
+
+# Azure Functions deployment: copy the Windows binary from the Docker image, deploy it in a zip
+Write-Host "Deploying to Azure Functions"
+$container = docker create $image
+docker cp ${container}:managed-id-test.exe "$PSScriptRoot/testdata/managed-id-test/"
+docker rm -v $container
+Compress-Archive -Path "$PSScriptRoot/testdata/managed-id-test/*" -DestinationPath func.zip -Force
+az functionapp deploy -g $rg -n $DeploymentOutputs['AZIDENTITY_FUNCTION_NAME'] --src-path func.zip --type zip
+
+Write-Host "Creating federated identity"
+$aksName = $DeploymentOutputs['AZIDENTITY_AKS_NAME']
+$idName = $DeploymentOutputs['AZIDENTITY_USER_ASSIGNED_IDENTITY_NAME']
+$issuer = az aks show -g $rg -n $aksName --query "oidcIssuerProfile.issuerUrl" -otsv
+$podName = "azidentity-test"
+$serviceAccountName = "workload-identity-sa"
+az identity federated-credential create -g $rg --identity-name $idName --issuer $issuer --name $idName --subject system:serviceaccount:default:$serviceAccountName
+Write-Host "Deploying to AKS"
+az aks get-credentials -g $rg -n $aksName
+az aks update --attach-acr $DeploymentOutputs['AZIDENTITY_ACR_NAME'] -g $rg -n $aksName
+Set-Content -Path "$PSScriptRoot/k8s.yaml" -Value @"
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  annotations:
+    azure.workload.identity/client-id: $($DeploymentOutputs['AZIDENTITY_USER_ASSIGNED_IDENTITY_CLIENT_ID'])
+  name: $serviceAccountName
+  namespace: default
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: $podName
+  namespace: default
+  labels:
+    app: $podName
+    azure.workload.identity/use: "true"
+spec:
+  serviceAccountName: $serviceAccountName
+  containers:
+  - name: $podName
+    image: $image
+    env:
+    - name: AZIDENTITY_STORAGE_NAME
+      value: $($DeploymentOutputs['AZIDENTITY_STORAGE_NAME_USER_ASSIGNED'])
+    - name: AZIDENTITY_USE_WORKLOAD_IDENTITY
+      value: "true"
+    - name: FUNCTIONS_CUSTOMHANDLER_PORT
+      value: "80"
+  nodeSelector:
+    kubernetes.io/os: linux
+"@
+kubectl apply -f "$PSScriptRoot/k8s.yaml"
+Write-Host "##vso[task.setvariable variable=AZIDENTITY_POD_NAME;]$podName"

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources-pre.ps1 🔗

@@ -0,0 +1,44 @@
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License.
+
+# IMPORTANT: Do not invoke this file directly. Please instead run eng/common/TestResources/New-TestResources.ps1 from the repository root.
+
+[CmdletBinding(SupportsShouldProcess = $true, ConfirmImpact = 'Medium')]
+param (
+    [hashtable] $AdditionalParameters = @{},
+
+    # Captures any arguments from eng/New-TestResources.ps1 not declared here (no parameter errors).
+    [Parameter(ValueFromRemainingArguments = $true)]
+    $RemainingArguments
+)
+
+if (-not (Test-Path "$PSScriptRoot/sshkey.pub")) {
+    ssh-keygen -t rsa -b 4096 -f "$PSScriptRoot/sshkey" -N '' -C ''
+}
+$templateFileParameters['sshPubKey'] = Get-Content "$PSScriptRoot/sshkey.pub"
+
+if (!$CI) {
+    # TODO: Remove this once auto-cloud config downloads are supported locally
+    Write-Host "Skipping cert setup in local testing mode"
+    return
+}
+
+if ($null -eq $EnvironmentVariables -or $EnvironmentVariables.Count -eq 0) {
+    throw "EnvironmentVariables must be set in the calling script New-TestResources.ps1"
+}
+
+$tmp = $env:TEMP ? $env:TEMP : [System.IO.Path]::GetTempPath()
+$pfxPath = Join-Path $tmp "test.pfx"
+$pemPath = Join-Path $tmp "test.pem"
+
+Write-Host "Creating identity test files: $pfxPath $pemPath"
+
+[System.Convert]::FromBase64String($EnvironmentVariables['PFX_CONTENTS']) | Set-Content -Path $pfxPath -AsByteStream
+Set-Content -Path $pemPath -Value $EnvironmentVariables['PEM_CONTENTS']
+
+# Set for pipeline
+Write-Host "##vso[task.setvariable variable=IDENTITY_SP_CERT_PFX;]$pfxPath"
+Write-Host "##vso[task.setvariable variable=IDENTITY_SP_CERT_PEM;]$pemPath"
+# Set for local
+$env:IDENTITY_SP_CERT_PFX = $pfxPath
+$env:IDENTITY_SP_CERT_PEM = $pemPath

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources.bicep 🔗

@@ -0,0 +1,219 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT License.
+
+@description('Kubernetes cluster admin user name.')
+param adminUser string = 'azureuser'
+
+@minLength(6)
+@maxLength(23)
+@description('The base resource name.')
+param baseName string = resourceGroup().name
+
+@description('Whether to deploy resources. When set to false, this file deploys nothing.')
+param deployResources bool = false
+
+param sshPubKey string = ''
+
+@description('The location of the resource. By default, this is the same as the resource group.')
+param location string = resourceGroup().location
+
+// https://learn.microsoft.com/azure/role-based-access-control/built-in-roles
+var acrPull = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '7f951dda-4ed3-4680-a7ca-43fe172d538d')
+var blobReader = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '2a2b9908-6ea1-4ae2-8e65-a410df84e7d1')
+
+resource sa 'Microsoft.Storage/storageAccounts@2021-08-01' = if (deployResources) {
+  kind: 'StorageV2'
+  location: location
+  name: 'sa${uniqueString(baseName)}'
+  properties: {
+    accessTier: 'Hot'
+  }
+  sku: {
+    name: 'Standard_LRS'
+  }
+}
+
+resource saUserAssigned 'Microsoft.Storage/storageAccounts@2021-08-01' = if (deployResources) {
+  kind: 'StorageV2'
+  location: location
+  name: 'sa2${uniqueString(baseName)}'
+  properties: {
+    accessTier: 'Hot'
+  }
+  sku: {
+    name: 'Standard_LRS'
+  }
+}
+
+resource usermgdid 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = if (deployResources) {
+  location: location
+  name: baseName
+}
+
+resource acrPullContainerInstance 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (deployResources) {
+  name: guid(resourceGroup().id, acrPull, 'containerInstance')
+  properties: {
+    principalId: deployResources ? usermgdid.properties.principalId : ''
+    principalType: 'ServicePrincipal'
+    roleDefinitionId: acrPull
+  }
+  scope: containerRegistry
+}
+
+resource blobRoleUserAssigned 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (deployResources) {
+  scope: saUserAssigned
+  name: guid(resourceGroup().id, blobReader, usermgdid.id)
+  properties: {
+    principalId: deployResources ? usermgdid.properties.principalId : ''
+    principalType: 'ServicePrincipal'
+    roleDefinitionId: blobReader
+  }
+}
+
+resource blobRoleFunc 'Microsoft.Authorization/roleAssignments@2022-04-01' = if (deployResources) {
+  name: guid(resourceGroup().id, blobReader, 'azfunc')
+  properties: {
+    principalId: deployResources ? azfunc.identity.principalId : ''
+    roleDefinitionId: blobReader
+    principalType: 'ServicePrincipal'
+  }
+  scope: sa
+}
+
+resource containerRegistry 'Microsoft.ContainerRegistry/registries@2023-01-01-preview' = if (deployResources) {
+  location: location
+  name: uniqueString(resourceGroup().id)
+  properties: {
+    adminUserEnabled: true
+  }
+  sku: {
+    name: 'Basic'
+  }
+}
+
+resource farm 'Microsoft.Web/serverfarms@2021-03-01' = if (deployResources) {
+  kind: 'app'
+  location: location
+  name: '${baseName}_asp'
+  properties: {}
+  sku: {
+    capacity: 1
+    family: 'B'
+    name: 'B1'
+    size: 'B1'
+    tier: 'Basic'
+  }
+}
+
+resource azfunc 'Microsoft.Web/sites@2021-03-01' = if (deployResources) {
+  identity: {
+    type: 'SystemAssigned, UserAssigned'
+    userAssignedIdentities: {
+      '${deployResources ? usermgdid.id : ''}': {}
+    }
+  }
+  kind: 'functionapp'
+  location: location
+  name: '${baseName}func'
+  properties: {
+    enabled: true
+    httpsOnly: true
+    keyVaultReferenceIdentity: 'SystemAssigned'
+    serverFarmId: farm.id
+    siteConfig: {
+      alwaysOn: true
+      appSettings: [
+        {
+          name: 'AZIDENTITY_STORAGE_NAME'
+          value: deployResources ? sa.name : null
+        }
+        {
+          name: 'AZIDENTITY_STORAGE_NAME_USER_ASSIGNED'
+          value: deployResources ? saUserAssigned.name : null
+        }
+        {
+          name: 'AZIDENTITY_USER_ASSIGNED_IDENTITY'
+          value: deployResources ? usermgdid.id : null
+        }
+        {
+          name: 'AzureWebJobsStorage'
+          value: 'DefaultEndpointsProtocol=https;AccountName=${deployResources ? sa.name : ''};EndpointSuffix=${deployResources ? environment().suffixes.storage : ''};AccountKey=${deployResources ? sa.listKeys().keys[0].value : ''}'
+        }
+        {
+          name: 'FUNCTIONS_EXTENSION_VERSION'
+          value: '~4'
+        }
+        {
+          name: 'FUNCTIONS_WORKER_RUNTIME'
+          value: 'custom'
+        }
+        {
+          name: 'WEBSITE_CONTENTAZUREFILECONNECTIONSTRING'
+          value: 'DefaultEndpointsProtocol=https;AccountName=${deployResources ? sa.name : ''};EndpointSuffix=${deployResources ? environment().suffixes.storage : ''};AccountKey=${deployResources ? sa.listKeys().keys[0].value : ''}'
+        }
+        {
+          name: 'WEBSITE_CONTENTSHARE'
+          value: toLower('${baseName}-func')
+        }
+      ]
+      http20Enabled: true
+      minTlsVersion: '1.2'
+    }
+  }
+}
+
+resource aks 'Microsoft.ContainerService/managedClusters@2023-06-01' = if (deployResources) {
+  name: baseName
+  location: location
+  identity: {
+    type: 'SystemAssigned'
+  }
+  properties: {
+    agentPoolProfiles: [
+      {
+        count: 1
+        enableAutoScaling: false
+        kubeletDiskType: 'OS'
+        mode: 'System'
+        name: 'agentpool'
+        osDiskSizeGB: 128
+        osDiskType: 'Managed'
+        osSKU: 'Ubuntu'
+        osType: 'Linux'
+        type: 'VirtualMachineScaleSets'
+        vmSize: 'Standard_D2s_v3'
+      }
+    ]
+    dnsPrefix: 'identitytest'
+    enableRBAC: true
+    linuxProfile: {
+      adminUsername: adminUser
+      ssh: {
+        publicKeys: [
+          {
+            keyData: sshPubKey
+          }
+        ]
+      }
+    }
+    oidcIssuerProfile: {
+      enabled: true
+    }
+    securityProfile: {
+      workloadIdentity: {
+        enabled: true
+      }
+    }
+  }
+}
+
+output AZIDENTITY_ACR_LOGIN_SERVER string = deployResources ? containerRegistry.properties.loginServer : ''
+output AZIDENTITY_ACR_NAME string = deployResources ? containerRegistry.name : ''
+output AZIDENTITY_AKS_NAME string = deployResources ? aks.name : ''
+output AZIDENTITY_FUNCTION_NAME string = deployResources ? azfunc.name : ''
+output AZIDENTITY_STORAGE_ID string = deployResources ? sa.id : ''
+output AZIDENTITY_STORAGE_NAME string = deployResources ? sa.name : ''
+output AZIDENTITY_STORAGE_NAME_USER_ASSIGNED string = deployResources ? saUserAssigned.name : ''
+output AZIDENTITY_USER_ASSIGNED_IDENTITY string = deployResources ? usermgdid.id : ''
+output AZIDENTITY_USER_ASSIGNED_IDENTITY_CLIENT_ID string = deployResources ? usermgdid.properties.clientId : ''
+output AZIDENTITY_USER_ASSIGNED_IDENTITY_NAME string = deployResources ? usermgdid.name : ''

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/username_password_credential.go

@@ -0,0 +1,90 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+)
+
+const credNameUserPassword = "UsernamePasswordCredential"
+
+// UsernamePasswordCredentialOptions contains optional parameters for UsernamePasswordCredential.
+type UsernamePasswordCredentialOptions struct {
+	azcore.ClientOptions
+
+	// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens.
+	// Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the
+	// application is registered.
+	AdditionallyAllowedTenants []string
+
+	// authenticationRecord returned by a call to a credential's Authenticate method. Set this option
+	// to enable the credential to use data from a previous authentication.
+	authenticationRecord authenticationRecord
+
+	// DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
+	// private clouds such as Azure Stack. It determines whether the credential requests Microsoft Entra instance metadata
+	// from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
+	// the application responsible for ensuring the configured authority is valid and trustworthy.
+	DisableInstanceDiscovery bool
+
+	// tokenCachePersistenceOptions enables persistent token caching when not nil.
+	tokenCachePersistenceOptions *tokenCachePersistenceOptions
+}
+
+// UsernamePasswordCredential authenticates a user with a password. Microsoft doesn't recommend this kind of authentication,
+// because it's less secure than other authentication flows. This credential is not interactive, so it isn't compatible
+// with any form of multi-factor authentication, and the application must already have user or admin consent.
+// This credential can only authenticate work and school accounts; it can't authenticate Microsoft accounts.
+type UsernamePasswordCredential struct {
+	client *publicClient
+}
+
+// NewUsernamePasswordCredential creates a UsernamePasswordCredential. clientID is the ID of the application the user
+// will authenticate to. Pass nil for options to accept defaults.
+func NewUsernamePasswordCredential(tenantID string, clientID string, username string, password string, options *UsernamePasswordCredentialOptions) (*UsernamePasswordCredential, error) {
+	if options == nil {
+		options = &UsernamePasswordCredentialOptions{}
+	}
+	opts := publicClientOptions{
+		AdditionallyAllowedTenants:   options.AdditionallyAllowedTenants,
+		ClientOptions:                options.ClientOptions,
+		DisableInstanceDiscovery:     options.DisableInstanceDiscovery,
+		Password:                     password,
+		Record:                       options.authenticationRecord,
+		TokenCachePersistenceOptions: options.tokenCachePersistenceOptions,
+		Username:                     username,
+	}
+	c, err := newPublicClient(tenantID, clientID, credNameUserPassword, opts)
+	if err != nil {
+		return nil, err
+	}
+	return &UsernamePasswordCredential{client: c}, err
+}
+
+// Authenticate the user. Subsequent calls to GetToken will automatically use the returned AuthenticationRecord.
+func (c *UsernamePasswordCredential) authenticate(ctx context.Context, opts *policy.TokenRequestOptions) (authenticationRecord, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameUserPassword+"."+traceOpAuthenticate, c.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	tk, err := c.client.Authenticate(ctx, opts)
+	return tk, err
+}
+
+// GetToken requests an access token from Microsoft Entra ID. This method is called automatically by Azure SDK clients.
+func (c *UsernamePasswordCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameUserPassword+"."+traceOpGetToken, c.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	tk, err := c.client.GetToken(ctx, opts)
+	return tk, err
+}
+
+var _ azcore.TokenCredential = (*UsernamePasswordCredential)(nil)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/version.go

@@ -0,0 +1,18 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+const (
+	// component is the name of this module, used in user agent strings and telemetry.
+	component = "azidentity"
+
+	// module is the fully qualified name of the module used in telemetry and distributed tracing.
+	module = "github.com/Azure/azure-sdk-for-go/sdk/" + component
+
+	// Version is the semantic version (see http://semver.org) of this module.
+	version = "v1.7.0"
+)

vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/workload_identity.go

@@ -0,0 +1,131 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+	"context"
+	"errors"
+	"os"
+	"sync"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+)
+
+const credNameWorkloadIdentity = "WorkloadIdentityCredential"
+
+// WorkloadIdentityCredential supports Azure workload identity on Kubernetes.
+// See [Azure Kubernetes Service documentation] for more information.
+//
+// [Azure Kubernetes Service documentation]: https://learn.microsoft.com/azure/aks/workload-identity-overview
+type WorkloadIdentityCredential struct {
+	assertion, file string
+	cred            *ClientAssertionCredential
+	expires         time.Time
+	mtx             *sync.RWMutex
+}
+
+// WorkloadIdentityCredentialOptions contains optional parameters for WorkloadIdentityCredential.
+type WorkloadIdentityCredentialOptions struct {
+	azcore.ClientOptions
+
+	// AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens.
+	// Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the
+	// application is registered.
+	AdditionallyAllowedTenants []string
+	// ClientID of the service principal. Defaults to the value of the environment variable AZURE_CLIENT_ID.
+	ClientID string
+	// DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
+	// private clouds such as Azure Stack. It determines whether the credential requests Microsoft Entra instance metadata
+	// from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
+	// the application responsible for ensuring the configured authority is valid and trustworthy.
+	DisableInstanceDiscovery bool
+	// TenantID of the service principal. Defaults to the value of the environment variable AZURE_TENANT_ID.
+	TenantID string
+	// TokenFilePath is the path of a file containing a Kubernetes service account token. Defaults to the value of the
+	// environment variable AZURE_FEDERATED_TOKEN_FILE.
+	TokenFilePath string
+}
+
+// NewWorkloadIdentityCredential constructs a WorkloadIdentityCredential. Service principal configuration is read
+// from environment variables as set by the Azure workload identity webhook. Set options to override those values.
+func NewWorkloadIdentityCredential(options *WorkloadIdentityCredentialOptions) (*WorkloadIdentityCredential, error) {
+	if options == nil {
+		options = &WorkloadIdentityCredentialOptions{}
+	}
+	ok := false
+	clientID := options.ClientID
+	if clientID == "" {
+		if clientID, ok = os.LookupEnv(azureClientID); !ok {
+			return nil, errors.New("no client ID specified. Check pod configuration or set ClientID in the options")
+		}
+	}
+	file := options.TokenFilePath
+	if file == "" {
+		if file, ok = os.LookupEnv(azureFederatedTokenFile); !ok {
+			return nil, errors.New("no token file specified. Check pod configuration or set TokenFilePath in the options")
+		}
+	}
+	tenantID := options.TenantID
+	if tenantID == "" {
+		if tenantID, ok = os.LookupEnv(azureTenantID); !ok {
+			return nil, errors.New("no tenant ID specified. Check pod configuration or set TenantID in the options")
+		}
+	}
+	w := WorkloadIdentityCredential{file: file, mtx: &sync.RWMutex{}}
+	caco := ClientAssertionCredentialOptions{
+		AdditionallyAllowedTenants: options.AdditionallyAllowedTenants,
+		ClientOptions:              options.ClientOptions,
+		DisableInstanceDiscovery:   options.DisableInstanceDiscovery,
+	}
+	cred, err := NewClientAssertionCredential(tenantID, clientID, w.getAssertion, &caco)
+	if err != nil {
+		return nil, err
+	}
+	// we want "WorkloadIdentityCredential" in log messages, not "ClientAssertionCredential"
+	cred.client.name = credNameWorkloadIdentity
+	w.cred = cred
+	return &w, nil
+}
+
+// GetToken requests an access token from Microsoft Entra ID. Azure SDK clients call this method automatically.
+func (w *WorkloadIdentityCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+	var err error
+	ctx, endSpan := runtime.StartSpan(ctx, credNameWorkloadIdentity+"."+traceOpGetToken, w.cred.client.azClient.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	tk, err := w.cred.GetToken(ctx, opts)
+	return tk, err
+}
+
+// getAssertion returns the specified file's content, which is expected to be a Kubernetes service account token.
+// Kubernetes is responsible for updating the file as service account tokens expire.
+func (w *WorkloadIdentityCredential) getAssertion(context.Context) (string, error) {
+	w.mtx.RLock()
+	if w.expires.Before(time.Now()) {
+		// ensure only one goroutine at a time updates the assertion
+		w.mtx.RUnlock()
+		w.mtx.Lock()
+		defer w.mtx.Unlock()
+		// double check because another goroutine may have acquired the write lock first and done the update
+		if now := time.Now(); w.expires.Before(now) {
+			content, err := os.ReadFile(w.file)
+			if err != nil {
+				return "", err
+			}
+			w.assertion = string(content)
+			// Kubernetes rotates service account tokens when they reach 80% of their total TTL. The shortest TTL
+			// is 1 hour. That implies the token we just read is valid for at least 12 minutes (20% of 1 hour),
+			// but we add some margin for safety.
+			w.expires = now.Add(10 * time.Minute)
+		}
+	} else {
+		defer w.mtx.RUnlock()
+	}
+	return w.assertion, nil
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/LICENSE.txt

@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) Microsoft Corporation.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/diag/diag.go

@@ -0,0 +1,51 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package diag
+
+import (
+	"fmt"
+	"runtime"
+	"strings"
+)
+
+// Caller returns the file and line number of a frame on the caller's stack.
+// If the function fails, an empty string is returned.
+// skipFrames - the number of frames to skip when determining the caller.
+// Passing a value of 0 will return the immediate caller of this function.
+func Caller(skipFrames int) string {
+	if pc, file, line, ok := runtime.Caller(skipFrames + 1); ok {
+		// the skipFrames + 1 is to skip ourselves
+		frame := runtime.FuncForPC(pc)
+		return fmt.Sprintf("%s()\n\t%s:%d", frame.Name(), file, line)
+	}
+	return ""
+}
+
+// StackTrace returns a formatted stack trace string.
+// If the function fails, an empty string is returned.
+// skipFrames - the number of stack frames to skip before composing the trace string.
+// totalFrames - the maximum number of stack frames to include in the trace string.
+func StackTrace(skipFrames, totalFrames int) string {
+	pcCallers := make([]uintptr, totalFrames)
+	if frames := runtime.Callers(skipFrames, pcCallers); frames == 0 {
+		return ""
+	}
+	frames := runtime.CallersFrames(pcCallers)
+	sb := strings.Builder{}
+	for {
+		frame, more := frames.Next()
+		sb.WriteString(frame.Function)
+		sb.WriteString("()\n\t")
+		sb.WriteString(frame.File)
+		sb.WriteRune(':')
+		sb.WriteString(fmt.Sprintf("%d\n", frame.Line))
+		if !more {
+			break
+		}
+	}
+	return sb.String()
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo/errorinfo.go

@@ -0,0 +1,46 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package errorinfo
+
+// NonRetriable represents a non-transient error.  This works in
+// conjunction with the retry policy, indicating that the error condition
+// is idempotent, so no retries will be attempted.
+// Use errors.As() to access this interface in the error chain.
+type NonRetriable interface {
+	error
+	NonRetriable()
+}
+
+// NonRetriableError marks the specified error as non-retriable.
+// This function takes an error as input and returns a new error that is marked as non-retriable.
+func NonRetriableError(err error) error {
+	return &nonRetriableError{err}
+}
+
+// nonRetriableError is a struct that embeds the error interface.
+// It is used to represent errors that should not be retried.
+type nonRetriableError struct {
+	error
+}
+
+// Error method for nonRetriableError struct.
+// It returns the error message of the embedded error.
+func (p *nonRetriableError) Error() string {
+	return p.error.Error()
+}
+
+// NonRetriable is a marker method for nonRetriableError struct.
+// Non-functional and indicates that the error is non-retriable.
+func (*nonRetriableError) NonRetriable() {
+	// marker method
+}
+
+// Unwrap method for nonRetriableError struct.
+// It returns the original error that was marked as non-retriable.
+func (p *nonRetriableError) Unwrap() error {
+	return p.error
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/exported/exported.go

@@ -0,0 +1,129 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package exported
+
+import (
+	"errors"
+	"io"
+	"net/http"
+)
+
+// HasStatusCode returns true if the Response's status code is one of the specified values.
+// Exported as runtime.HasStatusCode().
+func HasStatusCode(resp *http.Response, statusCodes ...int) bool {
+	if resp == nil {
+		return false
+	}
+	for _, sc := range statusCodes {
+		if resp.StatusCode == sc {
+			return true
+		}
+	}
+	return false
+}
+
+// PayloadOptions contains the optional values for the Payload func.
+// NOT exported but used by azcore.
+type PayloadOptions struct {
+	// BytesModifier receives the downloaded byte slice and returns an updated byte slice.
+	// Use this to modify the downloaded bytes in a payload (e.g. removing a BOM).
+	BytesModifier func([]byte) []byte
+}
+
+// Payload reads and returns the response body or an error.
+// On a successful read, the response body is cached.
+// Subsequent reads will access the cached value.
+// Exported as runtime.Payload() WITHOUT the opts parameter.
+func Payload(resp *http.Response, opts *PayloadOptions) ([]byte, error) {
+	if resp.Body == nil {
+		// this shouldn't happen in real-world scenarios as a
+		// response with no body should set it to http.NoBody
+		return nil, nil
+	}
+	modifyBytes := func(b []byte) []byte { return b }
+	if opts != nil && opts.BytesModifier != nil {
+		modifyBytes = opts.BytesModifier
+	}
+
+	// r.Body won't be a nopClosingBytesReader if downloading was skipped
+	if buf, ok := resp.Body.(*nopClosingBytesReader); ok {
+		bytesBody := modifyBytes(buf.Bytes())
+		buf.Set(bytesBody)
+		return bytesBody, nil
+	}
+
+	bytesBody, err := io.ReadAll(resp.Body)
+	resp.Body.Close()
+	if err != nil {
+		return nil, err
+	}
+
+	bytesBody = modifyBytes(bytesBody)
+	resp.Body = &nopClosingBytesReader{s: bytesBody}
+	return bytesBody, nil
+}
+
+// PayloadDownloaded returns true if the response body has already been downloaded.
+// This implies that the Payload() func above has been previously called.
+// NOT exported but used by azcore.
+func PayloadDownloaded(resp *http.Response) bool {
+	_, ok := resp.Body.(*nopClosingBytesReader)
+	return ok
+}
+
+// nopClosingBytesReader is an io.ReadSeekCloser around a byte slice.
+// It also provides direct access to the byte slice to avoid rereading.
+type nopClosingBytesReader struct {
+	s []byte
+	i int64
+}
+
+// Bytes returns the underlying byte slice.
+func (r *nopClosingBytesReader) Bytes() []byte {
+	return r.s
+}
+
+// Close implements the io.Closer interface.
+func (*nopClosingBytesReader) Close() error {
+	return nil
+}
+
+// Read implements the io.Reader interface.
+func (r *nopClosingBytesReader) Read(b []byte) (n int, err error) {
+	if r.i >= int64(len(r.s)) {
+		return 0, io.EOF
+	}
+	n = copy(b, r.s[r.i:])
+	r.i += int64(n)
+	return
+}
+
+// Set replaces the existing byte slice with the specified byte slice and resets the reader.
+func (r *nopClosingBytesReader) Set(b []byte) {
+	r.s = b
+	r.i = 0
+}
+
+// Seek implements the io.Seeker interface.
+func (r *nopClosingBytesReader) Seek(offset int64, whence int) (int64, error) {
+	var i int64
+	switch whence {
+	case io.SeekStart:
+		i = offset
+	case io.SeekCurrent:
+		i = r.i + offset
+	case io.SeekEnd:
+		i = int64(len(r.s)) + offset
+	default:
+		return 0, errors.New("nopClosingBytesReader: invalid whence")
+	}
+	if i < 0 {
+		return 0, errors.New("nopClosingBytesReader: negative position")
+	}
+	r.i = i
+	return i, nil
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/log/log.go

@@ -0,0 +1,104 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package log
+
+import (
+	"fmt"
+	"os"
+	"time"
+)
+
+///////////////////////////////////////////////////////////////////////////////////////////////////
+// NOTE: The following are exported as public surface area from azcore.  DO NOT MODIFY
+///////////////////////////////////////////////////////////////////////////////////////////////////
+
+// Event is used to group entries.  Each group can be toggled on or off.
+type Event string
+
+// SetEvents is used to control which events are written to
+// the log.  By default all log events are written.
+func SetEvents(cls ...Event) {
+	log.cls = cls
+}
+
+// SetListener will set the Logger to write to the specified listener.
+func SetListener(lst func(Event, string)) {
+	log.lst = lst
+}
+
+///////////////////////////////////////////////////////////////////////////////////////////////////
+// END PUBLIC SURFACE AREA
+///////////////////////////////////////////////////////////////////////////////////////////////////
+
+// Should returns true if the specified log event should be written to the log.
+// By default all log events will be logged.  Call SetEvents() to limit
+// the log events for logging.
+// If no listener has been set this will return false.
+// Calling this method is useful when the message to log is computationally expensive
+// and you want to avoid the overhead if its log event is not enabled.
+func Should(cls Event) bool {
+	if log.lst == nil {
+		return false
+	}
+	if len(log.cls) == 0 {
+		return true
+	}
+	for _, c := range log.cls {
+		if c == cls {
+			return true
+		}
+	}
+	return false
+}
+
+// Write invokes the underlying listener with the specified event and message.
+// If the event shouldn't be logged or there is no listener then Write does nothing.
+func Write(cls Event, message string) {
+	if !Should(cls) {
+		return
+	}
+	log.lst(cls, message)
+}
+
+// Writef invokes the underlying listener with the specified event and formatted message.
+// If the event shouldn't be logged or there is no listener then Writef does nothing.
+func Writef(cls Event, format string, a ...interface{}) {
+	if !Should(cls) {
+		return
+	}
+	log.lst(cls, fmt.Sprintf(format, a...))
+}
+
+// TestResetEvents is used for TESTING PURPOSES ONLY.
+func TestResetEvents() {
+	log.cls = nil
+}
+
+// logger controls which events to log and writing to the underlying log.
+type logger struct {
+	cls []Event
+	lst func(Event, string)
+}
+
+// the process-wide logger
+var log logger
+
+func init() {
+	initLogging()
+}
+
+// split out for testing purposes
+func initLogging() {
+	if cls := os.Getenv("AZURE_SDK_GO_LOGGING"); cls == "all" {
+		// cls could be enhanced to support a comma-delimited list of log events
+		log.lst = func(cls Event, msg string) {
+			// simple console logger, it writes to stderr in the following format:
+			// [time-stamp] Event: message
+			fmt.Fprintf(os.Stderr, "[%s] %s: %s\n", time.Now().Format(time.StampMicro), cls, msg)
+		}
+	}
+}
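
The Should/Write pair implements a cheap event filter: no listener means log nothing, an empty filter means log everything, otherwise only listed events pass. A stdlib-only sketch of that decision table (the local `logger` type is illustrative, not the SDK's API):

```go
package main

import "fmt"

type event string

type logger struct {
	events   []event
	listener func(event, string)
}

// should mirrors log.Should: false with no listener, true with no
// filter, otherwise only for events in the configured set.
func (l *logger) should(e event) bool {
	if l.listener == nil {
		return false
	}
	if len(l.events) == 0 {
		return true
	}
	for _, c := range l.events {
		if c == e {
			return true
		}
	}
	return false
}

func main() {
	var lines []string
	l := &logger{listener: func(e event, msg string) {
		lines = append(lines, fmt.Sprintf("%s: %s", e, msg))
	}}
	l.events = []event{"Request"} // filter, like SetEvents
	for _, e := range []event{"Request", "Response"} {
		if l.should(e) {
			l.listener(e, "sent")
		}
	}
	fmt.Println(lines) // only the Request event got through
}
```

Checking should before formatting is the point of exposing it: callers can skip building an expensive message when its event class is filtered out.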

vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/poller/util.go

@@ -0,0 +1,155 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package poller
+
+import (
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+	"strings"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/exported"
+)
+
+// the well-known set of LRO status/provisioning state values.
+const (
+	StatusSucceeded  = "Succeeded"
+	StatusCanceled   = "Canceled"
+	StatusFailed     = "Failed"
+	StatusInProgress = "InProgress"
+)
+
+// these are non-conformant states that we've seen in the wild.
+// we support them for back-compat.
+const (
+	StatusCancelled = "Cancelled"
+	StatusCompleted = "Completed"
+)
+
+// IsTerminalState returns true if the LRO's state is terminal.
+func IsTerminalState(s string) bool {
+	return Failed(s) || Succeeded(s)
+}
+
+// Failed returns true if the LRO's state is terminal failure.
+func Failed(s string) bool {
+	return strings.EqualFold(s, StatusFailed) || strings.EqualFold(s, StatusCanceled) || strings.EqualFold(s, StatusCancelled)
+}
+
+// Succeeded returns true if the LRO's state is terminal success.
+func Succeeded(s string) bool {
+	return strings.EqualFold(s, StatusSucceeded) || strings.EqualFold(s, StatusCompleted)
+}
+
+// returns true if the LRO response contains a valid HTTP status code
+func StatusCodeValid(resp *http.Response) bool {
+	return exported.HasStatusCode(resp, http.StatusOK, http.StatusAccepted, http.StatusCreated, http.StatusNoContent)
+}
+
+// IsValidURL verifies that the URL is valid and absolute.
+func IsValidURL(s string) bool {
+	u, err := url.Parse(s)
+	return err == nil && u.IsAbs()
+}
+
+// ErrNoBody is returned if the response didn't contain a body.
+var ErrNoBody = errors.New("the response did not contain a body")
+
+// GetJSON reads the response body into a raw JSON object.
+// It returns ErrNoBody if there was no content.
+func GetJSON(resp *http.Response) (map[string]any, error) {
+	body, err := exported.Payload(resp, nil)
+	if err != nil {
+		return nil, err
+	}
+	if len(body) == 0 {
+		return nil, ErrNoBody
+	}
+	// unmarshal the body to get the value
+	var jsonBody map[string]any
+	if err = json.Unmarshal(body, &jsonBody); err != nil {
+		return nil, err
+	}
+	return jsonBody, nil
+}
+
+// provisioningState returns the provisioning state from the response or the empty string.
+func provisioningState(jsonBody map[string]any) string {
+	jsonProps, ok := jsonBody["properties"]
+	if !ok {
+		return ""
+	}
+	props, ok := jsonProps.(map[string]any)
+	if !ok {
+		return ""
+	}
+	rawPs, ok := props["provisioningState"]
+	if !ok {
+		return ""
+	}
+	ps, ok := rawPs.(string)
+	if !ok {
+		return ""
+	}
+	return ps
+}
+
+// status returns the status from the response or the empty string.
+func status(jsonBody map[string]any) string {
+	rawStatus, ok := jsonBody["status"]
+	if !ok {
+		return ""
+	}
+	status, ok := rawStatus.(string)
+	if !ok {
+		return ""
+	}
+	return status
+}
+
+// GetStatus returns the LRO's status from the response body.
+// Typically used for Azure-AsyncOperation flows.
+// If there is no status in the response body the empty string is returned.
+func GetStatus(resp *http.Response) (string, error) {
+	jsonBody, err := GetJSON(resp)
+	if err != nil {
+		return "", err
+	}
+	return status(jsonBody), nil
+}
+
+// GetProvisioningState returns the LRO's state from the response body.
+// If there is no state in the response body the empty string is returned.
+func GetProvisioningState(resp *http.Response) (string, error) {
+	jsonBody, err := GetJSON(resp)
+	if err != nil {
+		return "", err
+	}
+	return provisioningState(jsonBody), nil
+}
+
+// GetResourceLocation returns the LRO's resourceLocation value from the response body.
+// Typically used for Operation-Location flows.
+// If there is no resourceLocation in the response body the empty string is returned.
+func GetResourceLocation(resp *http.Response) (string, error) {
+	jsonBody, err := GetJSON(resp)
+	if err != nil {
+		return "", err
+	}
+	v, ok := jsonBody["resourceLocation"]
+	if !ok {
+		// it might be ok if the field doesn't exist, the caller must make that determination
+		return "", nil
+	}
+	vv, ok := v.(string)
+	if !ok {
+		return "", fmt.Errorf("the resourceLocation value %v was not in string format", v)
+	}
+	return vv, nil
+}
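
The terminal-state helpers above are case-insensitive string checks that also accept the non-conformant spellings. A stdlib-only sketch of the same logic (local function names, not the internal package's exported API):

```go
package main

import (
	"fmt"
	"strings"
)

// failed mirrors poller.Failed, including the non-conformant
// "Cancelled" spelling seen in the wild.
func failed(s string) bool {
	return strings.EqualFold(s, "Failed") ||
		strings.EqualFold(s, "Canceled") ||
		strings.EqualFold(s, "Cancelled")
}

// succeeded mirrors poller.Succeeded, including "Completed".
func succeeded(s string) bool {
	return strings.EqualFold(s, "Succeeded") || strings.EqualFold(s, "Completed")
}

func isTerminal(s string) bool { return failed(s) || succeeded(s) }

func main() {
	for _, s := range []string{"InProgress", "succeeded", "CANCELLED"} {
		fmt.Println(s, isTerminal(s))
	}
}
```

Using strings.EqualFold rather than normalizing once keeps the comparison allocation-free and tolerant of whatever casing a service returns.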

vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/temporal/resource.go

@@ -0,0 +1,123 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package temporal
+
+import (
+	"sync"
+	"time"
+)
+
+// AcquireResource abstracts a method for refreshing a temporal resource.
+type AcquireResource[TResource, TState any] func(state TState) (newResource TResource, newExpiration time.Time, err error)
+
+// Resource is a temporal resource (usually a credential) that requires periodic refreshing.
+type Resource[TResource, TState any] struct {
+	// cond is used to synchronize access to the shared resource embodied by the remaining fields
+	cond *sync.Cond
+
+	// acquiring indicates that some thread/goroutine is in the process of acquiring/updating the resource
+	acquiring bool
+
+	// resource contains the value of the shared resource
+	resource TResource
+
+	// expiration indicates when the shared resource expires; it is 0 if the resource was never acquired
+	expiration time.Time
+
+	// lastAttempt indicates when a thread/goroutine last attempted to acquire/update the resource
+	lastAttempt time.Time
+
+	// acquireResource is the callback function that actually acquires the resource
+	acquireResource AcquireResource[TResource, TState]
+}
+
+// NewResource creates a new Resource that uses the specified AcquireResource for refreshing.
+func NewResource[TResource, TState any](ar AcquireResource[TResource, TState]) *Resource[TResource, TState] {
+	return &Resource[TResource, TState]{cond: sync.NewCond(&sync.Mutex{}), acquireResource: ar}
+}
+
+// Get returns the underlying resource.
+// If the resource is fresh, no refresh is performed.
+func (er *Resource[TResource, TState]) Get(state TState) (TResource, error) {
+	// If the resource is expiring within this time window, update it eagerly.
+	// This allows other threads/goroutines to keep running by using the not-yet-expired
+	// resource value while one thread/goroutine updates the resource.
+	const window = 5 * time.Minute   // This example updates the resource 5 minutes prior to expiration
+	const backoff = 30 * time.Second // Minimum wait time between eager update attempts
+
+	now, acquire, expired := time.Now(), false, false
+
+	// acquire exclusive lock
+	er.cond.L.Lock()
+	resource := er.resource
+
+	for {
+		expired = er.expiration.IsZero() || er.expiration.Before(now)
+		if expired {
+			// The resource was never acquired or has expired
+			if !er.acquiring {
+				// If another thread/goroutine is not acquiring/updating the resource, this thread/goroutine will do it
+				er.acquiring, acquire = true, true
+				break
+			}
+			// Getting here means that this thread/goroutine will wait for the updated resource
+		} else if er.expiration.Add(-window).Before(now) {
+			// The resource is valid but is expiring within the time window
+			if !er.acquiring && er.lastAttempt.Add(backoff).Before(now) {
+				// If another thread/goroutine is not acquiring/renewing the resource, and none has attempted
+				// to do so within the last 30 seconds, this thread/goroutine will do it
+				er.acquiring, acquire = true, true
+				break
+			}
+			// This thread/goroutine will use the existing resource value while another updates it
+			resource = er.resource
+			break
+		} else {
+			// The resource is not close to expiring, this thread/goroutine should use its current value
+			resource = er.resource
+			break
+		}
+		// If we get here, wait for the new resource value to be acquired/updated
+		er.cond.Wait()
+	}
+	er.cond.L.Unlock() // Release the lock so no threads/goroutines are blocked
+
+	var err error
+	if acquire {
+		// This thread/goroutine has been selected to acquire/update the resource
+		var expiration time.Time
+		var newValue TResource
+		er.lastAttempt = now
+		newValue, expiration, err = er.acquireResource(state)
+
+		// Atomically, update the shared resource's new value & expiration.
+		er.cond.L.Lock()
+		if err == nil {
+			// Update resource & expiration, return the new value
+			resource = newValue
+			er.resource, er.expiration = resource, expiration
+		} else if !expired {
+			// An eager update failed. Discard the error and return the current--still valid--resource value
+			err = nil
+		}
+		er.acquiring = false // Indicate that no thread/goroutine is currently acquiring the resource
+
+		// Wake up any waiting threads/goroutines since there is a resource they can ALL use
+		er.cond.L.Unlock()
+		er.cond.Broadcast()
+	}
+	return resource, err // Return the resource this thread/goroutine can use
+}
+
+// Expire marks the resource as expired, ensuring it's refreshed on the next call to Get().
+func (er *Resource[TResource, TState]) Expire() {
+	er.cond.L.Lock()
+	defer er.cond.L.Unlock()
+
+	// Reset the expiration as if we never got this resource to begin with
+	er.expiration = time.Time{}
+}

vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/uuid/uuid.go 🔗

@@ -0,0 +1,76 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package uuid
+
+import (
+	"crypto/rand"
+	"errors"
+	"fmt"
+	"strconv"
+)
+
+// The UUID reserved variants.
+const (
+	reservedRFC4122 byte = 0x40
+)
+
+// A UUID representation compliant with specification in RFC4122 document.
+type UUID [16]byte
+
+// New returns a new UUID using the RFC4122 algorithm.
+func New() (UUID, error) {
+	u := UUID{}
+	// Set all bits to pseudo-random values.
+	// NOTE: this takes a process-wide lock
+	_, err := rand.Read(u[:])
+	if err != nil {
+		return u, err
+	}
+	u[8] = (u[8] | reservedRFC4122) & 0x7F // u.setVariant(ReservedRFC4122)
+
+	var version byte = 4
+	u[6] = (u[6] & 0xF) | (version << 4) // u.setVersion(4)
+	return u, nil
+}
+
+// String returns the UUID in "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" format.
+func (u UUID) String() string {
+	return fmt.Sprintf("%x-%x-%x-%x-%x", u[0:4], u[4:6], u[6:8], u[8:10], u[10:])
+}
+
+// Parse parses a string formatted as "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+// or "{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}" into a UUID.
+func Parse(s string) (UUID, error) {
+	var uuid UUID
+	// ensure format
+	switch len(s) {
+	case 36:
+		// xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+	case 38:
+		// {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
+		s = s[1:37]
+	default:
+		return uuid, errors.New("invalid UUID format")
+	}
+	if s[8] != '-' || s[13] != '-' || s[18] != '-' || s[23] != '-' {
+		return uuid, errors.New("invalid UUID format")
+	}
+	// parse chunks
+	for i, x := range [16]int{
+		0, 2, 4, 6,
+		9, 11,
+		14, 16,
+		19, 21,
+		24, 26, 28, 30, 32, 34} {
+		b, err := strconv.ParseUint(s[x:x+2], 16, 8)
+		if err != nil {
+			return uuid, fmt.Errorf("invalid UUID format: %s", err)
+		}
+		uuid[i] = byte(b)
+	}
+	return uuid, nil
+}
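The bit manipulation in New and the fixed offsets in Parse can be mirrored in a stdlib-only sketch to verify the round trip (helper names are illustrative, not the vendored package's API):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"strconv"
	"strings"
)

// newV4 mirrors the vendored New(): 16 random bytes with the RFC 4122
// variant stamped into byte 8 and version 4 into the high nibble of byte 6.
func newV4() ([16]byte, error) {
	var u [16]byte
	if _, err := rand.Read(u[:]); err != nil {
		return u, err
	}
	u[8] = (u[8] | 0x40) & 0x7F // variant: clear the top bit, set 0b01
	u[6] = (u[6] & 0x0F) | 0x40 // version: high nibble = 4
	return u, nil
}

// format matches the vendored String(): 8-4-4-4-12 lowercase hex.
func format(u [16]byte) string {
	return fmt.Sprintf("%x-%x-%x-%x-%x", u[0:4], u[4:6], u[6:8], u[8:10], u[10:])
}

// parse mirrors the vendored Parse(): two hex digits per byte at fixed offsets.
func parse(s string) ([16]byte, error) {
	var u [16]byte
	if len(s) != 36 || strings.Count(s, "-") != 4 {
		return u, fmt.Errorf("invalid UUID format")
	}
	for i, x := range [16]int{0, 2, 4, 6, 9, 11, 14, 16, 19, 21, 24, 26, 28, 30, 32, 34} {
		b, err := strconv.ParseUint(s[x:x+2], 16, 8)
		if err != nil {
			return u, fmt.Errorf("invalid UUID format: %s", err)
		}
		u[i] = byte(b)
	}
	return u, nil
}

func main() {
	u, _ := newV4()
	s := format(u)
	back, _ := parse(s)
	// The character at index 14 is the version nibble, always '4' here.
	fmt.Println(len(s) == 36, s[14] == '4', back == u) // → true true true
}
```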

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/LICENSE 🔗

@@ -0,0 +1,21 @@
+    MIT License
+
+    Copyright (c) Microsoft Corporation.
+
+    Permission is hereby granted, free of charge, to any person obtaining a copy
+    of this software and associated documentation files (the "Software"), to deal
+    in the Software without restriction, including without limitation the rights
+    to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+    copies of the Software, and to permit persons to whom the Software is
+    furnished to do so, subject to the following conditions:
+
+    The above copyright notice and this permission notice shall be included in all
+    copies or substantial portions of the Software.
+
+    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+    OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+    SOFTWARE

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache/cache.go 🔗

@@ -0,0 +1,54 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+/*
+Package cache allows third parties to implement external storage for caching token data
+for distributed systems or multiple local applications access.
+
+The data stored and extracted will represent the entire cache. Therefore it is recommended
+to use one MSAL instance per user. This data is considered opaque and there are no guarantees
+to implementers on the format being passed.
+*/
+package cache
+
+import "context"
+
+// Marshaler marshals data from an internal cache to bytes that can be stored.
+type Marshaler interface {
+	Marshal() ([]byte, error)
+}
+
+// Unmarshaler unmarshals data from a storage medium into the internal cache, overwriting it.
+type Unmarshaler interface {
+	Unmarshal([]byte) error
+}
+
+// Serializer can serialize the cache to binary or from binary into the cache.
+type Serializer interface {
+	Marshaler
+	Unmarshaler
+}
+
+// ExportHints are suggestions for storing data.
+type ExportHints struct {
+	// PartitionKey is a suggested key for partitioning the cache
+	PartitionKey string
+}
+
+// ReplaceHints are suggestions for loading data.
+type ReplaceHints struct {
+	// PartitionKey is a suggested key for partitioning the cache
+	PartitionKey string
+}
+
+// ExportReplace exports and replaces in-memory cache data. It doesn't support nil Context or
+// define the outcome of passing one. A Context without a timeout must receive a default timeout
+// specified by the implementor. Retries must be implemented inside the implementation.
+type ExportReplace interface {
+	// Replace replaces the cache with what is in external storage. Implementors should honor
+	// Context cancellations and return context.Canceled or context.DeadlineExceeded in those cases.
+	Replace(ctx context.Context, cache Unmarshaler, hints ReplaceHints) error
+	// Export writes the binary representation of the cache (cache.Marshal()) to external storage.
+	// This is considered opaque. Context cancellations should be honored as in Replace.
+	Export(ctx context.Context, cache Marshaler, hints ExportHints) error
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential/confidential.go 🔗

@@ -0,0 +1,719 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+/*
+Package confidential provides a client for authentication of "confidential" applications.
+A "confidential" application is defined as an app that runs on servers. Such apps are considered
+difficult to access and for that reason capable of keeping an application secret.
+Confidential clients can hold configuration-time secrets.
+*/
+package confidential
+
+import (
+	"context"
+	"crypto"
+	"crypto/rsa"
+	"crypto/x509"
+	"encoding/base64"
+	"encoding/pem"
+	"errors"
+	"fmt"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/options"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared"
+)
+
+/*
+Design note:
+
+confidential.Client uses base.Client as an embedded type. base.Client statically assigns its attributes
+during creation. As it doesn't have any pointers in it, anything borrowed from it, such as
+Base.AuthParams is a copy that is free to be manipulated here.
+
+Duplicate Calls shared between public.Client and this package:
+Some duplicate call options are provided here that are the same as in public.Client. This
+is a design choice. Go proverb (https://www.youtube.com/watch?v=PAAkCSZUG1c&t=9m28s):
+"a little copying is better than a little dependency". Yes, we could have another package with
+shared options, but that divides a couple of options from all the others, which makes the user look
+through more docs. We could have all clients in one package, but I think separate packages
+here make for better naming (public.Client vs client.PublicClient). So I chose a little
+duplication.
+
+.Net People, Take note on X509:
+This uses x509.Certificates and private keys. x509 does not store private keys. .Net
+has an x509.Certificate2 abstraction that has private keys, but that's just a strange invention.
+As such I've put a PEM decoder into here.
+*/
+
+// TODO(msal): This should have example code for each method on client using Go's example doc framework.
+// Base usage details should be included in the package documentation.
+
+// AuthResult contains the results of one token acquisition operation.
+// For details see https://aka.ms/msal-net-authenticationresult
+type AuthResult = base.AuthResult
+
+type AuthenticationScheme = authority.AuthenticationScheme
+
+type Account = shared.Account
+
+// CertFromPEM converts a PEM file (.pem or .key) for use with [NewCredFromCert]. The file
+// must contain the public certificate and the private key. If a PEM block is encrypted and
+// password is not an empty string, it attempts to decrypt the PEM blocks using the password.
+// Multiple certs are due to certificate chaining for use cases like TLS that sign from root to leaf.
+func CertFromPEM(pemData []byte, password string) ([]*x509.Certificate, crypto.PrivateKey, error) {
+	var certs []*x509.Certificate
+	var priv crypto.PrivateKey
+	for {
+		block, rest := pem.Decode(pemData)
+		if block == nil {
+			break
+		}
+
+		//nolint:staticcheck // x509.IsEncryptedPEMBlock and x509.DecryptPEMBlock are deprecated. They are used here only to support a usecase.
+		if x509.IsEncryptedPEMBlock(block) {
+			b, err := x509.DecryptPEMBlock(block, []byte(password))
+			if err != nil {
+				return nil, nil, fmt.Errorf("could not decrypt encrypted PEM block: %v", err)
+			}
+			block, _ = pem.Decode(b)
+			if block == nil {
+				return nil, nil, fmt.Errorf("encountered an encrypted PEM block that did not decode")
+			}
+		}
+
+		switch block.Type {
+		case "CERTIFICATE":
+			cert, err := x509.ParseCertificate(block.Bytes)
+			if err != nil {
+				return nil, nil, fmt.Errorf("block labelled 'CERTIFICATE' could not be parsed by x509: %v", err)
+			}
+			certs = append(certs, cert)
+		case "PRIVATE KEY":
+			if priv != nil {
+				return nil, nil, errors.New("found multiple private key blocks")
+			}
+
+			var err error
+			priv, err = x509.ParsePKCS8PrivateKey(block.Bytes)
+			if err != nil {
+				return nil, nil, fmt.Errorf("could not decode private key: %v", err)
+			}
+		case "RSA PRIVATE KEY":
+			if priv != nil {
+				return nil, nil, errors.New("found multiple private key blocks")
+			}
+			var err error
+			priv, err = x509.ParsePKCS1PrivateKey(block.Bytes)
+			if err != nil {
+				return nil, nil, fmt.Errorf("could not decode private key: %v", err)
+			}
+		}
+		pemData = rest
+	}
+
+	if len(certs) == 0 {
+		return nil, nil, fmt.Errorf("no certificates found")
+	}
+
+	if priv == nil {
+		return nil, nil, fmt.Errorf("no private key found")
+	}
+
+	return certs, priv, nil
+}
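The block-walking loop in CertFromPEM can be exercised with a stdlib-only sketch: generate a self-signed certificate and a PKCS#1 key, encode both as PEM, and decode them with the same switch on block type (encrypted-PEM and PKCS#8 handling omitted; helper names are illustrative):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

// makePEM builds a bundle like the one CertFromPEM expects: a self-signed
// CERTIFICATE block followed by an RSA PRIVATE KEY block.
func makePEM() ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "example"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	var buf []byte
	buf = append(buf, pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})...)
	buf = append(buf, pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})...)
	return buf, key, nil
}

// decodePEM walks the blocks the same way CertFromPEM does (no decryption here).
func decodePEM(pemData []byte) ([]*x509.Certificate, *rsa.PrivateKey, error) {
	var certs []*x509.Certificate
	var priv *rsa.PrivateKey
	for {
		block, rest := pem.Decode(pemData)
		if block == nil {
			break
		}
		switch block.Type {
		case "CERTIFICATE":
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				return nil, nil, err
			}
			certs = append(certs, cert)
		case "RSA PRIVATE KEY":
			key, err := x509.ParsePKCS1PrivateKey(block.Bytes)
			if err != nil {
				return nil, nil, err
			}
			priv = key
		}
		pemData = rest
	}
	return certs, priv, nil
}

func main() {
	bundle, key, _ := makePEM()
	certs, priv, _ := decodePEM(bundle)
	fmt.Println(len(certs) == 1, priv.Equal(key)) // → true true
}
```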
+
+// AssertionRequestOptions has required information for client assertion claims
+type AssertionRequestOptions = exported.AssertionRequestOptions
+
+// Credential represents the credential used in confidential client flows.
+type Credential struct {
+	secret string
+
+	cert *x509.Certificate
+	key  crypto.PrivateKey
+	x5c  []string
+
+	assertionCallback func(context.Context, AssertionRequestOptions) (string, error)
+
+	tokenProvider func(context.Context, TokenProviderParameters) (TokenProviderResult, error)
+}
+
+// toInternal returns the accesstokens.Credential that is used internally. The current structure of the
+// code requires that client.go, requests.go and confidential.go share a credential type without
+// having import recursion. That requires the type used between is in a shared package. Therefore
+// we have this.
+func (c Credential) toInternal() (*accesstokens.Credential, error) {
+	if c.secret != "" {
+		return &accesstokens.Credential{Secret: c.secret}, nil
+	}
+	if c.cert != nil {
+		if c.key == nil {
+			return nil, errors.New("missing private key for certificate")
+		}
+		return &accesstokens.Credential{Cert: c.cert, Key: c.key, X5c: c.x5c}, nil
+	}
+	if c.key != nil {
+		return nil, errors.New("missing certificate for private key")
+	}
+	if c.assertionCallback != nil {
+		return &accesstokens.Credential{AssertionCallback: c.assertionCallback}, nil
+	}
+	if c.tokenProvider != nil {
+		return &accesstokens.Credential{TokenProvider: c.tokenProvider}, nil
+	}
+	return nil, errors.New("invalid credential")
+}
+
+// NewCredFromSecret creates a Credential from a secret.
+func NewCredFromSecret(secret string) (Credential, error) {
+	if secret == "" {
+		return Credential{}, errors.New("secret can't be empty string")
+	}
+	return Credential{secret: secret}, nil
+}
+
+// NewCredFromAssertionCallback creates a Credential that invokes a callback to get assertions
+// authenticating the application. The callback must be thread safe.
+func NewCredFromAssertionCallback(callback func(context.Context, AssertionRequestOptions) (string, error)) Credential {
+	return Credential{assertionCallback: callback}
+}
+
+// NewCredFromCert creates a Credential from a certificate or chain of certificates and an RSA private key
+// as returned by [CertFromPEM].
+func NewCredFromCert(certs []*x509.Certificate, key crypto.PrivateKey) (Credential, error) {
+	cred := Credential{key: key}
+	k, ok := key.(*rsa.PrivateKey)
+	if !ok {
+		return cred, errors.New("key must be an RSA key")
+	}
+	for _, cert := range certs {
+		if cert == nil {
+			// not returning an error here because certs may still contain a sufficient cert/key pair
+			continue
+		}
+		certKey, ok := cert.PublicKey.(*rsa.PublicKey)
+		if ok && k.E == certKey.E && k.N.Cmp(certKey.N) == 0 {
+			// We know this is the signing cert because its public key matches the given private key.
+			// This cert must be first in x5c.
+			cred.cert = cert
+			cred.x5c = append([]string{base64.StdEncoding.EncodeToString(cert.Raw)}, cred.x5c...)
+		} else {
+			cred.x5c = append(cred.x5c, base64.StdEncoding.EncodeToString(cert.Raw))
+		}
+	}
+	if cred.cert == nil {
+		return cred, errors.New("key doesn't match any certificate")
+	}
+	return cred, nil
+}
+
+// TokenProviderParameters is the authentication parameters passed to token providers
+type TokenProviderParameters = exported.TokenProviderParameters
+
+// TokenProviderResult is the authentication result returned by custom token providers
+type TokenProviderResult = exported.TokenProviderResult
+
+// NewCredFromTokenProvider creates a Credential from a function that provides access tokens. The function
+// must be concurrency safe. This is intended only to allow the Azure SDK to cache MSI tokens. It isn't
+// useful to applications in general because the token provider must implement all authentication logic.
+func NewCredFromTokenProvider(provider func(context.Context, TokenProviderParameters) (TokenProviderResult, error)) Credential {
+	return Credential{tokenProvider: provider}
+}
+
+// AutoDetectRegion instructs MSAL Go to auto detect region for Azure regional token service.
+func AutoDetectRegion() string {
+	return "TryAutoDetect"
+}
+
+// Client is a representation of authentication client for confidential applications as defined in the
+// package doc. A new Client should be created PER SERVICE USER.
+// For more information, visit https://docs.microsoft.com/azure/active-directory/develop/msal-client-applications
+type Client struct {
+	base base.Client
+	cred *accesstokens.Credential
+}
+
+// clientOptions are optional settings for New(). These options are set using various functions
+// returning Option calls.
+type clientOptions struct {
+	accessor                          cache.ExportReplace
+	authority, azureRegion            string
+	capabilities                      []string
+	disableInstanceDiscovery, sendX5C bool
+	httpClient                        ops.HTTPClient
+}
+
+// Option is an optional argument to New().
+type Option func(o *clientOptions)
+
+// WithCache provides an accessor that will read and write authentication data to an externally managed cache.
+func WithCache(accessor cache.ExportReplace) Option {
+	return func(o *clientOptions) {
+		o.accessor = accessor
+	}
+}
+
+// WithClientCapabilities allows configuring one or more client capabilities such as "CP1"
+func WithClientCapabilities(capabilities []string) Option {
+	return func(o *clientOptions) {
+		// there's no danger of sharing the slice's underlying memory with the application because
+		// this slice is simply passed to base.WithClientCapabilities, which copies its data
+		o.capabilities = capabilities
+	}
+}
+
+// WithHTTPClient allows for a custom HTTP client to be set.
+func WithHTTPClient(httpClient ops.HTTPClient) Option {
+	return func(o *clientOptions) {
+		o.httpClient = httpClient
+	}
+}
+
+// WithX5C specifies whether the x5c claim (the public key of the certificate) should be sent to STS to enable Subject Name Issuer Authentication.
+func WithX5C() Option {
+	return func(o *clientOptions) {
+		o.sendX5C = true
+	}
+}
+
+// WithInstanceDiscovery enables or disables instance discovery; pass false to disable authority validation (to support private cloud scenarios).
+func WithInstanceDiscovery(enabled bool) Option {
+	return func(o *clientOptions) {
+		o.disableInstanceDiscovery = !enabled
+	}
+}
+
+// WithAzureRegion sets the preferred region, or pass confidential.AutoDetectRegion() to auto-detect the region.
+// Region names as per https://azure.microsoft.com/en-ca/global-infrastructure/geographies/.
+// See https://aka.ms/region-map for more details on region names.
+// The region value should be short region name for the region where the service is deployed.
+// For example "centralus" is short name for region Central US.
+// Not all auth flows can use the regional token service.
+// Service To Service (client credential flow) tokens can be obtained from the regional service.
+// Requires configuration at the tenant level.
+// Auto-detection works on a limited number of Azure artifacts (VMs, Azure functions).
+// If auto-detection fails, the non-regional endpoint will be used.
+// If an invalid region name is provided, the non-regional endpoint MIGHT be used or the token request MIGHT fail.
+func WithAzureRegion(val string) Option {
+	return func(o *clientOptions) {
+		o.azureRegion = val
+	}
+}
+
+// New is the constructor for Client. authority is the URL of a token authority such as "https://login.microsoftonline.com/<your tenant>".
+// If the Client will connect directly to AD FS, use "adfs" for the tenant. clientID is the application's client ID (also called its
+// "application ID").
+func New(authority, clientID string, cred Credential, options ...Option) (Client, error) {
+	internalCred, err := cred.toInternal()
+	if err != nil {
+		return Client{}, err
+	}
+
+	opts := clientOptions{
+		authority: authority,
+		// if the caller specified a token provider, it will handle all details of authentication, using Client only as a token cache
+		disableInstanceDiscovery: cred.tokenProvider != nil,
+		httpClient:               shared.DefaultClient,
+	}
+	for _, o := range options {
+		o(&opts)
+	}
+	baseOpts := []base.Option{
+		base.WithCacheAccessor(opts.accessor),
+		base.WithClientCapabilities(opts.capabilities),
+		base.WithInstanceDiscovery(!opts.disableInstanceDiscovery),
+		base.WithRegionDetection(opts.azureRegion),
+		base.WithX5C(opts.sendX5C),
+	}
+	base, err := base.New(clientID, opts.authority, oauth.New(opts.httpClient), baseOpts...)
+	if err != nil {
+		return Client{}, err
+	}
+	base.AuthParams.IsConfidentialClient = true
+
+	return Client{base: base, cred: internalCred}, nil
+}
+
+// authCodeURLOptions contains options for AuthCodeURL
+type authCodeURLOptions struct {
+	claims, loginHint, tenantID, domainHint string
+}
+
+// AuthCodeURLOption is implemented by options for AuthCodeURL
+type AuthCodeURLOption interface {
+	authCodeURLOption()
+}
+
+// AuthCodeURL creates a URL used to acquire an authorization code. Users need to call CreateAuthorizationCodeURLParameters and pass it in.
+//
+// Options: [WithClaims], [WithDomainHint], [WithLoginHint], [WithTenantID]
+func (cca Client) AuthCodeURL(ctx context.Context, clientID, redirectURI string, scopes []string, opts ...AuthCodeURLOption) (string, error) {
+	o := authCodeURLOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return "", err
+	}
+	ap, err := cca.base.AuthParams.WithTenant(o.tenantID)
+	if err != nil {
+		return "", err
+	}
+	ap.Claims = o.claims
+	ap.LoginHint = o.loginHint
+	ap.DomainHint = o.domainHint
+	return cca.base.AuthCodeURL(ctx, clientID, redirectURI, scopes, ap)
+}
+
+// WithLoginHint pre-populates the login prompt with a username.
+func WithLoginHint(username string) interface {
+	AuthCodeURLOption
+	options.CallOption
+} {
+	return struct {
+		AuthCodeURLOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *authCodeURLOptions:
+					t.loginHint = username
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// WithDomainHint adds the IdP domain as domain_hint query parameter in the auth url.
+func WithDomainHint(domain string) interface {
+	AuthCodeURLOption
+	options.CallOption
+} {
+	return struct {
+		AuthCodeURLOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *authCodeURLOptions:
+					t.domainHint = domain
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// WithClaims sets additional claims to request for the token, such as those required by conditional access policies.
+// Use this option when Azure AD returned a claims challenge for a prior request. The argument must be decoded.
+// This option is valid for any token acquisition method.
+func WithClaims(claims string) interface {
+	AcquireByAuthCodeOption
+	AcquireByCredentialOption
+	AcquireOnBehalfOfOption
+	AcquireSilentOption
+	AuthCodeURLOption
+	options.CallOption
+} {
+	return struct {
+		AcquireByAuthCodeOption
+		AcquireByCredentialOption
+		AcquireOnBehalfOfOption
+		AcquireSilentOption
+		AuthCodeURLOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *acquireTokenByAuthCodeOptions:
+					t.claims = claims
+				case *acquireTokenByCredentialOptions:
+					t.claims = claims
+				case *acquireTokenOnBehalfOfOptions:
+					t.claims = claims
+				case *acquireTokenSilentOptions:
+					t.claims = claims
+				case *authCodeURLOptions:
+					t.claims = claims
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// WithAuthenticationScheme is an extensibility mechanism designed to be used only by Azure Arc for proof of possession access tokens.
+func WithAuthenticationScheme(authnScheme AuthenticationScheme) interface {
+	AcquireSilentOption
+	AcquireByCredentialOption
+	options.CallOption
+} {
+	return struct {
+		AcquireSilentOption
+		AcquireByCredentialOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *acquireTokenSilentOptions:
+					t.authnScheme = authnScheme
+				case *acquireTokenByCredentialOptions:
+					t.authnScheme = authnScheme
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// WithTenantID specifies a tenant for a single authentication. It may be different than the tenant set in [New].
+// This option is valid for any token acquisition method.
+func WithTenantID(tenantID string) interface {
+	AcquireByAuthCodeOption
+	AcquireByCredentialOption
+	AcquireOnBehalfOfOption
+	AcquireSilentOption
+	AuthCodeURLOption
+	options.CallOption
+} {
+	return struct {
+		AcquireByAuthCodeOption
+		AcquireByCredentialOption
+		AcquireOnBehalfOfOption
+		AcquireSilentOption
+		AuthCodeURLOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *acquireTokenByAuthCodeOptions:
+					t.tenantID = tenantID
+				case *acquireTokenByCredentialOptions:
+					t.tenantID = tenantID
+				case *acquireTokenOnBehalfOfOptions:
+					t.tenantID = tenantID
+				case *acquireTokenSilentOptions:
+					t.tenantID = tenantID
+				case *authCodeURLOptions:
+					t.tenantID = tenantID
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// acquireTokenSilentOptions are all the optional settings to an AcquireTokenSilent() call.
+// These are set by using various AcquireTokenSilentOption functions.
+type acquireTokenSilentOptions struct {
+	account          Account
+	claims, tenantID string
+	authnScheme      AuthenticationScheme
+}
+
+// AcquireSilentOption is implemented by options for AcquireTokenSilent
+type AcquireSilentOption interface {
+	acquireSilentOption()
+}
+
+// WithSilentAccount uses the passed account during an AcquireTokenSilent() call.
+func WithSilentAccount(account Account) interface {
+	AcquireSilentOption
+	options.CallOption
+} {
+	return struct {
+		AcquireSilentOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *acquireTokenSilentOptions:
+					t.account = account
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// AcquireTokenSilent acquires a token from either the cache or using a refresh token.
+//
+// Options: [WithClaims], [WithSilentAccount], [WithTenantID]
+func (cca Client) AcquireTokenSilent(ctx context.Context, scopes []string, opts ...AcquireSilentOption) (AuthResult, error) {
+	o := acquireTokenSilentOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return AuthResult{}, err
+	}
+
+	if o.claims != "" {
+		return AuthResult{}, errors.New("call another AcquireToken method to request a new token having these claims")
+	}
+
+	silentParameters := base.AcquireTokenSilentParameters{
+		Scopes:      scopes,
+		Account:     o.account,
+		RequestType: accesstokens.ATConfidential,
+		Credential:  cca.cred,
+		IsAppCache:  o.account.IsZero(),
+		TenantID:    o.tenantID,
+		AuthnScheme: o.authnScheme,
+	}
+
+	return cca.base.AcquireTokenSilent(ctx, silentParameters)
+}
+
+// acquireTokenByAuthCodeOptions contains the optional parameters used to acquire an access token using the authorization code flow.
+type acquireTokenByAuthCodeOptions struct {
+	challenge, claims, tenantID string
+}
+
+// AcquireByAuthCodeOption is implemented by options for AcquireTokenByAuthCode
+type AcquireByAuthCodeOption interface {
+	acquireByAuthCodeOption()
+}
+
+// WithChallenge allows you to provide a challenge for the .AcquireTokenByAuthCode() call.
+func WithChallenge(challenge string) interface {
+	AcquireByAuthCodeOption
+	options.CallOption
+} {
+	return struct {
+		AcquireByAuthCodeOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *acquireTokenByAuthCodeOptions:
+					t.challenge = challenge
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// AcquireTokenByAuthCode is a request to acquire a security token from the authority, using an authorization code.
+// The specified redirect URI must be the same URI that was used when the authorization code was requested.
+//
+// Options: [WithChallenge], [WithClaims], [WithTenantID]
+func (cca Client) AcquireTokenByAuthCode(ctx context.Context, code string, redirectURI string, scopes []string, opts ...AcquireByAuthCodeOption) (AuthResult, error) {
+	o := acquireTokenByAuthCodeOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return AuthResult{}, err
+	}
+
+	params := base.AcquireTokenAuthCodeParameters{
+		Scopes:      scopes,
+		Code:        code,
+		Challenge:   o.challenge,
+		Claims:      o.claims,
+		AppType:     accesstokens.ATConfidential,
+		Credential:  cca.cred, // This setting differs from public.Client.AcquireTokenByAuthCode
+		RedirectURI: redirectURI,
+		TenantID:    o.tenantID,
+	}
+
+	return cca.base.AcquireTokenByAuthCode(ctx, params)
+}
+
+// acquireTokenByCredentialOptions contains optional configuration for AcquireTokenByCredential
+type acquireTokenByCredentialOptions struct {
+	claims, tenantID string
+	authnScheme      AuthenticationScheme
+}
+
+// AcquireByCredentialOption is implemented by options for AcquireTokenByCredential
+type AcquireByCredentialOption interface {
+	acquireByCredOption()
+}
+
+// AcquireTokenByCredential acquires a security token from the authority, using the client credentials grant.
+//
+// Options: [WithClaims], [WithTenantID]
+func (cca Client) AcquireTokenByCredential(ctx context.Context, scopes []string, opts ...AcquireByCredentialOption) (AuthResult, error) {
+	o := acquireTokenByCredentialOptions{}
+	err := options.ApplyOptions(&o, opts)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	authParams, err := cca.base.AuthParams.WithTenant(o.tenantID)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	authParams.Scopes = scopes
+	authParams.AuthorizationType = authority.ATClientCredentials
+	authParams.Claims = o.claims
+	if o.authnScheme != nil {
+		authParams.AuthnScheme = o.authnScheme
+	}
+	token, err := cca.base.Token.Credential(ctx, authParams, cca.cred)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	return cca.base.AuthResultFromToken(ctx, authParams, token, true)
+}
+
+// acquireTokenOnBehalfOfOptions contains optional configuration for AcquireTokenOnBehalfOf
+type acquireTokenOnBehalfOfOptions struct {
+	claims, tenantID string
+}
+
+// AcquireOnBehalfOfOption is implemented by options for AcquireTokenOnBehalfOf
+type AcquireOnBehalfOfOption interface {
+	acquireOBOOption()
+}
+
+// AcquireTokenOnBehalfOf acquires a security token for an app using a middle-tier app's access token.
+// Refer https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow.
+//
+// Options: [WithClaims], [WithTenantID]
+func (cca Client) AcquireTokenOnBehalfOf(ctx context.Context, userAssertion string, scopes []string, opts ...AcquireOnBehalfOfOption) (AuthResult, error) {
+	o := acquireTokenOnBehalfOfOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return AuthResult{}, err
+	}
+	params := base.AcquireTokenOnBehalfOfParameters{
+		Scopes:        scopes,
+		UserAssertion: userAssertion,
+		Claims:        o.claims,
+		Credential:    cca.cred,
+		TenantID:      o.tenantID,
+	}
+	return cca.base.AcquireTokenOnBehalfOf(ctx, params)
+}
+
+// Account gets the account in the token cache with the specified homeAccountID.
+func (cca Client) Account(ctx context.Context, accountID string) (Account, error) {
+	return cca.base.Account(ctx, accountID)
+}
+
+// RemoveAccount signs the account out and forgets account from token cache.
+func (cca Client) RemoveAccount(ctx context.Context, account Account) error {
+	return cca.base.RemoveAccount(ctx, account)
+}
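The option pattern used throughout this file — anonymous structs embedding several marker interfaces plus an internal CallOption — is what lets a single option such as WithTenantID type-check against multiple Acquire* methods. A self-contained sketch of the mechanism (simplified names, not MSAL's internal API):

```go
package main

import "fmt"

// callOption mirrors the internal options.CallOption idea: a deferred
// setter applied to whichever options struct an Acquire method owns.
type callOption interface{ apply(any) error }

type callOptionFunc func(any) error

func (f callOptionFunc) apply(a any) error { return f(a) }

// Marker interfaces: each method accepts only its own option type.
type acquireSilentOption interface{ silentMarker() }
type acquireByCredentialOption interface{ credentialMarker() }

type silentOpts struct{ tenantID string }
type credentialOpts struct{ tenantID string }

// withTenantID embeds BOTH markers plus the call option, so one value
// type-checks as an option to either method (cf. WithTenantID above).
func withTenantID(id string) interface {
	acquireSilentOption
	acquireByCredentialOption
} {
	return struct {
		acquireSilentOption
		acquireByCredentialOption
		callOption
	}{callOption: callOptionFunc(func(a any) error {
		switch t := a.(type) {
		case *silentOpts:
			t.tenantID = id
		case *credentialOpts:
			t.tenantID = id
		default:
			return fmt.Errorf("unexpected options type %T", a)
		}
		return nil
	})}
}

// acquireTokenSilent stands in for an Acquire* method: it applies each
// option to its private options struct via the embedded call option.
func acquireTokenSilent(opts ...acquireSilentOption) (silentOpts, error) {
	var o silentOpts
	for _, opt := range opts {
		co, ok := opt.(callOption)
		if !ok {
			return o, fmt.Errorf("option %T is not a call option", opt)
		}
		if err := co.apply(&o); err != nil {
			return o, err
		}
	}
	return o, nil
}

func main() {
	o, err := acquireTokenSilent(withTenantID("my-tenant"))
	fmt.Println(o.tenantID, err) // → my-tenant <nil>
}
```

The embedded marker interfaces carry nil values and exist only to satisfy the compiler; the actual work happens in the promoted apply method, exactly as in the WithClaims/WithTenantID options above.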

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors/error_design.md 🔗

@@ -0,0 +1,111 @@
+# MSAL Error Design
+
+Author: Abhidnya Patil (abhidnya.patil@microsoft.com)
+
+Contributors:
+
+- John Doak (jdoak@microsoft.com)
+- Keegan Caruso (Keegan.Caruso@microsoft.com)
+- Joel Hendrix (jhendrix@microsoft.com)
+
+## Background
+
+Errors in MSAL are intended for app developers to troubleshoot and not for displaying to end-users.
+
+### Go error handling vs other MSAL languages
+
+Most modern languages use exception based errors. Simply put, you "throw" an exception and it must be caught at some routine in the upper stack or it will eventually crash the program.
+
+Go doesn't use exceptions, instead it relies on multiple return values, one of which can be the builtin error interface type. It is up to the user to decide what to do.
+
+### Go custom error types
+
+Errors can be created in Go by simply using errors.New() or fmt.Errorf() to create an "error".
+
+Custom errors can be created in multiple ways. One of the more robust ways is simply to satisfy the error interface:
+
+```go
+type MyCustomErr struct {
+  Msg string
+}
+func (m MyCustomErr) Error() string { // This implements "error"
+  return m.Msg
+}
+```
+
+### MSAL Error Goals
+
+- Provide diagnostics to the user and for tickets that can be used to track down bugs or client misconfigurations
+- Detect errors that are transitory and can be retried
+- Allow the user to identify certain errors that the program can respond to, such as informing the user of the need to do an enrollment
+  
+## Implementing Client Side Errors
+
+Client side errors indicate a misconfiguration or the passing of bad arguments; they are non-recoverable, and retrying isn't possible.
+
+These errors can simply be standard Go errors created by errors.New() or fmt.Errorf(). If down the line we need a custom error, we can introduce it, but for now the error messages just need to be clear on what the issue was.
+
+## Implementing Service Side Errors
+
+Service side errors occur when an external RPC either responds with an HTTP error code or returns a message that includes an error.
+
+These errors can be transitory (please slow down) or permanent (HTTP 404). To meet our diagnostic goals, we require the ability to differentiate these errors from other errors.
+
+The current implementation includes a specialized type that captures any error from the server:
+
+```go
+// CallErr represents an HTTP call error. Has a Verbose() method that allows getting the
+// http.Request and Response objects. Implements error.
+type CallErr struct {
+    Req  *http.Request
+    Resp *http.Response
+    Err  error
+}
+
+// Error implements the error interface.
+func (e CallErr) Error() string {
+    return e.Err.Error()
+}
+
+// Verbose prints a verbose error message with the request and response.
+func (e CallErr) Verbose() string {
+    e.Resp.Request = nil // This brings in a bunch of TLS stuff we don't need
+    e.Resp.TLS = nil     // Same
+    return fmt.Sprintf("%s:\nRequest:\n%s\nResponse:\n%s", e.Err, prettyConf.Sprint(e.Req), prettyConf.Sprint(e.Resp))
+}
+```
+
+A user will always receive the most concise error we provide. They can tell if it is a server side error using the Go errors package:
+
+```go
+var callErr CallErr
+if errors.As(err, &callErr) {
+  ...
+}
+```
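+As a self-contained sketch of this pattern (using a trimmed-down CallErr, not the real comm package), errors.As finds the CallErr even after it has been wrapped with fmt.Errorf and %w:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+	"net/http"
+)
+
+// CallErr is a trimmed-down copy of the type above, for illustration only.
+type CallErr struct {
+	Req  *http.Request
+	Resp *http.Response
+	Err  error
+}
+
+func (e CallErr) Error() string { return e.Err.Error() }
+
+func main() {
+	// Wrap the server-side error the way a caller higher in the stack might.
+	err := fmt.Errorf("acquiring token: %w", CallErr{Err: errors.New("reply status code was 503")})
+
+	// errors.As walks the wrap chain and extracts the CallErr if present.
+	var callErr CallErr
+	if errors.As(err, &callErr) {
+		fmt.Println("server-side error:", callErr.Err)
+	}
+}
+```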
+
+We also provide a Verbose() function that retrieves the most verbose message from any error in the chain:
+
+```go
+fmt.Println(errors.Verbose(err))
+```
+
+If further differentiation is required, we can add custom errors that use Go error wrapping on top of CallErr to achieve our diagnostic goals (such as detecting when to retry a call due to transient errors).  
+
+CallErr is always returned from the comm package (which handles all HTTP requests) and looks similar to:
+
+```go
+return nil, errors.CallErr{
+    Req:  req,
+    Resp: reply,
+    Err:  fmt.Errorf("http call(%s)(%s) error: reply status code was %d:\n%s", req.URL.String(), req.Method, reply.StatusCode, ErrorResponse), // ErrorResponse is the JSON body extracted from the http response
+}
+```
+
+## Future Decisions
+
+The ability to retry calls needs to have centralized responsibility. Either the user is doing it or the client is doing it.  
+
+If the user should be responsible, our errors package will include a CanRetry() function that will inform the user if the error provided to them is retryable.  This is based on the http error code and possibly the type of error that was returned.  It would also include a sleep time if the server returned an amount of time to wait.
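+A hypothetical sketch of such a helper (the name canRetry, the trimmed-down CallErr, and the status-code list are assumptions for illustration, not part of the current package) could look like:
+
+```go
+package main
+
+import (
+	"errors"
+	"fmt"
+	"net/http"
+)
+
+// CallErr is a trimmed-down copy of the errors package type, for illustration only.
+type CallErr struct {
+	Req  *http.Request
+	Resp *http.Response
+	Err  error
+}
+
+func (e CallErr) Error() string { return e.Err.Error() }
+
+// canRetry is a hypothetical sketch of the CanRetry() idea: it reports whether
+// an error wraps a CallErr whose HTTP status code suggests a transient failure.
+func canRetry(err error) bool {
+	var ce CallErr
+	if !errors.As(err, &ce) || ce.Resp == nil {
+		return false
+	}
+	switch ce.Resp.StatusCode {
+	case http.StatusTooManyRequests, http.StatusServiceUnavailable, http.StatusGatewayTimeout:
+		return true
+	}
+	return false
+}
+
+func main() {
+	transient := fmt.Errorf("call failed: %w", CallErr{
+		Resp: &http.Response{StatusCode: http.StatusServiceUnavailable},
+		Err:  errors.New("reply status code was 503"),
+	})
+	fmt.Println(canRetry(transient))                              // 503 is transient
+	fmt.Println(canRetry(errors.New("bad client configuration"))) // client-side, not retryable
+}
+```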
+
+Otherwise we will do this internally and retries will be left to us.

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors/errors.go 🔗

@@ -0,0 +1,89 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package errors
+
+import (
+	"errors"
+	"fmt"
+	"io"
+	"net/http"
+	"reflect"
+	"strings"
+
+	"github.com/kylelemons/godebug/pretty"
+)
+
+var prettyConf = &pretty.Config{
+	IncludeUnexported: false,
+	SkipZeroFields:    true,
+	TrackCycles:       true,
+	Formatter: map[reflect.Type]interface{}{
+		reflect.TypeOf((*io.Reader)(nil)).Elem(): func(r io.Reader) string {
+			b, err := io.ReadAll(r)
+			if err != nil {
+				return "could not read io.Reader content"
+			}
+			return string(b)
+		},
+	},
+}
+
+type verboser interface {
+	Verbose() string
+}
+
+// Verbose returns the most verbose error message available from the error chain.
+func Verbose(err error) string {
+	build := strings.Builder{}
+	for {
+		if err == nil {
+			break
+		}
+		if v, ok := err.(verboser); ok {
+			build.WriteString(v.Verbose())
+		} else {
+			build.WriteString(err.Error())
+		}
+		err = errors.Unwrap(err)
+	}
+	return build.String()
+}
+
+// New is equivalent to errors.New().
+func New(text string) error {
+	return errors.New(text)
+}
+
+// CallErr represents an HTTP call error. Has a Verbose() method that allows getting the
+// http.Request and Response objects. Implements error.
+type CallErr struct {
+	Req *http.Request
+	// Resp contains response body
+	Resp *http.Response
+	Err  error
+}
+
+// Error implements the error interface.
+func (e CallErr) Error() string {
+	return e.Err.Error()
+}
+
+// Verbose prints a verbose error message with the request and response.
+func (e CallErr) Verbose() string {
+	e.Resp.Request = nil // This brings in a bunch of TLS stuff we don't need
+	e.Resp.TLS = nil     // Same
+	return fmt.Sprintf("%s:\nRequest:\n%s\nResponse:\n%s", e.Err, prettyConf.Sprint(e.Req), prettyConf.Sprint(e.Resp))
+}
+
+// Is reports whether any error in errors chain matches target.
+func Is(err, target error) bool {
+	return errors.Is(err, target)
+}
+
+// As finds the first error in errors chain that matches target,
+// and if so, sets target to that error value and returns true.
+// Otherwise, it returns false.
+func As(err error, target interface{}) bool {
+	return errors.As(err, target)
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/base.go 🔗

@@ -0,0 +1,477 @@
+// Package base contains a "Base" client that is used by the external public.Client and confidential.Client.
+// Base holds shared attributes that must be available to both clients and methods that act as
+// shared calls.
+package base
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/url"
+	"reflect"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared"
+)
+
+const (
+	// AuthorityPublicCloud is the default AAD authority host
+	AuthorityPublicCloud = "https://login.microsoftonline.com/common"
+	scopeSeparator       = " "
+)
+
+// manager provides an internal cache. It is defined to allow faking the cache in tests.
+// In production it's a *storage.Manager or *storage.PartitionedManager.
+type manager interface {
+	cache.Serializer
+	Read(context.Context, authority.AuthParams) (storage.TokenResponse, error)
+	Write(authority.AuthParams, accesstokens.TokenResponse) (shared.Account, error)
+}
+
+// accountManager is a manager that also caches accounts. In production it's a *storage.Manager.
+type accountManager interface {
+	manager
+	AllAccounts() []shared.Account
+	Account(homeAccountID string) shared.Account
+	RemoveAccount(account shared.Account, clientID string)
+}
+
+// AcquireTokenSilentParameters contains the parameters to acquire a token silently (from cache).
+type AcquireTokenSilentParameters struct {
+	Scopes            []string
+	Account           shared.Account
+	RequestType       accesstokens.AppType
+	Credential        *accesstokens.Credential
+	IsAppCache        bool
+	TenantID          string
+	UserAssertion     string
+	AuthorizationType authority.AuthorizeType
+	Claims            string
+	AuthnScheme       authority.AuthenticationScheme
+}
+
+// AcquireTokenAuthCodeParameters contains the parameters required to acquire an access token using the auth code flow.
+// To use PKCE, set the CodeChallengeParameter.
+// Code challenges are used to secure authorization code grants; for more information, visit
+// https://tools.ietf.org/html/rfc7636.
+type AcquireTokenAuthCodeParameters struct {
+	Scopes      []string
+	Code        string
+	Challenge   string
+	Claims      string
+	RedirectURI string
+	AppType     accesstokens.AppType
+	Credential  *accesstokens.Credential
+	TenantID    string
+}
+
+type AcquireTokenOnBehalfOfParameters struct {
+	Scopes        []string
+	Claims        string
+	Credential    *accesstokens.Credential
+	TenantID      string
+	UserAssertion string
+}
+
+// AuthResult contains the results of one token acquisition operation in PublicClientApplication
+// or ConfidentialClientApplication. For details see https://aka.ms/msal-net-authenticationresult
+type AuthResult struct {
+	Account        shared.Account
+	IDToken        accesstokens.IDToken
+	AccessToken    string
+	ExpiresOn      time.Time
+	GrantedScopes  []string
+	DeclinedScopes []string
+}
+
+// AuthResultFromStorage creates an AuthResult from a storage token response (which is generated from the cache).
+func AuthResultFromStorage(storageTokenResponse storage.TokenResponse) (AuthResult, error) {
+	if err := storageTokenResponse.AccessToken.Validate(); err != nil {
+		return AuthResult{}, fmt.Errorf("problem with access token in StorageTokenResponse: %w", err)
+	}
+
+	account := storageTokenResponse.Account
+	accessToken := storageTokenResponse.AccessToken.Secret
+	grantedScopes := strings.Split(storageTokenResponse.AccessToken.Scopes, scopeSeparator)
+
+	// Checking if there was an ID token in the cache; this will return an error in the case of confidential client applications.
+	var idToken accesstokens.IDToken
+	if !storageTokenResponse.IDToken.IsZero() {
+		err := idToken.UnmarshalJSON([]byte(storageTokenResponse.IDToken.Secret))
+		if err != nil {
+			return AuthResult{}, fmt.Errorf("problem decoding JWT token: %w", err)
+		}
+	}
+	return AuthResult{account, idToken, accessToken, storageTokenResponse.AccessToken.ExpiresOn.T, grantedScopes, nil}, nil
+}
+
+// NewAuthResult creates an AuthResult.
+func NewAuthResult(tokenResponse accesstokens.TokenResponse, account shared.Account) (AuthResult, error) {
+	if len(tokenResponse.DeclinedScopes) > 0 {
+		return AuthResult{}, fmt.Errorf("token response failed because declined scopes are present: %s", strings.Join(tokenResponse.DeclinedScopes, ","))
+	}
+	return AuthResult{
+		Account:       account,
+		IDToken:       tokenResponse.IDToken,
+		AccessToken:   tokenResponse.AccessToken,
+		ExpiresOn:     tokenResponse.ExpiresOn.T,
+		GrantedScopes: tokenResponse.GrantedScopes.Slice,
+	}, nil
+}
+
+// Client is a base client that provides access to common methods and primitives that
+// can be used by multiple clients.
+type Client struct {
+	Token   *oauth.Client
+	manager accountManager // *storage.Manager or fakeManager in tests
+	// pmanager is a partitioned cache for OBO authentication. *storage.PartitionedManager or fakeManager in tests
+	pmanager manager
+
+	AuthParams      authority.AuthParams // DO NOT EVER MAKE THIS A POINTER! See "Note" in New().
+	cacheAccessor   cache.ExportReplace
+	cacheAccessorMu *sync.RWMutex
+}
+
+// Option is an optional argument to the New constructor.
+type Option func(c *Client) error
+
+// WithCacheAccessor allows you to set some type of cache for storing authentication tokens.
+func WithCacheAccessor(ca cache.ExportReplace) Option {
+	return func(c *Client) error {
+		if ca != nil {
+			c.cacheAccessor = ca
+		}
+		return nil
+	}
+}
+
+// WithClientCapabilities allows configuring one or more client capabilities such as "CP1"
+func WithClientCapabilities(capabilities []string) Option {
+	return func(c *Client) error {
+		var err error
+		if len(capabilities) > 0 {
+			cc, err := authority.NewClientCapabilities(capabilities)
+			if err == nil {
+				c.AuthParams.Capabilities = cc
+			}
+		}
+		return err
+	}
+}
+
+// WithKnownAuthorityHosts specifies hosts Client shouldn't validate or request metadata for because they're known to the user
+func WithKnownAuthorityHosts(hosts []string) Option {
+	return func(c *Client) error {
+		cp := make([]string, len(hosts))
+		copy(cp, hosts)
+		c.AuthParams.KnownAuthorityHosts = cp
+		return nil
+	}
+}
+
+// WithX5C specifies whether the x5c claim (the public key of the certificate) should be sent to the STS to enable Subject Name Issuer Authentication.
+func WithX5C(sendX5C bool) Option {
+	return func(c *Client) error {
+		c.AuthParams.SendX5C = sendX5C
+		return nil
+	}
+}
+
+func WithRegionDetection(region string) Option {
+	return func(c *Client) error {
+		c.AuthParams.AuthorityInfo.Region = region
+		return nil
+	}
+}
+
+func WithInstanceDiscovery(instanceDiscoveryEnabled bool) Option {
+	return func(c *Client) error {
+		c.AuthParams.AuthorityInfo.ValidateAuthority = instanceDiscoveryEnabled
+		c.AuthParams.AuthorityInfo.InstanceDiscoveryDisabled = !instanceDiscoveryEnabled
+		return nil
+	}
+}
+
+// New is the constructor for Base.
+func New(clientID string, authorityURI string, token *oauth.Client, options ...Option) (Client, error) {
+	// By default, validateAuthority is set to true and instanceDiscoveryDisabled is set to false.
+	authInfo, err := authority.NewInfoFromAuthorityURI(authorityURI, true, false)
+	if err != nil {
+		return Client{}, err
+	}
+	authParams := authority.NewAuthParams(clientID, authInfo)
+	client := Client{ // Note: Hey, don't even THINK about making Base into *Base. See "design notes" in public.go and confidential.go
+		Token:           token,
+		AuthParams:      authParams,
+		cacheAccessorMu: &sync.RWMutex{},
+		manager:         storage.New(token),
+		pmanager:        storage.NewPartitionedManager(token),
+	}
+	for _, o := range options {
+		if err = o(&client); err != nil {
+			break
+		}
+	}
+	return client, err
+}
+
+// AuthCodeURL creates a URL used to acquire an authorization code.
+func (b Client) AuthCodeURL(ctx context.Context, clientID, redirectURI string, scopes []string, authParams authority.AuthParams) (string, error) {
+	endpoints, err := b.Token.ResolveEndpoints(ctx, authParams.AuthorityInfo, "")
+	if err != nil {
+		return "", err
+	}
+
+	baseURL, err := url.Parse(endpoints.AuthorizationEndpoint)
+	if err != nil {
+		return "", err
+	}
+
+	claims, err := authParams.MergeCapabilitiesAndClaims()
+	if err != nil {
+		return "", err
+	}
+
+	v := url.Values{}
+	v.Add("client_id", clientID)
+	v.Add("response_type", "code")
+	v.Add("redirect_uri", redirectURI)
+	v.Add("scope", strings.Join(scopes, scopeSeparator))
+	if authParams.State != "" {
+		v.Add("state", authParams.State)
+	}
+	if claims != "" {
+		v.Add("claims", claims)
+	}
+	if authParams.CodeChallenge != "" {
+		v.Add("code_challenge", authParams.CodeChallenge)
+	}
+	if authParams.CodeChallengeMethod != "" {
+		v.Add("code_challenge_method", authParams.CodeChallengeMethod)
+	}
+	if authParams.LoginHint != "" {
+		v.Add("login_hint", authParams.LoginHint)
+	}
+	if authParams.Prompt != "" {
+		v.Add("prompt", authParams.Prompt)
+	}
+	if authParams.DomainHint != "" {
+		v.Add("domain_hint", authParams.DomainHint)
+	}
+	// These were left over from an implementation that didn't use any of them. We may
+	// need to add them later, but as of now they aren't needed.
+	/*
+		if p.ResponseMode != "" {
+			urlParams.Add("response_mode", p.ResponseMode)
+		}
+	*/
+	baseURL.RawQuery = v.Encode()
+	return baseURL.String(), nil
+}
+
+func (b Client) AcquireTokenSilent(ctx context.Context, silent AcquireTokenSilentParameters) (AuthResult, error) {
+	ar := AuthResult{}
+	// when tenant == "", the caller didn't specify a tenant and WithTenant will choose the client's configured tenant
+	tenant := silent.TenantID
+	authParams, err := b.AuthParams.WithTenant(tenant)
+	if err != nil {
+		return ar, err
+	}
+	authParams.Scopes = silent.Scopes
+	authParams.HomeAccountID = silent.Account.HomeAccountID
+	authParams.AuthorizationType = silent.AuthorizationType
+	authParams.Claims = silent.Claims
+	authParams.UserAssertion = silent.UserAssertion
+	if silent.AuthnScheme != nil {
+		authParams.AuthnScheme = silent.AuthnScheme
+	}
+
+	m := b.pmanager
+	if authParams.AuthorizationType != authority.ATOnBehalfOf {
+		authParams.AuthorizationType = authority.ATRefreshToken
+		m = b.manager
+	}
+	if b.cacheAccessor != nil {
+		key := authParams.CacheKey(silent.IsAppCache)
+		b.cacheAccessorMu.RLock()
+		err = b.cacheAccessor.Replace(ctx, m, cache.ReplaceHints{PartitionKey: key})
+		b.cacheAccessorMu.RUnlock()
+	}
+	if err != nil {
+		return ar, err
+	}
+	storageTokenResponse, err := m.Read(ctx, authParams)
+	if err != nil {
+		return ar, err
+	}
+
+	// ignore cached access tokens when given claims
+	if silent.Claims == "" {
+		ar, err = AuthResultFromStorage(storageTokenResponse)
+		if err == nil {
+			ar.AccessToken, err = authParams.AuthnScheme.FormatAccessToken(ar.AccessToken)
+			return ar, err
+		}
+	}
+
+	// redeem a cached refresh token, if available
+	if reflect.ValueOf(storageTokenResponse.RefreshToken).IsZero() {
+		return ar, errors.New("no token found")
+	}
+	var cc *accesstokens.Credential
+	if silent.RequestType == accesstokens.ATConfidential {
+		cc = silent.Credential
+	}
+	token, err := b.Token.Refresh(ctx, silent.RequestType, authParams, cc, storageTokenResponse.RefreshToken)
+	if err != nil {
+		return ar, err
+	}
+	return b.AuthResultFromToken(ctx, authParams, token, true)
+}
+
+func (b Client) AcquireTokenByAuthCode(ctx context.Context, authCodeParams AcquireTokenAuthCodeParameters) (AuthResult, error) {
+	authParams, err := b.AuthParams.WithTenant(authCodeParams.TenantID)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	authParams.Claims = authCodeParams.Claims
+	authParams.Scopes = authCodeParams.Scopes
+	authParams.Redirecturi = authCodeParams.RedirectURI
+	authParams.AuthorizationType = authority.ATAuthCode
+
+	var cc *accesstokens.Credential
+	if authCodeParams.AppType == accesstokens.ATConfidential {
+		cc = authCodeParams.Credential
+		authParams.IsConfidentialClient = true
+	}
+
+	req, err := accesstokens.NewCodeChallengeRequest(authParams, authCodeParams.AppType, cc, authCodeParams.Code, authCodeParams.Challenge)
+	if err != nil {
+		return AuthResult{}, err
+	}
+
+	token, err := b.Token.AuthCode(ctx, req)
+	if err != nil {
+		return AuthResult{}, err
+	}
+
+	return b.AuthResultFromToken(ctx, authParams, token, true)
+}
+
+// AcquireTokenOnBehalfOf acquires a security token for an app using middle tier apps access token.
+func (b Client) AcquireTokenOnBehalfOf(ctx context.Context, onBehalfOfParams AcquireTokenOnBehalfOfParameters) (AuthResult, error) {
+	var ar AuthResult
+	silentParameters := AcquireTokenSilentParameters{
+		Scopes:            onBehalfOfParams.Scopes,
+		RequestType:       accesstokens.ATConfidential,
+		Credential:        onBehalfOfParams.Credential,
+		UserAssertion:     onBehalfOfParams.UserAssertion,
+		AuthorizationType: authority.ATOnBehalfOf,
+		TenantID:          onBehalfOfParams.TenantID,
+		Claims:            onBehalfOfParams.Claims,
+	}
+	ar, err := b.AcquireTokenSilent(ctx, silentParameters)
+	if err == nil {
+		return ar, err
+	}
+	authParams, err := b.AuthParams.WithTenant(onBehalfOfParams.TenantID)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	authParams.AuthorizationType = authority.ATOnBehalfOf
+	authParams.Claims = onBehalfOfParams.Claims
+	authParams.Scopes = onBehalfOfParams.Scopes
+	authParams.UserAssertion = onBehalfOfParams.UserAssertion
+	token, err := b.Token.OnBehalfOf(ctx, authParams, onBehalfOfParams.Credential)
+	if err == nil {
+		ar, err = b.AuthResultFromToken(ctx, authParams, token, true)
+	}
+	return ar, err
+}
+
+func (b Client) AuthResultFromToken(ctx context.Context, authParams authority.AuthParams, token accesstokens.TokenResponse, cacheWrite bool) (AuthResult, error) {
+	if !cacheWrite {
+		return NewAuthResult(token, shared.Account{})
+	}
+	var m manager = b.manager
+	if authParams.AuthorizationType == authority.ATOnBehalfOf {
+		m = b.pmanager
+	}
+	key := token.CacheKey(authParams)
+	if b.cacheAccessor != nil {
+		b.cacheAccessorMu.Lock()
+		defer b.cacheAccessorMu.Unlock()
+		err := b.cacheAccessor.Replace(ctx, m, cache.ReplaceHints{PartitionKey: key})
+		if err != nil {
+			return AuthResult{}, err
+		}
+	}
+	account, err := m.Write(authParams, token)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	ar, err := NewAuthResult(token, account)
+	if err == nil && b.cacheAccessor != nil {
+		err = b.cacheAccessor.Export(ctx, b.manager, cache.ExportHints{PartitionKey: key})
+	}
+	if err != nil {
+		return AuthResult{}, err
+	}
+
+	ar.AccessToken, err = authParams.AuthnScheme.FormatAccessToken(ar.AccessToken)
+	return ar, err
+}
+
+func (b Client) AllAccounts(ctx context.Context) ([]shared.Account, error) {
+	if b.cacheAccessor != nil {
+		b.cacheAccessorMu.RLock()
+		defer b.cacheAccessorMu.RUnlock()
+		key := b.AuthParams.CacheKey(false)
+		err := b.cacheAccessor.Replace(ctx, b.manager, cache.ReplaceHints{PartitionKey: key})
+		if err != nil {
+			return nil, err
+		}
+	}
+	return b.manager.AllAccounts(), nil
+}
+
+func (b Client) Account(ctx context.Context, homeAccountID string) (shared.Account, error) {
+	if b.cacheAccessor != nil {
+		b.cacheAccessorMu.RLock()
+		defer b.cacheAccessorMu.RUnlock()
+		authParams := b.AuthParams // This is a copy, as we don't have a pointer receiver and .AuthParams is not a pointer.
+		authParams.AuthorizationType = authority.AccountByID
+		authParams.HomeAccountID = homeAccountID
+		key := b.AuthParams.CacheKey(false)
+		err := b.cacheAccessor.Replace(ctx, b.manager, cache.ReplaceHints{PartitionKey: key})
+		if err != nil {
+			return shared.Account{}, err
+		}
+	}
+	return b.manager.Account(homeAccountID), nil
+}
+
+// RemoveAccount removes all the ATs, RTs and IDTs from the cache associated with this account.
+func (b Client) RemoveAccount(ctx context.Context, account shared.Account) error {
+	if b.cacheAccessor == nil {
+		b.manager.RemoveAccount(account, b.AuthParams.ClientID)
+		return nil
+	}
+	b.cacheAccessorMu.Lock()
+	defer b.cacheAccessorMu.Unlock()
+	key := b.AuthParams.CacheKey(false)
+	err := b.cacheAccessor.Replace(ctx, b.manager, cache.ReplaceHints{PartitionKey: key})
+	if err != nil {
+		return err
+	}
+	b.manager.RemoveAccount(account, b.AuthParams.ClientID)
+	return b.cacheAccessor.Export(ctx, b.manager, cache.ExportHints{PartitionKey: key})
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/items.go 🔗

@@ -0,0 +1,213 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package storage
+
+import (
+	"errors"
+	"fmt"
+	"reflect"
+	"strings"
+	"time"
+
+	internalTime "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared"
+)
+
+// Contract is the JSON structure that is written to any storage medium when serializing
+// the internal cache. This design is shared between MSAL versions in many languages.
+// This cannot be changed without design that includes other SDKs.
+type Contract struct {
+	AccessTokens  map[string]AccessToken               `json:"AccessToken,omitempty"`
+	RefreshTokens map[string]accesstokens.RefreshToken `json:"RefreshToken,omitempty"`
+	IDTokens      map[string]IDToken                   `json:"IdToken,omitempty"`
+	Accounts      map[string]shared.Account            `json:"Account,omitempty"`
+	AppMetaData   map[string]AppMetaData               `json:"AppMetadata,omitempty"`
+
+	AdditionalFields map[string]interface{}
+}
+
+// InMemoryContract is an in-memory version of Contract in which access tokens,
+// refresh tokens, ID tokens and accounts are partitioned by a partition key.
+// It is used by PartitionedManager.
+type InMemoryContract struct {
+	AccessTokensPartition  map[string]map[string]AccessToken
+	RefreshTokensPartition map[string]map[string]accesstokens.RefreshToken
+	IDTokensPartition      map[string]map[string]IDToken
+	AccountsPartition      map[string]map[string]shared.Account
+	AppMetaData            map[string]AppMetaData
+}
+
+// NewInMemoryContract is the constructor for InMemoryContract.
+func NewInMemoryContract() *InMemoryContract {
+	return &InMemoryContract{
+		AccessTokensPartition:  map[string]map[string]AccessToken{},
+		RefreshTokensPartition: map[string]map[string]accesstokens.RefreshToken{},
+		IDTokensPartition:      map[string]map[string]IDToken{},
+		AccountsPartition:      map[string]map[string]shared.Account{},
+		AppMetaData:            map[string]AppMetaData{},
+	}
+}
+
+// NewContract is the constructor for Contract.
+func NewContract() *Contract {
+	return &Contract{
+		AccessTokens:     map[string]AccessToken{},
+		RefreshTokens:    map[string]accesstokens.RefreshToken{},
+		IDTokens:         map[string]IDToken{},
+		Accounts:         map[string]shared.Account{},
+		AppMetaData:      map[string]AppMetaData{},
+		AdditionalFields: map[string]interface{}{},
+	}
+}
+
+// AccessToken is the JSON representation of an MSAL access token for encoding to storage.
+type AccessToken struct {
+	HomeAccountID     string            `json:"home_account_id,omitempty"`
+	Environment       string            `json:"environment,omitempty"`
+	Realm             string            `json:"realm,omitempty"`
+	CredentialType    string            `json:"credential_type,omitempty"`
+	ClientID          string            `json:"client_id,omitempty"`
+	Secret            string            `json:"secret,omitempty"`
+	Scopes            string            `json:"target,omitempty"`
+	ExpiresOn         internalTime.Unix `json:"expires_on,omitempty"`
+	ExtendedExpiresOn internalTime.Unix `json:"extended_expires_on,omitempty"`
+	CachedAt          internalTime.Unix `json:"cached_at,omitempty"`
+	UserAssertionHash string            `json:"user_assertion_hash,omitempty"`
+	TokenType         string            `json:"token_type,omitempty"`
+	AuthnSchemeKeyID  string            `json:"keyid,omitempty"`
+
+	AdditionalFields map[string]interface{}
+}
+
+// NewAccessToken is the constructor for AccessToken.
+func NewAccessToken(homeID, env, realm, clientID string, cachedAt, expiresOn, extendedExpiresOn time.Time, scopes, token, tokenType, authnSchemeKeyID string) AccessToken {
+	return AccessToken{
+		HomeAccountID:     homeID,
+		Environment:       env,
+		Realm:             realm,
+		CredentialType:    "AccessToken",
+		ClientID:          clientID,
+		Secret:            token,
+		Scopes:            scopes,
+		CachedAt:          internalTime.Unix{T: cachedAt.UTC()},
+		ExpiresOn:         internalTime.Unix{T: expiresOn.UTC()},
+		ExtendedExpiresOn: internalTime.Unix{T: extendedExpiresOn.UTC()},
+		TokenType:         tokenType,
+		AuthnSchemeKeyID:  authnSchemeKeyID,
+	}
+}
+
+// Key outputs the key that can be used to uniquely look up this entry in a map.
+func (a AccessToken) Key() string {
+	key := strings.Join(
+		[]string{a.HomeAccountID, a.Environment, a.CredentialType, a.ClientID, a.Realm, a.Scopes},
+		shared.CacheKeySeparator,
+	)
+	// Add the token type to the key for new access token types. Skip it for the bearer
+	// token type to preserve forward and backward compat between a common cache and MSAL clients.
+	if !strings.EqualFold(a.TokenType, authority.AccessTokenTypeBearer) {
+		key = strings.Join([]string{key, a.TokenType}, shared.CacheKeySeparator)
+	}
+	return strings.ToLower(key)
+}
+
+// FakeValidate enables tests to fake access token validation
+var FakeValidate func(AccessToken) error
+
+// Validate validates that this AccessToken can be used.
+func (a AccessToken) Validate() error {
+	if FakeValidate != nil {
+		return FakeValidate(a)
+	}
+	if a.CachedAt.T.After(time.Now()) {
+		return errors.New("access token isn't valid, it was cached at a future time")
+	}
+	if a.ExpiresOn.T.Before(time.Now().Add(5 * time.Minute)) {
+		return fmt.Errorf("access token is expired")
+	}
+	if a.CachedAt.T.IsZero() {
+		return fmt.Errorf("access token does not have CachedAt set")
+	}
+	return nil
+}
+
+// IDToken is the JSON representation of an MSAL id token for encoding to storage.
+type IDToken struct {
+	HomeAccountID     string `json:"home_account_id,omitempty"`
+	Environment       string `json:"environment,omitempty"`
+	Realm             string `json:"realm,omitempty"`
+	CredentialType    string `json:"credential_type,omitempty"`
+	ClientID          string `json:"client_id,omitempty"`
+	Secret            string `json:"secret,omitempty"`
+	UserAssertionHash string `json:"user_assertion_hash,omitempty"`
+	AdditionalFields  map[string]interface{}
+}
+
+// IsZero determines if IDToken is the zero value.
+func (i IDToken) IsZero() bool {
+	v := reflect.ValueOf(i)
+	for i := 0; i < v.NumField(); i++ {
+		field := v.Field(i)
+		if !field.IsZero() {
+			switch field.Kind() {
+			case reflect.Map, reflect.Slice:
+				if field.Len() == 0 {
+					continue
+				}
+			}
+			return false
+		}
+	}
+	return true
+}
+
+// NewIDToken is the constructor for IDToken.
+func NewIDToken(homeID, env, realm, clientID, idToken string) IDToken {
+	return IDToken{
+		HomeAccountID:  homeID,
+		Environment:    env,
+		Realm:          realm,
+		CredentialType: "IDToken",
+		ClientID:       clientID,
+		Secret:         idToken,
+	}
+}
+
+// Key outputs the key that can be used to uniquely look up this entry in a map.
+func (id IDToken) Key() string {
+	key := strings.Join(
+		[]string{id.HomeAccountID, id.Environment, id.CredentialType, id.ClientID, id.Realm},
+		shared.CacheKeySeparator,
+	)
+	return strings.ToLower(key)
+}
+
+// AppMetaData is the JSON representation of application metadata for encoding to storage.
+type AppMetaData struct {
+	FamilyID    string `json:"family_id,omitempty"`
+	ClientID    string `json:"client_id,omitempty"`
+	Environment string `json:"environment,omitempty"`
+
+	AdditionalFields map[string]interface{}
+}
+
+// NewAppMetaData is the constructor for AppMetaData.
+func NewAppMetaData(familyID, clientID, environment string) AppMetaData {
+	return AppMetaData{
+		FamilyID:    familyID,
+		ClientID:    clientID,
+		Environment: environment,
+	}
+}
+
+// Key outputs the key that can be used to uniquely look up this entry in a map.
+func (a AppMetaData) Key() string {
+	key := strings.Join(
+		[]string{"AppMetaData", a.Environment, a.ClientID},
+		shared.CacheKeySeparator,
+	)
+	return strings.ToLower(key)
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/partitioned_storage.go 🔗

@@ -0,0 +1,442 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package storage
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared"
+)
+
+// PartitionedManager is a partitioned in-memory cache of access tokens, accounts and metadata.
+type PartitionedManager struct {
+	contract   *InMemoryContract
+	contractMu sync.RWMutex
+	requests   aadInstanceDiscoveryer // *oauth.Client
+
+	aadCacheMu sync.RWMutex
+	aadCache   map[string]authority.InstanceDiscoveryMetadata
+}
+
+// NewPartitionedManager is the constructor for PartitionedManager.
+func NewPartitionedManager(requests *oauth.Client) *PartitionedManager {
+	m := &PartitionedManager{requests: requests, aadCache: make(map[string]authority.InstanceDiscoveryMetadata)}
+	m.contract = NewInMemoryContract()
+	return m
+}
+
+// Read reads a storage token from the cache if it exists.
+func (m *PartitionedManager) Read(ctx context.Context, authParameters authority.AuthParams) (TokenResponse, error) {
+	tr := TokenResponse{}
+	realm := authParameters.AuthorityInfo.Tenant
+	clientID := authParameters.ClientID
+	scopes := authParameters.Scopes
+	authnSchemeKeyID := authParameters.AuthnScheme.KeyID()
+	tokenType := authParameters.AuthnScheme.AccessTokenType()
+
+	// fetch metadata if instanceDiscovery is enabled
+	aliases := []string{authParameters.AuthorityInfo.Host}
+	if !authParameters.AuthorityInfo.InstanceDiscoveryDisabled {
+		metadata, err := m.getMetadataEntry(ctx, authParameters.AuthorityInfo)
+		if err != nil {
+			return TokenResponse{}, err
+		}
+		aliases = metadata.Aliases
+	}
+
+	userAssertionHash := authParameters.AssertionHash()
+	partitionKeyFromRequest := userAssertionHash
+
+	// errors returned by read* methods indicate a cache miss and are therefore non-fatal. We continue populating
+	// TokenResponse fields so that e.g. lack of an ID token doesn't prevent the caller from receiving a refresh token.
+	accessToken, err := m.readAccessToken(aliases, realm, clientID, userAssertionHash, scopes, partitionKeyFromRequest, tokenType, authnSchemeKeyID)
+	if err == nil {
+		tr.AccessToken = accessToken
+	}
+	idToken, err := m.readIDToken(aliases, realm, clientID, userAssertionHash, getPartitionKeyIDTokenRead(accessToken))
+	if err == nil {
+		tr.IDToken = idToken
+	}
+
+	if appMetadata, err := m.readAppMetaData(aliases, clientID); err == nil {
+		// we need the family ID to identify the correct refresh token, if any
+		familyID := appMetadata.FamilyID
+		refreshToken, err := m.readRefreshToken(aliases, familyID, clientID, userAssertionHash, partitionKeyFromRequest)
+		if err == nil {
+			tr.RefreshToken = refreshToken
+		}
+	}
+
+	account, err := m.readAccount(aliases, realm, userAssertionHash, idToken.HomeAccountID)
+	if err == nil {
+		tr.Account = account
+	}
+	return tr, nil
+}
+
+// Write writes a token response to the cache and returns the account information the token is stored with.
+func (m *PartitionedManager) Write(authParameters authority.AuthParams, tokenResponse accesstokens.TokenResponse) (shared.Account, error) {
+	authParameters.HomeAccountID = tokenResponse.HomeAccountID()
+	homeAccountID := authParameters.HomeAccountID
+	environment := authParameters.AuthorityInfo.Host
+	realm := authParameters.AuthorityInfo.Tenant
+	clientID := authParameters.ClientID
+	target := strings.Join(tokenResponse.GrantedScopes.Slice, scopeSeparator)
+	userAssertionHash := authParameters.AssertionHash()
+	cachedAt := time.Now()
+	authnSchemeKeyID := authParameters.AuthnScheme.KeyID()
+	var account shared.Account
+
+	if len(tokenResponse.RefreshToken) > 0 {
+		refreshToken := accesstokens.NewRefreshToken(homeAccountID, environment, clientID, tokenResponse.RefreshToken, tokenResponse.FamilyID)
+		if authParameters.AuthorizationType == authority.ATOnBehalfOf {
+			refreshToken.UserAssertionHash = userAssertionHash
+		}
+		if err := m.writeRefreshToken(refreshToken, getPartitionKeyRefreshToken(refreshToken)); err != nil {
+			return account, err
+		}
+	}
+
+	if len(tokenResponse.AccessToken) > 0 {
+		accessToken := NewAccessToken(
+			homeAccountID,
+			environment,
+			realm,
+			clientID,
+			cachedAt,
+			tokenResponse.ExpiresOn.T,
+			tokenResponse.ExtExpiresOn.T,
+			target,
+			tokenResponse.AccessToken,
+			tokenResponse.TokenType,
+			authnSchemeKeyID,
+		)
+		if authParameters.AuthorizationType == authority.ATOnBehalfOf {
+			accessToken.UserAssertionHash = userAssertionHash
+		}
+
+		// Since we have a valid access token, cache it before moving on.
+		if err := accessToken.Validate(); err == nil {
+			if err := m.writeAccessToken(accessToken, getPartitionKeyAccessToken(accessToken)); err != nil {
+				return account, err
+			}
+		} else {
+			return shared.Account{}, err
+		}
+	}
+
+	idTokenJwt := tokenResponse.IDToken
+	if !idTokenJwt.IsZero() {
+		idToken := NewIDToken(homeAccountID, environment, realm, clientID, idTokenJwt.RawToken)
+		if authParameters.AuthorizationType == authority.ATOnBehalfOf {
+			idToken.UserAssertionHash = userAssertionHash
+		}
+		if err := m.writeIDToken(idToken, getPartitionKeyIDToken(idToken)); err != nil {
+			return shared.Account{}, err
+		}
+
+		localAccountID := idTokenJwt.LocalAccountID()
+		authorityType := authParameters.AuthorityInfo.AuthorityType
+
+		preferredUsername := idTokenJwt.UPN
+		if idTokenJwt.PreferredUsername != "" {
+			preferredUsername = idTokenJwt.PreferredUsername
+		}
+
+		account = shared.NewAccount(
+			homeAccountID,
+			environment,
+			realm,
+			localAccountID,
+			authorityType,
+			preferredUsername,
+		)
+		if authParameters.AuthorizationType == authority.ATOnBehalfOf {
+			account.UserAssertionHash = userAssertionHash
+		}
+		if err := m.writeAccount(account, getPartitionKeyAccount(account)); err != nil {
+			return shared.Account{}, err
+		}
+	}
+
+	appMetaData := NewAppMetaData(tokenResponse.FamilyID, clientID, environment)
+
+	if err := m.writeAppMetaData(appMetaData); err != nil {
+		return shared.Account{}, err
+	}
+	return account, nil
+}
+
+func (m *PartitionedManager) getMetadataEntry(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryMetadata, error) {
+	md, err := m.aadMetadataFromCache(ctx, authorityInfo)
+	if err != nil {
+		// not in the cache, retrieve it
+		md, err = m.aadMetadata(ctx, authorityInfo)
+	}
+	return md, err
+}
+
+func (m *PartitionedManager) aadMetadataFromCache(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryMetadata, error) {
+	m.aadCacheMu.RLock()
+	defer m.aadCacheMu.RUnlock()
+	metadata, ok := m.aadCache[authorityInfo.Host]
+	if ok {
+		return metadata, nil
+	}
+	return metadata, errors.New("not found")
+}
+
+func (m *PartitionedManager) aadMetadata(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryMetadata, error) {
+	discoveryResponse, err := m.requests.AADInstanceDiscovery(ctx, authorityInfo)
+	if err != nil {
+		return authority.InstanceDiscoveryMetadata{}, err
+	}
+
+	m.aadCacheMu.Lock()
+	defer m.aadCacheMu.Unlock()
+
+	for _, metadataEntry := range discoveryResponse.Metadata {
+		for _, aliasedAuthority := range metadataEntry.Aliases {
+			m.aadCache[aliasedAuthority] = metadataEntry
+		}
+	}
+	if _, ok := m.aadCache[authorityInfo.Host]; !ok {
+		m.aadCache[authorityInfo.Host] = authority.InstanceDiscoveryMetadata{
+			PreferredNetwork: authorityInfo.Host,
+			PreferredCache:   authorityInfo.Host,
+		}
+	}
+	return m.aadCache[authorityInfo.Host], nil
+}
+
+func (m *PartitionedManager) readAccessToken(envAliases []string, realm, clientID, userAssertionHash string, scopes []string, partitionKey, tokenType, authnSchemeKeyID string) (AccessToken, error) {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+	if accessTokens, ok := m.contract.AccessTokensPartition[partitionKey]; ok {
+		// TODO: linear search (over a map no less) is slow for a large number (thousands) of tokens.
+		// this shows up as the dominating node in a profile. for real-world scenarios this likely isn't
+		// an issue, however if it does become a problem then we know where to look.
+		for _, at := range accessTokens {
+			if at.Realm == realm && at.ClientID == clientID && at.UserAssertionHash == userAssertionHash {
+				if at.TokenType == tokenType && at.AuthnSchemeKeyID == authnSchemeKeyID {
+					if checkAlias(at.Environment, envAliases) {
+						if isMatchingScopes(scopes, at.Scopes) {
+							return at, nil
+						}
+					}
+				}
+			}
+		}
+	}
+	return AccessToken{}, fmt.Errorf("access token not found")
+}
+
+func (m *PartitionedManager) writeAccessToken(accessToken AccessToken, partitionKey string) error {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	key := accessToken.Key()
+	if m.contract.AccessTokensPartition[partitionKey] == nil {
+		m.contract.AccessTokensPartition[partitionKey] = make(map[string]AccessToken)
+	}
+	m.contract.AccessTokensPartition[partitionKey][key] = accessToken
+	return nil
+}
+
+func matchFamilyRefreshTokenObo(rt accesstokens.RefreshToken, userAssertionHash string, envAliases []string) bool {
+	return rt.UserAssertionHash == userAssertionHash && checkAlias(rt.Environment, envAliases) && rt.FamilyID != ""
+}
+
+func matchClientIDRefreshTokenObo(rt accesstokens.RefreshToken, userAssertionHash string, envAliases []string, clientID string) bool {
+	return rt.UserAssertionHash == userAssertionHash && checkAlias(rt.Environment, envAliases) && rt.ClientID == clientID
+}
+
+func (m *PartitionedManager) readRefreshToken(envAliases []string, familyID, clientID, userAssertionHash, partitionKey string) (accesstokens.RefreshToken, error) {
+	byFamily := func(rt accesstokens.RefreshToken) bool {
+		return matchFamilyRefreshTokenObo(rt, userAssertionHash, envAliases)
+	}
+	byClient := func(rt accesstokens.RefreshToken) bool {
+		return matchClientIDRefreshTokenObo(rt, userAssertionHash, envAliases, clientID)
+	}
+
+	var matchers []func(rt accesstokens.RefreshToken) bool
+	if familyID == "" {
+		matchers = []func(rt accesstokens.RefreshToken) bool{
+			byClient, byFamily,
+		}
+	} else {
+		matchers = []func(rt accesstokens.RefreshToken) bool{
+			byFamily, byClient,
+		}
+	}
+
+	// TODO(keegan): All the tests here pass, but Bogdan says this is
+	// more complicated.  I'm opening an issue for this to have him
+	// review the tests and suggest tests that would break this so
+	// we can rewrite against good tests. His comments follow:
+	// The algorithm is a bit more complex than this, I assume there are some tests covering everything. I would keep the order as is.
+	// The algorithm is:
+	// If application is NOT part of the family, search by client_ID
+	// If app is part of the family or if we DO NOT KNOW if it's part of the family, search by family ID, then by client_id (we will know if an app is part of the family after the first token response).
+	// https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/311fe8b16e7c293462806f397e189a6aa1159769/src/client/Microsoft.Identity.Client/Internal/Requests/Silent/CacheSilentStrategy.cs#L95
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+	for _, matcher := range matchers {
+		for _, rt := range m.contract.RefreshTokensPartition[partitionKey] {
+			if matcher(rt) {
+				return rt, nil
+			}
+		}
+	}
+
+	return accesstokens.RefreshToken{}, fmt.Errorf("refresh token not found")
+}
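The matcher ordering in `readRefreshToken` can be sketched in isolation. The idea, per the comment above: if the app is known not to be part of a token family (`familyID == ""`), match by client ID first; otherwise (part of the family, or membership unknown) try the family refresh token first and fall back to a client-ID match. This is a simplified stand-in, not the library's full predicate logic:

```go
package main

import "fmt"

// rt is a trimmed stand-in for accesstokens.RefreshToken.
type rt struct {
	ClientID string
	FamilyID string
}

// orderedMatchers sketches the matcher selection in readRefreshToken.
func orderedMatchers(familyID, clientID string) []func(rt) bool {
	byFamily := func(t rt) bool { return t.FamilyID != "" }
	byClient := func(t rt) bool { return t.ClientID == clientID }
	if familyID == "" {
		// App is not (known to be) part of a family: prefer a client-ID match.
		return []func(rt) bool{byClient, byFamily}
	}
	// App is part of a family: prefer the family refresh token.
	return []func(rt) bool{byFamily, byClient}
}

// firstMatch returns the first token accepted by the ordered matchers.
func firstMatch(tokens []rt, matchers []func(rt) bool) (rt, bool) {
	for _, match := range matchers {
		for _, t := range tokens {
			if match(t) {
				return t, true
			}
		}
	}
	return rt{}, false
}

func main() {
	tokens := []rt{{ClientID: "app-1"}, {ClientID: "other", FamilyID: "1"}}
	got, _ := firstMatch(tokens, orderedMatchers("", "app-1"))
	fmt.Println(got.ClientID) // prints "app-1": client-ID match wins when family membership is unknown
}
```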
+
+func (m *PartitionedManager) writeRefreshToken(refreshToken accesstokens.RefreshToken, partitionKey string) error {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	key := refreshToken.Key()
+	if m.contract.RefreshTokensPartition[partitionKey] == nil {
+		m.contract.RefreshTokensPartition[partitionKey] = make(map[string]accesstokens.RefreshToken)
+	}
+	m.contract.RefreshTokensPartition[partitionKey][key] = refreshToken
+	return nil
+}
+
+func (m *PartitionedManager) readIDToken(envAliases []string, realm, clientID, userAssertionHash, partitionKey string) (IDToken, error) {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+	for _, idt := range m.contract.IDTokensPartition[partitionKey] {
+		if idt.Realm == realm && idt.ClientID == clientID && idt.UserAssertionHash == userAssertionHash {
+			if checkAlias(idt.Environment, envAliases) {
+				return idt, nil
+			}
+		}
+	}
+	return IDToken{}, fmt.Errorf("token not found")
+}
+
+func (m *PartitionedManager) writeIDToken(idToken IDToken, partitionKey string) error {
+	key := idToken.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	if m.contract.IDTokensPartition[partitionKey] == nil {
+		m.contract.IDTokensPartition[partitionKey] = make(map[string]IDToken)
+	}
+	m.contract.IDTokensPartition[partitionKey][key] = idToken
+	return nil
+}
+
+func (m *PartitionedManager) readAccount(envAliases []string, realm, userAssertionHash, partitionKey string) (shared.Account, error) {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+
+	// You might ask why, if cache.Accounts is a map, we loop through all of these instead of using a key.
+	// We only use a map because the storage contract shared between all language implementations says use a map.
+	// We can't change that. The other reason is that the keys are built using a specific "env", but here we allow
+	// a match in multiple envs (envAlias). That means we would either need to hash each possible key and do the
+	// lookup or just scan statically. Since the design is to have a storage.Manager per user, the number of keys
+	// stored is really low (say 2). Each hash is more expensive than the entire iteration.
+	for _, acc := range m.contract.AccountsPartition[partitionKey] {
+		if checkAlias(acc.Environment, envAliases) && acc.UserAssertionHash == userAssertionHash && acc.Realm == realm {
+			return acc, nil
+		}
+	}
+	return shared.Account{}, fmt.Errorf("account not found")
+}
+
+func (m *PartitionedManager) writeAccount(account shared.Account, partitionKey string) error {
+	key := account.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	if m.contract.AccountsPartition[partitionKey] == nil {
+		m.contract.AccountsPartition[partitionKey] = make(map[string]shared.Account)
+	}
+	m.contract.AccountsPartition[partitionKey][key] = account
+	return nil
+}
+
+func (m *PartitionedManager) readAppMetaData(envAliases []string, clientID string) (AppMetaData, error) {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+
+	for _, app := range m.contract.AppMetaData {
+		if checkAlias(app.Environment, envAliases) && app.ClientID == clientID {
+			return app, nil
+		}
+	}
+	return AppMetaData{}, fmt.Errorf("not found")
+}
+
+func (m *PartitionedManager) writeAppMetaData(appMetaData AppMetaData) error {
+	key := appMetaData.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract.AppMetaData[key] = appMetaData
+	return nil
+}
+
+// update updates the internal cache object. This is for use in tests, other uses are not
+// supported.
+func (m *PartitionedManager) update(cache *InMemoryContract) {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract = cache
+}
+
+// Marshal implements cache.Marshaler.
+func (m *PartitionedManager) Marshal() ([]byte, error) {
+	return json.Marshal(m.contract)
+}
+
+// Unmarshal implements cache.Unmarshaler.
+func (m *PartitionedManager) Unmarshal(b []byte) error {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+
+	contract := NewInMemoryContract()
+
+	err := json.Unmarshal(b, contract)
+	if err != nil {
+		return err
+	}
+
+	m.contract = contract
+
+	return nil
+}
+
+func getPartitionKeyAccessToken(item AccessToken) string {
+	if item.UserAssertionHash != "" {
+		return item.UserAssertionHash
+	}
+	return item.HomeAccountID
+}
+
+func getPartitionKeyRefreshToken(item accesstokens.RefreshToken) string {
+	if item.UserAssertionHash != "" {
+		return item.UserAssertionHash
+	}
+	return item.HomeAccountID
+}
+
+func getPartitionKeyIDToken(item IDToken) string {
+	return item.HomeAccountID
+}
+
+func getPartitionKeyAccount(item shared.Account) string {
+	return item.HomeAccountID
+}
+
+func getPartitionKeyIDTokenRead(item AccessToken) string {
+	return item.HomeAccountID
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/storage.go 🔗

@@ -0,0 +1,583 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package storage holds all cached token information for MSAL. This storage can be
+// augmented with third-party extensions to provide persistent storage. In that case,
+// reads and writes in upper packages will call Marshal() to take the entire in-memory
+// representation and write it to storage and Unmarshal() to update the entire in-memory
+// storage with what was in the persistent storage.  The persistent storage can only be
+// accessed in this way because multiple MSAL clients written in multiple languages can
+// access the same storage and must adhere to the same method that was defined
+// previously.
+package storage
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared"
+)
+
+// aadInstanceDiscoveryer allows faking in tests.
+// It is implemented in production by ops/authority.Client
+type aadInstanceDiscoveryer interface {
+	AADInstanceDiscovery(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryResponse, error)
+}
+
+// TokenResponse mimics a token response that was pulled from the cache.
+type TokenResponse struct {
+	RefreshToken accesstokens.RefreshToken
+	IDToken      IDToken // *Credential
+	AccessToken  AccessToken
+	Account      shared.Account
+}
+
+// Manager is an in-memory cache of access tokens, accounts and metadata. This data is
+// updated on read/write calls. Unmarshal() replaces all data stored here with whatever
+// was given to it on each call.
+type Manager struct {
+	contract   *Contract
+	contractMu sync.RWMutex
+	requests   aadInstanceDiscoveryer // *oauth.Client
+
+	aadCacheMu sync.RWMutex
+	aadCache   map[string]authority.InstanceDiscoveryMetadata
+}
+
+// New is the constructor for Manager.
+func New(requests *oauth.Client) *Manager {
+	m := &Manager{requests: requests, aadCache: make(map[string]authority.InstanceDiscoveryMetadata)}
+	m.contract = NewContract()
+	return m
+}
+
+func checkAlias(alias string, aliases []string) bool {
+	for _, v := range aliases {
+		if alias == v {
+			return true
+		}
+	}
+	return false
+}
+
+func isMatchingScopes(scopesOne []string, scopesTwo string) bool {
+	newScopesTwo := strings.Split(scopesTwo, scopeSeparator)
+	scopeCounter := 0
+	for _, scope := range scopesOne {
+		for _, otherScope := range newScopesTwo {
+			if strings.EqualFold(scope, otherScope) {
+				scopeCounter++
+				// count each requested scope at most once, even if the
+				// granted scope string contains duplicates
+				break
+			}
+		}
+	}
+	return scopeCounter == len(scopesOne)
+}
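The scope check above requires every requested scope to appear, case-insensitively, in the cached token's space-separated scope string; extra granted scopes are fine. A self-contained sketch of the same semantics (a simplified rewrite, not the library's exact function):

```go
package main

import (
	"fmt"
	"strings"
)

// hasAllScopes sketches isMatchingScopes: a cached token satisfies a request
// only if every requested scope appears (case-insensitively) among the
// token's granted scopes.
func hasAllScopes(requested []string, granted string) bool {
	grantedScopes := strings.Fields(granted)
	for _, want := range requested {
		found := false
		for _, have := range grantedScopes {
			if strings.EqualFold(want, have) {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(hasAllScopes([]string{"User.Read"}, "user.read mail.read")) // prints "true"
	fmt.Println(hasAllScopes([]string{"Mail.Send"}, "user.read"))           // prints "false"
}
```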
+
+// needsUpgrade returns true if the given key follows the v1.0 schema i.e.,
+// it contains an uppercase character (v1.1+ keys are all lowercase)
+func needsUpgrade(key string) bool {
+	for _, r := range key {
+		if 'A' <= r && r <= 'Z' {
+			return true
+		}
+	}
+	return false
+}
+
+// upgrade a v1.0 cache item by adding a v1.1+ item having the same value and deleting
+// the v1.0 item. Callers must hold an exclusive lock on m.
+func upgrade[T any](m map[string]T, k string) T {
+	v1_1Key := strings.ToLower(k)
+	v, ok := m[k]
+	if !ok {
+		// another goroutine did the upgrade while this one was waiting for the write lock
+		return m[v1_1Key]
+	}
+	if v2, ok := m[v1_1Key]; ok {
+		// cache has an equivalent v1.1+ item, which we prefer because we know it was added
+		// by a newer version of the module and is therefore more likely to remain valid.
+		// The v1.0 item may have expired because only v1.0 or earlier would update it.
+		v = v2
+	} else {
+		// add an equivalent item according to the v1.1 schema
+		m[v1_1Key] = v
+	}
+	delete(m, k)
+	return v
+}
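The `needsUpgrade`/`upgrade` pair migrates cache entries from the v1.0 key schema (keys may contain uppercase) to v1.1+ (all-lowercase keys) lazily, on read. A condensed sketch of the migration, specialized to `string` values for brevity:

```go
package main

import (
	"fmt"
	"strings"
)

// migrateKey sketches the upgrade logic above: re-insert a v1.0 entry under
// its lowercase (v1.1+) key and delete the old key. If a v1.1+ entry already
// exists, it wins, since it was written by a newer module version.
func migrateKey(m map[string]string, k string) string {
	lower := strings.ToLower(k)
	v, ok := m[k]
	if !ok {
		// another caller already migrated this entry
		return m[lower]
	}
	if v2, ok := m[lower]; ok {
		v = v2 // prefer the newer, lowercase-keyed entry
	} else {
		m[lower] = v
	}
	delete(m, k)
	return v
}

func main() {
	cache := map[string]string{"Uid-Env-IDToken": "old"}
	fmt.Println(migrateKey(cache, "Uid-Env-IDToken")) // prints "old"
	fmt.Println(len(cache))                           // prints "1": only the lowercase key remains
}
```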
+
+// Read reads a storage token from the cache if it exists.
+func (m *Manager) Read(ctx context.Context, authParameters authority.AuthParams) (TokenResponse, error) {
+	tr := TokenResponse{}
+	homeAccountID := authParameters.HomeAccountID
+	realm := authParameters.AuthorityInfo.Tenant
+	clientID := authParameters.ClientID
+	scopes := authParameters.Scopes
+	authnSchemeKeyID := authParameters.AuthnScheme.KeyID()
+	tokenType := authParameters.AuthnScheme.AccessTokenType()
+
+	// fetch metadata if instanceDiscovery is enabled
+	aliases := []string{authParameters.AuthorityInfo.Host}
+	if !authParameters.AuthorityInfo.InstanceDiscoveryDisabled {
+		metadata, err := m.getMetadataEntry(ctx, authParameters.AuthorityInfo)
+		if err != nil {
+			return TokenResponse{}, err
+		}
+		aliases = metadata.Aliases
+	}
+
+	accessToken := m.readAccessToken(homeAccountID, aliases, realm, clientID, scopes, tokenType, authnSchemeKeyID)
+	tr.AccessToken = accessToken
+
+	if homeAccountID == "" {
+		// caller didn't specify a user, so there's no reason to search for an ID or refresh token
+		return tr, nil
+	}
+	// errors returned by read* methods indicate a cache miss and are therefore non-fatal. We continue populating
+	// TokenResponse fields so that e.g. lack of an ID token doesn't prevent the caller from receiving a refresh token.
+	idToken, err := m.readIDToken(homeAccountID, aliases, realm, clientID)
+	if err == nil {
+		tr.IDToken = idToken
+	}
+
+	if appMetadata, err := m.readAppMetaData(aliases, clientID); err == nil {
+		// we need the family ID to identify the correct refresh token, if any
+		familyID := appMetadata.FamilyID
+		refreshToken, err := m.readRefreshToken(homeAccountID, aliases, familyID, clientID)
+		if err == nil {
+			tr.RefreshToken = refreshToken
+		}
+	}
+
+	account, err := m.readAccount(homeAccountID, aliases, realm)
+	if err == nil {
+		tr.Account = account
+	}
+	return tr, nil
+}
+
+const scopeSeparator = " "
+
+// Write writes a token response to the cache and returns the account information the token is stored with.
+func (m *Manager) Write(authParameters authority.AuthParams, tokenResponse accesstokens.TokenResponse) (shared.Account, error) {
+	homeAccountID := tokenResponse.HomeAccountID()
+	environment := authParameters.AuthorityInfo.Host
+	realm := authParameters.AuthorityInfo.Tenant
+	clientID := authParameters.ClientID
+	target := strings.Join(tokenResponse.GrantedScopes.Slice, scopeSeparator)
+	cachedAt := time.Now()
+	authnSchemeKeyID := authParameters.AuthnScheme.KeyID()
+
+	var account shared.Account
+
+	if len(tokenResponse.RefreshToken) > 0 {
+		refreshToken := accesstokens.NewRefreshToken(homeAccountID, environment, clientID, tokenResponse.RefreshToken, tokenResponse.FamilyID)
+		if err := m.writeRefreshToken(refreshToken); err != nil {
+			return account, err
+		}
+	}
+
+	if len(tokenResponse.AccessToken) > 0 {
+		accessToken := NewAccessToken(
+			homeAccountID,
+			environment,
+			realm,
+			clientID,
+			cachedAt,
+			tokenResponse.ExpiresOn.T,
+			tokenResponse.ExtExpiresOn.T,
+			target,
+			tokenResponse.AccessToken,
+			tokenResponse.TokenType,
+			authnSchemeKeyID,
+		)
+
+		// Since we have a valid access token, cache it before moving on.
+		if err := accessToken.Validate(); err == nil {
+			if err := m.writeAccessToken(accessToken); err != nil {
+				return account, err
+			}
+		}
+	}
+
+	idTokenJwt := tokenResponse.IDToken
+	if !idTokenJwt.IsZero() {
+		idToken := NewIDToken(homeAccountID, environment, realm, clientID, idTokenJwt.RawToken)
+		if err := m.writeIDToken(idToken); err != nil {
+			return shared.Account{}, err
+		}
+
+		localAccountID := idTokenJwt.LocalAccountID()
+		authorityType := authParameters.AuthorityInfo.AuthorityType
+
+		preferredUsername := idTokenJwt.UPN
+		if idTokenJwt.PreferredUsername != "" {
+			preferredUsername = idTokenJwt.PreferredUsername
+		}
+
+		account = shared.NewAccount(
+			homeAccountID,
+			environment,
+			realm,
+			localAccountID,
+			authorityType,
+			preferredUsername,
+		)
+		if err := m.writeAccount(account); err != nil {
+			return shared.Account{}, err
+		}
+	}
+
+	appMetaData := NewAppMetaData(tokenResponse.FamilyID, clientID, environment)
+
+	if err := m.writeAppMetaData(appMetaData); err != nil {
+		return shared.Account{}, err
+	}
+	return account, nil
+}
+
+func (m *Manager) getMetadataEntry(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryMetadata, error) {
+	md, err := m.aadMetadataFromCache(ctx, authorityInfo)
+	if err != nil {
+		// not in the cache, retrieve it
+		md, err = m.aadMetadata(ctx, authorityInfo)
+	}
+	return md, err
+}
+
+func (m *Manager) aadMetadataFromCache(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryMetadata, error) {
+	m.aadCacheMu.RLock()
+	defer m.aadCacheMu.RUnlock()
+	metadata, ok := m.aadCache[authorityInfo.Host]
+	if ok {
+		return metadata, nil
+	}
+	return metadata, errors.New("not found")
+}
+
+func (m *Manager) aadMetadata(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryMetadata, error) {
+	m.aadCacheMu.Lock()
+	defer m.aadCacheMu.Unlock()
+	discoveryResponse, err := m.requests.AADInstanceDiscovery(ctx, authorityInfo)
+	if err != nil {
+		return authority.InstanceDiscoveryMetadata{}, err
+	}
+
+	for _, metadataEntry := range discoveryResponse.Metadata {
+		for _, aliasedAuthority := range metadataEntry.Aliases {
+			m.aadCache[aliasedAuthority] = metadataEntry
+		}
+	}
+	if _, ok := m.aadCache[authorityInfo.Host]; !ok {
+		m.aadCache[authorityInfo.Host] = authority.InstanceDiscoveryMetadata{
+			PreferredNetwork: authorityInfo.Host,
+			PreferredCache:   authorityInfo.Host,
+		}
+	}
+	return m.aadCache[authorityInfo.Host], nil
+}
+
+func (m *Manager) readAccessToken(homeID string, envAliases []string, realm, clientID string, scopes []string, tokenType, authnSchemeKeyID string) AccessToken {
+	m.contractMu.RLock()
+	// TODO: linear search (over a map no less) is slow for a large number (thousands) of tokens.
+	// this shows up as the dominating node in a profile. for real-world scenarios this likely isn't
+	// an issue, however if it does become a problem then we know where to look.
+	for k, at := range m.contract.AccessTokens {
+		if at.HomeAccountID == homeID && at.Realm == realm && at.ClientID == clientID {
+			if (strings.EqualFold(at.TokenType, tokenType) && at.AuthnSchemeKeyID == authnSchemeKeyID) || (at.TokenType == "" && (tokenType == "" || tokenType == "Bearer")) {
+				if checkAlias(at.Environment, envAliases) && isMatchingScopes(scopes, at.Scopes) {
+					m.contractMu.RUnlock()
+					if needsUpgrade(k) {
+						m.contractMu.Lock()
+						defer m.contractMu.Unlock()
+						at = upgrade(m.contract.AccessTokens, k)
+					}
+					return at
+				}
+			}
+		}
+	}
+	m.contractMu.RUnlock()
+	return AccessToken{}
+}
+
+func (m *Manager) writeAccessToken(accessToken AccessToken) error {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	key := accessToken.Key()
+	m.contract.AccessTokens[key] = accessToken
+	return nil
+}
+
+func (m *Manager) readRefreshToken(homeID string, envAliases []string, familyID, clientID string) (accesstokens.RefreshToken, error) {
+	byFamily := func(rt accesstokens.RefreshToken) bool {
+		return matchFamilyRefreshToken(rt, homeID, envAliases)
+	}
+	byClient := func(rt accesstokens.RefreshToken) bool {
+		return matchClientIDRefreshToken(rt, homeID, envAliases, clientID)
+	}
+
+	var matchers []func(rt accesstokens.RefreshToken) bool
+	if familyID == "" {
+		matchers = []func(rt accesstokens.RefreshToken) bool{
+			byClient, byFamily,
+		}
+	} else {
+		matchers = []func(rt accesstokens.RefreshToken) bool{
+			byFamily, byClient,
+		}
+	}
+
+	// TODO(keegan): All the tests here pass, but Bogdan says this is
+	// more complicated.  I'm opening an issue for this to have him
+	// review the tests and suggest tests that would break this so
+	// we can rewrite against good tests. His comments follow:
+	// The algorithm is a bit more complex than this, I assume there are some tests covering everything. I would keep the order as is.
+	// The algorithm is:
+	// If application is NOT part of the family, search by client_ID
+	// If app is part of the family or if we DO NOT KNOW if it's part of the family, search by family ID, then by client_id (we will know if an app is part of the family after the first token response).
+	// https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/311fe8b16e7c293462806f397e189a6aa1159769/src/client/Microsoft.Identity.Client/Internal/Requests/Silent/CacheSilentStrategy.cs#L95
+	m.contractMu.RLock()
+	for _, matcher := range matchers {
+		for k, rt := range m.contract.RefreshTokens {
+			if matcher(rt) {
+				m.contractMu.RUnlock()
+				if needsUpgrade(k) {
+					m.contractMu.Lock()
+					defer m.contractMu.Unlock()
+					rt = upgrade(m.contract.RefreshTokens, k)
+				}
+				return rt, nil
+			}
+		}
+	}
+
+	m.contractMu.RUnlock()
+	return accesstokens.RefreshToken{}, fmt.Errorf("refresh token not found")
+}
+
+func matchFamilyRefreshToken(rt accesstokens.RefreshToken, homeID string, envAliases []string) bool {
+	return rt.HomeAccountID == homeID && checkAlias(rt.Environment, envAliases) && rt.FamilyID != ""
+}
+
+func matchClientIDRefreshToken(rt accesstokens.RefreshToken, homeID string, envAliases []string, clientID string) bool {
+	return rt.HomeAccountID == homeID && checkAlias(rt.Environment, envAliases) && rt.ClientID == clientID
+}
+
+func (m *Manager) writeRefreshToken(refreshToken accesstokens.RefreshToken) error {
+	key := refreshToken.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract.RefreshTokens[key] = refreshToken
+	return nil
+}
+
+func (m *Manager) readIDToken(homeID string, envAliases []string, realm, clientID string) (IDToken, error) {
+	m.contractMu.RLock()
+	for k, idt := range m.contract.IDTokens {
+		if idt.HomeAccountID == homeID && idt.Realm == realm && idt.ClientID == clientID {
+			if checkAlias(idt.Environment, envAliases) {
+				m.contractMu.RUnlock()
+				if needsUpgrade(k) {
+					m.contractMu.Lock()
+					defer m.contractMu.Unlock()
+					idt = upgrade(m.contract.IDTokens, k)
+				}
+				return idt, nil
+			}
+		}
+	}
+	m.contractMu.RUnlock()
+	return IDToken{}, fmt.Errorf("token not found")
+}
+
+func (m *Manager) writeIDToken(idToken IDToken) error {
+	key := idToken.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract.IDTokens[key] = idToken
+	return nil
+}
+
+func (m *Manager) AllAccounts() []shared.Account {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+
+	var accounts []shared.Account
+	for _, v := range m.contract.Accounts {
+		accounts = append(accounts, v)
+	}
+
+	return accounts
+}
+
+func (m *Manager) Account(homeAccountID string) shared.Account {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+
+	for _, v := range m.contract.Accounts {
+		if v.HomeAccountID == homeAccountID {
+			return v
+		}
+	}
+
+	return shared.Account{}
+}
+
+func (m *Manager) readAccount(homeAccountID string, envAliases []string, realm string) (shared.Account, error) {
+	m.contractMu.RLock()
+
+	// You might ask why, if cache.Accounts is a map, we loop through all of these instead of using a key.
+	// We only use a map because the storage contract shared between all language implementations says use a map.
+	// We can't change that. The other reason is that the keys are built using a specific "env", but here we allow
+	// a match in multiple envs (envAlias). That means we would either need to hash each possible key and do the
+	// lookup or just scan statically. Since the design is to have a storage.Manager per user, the number of keys
+	// stored is really low (say 2). Each hash is more expensive than the entire iteration.
+	for k, acc := range m.contract.Accounts {
+		if acc.HomeAccountID == homeAccountID && checkAlias(acc.Environment, envAliases) && acc.Realm == realm {
+			m.contractMu.RUnlock()
+			if needsUpgrade(k) {
+				m.contractMu.Lock()
+				defer m.contractMu.Unlock()
+				acc = upgrade(m.contract.Accounts, k)
+			}
+			return acc, nil
+		}
+	}
+	m.contractMu.RUnlock()
+	return shared.Account{}, fmt.Errorf("account not found")
+}
+
+func (m *Manager) writeAccount(account shared.Account) error {
+	key := account.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract.Accounts[key] = account
+	return nil
+}
+
+func (m *Manager) readAppMetaData(envAliases []string, clientID string) (AppMetaData, error) {
+	m.contractMu.RLock()
+	for k, app := range m.contract.AppMetaData {
+		if checkAlias(app.Environment, envAliases) && app.ClientID == clientID {
+			m.contractMu.RUnlock()
+			if needsUpgrade(k) {
+				m.contractMu.Lock()
+				defer m.contractMu.Unlock()
+				app = upgrade(m.contract.AppMetaData, k)
+			}
+			return app, nil
+		}
+	}
+	m.contractMu.RUnlock()
+	return AppMetaData{}, fmt.Errorf("not found")
+}
+
+func (m *Manager) writeAppMetaData(AppMetaData AppMetaData) error {
+	key := AppMetaData.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract.AppMetaData[key] = AppMetaData
+	return nil
+}
+
+// RemoveAccount removes all the associated ATs, RTs and IDTs from the cache associated with this account.
+func (m *Manager) RemoveAccount(account shared.Account, clientID string) {
+	m.removeRefreshTokens(account.HomeAccountID, account.Environment, clientID)
+	m.removeAccessTokens(account.HomeAccountID, account.Environment)
+	m.removeIDTokens(account.HomeAccountID, account.Environment)
+	m.removeAccounts(account.HomeAccountID, account.Environment)
+}
+
+func (m *Manager) removeRefreshTokens(homeID string, env string, clientID string) {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	for key, rt := range m.contract.RefreshTokens {
+		// Check for RTs associated with the account.
+		if rt.HomeAccountID == homeID && rt.Environment == env {
+			// Do RT's app ownership check as a precaution, in case family apps
+			// and 3rd-party apps share same token cache, although they should not.
+			if rt.ClientID == clientID || rt.FamilyID != "" {
+				delete(m.contract.RefreshTokens, key)
+			}
+		}
+	}
+}
+
+func (m *Manager) removeAccessTokens(homeID string, env string) {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	for key, at := range m.contract.AccessTokens {
+		// Remove AT's associated with the account
+		if at.HomeAccountID == homeID && at.Environment == env {
+			// # To avoid the complexity of locating sibling family app's AT, we skip AT's app ownership check.
+			// It means ATs for other apps will also be removed, it is OK because:
+			// non-family apps are not supposed to share token cache to begin with;
+			// Even if it happens, we keep other app's RT already, so SSO still works.
+			delete(m.contract.AccessTokens, key)
+		}
+	}
+}
+
+func (m *Manager) removeIDTokens(homeID string, env string) {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	for key, idt := range m.contract.IDTokens {
+		// Remove ID tokens associated with the account.
+		if idt.HomeAccountID == homeID && idt.Environment == env {
+			delete(m.contract.IDTokens, key)
+		}
+	}
+}
+
+func (m *Manager) removeAccounts(homeID string, env string) {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	for key, acc := range m.contract.Accounts {
+		// Remove the specified account.
+		if acc.HomeAccountID == homeID && acc.Environment == env {
+			delete(m.contract.Accounts, key)
+		}
+	}
+}
+
+// update updates the internal cache object. This is for use in tests, other uses are not
+// supported.
+func (m *Manager) update(cache *Contract) {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract = cache
+}
+
+// Marshal implements cache.Marshaler.
+func (m *Manager) Marshal() ([]byte, error) {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+	return json.Marshal(m.contract)
+}
+
+// Unmarshal implements cache.Unmarshaler.
+func (m *Manager) Unmarshal(b []byte) error {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+
+	contract := NewContract()
+
+	err := json.Unmarshal(b, contract)
+	if err != nil {
+		return err
+	}
+
+	m.contract = contract
+
+	return nil
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported/exported.go 🔗

@@ -0,0 +1,34 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package exported contains internal types that are re-exported from a public package.
+package exported
+
+// AssertionRequestOptions has information required to generate a client assertion
+type AssertionRequestOptions struct {
+	// ClientID identifies the application for which an assertion is requested. Used as the assertion's "iss" and "sub" claims.
+	ClientID string
+
+	// TokenEndpoint is the intended token endpoint. Used as the assertion's "aud" claim.
+	TokenEndpoint string
+}
+
+// TokenProviderParameters is the authentication parameters passed to token providers
+type TokenProviderParameters struct {
+	// Claims contains any additional claims requested for the token
+	Claims string
+	// CorrelationID of the authentication request
+	CorrelationID string
+	// Scopes requested for the token
+	Scopes []string
+	// TenantID identifies the tenant in which to authenticate
+	TenantID string
+}
+
+// TokenProviderResult is the authentication result returned by custom token providers
+type TokenProviderResult struct {
+	// AccessToken is the requested token
+	AccessToken string
+	// ExpiresInSeconds is the lifetime of the token in seconds
+	ExpiresInSeconds int
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/design.md 🔗

@@ -0,0 +1,140 @@
+# JSON Package Design
+Author: John Doak(jdoak@microsoft.com)
+
+## Why?
+
+This project needs a special type of marshal/unmarshal not directly supported
+by the encoding/json package. 
+
+The need revolves around a few key wants/needs:
+- unmarshal and marshal structs representing JSON messages
+- fields in the message not in the struct must be maintained when unmarshalled
+- those same fields must be marshalled back when encoded again
+
+The initial version used map[string]interface{} to put in the keys that
+were known and then any other keys were put into a field called AdditionalFields.
+
+This has a few negatives:
+- Dual marshaling/unmarshalling is required
+- Adding a struct field requires manually adding a key by name to be encoded/decoded from the map (which is a loosely coupled construct), which can lead to bugs that aren't detected or have bad side effects
+- Tests can quickly become disconnected if those keys aren't put
+in tests as well. So you think you have support working, but you
+don't. Existing tests were found that didn't test the marshalling output.
+- There is no enforcement that if AdditionalFields is required on one struct, it should be on all containers
+that don't have custom marshal/unmarshal.
+
+This package aims to support our needs by providing custom Marshal()/Unmarshal() functions.
+
+This prevents all the negatives in the initial solution listed above. However, it does add its own negative:
+- Custom encoding/decoding via reflection is messy (as can be seen in encoding/json itself)
+
+Go proverb: Reflection is never clear
+Suggested reading: https://blog.golang.org/laws-of-reflection
+
+## Important design decisions
+
+- We don't want to understand all JSON decoding rules
+- We don't want to deal with all the quoting, commas, etc on decode
+- Need support for json.Marshaler/Unmarshaler, so we can support types like time.Time
+- If struct does not implement json.Unmarshaler, it must have AdditionalFields defined
+- We only support root level objects that are \*struct or struct
+
+To facilitate these goals, we will utilize the json.Encoder and json.Decoder.
+They provide streaming processing (efficient) and return errors on bad JSON.
+
+Support for json.Marshaler/Unmarshaler allows for us to use non-basic types
+that must be specially encoded/decoded (like time.Time objects).
+
+We don't support types that lack either a custom unmarshal or an AdditionalFields
+field, in order to prevent future devs from forgetting that important field and
+generating bad return values.
+
+Support for root level objects of \*struct or struct simply acknowledges the
+fact that this is designed only for the purposes listed in the Introduction.
+Anything outside that (like encoding a lone number) should be done with the
+regular json package (as it will not have additional fields).
+
+We don't support a few things on json supported reference types and structs:
+- \*map: no need for pointers to maps
+- \*slice: no need for pointers to slices
+- any further pointers on struct after \*struct
+
+There should never be a need for this in Go.
+
+## Design
+
+## State Machines
+
+This uses state machine designs that are based upon the Rob Pike talk on
+lexers and parsers: https://www.youtube.com/watch?v=HxaD_trXwRE
+
+This is the most common pattern for state machines in Go and
+the model to follow closely when dealing with streaming
+processing of textual data.
+
+Our state machines are based on the type:
+```go
+type stateFn func() (stateFn, error)
+```
+
+The state machine itself is simply a struct that has methods that
+satisfy stateFn. 
+
+Our state machines have a few standard calls
+- run(): runs the state machine
+- start(): always the first stateFn to be called
+
+All state machines have the following logic:
+* run() is called
+* start() is called and returns the next stateFn or error
+* stateFn is called
+    - If returned stateFn(next state) is non-nil, call it
+    - If error is non-nil, run() returns the error
+    - If stateFn == nil and err == nil, run() returns a nil error
+
+## Supporting types
+
+Marshalling/Unmarshalling must support(within top level struct):
+- struct
+- \*struct
+- []struct
+- []\*struct
+- []map[string]structContainer
+- [][]structContainer
+
+**Term note:** structContainer == type that has a struct or \*struct inside it
+
+We specifically do not support []interface or map[string]interface
+where the interface value would hold some value with a struct in it.
+
+Those will still marshal/unmarshal, but without support for 
+AdditionalFields. 
+
+## Marshalling
+
+The marshalling design is based around a state machine.
+
+The basic logic is as follows:
+
+* If struct has custom marshaller, call it and return
+* If struct has field "AdditionalFields", it must be a map[string]interface{}
+* If struct does not have "AdditionalFields", give an error
+* Get struct tag detailing json names to go names, create mapping
+* For each public field name
+    - Write field name out
+    - If field value is a struct, recursively call our state machine
+    - Otherwise, use the json.Encoder to write out the value
+
+## Unmarshalling
+
+The unmarshalling design is also based around a state machine. The
+basic logic is as follows:
+
+* If struct has custom unmarshaller, call it
+* If struct has field "AdditionalFields", it must be a map[string]interface{}
+* Get struct tag detailing json names to go names, create mapping
+* For each key found
+    - If key exists, 
+        - If value is basic type, extract value into struct field using Decoder
+        - If value is struct type, recursively call the state machine
+    - If key doesn't exist, add it to AdditionalFields (if that field exists) using the Decoder

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/json.go 🔗

@@ -0,0 +1,184 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package json provides functions for marshalling and unmarshalling types to JSON. These functions are meant to
+// be utilized inside of structs that implement json.Unmarshaler and json.Marshaler interfaces.
+// This package provides the additional functionality of writing fields that are not in the struct when marshalling
+// to a field called AdditionalFields if that field exists and is a map[string]interface{}.
+// When marshalling, if the struct has all the same prerequisites, it will use the keys in AdditionalFields as
+// extra fields. This package uses encoding/json underneath.
+package json
+
+import (
+	"bytes"
+	"encoding/json"
+	"fmt"
+	"reflect"
+	"strings"
+)
+
+const addField = "AdditionalFields"
+const (
+	marshalJSON   = "MarshalJSON"
+	unmarshalJSON = "UnmarshalJSON"
+)
+
+var (
+	leftBrace  = []byte("{")[0]
+	rightBrace = []byte("}")[0]
+	comma      = []byte(",")[0]
+	leftParen  = []byte("[")[0]
+	rightParen = []byte("]")[0]
+)
+
+var mapStrInterType = reflect.TypeOf(map[string]interface{}{})
+
+// stateFn defines a state machine function. This will be used in all state
+// machines in this package.
+type stateFn func() (stateFn, error)
+
+// Marshal is used to marshal a type into its JSON representation. It
+// wraps the stdlib calls in order to marshal a struct or *struct so
+// that a field called "AdditionalFields" of type map[string]interface{},
+// tagged `json:"-"`, is marshalled as if its entries were
+// fields within the struct.
+func Marshal(i interface{}) ([]byte, error) {
+	buff := bytes.Buffer{}
+	enc := json.NewEncoder(&buff)
+	enc.SetEscapeHTML(false)
+	enc.SetIndent("", "")
+
+	v := reflect.ValueOf(i)
+	if v.Kind() != reflect.Ptr && v.CanAddr() {
+		v = v.Addr()
+	}
+	err := marshalStruct(v, &buff, enc)
+	if err != nil {
+		return nil, err
+	}
+	return buff.Bytes(), nil
+}
+
+// Unmarshal unmarshals a []byte representing JSON into i, which must be a *struct. In addition, if the struct has
+// a field called AdditionalFields of type map[string]interface{}, JSON data representing fields not in the struct
+// will be written as key/value pairs to AdditionalFields.
+func Unmarshal(b []byte, i interface{}) error {
+	if len(b) == 0 {
+		return nil
+	}
+
+	jdec := json.NewDecoder(bytes.NewBuffer(b))
+	jdec.UseNumber()
+	return unmarshalStruct(jdec, i)
+}
+
+// MarshalRaw marshals i into a json.RawMessage. If i cannot be marshalled,
+// this will panic. This is exposed to help test AdditionalField values
+// which are stored as json.RawMessage.
+func MarshalRaw(i interface{}) json.RawMessage {
+	b, err := json.Marshal(i)
+	if err != nil {
+		panic(err)
+	}
+	return json.RawMessage(b)
+}
+
+// isDelim simply tests to see if a json.Token is a delimiter.
+func isDelim(got json.Token) bool {
+	switch got.(type) {
+	case json.Delim:
+		return true
+	}
+	return false
+}
+
+// delimIs tests got to see if it is want.
+func delimIs(got json.Token, want rune) bool {
+	switch v := got.(type) {
+	case json.Delim:
+		if v == json.Delim(want) {
+			return true
+		}
+	}
+	return false
+}
+
+// hasMarshalJSON will determine if the value or a pointer to this value has
+// the MarshalJSON method.
+func hasMarshalJSON(v reflect.Value) bool {
+	if method := v.MethodByName(marshalJSON); method.Kind() != reflect.Invalid {
+		_, ok := v.Interface().(json.Marshaler)
+		return ok
+	}
+
+	if v.Kind() == reflect.Ptr {
+		v = v.Elem()
+	} else {
+		if !v.CanAddr() {
+			return false
+		}
+		v = v.Addr()
+	}
+
+	if method := v.MethodByName(marshalJSON); method.Kind() != reflect.Invalid {
+		_, ok := v.Interface().(json.Marshaler)
+		return ok
+	}
+	return false
+}
+
+// callMarshalJSON will call MarshalJSON() method on the value or a pointer to this value.
+// This will panic if the method is not defined.
+func callMarshalJSON(v reflect.Value) ([]byte, error) {
+	if method := v.MethodByName(marshalJSON); method.Kind() != reflect.Invalid {
+		marsh := v.Interface().(json.Marshaler)
+		return marsh.MarshalJSON()
+	}
+
+	if v.Kind() == reflect.Ptr {
+		v = v.Elem()
+	} else {
+		if v.CanAddr() {
+			v = v.Addr()
+		}
+	}
+
+	if method := v.MethodByName(marshalJSON); method.Kind() != reflect.Invalid {
+		marsh := v.Interface().(json.Marshaler)
+		return marsh.MarshalJSON()
+	}
+
+	panic(fmt.Sprintf("callMarshalJSON called on type %T that does not have MarshalJSON defined", v.Interface()))
+}
+
+// hasUnmarshalJSON will determine if the value or a pointer to this value has
+// the UnmarshalJSON method.
+func hasUnmarshalJSON(v reflect.Value) bool {
+	// You can't unmarshal on a non-pointer type.
+	if v.Kind() != reflect.Ptr {
+		if !v.CanAddr() {
+			return false
+		}
+		v = v.Addr()
+	}
+
+	if method := v.MethodByName(unmarshalJSON); method.Kind() != reflect.Invalid {
+		_, ok := v.Interface().(json.Unmarshaler)
+		return ok
+	}
+
+	return false
+}
+
+// hasOmitEmpty indicates that the field should not be output when it holds
+// its zero value and "omitempty" is set in the tag. tag is the string
+// returned by reflect.StructField.Tag.Get("json").
+func hasOmitEmpty(tag string) bool {
+	sl := strings.Split(tag, ",")
+	for _, str := range sl {
+		if str == "omitempty" {
+			return true
+		}
+	}
+	return false
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/mapslice.go 🔗

@@ -0,0 +1,333 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package json
+
+import (
+	"encoding/json"
+	"fmt"
+	"reflect"
+)
+
+// unmarshalMap unmarshals a map.
+func unmarshalMap(dec *json.Decoder, m reflect.Value) error {
+	if m.Kind() != reflect.Ptr || m.Elem().Kind() != reflect.Map {
+		panic("unmarshalMap called on non-*map value")
+	}
+	mapValueType := m.Elem().Type().Elem()
+	walk := mapWalk{dec: dec, m: m, valueType: mapValueType}
+	if err := walk.run(); err != nil {
+		return err
+	}
+	return nil
+}
+
+type mapWalk struct {
+	dec       *json.Decoder
+	key       string
+	m         reflect.Value
+	valueType reflect.Type
+}
+
+// run runs our decoder state machine.
+func (m *mapWalk) run() error {
+	var state = m.start
+	var err error
+	for {
+		state, err = state()
+		if err != nil {
+			return err
+		}
+		if state == nil {
+			return nil
+		}
+	}
+}
+
+func (m *mapWalk) start() (stateFn, error) {
+	// maps can have custom unmarshalers.
+	if hasUnmarshalJSON(m.m) {
+		err := m.dec.Decode(m.m.Interface())
+		if err != nil {
+			return nil, err
+		}
+		return nil, nil
+	}
+
+	// We only want to use this if the map value is:
+	// *struct/struct/map/slice
+	// otherwise use standard decode
+	t, _ := m.valueBaseType()
+	switch t.Kind() {
+	case reflect.Struct, reflect.Map, reflect.Slice:
+		delim, err := m.dec.Token()
+		if err != nil {
+			return nil, err
+		}
+		// This indicates the value was set to JSON null.
+		if delim == nil {
+			return nil, nil
+		}
+		if !delimIs(delim, '{') {
+			return nil, fmt.Errorf("Unmarshal expected opening {, received %v", delim)
+		}
+		return m.next, nil
+	case reflect.Ptr:
+		return nil, fmt.Errorf("do not support maps with values of '**type' or '*reference'")
+	}
+
+	// This is a basic map type, so just use Decode().
+	if err := m.dec.Decode(m.m.Interface()); err != nil {
+		return nil, err
+	}
+
+	return nil, nil
+}
+
+func (m *mapWalk) next() (stateFn, error) {
+	if m.dec.More() {
+		key, err := m.dec.Token()
+		if err != nil {
+			return nil, err
+		}
+		m.key = key.(string)
+		return m.storeValue, nil
+	}
+	// No more entries, so remove final }.
+	_, err := m.dec.Token()
+	if err != nil {
+		return nil, err
+	}
+	return nil, nil
+}
+
+func (m *mapWalk) storeValue() (stateFn, error) {
+	v := m.valueType
+	for {
+		switch v.Kind() {
+		case reflect.Ptr:
+			v = v.Elem()
+			continue
+		case reflect.Struct:
+			return m.storeStruct, nil
+		case reflect.Map:
+			return m.storeMap, nil
+		case reflect.Slice:
+			return m.storeSlice, nil
+		}
+		return nil, fmt.Errorf("bug: mapWalk.storeValue() called on unsupported type: %v", v.Kind())
+	}
+}
+
+func (m *mapWalk) storeStruct() (stateFn, error) {
+	v := newValue(m.valueType)
+	if err := unmarshalStruct(m.dec, v.Interface()); err != nil {
+		return nil, err
+	}
+
+	if m.valueType.Kind() == reflect.Ptr {
+		m.m.Elem().SetMapIndex(reflect.ValueOf(m.key), v)
+		return m.next, nil
+	}
+	m.m.Elem().SetMapIndex(reflect.ValueOf(m.key), v.Elem())
+
+	return m.next, nil
+}
+
+func (m *mapWalk) storeMap() (stateFn, error) {
+	v := reflect.MakeMap(m.valueType)
+	ptr := newValue(v.Type())
+	ptr.Elem().Set(v)
+	if err := unmarshalMap(m.dec, ptr); err != nil {
+		return nil, err
+	}
+
+	m.m.Elem().SetMapIndex(reflect.ValueOf(m.key), v)
+
+	return m.next, nil
+}
+
+func (m *mapWalk) storeSlice() (stateFn, error) {
+	v := newValue(m.valueType)
+	if err := unmarshalSlice(m.dec, v); err != nil {
+		return nil, err
+	}
+
+	m.m.Elem().SetMapIndex(reflect.ValueOf(m.key), v.Elem())
+
+	return m.next, nil
+}
+
+// valueBaseType returns the underlying Type. So a *struct would yield
+// struct, etc...
+func (m *mapWalk) valueBaseType() (reflect.Type, bool) {
+	ptr := false
+	v := m.valueType
+	if v.Kind() == reflect.Ptr {
+		ptr = true
+		v = v.Elem()
+	}
+	return v, ptr
+}
+
+// unmarshalSlice unmarshals the next value, which must be a slice, into
+// ptrSlice, which must be a pointer to a slice. newValue() can be used to
+// create the slice.
+func unmarshalSlice(dec *json.Decoder, ptrSlice reflect.Value) error {
+	if ptrSlice.Kind() != reflect.Ptr || ptrSlice.Elem().Kind() != reflect.Slice {
+		panic("unmarshalSlice called on non-*[]slice value")
+	}
+	sliceValueType := ptrSlice.Elem().Type().Elem()
+	walk := sliceWalk{
+		dec:       dec,
+		s:         ptrSlice,
+		valueType: sliceValueType,
+	}
+	if err := walk.run(); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+type sliceWalk struct {
+	dec       *json.Decoder
+	s         reflect.Value // *[]slice
+	valueType reflect.Type
+}
+
+// run runs our decoder state machine.
+func (s *sliceWalk) run() error {
+	var state = s.start
+	var err error
+	for {
+		state, err = state()
+		if err != nil {
+			return err
+		}
+		if state == nil {
+			return nil
+		}
+	}
+}
+
+func (s *sliceWalk) start() (stateFn, error) {
+	// slices can have custom unmarshalers.
+	if hasUnmarshalJSON(s.s) {
+		err := s.dec.Decode(s.s.Interface())
+		if err != nil {
+			return nil, err
+		}
+		return nil, nil
+	}
+
+	// We only want to use this if the slice value is:
+	// []*struct/[]struct/[]map/[]slice
+	// otherwise use standard decode
+	t := s.valueBaseType()
+
+	switch t.Kind() {
+	case reflect.Ptr:
+		return nil, fmt.Errorf("cannot unmarshal into a **<type> or *<reference>")
+	case reflect.Struct, reflect.Map, reflect.Slice:
+		delim, err := s.dec.Token()
+		if err != nil {
+			return nil, err
+		}
+		// This indicates the value was set to JSON null.
+		if delim == nil {
+			return nil, nil
+		}
+		if !delimIs(delim, '[') {
+			return nil, fmt.Errorf("Unmarshal expected opening [, received %v", delim)
+		}
+		return s.next, nil
+	}
+
+	if err := s.dec.Decode(s.s.Interface()); err != nil {
+		return nil, err
+	}
+	return nil, nil
+}
+
+func (s *sliceWalk) next() (stateFn, error) {
+	if s.dec.More() {
+		return s.storeValue, nil
+	}
+	// Nothing left in the slice, remove closing ]
+	_, err := s.dec.Token()
+	return nil, err
+}
+
+func (s *sliceWalk) storeValue() (stateFn, error) {
+	t := s.valueBaseType()
+	switch t.Kind() {
+	case reflect.Ptr:
+		return nil, fmt.Errorf("do not support 'pointer to pointer' or 'pointer to reference' types")
+	case reflect.Struct:
+		return s.storeStruct, nil
+	case reflect.Map:
+		return s.storeMap, nil
+	case reflect.Slice:
+		return s.storeSlice, nil
+	}
+	return nil, fmt.Errorf("bug: sliceWalk.storeValue() called on unsupported type: %v", t.Kind())
+}
+
+func (s *sliceWalk) storeStruct() (stateFn, error) {
+	v := newValue(s.valueType)
+	if err := unmarshalStruct(s.dec, v.Interface()); err != nil {
+		return nil, err
+	}
+
+	if s.valueType.Kind() == reflect.Ptr {
+		s.s.Elem().Set(reflect.Append(s.s.Elem(), v))
+		return s.next, nil
+	}
+
+	s.s.Elem().Set(reflect.Append(s.s.Elem(), v.Elem()))
+	return s.next, nil
+}
+
+func (s *sliceWalk) storeMap() (stateFn, error) {
+	v := reflect.MakeMap(s.valueType)
+	ptr := newValue(v.Type())
+	ptr.Elem().Set(v)
+
+	if err := unmarshalMap(s.dec, ptr); err != nil {
+		return nil, err
+	}
+
+	s.s.Elem().Set(reflect.Append(s.s.Elem(), v))
+
+	return s.next, nil
+}
+
+func (s *sliceWalk) storeSlice() (stateFn, error) {
+	v := newValue(s.valueType)
+	if err := unmarshalSlice(s.dec, v); err != nil {
+		return nil, err
+	}
+
+	s.s.Elem().Set(reflect.Append(s.s.Elem(), v.Elem()))
+
+	return s.next, nil
+}
+
+// valueBaseType returns the underlying Type. So a *struct would yield
+// struct, etc...
+func (s *sliceWalk) valueBaseType() reflect.Type {
+	v := s.valueType
+	if v.Kind() == reflect.Ptr {
+		v = v.Elem()
+	}
+	return v
+}
+
+// newValue returns a new *type that represents the type passed.
+func newValue(valueType reflect.Type) reflect.Value {
+	if valueType.Kind() == reflect.Ptr {
+		return reflect.New(valueType.Elem())
+	}
+	return reflect.New(valueType)
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/marshal.go 🔗

@@ -0,0 +1,346 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package json
+
+import (
+	"bytes"
+	"encoding/json"
+	"fmt"
+	"reflect"
+	"unicode"
+)
+
+// marshalStruct takes in i, which must be a *struct or struct and marshals its content
+// as JSON into buff (sometimes with writes to buff directly, sometimes via enc).
+// This call is recursive for all fields of *struct or struct type.
+func marshalStruct(v reflect.Value, buff *bytes.Buffer, enc *json.Encoder) error {
+	if v.Kind() == reflect.Ptr {
+		v = v.Elem()
+	}
+	// We only care about custom Marshalling a struct.
+	if v.Kind() != reflect.Struct {
+		return fmt.Errorf("bug: marshal() received a non *struct or struct, received type %T", v.Interface())
+	}
+
+	if hasMarshalJSON(v) {
+		b, err := callMarshalJSON(v)
+		if err != nil {
+			return err
+		}
+		buff.Write(b)
+		return nil
+	}
+
+	t := v.Type()
+
+	// If it has an AdditionalFields field make sure its the right type.
+	f := v.FieldByName(addField)
+	if f.Kind() != reflect.Invalid {
+		if f.Kind() != reflect.Map {
+			return fmt.Errorf("type %T has field 'AdditionalFields' that is not a map[string]interface{}", v.Interface())
+		}
+		if !f.Type().AssignableTo(mapStrInterType) {
+			return fmt.Errorf("type %T has field 'AdditionalFields' that is not a map[string]interface{}", v.Interface())
+		}
+	}
+
+	translator, err := findFields(v)
+	if err != nil {
+		return err
+	}
+
+	buff.WriteByte(leftBrace)
+	for x := 0; x < v.NumField(); x++ {
+		field := v.Field(x)
+
+		// We don't access private fields.
+		if unicode.IsLower(rune(t.Field(x).Name[0])) {
+			continue
+		}
+
+		if t.Field(x).Name == addField {
+			if v.Field(x).Len() > 0 {
+				if err := writeAddFields(field.Interface(), buff, enc); err != nil {
+					return err
+				}
+				buff.WriteByte(comma)
+			}
+			continue
+		}
+
+		// If they have omitempty set, we don't write out the field if
+		// it is the zero value.
+		if hasOmitEmpty(t.Field(x).Tag.Get("json")) {
+			if v.Field(x).IsZero() {
+				continue
+			}
+		}
+
+		// Write out the field name part.
+		jsonName := translator.jsonName(t.Field(x).Name)
+		buff.WriteString(fmt.Sprintf("%q:", jsonName))
+
+		if field.Kind() == reflect.Ptr {
+			field = field.Elem()
+		}
+
+		if err := marshalStructField(field, buff, enc); err != nil {
+			return err
+		}
+	}
+
+	buff.Truncate(buff.Len() - 1) // Remove final comma
+	buff.WriteByte(rightBrace)
+
+	return nil
+}
+
+func marshalStructField(field reflect.Value, buff *bytes.Buffer, enc *json.Encoder) error {
+	// Always write a trailing comma; marshalStruct removes the final one.
+	defer buff.WriteByte(comma)
+
+	switch field.Kind() {
+	// If it was a *struct or struct, we need to recursively call marshalStruct().
+	case reflect.Struct:
+		if field.CanAddr() {
+			field = field.Addr()
+		}
+		return marshalStruct(field, buff, enc)
+	case reflect.Map:
+		return marshalMap(field, buff, enc)
+	case reflect.Slice:
+		return marshalSlice(field, buff, enc)
+	}
+
+	// It is just a basic type, so encode it.
+	if err := enc.Encode(field.Interface()); err != nil {
+		return err
+	}
+	buff.Truncate(buff.Len() - 1) // Remove Encode() added \n
+
+	return nil
+}
+
+func marshalMap(v reflect.Value, buff *bytes.Buffer, enc *json.Encoder) error {
+	if v.Kind() != reflect.Map {
+		return fmt.Errorf("bug: marshalMap() called on %T", v.Interface())
+	}
+	if v.Len() == 0 {
+		buff.WriteByte(leftBrace)
+		buff.WriteByte(rightBrace)
+		return nil
+	}
+	encoder := mapEncode{m: v, buff: buff, enc: enc}
+	return encoder.run()
+}
+
+type mapEncode struct {
+	m    reflect.Value
+	buff *bytes.Buffer
+	enc  *json.Encoder
+
+	valueBaseType reflect.Type
+}
+
+// run runs our encoder state machine.
+func (m *mapEncode) run() error {
+	var state = m.start
+	var err error
+	for {
+		state, err = state()
+		if err != nil {
+			return err
+		}
+		if state == nil {
+			return nil
+		}
+	}
+}
+
+func (m *mapEncode) start() (stateFn, error) {
+	if hasMarshalJSON(m.m) {
+		b, err := callMarshalJSON(m.m)
+		if err != nil {
+			return nil, err
+		}
+		m.buff.Write(b)
+		return nil, nil
+	}
+
+	valueBaseType := m.m.Type().Elem()
+	if valueBaseType.Kind() == reflect.Ptr {
+		valueBaseType = valueBaseType.Elem()
+	}
+	m.valueBaseType = valueBaseType
+
+	switch valueBaseType.Kind() {
+	case reflect.Ptr:
+		return nil, fmt.Errorf("Marshal does not support **<type> or *<reference>")
+	case reflect.Struct, reflect.Map, reflect.Slice:
+		return m.encode, nil
+	}
+
+	// If the map value doesn't have a struct/map/slice, just Encode() it.
+	if err := m.enc.Encode(m.m.Interface()); err != nil {
+		return nil, err
+	}
+	m.buff.Truncate(m.buff.Len() - 1) // Remove Encode() added \n
+	return nil, nil
+}
+
+func (m *mapEncode) encode() (stateFn, error) {
+	m.buff.WriteByte(leftBrace)
+
+	iter := m.m.MapRange()
+	for iter.Next() {
+		// Write the key.
+		k := iter.Key()
+		m.buff.WriteString(fmt.Sprintf("%q:", k.String()))
+
+		v := iter.Value()
+		switch m.valueBaseType.Kind() {
+		case reflect.Struct:
+			if v.CanAddr() {
+				v = v.Addr()
+			}
+			if err := marshalStruct(v, m.buff, m.enc); err != nil {
+				return nil, err
+			}
+		case reflect.Map:
+			if err := marshalMap(v, m.buff, m.enc); err != nil {
+				return nil, err
+			}
+		case reflect.Slice:
+			if err := marshalSlice(v, m.buff, m.enc); err != nil {
+				return nil, err
+			}
+		default:
+			panic(fmt.Sprintf("critical bug: mapEncode.encode() called with value base type: %v", m.valueBaseType.Kind()))
+		}
+		m.buff.WriteByte(comma)
+	}
+	m.buff.Truncate(m.buff.Len() - 1) // Remove final comma
+	m.buff.WriteByte(rightBrace)
+
+	return nil, nil
+}
+
+func marshalSlice(v reflect.Value, buff *bytes.Buffer, enc *json.Encoder) error {
+	if v.Kind() != reflect.Slice {
+		return fmt.Errorf("bug: marshalSlice() called on %T", v.Interface())
+	}
+	if v.Len() == 0 {
+		buff.WriteByte(leftParen)
+		buff.WriteByte(rightParen)
+		return nil
+	}
+	encoder := sliceEncode{s: v, buff: buff, enc: enc}
+	return encoder.run()
+}
+
+type sliceEncode struct {
+	s    reflect.Value
+	buff *bytes.Buffer
+	enc  *json.Encoder
+
+	valueBaseType reflect.Type
+}
+
+// run runs our encoder state machine.
+func (s *sliceEncode) run() error {
+	var state = s.start
+	var err error
+	for {
+		state, err = state()
+		if err != nil {
+			return err
+		}
+		if state == nil {
+			return nil
+		}
+	}
+}
+
+func (s *sliceEncode) start() (stateFn, error) {
+	if hasMarshalJSON(s.s) {
+		b, err := callMarshalJSON(s.s)
+		if err != nil {
+			return nil, err
+		}
+		s.buff.Write(b)
+		return nil, nil
+	}
+
+	valueBaseType := s.s.Type().Elem()
+	if valueBaseType.Kind() == reflect.Ptr {
+		valueBaseType = valueBaseType.Elem()
+	}
+	s.valueBaseType = valueBaseType
+
+	switch valueBaseType.Kind() {
+	case reflect.Ptr:
+		return nil, fmt.Errorf("Marshal does not support **<type> or *<reference>")
+	case reflect.Struct, reflect.Map, reflect.Slice:
+		return s.encode, nil
+	}
+
+	// If the map value doesn't have a struct/map/slice, just Encode() it.
+	if err := s.enc.Encode(s.s.Interface()); err != nil {
+		return nil, err
+	}
+	s.buff.Truncate(s.buff.Len() - 1) // Remove Encode added \n
+
+	return nil, nil
+}
+
+func (s *sliceEncode) encode() (stateFn, error) {
+	s.buff.WriteByte(leftParen)
+	for i := 0; i < s.s.Len(); i++ {
+		v := s.s.Index(i)
+		switch s.valueBaseType.Kind() {
+		case reflect.Struct:
+			if v.CanAddr() {
+				v = v.Addr()
+			}
+			if err := marshalStruct(v, s.buff, s.enc); err != nil {
+				return nil, err
+			}
+		case reflect.Map:
+			if err := marshalMap(v, s.buff, s.enc); err != nil {
+				return nil, err
+			}
+		case reflect.Slice:
+			if err := marshalSlice(v, s.buff, s.enc); err != nil {
+				return nil, err
+			}
+		default:
+			panic(fmt.Sprintf("critical bug: sliceEncode.encode() called with value base type: %v", s.valueBaseType.Kind()))
+		}
+		s.buff.WriteByte(comma)
+	}
+	s.buff.Truncate(s.buff.Len() - 1) // Remove final comma
+	s.buff.WriteByte(rightParen)
+	return nil, nil
+}
+
+// writeAddFields writes the AdditionalFields struct field out to JSON as field
+// values. i must be a map[string]interface{} or this will panic.
+func writeAddFields(i interface{}, buff *bytes.Buffer, enc *json.Encoder) error {
+	m := i.(map[string]interface{})
+
+	x := 0
+	for k, v := range m {
+		buff.WriteString(fmt.Sprintf("%q:", k))
+		if err := enc.Encode(v); err != nil {
+			return err
+		}
+		buff.Truncate(buff.Len() - 1) // Remove Encode() added \n
+
+		if x+1 != len(m) {
+			buff.WriteByte(comma)
+		}
+		x++
+	}
+	return nil
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/struct.go 🔗

@@ -0,0 +1,290 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package json
+
+import (
+	"encoding/json"
+	"fmt"
+	"reflect"
+	"strings"
+)
+
+func unmarshalStruct(jdec *json.Decoder, i interface{}) error {
+	v := reflect.ValueOf(i)
+	if v.Kind() != reflect.Ptr {
+		return fmt.Errorf("Unmarshal() received type %T, which is not a *struct", i)
+	}
+	v = v.Elem()
+	if v.Kind() != reflect.Struct {
+		return fmt.Errorf("Unmarshal() received type %T, which is not a *struct", i)
+	}
+
+	if hasUnmarshalJSON(v) {
+		// Indicates that this type has a custom Unmarshaler.
+		return jdec.Decode(v.Addr().Interface())
+	}
+
+	f := v.FieldByName(addField)
+	if f.Kind() == reflect.Invalid {
+		return fmt.Errorf("Unmarshal(%T) only supports structs that have the field AdditionalFields or implements json.Unmarshaler", i)
+	}
+
+	if f.Kind() != reflect.Map || !f.Type().AssignableTo(mapStrInterType) {
+		return fmt.Errorf("type %T has field 'AdditionalFields' that is not a map[string]interface{}", i)
+	}
+
+	dec := newDecoder(jdec, v)
+	return dec.run()
+}
+
+type decoder struct {
+	dec        *json.Decoder
+	value      reflect.Value // This will be a reflect.Struct
+	translator translateFields
+	key        string
+}
+
+func newDecoder(dec *json.Decoder, value reflect.Value) *decoder {
+	return &decoder{value: value, dec: dec}
+}
+
+// run runs our decoder state machine.
+func (d *decoder) run() error {
+	var state = d.start
+	var err error
+	for {
+		state, err = state()
+		if err != nil {
+			return err
+		}
+		if state == nil {
+			return nil
+		}
+	}
+}
+
+// start looks for our opening delimiter '{' and then transitions to looping through our fields.
+func (d *decoder) start() (stateFn, error) {
+	var err error
+	d.translator, err = findFields(d.value)
+	if err != nil {
+		return nil, err
+	}
+
+	delim, err := d.dec.Token()
+	if err != nil {
+		return nil, err
+	}
+	if !delimIs(delim, '{') {
+		return nil, fmt.Errorf("Unmarshal expected opening {, received %v", delim)
+	}
+
+	return d.next, nil
+}
+
+// next gets the next struct field name from the raw json or stops the machine if we get our closing }.
+func (d *decoder) next() (stateFn, error) {
+	if !d.dec.More() {
+		// Remove the closing }.
+		if _, err := d.dec.Token(); err != nil {
+			return nil, err
+		}
+		return nil, nil
+	}
+
+	key, err := d.dec.Token()
+	if err != nil {
+		return nil, err
+	}
+
+	d.key = key.(string)
+	return d.storeValue, nil
+}
+
+// storeValue takes the next value and stores it in our struct. If the field can't be found
+// in the struct, it pushes the operation to storeAdditional().
+func (d *decoder) storeValue() (stateFn, error) {
+	goName := d.translator.goName(d.key)
+	if goName == "" {
+		goName = d.key
+	}
+
+	// If the field isn't in the struct, the value goes in AdditionalFields.
+	f := d.value.FieldByName(goName)
+	if f.Kind() == reflect.Invalid {
+		return d.storeAdditional, nil
+	}
+
+	// Indicates that this type has a custom Unmarshaler.
+	if hasUnmarshalJSON(f) {
+		err := d.dec.Decode(f.Addr().Interface())
+		if err != nil {
+			return nil, err
+		}
+		return d.next, nil
+	}
+
+	t, isPtr, err := fieldBaseType(d.value, goName)
+	if err != nil {
+		return nil, fmt.Errorf("type(%s) had field(%s) %w", d.value.Type().Name(), goName, err)
+	}
+
+	switch t.Kind() {
+	// We need to recursively call ourselves on any *struct or struct.
+	case reflect.Struct:
+		if isPtr {
+			if f.IsNil() {
+				f.Set(reflect.New(t))
+			}
+		} else {
+			f = f.Addr()
+		}
+		if err := unmarshalStruct(d.dec, f.Interface()); err != nil {
+			return nil, err
+		}
+		return d.next, nil
+	case reflect.Map:
+		v := reflect.MakeMap(f.Type())
+		ptr := newValue(f.Type())
+		ptr.Elem().Set(v)
+		if err := unmarshalMap(d.dec, ptr); err != nil {
+			return nil, err
+		}
+		f.Set(ptr.Elem())
+		return d.next, nil
+	case reflect.Slice:
+		v := reflect.MakeSlice(f.Type(), 0, 0)
+		ptr := newValue(f.Type())
+		ptr.Elem().Set(v)
+		if err := unmarshalSlice(d.dec, ptr); err != nil {
+			return nil, err
+		}
+		f.Set(ptr.Elem())
+		return d.next, nil
+	}
+
+	if !isPtr {
+		f = f.Addr()
+	}
+
+	// For values that are pointers, we need them to be non-nil in order
+	// to decode into them.
+	if f.IsNil() {
+		f.Set(reflect.New(t))
+	}
+
+	if err := d.dec.Decode(f.Interface()); err != nil {
+		return nil, err
+	}
+
+	return d.next, nil
+}
+
+// storeAdditional pushes the key/value into our .AdditionalFields map.
+func (d *decoder) storeAdditional() (stateFn, error) {
+	rw := json.RawMessage{}
+	if err := d.dec.Decode(&rw); err != nil {
+		return nil, err
+	}
+	field := d.value.FieldByName(addField)
+	if field.IsNil() {
+		field.Set(reflect.MakeMap(field.Type()))
+	}
+	field.SetMapIndex(reflect.ValueOf(d.key), reflect.ValueOf(rw))
+	return d.next, nil
+}
+
+func fieldBaseType(v reflect.Value, fieldName string) (t reflect.Type, isPtr bool, err error) {
+	sf, ok := v.Type().FieldByName(fieldName)
+	if !ok {
+		return nil, false, fmt.Errorf("bug: fieldBaseType() lookup of field(%s) on type(%s): do not have field", fieldName, v.Type().Name())
+	}
+	t = sf.Type
+	if t.Kind() == reflect.Ptr {
+		t = t.Elem()
+		isPtr = true
+	}
+	if t.Kind() == reflect.Ptr {
+		return nil, isPtr, fmt.Errorf("received pointer to pointer type, not supported")
+	}
+	return t, isPtr, nil
+}
+
+type translateField struct {
+	jsonName string
+	goName   string
+}
+
+// translateFields is a list of translateFields with a handy lookup method.
+type translateFields []translateField
+
+// goName loops through a list of fields looking for one containing the jsonName and
+// returning the goName. If not found, returns the empty string.
+// Note: not a map because at this size slices are faster even in tight loops.
+func (t translateFields) goName(jsonName string) string {
+	for _, entry := range t {
+		if entry.jsonName == jsonName {
+			return entry.goName
+		}
+	}
+	return ""
+}
+
+// jsonName loops through a list of fields looking for one containing the goName and
+// returning the jsonName. If not found, returns the empty string.
+// Note: not a map because at this size slices are faster even in tight loops.
+func (t translateFields) jsonName(goName string) string {
+	for _, entry := range t {
+		if entry.goName == goName {
+			return entry.jsonName
+		}
+	}
+	return ""
+}
+
+var umarshalerType = reflect.TypeOf((*json.Unmarshaler)(nil)).Elem()
+
+// findFields parses a struct and writes the field tags for lookup. It will return an error
+// if any field has a type of *struct or struct that implements json.Unmarshaler, which is not supported.
+func findFields(v reflect.Value) (translateFields, error) {
+	if v.Kind() == reflect.Ptr {
+		v = v.Elem()
+	}
+	if v.Kind() != reflect.Struct {
+		return nil, fmt.Errorf("findFields received a %s type, expected *struct or struct", v.Type().Name())
+	}
+	tfs := make([]translateField, 0, v.NumField())
+	for i := 0; i < v.NumField(); i++ {
+		tf := translateField{
+			goName:   v.Type().Field(i).Name,
+			jsonName: parseTag(v.Type().Field(i).Tag.Get("json")),
+		}
+		switch tf.jsonName {
+		case "", "-":
+			tf.jsonName = tf.goName
+		}
+		tfs = append(tfs, tf)
+
+		f := v.Field(i)
+		if f.Kind() == reflect.Ptr {
+			f = f.Elem()
+		}
+		if f.Kind() == reflect.Struct {
+			if f.Type().Implements(umarshalerType) {
+				return nil, fmt.Errorf("struct type %q has field %q which "+
+					"implements json.Unmarshaler; this is not supported", v.Type().Name(), v.Type().Field(i).Name)
+			}
+		}
+	}
+	return tfs, nil
+}
+
+// parseTag just returns the first entry in the tag. tag is the string
+// returned by reflect.StructField.Tag().Get().
+func parseTag(tag string) string {
+	if idx := strings.Index(tag, ","); idx != -1 {
+		return tag[:idx]
+	}
+	return tag
+}
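The decoder above is driven by the `stateFn` pattern: each state does one unit of work and returns the next state, with `nil` meaning stop. A stripped-down, runnable sketch of that same loop (the `counter` type is invented for illustration and is not part of the vendored package):

```go
package main

import "fmt"

// stateFn mirrors the decoder's state-machine shape: each state returns
// the next state to run, or nil to terminate.
type stateFn func() (stateFn, error)

type counter struct{ n int }

// run drives the machine exactly like decoder.run(): loop until a state
// returns nil or an error.
func (c *counter) run() error {
	state := stateFn(c.step)
	for state != nil {
		next, err := state()
		if err != nil {
			return err
		}
		state = next
	}
	return nil
}

// step increments the counter, returning itself until it reaches 3.
func (c *counter) step() (stateFn, error) {
	c.n++
	if c.n >= 3 {
		return nil, nil // terminal state
	}
	return c.step, nil
}

func main() {
	c := &counter{}
	if err := c.run(); err != nil {
		panic(err)
	}
	fmt.Println(c.n) // prints 3
}
```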

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time/time.go 🔗

@@ -0,0 +1,70 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package time provides for custom types to translate time from JSON and other formats
+// into time.Time objects.
+package time
+
+import (
+	"fmt"
+	"strconv"
+	"strings"
+	"time"
+)
+
+// Unix provides a type that can marshal and unmarshal a string representation
+// of the unix epoch into a time.Time object.
+type Unix struct {
+	T time.Time
+}
+
+// MarshalJSON implements encoding/json.MarshalJSON().
+func (u Unix) MarshalJSON() ([]byte, error) {
+	if u.T.IsZero() {
+		return []byte(""), nil
+	}
+	return []byte(fmt.Sprintf("%q", strconv.FormatInt(u.T.Unix(), 10))), nil
+}
+
+// UnmarshalJSON implements encoding/json.UnmarshalJSON().
+func (u *Unix) UnmarshalJSON(b []byte) error {
+	i, err := strconv.Atoi(strings.Trim(string(b), `"`))
+	if err != nil {
+		return fmt.Errorf("unix time(%s) could not be converted from string to int: %w", string(b), err)
+	}
+	u.T = time.Unix(int64(i), 0)
+	return nil
+}
+
+// DurationTime provides a type that can marshal and unmarshal a string representation
+// of a duration from now into a time.Time object.
+// Note: I'm not sure this is the best way to do this. What happens is we get a field
+// called "expires_in" that represents the seconds from now that this expires. We
+// turn that into a time we call .ExpiresOn. But maybe we should be recording
+// when the token was received at .TokenReceived and .ExpiresIn should remain as a duration.
+// Then we could have a method called ExpiresOn().  Honestly, the whole thing is
+// bad because the server doesn't return a concrete time. I think this is
+// cleaner, but it's not great either.
+type DurationTime struct {
+	T time.Time
+}
+
+// MarshalJSON implements encoding/json.MarshalJSON().
+func (d DurationTime) MarshalJSON() ([]byte, error) {
+	if d.T.IsZero() {
+		return []byte(""), nil
+	}
+
+	dt := time.Until(d.T)
+	return []byte(fmt.Sprintf("%d", int64(dt/time.Second))), nil
+}
+
+// UnmarshalJSON implements encoding/json.UnmarshalJSON().
+func (d *DurationTime) UnmarshalJSON(b []byte) error {
+	i, err := strconv.Atoi(strings.Trim(string(b), `"`))
+	if err != nil {
+		return fmt.Errorf("duration time(%s) could not be converted from string to int: %w", string(b), err)
+	}
+	d.T = time.Now().Add(time.Duration(i) * time.Second)
+	return nil
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/local/server.go 🔗

@@ -0,0 +1,177 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package local contains a local HTTP server used with interactive authentication.
+package local
+
+import (
+	"context"
+	"fmt"
+	"net"
+	"net/http"
+	"strconv"
+	"strings"
+	"time"
+)
+
+var okPage = []byte(`
+<!DOCTYPE html>
+<html>
+<head>
+    <meta charset="utf-8" />
+    <title>Authentication Complete</title>
+</head>
+<body>
+    <p>Authentication complete. You can return to the application. Feel free to close this browser tab.</p>
+</body>
+</html>
+`)
+
+const failPage = `
+<!DOCTYPE html>
+<html>
+<head>
+    <meta charset="utf-8" />
+    <title>Authentication Failed</title>
+</head>
+<body>
+	<p>Authentication failed. You can return to the application. Feel free to close this browser tab.</p>
+	<p>Error details: error %s error_description: %s</p>
+</body>
+</html>
+`
+
+// Result is the result from the redirect.
+type Result struct {
+	// Code is the code sent by the authority server.
+	Code string
+	// Err is set if there was an error.
+	Err error
+}
+
+// Server is an HTTP server.
+type Server struct {
+	// Addr is the address the server is listening on.
+	Addr     string
+	resultCh chan Result
+	s        *http.Server
+	reqState string
+}
+
+// New creates a local HTTP server and starts it.
+func New(reqState string, port int) (*Server, error) {
+	var l net.Listener
+	var err error
+	var portStr string
+	if port > 0 {
+		// use port provided by caller
+		l, err = net.Listen("tcp", fmt.Sprintf("localhost:%d", port))
+		portStr = strconv.FormatInt(int64(port), 10)
+	} else {
+		// find a free port
+		for i := 0; i < 10; i++ {
+			l, err = net.Listen("tcp", "localhost:0")
+			if err != nil {
+				continue
+			}
+			addr := l.Addr().String()
+			portStr = addr[strings.LastIndex(addr, ":")+1:]
+			break
+		}
+	}
+	if err != nil {
+		return nil, err
+	}
+
+	serv := &Server{
+		Addr:     fmt.Sprintf("http://localhost:%s", portStr),
+		s:        &http.Server{Addr: "localhost:0", ReadHeaderTimeout: time.Second},
+		reqState: reqState,
+		resultCh: make(chan Result, 1),
+	}
+	serv.s.Handler = http.HandlerFunc(serv.handler)
+
+	if err := serv.start(l); err != nil {
+		return nil, err
+	}
+
+	return serv, nil
+}
+
+func (s *Server) start(l net.Listener) error {
+	go func() {
+		err := s.s.Serve(l)
+		if err != nil {
+			select {
+			case s.resultCh <- Result{Err: err}:
+			default:
+			}
+		}
+	}()
+
+	return nil
+}
+
+// Result gets the result of the redirect operation. Once a single result is returned, the server
+// is shutdown. ctx deadline will be honored.
+func (s *Server) Result(ctx context.Context) Result {
+	select {
+	case <-ctx.Done():
+		return Result{Err: ctx.Err()}
+	case r := <-s.resultCh:
+		return r
+	}
+}
+
+// Shutdown shuts down the server.
+func (s *Server) Shutdown() {
+	// Note: you might get clever and think you can do this in handler() as a defer; you can't.
+	_ = s.s.Shutdown(context.Background())
+}
+
+func (s *Server) putResult(r Result) {
+	select {
+	case s.resultCh <- r:
+	default:
+	}
+}
+
+func (s *Server) handler(w http.ResponseWriter, r *http.Request) {
+	q := r.URL.Query()
+
+	headerErr := q.Get("error")
+	if headerErr != "" {
+		desc := q.Get("error_description")
+		// Note: It is a little weird we handle some errors by not going to the failPage. If they all should,
+		// change this to s.error() and make s.error() write the failPage instead of an error code.
+		_, _ = w.Write([]byte(fmt.Sprintf(failPage, headerErr, desc)))
+		s.putResult(Result{Err: fmt.Errorf("%s", desc)})
+		return
+	}
+
+	respState := q.Get("state")
+	switch respState {
+	case s.reqState:
+	case "":
+		s.error(w, http.StatusInternalServerError, "server didn't send OAuth state")
+		return
+	default:
+		s.error(w, http.StatusInternalServerError, "mismatched OAuth state, req(%s), resp(%s)", s.reqState, respState)
+		return
+	}
+
+	code := q.Get("code")
+	if code == "" {
+		s.error(w, http.StatusInternalServerError, "authorization code missing in query string")
+		return
+	}
+
+	_, _ = w.Write(okPage)
+	s.putResult(Result{Code: code})
+}
+
+func (s *Server) error(w http.ResponseWriter, code int, str string, i ...interface{}) {
+	err := fmt.Errorf(str, i...)
+	http.Error(w, err.Error(), code)
+	s.putResult(Result{Err: err})
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/oauth.go 🔗

@@ -0,0 +1,354 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package oauth
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"io"
+	"time"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported"
+	internalTime "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs"
+	"github.com/google/uuid"
+)
+
+// ResolveEndpointer contains the methods for resolving authority endpoints.
+type ResolveEndpointer interface {
+	ResolveEndpoints(ctx context.Context, authorityInfo authority.Info, userPrincipalName string) (authority.Endpoints, error)
+}
+
+// AccessTokens contains the methods for fetching tokens from different sources.
+type AccessTokens interface {
+	DeviceCodeResult(ctx context.Context, authParameters authority.AuthParams) (accesstokens.DeviceCodeResult, error)
+	FromUsernamePassword(ctx context.Context, authParameters authority.AuthParams) (accesstokens.TokenResponse, error)
+	FromAuthCode(ctx context.Context, req accesstokens.AuthCodeRequest) (accesstokens.TokenResponse, error)
+	FromRefreshToken(ctx context.Context, appType accesstokens.AppType, authParams authority.AuthParams, cc *accesstokens.Credential, refreshToken string) (accesstokens.TokenResponse, error)
+	FromClientSecret(ctx context.Context, authParameters authority.AuthParams, clientSecret string) (accesstokens.TokenResponse, error)
+	FromAssertion(ctx context.Context, authParameters authority.AuthParams, assertion string) (accesstokens.TokenResponse, error)
+	FromUserAssertionClientSecret(ctx context.Context, authParameters authority.AuthParams, userAssertion string, clientSecret string) (accesstokens.TokenResponse, error)
+	FromUserAssertionClientCertificate(ctx context.Context, authParameters authority.AuthParams, userAssertion string, assertion string) (accesstokens.TokenResponse, error)
+	FromDeviceCodeResult(ctx context.Context, authParameters authority.AuthParams, deviceCodeResult accesstokens.DeviceCodeResult) (accesstokens.TokenResponse, error)
+	FromSamlGrant(ctx context.Context, authParameters authority.AuthParams, samlGrant wstrust.SamlTokenInfo) (accesstokens.TokenResponse, error)
+}
+
+// FetchAuthority will be implemented by authority.Authority.
+type FetchAuthority interface {
+	UserRealm(context.Context, authority.AuthParams) (authority.UserRealm, error)
+	AADInstanceDiscovery(context.Context, authority.Info) (authority.InstanceDiscoveryResponse, error)
+}
+
+// FetchWSTrust contains the methods for interacting with WSTrust endpoints.
+type FetchWSTrust interface {
+	Mex(ctx context.Context, federationMetadataURL string) (defs.MexDocument, error)
+	SAMLTokenInfo(ctx context.Context, authParameters authority.AuthParams, cloudAudienceURN string, endpoint defs.Endpoint) (wstrust.SamlTokenInfo, error)
+}
+
+// Client provides tokens for various types of token requests.
+type Client struct {
+	Resolver     ResolveEndpointer
+	AccessTokens AccessTokens
+	Authority    FetchAuthority
+	WSTrust      FetchWSTrust
+}
+
+// New is the constructor for Client.
+func New(httpClient ops.HTTPClient) *Client {
+	r := ops.New(httpClient)
+	return &Client{
+		Resolver:     newAuthorityEndpoint(r),
+		AccessTokens: r.AccessTokens(),
+		Authority:    r.Authority(),
+		WSTrust:      r.WSTrust(),
+	}
+}
+
+// ResolveEndpoints gets the authorization and token endpoints and creates an AuthorityEndpoints instance.
+func (t *Client) ResolveEndpoints(ctx context.Context, authorityInfo authority.Info, userPrincipalName string) (authority.Endpoints, error) {
+	return t.Resolver.ResolveEndpoints(ctx, authorityInfo, userPrincipalName)
+}
+
+// AADInstanceDiscovery attempts to discover a tenant endpoint (used in OIDC auth with an authorization endpoint).
+// This is done by AAD which allows for aliasing of tenants (windows.sts.net is the same as login.windows.com).
+func (t *Client) AADInstanceDiscovery(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryResponse, error) {
+	return t.Authority.AADInstanceDiscovery(ctx, authorityInfo)
+}
+
+// AuthCode returns a token based on an authorization code.
+func (t *Client) AuthCode(ctx context.Context, req accesstokens.AuthCodeRequest) (accesstokens.TokenResponse, error) {
+	if err := scopeError(req.AuthParams); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	if err := t.resolveEndpoint(ctx, &req.AuthParams, ""); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+
+	tResp, err := t.AccessTokens.FromAuthCode(ctx, req)
+	if err != nil {
+		return accesstokens.TokenResponse{}, fmt.Errorf("could not retrieve token from auth code: %w", err)
+	}
+	return tResp, nil
+}
+
+// Credential acquires a token from the authority using a client credentials grant.
+func (t *Client) Credential(ctx context.Context, authParams authority.AuthParams, cred *accesstokens.Credential) (accesstokens.TokenResponse, error) {
+	if cred.TokenProvider != nil {
+		now := time.Now()
+		scopes := make([]string, len(authParams.Scopes))
+		copy(scopes, authParams.Scopes)
+		params := exported.TokenProviderParameters{
+			Claims:        authParams.Claims,
+			CorrelationID: uuid.New().String(),
+			Scopes:        scopes,
+			TenantID:      authParams.AuthorityInfo.Tenant,
+		}
+		tr, err := cred.TokenProvider(ctx, params)
+		if err != nil {
+			if len(scopes) == 0 {
+				err = fmt.Errorf("token request had an empty authority.AuthParams.Scopes, which may cause the following error: %w", err)
+				return accesstokens.TokenResponse{}, err
+			}
+			return accesstokens.TokenResponse{}, err
+		}
+		return accesstokens.TokenResponse{
+			TokenType:   authParams.AuthnScheme.AccessTokenType(),
+			AccessToken: tr.AccessToken,
+			ExpiresOn: internalTime.DurationTime{
+				T: now.Add(time.Duration(tr.ExpiresInSeconds) * time.Second),
+			},
+			GrantedScopes: accesstokens.Scopes{Slice: authParams.Scopes},
+		}, nil
+	}
+
+	if err := t.resolveEndpoint(ctx, &authParams, ""); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+
+	if cred.Secret != "" {
+		return t.AccessTokens.FromClientSecret(ctx, authParams, cred.Secret)
+	}
+	jwt, err := cred.JWT(ctx, authParams)
+	if err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	return t.AccessTokens.FromAssertion(ctx, authParams, jwt)
+}
+
+// OnBehalfOf acquires a token from the authority on behalf of the user represented by authParams.UserAssertion.
+func (t *Client) OnBehalfOf(ctx context.Context, authParams authority.AuthParams, cred *accesstokens.Credential) (accesstokens.TokenResponse, error) {
+	if err := scopeError(authParams); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	if err := t.resolveEndpoint(ctx, &authParams, ""); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+
+	if cred.Secret != "" {
+		return t.AccessTokens.FromUserAssertionClientSecret(ctx, authParams, authParams.UserAssertion, cred.Secret)
+	}
+	jwt, err := cred.JWT(ctx, authParams)
+	if err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	tr, err := t.AccessTokens.FromUserAssertionClientCertificate(ctx, authParams, authParams.UserAssertion, jwt)
+	if err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	return tr, nil
+}
+
+func (t *Client) Refresh(ctx context.Context, reqType accesstokens.AppType, authParams authority.AuthParams, cc *accesstokens.Credential, refreshToken accesstokens.RefreshToken) (accesstokens.TokenResponse, error) {
+	if err := scopeError(authParams); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	if err := t.resolveEndpoint(ctx, &authParams, ""); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+
+	tr, err := t.AccessTokens.FromRefreshToken(ctx, reqType, authParams, cc, refreshToken.Secret)
+	if err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	return tr, nil
+}
+
+// UsernamePassword retrieves a token where a username and password is used. However, if this is
+// a user realm of "Federated", this uses SAML tokens. If "Managed", uses normal username/password.
+func (t *Client) UsernamePassword(ctx context.Context, authParams authority.AuthParams) (accesstokens.TokenResponse, error) {
+	if err := scopeError(authParams); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+
+	if authParams.AuthorityInfo.AuthorityType == authority.ADFS {
+		if err := t.resolveEndpoint(ctx, &authParams, authParams.Username); err != nil {
+			return accesstokens.TokenResponse{}, err
+		}
+		return t.AccessTokens.FromUsernamePassword(ctx, authParams)
+	}
+	if err := t.resolveEndpoint(ctx, &authParams, ""); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+
+	userRealm, err := t.Authority.UserRealm(ctx, authParams)
+	if err != nil {
+		return accesstokens.TokenResponse{}, fmt.Errorf("problem getting user realm from authority: %w", err)
+	}
+
+	switch userRealm.AccountType {
+	case authority.Federated:
+		mexDoc, err := t.WSTrust.Mex(ctx, userRealm.FederationMetadataURL)
+		if err != nil {
+			err = fmt.Errorf("problem getting mex doc from federated url(%s): %w", userRealm.FederationMetadataURL, err)
+			return accesstokens.TokenResponse{}, err
+		}
+
+		saml, err := t.WSTrust.SAMLTokenInfo(ctx, authParams, userRealm.CloudAudienceURN, mexDoc.UsernamePasswordEndpoint)
+		if err != nil {
+			err = fmt.Errorf("problem getting SAML token info: %w", err)
+			return accesstokens.TokenResponse{}, err
+		}
+		tr, err := t.AccessTokens.FromSamlGrant(ctx, authParams, saml)
+		if err != nil {
+			return accesstokens.TokenResponse{}, err
+		}
+		return tr, nil
+	case authority.Managed:
+		if len(authParams.Scopes) == 0 {
+			err = fmt.Errorf("token request had an empty authority.AuthParams.Scopes, which is invalid")
+			return accesstokens.TokenResponse{}, err
+		}
+		return t.AccessTokens.FromUsernamePassword(ctx, authParams)
+	}
+	return accesstokens.TokenResponse{}, errors.New("unknown account type")
+}
+
+// DeviceCode is the result of a call to Token.DeviceCode().
+type DeviceCode struct {
+	// Result is the device code result from the first call in the device code flow. This allows
+	// the caller to retrieve the displayed code that is used to authorize on the second device.
+	Result     accesstokens.DeviceCodeResult
+	authParams authority.AuthParams
+
+	accessTokens AccessTokens
+}
+
+// Token returns a token AFTER the user uses the user code on the second device. This will block
+// until either: (1) the code is input by the user and the service releases a token, (2) the token
+// expires, (3) the Context passed to .DeviceCode() is cancelled or expires, (4) some other service
+// error occurs.
+func (d DeviceCode) Token(ctx context.Context) (accesstokens.TokenResponse, error) {
+	if d.accessTokens == nil {
+		return accesstokens.TokenResponse{}, fmt.Errorf("DeviceCode was either created outside its package or the creating method had an error. DeviceCode is not valid")
+	}
+
+	var cancel context.CancelFunc
+	if deadline, ok := ctx.Deadline(); !ok || d.Result.ExpiresOn.Before(deadline) {
+		ctx, cancel = context.WithDeadline(ctx, d.Result.ExpiresOn)
+	} else {
+		ctx, cancel = context.WithCancel(ctx)
+	}
+	defer cancel()
+
+	var interval = 50 * time.Millisecond
+	timer := time.NewTimer(interval)
+	defer timer.Stop()
+
+	for {
+		timer.Reset(interval)
+		select {
+		case <-ctx.Done():
+			return accesstokens.TokenResponse{}, ctx.Err()
+		case <-timer.C:
+			interval += interval * 2
+			if interval > 5*time.Second {
+				interval = 5 * time.Second
+			}
+		}
+
+		token, err := d.accessTokens.FromDeviceCodeResult(ctx, d.authParams, d.Result)
+		if err != nil && isWaitDeviceCodeErr(err) {
+			continue
+		}
+		return token, err // This handles if it was a non-wait error or success
+	}
+}
+
+type deviceCodeError struct {
+	Error string `json:"error"`
+}
+
+func isWaitDeviceCodeErr(err error) bool {
+	var c errors.CallErr
+	if !errors.As(err, &c) {
+		return false
+	}
+	if c.Resp.StatusCode != 400 {
+		return false
+	}
+	var dCErr deviceCodeError
+	defer c.Resp.Body.Close()
+	body, err := io.ReadAll(c.Resp.Body)
+	if err != nil {
+		return false
+	}
+	err = json.Unmarshal(body, &dCErr)
+	if err != nil {
+		return false
+	}
+	if dCErr.Error == "authorization_pending" || dCErr.Error == "slow_down" {
+		return true
+	}
+	return false
+}
+
+// DeviceCode returns a DeviceCode object that can be used to get the code that must be entered on the second
+// device and optionally the token once the code has been entered on the second device.
+func (t *Client) DeviceCode(ctx context.Context, authParams authority.AuthParams) (DeviceCode, error) {
+	if err := scopeError(authParams); err != nil {
+		return DeviceCode{}, err
+	}
+
+	if err := t.resolveEndpoint(ctx, &authParams, ""); err != nil {
+		return DeviceCode{}, err
+	}
+
+	dcr, err := t.AccessTokens.DeviceCodeResult(ctx, authParams)
+	if err != nil {
+		return DeviceCode{}, err
+	}
+
+	return DeviceCode{Result: dcr, authParams: authParams, accessTokens: t.AccessTokens}, nil
+}
+
+func (t *Client) resolveEndpoint(ctx context.Context, authParams *authority.AuthParams, userPrincipalName string) error {
+	endpoints, err := t.Resolver.ResolveEndpoints(ctx, authParams.AuthorityInfo, userPrincipalName)
+	if err != nil {
+		return fmt.Errorf("unable to resolve an endpoint: %s", err)
+	}
+	authParams.Endpoints = endpoints
+	return nil
+}
+
+// scopeError takes an authority.AuthParams and returns an error
+// if len(AuthParams.Scope) == 0.
+func scopeError(a authority.AuthParams) error {
+	// TODO(someone): we could look deeper at the message to determine if
+	// it's a scope error, but this is a good start.
+	/*
+		{error":"invalid_scope","error_description":"AADSTS1002012: The provided value for scope
+		openid offline_access profile is not valid. Client credential flows must have a scope value
+		with /.default suffixed to the resource identifier (application ID URI)...}
+	*/
+	if len(a.Scopes) == 0 {
+		return fmt.Errorf("token request had an empty authority.AuthParams.Scopes, which is invalid")
+	}
+	return nil
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/accesstokens.go 🔗

@@ -0,0 +1,457 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+/*
+Package accesstokens exposes a REST client for querying backend systems to get various types of
+access tokens (oauth) for use in authentication.
+
+These calls are of type "application/x-www-form-urlencoded".  This means we use url.Values to
+represent arguments and then encode them into the POST body message.  We receive JSON in
+return for the requests.  The request definition is defined in https://tools.ietf.org/html/rfc7521#section-4.2 .
+*/
+package accesstokens
+
+import (
+	"context"
+	"crypto"
+
+	/* #nosec */
+	"crypto/sha1"
+	"crypto/x509"
+	"encoding/base64"
+	"encoding/json"
+	"fmt"
+	"net/url"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/grant"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust"
+	"github.com/golang-jwt/jwt/v5"
+	"github.com/google/uuid"
+)
+
+const (
+	grantType     = "grant_type"
+	deviceCode    = "device_code"
+	clientID      = "client_id"
+	clientInfo    = "client_info"
+	clientInfoVal = "1"
+	username      = "username"
+	password      = "password"
+)
+
+//go:generate stringer -type=AppType
+
+// AppType is whether the authorization code flow is for a public or confidential client.
+type AppType int8
+
+const (
+	// ATUnknown is the zero value when the type hasn't been set.
+	ATUnknown AppType = iota
+	// ATPublic indicates this is for the Public.Client.
+	ATPublic
+	// ATConfidential indicates this is for the Confidential.Client.
+	ATConfidential
+)
+
+type urlFormCaller interface {
+	URLFormCall(ctx context.Context, endpoint string, qv url.Values, resp interface{}) error
+}
+
+// DeviceCodeResponse represents the HTTP response received from the device code endpoint
+type DeviceCodeResponse struct {
+	authority.OAuthResponseBase
+
+	UserCode        string `json:"user_code"`
+	DeviceCode      string `json:"device_code"`
+	VerificationURL string `json:"verification_url"`
+	ExpiresIn       int    `json:"expires_in"`
+	Interval        int    `json:"interval"`
+	Message         string `json:"message"`
+
+	AdditionalFields map[string]interface{}
+}
+
+// Convert converts the DeviceCodeResponse to a DeviceCodeResult
+func (dcr DeviceCodeResponse) Convert(clientID string, scopes []string) DeviceCodeResult {
+	expiresOn := time.Now().UTC().Add(time.Duration(dcr.ExpiresIn) * time.Second)
+	return NewDeviceCodeResult(dcr.UserCode, dcr.DeviceCode, dcr.VerificationURL, expiresOn, dcr.Interval, dcr.Message, clientID, scopes)
+}
+
+// Credential represents the credential used in confidential client flows. This can be either
+// a Secret or Cert/Key.
+type Credential struct {
+	// Secret contains the credential secret if we are doing auth by secret.
+	Secret string
+
+	// Cert is the public certificate, if we're authenticating by certificate.
+	Cert *x509.Certificate
+	// Key is the private key for signing, if we're authenticating by certificate.
+	Key crypto.PrivateKey
+	// X5c is the JWT assertion's x5c header value, required for SN/I authentication.
+	X5c []string
+
+	// AssertionCallback is a function provided by the application, if we're authenticating by assertion.
+	AssertionCallback func(context.Context, exported.AssertionRequestOptions) (string, error)
+
+	// TokenProvider is a function provided by the application that implements custom authentication
+	// logic for a confidential client
+	TokenProvider func(context.Context, exported.TokenProviderParameters) (exported.TokenProviderResult, error)
+}
+
+// JWT gets the jwt assertion when the credential is not using a secret.
+func (c *Credential) JWT(ctx context.Context, authParams authority.AuthParams) (string, error) {
+	if c.AssertionCallback != nil {
+		options := exported.AssertionRequestOptions{
+			ClientID:      authParams.ClientID,
+			TokenEndpoint: authParams.Endpoints.TokenEndpoint,
+		}
+		return c.AssertionCallback(ctx, options)
+	}
+
+	token := jwt.NewWithClaims(jwt.SigningMethodRS256, jwt.MapClaims{
+		"aud": authParams.Endpoints.TokenEndpoint,
+		"exp": json.Number(strconv.FormatInt(time.Now().Add(10*time.Minute).Unix(), 10)),
+		"iss": authParams.ClientID,
+		"jti": uuid.New().String(),
+		"nbf": json.Number(strconv.FormatInt(time.Now().Unix(), 10)),
+		"sub": authParams.ClientID,
+	})
+	token.Header = map[string]interface{}{
+		"alg": "RS256",
+		"typ": "JWT",
+		"x5t": base64.StdEncoding.EncodeToString(thumbprint(c.Cert)),
+	}
+
+	if authParams.SendX5C {
+		token.Header["x5c"] = c.X5c
+	}
+
+	assertion, err := token.SignedString(c.Key)
+	if err != nil {
+		return "", fmt.Errorf("unable to sign a JWT token using private key: %w", err)
+	}
+	return assertion, nil
+}
+
+// thumbprint runs the asn1.Der bytes through sha1 for use in the x5t parameter of JWT.
+// https://tools.ietf.org/html/rfc7517#section-4.8
+func thumbprint(cert *x509.Certificate) []byte {
+	/* #nosec */
+	a := sha1.Sum(cert.Raw)
+	return a[:]
+}
+
+// Client represents the REST calls to get tokens from token generator backends.
+type Client struct {
+	// Comm provides the HTTP transport client.
+	Comm urlFormCaller
+
+	testing bool
+}
+
+// FromUsernamePassword uses a username and password to get an access token.
+func (c Client) FromUsernamePassword(ctx context.Context, authParameters authority.AuthParams) (TokenResponse, error) {
+	qv := url.Values{}
+	if err := addClaims(qv, authParameters); err != nil {
+		return TokenResponse{}, err
+	}
+	qv.Set(grantType, grant.Password)
+	qv.Set(username, authParameters.Username)
+	qv.Set(password, authParameters.Password)
+	qv.Set(clientID, authParameters.ClientID)
+	qv.Set(clientInfo, clientInfoVal)
+	addScopeQueryParam(qv, authParameters)
+
+	return c.doTokenResp(ctx, authParameters, qv)
+}
+
+// AuthCodeRequest stores the values required to request a token from the authority using an authorization code
+type AuthCodeRequest struct {
+	AuthParams    authority.AuthParams
+	Code          string
+	CodeChallenge string
+	Credential    *Credential
+	AppType       AppType
+}
+
+// NewCodeChallengeRequest returns an AuthCodeRequest that uses a code challenge.
+func NewCodeChallengeRequest(params authority.AuthParams, appType AppType, cc *Credential, code, challenge string) (AuthCodeRequest, error) {
+	if appType == ATUnknown {
+		return AuthCodeRequest{}, fmt.Errorf("bug: NewCodeChallengeRequest() called with AppType == ATUnknown")
+	}
+	return AuthCodeRequest{
+		AuthParams:    params,
+		AppType:       appType,
+		Code:          code,
+		CodeChallenge: challenge,
+		Credential:    cc,
+	}, nil
+}
+
+// FromAuthCode uses an authorization code to retrieve an access token.
+func (c Client) FromAuthCode(ctx context.Context, req AuthCodeRequest) (TokenResponse, error) {
+	var qv url.Values
+
+	switch req.AppType {
+	case ATUnknown:
+		return TokenResponse{}, fmt.Errorf("bug: Token.AuthCode() received request with AppType == ATUnknown")
+	case ATConfidential:
+		var err error
+		if req.Credential == nil {
+			return TokenResponse{}, fmt.Errorf("AuthCodeRequest had nil Credential for Confidential app")
+		}
+		qv, err = prepURLVals(ctx, req.Credential, req.AuthParams)
+		if err != nil {
+			return TokenResponse{}, err
+		}
+	case ATPublic:
+		qv = url.Values{}
+	default:
+		return TokenResponse{}, fmt.Errorf("bug: Token.AuthCode() received request with AppType == %v, which we do not recognize", req.AppType)
+	}
+
+	qv.Set(grantType, grant.AuthCode)
+	qv.Set("code", req.Code)
+	qv.Set("code_verifier", req.CodeChallenge)
+	qv.Set("redirect_uri", req.AuthParams.Redirecturi)
+	qv.Set(clientID, req.AuthParams.ClientID)
+	qv.Set(clientInfo, clientInfoVal)
+	addScopeQueryParam(qv, req.AuthParams)
+	if err := addClaims(qv, req.AuthParams); err != nil {
+		return TokenResponse{}, err
+	}
+
+	return c.doTokenResp(ctx, req.AuthParams, qv)
+}
+
+// FromRefreshToken uses a refresh token (for refreshing credentials) to get a new access token.
+func (c Client) FromRefreshToken(ctx context.Context, appType AppType, authParams authority.AuthParams, cc *Credential, refreshToken string) (TokenResponse, error) {
+	qv := url.Values{}
+	if appType == ATConfidential {
+		var err error
+		qv, err = prepURLVals(ctx, cc, authParams)
+		if err != nil {
+			return TokenResponse{}, err
+		}
+	}
+	if err := addClaims(qv, authParams); err != nil {
+		return TokenResponse{}, err
+	}
+	qv.Set(grantType, grant.RefreshToken)
+	qv.Set(clientID, authParams.ClientID)
+	qv.Set(clientInfo, clientInfoVal)
+	qv.Set("refresh_token", refreshToken)
+	addScopeQueryParam(qv, authParams)
+
+	return c.doTokenResp(ctx, authParams, qv)
+}
+
+// FromClientSecret uses a client's secret (aka password) to get a new token.
+func (c Client) FromClientSecret(ctx context.Context, authParameters authority.AuthParams, clientSecret string) (TokenResponse, error) {
+	qv := url.Values{}
+	if err := addClaims(qv, authParameters); err != nil {
+		return TokenResponse{}, err
+	}
+	qv.Set(grantType, grant.ClientCredential)
+	qv.Set("client_secret", clientSecret)
+	qv.Set(clientID, authParameters.ClientID)
+	addScopeQueryParam(qv, authParameters)
+
+	token, err := c.doTokenResp(ctx, authParameters, qv)
+	if err != nil {
+		return token, fmt.Errorf("FromClientSecret(): %w", err)
+	}
+	return token, nil
+}
+
+func (c Client) FromAssertion(ctx context.Context, authParameters authority.AuthParams, assertion string) (TokenResponse, error) {
+	qv := url.Values{}
+	if err := addClaims(qv, authParameters); err != nil {
+		return TokenResponse{}, err
+	}
+	qv.Set(grantType, grant.ClientCredential)
+	qv.Set("client_assertion_type", grant.ClientAssertion)
+	qv.Set("client_assertion", assertion)
+	qv.Set(clientID, authParameters.ClientID)
+	qv.Set(clientInfo, clientInfoVal)
+	addScopeQueryParam(qv, authParameters)
+
+	token, err := c.doTokenResp(ctx, authParameters, qv)
+	if err != nil {
+		return token, fmt.Errorf("FromAssertion(): %w", err)
+	}
+	return token, nil
+}
+
+func (c Client) FromUserAssertionClientSecret(ctx context.Context, authParameters authority.AuthParams, userAssertion string, clientSecret string) (TokenResponse, error) {
+	qv := url.Values{}
+	if err := addClaims(qv, authParameters); err != nil {
+		return TokenResponse{}, err
+	}
+	qv.Set(grantType, grant.JWT)
+	qv.Set(clientID, authParameters.ClientID)
+	qv.Set("client_secret", clientSecret)
+	qv.Set("assertion", userAssertion)
+	qv.Set(clientInfo, clientInfoVal)
+	qv.Set("requested_token_use", "on_behalf_of")
+	addScopeQueryParam(qv, authParameters)
+
+	return c.doTokenResp(ctx, authParameters, qv)
+}
+
+func (c Client) FromUserAssertionClientCertificate(ctx context.Context, authParameters authority.AuthParams, userAssertion string, assertion string) (TokenResponse, error) {
+	qv := url.Values{}
+	if err := addClaims(qv, authParameters); err != nil {
+		return TokenResponse{}, err
+	}
+	qv.Set(grantType, grant.JWT)
+	qv.Set("client_assertion_type", grant.ClientAssertion)
+	qv.Set("client_assertion", assertion)
+	qv.Set(clientID, authParameters.ClientID)
+	qv.Set("assertion", userAssertion)
+	qv.Set(clientInfo, clientInfoVal)
+	qv.Set("requested_token_use", "on_behalf_of")
+	addScopeQueryParam(qv, authParameters)
+
+	return c.doTokenResp(ctx, authParameters, qv)
+}
+
+func (c Client) DeviceCodeResult(ctx context.Context, authParameters authority.AuthParams) (DeviceCodeResult, error) {
+	qv := url.Values{}
+	if err := addClaims(qv, authParameters); err != nil {
+		return DeviceCodeResult{}, err
+	}
+	qv.Set(clientID, authParameters.ClientID)
+	addScopeQueryParam(qv, authParameters)
+
+	endpoint := strings.Replace(authParameters.Endpoints.TokenEndpoint, "token", "devicecode", -1)
+
+	resp := DeviceCodeResponse{}
+	err := c.Comm.URLFormCall(ctx, endpoint, qv, &resp)
+	if err != nil {
+		return DeviceCodeResult{}, err
+	}
+
+	return resp.Convert(authParameters.ClientID, authParameters.Scopes), nil
+}
+
+func (c Client) FromDeviceCodeResult(ctx context.Context, authParameters authority.AuthParams, deviceCodeResult DeviceCodeResult) (TokenResponse, error) {
+	qv := url.Values{}
+	if err := addClaims(qv, authParameters); err != nil {
+		return TokenResponse{}, err
+	}
+	qv.Set(grantType, grant.DeviceCode)
+	qv.Set(deviceCode, deviceCodeResult.DeviceCode)
+	qv.Set(clientID, authParameters.ClientID)
+	qv.Set(clientInfo, clientInfoVal)
+	addScopeQueryParam(qv, authParameters)
+
+	return c.doTokenResp(ctx, authParameters, qv)
+}
+
+func (c Client) FromSamlGrant(ctx context.Context, authParameters authority.AuthParams, samlGrant wstrust.SamlTokenInfo) (TokenResponse, error) {
+	qv := url.Values{}
+	if err := addClaims(qv, authParameters); err != nil {
+		return TokenResponse{}, err
+	}
+	qv.Set(username, authParameters.Username)
+	qv.Set(password, authParameters.Password)
+	qv.Set(clientID, authParameters.ClientID)
+	qv.Set(clientInfo, clientInfoVal)
+	qv.Set("assertion", base64.StdEncoding.WithPadding(base64.StdPadding).EncodeToString([]byte(samlGrant.Assertion)))
+	addScopeQueryParam(qv, authParameters)
+
+	switch samlGrant.AssertionType {
+	case grant.SAMLV1:
+		qv.Set(grantType, grant.SAMLV1)
+	case grant.SAMLV2:
+		qv.Set(grantType, grant.SAMLV2)
+	default:
+		return TokenResponse{}, fmt.Errorf("GetAccessTokenFromSamlGrant returned unknown SAML assertion type: %q", samlGrant.AssertionType)
+	}
+
+	return c.doTokenResp(ctx, authParameters, qv)
+}
+
+func (c Client) doTokenResp(ctx context.Context, authParams authority.AuthParams, qv url.Values) (TokenResponse, error) {
+	resp := TokenResponse{}
+	if authParams.AuthnScheme != nil {
+		trParams := authParams.AuthnScheme.TokenRequestParams()
+		for k, v := range trParams {
+			qv.Set(k, v)
+		}
+	}
+	err := c.Comm.URLFormCall(ctx, authParams.Endpoints.TokenEndpoint, qv, &resp)
+	if err != nil {
+		return resp, err
+	}
+	resp.ComputeScope(authParams)
+	if c.testing {
+		return resp, nil
+	}
+	return resp, resp.Validate()
+}
+
+// prepURLVals returns an url.Values that sets various key/values if we are doing secrets
+// or JWT assertions.
+func prepURLVals(ctx context.Context, cc *Credential, authParams authority.AuthParams) (url.Values, error) {
+	params := url.Values{}
+	if cc.Secret != "" {
+		params.Set("client_secret", cc.Secret)
+		return params, nil
+	}
+
+	jwt, err := cc.JWT(ctx, authParams)
+	if err != nil {
+		return nil, err
+	}
+	params.Set("client_assertion", jwt)
+	params.Set("client_assertion_type", grant.ClientAssertion)
+	return params, nil
+}
+
+// openid required to get an id token
+// offline_access required to get a refresh token
+// profile required to get the client_info field back
+var detectDefaultScopes = map[string]bool{
+	"openid":         true,
+	"offline_access": true,
+	"profile":        true,
+}
+
+var defaultScopes = []string{"openid", "offline_access", "profile"}
+
+func AppendDefaultScopes(authParameters authority.AuthParams) []string {
+	scopes := make([]string, 0, len(authParameters.Scopes)+len(defaultScopes))
+	for _, scope := range authParameters.Scopes {
+		s := strings.TrimSpace(scope)
+		if s == "" {
+			continue
+		}
+		if detectDefaultScopes[scope] {
+			continue
+		}
+		scopes = append(scopes, scope)
+	}
+	scopes = append(scopes, defaultScopes...)
+	return scopes
+}
+
+// addClaims adds client capabilities and claims from AuthParams to the given url.Values
+func addClaims(v url.Values, ap authority.AuthParams) error {
+	claims, err := ap.MergeCapabilitiesAndClaims()
+	if err == nil && claims != "" {
+		v.Set("claims", claims)
+	}
+	return err
+}
+
+func addScopeQueryParam(queryParams url.Values, authParameters authority.AuthParams) {
+	scopes := AppendDefaultScopes(authParameters)
+	queryParams.Set("scope", strings.Join(scopes, " "))
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/apptype_string.go 🔗

@@ -0,0 +1,25 @@
+// Code generated by "stringer -type=AppType"; DO NOT EDIT.
+
+package accesstokens
+
+import "strconv"
+
+func _() {
+	// An "invalid array index" compiler error signifies that the constant values have changed.
+	// Re-run the stringer command to generate them again.
+	var x [1]struct{}
+	_ = x[ATUnknown-0]
+	_ = x[ATPublic-1]
+	_ = x[ATConfidential-2]
+}
+
+const _AppType_name = "ATUnknownATPublicATConfidential"
+
+var _AppType_index = [...]uint8{0, 9, 17, 31}
+
+func (i AppType) String() string {
+	if i < 0 || i >= AppType(len(_AppType_index)-1) {
+		return "AppType(" + strconv.FormatInt(int64(i), 10) + ")"
+	}
+	return _AppType_name[_AppType_index[i]:_AppType_index[i+1]]
+}
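The generated stringer above packs all constant names into one string and slices it with cumulative byte offsets. A minimal sketch of the same layout, with a plain `int` standing in for `AppType` (the `appTypeString` name is ours):

```go
package main

import "fmt"

// One shared name string plus cumulative offsets, exactly as the
// stringer-generated code above lays it out:
// "ATUnknown" is bytes [0:9), "ATPublic" [9:17), "ATConfidential" [17:31).
const name = "ATUnknownATPublicATConfidential"

var index = [...]uint8{0, 9, 17, 31}

func appTypeString(i int) string {
	if i < 0 || i >= len(index)-1 {
		// Out-of-range values fall back to a numeric representation.
		return fmt.Sprintf("AppType(%d)", i)
	}
	return name[index[i]:index[i+1]]
}

func main() {
	fmt.Println(appTypeString(1)) // ATPublic
	fmt.Println(appTypeString(5)) // AppType(5)
}
```

The `_()` function in the generated file exists only to break the build (via an invalid array index) if the constant values drift from what stringer saw.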

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/tokens.go 🔗

@@ -0,0 +1,339 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package accesstokens
+
+import (
+	"bytes"
+	"encoding/base64"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"reflect"
+	"strings"
+	"time"
+
+	internalTime "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared"
+)
+
+// IDToken consists of all the information used to validate a user.
+// https://docs.microsoft.com/azure/active-directory/develop/id-tokens .
+type IDToken struct {
+	PreferredUsername string `json:"preferred_username,omitempty"`
+	GivenName         string `json:"given_name,omitempty"`
+	FamilyName        string `json:"family_name,omitempty"`
+	MiddleName        string `json:"middle_name,omitempty"`
+	Name              string `json:"name,omitempty"`
+	Oid               string `json:"oid,omitempty"`
+	TenantID          string `json:"tid,omitempty"`
+	Subject           string `json:"sub,omitempty"`
+	UPN               string `json:"upn,omitempty"`
+	Email             string `json:"email,omitempty"`
+	AlternativeID     string `json:"alternative_id,omitempty"`
+	Issuer            string `json:"iss,omitempty"`
+	Audience          string `json:"aud,omitempty"`
+	ExpirationTime    int64  `json:"exp,omitempty"`
+	IssuedAt          int64  `json:"iat,omitempty"`
+	NotBefore         int64  `json:"nbf,omitempty"`
+	RawToken          string
+
+	AdditionalFields map[string]interface{}
+}
+
+var null = []byte("null")
+
+// UnmarshalJSON implements json.Unmarshaler.
+func (i *IDToken) UnmarshalJSON(b []byte) error {
+	if bytes.Equal(null, b) {
+		return nil
+	}
+
+	// Because we have a custom unmarshaler, we cannot call json.Unmarshal
+	// directly here. If we did, it would call this method recursively until
+	// the recursion limit is reached. We have to create a new type that
+	// doesn't have this method in order to use json.Unmarshal.
+	type idToken2 IDToken
+
+	jwt := strings.Trim(string(b), `"`)
+	jwtArr := strings.Split(jwt, ".")
+	if len(jwtArr) < 2 {
+		return errors.New("IDToken returned from server is invalid")
+	}
+
+	jwtPart := jwtArr[1]
+	jwtDecoded, err := decodeJWT(jwtPart)
+	if err != nil {
+		return fmt.Errorf("unable to unmarshal IDToken, problem decoding JWT: %w", err)
+	}
+
+	token := idToken2{}
+	err = json.Unmarshal(jwtDecoded, &token)
+	if err != nil {
+		return fmt.Errorf("unable to unmarshal IDToken: %w", err)
+	}
+	token.RawToken = jwt
+
+	*i = IDToken(token)
+	return nil
+}
+
+// IsZero indicates if the IDToken is the zero value.
+func (i IDToken) IsZero() bool {
+	v := reflect.ValueOf(i)
+	for i := 0; i < v.NumField(); i++ {
+		field := v.Field(i)
+		if !field.IsZero() {
+			switch field.Kind() {
+			case reflect.Map, reflect.Slice:
+				if field.Len() == 0 {
+					continue
+				}
+			}
+			return false
+		}
+	}
+	return true
+}
+
+// LocalAccountID extracts an account's local account ID from an ID token.
+func (i IDToken) LocalAccountID() string {
+	if i.Oid != "" {
+		return i.Oid
+	}
+	return i.Subject
+}
+
+// jwtDecoder is provided to allow tests to provide their own.
+var jwtDecoder = decodeJWT
+
+// ClientInfo is used to create a Home Account ID for an account.
+type ClientInfo struct {
+	UID  string `json:"uid"`
+	UTID string `json:"utid"`
+
+	AdditionalFields map[string]interface{}
+}
+
+// UnmarshalJSON implements json.Unmarshaler.
+func (c *ClientInfo) UnmarshalJSON(b []byte) error {
+	s := strings.Trim(string(b), `"`)
+	// Client info may be empty in some flows, e.g. certificate exchange.
+	if len(s) == 0 {
+		return nil
+	}
+
+	// Because we have a custom unmarshaler, we cannot call json.Unmarshal
+	// directly here. If we did, it would call this method recursively until
+	// the recursion limit is reached. We have to create a new type that
+	// doesn't have this method in order to use json.Unmarshal.
+	type clientInfo2 ClientInfo
+
+	raw, err := jwtDecoder(s)
+	if err != nil {
+		return fmt.Errorf("TokenResponse client_info field had JWT decode error: %w", err)
+	}
+
+	var c2 clientInfo2
+
+	err = json.Unmarshal(raw, &c2)
+	if err != nil {
+		return fmt.Errorf("was unable to unmarshal decoded JWT in TokenResponse to ClientInfo: %w", err)
+	}
+
+	*c = ClientInfo(c2)
+	return nil
+}
+
+// Scopes represents scopes in a TokenResponse.
+type Scopes struct {
+	Slice []string
+}
+
+// UnmarshalJSON implements json.Unmarshaler.
+func (s *Scopes) UnmarshalJSON(b []byte) error {
+	str := strings.Trim(string(b), `"`)
+	if len(str) == 0 {
+		return nil
+	}
+	sl := strings.Split(str, " ")
+	s.Slice = sl
+	return nil
+}
+
+// TokenResponse is the information that is returned from a token endpoint during a token acquisition flow.
+type TokenResponse struct {
+	authority.OAuthResponseBase
+
+	AccessToken  string `json:"access_token"`
+	RefreshToken string `json:"refresh_token"`
+	TokenType    string `json:"token_type"`
+
+	FamilyID       string                    `json:"foci"`
+	IDToken        IDToken                   `json:"id_token"`
+	ClientInfo     ClientInfo                `json:"client_info"`
+	ExpiresOn      internalTime.DurationTime `json:"expires_in"`
+	ExtExpiresOn   internalTime.DurationTime `json:"ext_expires_in"`
+	GrantedScopes  Scopes                    `json:"scope"`
+	DeclinedScopes []string                  // This is derived
+
+	AdditionalFields map[string]interface{}
+
+	scopesComputed bool
+}
+
+// ComputeScope computes the final scopes based on what was granted by the server and
+// what our AuthParams were from the authority server. Per the OAuth spec, if no scopes
+// are returned, the response should be treated as if all requested scopes were granted.
+// This behavior can be observed in client assertion flows, but can happen at any time;
+// this check ensures we treat those special responses properly.
+// Spec: https://tools.ietf.org/html/rfc6749#section-3.3
+func (tr *TokenResponse) ComputeScope(authParams authority.AuthParams) {
+	if len(tr.GrantedScopes.Slice) == 0 {
+		tr.GrantedScopes = Scopes{Slice: authParams.Scopes}
+	} else {
+		tr.DeclinedScopes = findDeclinedScopes(authParams.Scopes, tr.GrantedScopes.Slice)
+	}
+	tr.scopesComputed = true
+}
+
+// HomeAccountID uniquely identifies the authenticated account, if any. It's "" when the token is an app token.
+func (tr *TokenResponse) HomeAccountID() string {
+	id := tr.IDToken.Subject
+	if uid := tr.ClientInfo.UID; uid != "" {
+		utid := tr.ClientInfo.UTID
+		if utid == "" {
+			utid = uid
+		}
+		id = fmt.Sprintf("%s.%s", uid, utid)
+	}
+	return id
+}
+
+// Validate validates the TokenResponse has basic valid values. It must be called
+// after ComputeScope() is called.
+func (tr *TokenResponse) Validate() error {
+	if tr.Error != "" {
+		return fmt.Errorf("%s: %s", tr.Error, tr.ErrorDescription)
+	}
+
+	if tr.AccessToken == "" {
+		return errors.New("response is missing access_token")
+	}
+
+	if !tr.scopesComputed {
+		return fmt.Errorf("TokenResponse hasn't had ComputeScope() called")
+	}
+	return nil
+}
+
+func (tr *TokenResponse) CacheKey(authParams authority.AuthParams) string {
+	if authParams.AuthorizationType == authority.ATOnBehalfOf {
+		return authParams.AssertionHash()
+	}
+	if authParams.AuthorizationType == authority.ATClientCredentials {
+		return authParams.AppKey()
+	}
+	if authParams.IsConfidentialClient || authParams.AuthorizationType == authority.ATRefreshToken {
+		return tr.HomeAccountID()
+	}
+	return ""
+}
+
+func findDeclinedScopes(requestedScopes []string, grantedScopes []string) []string {
+	declined := []string{}
+	grantedMap := map[string]bool{}
+	for _, s := range grantedScopes {
+		grantedMap[strings.ToLower(s)] = true
+	}
+	// Comparing the requested scopes with the granted scopes to see if there are any scopes that have been declined.
+	for _, r := range requestedScopes {
+		if !grantedMap[strings.ToLower(r)] {
+			declined = append(declined, r)
+		}
+	}
+	return declined
+}
+
+// decodeJWT decodes a JWT and converts it to a byte array representing a JSON object
+// JWT has headers and payload base64url encoded without padding
+// https://tools.ietf.org/html/rfc7519#section-3 and
+// https://tools.ietf.org/html/rfc7515#section-2
+func decodeJWT(data string) ([]byte, error) {
+	// https://tools.ietf.org/html/rfc7515#appendix-C
+	return base64.RawURLEncoding.DecodeString(data)
+}
+
+// RefreshToken is the JSON representation of a MSAL refresh token for encoding to storage.
+type RefreshToken struct {
+	HomeAccountID     string `json:"home_account_id,omitempty"`
+	Environment       string `json:"environment,omitempty"`
+	CredentialType    string `json:"credential_type,omitempty"`
+	ClientID          string `json:"client_id,omitempty"`
+	FamilyID          string `json:"family_id,omitempty"`
+	Secret            string `json:"secret,omitempty"`
+	Realm             string `json:"realm,omitempty"`
+	Target            string `json:"target,omitempty"`
+	UserAssertionHash string `json:"user_assertion_hash,omitempty"`
+
+	AdditionalFields map[string]interface{}
+}
+
+// NewRefreshToken is the constructor for RefreshToken.
+func NewRefreshToken(homeID, env, clientID, refreshToken, familyID string) RefreshToken {
+	return RefreshToken{
+		HomeAccountID:  homeID,
+		Environment:    env,
+		CredentialType: "RefreshToken",
+		ClientID:       clientID,
+		FamilyID:       familyID,
+		Secret:         refreshToken,
+	}
+}
+
+// Key outputs the key that can be used to uniquely look up this entry in a map.
+func (rt RefreshToken) Key() string {
+	var fourth = rt.FamilyID
+	if fourth == "" {
+		fourth = rt.ClientID
+	}
+
+	key := strings.Join(
+		[]string{rt.HomeAccountID, rt.Environment, rt.CredentialType, fourth},
+		shared.CacheKeySeparator,
+	)
+	return strings.ToLower(key)
+}
+
+func (rt RefreshToken) GetSecret() string {
+	return rt.Secret
+}
+
+// DeviceCodeResult stores the response from the STS device code endpoint.
+type DeviceCodeResult struct {
+	// UserCode is the code the user needs to provide when authenticating at the verification URI.
+	UserCode string
+	// DeviceCode is the code used in the access token request.
+	DeviceCode string
+	// VerificationURL is the URL where the user can authenticate.
+	VerificationURL string
+	// ExpiresOn is the time at which the device code expires.
+	ExpiresOn time.Time
+	// Interval is the interval at which the STS should be polled.
+	Interval int
+	// Message is the message which should be displayed to the user.
+	Message string
+	// ClientID is the UUID issued by the authorization server for your application.
+	ClientID string
+	// Scopes is the OpenID scopes used to request access to a protected API.
+	Scopes []string
+}
+
+// NewDeviceCodeResult creates a DeviceCodeResult instance.
+func NewDeviceCodeResult(userCode, deviceCode, verificationURL string, expiresOn time.Time, interval int, message, clientID string, scopes []string) DeviceCodeResult {
+	return DeviceCodeResult{userCode, deviceCode, verificationURL, expiresOn, interval, message, clientID, scopes}
+}
+
+func (dcr DeviceCodeResult) String() string {
+	return fmt.Sprintf("UserCode: (%v)\nDeviceCode: (%v)\nURL: (%v)\nMessage: (%v)\n", dcr.UserCode, dcr.DeviceCode, dcr.VerificationURL, dcr.Message)
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority/authority.go 🔗

@@ -0,0 +1,589 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package authority
+
+import (
+	"context"
+	"crypto/sha256"
+	"encoding/base64"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"io"
+	"net/http"
+	"net/url"
+	"os"
+	"path"
+	"strings"
+	"time"
+
+	"github.com/google/uuid"
+)
+
+const (
+	authorizationEndpoint             = "https://%v/%v/oauth2/v2.0/authorize"
+	instanceDiscoveryEndpoint         = "https://%v/common/discovery/instance"
+	tenantDiscoveryEndpointWithRegion = "https://%s.%s/%s/v2.0/.well-known/openid-configuration"
+	regionName                        = "REGION_NAME"
+	defaultAPIVersion                 = "2021-10-01"
+	imdsEndpoint                      = "http://169.254.169.254/metadata/instance/compute/location?format=text&api-version=" + defaultAPIVersion
+	autoDetectRegion                  = "TryAutoDetect"
+	AccessTokenTypeBearer             = "Bearer"
+)
+
+// These are various hosts that host AAD Instance discovery endpoints.
+const (
+	defaultHost          = "login.microsoftonline.com"
+	loginMicrosoft       = "login.microsoft.com"
+	loginWindows         = "login.windows.net"
+	loginSTSWindows      = "sts.windows.net"
+	loginMicrosoftOnline = defaultHost
+)
+
+// jsonCaller is an interface that allows us to mock the JSONCall method.
+type jsonCaller interface {
+	JSONCall(ctx context.Context, endpoint string, headers http.Header, qv url.Values, body, resp interface{}) error
+}
+
+var aadTrustedHostList = map[string]bool{
+	"login.windows.net":            true, // Microsoft Azure Worldwide - Used in validation scenarios where the host is not in this list
+	"login.chinacloudapi.cn":       true, // Microsoft Azure China
+	"login.microsoftonline.de":     true, // Microsoft Azure Blackforest
+	"login-us.microsoftonline.com": true, // Microsoft Azure US Government - Legacy
+	"login.microsoftonline.us":     true, // Microsoft Azure US Government
+	"login.microsoftonline.com":    true, // Microsoft Azure Worldwide
+	"login.cloudgovapi.us":         true, // Microsoft Azure US Government
+}
+
+// TrustedHost checks if an AAD host is trusted/valid.
+func TrustedHost(host string) bool {
+	if _, ok := aadTrustedHostList[host]; ok {
+		return true
+	}
+	return false
+}
+
+// OAuthResponseBase is the base JSON return message for an OAuth call.
+// This is embedded in other calls to get the base fields from every response.
+type OAuthResponseBase struct {
+	Error            string `json:"error"`
+	SubError         string `json:"suberror"`
+	ErrorDescription string `json:"error_description"`
+	ErrorCodes       []int  `json:"error_codes"`
+	CorrelationID    string `json:"correlation_id"`
+	Claims           string `json:"claims"`
+}
+
+// TenantDiscoveryResponse is the tenant endpoints from the OpenID configuration endpoint.
+type TenantDiscoveryResponse struct {
+	OAuthResponseBase
+
+	AuthorizationEndpoint string `json:"authorization_endpoint"`
+	TokenEndpoint         string `json:"token_endpoint"`
+	Issuer                string `json:"issuer"`
+
+	AdditionalFields map[string]interface{}
+}
+
+// Validate validates that the response had the correct values required.
+func (r *TenantDiscoveryResponse) Validate() error {
+	switch "" {
+	case r.AuthorizationEndpoint:
+		return errors.New("TenantDiscoveryResponse: authorize endpoint was not found in the openid configuration")
+	case r.TokenEndpoint:
+		return errors.New("TenantDiscoveryResponse: token endpoint was not found in the openid configuration")
+	case r.Issuer:
+		return errors.New("TenantDiscoveryResponse: issuer was not found in the openid configuration")
+	}
+	return nil
+}
+
+type InstanceDiscoveryMetadata struct {
+	PreferredNetwork string   `json:"preferred_network"`
+	PreferredCache   string   `json:"preferred_cache"`
+	Aliases          []string `json:"aliases"`
+
+	AdditionalFields map[string]interface{}
+}
+
+type InstanceDiscoveryResponse struct {
+	TenantDiscoveryEndpoint string                      `json:"tenant_discovery_endpoint"`
+	Metadata                []InstanceDiscoveryMetadata `json:"metadata"`
+
+	AdditionalFields map[string]interface{}
+}
+
+//go:generate stringer -type=AuthorizeType
+
+// AuthorizeType represents the type of token flow.
+type AuthorizeType int
+
+// These are all the types of token flows.
+const (
+	ATUnknown AuthorizeType = iota
+	ATUsernamePassword
+	ATWindowsIntegrated
+	ATAuthCode
+	ATInteractive
+	ATClientCredentials
+	ATDeviceCode
+	ATRefreshToken
+	AccountByID
+	ATOnBehalfOf
+)
+
+// These are all authority types
+const (
+	AAD  = "MSSTS"
+	ADFS = "ADFS"
+)
+
+// AuthenticationScheme is an extensibility mechanism designed to be used only by Azure Arc for proof of possession access tokens.
+type AuthenticationScheme interface {
+	// Extra parameters that are added to the request to the /token endpoint.
+	TokenRequestParams() map[string]string
+	// Key ID of the public / private key pair used by the encryption algorithm, if any.
+	// Tokens obtained by authentication schemes that use this are bound to the KeyId, i.e.
+	// if a different kid is presented, the access token cannot be used.
+	KeyID() string
+	// Creates the access token that goes into an Authorization HTTP header.
+	FormatAccessToken(accessToken string) (string, error)
+	// Expected to match the token_type parameter returned by ESTS. Used to disambiguate
+	// between ATs of different types (e.g. Bearer and PoP) when loading from cache etc.
+	AccessTokenType() string
+}
+
+// BearerAuthenticationScheme is the default authentication scheme, implementing AuthenticationScheme for "Bearer" tokens.
+type BearerAuthenticationScheme struct{}
+
+var bearerAuthnScheme BearerAuthenticationScheme
+
+func (ba *BearerAuthenticationScheme) TokenRequestParams() map[string]string {
+	return nil
+}
+func (ba *BearerAuthenticationScheme) KeyID() string {
+	return ""
+}
+func (ba *BearerAuthenticationScheme) FormatAccessToken(accessToken string) (string, error) {
+	return accessToken, nil
+}
+func (ba *BearerAuthenticationScheme) AccessTokenType() string {
+	return AccessTokenTypeBearer
+}
+
+// AuthParams represents the parameters used for authorization for token acquisition.
+type AuthParams struct {
+	AuthorityInfo Info
+	CorrelationID string
+	Endpoints     Endpoints
+	ClientID      string
+	// Redirecturi is used for auth flows that specify a redirect URI (e.g. local server for interactive auth flow).
+	Redirecturi   string
+	HomeAccountID string
+	// Username is the user-name portion for username/password auth flow.
+	Username string
+	// Password is the password portion for username/password auth flow.
+	Password string
+	// Scopes is the list of scopes the user consents to.
+	Scopes []string
+	// AuthorizationType specifies the auth flow being used.
+	AuthorizationType AuthorizeType
+	// State is a random value used to prevent cross-site request forgery attacks.
+	State string
+	// CodeChallenge is derived from a code verifier and is sent in the auth request.
+	CodeChallenge string
+	// CodeChallengeMethod describes the method used to create the CodeChallenge.
+	CodeChallengeMethod string
+	// Prompt specifies the user prompt type during interactive auth.
+	Prompt string
+	// IsConfidentialClient specifies if it is a confidential client.
+	IsConfidentialClient bool
+	// SendX5C specifies if x5c claim(public key of the certificate) should be sent to STS.
+	SendX5C bool
+	// UserAssertion is the access token used to acquire token on behalf of user
+	UserAssertion string
+	// Capabilities the client will include with each token request, for example "CP1".
+	// Call [NewClientCapabilities] to construct a value for this field.
+	Capabilities ClientCapabilities
+	// Claims required for an access token to satisfy a conditional access policy
+	Claims string
+	// KnownAuthorityHosts don't require metadata discovery because they're known to the user
+	KnownAuthorityHosts []string
+	// LoginHint is a username with which to pre-populate account selection during interactive auth
+	LoginHint string
+	// DomainHint is a directive that can be used to accelerate the user to their federated IdP sign-in page
+	DomainHint string
+	// AuthnScheme is an optional scheme for formatting access tokens
+	AuthnScheme AuthenticationScheme
+}
+
+// NewAuthParams creates an authorization parameters object.
+func NewAuthParams(clientID string, authorityInfo Info) AuthParams {
+	return AuthParams{
+		ClientID:      clientID,
+		AuthorityInfo: authorityInfo,
+		CorrelationID: uuid.New().String(),
+		AuthnScheme:   &bearerAuthnScheme,
+	}
+}
+
+// WithTenant returns a copy of the AuthParams having the specified tenant ID. If the given
+// ID is empty, the copy is identical to the original. This function returns an error in
+// several cases:
+//   - ID isn't specific (for example, it's "common")
+//   - ID is non-empty and the authority doesn't support tenants (for example, it's an ADFS authority)
+//   - the client is configured to authenticate only Microsoft accounts via the "consumers" endpoint
+//   - the resulting authority URL is invalid
+func (p AuthParams) WithTenant(ID string) (AuthParams, error) {
+	switch ID {
+	case "", p.AuthorityInfo.Tenant:
+		// keep the default tenant because the caller didn't override it
+		return p, nil
+	case "common", "consumers", "organizations":
+		if p.AuthorityInfo.AuthorityType == AAD {
+			return p, fmt.Errorf(`tenant ID must be a specific tenant, not "%s"`, ID)
+		}
+		// else we'll return a better error below
+	}
+	if p.AuthorityInfo.AuthorityType != AAD {
+		return p, errors.New("the authority doesn't support tenants")
+	}
+	if p.AuthorityInfo.Tenant == "consumers" {
+		return p, errors.New(`client is configured to authenticate only personal Microsoft accounts, via the "consumers" endpoint`)
+	}
+	authority := "https://" + path.Join(p.AuthorityInfo.Host, ID)
+	info, err := NewInfoFromAuthorityURI(authority, p.AuthorityInfo.ValidateAuthority, p.AuthorityInfo.InstanceDiscoveryDisabled)
+	if err == nil {
+		info.Region = p.AuthorityInfo.Region
+		p.AuthorityInfo = info
+	}
+	return p, err
+}
+
+// MergeCapabilitiesAndClaims combines client capabilities and challenge claims into a value suitable for an authentication request's "claims" parameter.
+func (p AuthParams) MergeCapabilitiesAndClaims() (string, error) {
+	claims := p.Claims
+	if len(p.Capabilities.asMap) > 0 {
+		if claims == "" {
+			// without claims the result is simply the capabilities
+			return p.Capabilities.asJSON, nil
+		}
+		// Otherwise, merge claims and capabilities into a single JSON object.
+		// We handle the claims challenge as a map because we don't know its structure.
+		var challenge map[string]any
+		if err := json.Unmarshal([]byte(claims), &challenge); err != nil {
+			return "", fmt.Errorf(`claims must be JSON. Are they base64 encoded? json.Unmarshal returned "%v"`, err)
+		}
+		if err := merge(p.Capabilities.asMap, challenge); err != nil {
+			return "", err
+		}
+		b, err := json.Marshal(challenge)
+		if err != nil {
+			return "", err
+		}
+		claims = string(b)
+	}
+	return claims, nil
+}
+
+// merges a into b without overwriting b's values. Returns an error when a and b share a key for which either has a non-object value.
+func merge(a, b map[string]any) error {
+	for k, av := range a {
+		if bv, ok := b[k]; !ok {
+			// b doesn't contain this key => simply set it to a's value
+			b[k] = av
+		} else {
+			// b does contain this key => recursively merge a[k] into b[k], provided both are maps. If a[k] or b[k] isn't
+			// a map, return an error because merging would overwrite some value in b. Errors shouldn't occur in practice
+			// because the challenge will be from AAD, which knows the capabilities format.
+			if A, ok := av.(map[string]any); ok {
+				if B, ok := bv.(map[string]any); ok {
+					return merge(A, B)
+				} else {
+					// b[k] isn't a map
+					return errors.New("challenge claims conflict with client capabilities")
+				}
+			} else {
+				// a[k] isn't a map
+				return errors.New("challenge claims conflict with client capabilities")
+			}
+		}
+	}
+	return nil
+}
+
+// ClientCapabilities stores capabilities in the formats used by AuthParams.MergeCapabilitiesAndClaims.
+// [NewClientCapabilities] precomputes these representations because capabilities are static for the
+// lifetime of a client and are included with every authentication request, i.e. these computations
+// always have the same result and would otherwise have to be repeated for every request.
+type ClientCapabilities struct {
+	// asJSON is for the common case: adding the capabilities to an auth request with no challenge claims
+	asJSON string
+	// asMap is for merging the capabilities with challenge claims
+	asMap map[string]any
+}
+
+func NewClientCapabilities(capabilities []string) (ClientCapabilities, error) {
+	c := ClientCapabilities{}
+	var err error
+	if len(capabilities) > 0 {
+		cpbs := make([]string, len(capabilities))
+		for i := 0; i < len(cpbs); i++ {
+			cpbs[i] = fmt.Sprintf(`"%s"`, capabilities[i])
+		}
+		c.asJSON = fmt.Sprintf(`{"access_token":{"xms_cc":{"values":[%s]}}}`, strings.Join(cpbs, ","))
+		// note our JSON is valid but we can't stop users breaking it with garbage like "}"
+		err = json.Unmarshal([]byte(c.asJSON), &c.asMap)
+	}
+	return c, err
+}
+
+// Info consists of information about the authority.
+type Info struct {
+	Host                      string
+	CanonicalAuthorityURI     string
+	AuthorityType             string
+	UserRealmURIPrefix        string
+	ValidateAuthority         bool
+	Tenant                    string
+	Region                    string
+	InstanceDiscoveryDisabled bool
+}
+
+func firstPathSegment(u *url.URL) (string, error) {
+	pathParts := strings.Split(u.EscapedPath(), "/")
+	if len(pathParts) >= 2 {
+		return pathParts[1], nil
+	}
+
+	return "", errors.New(`authority must be an https URL such as "https://login.microsoftonline.com/<your tenant>"`)
+}
+
+// NewInfoFromAuthorityURI creates an AuthorityInfo instance from the authority URL provided.
+func NewInfoFromAuthorityURI(authority string, validateAuthority bool, instanceDiscoveryDisabled bool) (Info, error) {
+	u, err := url.Parse(strings.ToLower(authority))
+	if err != nil || u.Scheme != "https" {
+		return Info{}, errors.New(`authority must be an https URL such as "https://login.microsoftonline.com/<your tenant>"`)
+	}
+
+	tenant, err := firstPathSegment(u)
+	if err != nil {
+		return Info{}, err
+	}
+	authorityType := AAD
+	if tenant == "adfs" {
+		authorityType = ADFS
+	}
+
+	// u.Host includes the port, if any, which is required for private cloud deployments
+	return Info{
+		Host:                      u.Host,
+		CanonicalAuthorityURI:     fmt.Sprintf("https://%v/%v/", u.Host, tenant),
+		AuthorityType:             authorityType,
+		UserRealmURIPrefix:        fmt.Sprintf("https://%v/common/userrealm/", u.Hostname()),
+		ValidateAuthority:         validateAuthority,
+		Tenant:                    tenant,
+		InstanceDiscoveryDisabled: instanceDiscoveryDisabled,
+	}, nil
+}
+
+// Endpoints consists of the endpoints from the tenant discovery response.
+type Endpoints struct {
+	AuthorizationEndpoint string
+	TokenEndpoint         string
+	selfSignedJwtAudience string
+	authorityHost         string
+}
+
+// NewEndpoints creates an Endpoints object.
+func NewEndpoints(authorizationEndpoint string, tokenEndpoint string, selfSignedJwtAudience string, authorityHost string) Endpoints {
+	return Endpoints{authorizationEndpoint, tokenEndpoint, selfSignedJwtAudience, authorityHost}
+}
+
+// UserRealmAccountType refers to the type of user realm.
+type UserRealmAccountType string
+
+// These are the different types of user realms.
+const (
+	Unknown   UserRealmAccountType = ""
+	Federated UserRealmAccountType = "Federated"
+	Managed   UserRealmAccountType = "Managed"
+)
+
+// UserRealm is used for the username password request to determine user type
+type UserRealm struct {
+	AccountType       UserRealmAccountType `json:"account_type"`
+	DomainName        string               `json:"domain_name"`
+	CloudInstanceName string               `json:"cloud_instance_name"`
+	CloudAudienceURN  string               `json:"cloud_audience_urn"`
+
+	// required if accountType is Federated
+	FederationProtocol    string `json:"federation_protocol"`
+	FederationMetadataURL string `json:"federation_metadata_url"`
+
+	AdditionalFields map[string]interface{}
+}
+
+func (u UserRealm) validate() error {
+	switch "" {
+	case string(u.AccountType):
+		return errors.New("the account type (Federated or Managed) is missing")
+	case u.DomainName:
+		return errors.New("domain name of user realm is missing")
+	case u.CloudInstanceName:
+		return errors.New("cloud instance name of user realm is missing")
+	case u.CloudAudienceURN:
+		return errors.New("cloud Instance URN is missing")
+	}
+
+	if u.AccountType == Federated {
+		switch "" {
+		case u.FederationProtocol:
+			return errors.New("federation protocol of user realm is missing")
+		case u.FederationMetadataURL:
+			return errors.New("federation metadata URL of user realm is missing")
+		}
+	}
+	return nil
+}
+
+// Client represents the REST calls to authority backends.
+type Client struct {
+	// Comm provides the HTTP transport client.
+	Comm jsonCaller // *comm.Client
+}
+
+func (c Client) UserRealm(ctx context.Context, authParams AuthParams) (UserRealm, error) {
+	endpoint := fmt.Sprintf("https://%s/common/UserRealm/%s", authParams.Endpoints.authorityHost, url.PathEscape(authParams.Username))
+	qv := url.Values{
+		"api-version": []string{"1.0"},
+	}
+
+	resp := UserRealm{}
+	err := c.Comm.JSONCall(
+		ctx,
+		endpoint,
+		http.Header{"client-request-id": []string{authParams.CorrelationID}},
+		qv,
+		nil,
+		&resp,
+	)
+	if err != nil {
+		return resp, err
+	}
+
+	return resp, resp.validate()
+}
+
+func (c Client) GetTenantDiscoveryResponse(ctx context.Context, openIDConfigurationEndpoint string) (TenantDiscoveryResponse, error) {
+	resp := TenantDiscoveryResponse{}
+	err := c.Comm.JSONCall(
+		ctx,
+		openIDConfigurationEndpoint,
+		http.Header{},
+		nil,
+		nil,
+		&resp,
+	)
+
+	return resp, err
+}
+
+// AADInstanceDiscovery attempts to discover a tenant endpoint (used in OIDC auth with an authorization endpoint).
+// This is done by AAD which allows for aliasing of tenants (windows.sts.net is the same as login.windows.com).
+func (c Client) AADInstanceDiscovery(ctx context.Context, authorityInfo Info) (InstanceDiscoveryResponse, error) {
+	region := ""
+	var err error
+	resp := InstanceDiscoveryResponse{}
+	if authorityInfo.Region != "" && authorityInfo.Region != autoDetectRegion {
+		region = authorityInfo.Region
+	} else if authorityInfo.Region == autoDetectRegion {
+		region = detectRegion(ctx)
+	}
+	if region != "" {
+		environment := authorityInfo.Host
+		switch environment {
+		case loginMicrosoft, loginWindows, loginSTSWindows, defaultHost:
+			environment = loginMicrosoft
+		}
+
+		resp.TenantDiscoveryEndpoint = fmt.Sprintf(tenantDiscoveryEndpointWithRegion, region, environment, authorityInfo.Tenant)
+		metadata := InstanceDiscoveryMetadata{
+			PreferredNetwork: fmt.Sprintf("%v.%v", region, authorityInfo.Host),
+			PreferredCache:   authorityInfo.Host,
+			Aliases:          []string{fmt.Sprintf("%v.%v", region, authorityInfo.Host), authorityInfo.Host},
+		}
+		resp.Metadata = []InstanceDiscoveryMetadata{metadata}
+	} else {
+		qv := url.Values{}
+		qv.Set("api-version", "1.1")
+		qv.Set("authorization_endpoint", fmt.Sprintf(authorizationEndpoint, authorityInfo.Host, authorityInfo.Tenant))
+
+		discoveryHost := defaultHost
+		if TrustedHost(authorityInfo.Host) {
+			discoveryHost = authorityInfo.Host
+		}
+
+		endpoint := fmt.Sprintf(instanceDiscoveryEndpoint, discoveryHost)
+		err = c.Comm.JSONCall(ctx, endpoint, http.Header{}, qv, nil, &resp)
+	}
+	return resp, err
+}
+
+func detectRegion(ctx context.Context) string {
+	region := os.Getenv(regionName)
+	if region != "" {
+		region = strings.ReplaceAll(region, " ", "")
+		return strings.ToLower(region)
+	}
+	// HTTP call to IMDS endpoint to get region
+	// Refer : https://identitydivision.visualstudio.com/DevEx/_git/AuthLibrariesApiReview?path=%2FPinAuthToRegion%2FAAD%20SDK%20Proposal%20to%20Pin%20Auth%20to%20region.md&_a=preview&version=GBdev
+	// Set a 2 second timeout for this http client which only does calls to IMDS endpoint
+	client := http.Client{
+		Timeout: 2 * time.Second,
+	}
+	req, _ := http.NewRequest("GET", imdsEndpoint, nil)
+	req.Header.Set("Metadata", "true")
+	resp, err := client.Do(req)
+	// If the request times out or there is an error, it is retried once
+	if err != nil || resp.StatusCode != 200 {
+		resp, err = client.Do(req)
+		if err != nil || resp.StatusCode != 200 {
+			return ""
+		}
+	}
+	defer resp.Body.Close()
+	response, err := io.ReadAll(resp.Body)
+	if err != nil {
+		return ""
+	}
+	return string(response)
+}
+
+func (a *AuthParams) CacheKey(isAppCache bool) string {
+	if a.AuthorizationType == ATOnBehalfOf {
+		return a.AssertionHash()
+	}
+	if a.AuthorizationType == ATClientCredentials || isAppCache {
+		return a.AppKey()
+	}
+	if a.AuthorizationType == ATRefreshToken || a.AuthorizationType == AccountByID {
+		return a.HomeAccountID
+	}
+	return ""
+}
+func (a *AuthParams) AssertionHash() string {
+	hasher := sha256.New()
+	// Per documentation this never returns an error: https://pkg.go.dev/hash#pkg-types
+	_, _ = hasher.Write([]byte(a.UserAssertion))
+	sha := base64.URLEncoding.EncodeToString(hasher.Sum(nil))
+	return sha
+}
+
+func (a *AuthParams) AppKey() string {
+	if a.AuthorityInfo.Tenant != "" {
+		return fmt.Sprintf("%s_%s_AppTokenCache", a.ClientID, a.AuthorityInfo.Tenant)
+	}
+	return fmt.Sprintf("%s__AppTokenCache", a.ClientID)
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority/authorizetype_string.go

@@ -0,0 +1,30 @@
+// Code generated by "stringer -type=AuthorizeType"; DO NOT EDIT.
+
+package authority
+
+import "strconv"
+
+func _() {
+	// An "invalid array index" compiler error signifies that the constant values have changed.
+	// Re-run the stringer command to generate them again.
+	var x [1]struct{}
+	_ = x[ATUnknown-0]
+	_ = x[ATUsernamePassword-1]
+	_ = x[ATWindowsIntegrated-2]
+	_ = x[ATAuthCode-3]
+	_ = x[ATInteractive-4]
+	_ = x[ATClientCredentials-5]
+	_ = x[ATDeviceCode-6]
+	_ = x[ATRefreshToken-7]
+}
+
+const _AuthorizeType_name = "ATUnknownATUsernamePasswordATWindowsIntegratedATAuthCodeATInteractiveATClientCredentialsATDeviceCodeATRefreshToken"
+
+var _AuthorizeType_index = [...]uint8{0, 9, 27, 46, 56, 69, 88, 100, 114}
+
+func (i AuthorizeType) String() string {
+	if i < 0 || i >= AuthorizeType(len(_AuthorizeType_index)-1) {
+		return "AuthorizeType(" + strconv.FormatInt(int64(i), 10) + ")"
+	}
+	return _AuthorizeType_name[_AuthorizeType_index[i]:_AuthorizeType_index[i+1]]
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm/comm.go

@@ -0,0 +1,320 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package comm provides helpers for communicating with HTTP backends.
+package comm
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"encoding/xml"
+	"fmt"
+	"io"
+	"net/http"
+	"net/url"
+	"reflect"
+	"runtime"
+	"strings"
+	"time"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors"
+	customJSON "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/version"
+	"github.com/google/uuid"
+)
+
+// HTTPClient represents an HTTP client.
+// It's usually an *http.Client from the standard library.
+type HTTPClient interface {
+	// Do sends an HTTP request and returns an HTTP response.
+	Do(req *http.Request) (*http.Response, error)
+
+	// CloseIdleConnections closes any idle connections in a "keep-alive" state.
+	CloseIdleConnections()
+}
+
+// Client provides a wrapper to our *http.Client that handles compression and serialization needs.
+type Client struct {
+	client HTTPClient
+}
+
+// New returns a new Client object.
+func New(httpClient HTTPClient) *Client {
+	if httpClient == nil {
+		panic("http.Client cannot be nil")
+	}
+
+	return &Client{client: httpClient}
+}
+
+// JSONCall connects to the REST endpoint passing the HTTP query values, headers and JSON conversion
+// of body in the HTTP body. It automatically handles compression and decompression with gzip. The response is JSON
+// unmarshalled into resp. resp must be a pointer to a struct. If the body struct contains a field called
+// "AdditionalFields" we use a custom marshal/unmarshal engine.
+func (c *Client) JSONCall(ctx context.Context, endpoint string, headers http.Header, qv url.Values, body, resp interface{}) error {
+	if qv == nil {
+		qv = url.Values{}
+	}
+
+	v := reflect.ValueOf(resp)
+	if err := c.checkResp(v); err != nil {
+		return err
+	}
+
+	// Choose a JSON marshal/unmarshal depending on if we have AdditionalFields attribute.
+	var marshal = json.Marshal
+	var unmarshal = json.Unmarshal
+	if _, ok := v.Elem().Type().FieldByName("AdditionalFields"); ok {
+		marshal = customJSON.Marshal
+		unmarshal = customJSON.Unmarshal
+	}
+
+	u, err := url.Parse(endpoint)
+	if err != nil {
+		return fmt.Errorf("could not parse path URL(%s): %w", endpoint, err)
+	}
+	u.RawQuery = qv.Encode()
+
+	addStdHeaders(headers)
+
+	req := &http.Request{Method: http.MethodGet, URL: u, Header: headers}
+
+	if body != nil {
+		// Note: in case you're wondering why we are not gzip encoding...
+		// I'm not sure if these various services support gzip on send.
+		headers.Add("Content-Type", "application/json; charset=utf-8")
+		data, err := marshal(body)
+		if err != nil {
+			return fmt.Errorf("bug: conn.Call(): could not marshal the body object: %w", err)
+		}
+		req.Body = io.NopCloser(bytes.NewBuffer(data))
+		req.Method = http.MethodPost
+	}
+
+	data, err := c.do(ctx, req)
+	if err != nil {
+		return err
+	}
+
+	if resp != nil {
+		if err := unmarshal(data, resp); err != nil {
+			return fmt.Errorf("json decode error: %w\njson message bytes were: %s", err, string(data))
+		}
+	}
+	return nil
+}
+
+// XMLCall connects to an endpoint and decodes the XML response into resp. This is used when
+// sending application/xml. If sending XML via SOAP, use SOAPCall().
+func (c *Client) XMLCall(ctx context.Context, endpoint string, headers http.Header, qv url.Values, resp interface{}) error {
+	if err := c.checkResp(reflect.ValueOf(resp)); err != nil {
+		return err
+	}
+
+	if qv == nil {
+		qv = url.Values{}
+	}
+
+	u, err := url.Parse(endpoint)
+	if err != nil {
+		return fmt.Errorf("could not parse path URL(%s): %w", endpoint, err)
+	}
+	u.RawQuery = qv.Encode()
+
+	headers.Set("Content-Type", "application/xml; charset=utf-8") // This was not set in the original Mex(), but...
+	addStdHeaders(headers)
+
+	return c.xmlCall(ctx, u, headers, "", resp)
+}
+
+// SOAPCall returns the SOAP message given an endpoint, action, body of the request and the response object to marshal into.
+func (c *Client) SOAPCall(ctx context.Context, endpoint, action string, headers http.Header, qv url.Values, body string, resp interface{}) error {
+	if body == "" {
+		return fmt.Errorf("cannot make a SOAP call with body set to empty string")
+	}
+
+	if err := c.checkResp(reflect.ValueOf(resp)); err != nil {
+		return err
+	}
+
+	if qv == nil {
+		qv = url.Values{}
+	}
+
+	u, err := url.Parse(endpoint)
+	if err != nil {
+		return fmt.Errorf("could not parse path URL(%s): %w", endpoint, err)
+	}
+	u.RawQuery = qv.Encode()
+
+	headers.Set("Content-Type", "application/soap+xml; charset=utf-8")
+	headers.Set("SOAPAction", action)
+	addStdHeaders(headers)
+
+	return c.xmlCall(ctx, u, headers, body, resp)
+}
+
+// xmlCall sends an XML in body and decodes into resp. This simply does the transport and relies on
+// an upper level call to set things such as SOAP parameters and Content-Type, if required.
+func (c *Client) xmlCall(ctx context.Context, u *url.URL, headers http.Header, body string, resp interface{}) error {
+	req := &http.Request{Method: http.MethodGet, URL: u, Header: headers}
+
+	if len(body) > 0 {
+		req.Method = http.MethodPost
+		req.Body = io.NopCloser(strings.NewReader(body))
+	}
+
+	data, err := c.do(ctx, req)
+	if err != nil {
+		return err
+	}
+
+	return xml.Unmarshal(data, resp)
+}
+
+// URLFormCall is used to make a call where we need to send application/x-www-form-urlencoded data
+// to the backend and receive JSON back. qv will be encoded into the request body.
+func (c *Client) URLFormCall(ctx context.Context, endpoint string, qv url.Values, resp interface{}) error {
+	if len(qv) == 0 {
+		return fmt.Errorf("URLFormCall() requires qv to have non-zero length")
+	}
+
+	if err := c.checkResp(reflect.ValueOf(resp)); err != nil {
+		return err
+	}
+
+	u, err := url.Parse(endpoint)
+	if err != nil {
+		return fmt.Errorf("could not parse path URL(%s): %w", endpoint, err)
+	}
+
+	headers := http.Header{}
+	headers.Set("Content-Type", "application/x-www-form-urlencoded; charset=utf-8")
+	addStdHeaders(headers)
+
+	enc := qv.Encode()
+
+	req := &http.Request{
+		Method:        http.MethodPost,
+		URL:           u,
+		Header:        headers,
+		ContentLength: int64(len(enc)),
+		Body:          io.NopCloser(strings.NewReader(enc)),
+		GetBody: func() (io.ReadCloser, error) {
+			return io.NopCloser(strings.NewReader(enc)), nil
+		},
+	}
+
+	data, err := c.do(ctx, req)
+	if err != nil {
+		return err
+	}
+
+	v := reflect.ValueOf(resp)
+	if err := c.checkResp(v); err != nil {
+		return err
+	}
+
+	var unmarshal = json.Unmarshal
+	if _, ok := v.Elem().Type().FieldByName("AdditionalFields"); ok {
+		unmarshal = customJSON.Unmarshal
+	}
+	if resp != nil {
+		if err := unmarshal(data, resp); err != nil {
+			return fmt.Errorf("json decode error: %w\nraw message was: %s", err, string(data))
+		}
+	}
+	return nil
+}
+
+// do makes the HTTP call to the server and returns the contents of the body.
+func (c *Client) do(ctx context.Context, req *http.Request) ([]byte, error) {
+	if _, ok := ctx.Deadline(); !ok {
+		var cancel context.CancelFunc
+		ctx, cancel = context.WithTimeout(ctx, 30*time.Second)
+		defer cancel()
+	}
+	req = req.WithContext(ctx)
+
+	reply, err := c.client.Do(req)
+	if err != nil {
+		return nil, fmt.Errorf("server response error:\n %w", err)
+	}
+	defer reply.Body.Close()
+
+	data, err := c.readBody(reply)
+	if err != nil {
+		return nil, fmt.Errorf("could not read the body of an HTTP Response: %w", err)
+	}
+	reply.Body = io.NopCloser(bytes.NewBuffer(data))
+
+	// NOTE: This doesn't happen immediately after the call so that we can get an error message
+	// from the server and include it in our error.
+	switch reply.StatusCode {
+	case 200, 201:
+	default:
+		sd := strings.TrimSpace(string(data))
+		if sd != "" {
+			// We probably have the error in the body.
+			return nil, errors.CallErr{
+				Req:  req,
+				Resp: reply,
+				Err:  fmt.Errorf("http call(%s)(%s) error: reply status code was %d:\n%s", req.URL.String(), req.Method, reply.StatusCode, sd),
+			}
+		}
+		return nil, errors.CallErr{
+			Req:  req,
+			Resp: reply,
+			Err:  fmt.Errorf("http call(%s)(%s) error: reply status code was %d", req.URL.String(), req.Method, reply.StatusCode),
+		}
+	}
+
+	return data, nil
+}
+
+// checkResp checks a response object to make sure it is a pointer to a struct.
+func (c *Client) checkResp(v reflect.Value) error {
+	if v.Kind() != reflect.Ptr {
+		return fmt.Errorf("bug: resp argument must be a *struct, was %T", v.Interface())
+	}
+	v = v.Elem()
+	if v.Kind() != reflect.Struct {
+		return fmt.Errorf("bug: resp argument must be a *struct, was %T", v.Interface())
+	}
+	return nil
+}
+
+// readBody reads the body out of an *http.Response. It supports gzip encoded responses.
+func (c *Client) readBody(resp *http.Response) ([]byte, error) {
+	var reader io.Reader = resp.Body
+	switch resp.Header.Get("Content-Encoding") {
+	case "":
+		// Do nothing
+	case "gzip":
+		reader = gzipDecompress(resp.Body)
+	default:
+		return nil, fmt.Errorf("bug: comm.Client.JSONCall(): content was sent with unsupported content-encoding %s", resp.Header.Get("Content-Encoding"))
+	}
+	return io.ReadAll(reader)
+}
+
+var testID string
+
+// addStdHeaders adds the standard headers we use on all calls.
+func addStdHeaders(headers http.Header) http.Header {
+	headers.Set("Accept-Encoding", "gzip")
+	// So that I can have a static id for tests.
+	if testID != "" {
+		headers.Set("client-request-id", testID)
+		headers.Set("Return-Client-Request-Id", "false")
+	} else {
+		headers.Set("client-request-id", uuid.New().String())
+		headers.Set("Return-Client-Request-Id", "false")
+	}
+	headers.Set("x-client-sku", "MSAL.Go")
+	headers.Set("x-client-os", runtime.GOOS)
+	headers.Set("x-client-cpu", runtime.GOARCH)
+	headers.Set("x-client-ver", version.Version)
+	return headers
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm/compress.go

@@ -0,0 +1,33 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package comm
+
+import (
+	"compress/gzip"
+	"io"
+)
+
+func gzipDecompress(r io.Reader) io.Reader {
+	gzipReader, _ := gzip.NewReader(r)
+
+	pipeOut, pipeIn := io.Pipe()
+	go func() {
+		// A decompression bomb would have to come from Azure services.
+		// If we want to limit, we should do that in comm.do().
+		_, err := io.Copy(pipeIn, gzipReader) //nolint
+		if err != nil {
+			// don't need the error.
+			pipeIn.CloseWithError(err) //nolint
+			gzipReader.Close()
+			return
+		}
+		if err := gzipReader.Close(); err != nil {
+			// don't need the error.
+			pipeIn.CloseWithError(err) //nolint
+			return
+		}
+		pipeIn.Close()
+	}()
+	return pipeOut
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/grant/grant.go

@@ -0,0 +1,17 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package grant holds types of grants issued by authorization services.
+package grant
+
+const (
+	Password         = "password"
+	JWT              = "urn:ietf:params:oauth:grant-type:jwt-bearer"
+	SAMLV1           = "urn:ietf:params:oauth:grant-type:saml1_1-bearer"
+	SAMLV2           = "urn:ietf:params:oauth:grant-type:saml2-bearer"
+	DeviceCode       = "device_code"
+	AuthCode         = "authorization_code"
+	RefreshToken     = "refresh_token"
+	ClientCredential = "client_credentials"
+	ClientAssertion  = "urn:ietf:params:oauth:client-assertion-type:jwt-bearer"
+)

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/ops.go

@@ -0,0 +1,56 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+/*
+Package ops provides operations to various backend services using REST clients.
+
+The REST type provides several clients that can be used to communicate to backends.
+Usage is simple:
+
+	rest := ops.New()
+
+	// Creates an authority client and calls the UserRealm() method.
+	userRealm, err := rest.Authority().UserRealm(ctx, authParameters)
+	if err != nil {
+		// Do something
+	}
+*/
+package ops
+
+import (
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust"
+)
+
+// HTTPClient represents an HTTP client.
+// It's usually an *http.Client from the standard library.
+type HTTPClient = comm.HTTPClient
+
+// REST provides REST clients for communicating with various backends used by MSAL.
+type REST struct {
+	client *comm.Client
+}
+
+// New is the constructor for REST.
+func New(httpClient HTTPClient) *REST {
+	return &REST{client: comm.New(httpClient)}
+}
+
+// Authority returns a client for querying information about various authorities.
+func (r *REST) Authority() authority.Client {
+	return authority.Client{Comm: r.client}
+}
+
+// AccessTokens returns a client that can be used to get various access tokens for
+// authorization purposes.
+func (r *REST) AccessTokens() accesstokens.Client {
+	return accesstokens.Client{Comm: r.client}
+}
+
+// WSTrust provides access to various metadata in a WSTrust service. This data can
+// be used to gain tokens based on SAML data using the client provided by AccessTokens().
+func (r *REST) WSTrust() wstrust.Client {
+	return wstrust.Client{Comm: r.client}
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/endpointtype_string.go

@@ -0,0 +1,25 @@
+// Code generated by "stringer -type=endpointType"; DO NOT EDIT.
+
+package defs
+
+import "strconv"
+
+func _() {
+	// An "invalid array index" compiler error signifies that the constant values have changed.
+	// Re-run the stringer command to generate them again.
+	var x [1]struct{}
+	_ = x[etUnknown-0]
+	_ = x[etUsernamePassword-1]
+	_ = x[etWindowsTransport-2]
+}
+
+const _endpointType_name = "etUnknownetUsernamePasswordetWindowsTransport"
+
+var _endpointType_index = [...]uint8{0, 9, 27, 45}
+
+func (i endpointType) String() string {
+	if i < 0 || i >= endpointType(len(_endpointType_index)-1) {
+		return "endpointType(" + strconv.FormatInt(int64(i), 10) + ")"
+	}
+	return _endpointType_name[_endpointType_index[i]:_endpointType_index[i+1]]
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/mex_document_definitions.go 🔗

@@ -0,0 +1,394 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package defs
+
+import "encoding/xml"
+
+type Definitions struct {
+	XMLName         xml.Name   `xml:"definitions"`
+	Text            string     `xml:",chardata"`
+	Name            string     `xml:"name,attr"`
+	TargetNamespace string     `xml:"targetNamespace,attr"`
+	WSDL            string     `xml:"wsdl,attr"`
+	XSD             string     `xml:"xsd,attr"`
+	T               string     `xml:"t,attr"`
+	SOAPENC         string     `xml:"soapenc,attr"`
+	SOAP            string     `xml:"soap,attr"`
+	TNS             string     `xml:"tns,attr"`
+	MSC             string     `xml:"msc,attr"`
+	WSAM            string     `xml:"wsam,attr"`
+	SOAP12          string     `xml:"soap12,attr"`
+	WSA10           string     `xml:"wsa10,attr"`
+	WSA             string     `xml:"wsa,attr"`
+	WSAW            string     `xml:"wsaw,attr"`
+	WSX             string     `xml:"wsx,attr"`
+	WSAP            string     `xml:"wsap,attr"`
+	WSU             string     `xml:"wsu,attr"`
+	Trust           string     `xml:"trust,attr"`
+	WSP             string     `xml:"wsp,attr"`
+	Policy          []Policy   `xml:"Policy"`
+	Types           Types      `xml:"types"`
+	Message         []Message  `xml:"message"`
+	PortType        []PortType `xml:"portType"`
+	Binding         []Binding  `xml:"binding"`
+	Service         Service    `xml:"service"`
+}
+
+type Policy struct {
+	Text       string     `xml:",chardata"`
+	ID         string     `xml:"Id,attr"`
+	ExactlyOne ExactlyOne `xml:"ExactlyOne"`
+}
+
+type ExactlyOne struct {
+	Text string `xml:",chardata"`
+	All  All    `xml:"All"`
+}
+
+type All struct {
+	Text                            string                          `xml:",chardata"`
+	NegotiateAuthentication         NegotiateAuthentication         `xml:"NegotiateAuthentication"`
+	TransportBinding                TransportBinding                `xml:"TransportBinding"`
+	UsingAddressing                 Text                            `xml:"UsingAddressing"`
+	EndorsingSupportingTokens       EndorsingSupportingTokens       `xml:"EndorsingSupportingTokens"`
+	WSS11                           WSS11                           `xml:"Wss11"`
+	Trust10                         Trust10                         `xml:"Trust10"`
+	SignedSupportingTokens          SignedSupportingTokens          `xml:"SignedSupportingTokens"`
+	Trust13                         WSTrust13                       `xml:"Trust13"`
+	SignedEncryptedSupportingTokens SignedEncryptedSupportingTokens `xml:"SignedEncryptedSupportingTokens"`
+}
+
+type NegotiateAuthentication struct {
+	Text    string `xml:",chardata"`
+	HTTP    string `xml:"http,attr"`
+	XMLName xml.Name
+}
+
+type TransportBinding struct {
+	Text   string                 `xml:",chardata"`
+	SP     string                 `xml:"sp,attr"`
+	Policy TransportBindingPolicy `xml:"Policy"`
+}
+
+type TransportBindingPolicy struct {
+	Text             string         `xml:",chardata"`
+	TransportToken   TransportToken `xml:"TransportToken"`
+	AlgorithmSuite   AlgorithmSuite `xml:"AlgorithmSuite"`
+	Layout           Layout         `xml:"Layout"`
+	IncludeTimestamp Text           `xml:"IncludeTimestamp"`
+}
+
+type TransportToken struct {
+	Text   string               `xml:",chardata"`
+	Policy TransportTokenPolicy `xml:"Policy"`
+}
+
+type TransportTokenPolicy struct {
+	Text       string     `xml:",chardata"`
+	HTTPSToken HTTPSToken `xml:"HttpsToken"`
+}
+
+type HTTPSToken struct {
+	Text                     string `xml:",chardata"`
+	RequireClientCertificate string `xml:"RequireClientCertificate,attr"`
+}
+
+type AlgorithmSuite struct {
+	Text   string               `xml:",chardata"`
+	Policy AlgorithmSuitePolicy `xml:"Policy"`
+}
+
+type AlgorithmSuitePolicy struct {
+	Text     string `xml:",chardata"`
+	Basic256 Text   `xml:"Basic256"`
+	Basic128 Text   `xml:"Basic128"`
+}
+
+type Layout struct {
+	Text   string       `xml:",chardata"`
+	Policy LayoutPolicy `xml:"Policy"`
+}
+
+type LayoutPolicy struct {
+	Text   string `xml:",chardata"`
+	Strict Text   `xml:"Strict"`
+}
+
+type EndorsingSupportingTokens struct {
+	Text   string                          `xml:",chardata"`
+	SP     string                          `xml:"sp,attr"`
+	Policy EndorsingSupportingTokensPolicy `xml:"Policy"`
+}
+
+type EndorsingSupportingTokensPolicy struct {
+	Text          string        `xml:",chardata"`
+	X509Token     X509Token     `xml:"X509Token"`
+	RSAToken      RSAToken      `xml:"RsaToken"`
+	SignedParts   SignedParts   `xml:"SignedParts"`
+	KerberosToken KerberosToken `xml:"KerberosToken"`
+	IssuedToken   IssuedToken   `xml:"IssuedToken"`
+	KeyValueToken KeyValueToken `xml:"KeyValueToken"`
+}
+
+type X509Token struct {
+	Text         string          `xml:",chardata"`
+	IncludeToken string          `xml:"IncludeToken,attr"`
+	Policy       X509TokenPolicy `xml:"Policy"`
+}
+
+type X509TokenPolicy struct {
+	Text                       string `xml:",chardata"`
+	RequireThumbprintReference Text   `xml:"RequireThumbprintReference"`
+	WSSX509V3Token10           Text   `xml:"WssX509V3Token10"`
+}
+
+type RSAToken struct {
+	Text         string `xml:",chardata"`
+	IncludeToken string `xml:"IncludeToken,attr"`
+	Optional     string `xml:"Optional,attr"`
+	MSSP         string `xml:"mssp,attr"`
+}
+
+type SignedParts struct {
+	Text   string            `xml:",chardata"`
+	Header SignedPartsHeader `xml:"Header"`
+}
+
+type SignedPartsHeader struct {
+	Text      string `xml:",chardata"`
+	Name      string `xml:"Name,attr"`
+	Namespace string `xml:"Namespace,attr"`
+}
+
+type KerberosToken struct {
+	Text         string              `xml:",chardata"`
+	IncludeToken string              `xml:"IncludeToken,attr"`
+	Policy       KerberosTokenPolicy `xml:"Policy"`
+}
+
+type KerberosTokenPolicy struct {
+	Text                         string `xml:",chardata"`
+	WSSGSSKerberosV5ApReqToken11 Text   `xml:"WssGssKerberosV5ApReqToken11"`
+}
+
+type IssuedToken struct {
+	Text                         string                       `xml:",chardata"`
+	IncludeToken                 string                       `xml:"IncludeToken,attr"`
+	RequestSecurityTokenTemplate RequestSecurityTokenTemplate `xml:"RequestSecurityTokenTemplate"`
+	Policy                       IssuedTokenPolicy            `xml:"Policy"`
+}
+
+type RequestSecurityTokenTemplate struct {
+	Text                      string `xml:",chardata"`
+	KeyType                   Text   `xml:"KeyType"`
+	EncryptWith               Text   `xml:"EncryptWith"`
+	SignatureAlgorithm        Text   `xml:"SignatureAlgorithm"`
+	CanonicalizationAlgorithm Text   `xml:"CanonicalizationAlgorithm"`
+	EncryptionAlgorithm       Text   `xml:"EncryptionAlgorithm"`
+	KeySize                   Text   `xml:"KeySize"`
+	KeyWrapAlgorithm          Text   `xml:"KeyWrapAlgorithm"`
+}
+
+type IssuedTokenPolicy struct {
+	Text                     string `xml:",chardata"`
+	RequireInternalReference Text   `xml:"RequireInternalReference"`
+}
+
+type KeyValueToken struct {
+	Text         string `xml:",chardata"`
+	IncludeToken string `xml:"IncludeToken,attr"`
+	Optional     string `xml:"Optional,attr"`
+}
+
+type WSS11 struct {
+	Text   string      `xml:",chardata"`
+	SP     string      `xml:"sp,attr"`
+	Policy Wss11Policy `xml:"Policy"`
+}
+
+type Wss11Policy struct {
+	Text                     string `xml:",chardata"`
+	MustSupportRefThumbprint Text   `xml:"MustSupportRefThumbprint"`
+}
+
+type Trust10 struct {
+	Text   string        `xml:",chardata"`
+	SP     string        `xml:"sp,attr"`
+	Policy Trust10Policy `xml:"Policy"`
+}
+
+type Trust10Policy struct {
+	Text                    string `xml:",chardata"`
+	MustSupportIssuedTokens Text   `xml:"MustSupportIssuedTokens"`
+	RequireClientEntropy    Text   `xml:"RequireClientEntropy"`
+	RequireServerEntropy    Text   `xml:"RequireServerEntropy"`
+}
+
+type SignedSupportingTokens struct {
+	Text   string                 `xml:",chardata"`
+	SP     string                 `xml:"sp,attr"`
+	Policy SupportingTokensPolicy `xml:"Policy"`
+}
+
+type SupportingTokensPolicy struct {
+	Text          string        `xml:",chardata"`
+	UsernameToken UsernameToken `xml:"UsernameToken"`
+}
+type UsernameToken struct {
+	Text         string              `xml:",chardata"`
+	IncludeToken string              `xml:"IncludeToken,attr"`
+	Policy       UsernameTokenPolicy `xml:"Policy"`
+}
+
+type UsernameTokenPolicy struct {
+	Text               string             `xml:",chardata"`
+	WSSUsernameToken10 WSSUsernameToken10 `xml:"WssUsernameToken10"`
+}
+
+type WSSUsernameToken10 struct {
+	Text    string `xml:",chardata"`
+	XMLName xml.Name
+}
+
+type WSTrust13 struct {
+	Text   string          `xml:",chardata"`
+	SP     string          `xml:"sp,attr"`
+	Policy WSTrust13Policy `xml:"Policy"`
+}
+
+type WSTrust13Policy struct {
+	Text                    string `xml:",chardata"`
+	MustSupportIssuedTokens Text   `xml:"MustSupportIssuedTokens"`
+	RequireClientEntropy    Text   `xml:"RequireClientEntropy"`
+	RequireServerEntropy    Text   `xml:"RequireServerEntropy"`
+}
+
+type SignedEncryptedSupportingTokens struct {
+	Text   string                 `xml:",chardata"`
+	SP     string                 `xml:"sp,attr"`
+	Policy SupportingTokensPolicy `xml:"Policy"`
+}
+
+type Types struct {
+	Text   string `xml:",chardata"`
+	Schema Schema `xml:"schema"`
+}
+
+type Schema struct {
+	Text            string   `xml:",chardata"`
+	TargetNamespace string   `xml:"targetNamespace,attr"`
+	Import          []Import `xml:"import"`
+}
+
+type Import struct {
+	Text           string `xml:",chardata"`
+	SchemaLocation string `xml:"schemaLocation,attr"`
+	Namespace      string `xml:"namespace,attr"`
+}
+
+type Message struct {
+	Text string `xml:",chardata"`
+	Name string `xml:"name,attr"`
+	Part Part   `xml:"part"`
+}
+
+type Part struct {
+	Text    string `xml:",chardata"`
+	Name    string `xml:"name,attr"`
+	Element string `xml:"element,attr"`
+}
+
+type PortType struct {
+	Text      string    `xml:",chardata"`
+	Name      string    `xml:"name,attr"`
+	Operation Operation `xml:"operation"`
+}
+
+type Operation struct {
+	Text   string      `xml:",chardata"`
+	Name   string      `xml:"name,attr"`
+	Input  OperationIO `xml:"input"`
+	Output OperationIO `xml:"output"`
+}
+
+type OperationIO struct {
+	Text    string          `xml:",chardata"`
+	Action  string          `xml:"Action,attr"`
+	Message string          `xml:"message,attr"`
+	Body    OperationIOBody `xml:"body"`
+}
+
+type OperationIOBody struct {
+	Text string `xml:",chardata"`
+	Use  string `xml:"use,attr"`
+}
+
+type Binding struct {
+	Text            string             `xml:",chardata"`
+	Name            string             `xml:"name,attr"`
+	Type            string             `xml:"type,attr"`
+	PolicyReference PolicyReference    `xml:"PolicyReference"`
+	Binding         DefinitionsBinding `xml:"binding"`
+	Operation       BindingOperation   `xml:"operation"`
+}
+
+type PolicyReference struct {
+	Text string `xml:",chardata"`
+	URI  string `xml:"URI,attr"`
+}
+
+type DefinitionsBinding struct {
+	Text      string `xml:",chardata"`
+	Transport string `xml:"transport,attr"`
+}
+
+type BindingOperation struct {
+	Text      string                    `xml:",chardata"`
+	Name      string                    `xml:"name,attr"`
+	Operation BindingOperationOperation `xml:"operation"`
+	Input     BindingOperationIO        `xml:"input"`
+	Output    BindingOperationIO        `xml:"output"`
+}
+
+type BindingOperationOperation struct {
+	Text       string `xml:",chardata"`
+	SoapAction string `xml:"soapAction,attr"`
+	Style      string `xml:"style,attr"`
+}
+
+type BindingOperationIO struct {
+	Text string          `xml:",chardata"`
+	Body OperationIOBody `xml:"body"`
+}
+
+type Service struct {
+	Text string `xml:",chardata"`
+	Name string `xml:"name,attr"`
+	Port []Port `xml:"port"`
+}
+
+type Port struct {
+	Text              string                `xml:",chardata"`
+	Name              string                `xml:"name,attr"`
+	Binding           string                `xml:"binding,attr"`
+	Address           Address               `xml:"address"`
+	EndpointReference PortEndpointReference `xml:"EndpointReference"`
+}
+
+type Address struct {
+	Text     string `xml:",chardata"`
+	Location string `xml:"location,attr"`
+}
+
+type PortEndpointReference struct {
+	Text     string   `xml:",chardata"`
+	Address  Text     `xml:"Address"`
+	Identity Identity `xml:"Identity"`
+}
+
+type Identity struct {
+	Text  string `xml:",chardata"`
+	XMLNS string `xml:"xmlns,attr"`
+	SPN   Text   `xml:"Spn"`
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/saml_assertion_definitions.go 🔗

@@ -0,0 +1,230 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package defs
+
+import "encoding/xml"
+
+// TODO(msal): Someone (and it ain't gonna be me) needs to document these attributes or
+// at least link to the relevant RFC.
+
+type SAMLDefinitions struct {
+	XMLName xml.Name `xml:"Envelope"`
+	Text    string   `xml:",chardata"`
+	S       string   `xml:"s,attr"`
+	A       string   `xml:"a,attr"`
+	U       string   `xml:"u,attr"`
+	Header  Header   `xml:"Header"`
+	Body    Body     `xml:"Body"`
+}
+
+type Header struct {
+	Text     string   `xml:",chardata"`
+	Action   Action   `xml:"Action"`
+	Security Security `xml:"Security"`
+}
+
+type Action struct {
+	Text           string `xml:",chardata"`
+	MustUnderstand string `xml:"mustUnderstand,attr"`
+}
+
+type Security struct {
+	Text           string    `xml:",chardata"`
+	MustUnderstand string    `xml:"mustUnderstand,attr"`
+	O              string    `xml:"o,attr"`
+	Timestamp      Timestamp `xml:"Timestamp"`
+}
+
+type Timestamp struct {
+	Text    string `xml:",chardata"`
+	ID      string `xml:"Id,attr"`
+	Created Text   `xml:"Created"`
+	Expires Text   `xml:"Expires"`
+}
+
+type Text struct {
+	Text string `xml:",chardata"`
+}
+
+type Body struct {
+	Text                                   string                                 `xml:",chardata"`
+	RequestSecurityTokenResponseCollection RequestSecurityTokenResponseCollection `xml:"RequestSecurityTokenResponseCollection"`
+}
+
+type RequestSecurityTokenResponseCollection struct {
+	Text                         string                         `xml:",chardata"`
+	Trust                        string                         `xml:"trust,attr"`
+	RequestSecurityTokenResponse []RequestSecurityTokenResponse `xml:"RequestSecurityTokenResponse"`
+}
+
+type RequestSecurityTokenResponse struct {
+	Text                         string                       `xml:",chardata"`
+	Lifetime                     Lifetime                     `xml:"Lifetime"`
+	AppliesTo                    AppliesTo                    `xml:"AppliesTo"`
+	RequestedSecurityToken       RequestedSecurityToken       `xml:"RequestedSecurityToken"`
+	RequestedAttachedReference   RequestedAttachedReference   `xml:"RequestedAttachedReference"`
+	RequestedUnattachedReference RequestedUnattachedReference `xml:"RequestedUnattachedReference"`
+	TokenType                    Text                         `xml:"TokenType"`
+	RequestType                  Text                         `xml:"RequestType"`
+	KeyType                      Text                         `xml:"KeyType"`
+}
+
+type Lifetime struct {
+	Text    string       `xml:",chardata"`
+	Created WSUTimestamp `xml:"Created"`
+	Expires WSUTimestamp `xml:"Expires"`
+}
+
+type WSUTimestamp struct {
+	Text string `xml:",chardata"`
+	Wsu  string `xml:"wsu,attr"`
+}
+
+type AppliesTo struct {
+	Text              string            `xml:",chardata"`
+	Wsp               string            `xml:"wsp,attr"`
+	EndpointReference EndpointReference `xml:"EndpointReference"`
+}
+
+type EndpointReference struct {
+	Text    string `xml:",chardata"`
+	Wsa     string `xml:"wsa,attr"`
+	Address Text   `xml:"Address"`
+}
+
+type RequestedSecurityToken struct {
+	Text            string    `xml:",chardata"`
+	AssertionRawXML string    `xml:",innerxml"`
+	Assertion       Assertion `xml:"Assertion"`
+}
+
+type Assertion struct {
+	XMLName                 xml.Name                // Normally it's `xml:"Assertion"`, but I think they want to capture the xmlns
+	Text                    string                  `xml:",chardata"`
+	MajorVersion            string                  `xml:"MajorVersion,attr"`
+	MinorVersion            string                  `xml:"MinorVersion,attr"`
+	AssertionID             string                  `xml:"AssertionID,attr"`
+	Issuer                  string                  `xml:"Issuer,attr"`
+	IssueInstant            string                  `xml:"IssueInstant,attr"`
+	Saml                    string                  `xml:"saml,attr"`
+	Conditions              Conditions              `xml:"Conditions"`
+	AttributeStatement      AttributeStatement      `xml:"AttributeStatement"`
+	AuthenticationStatement AuthenticationStatement `xml:"AuthenticationStatement"`
+	Signature               Signature               `xml:"Signature"`
+}
+
+type Conditions struct {
+	Text                         string                       `xml:",chardata"`
+	NotBefore                    string                       `xml:"NotBefore,attr"`
+	NotOnOrAfter                 string                       `xml:"NotOnOrAfter,attr"`
+	AudienceRestrictionCondition AudienceRestrictionCondition `xml:"AudienceRestrictionCondition"`
+}
+
+type AudienceRestrictionCondition struct {
+	Text     string `xml:",chardata"`
+	Audience Text   `xml:"Audience"`
+}
+
+type AttributeStatement struct {
+	Text      string      `xml:",chardata"`
+	Subject   Subject     `xml:"Subject"`
+	Attribute []Attribute `xml:"Attribute"`
+}
+
+type Subject struct {
+	Text                string              `xml:",chardata"`
+	NameIdentifier      NameIdentifier      `xml:"NameIdentifier"`
+	SubjectConfirmation SubjectConfirmation `xml:"SubjectConfirmation"`
+}
+
+type NameIdentifier struct {
+	Text   string `xml:",chardata"`
+	Format string `xml:"Format,attr"`
+}
+
+type SubjectConfirmation struct {
+	Text               string `xml:",chardata"`
+	ConfirmationMethod Text   `xml:"ConfirmationMethod"`
+}
+
+type Attribute struct {
+	Text               string `xml:",chardata"`
+	AttributeName      string `xml:"AttributeName,attr"`
+	AttributeNamespace string `xml:"AttributeNamespace,attr"`
+	AttributeValue     Text   `xml:"AttributeValue"`
+}
+
+type AuthenticationStatement struct {
+	Text                  string  `xml:",chardata"`
+	AuthenticationMethod  string  `xml:"AuthenticationMethod,attr"`
+	AuthenticationInstant string  `xml:"AuthenticationInstant,attr"`
+	Subject               Subject `xml:"Subject"`
+}
+
+type Signature struct {
+	Text           string     `xml:",chardata"`
+	Ds             string     `xml:"ds,attr"`
+	SignedInfo     SignedInfo `xml:"SignedInfo"`
+	SignatureValue Text       `xml:"SignatureValue"`
+	KeyInfo        KeyInfo    `xml:"KeyInfo"`
+}
+
+type SignedInfo struct {
+	Text                   string    `xml:",chardata"`
+	CanonicalizationMethod Method    `xml:"CanonicalizationMethod"`
+	SignatureMethod        Method    `xml:"SignatureMethod"`
+	Reference              Reference `xml:"Reference"`
+}
+
+type Method struct {
+	Text      string `xml:",chardata"`
+	Algorithm string `xml:"Algorithm,attr"`
+}
+
+type Reference struct {
+	Text         string     `xml:",chardata"`
+	URI          string     `xml:"URI,attr"`
+	Transforms   Transforms `xml:"Transforms"`
+	DigestMethod Method     `xml:"DigestMethod"`
+	DigestValue  Text       `xml:"DigestValue"`
+}
+
+type Transforms struct {
+	Text      string   `xml:",chardata"`
+	Transform []Method `xml:"Transform"`
+}
+
+type KeyInfo struct {
+	Text     string   `xml:",chardata"`
+	Xmlns    string   `xml:"xmlns,attr"`
+	X509Data X509Data `xml:"X509Data"`
+}
+
+type X509Data struct {
+	Text            string `xml:",chardata"`
+	X509Certificate Text   `xml:"X509Certificate"`
+}
+
+type RequestedAttachedReference struct {
+	Text                   string                 `xml:",chardata"`
+	SecurityTokenReference SecurityTokenReference `xml:"SecurityTokenReference"`
+}
+
+type SecurityTokenReference struct {
+	Text          string        `xml:",chardata"`
+	TokenType     string        `xml:"TokenType,attr"`
+	O             string        `xml:"o,attr"`
+	K             string        `xml:"k,attr"`
+	KeyIdentifier KeyIdentifier `xml:"KeyIdentifier"`
+}
+
+type KeyIdentifier struct {
+	Text      string `xml:",chardata"`
+	ValueType string `xml:"ValueType,attr"`
+}
+
+type RequestedUnattachedReference struct {
+	Text                   string                 `xml:",chardata"`
+	SecurityTokenReference SecurityTokenReference `xml:"SecurityTokenReference"`
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/version_string.go 🔗

@@ -0,0 +1,25 @@
+// Code generated by "stringer -type=Version"; DO NOT EDIT.
+
+package defs
+
+import "strconv"
+
+func _() {
+	// An "invalid array index" compiler error signifies that the constant values have changed.
+	// Re-run the stringer command to generate them again.
+	var x [1]struct{}
+	_ = x[TrustUnknown-0]
+	_ = x[Trust2005-1]
+	_ = x[Trust13-2]
+}
+
+const _Version_name = "TrustUnknownTrust2005Trust13"
+
+var _Version_index = [...]uint8{0, 12, 21, 28}
+
+func (i Version) String() string {
+	if i < 0 || i >= Version(len(_Version_index)-1) {
+		return "Version(" + strconv.FormatInt(int64(i), 10) + ")"
+	}
+	return _Version_name[_Version_index[i]:_Version_index[i+1]]
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/wstrust_endpoint.go 🔗

@@ -0,0 +1,199 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package defs
+
+import (
+	"encoding/xml"
+	"fmt"
+	"time"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	uuid "github.com/google/uuid"
+)
+
+//go:generate stringer -type=Version
+
+type Version int
+
+const (
+	TrustUnknown Version = iota
+	Trust2005
+	Trust13
+)
+
+// Endpoint represents a WSTrust endpoint.
+type Endpoint struct {
+	// Version is the version of the endpoint.
+	Version Version
+	// URL is the URL of the endpoint.
+	URL string
+}
+
+type wsTrustTokenRequestEnvelope struct {
+	XMLName xml.Name `xml:"s:Envelope"`
+	Text    string   `xml:",chardata"`
+	S       string   `xml:"xmlns:s,attr"`
+	Wsa     string   `xml:"xmlns:wsa,attr"`
+	Wsu     string   `xml:"xmlns:wsu,attr"`
+	Header  struct {
+		Text   string `xml:",chardata"`
+		Action struct {
+			Text           string `xml:",chardata"`
+			MustUnderstand string `xml:"s:mustUnderstand,attr"`
+		} `xml:"wsa:Action"`
+		MessageID struct {
+			Text string `xml:",chardata"`
+		} `xml:"wsa:messageID"`
+		ReplyTo struct {
+			Text    string `xml:",chardata"`
+			Address struct {
+				Text string `xml:",chardata"`
+			} `xml:"wsa:Address"`
+		} `xml:"wsa:ReplyTo"`
+		To struct {
+			Text           string `xml:",chardata"`
+			MustUnderstand string `xml:"s:mustUnderstand,attr"`
+		} `xml:"wsa:To"`
+		Security struct {
+			Text           string `xml:",chardata"`
+			MustUnderstand string `xml:"s:mustUnderstand,attr"`
+			Wsse           string `xml:"xmlns:wsse,attr"`
+			Timestamp      struct {
+				Text    string `xml:",chardata"`
+				ID      string `xml:"wsu:Id,attr"`
+				Created struct {
+					Text string `xml:",chardata"`
+				} `xml:"wsu:Created"`
+				Expires struct {
+					Text string `xml:",chardata"`
+				} `xml:"wsu:Expires"`
+			} `xml:"wsu:Timestamp"`
+			UsernameToken struct {
+				Text     string `xml:",chardata"`
+				ID       string `xml:"wsu:Id,attr"`
+				Username struct {
+					Text string `xml:",chardata"`
+				} `xml:"wsse:Username"`
+				Password struct {
+					Text string `xml:",chardata"`
+				} `xml:"wsse:Password"`
+			} `xml:"wsse:UsernameToken"`
+		} `xml:"wsse:Security"`
+	} `xml:"s:Header"`
+	Body struct {
+		Text                 string `xml:",chardata"`
+		RequestSecurityToken struct {
+			Text      string `xml:",chardata"`
+			Wst       string `xml:"xmlns:wst,attr"`
+			AppliesTo struct {
+				Text              string `xml:",chardata"`
+				Wsp               string `xml:"xmlns:wsp,attr"`
+				EndpointReference struct {
+					Text    string `xml:",chardata"`
+					Address struct {
+						Text string `xml:",chardata"`
+					} `xml:"wsa:Address"`
+				} `xml:"wsa:EndpointReference"`
+			} `xml:"wsp:AppliesTo"`
+			KeyType struct {
+				Text string `xml:",chardata"`
+			} `xml:"wst:KeyType"`
+			RequestType struct {
+				Text string `xml:",chardata"`
+			} `xml:"wst:RequestType"`
+		} `xml:"wst:RequestSecurityToken"`
+	} `xml:"s:Body"`
+}
+
+func buildTimeString(t time.Time) string {
+	// Golang time formats are weird: https://stackoverflow.com/questions/20234104/how-to-format-current-time-using-a-yyyymmddhhmmss-format
+	return t.Format("2006-01-02T15:04:05.000Z")
+}
+
+func (wte *Endpoint) buildTokenRequestMessage(authType authority.AuthorizeType, cloudAudienceURN string, username string, password string) (string, error) {
+	var soapAction string
+	var trustNamespace string
+	var keyType string
+	var requestType string
+
+	createdTime := time.Now().UTC()
+	expiresTime := createdTime.Add(10 * time.Minute)
+
+	switch wte.Version {
+	case Trust2005:
+		soapAction = trust2005Spec
+		trustNamespace = "http://schemas.xmlsoap.org/ws/2005/02/trust"
+		keyType = "http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey"
+		requestType = "http://schemas.xmlsoap.org/ws/2005/02/trust/Issue"
+	case Trust13:
+		soapAction = trust13Spec
+		trustNamespace = "http://docs.oasis-open.org/ws-sx/ws-trust/200512"
+		keyType = "http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer"
+		requestType = "http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue"
+	default:
+		return "", fmt.Errorf("buildTokenRequestMessage had Version == %q, which is not recognized", wte.Version)
+	}
+
+	var envelope wsTrustTokenRequestEnvelope
+
+	messageUUID := uuid.New()
+
+	envelope.S = "http://www.w3.org/2003/05/soap-envelope"
+	envelope.Wsa = "http://www.w3.org/2005/08/addressing"
+	envelope.Wsu = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
+
+	envelope.Header.Action.MustUnderstand = "1"
+	envelope.Header.Action.Text = soapAction
+	envelope.Header.MessageID.Text = "urn:uuid:" + messageUUID.String()
+	envelope.Header.ReplyTo.Address.Text = "http://www.w3.org/2005/08/addressing/anonymous"
+	envelope.Header.To.MustUnderstand = "1"
+	envelope.Header.To.Text = wte.URL
+
+	switch authType {
+	case authority.ATUnknown:
+		return "", fmt.Errorf("buildTokenRequestMessage had no authority type (%v)", authType)
+	case authority.ATUsernamePassword:
+		endpointUUID := uuid.New()
+
+		var trustID string
+		if wte.Version == Trust2005 {
+			trustID = "UnPwSecTok2005-" + endpointUUID.String()
+		} else {
+			trustID = "UnPwSecTok13-" + endpointUUID.String()
+		}
+
+		envelope.Header.Security.MustUnderstand = "1"
+		envelope.Header.Security.Wsse = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
+		envelope.Header.Security.Timestamp.ID = "MSATimeStamp"
+		envelope.Header.Security.Timestamp.Created.Text = buildTimeString(createdTime)
+		envelope.Header.Security.Timestamp.Expires.Text = buildTimeString(expiresTime)
+		envelope.Header.Security.UsernameToken.ID = trustID
+		envelope.Header.Security.UsernameToken.Username.Text = username
+		envelope.Header.Security.UsernameToken.Password.Text = password
+	default:
+		// This is just to note that we don't do anything for other cases.
+		// We aren't missing anything I know of.
+	}
+
+	envelope.Body.RequestSecurityToken.Wst = trustNamespace
+	envelope.Body.RequestSecurityToken.AppliesTo.Wsp = "http://schemas.xmlsoap.org/ws/2004/09/policy"
+	envelope.Body.RequestSecurityToken.AppliesTo.EndpointReference.Address.Text = cloudAudienceURN
+	envelope.Body.RequestSecurityToken.KeyType.Text = keyType
+	envelope.Body.RequestSecurityToken.RequestType.Text = requestType
+
+	output, err := xml.Marshal(envelope)
+	if err != nil {
+		return "", err
+	}
+
+	return string(output), nil
+}
+
+func (wte *Endpoint) BuildTokenRequestMessageWIA(cloudAudienceURN string) (string, error) {
+	return wte.buildTokenRequestMessage(authority.ATWindowsIntegrated, cloudAudienceURN, "", "")
+}
+
+func (wte *Endpoint) BuildTokenRequestMessageUsernamePassword(cloudAudienceURN string, username string, password string) (string, error) {
+	return wte.buildTokenRequestMessage(authority.ATUsernamePassword, cloudAudienceURN, username, password)
+}

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/wstrust_mex_document.go 🔗

@@ -0,0 +1,159 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package defs
+
+import (
+	"errors"
+	"fmt"
+	"strings"
+)
+
+//go:generate stringer -type=endpointType
+
+type endpointType int
+
+const (
+	etUnknown endpointType = iota
+	etUsernamePassword
+	etWindowsTransport
+)
+
+type wsEndpointData struct {
+	Version      Version
+	EndpointType endpointType
+}
+
+const trust13Spec string = "http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue"
+const trust2005Spec string = "http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue"
+
+type MexDocument struct {
+	UsernamePasswordEndpoint Endpoint
+	WindowsTransportEndpoint Endpoint
+	policies                 map[string]endpointType
+	bindings                 map[string]wsEndpointData
+}
+
+func updateEndpoint(cached *Endpoint, found Endpoint) {
+	// Prefer any endpoint over none, and a Trust13 endpoint over Trust2005.
+	// (cached is always a valid pointer here; the callers pass addresses of
+	// local Endpoint values.)
+	if cached.Version == TrustUnknown || (cached.Version == Trust2005 && found.Version == Trust13) {
+		*cached = found
+	}
+}
+
+// TODO(msal): Someone needs to write tests for everything below.
+
+// NewFromDef creates a new MexDocument.
+func NewFromDef(defs Definitions) (MexDocument, error) {
+	policies, err := policies(defs)
+	if err != nil {
+		return MexDocument{}, err
+	}
+
+	bindings, err := bindings(defs, policies)
+	if err != nil {
+		return MexDocument{}, err
+	}
+
+	userPass, windows, err := endpoints(defs, bindings)
+	if err != nil {
+		return MexDocument{}, err
+	}
+
+	return MexDocument{
+		UsernamePasswordEndpoint: userPass,
+		WindowsTransportEndpoint: windows,
+		policies:                 policies,
+		bindings:                 bindings,
+	}, nil
+}
+
+func policies(defs Definitions) (map[string]endpointType, error) {
+	policies := make(map[string]endpointType, len(defs.Policy))
+
+	for _, policy := range defs.Policy {
+		if policy.ExactlyOne.All.NegotiateAuthentication.XMLName.Local != "" {
+			if policy.ExactlyOne.All.TransportBinding.SP != "" && policy.ID != "" {
+				policies["#"+policy.ID] = etWindowsTransport
+			}
+		}
+
+		if policy.ExactlyOne.All.SignedEncryptedSupportingTokens.Policy.UsernameToken.Policy.WSSUsernameToken10.XMLName.Local != "" {
+			if policy.ExactlyOne.All.TransportBinding.SP != "" && policy.ID != "" {
+				policies["#"+policy.ID] = etUsernamePassword
+			}
+		}
+		if policy.ExactlyOne.All.SignedSupportingTokens.Policy.UsernameToken.Policy.WSSUsernameToken10.XMLName.Local != "" {
+			if policy.ExactlyOne.All.TransportBinding.SP != "" && policy.ID != "" {
+				policies["#"+policy.ID] = etUsernamePassword
+			}
+		}
+	}
+
+	if len(policies) == 0 {
+		return policies, errors.New("no policies for mex document")
+	}
+
+	return policies, nil
+}
+
+func bindings(defs Definitions, policies map[string]endpointType) (map[string]wsEndpointData, error) {
+	bindings := make(map[string]wsEndpointData, len(defs.Binding))
+
+	for _, binding := range defs.Binding {
+		policyName := binding.PolicyReference.URI
+		transport := binding.Binding.Transport
+
+		if transport == "http://schemas.xmlsoap.org/soap/http" {
+			if policy, ok := policies[policyName]; ok {
+				bindingName := binding.Name
+				specVersion := binding.Operation.Operation.SoapAction
+
+				if specVersion == trust13Spec {
+					bindings[bindingName] = wsEndpointData{Trust13, policy}
+				} else if specVersion == trust2005Spec {
+					bindings[bindingName] = wsEndpointData{Trust2005, policy}
+				} else {
+					return nil, errors.New("found unknown spec version in mex document")
+				}
+			}
+		}
+	}
+	return bindings, nil
+}
+
+func endpoints(defs Definitions, bindings map[string]wsEndpointData) (userPass, windows Endpoint, err error) {
+	for _, port := range defs.Service.Port {
+		bindingName := port.Binding
+
+		index := strings.Index(bindingName, ":")
+		if index != -1 {
+			bindingName = bindingName[index+1:]
+		}
+
+		if binding, ok := bindings[bindingName]; ok {
+			url := strings.TrimSpace(port.EndpointReference.Address.Text)
+			if url == "" {
+				return Endpoint{}, Endpoint{}, fmt.Errorf("MexDocument cannot have blank URL endpoint")
+			}
+			if binding.Version == TrustUnknown {
+				return Endpoint{}, Endpoint{}, fmt.Errorf("endpoint version unknown")
+			}
+			endpoint := Endpoint{Version: binding.Version, URL: url}
+
+			switch binding.EndpointType {
+			case etUsernamePassword:
+				updateEndpoint(&userPass, endpoint)
+			case etWindowsTransport:
+				updateEndpoint(&windows, endpoint)
+			default:
+				return Endpoint{}, Endpoint{}, errors.New("found unknown port type in MEX document")
+			}
+		}
+	}
+	return userPass, windows, nil
+}
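The endpoint-selection rule implemented by `updateEndpoint` above (anything beats `TrustUnknown`, WS-Trust 1.3 beats 2005, and a cached 1.3 endpoint is never downgraded) can be sketched standalone. The `Endpoint` type and version constants are re-declared here for illustration since the vendored ones live in an internal package, and the nil guard short-circuits rather than dereferencing:

```go
package main

import "fmt"

// Version mirrors the trust-version enum assumed from the vendored defs package.
type Version int

const (
	TrustUnknown Version = iota
	Trust2005
	Trust13
)

type Endpoint struct {
	Version Version
	URL     string
}

// updateEndpoint keeps the "best" endpoint seen so far: anything replaces
// TrustUnknown, and Trust13 replaces Trust2005. A cached Trust13 endpoint
// is never overwritten.
func updateEndpoint(cached *Endpoint, found Endpoint) {
	if cached == nil {
		return // nothing to update
	}
	if cached.Version == TrustUnknown ||
		(cached.Version == Trust2005 && found.Version == Trust13) {
		*cached = found
	}
}

func main() {
	var ep Endpoint // zero value: TrustUnknown
	updateEndpoint(&ep, Endpoint{Trust2005, "https://sts.example/2005"})
	updateEndpoint(&ep, Endpoint{Trust13, "https://sts.example/13"})
	updateEndpoint(&ep, Endpoint{Trust2005, "https://sts.example/other"})
	fmt.Println(ep.URL) // https://sts.example/13
}
```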

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/wstrust.go 🔗

@@ -0,0 +1,136 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+/*
+Package wstrust provides a client for communicating with a WSTrust (https://en.wikipedia.org/wiki/WS-Trust#:~:text=WS%2DTrust%20is%20a%20WS,in%20a%20secure%20message%20exchange.)
+for the purposes of extracting metadata from the service. This data can be used to acquire
+tokens using the accesstokens.Client.GetAccessTokenFromSamlGrant() call.
+*/
+package wstrust
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/grant"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs"
+)
+
+type xmlCaller interface {
+	XMLCall(ctx context.Context, endpoint string, headers http.Header, qv url.Values, resp interface{}) error
+	SOAPCall(ctx context.Context, endpoint, action string, headers http.Header, qv url.Values, body string, resp interface{}) error
+}
+
+type SamlTokenInfo struct {
+	AssertionType string // Should be either constants SAMLV1Grant or SAMLV2Grant.
+	Assertion     string
+}
+
+// Client represents the REST calls to get tokens from token generator backends.
+type Client struct {
+	// Comm provides the HTTP transport client.
+	Comm xmlCaller
+}
+
+// TODO(msal): This allows me to call Mex without having a real Def file on line 45.
+// This would fail because policies() would not find a policy. This is easy enough to
+// fix in test data, but Definitions is defined with built-in structs. That needs
+// to be pulled apart, and until then I have this hack in.
+var newFromDef = defs.NewFromDef
+
+// Mex provides metadata about a wstrust service.
+func (c Client) Mex(ctx context.Context, federationMetadataURL string) (defs.MexDocument, error) {
+	resp := defs.Definitions{}
+	err := c.Comm.XMLCall(
+		ctx,
+		federationMetadataURL,
+		http.Header{},
+		nil,
+		&resp,
+	)
+	if err != nil {
+		return defs.MexDocument{}, err
+	}
+
+	return newFromDef(resp)
+}
+
+const (
+	SoapActionDefault = "http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue"
+
+	// Note: Commented out because this action is not supported. It was in the original code
+	// but only used in a switch where it errored. Since there was only one value, a default
+	// worked better. However, buildTokenRequestMessage() had 2005 support.  I'm not actually
+// sure what's going on here. It's like we have half support.  For now this is here just
+// for documentation purposes in case we are going to add support.
+	//
+	// SoapActionWSTrust2005 = "http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue"
+)
+
+// SAMLTokenInfo provides SAML information that is used to generate a SAML token.
+func (c Client) SAMLTokenInfo(ctx context.Context, authParameters authority.AuthParams, cloudAudienceURN string, endpoint defs.Endpoint) (SamlTokenInfo, error) {
+	var wsTrustRequestMessage string
+	var err error
+
+	switch authParameters.AuthorizationType {
+	case authority.ATWindowsIntegrated:
+		wsTrustRequestMessage, err = endpoint.BuildTokenRequestMessageWIA(cloudAudienceURN)
+		if err != nil {
+			return SamlTokenInfo{}, err
+		}
+	case authority.ATUsernamePassword:
+		wsTrustRequestMessage, err = endpoint.BuildTokenRequestMessageUsernamePassword(
+			cloudAudienceURN, authParameters.Username, authParameters.Password)
+		if err != nil {
+			return SamlTokenInfo{}, err
+		}
+	default:
+		return SamlTokenInfo{}, fmt.Errorf("unknown auth type %v", authParameters.AuthorizationType)
+	}
+
+	var soapAction string
+	switch endpoint.Version {
+	case defs.Trust13:
+		soapAction = SoapActionDefault
+	case defs.Trust2005:
+		return SamlTokenInfo{}, errors.New("WS Trust 2005 support is not implemented")
+	default:
+		return SamlTokenInfo{}, fmt.Errorf("the SOAP endpoint for a wstrust call had an invalid version: %v", endpoint.Version)
+	}
+
+	resp := defs.SAMLDefinitions{}
+	err = c.Comm.SOAPCall(ctx, endpoint.URL, soapAction, http.Header{}, nil, wsTrustRequestMessage, &resp)
+	if err != nil {
+		return SamlTokenInfo{}, err
+	}
+
+	return c.samlAssertion(resp)
+}
+
+const (
+	samlv1Assertion = "urn:oasis:names:tc:SAML:1.0:assertion"
+	samlv2Assertion = "urn:oasis:names:tc:SAML:2.0:assertion"
+)
+
+func (c Client) samlAssertion(def defs.SAMLDefinitions) (SamlTokenInfo, error) {
+	for _, tokenResponse := range def.Body.RequestSecurityTokenResponseCollection.RequestSecurityTokenResponse {
+		token := tokenResponse.RequestedSecurityToken
+		if token.Assertion.XMLName.Local != "" {
+			assertion := token.AssertionRawXML
+
+			samlVersion := token.Assertion.Saml
+			switch samlVersion {
+			case samlv1Assertion:
+				return SamlTokenInfo{AssertionType: grant.SAMLV1, Assertion: assertion}, nil
+			case samlv2Assertion:
+				return SamlTokenInfo{AssertionType: grant.SAMLV2, Assertion: assertion}, nil
+			}
+			return SamlTokenInfo{}, fmt.Errorf("couldn't parse SAML assertion, version unknown: %q", samlVersion)
+		}
+	}
+	return SamlTokenInfo{}, errors.New("unknown WS-Trust version")
+}
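The `samlAssertion` helper above keys entirely off the assertion's XML namespace to pick the SAML grant type. A minimal sketch of that mapping, with the grant URN values assumed from the OAuth 2.0 SAML bearer assertion profile (the vendored code takes them from its internal `grant` package):

```go
package main

import (
	"errors"
	"fmt"
)

// Grant-type URNs assumed from RFC 7522 (SAML 2.0) and the SAML 1.1
// extension grant used by AAD; the vendored code imports these from
// internal/oauth/ops/internal/grant.
const (
	grantSAMLV1 = "urn:ietf:params:oauth:grant-type:saml1_1-bearer"
	grantSAMLV2 = "urn:ietf:params:oauth:grant-type:saml2-bearer"
)

// assertionGrant mirrors the namespace switch in samlAssertion above.
func assertionGrant(samlNamespace string) (string, error) {
	switch samlNamespace {
	case "urn:oasis:names:tc:SAML:1.0:assertion":
		return grantSAMLV1, nil
	case "urn:oasis:names:tc:SAML:2.0:assertion":
		return grantSAMLV2, nil
	}
	return "", errors.New("couldn't parse SAML assertion, version unknown")
}

func main() {
	g, err := assertionGrant("urn:oasis:names:tc:SAML:2.0:assertion")
	fmt.Println(g, err) // urn:ietf:params:oauth:grant-type:saml2-bearer <nil>
}
```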

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/resolvers.go 🔗

@@ -0,0 +1,149 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// TODO(msal): Write some tests. The original code this came from didn't have tests and I'm too
+// tired at this point to do it. It, like much of the other *Manager code I found, was
+// broken because it didn't have mutex protection.
+
+package oauth
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"strings"
+	"sync"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+)
+
+// ADFS is an active directory federation service authority type.
+const ADFS = "ADFS"
+
+type cacheEntry struct {
+	Endpoints             authority.Endpoints
+	ValidForDomainsInList map[string]bool
+}
+
+func createcacheEntry(endpoints authority.Endpoints) cacheEntry {
+	return cacheEntry{endpoints, map[string]bool{}}
+}
+
+// AuthorityEndpoint retrieves endpoints from an authority for auth and token acquisition.
+type authorityEndpoint struct {
+	rest *ops.REST
+
+	mu    sync.Mutex
+	cache map[string]cacheEntry
+}
+
+// newAuthorityEndpoint is the constructor for AuthorityEndpoint.
+func newAuthorityEndpoint(rest *ops.REST) *authorityEndpoint {
+	m := &authorityEndpoint{rest: rest, cache: map[string]cacheEntry{}}
+	return m
+}
+
+// ResolveEndpoints gets the authorization and token endpoints and creates an AuthorityEndpoints instance
+func (m *authorityEndpoint) ResolveEndpoints(ctx context.Context, authorityInfo authority.Info, userPrincipalName string) (authority.Endpoints, error) {
+
+	if endpoints, found := m.cachedEndpoints(authorityInfo, userPrincipalName); found {
+		return endpoints, nil
+	}
+
+	endpoint, err := m.openIDConfigurationEndpoint(ctx, authorityInfo, userPrincipalName)
+	if err != nil {
+		return authority.Endpoints{}, err
+	}
+
+	resp, err := m.rest.Authority().GetTenantDiscoveryResponse(ctx, endpoint)
+	if err != nil {
+		return authority.Endpoints{}, err
+	}
+	if err := resp.Validate(); err != nil {
+		return authority.Endpoints{}, fmt.Errorf("ResolveEndpoints(): %w", err)
+	}
+
+	tenant := authorityInfo.Tenant
+
+	endpoints := authority.NewEndpoints(
+		strings.Replace(resp.AuthorizationEndpoint, "{tenant}", tenant, -1),
+		strings.Replace(resp.TokenEndpoint, "{tenant}", tenant, -1),
+		strings.Replace(resp.Issuer, "{tenant}", tenant, -1),
+		authorityInfo.Host)
+
+	m.addCachedEndpoints(authorityInfo, userPrincipalName, endpoints)
+
+	return endpoints, nil
+}
+
+// cachedEndpoints returns the cached endpoints if they exist. If not, it returns false.
+func (m *authorityEndpoint) cachedEndpoints(authorityInfo authority.Info, userPrincipalName string) (authority.Endpoints, bool) {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+
+	if cacheEntry, ok := m.cache[authorityInfo.CanonicalAuthorityURI]; ok {
+		if authorityInfo.AuthorityType == ADFS {
+			domain, err := adfsDomainFromUpn(userPrincipalName)
+			if err == nil {
+				if _, ok := cacheEntry.ValidForDomainsInList[domain]; ok {
+					return cacheEntry.Endpoints, true
+				}
+			}
+		}
+		return cacheEntry.Endpoints, true
+	}
+	return authority.Endpoints{}, false
+}
+
+func (m *authorityEndpoint) addCachedEndpoints(authorityInfo authority.Info, userPrincipalName string, endpoints authority.Endpoints) {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+
+	updatedCacheEntry := createcacheEntry(endpoints)
+
+	if authorityInfo.AuthorityType == ADFS {
+		// Since we're here, we've made a call to the backend.  We want to ensure we're caching
+		// the latest values from the server.
+		if cacheEntry, ok := m.cache[authorityInfo.CanonicalAuthorityURI]; ok {
+			for k := range cacheEntry.ValidForDomainsInList {
+				updatedCacheEntry.ValidForDomainsInList[k] = true
+			}
+		}
+		domain, err := adfsDomainFromUpn(userPrincipalName)
+		if err == nil {
+			updatedCacheEntry.ValidForDomainsInList[domain] = true
+		}
+	}
+
+	m.cache[authorityInfo.CanonicalAuthorityURI] = updatedCacheEntry
+}
+
+func (m *authorityEndpoint) openIDConfigurationEndpoint(ctx context.Context, authorityInfo authority.Info, userPrincipalName string) (string, error) {
+	if authorityInfo.Tenant == "adfs" {
+		return fmt.Sprintf("https://%s/adfs/.well-known/openid-configuration", authorityInfo.Host), nil
+	} else if authorityInfo.ValidateAuthority && !authority.TrustedHost(authorityInfo.Host) {
+		resp, err := m.rest.Authority().AADInstanceDiscovery(ctx, authorityInfo)
+		if err != nil {
+			return "", err
+		}
+		return resp.TenantDiscoveryEndpoint, nil
+	} else if authorityInfo.Region != "" {
+		resp, err := m.rest.Authority().AADInstanceDiscovery(ctx, authorityInfo)
+		if err != nil {
+			return "", err
+		}
+		return resp.TenantDiscoveryEndpoint, nil
+
+	}
+
+	return authorityInfo.CanonicalAuthorityURI + "v2.0/.well-known/openid-configuration", nil
+}
+
+func adfsDomainFromUpn(userPrincipalName string) (string, error) {
+	parts := strings.Split(userPrincipalName, "@")
+	if len(parts) < 2 {
+		return "", errors.New("no @ present in user principal name")
+	}
+	return parts[1], nil
+}
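The ADFS endpoint cache above is keyed per UPN domain, extracted by `adfsDomainFromUpn`: everything after the first `@` in the user principal name. A small standalone sketch of that helper and its error case:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// adfsDomainFromUpn mirrors the helper above: the domain is the segment
// after the first "@" in the user principal name.
func adfsDomainFromUpn(userPrincipalName string) (string, error) {
	parts := strings.Split(userPrincipalName, "@")
	if len(parts) < 2 {
		return "", errors.New("no @ present in user principal name")
	}
	return parts[1], nil
}

func main() {
	d, err := adfsDomainFromUpn("alice@contoso.com")
	fmt.Println(d, err) // contoso.com <nil>

	_, err = adfsDomainFromUpn("not-a-upn")
	fmt.Println(err != nil) // true
}
```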

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/options/options.go 🔗

@@ -0,0 +1,52 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package options
+
+import (
+	"errors"
+	"fmt"
+)
+
+// CallOption implements an optional argument to a method call. See
+// https://blog.devgenius.io/go-call-option-that-can-be-used-with-multiple-methods-6c81734f3dbe
+// for an explanation of the usage pattern.
+type CallOption interface {
+	Do(any) error
+	callOption()
+}
+
+// ApplyOptions applies all the callOptions to options. options must be a pointer to a struct and
+// callOptions must be a list of objects that implement CallOption.
+func ApplyOptions[O, C any](options O, callOptions []C) error {
+	for _, o := range callOptions {
+		if t, ok := any(o).(CallOption); !ok {
+			return fmt.Errorf("unexpected option type %T", o)
+		} else if err := t.Do(options); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+// NewCallOption returns a new CallOption whose Do() method calls function "f".
+func NewCallOption(f func(any) error) CallOption {
+	if f == nil {
+		// This isn't a practical concern because only an MSAL maintainer can get
+		// us here, by implementing a do-nothing option. But if someone does that,
+		// the below ensures the method invoked with the option returns an error.
+		return callOption(func(any) error {
+			return errors.New("invalid option: missing implementation")
+		})
+	}
+	return callOption(f)
+}
+
+// callOption is an adapter for a function to a CallOption
+type callOption func(any) error
+
+func (c callOption) Do(a any) error {
+	return c(a)
+}
+
+func (callOption) callOption() {}
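The value of the `CallOption` pattern above is that one option constructor can serve several methods, each guarded by its own marker interface. A self-contained sketch of the pattern (the `getOptions`/`listOptions` types and `WithTenant` here are hypothetical, invented for illustration; the vendored package only exports `CallOption`, `ApplyOptions`, and `NewCallOption`):

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal re-implementation of the options package's pattern.
type CallOption interface {
	Do(any) error
	callOption()
}

type callOption func(any) error

func (c callOption) Do(a any) error { return c(a) }
func (callOption) callOption()      {}

// ApplyOptions runs every CallOption against the target options struct.
func ApplyOptions[O, C any](options O, callOptions []C) error {
	for _, o := range callOptions {
		t, ok := any(o).(CallOption)
		if !ok {
			return fmt.Errorf("unexpected option type %T", o)
		}
		if err := t.Do(options); err != nil {
			return err
		}
	}
	return nil
}

// Two hypothetical methods, each with its own options struct and marker.
type getOptions struct{ tenant string }
type listOptions struct{ tenant string }

type GetOption interface{ getOption() }
type ListOption interface{ listOption() }

// WithTenant is usable with both methods: the returned value embeds both
// marker interfaces (left nil, used only for static typing) plus the
// CallOption that does the actual work.
func WithTenant(t string) interface {
	GetOption
	ListOption
	CallOption
} {
	return struct {
		GetOption
		ListOption
		CallOption
	}{CallOption: callOption(func(a any) error {
		switch o := a.(type) {
		case *getOptions:
			o.tenant = t
		case *listOptions:
			o.tenant = t
		default:
			return errors.New("unexpected options type")
		}
		return nil
	})}
}

func main() {
	g := getOptions{}
	if err := ApplyOptions(&g, []GetOption{WithTenant("contoso")}); err != nil {
		panic(err)
	}
	fmt.Println(g.tenant) // contoso
}
```

This is why the vendored `public.go` options like `WithTenantID` return anonymous structs embedding several `Acquire...Option` interfaces: the compiler restricts which methods accept each option, while `ApplyOptions` dispatches on the concrete options struct at runtime.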

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared/shared.go 🔗

@@ -0,0 +1,72 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package shared
+
+import (
+	"net/http"
+	"reflect"
+	"strings"
+)
+
+const (
+	// CacheKeySeparator is used in creating the keys of the cache.
+	CacheKeySeparator = "-"
+)
+
+type Account struct {
+	HomeAccountID     string `json:"home_account_id,omitempty"`
+	Environment       string `json:"environment,omitempty"`
+	Realm             string `json:"realm,omitempty"`
+	LocalAccountID    string `json:"local_account_id,omitempty"`
+	AuthorityType     string `json:"authority_type,omitempty"`
+	PreferredUsername string `json:"username,omitempty"`
+	GivenName         string `json:"given_name,omitempty"`
+	FamilyName        string `json:"family_name,omitempty"`
+	MiddleName        string `json:"middle_name,omitempty"`
+	Name              string `json:"name,omitempty"`
+	AlternativeID     string `json:"alternative_account_id,omitempty"`
+	RawClientInfo     string `json:"client_info,omitempty"`
+	UserAssertionHash string `json:"user_assertion_hash,omitempty"`
+
+	AdditionalFields map[string]interface{}
+}
+
+// NewAccount creates an account.
+func NewAccount(homeAccountID, env, realm, localAccountID, authorityType, username string) Account {
+	return Account{
+		HomeAccountID:     homeAccountID,
+		Environment:       env,
+		Realm:             realm,
+		LocalAccountID:    localAccountID,
+		AuthorityType:     authorityType,
+		PreferredUsername: username,
+	}
+}
+
+// Key creates the key for storing accounts in the cache.
+func (acc Account) Key() string {
+	key := strings.Join([]string{acc.HomeAccountID, acc.Environment, acc.Realm}, CacheKeySeparator)
+	return strings.ToLower(key)
+}
+
+// IsZero reports whether the account is the zero value.
+func (acc Account) IsZero() bool {
+	v := reflect.ValueOf(acc)
+	for i := 0; i < v.NumField(); i++ {
+		field := v.Field(i)
+		if !field.IsZero() {
+			switch field.Kind() {
+			case reflect.Map, reflect.Slice:
+				if field.Len() == 0 {
+					continue
+				}
+			}
+			return false
+		}
+	}
+	return true
+}
+
+// DefaultClient is our default shared HTTP client.
+var DefaultClient = &http.Client{}
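The `Account.Key()` scheme above joins home account ID, environment, and realm with `CacheKeySeparator` and lowercases the result. A tiny sketch of the key format:

```go
package main

import (
	"fmt"
	"strings"
)

// accountKey mirrors Account.Key() above: the three identity fields joined
// by "-" and lowercased, so cache lookups are case-insensitive.
func accountKey(homeAccountID, env, realm string) string {
	return strings.ToLower(strings.Join([]string{homeAccountID, env, realm}, "-"))
}

func main() {
	fmt.Println(accountKey("UID.UTID", "login.microsoftonline.com", "Contoso"))
	// uid.utid-login.microsoftonline.com-contoso
}
```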

vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/public/public.go 🔗

@@ -0,0 +1,756 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+/*
+Package public provides a client for authentication of "public" applications. A "public"
+application is defined as an app that runs on client devices (android, ios, windows, linux, ...).
+These devices are "untrusted" and access resources via web APIs that must authenticate.
+*/
+package public
+
+/*
+Design note:
+
+public.Client uses client.Base as an embedded type. client.Base statically assigns its attributes
+during creation. As it doesn't have any pointers in it, anything borrowed from it, such as
+Base.AuthParams is a copy that is free to be manipulated here.
+*/
+
+// TODO(msal): This should have example code for each method on client using Go's example doc framework.
+// Base usage details should be included in the package documentation.
+
+import (
+	"context"
+	"crypto/rand"
+	"crypto/sha256"
+	"encoding/base64"
+	"errors"
+	"fmt"
+	"net/url"
+	"reflect"
+	"strconv"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/local"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/options"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared"
+	"github.com/google/uuid"
+	"github.com/pkg/browser"
+)
+
+// AuthResult contains the results of one token acquisition operation.
+// For details see https://aka.ms/msal-net-authenticationresult
+type AuthResult = base.AuthResult
+
+type AuthenticationScheme = authority.AuthenticationScheme
+
+type Account = shared.Account
+
+var errNoAccount = errors.New("no account was specified with public.WithSilentAccount(), or the specified account is invalid")
+
+// clientOptions configures the Client's behavior.
+type clientOptions struct {
+	accessor                 cache.ExportReplace
+	authority                string
+	capabilities             []string
+	disableInstanceDiscovery bool
+	httpClient               ops.HTTPClient
+}
+
+func (p *clientOptions) validate() error {
+	u, err := url.Parse(p.authority)
+	if err != nil {
+		return fmt.Errorf("Authority options cannot be URL parsed: %w", err)
+	}
+	if u.Scheme != "https" {
+		return fmt.Errorf("Authority(%s) did not start with https://", u.String())
+	}
+	return nil
+}
+
+// Option is an optional argument to the New constructor.
+type Option func(o *clientOptions)
+
+// WithAuthority allows for a custom authority to be set. This must be a valid https url.
+func WithAuthority(authority string) Option {
+	return func(o *clientOptions) {
+		o.authority = authority
+	}
+}
+
+// WithCache provides an accessor that will read and write authentication data to an externally managed cache.
+func WithCache(accessor cache.ExportReplace) Option {
+	return func(o *clientOptions) {
+		o.accessor = accessor
+	}
+}
+
+// WithClientCapabilities allows configuring one or more client capabilities such as "CP1"
+func WithClientCapabilities(capabilities []string) Option {
+	return func(o *clientOptions) {
+		// there's no danger of sharing the slice's underlying memory with the application because
+		// this slice is simply passed to base.WithClientCapabilities, which copies its data
+		o.capabilities = capabilities
+	}
+}
+
+// WithHTTPClient allows for a custom HTTP client to be set.
+func WithHTTPClient(httpClient ops.HTTPClient) Option {
+	return func(o *clientOptions) {
+		o.httpClient = httpClient
+	}
+}
+
+// WithInstanceDiscovery toggles authority validation; pass enabled=false to disable it (to support private cloud scenarios).
+func WithInstanceDiscovery(enabled bool) Option {
+	return func(o *clientOptions) {
+		o.disableInstanceDiscovery = !enabled
+	}
+}
+
+// Client is a representation of authentication client for public applications as defined in the
+// package doc. For more information, visit https://docs.microsoft.com/azure/active-directory/develop/msal-client-applications.
+type Client struct {
+	base base.Client
+}
+
+// New is the constructor for Client.
+func New(clientID string, options ...Option) (Client, error) {
+	opts := clientOptions{
+		authority:  base.AuthorityPublicCloud,
+		httpClient: shared.DefaultClient,
+	}
+
+	for _, o := range options {
+		o(&opts)
+	}
+	if err := opts.validate(); err != nil {
+		return Client{}, err
+	}
+
+	base, err := base.New(clientID, opts.authority, oauth.New(opts.httpClient), base.WithCacheAccessor(opts.accessor), base.WithClientCapabilities(opts.capabilities), base.WithInstanceDiscovery(!opts.disableInstanceDiscovery))
+	if err != nil {
+		return Client{}, err
+	}
+	return Client{base}, nil
+}
+
+// authCodeURLOptions contains options for AuthCodeURL
+type authCodeURLOptions struct {
+	claims, loginHint, tenantID, domainHint string
+}
+
+// AuthCodeURLOption is implemented by options for AuthCodeURL
+type AuthCodeURLOption interface {
+	authCodeURLOption()
+}
+
+// AuthCodeURL creates a URL used to acquire an authorization code.
+//
+// Options: [WithClaims], [WithDomainHint], [WithLoginHint], [WithTenantID]
+func (pca Client) AuthCodeURL(ctx context.Context, clientID, redirectURI string, scopes []string, opts ...AuthCodeURLOption) (string, error) {
+	o := authCodeURLOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return "", err
+	}
+	ap, err := pca.base.AuthParams.WithTenant(o.tenantID)
+	if err != nil {
+		return "", err
+	}
+	ap.Claims = o.claims
+	ap.LoginHint = o.loginHint
+	ap.DomainHint = o.domainHint
+	return pca.base.AuthCodeURL(ctx, clientID, redirectURI, scopes, ap)
+}
+
+// WithClaims sets additional claims to request for the token, such as those required by conditional access policies.
+// Use this option when Azure AD returned a claims challenge for a prior request. The argument must be decoded.
+// This option is valid for any token acquisition method.
+func WithClaims(claims string) interface {
+	AcquireByAuthCodeOption
+	AcquireByDeviceCodeOption
+	AcquireByUsernamePasswordOption
+	AcquireInteractiveOption
+	AcquireSilentOption
+	AuthCodeURLOption
+	options.CallOption
+} {
+	return struct {
+		AcquireByAuthCodeOption
+		AcquireByDeviceCodeOption
+		AcquireByUsernamePasswordOption
+		AcquireInteractiveOption
+		AcquireSilentOption
+		AuthCodeURLOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *acquireTokenByAuthCodeOptions:
+					t.claims = claims
+				case *acquireTokenByDeviceCodeOptions:
+					t.claims = claims
+				case *acquireTokenByUsernamePasswordOptions:
+					t.claims = claims
+				case *acquireTokenSilentOptions:
+					t.claims = claims
+				case *authCodeURLOptions:
+					t.claims = claims
+				case *interactiveAuthOptions:
+					t.claims = claims
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// WithAuthenticationScheme is an extensibility mechanism designed to be used only by Azure Arc for proof of possession access tokens.
+func WithAuthenticationScheme(authnScheme AuthenticationScheme) interface {
+	AcquireSilentOption
+	AcquireInteractiveOption
+	AcquireByUsernamePasswordOption
+	options.CallOption
+} {
+	return struct {
+		AcquireSilentOption
+		AcquireInteractiveOption
+		AcquireByUsernamePasswordOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *acquireTokenSilentOptions:
+					t.authnScheme = authnScheme
+				case *interactiveAuthOptions:
+					t.authnScheme = authnScheme
+				case *acquireTokenByUsernamePasswordOptions:
+					t.authnScheme = authnScheme
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// WithTenantID specifies a tenant for a single authentication. It may be different than the tenant set in [New] by [WithAuthority].
+// This option is valid for any token acquisition method.
+func WithTenantID(tenantID string) interface {
+	AcquireByAuthCodeOption
+	AcquireByDeviceCodeOption
+	AcquireByUsernamePasswordOption
+	AcquireInteractiveOption
+	AcquireSilentOption
+	AuthCodeURLOption
+	options.CallOption
+} {
+	return struct {
+		AcquireByAuthCodeOption
+		AcquireByDeviceCodeOption
+		AcquireByUsernamePasswordOption
+		AcquireInteractiveOption
+		AcquireSilentOption
+		AuthCodeURLOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *acquireTokenByAuthCodeOptions:
+					t.tenantID = tenantID
+				case *acquireTokenByDeviceCodeOptions:
+					t.tenantID = tenantID
+				case *acquireTokenByUsernamePasswordOptions:
+					t.tenantID = tenantID
+				case *acquireTokenSilentOptions:
+					t.tenantID = tenantID
+				case *authCodeURLOptions:
+					t.tenantID = tenantID
+				case *interactiveAuthOptions:
+					t.tenantID = tenantID
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// acquireTokenSilentOptions are all the optional settings to an AcquireTokenSilent() call.
+// These are set by using various AcquireTokenSilentOption functions.
+type acquireTokenSilentOptions struct {
+	account          Account
+	claims, tenantID string
+	authnScheme      AuthenticationScheme
+}
+
+// AcquireSilentOption is implemented by options for AcquireTokenSilent
+type AcquireSilentOption interface {
+	acquireSilentOption()
+}
+
+// WithSilentAccount uses the passed account during an AcquireTokenSilent() call.
+func WithSilentAccount(account Account) interface {
+	AcquireSilentOption
+	options.CallOption
+} {
+	return struct {
+		AcquireSilentOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *acquireTokenSilentOptions:
+					t.account = account
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// AcquireTokenSilent acquires a token from either the cache or using a refresh token.
+//
+// Options: [WithClaims], [WithSilentAccount], [WithTenantID]
+func (pca Client) AcquireTokenSilent(ctx context.Context, scopes []string, opts ...AcquireSilentOption) (AuthResult, error) {
+	o := acquireTokenSilentOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return AuthResult{}, err
+	}
+	// an account is required to find user tokens in the cache
+	if reflect.ValueOf(o.account).IsZero() {
+		return AuthResult{}, errNoAccount
+	}
+
+	silentParameters := base.AcquireTokenSilentParameters{
+		Scopes:      scopes,
+		Account:     o.account,
+		Claims:      o.claims,
+		RequestType: accesstokens.ATPublic,
+		IsAppCache:  false,
+		TenantID:    o.tenantID,
+		AuthnScheme: o.authnScheme,
+	}
+
+	return pca.base.AcquireTokenSilent(ctx, silentParameters)
+}
+
+// acquireTokenByUsernamePasswordOptions contains optional configuration for AcquireTokenByUsernamePassword
+type acquireTokenByUsernamePasswordOptions struct {
+	claims, tenantID string
+	authnScheme      AuthenticationScheme
+}
+
+// AcquireByUsernamePasswordOption is implemented by options for AcquireTokenByUsernamePassword
+type AcquireByUsernamePasswordOption interface {
+	acquireByUsernamePasswordOption()
+}
+
+// AcquireTokenByUsernamePassword acquires a security token from the authority, via Username/Password Authentication.
+// NOTE: this flow is NOT recommended.
+//
+// Options: [WithClaims], [WithTenantID]
+func (pca Client) AcquireTokenByUsernamePassword(ctx context.Context, scopes []string, username, password string, opts ...AcquireByUsernamePasswordOption) (AuthResult, error) {
+	o := acquireTokenByUsernamePasswordOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return AuthResult{}, err
+	}
+	authParams, err := pca.base.AuthParams.WithTenant(o.tenantID)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	authParams.Scopes = scopes
+	authParams.AuthorizationType = authority.ATUsernamePassword
+	authParams.Claims = o.claims
+	authParams.Username = username
+	authParams.Password = password
+	if o.authnScheme != nil {
+		authParams.AuthnScheme = o.authnScheme
+	}
+
+	token, err := pca.base.Token.UsernamePassword(ctx, authParams)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	return pca.base.AuthResultFromToken(ctx, authParams, token, true)
+}
+
+type DeviceCodeResult = accesstokens.DeviceCodeResult
+
+// DeviceCode provides the results of the device code flow's first stage (containing the code)
+// that must be entered on the second device and provides a method to retrieve the AuthenticationResult
+// once that code has been entered and verified.
+type DeviceCode struct {
+	// Result holds the information about the device code (such as the code).
+	Result DeviceCodeResult
+
+	authParams authority.AuthParams
+	client     Client
+	dc         oauth.DeviceCode
+}
+
+// AuthenticationResult retrieves the AuthenticationResult once the user enters the code
+// on the second device. Until then it blocks until the .AcquireTokenByDeviceCode() context
+// is cancelled or the token expires.
+func (d DeviceCode) AuthenticationResult(ctx context.Context) (AuthResult, error) {
+	token, err := d.dc.Token(ctx)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	return d.client.base.AuthResultFromToken(ctx, d.authParams, token, true)
+}
+
+// acquireTokenByDeviceCodeOptions contains optional configuration for AcquireTokenByDeviceCode
+type acquireTokenByDeviceCodeOptions struct {
+	claims, tenantID string
+}
+
+// AcquireByDeviceCodeOption is implemented by options for AcquireTokenByDeviceCode
+type AcquireByDeviceCodeOption interface {
+	acquireByDeviceCodeOptions()
+}
+
+// AcquireTokenByDeviceCode acquires a security token from the authority, by acquiring a device code and using that to acquire the token.
+//
+// Options: [WithClaims], [WithTenantID]
+func (pca Client) AcquireTokenByDeviceCode(ctx context.Context, scopes []string, opts ...AcquireByDeviceCodeOption) (DeviceCode, error) {
+	o := acquireTokenByDeviceCodeOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return DeviceCode{}, err
+	}
+	authParams, err := pca.base.AuthParams.WithTenant(o.tenantID)
+	if err != nil {
+		return DeviceCode{}, err
+	}
+	authParams.Scopes = scopes
+	authParams.AuthorizationType = authority.ATDeviceCode
+	authParams.Claims = o.claims
+
+	dc, err := pca.base.Token.DeviceCode(ctx, authParams)
+	if err != nil {
+		return DeviceCode{}, err
+	}
+
+	return DeviceCode{Result: dc.Result, authParams: authParams, client: pca, dc: dc}, nil
+}
+
+// acquireTokenByAuthCodeOptions contains the optional parameters used to acquire an access token using the authorization code flow.
+type acquireTokenByAuthCodeOptions struct {
+	challenge, claims, tenantID string
+}
+
+// AcquireByAuthCodeOption is implemented by options for AcquireTokenByAuthCode
+type AcquireByAuthCodeOption interface {
+	acquireByAuthCodeOption()
+}
+
+// WithChallenge allows you to provide a PKCE challenge for the .AcquireTokenByAuthCode() call.
+func WithChallenge(challenge string) interface {
+	AcquireByAuthCodeOption
+	options.CallOption
+} {
+	return struct {
+		AcquireByAuthCodeOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *acquireTokenByAuthCodeOptions:
+					t.challenge = challenge
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// AcquireTokenByAuthCode is a request to acquire a security token from the authority, using an authorization code.
+// The specified redirect URI must be the same URI that was used when the authorization code was requested.
+//
+// Options: [WithChallenge], [WithClaims], [WithTenantID]
+func (pca Client) AcquireTokenByAuthCode(ctx context.Context, code string, redirectURI string, scopes []string, opts ...AcquireByAuthCodeOption) (AuthResult, error) {
+	o := acquireTokenByAuthCodeOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return AuthResult{}, err
+	}
+
+	params := base.AcquireTokenAuthCodeParameters{
+		Scopes:      scopes,
+		Code:        code,
+		Challenge:   o.challenge,
+		Claims:      o.claims,
+		AppType:     accesstokens.ATPublic,
+		RedirectURI: redirectURI,
+		TenantID:    o.tenantID,
+	}
+
+	return pca.base.AcquireTokenByAuthCode(ctx, params)
+}
+
+// Accounts gets all the accounts in the token cache.
+// If there are no accounts in the cache the returned slice is empty.
+func (pca Client) Accounts(ctx context.Context) ([]Account, error) {
+	return pca.base.AllAccounts(ctx)
+}
+
+// RemoveAccount signs the account out and removes the account from the token cache.
+func (pca Client) RemoveAccount(ctx context.Context, account Account) error {
+	return pca.base.RemoveAccount(ctx, account)
+}
+
+// interactiveAuthOptions contains the optional parameters used to acquire an access token for interactive auth code flow.
+type interactiveAuthOptions struct {
+	claims, domainHint, loginHint, redirectURI, tenantID string
+	openURL                                              func(url string) error
+	authnScheme                                          AuthenticationScheme
+}
+
+// AcquireInteractiveOption is implemented by options for AcquireTokenInteractive
+type AcquireInteractiveOption interface {
+	acquireInteractiveOption()
+}
+
+// WithLoginHint pre-populates the login prompt with a username.
+func WithLoginHint(username string) interface {
+	AcquireInteractiveOption
+	AuthCodeURLOption
+	options.CallOption
+} {
+	return struct {
+		AcquireInteractiveOption
+		AuthCodeURLOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *authCodeURLOptions:
+					t.loginHint = username
+				case *interactiveAuthOptions:
+					t.loginHint = username
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// WithDomainHint adds the IdP domain as domain_hint query parameter in the auth url.
+func WithDomainHint(domain string) interface {
+	AcquireInteractiveOption
+	AuthCodeURLOption
+	options.CallOption
+} {
+	return struct {
+		AcquireInteractiveOption
+		AuthCodeURLOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *authCodeURLOptions:
+					t.domainHint = domain
+				case *interactiveAuthOptions:
+					t.domainHint = domain
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// WithRedirectURI sets a port for the local server used in interactive authentication, for
+// example http://localhost:port. All URI components other than the port are ignored.
+func WithRedirectURI(redirectURI string) interface {
+	AcquireInteractiveOption
+	options.CallOption
+} {
+	return struct {
+		AcquireInteractiveOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *interactiveAuthOptions:
+					t.redirectURI = redirectURI
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// WithOpenURL allows you to provide a function to open the browser to complete the interactive login, instead of launching the system default browser.
+func WithOpenURL(openURL func(url string) error) interface {
+	AcquireInteractiveOption
+	options.CallOption
+} {
+	return struct {
+		AcquireInteractiveOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *interactiveAuthOptions:
+					t.openURL = openURL
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// AcquireTokenInteractive acquires a security token from the authority using the default web browser to select the account.
+// https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-authentication-flows#interactive-and-non-interactive-authentication
+//
+// Options: [WithDomainHint], [WithLoginHint], [WithOpenURL], [WithRedirectURI], [WithTenantID]
+func (pca Client) AcquireTokenInteractive(ctx context.Context, scopes []string, opts ...AcquireInteractiveOption) (AuthResult, error) {
+	o := interactiveAuthOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return AuthResult{}, err
+	}
+	// the code verifier is a random 32-byte sequence that's been base-64 encoded without padding.
+	// it's used to prevent MitM attacks during auth code flow, see https://tools.ietf.org/html/rfc7636
+	cv, challenge, err := codeVerifier()
+	if err != nil {
+		return AuthResult{}, err
+	}
+	var redirectURL *url.URL
+	if o.redirectURI != "" {
+		redirectURL, err = url.Parse(o.redirectURI)
+		if err != nil {
+			return AuthResult{}, err
+		}
+	}
+	if o.openURL == nil {
+		o.openURL = browser.OpenURL
+	}
+	authParams, err := pca.base.AuthParams.WithTenant(o.tenantID)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	authParams.Scopes = scopes
+	authParams.AuthorizationType = authority.ATInteractive
+	authParams.Claims = o.claims
+	authParams.CodeChallenge = challenge
+	authParams.CodeChallengeMethod = "S256"
+	authParams.LoginHint = o.loginHint
+	authParams.DomainHint = o.domainHint
+	authParams.State = uuid.New().String()
+	authParams.Prompt = "select_account"
+	if o.authnScheme != nil {
+		authParams.AuthnScheme = o.authnScheme
+	}
+	res, err := pca.browserLogin(ctx, redirectURL, authParams, o.openURL)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	authParams.Redirecturi = res.redirectURI
+
+	req, err := accesstokens.NewCodeChallengeRequest(authParams, accesstokens.ATPublic, nil, res.authCode, cv)
+	if err != nil {
+		return AuthResult{}, err
+	}
+
+	token, err := pca.base.Token.AuthCode(ctx, req)
+	if err != nil {
+		return AuthResult{}, err
+	}
+
+	return pca.base.AuthResultFromToken(ctx, authParams, token, true)
+}
+
+type interactiveAuthResult struct {
+	authCode    string
+	redirectURI string
+}
+
+// parses the port number from the provided URL.
+// returns 0 if nil or no port is specified.
+func parsePort(u *url.URL) (int, error) {
+	if u == nil {
+		return 0, nil
+	}
+	p := u.Port()
+	if p == "" {
+		return 0, nil
+	}
+	return strconv.Atoi(p)
+}
+
+// browserLogin calls openURL and waits for a user to log in
+func (pca Client) browserLogin(ctx context.Context, redirectURI *url.URL, params authority.AuthParams, openURL func(string) error) (interactiveAuthResult, error) {
+	// start local redirect server so login can call us back
+	port, err := parsePort(redirectURI)
+	if err != nil {
+		return interactiveAuthResult{}, err
+	}
+	srv, err := local.New(params.State, port)
+	if err != nil {
+		return interactiveAuthResult{}, err
+	}
+	defer srv.Shutdown()
+	params.Scopes = accesstokens.AppendDefaultScopes(params)
+	authURL, err := pca.base.AuthCodeURL(ctx, params.ClientID, srv.Addr, params.Scopes, params)
+	if err != nil {
+		return interactiveAuthResult{}, err
+	}
+	// open browser window so user can select credentials
+	if err := openURL(authURL); err != nil {
+		return interactiveAuthResult{}, err
+	}
+	// now wait until the login redirect calls us back
+	res := srv.Result(ctx)
+	if res.Err != nil {
+		return interactiveAuthResult{}, res.Err
+	}
+	return interactiveAuthResult{
+		authCode:    res.Code,
+		redirectURI: srv.Addr,
+	}, nil
+}
+
+// creates a code verifier string along with its SHA256 hash which
+// is used as the challenge when requesting an auth code.
+// used in interactive auth flow for PKCE.
+func codeVerifier() (codeVerifier string, challenge string, err error) {
+	cvBytes := make([]byte, 32)
+	if _, err = rand.Read(cvBytes); err != nil {
+		return
+	}
+	codeVerifier = base64.RawURLEncoding.EncodeToString(cvBytes)
+	// for PKCE, create a hash of the code verifier
+	cvh := sha256.Sum256([]byte(codeVerifier))
+	challenge = base64.RawURLEncoding.EncodeToString(cvh[:])
+	return
+}

vendor/github.com/JohannesKaufmann/html-to-markdown/LICENSE 🔗

@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2018 Johannes Kaufmann
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

vendor/github.com/JohannesKaufmann/html-to-markdown/README.md 🔗

@@ -0,0 +1,242 @@
+# html-to-markdown
+
+[![Go Report Card](https://goreportcard.com/badge/github.com/JohannesKaufmann/html-to-markdown)](https://goreportcard.com/report/github.com/JohannesKaufmann/html-to-markdown)
+[![codecov](https://codecov.io/gh/JohannesKaufmann/html-to-markdown/branch/master/graph/badge.svg)](https://codecov.io/gh/JohannesKaufmann/html-to-markdown)
+![GitHub MIT License](https://img.shields.io/github/license/JohannesKaufmann/html-to-markdown)
+[![GoDoc](https://godoc.org/github.com/JohannesKaufmann/html-to-markdown?status.png)](http://godoc.org/github.com/JohannesKaufmann/html-to-markdown)
+
+![Gopher, the mascot of Golang, is wearing a party hat and holding a balloon. Next to the Gopher is a machine that converts characters associated with HTML to characters associated with Markdown.](/logo_five_years.png)
+
+Convert HTML into Markdown with Go. It is using an [HTML Parser](https://github.com/PuerkitoBio/goquery) to avoid the use of `regexp` as much as possible. That should prevent some [weird cases](https://stackoverflow.com/a/1732454) and allows it to be used for cases where the input is totally unknown.
+
+## Installation
+
+```
+go get github.com/JohannesKaufmann/html-to-markdown
+```
+
+## Usage
+
+```go
+import (
+	"fmt"
+	"log"
+
+	md "github.com/JohannesKaufmann/html-to-markdown"
+)
+
+converter := md.NewConverter("", true, nil)
+
+html := `<strong>Important</strong>`
+
+markdown, err := converter.ConvertString(html)
+if err != nil {
+  log.Fatal(err)
+}
+fmt.Println("md ->", markdown)
+```
+
+If you are already using [goquery](https://github.com/PuerkitoBio/goquery) you can pass a selection to `Convert`.
+
+```go
+markdown, err := converter.Convert(selec)
+```
+
+### Using it on the command line
+
+If you want to make use of `html-to-markdown` on the command line without any Go coding, check out [`html2md`](https://github.com/suntong/html2md#usage), a CLI wrapper for `html-to-markdown` that has all the following options and plugins built in.
+
+## Options
+
+The third parameter to `md.NewConverter` is `*md.Options`.
+
+For example, you can change the delimiter around bold text ("`**`") to a different one (for example "`__`") by changing the value of `StrongDelimiter`.
+
+```go
+opt := &md.Options{
+  StrongDelimiter: "__", // default: **
+  // ...
+}
+converter := md.NewConverter("", true, opt)
+```
+
+For all the possible options look at the [godocs](https://godoc.org/github.com/JohannesKaufmann/html-to-markdown/#Options) and for an example look at the [example](/examples/options/main.go).
+
+## Adding Rules
+
+```go
+converter.AddRules(
+  md.Rule{
+    Filter: []string{"del", "s", "strike"},
+    Replacement: func(content string, selec *goquery.Selection, opt *md.Options) *string {
+      // You need to return a pointer to a string (md.String is just a helper function).
+      // If you return nil the next function for that html element
+      // will be picked. For example you could only convert an element
+      // if it has a certain class name and fallback if not.
+      content = strings.TrimSpace(content)
+      return md.String("~" + content + "~")
+    },
+  },
+  // more rules
+)
+```
+
+For more information have a look at the example [add_rules](/examples/add_rules/main.go).
+
+## Using Plugins
+
+If you want plugins (GitHub Flavored Markdown features like strikethrough, tables, ...) you can pass them to `Use`.
+
+```go
+import "github.com/JohannesKaufmann/html-to-markdown/plugin"
+
+// Use the `GitHubFlavored` plugin from the `plugin` package.
+converter.Use(plugin.GitHubFlavored())
+```
+
+Or, if you only want the `Strikethrough` plugin, you can change the character that marks
+crossed-out text by setting the first argument to a different value (for example "~~" instead of "~").
+
+```go
+converter.Use(plugin.Strikethrough(""))
+```
+
+For more information have a look at the example [github_flavored](/examples/github_flavored/main.go).
+
+---
+
+These are the plugins located in the [plugin folder](/plugin) which you can use by importing "github.com/JohannesKaufmann/html-to-markdown/plugin".
+
+| Name                  | Description                                                                                 |
+| --------------------- | ------------------------------------------------------------------------------------------- |
+| GitHubFlavored        | GitHub's Flavored Markdown contains `TaskListItems`, `Strikethrough` and `Table`.           |
+| TaskListItems         | (Included in `GitHubFlavored`). Converts `<input>` checkboxes into `- [x] Task`.            |
+| Strikethrough         | (Included in `GitHubFlavored`). Converts `<strike>`, `<s>`, and `<del>` to the `~~` syntax. |
+| Table                 | (Included in `GitHubFlavored`). Convert a `<table>` into something like this...             |
+| TableCompat           |                                                                                             |
+|                       |                                                                                             |
+| VimeoEmbed            |                                                                                             |
+| YoutubeEmbed          |                                                                                             |
+|                       |                                                                                             |
+| ConfluenceCodeBlock   | Converts `<ac:structured-macro>` elements that are used in Atlassian’s Wiki "Confluence".   |
+| ConfluenceAttachments | Converts `<ri:attachment ri:filename=""/>` elements.                                        |
+
+These are the plugins in other repositories:
+
+| Name                         | Description         |
+| ---------------------------- | ------------------- |
+| \[Plugin Name\]\(Your Link\) | A short description |
+
+If you write a plugin, feel free to open a PR that adds your plugin to this list.
+
+## Writing Plugins
+
+Have a look at the [plugin folder](/plugin) for a reference implementation. The most basic one is [Strikethrough](/plugin/strikethrough.go).
+
+## Security
+
+This library produces markdown that is readable and can be changed by humans.
+
+Once you convert this markdown back to HTML (e.g. using [goldmark](https://github.com/yuin/goldmark) or [blackfriday](https://github.com/russross/blackfriday)) you need to be careful of malicious content.
+
+This library does NOT sanitize untrusted content. Use an HTML sanitizer such as [bluemonday](https://github.com/microcosm-cc/bluemonday) before displaying the HTML in the browser.
+
+## Other Methods
+
+[Godoc](https://godoc.org/github.com/JohannesKaufmann/html-to-markdown)
+
+### `func (c *Converter) Keep(tags ...string) *Converter`
+
+Determines which elements are to be kept and rendered as HTML.
+
+### `func (c *Converter) Remove(tags ...string) *Converter`
+
+Determines which elements are to be removed altogether i.e. converted to an empty string.
+
+## Escaping
+
+Some characters have a special meaning in markdown. For example, the character "\*" can be used for lists, emphasis and dividers. By placing a backslash before that character (e.g. "\\\*") you can "escape" it. Then the character will render as a raw "\*" without the _"markdown meaning"_ applied.
+
+But why is "escaping" even necessary?
+
+<!-- prettier-ignore -->
+```md
+Paragraph 1
+-
+Paragraph 2
+```
+
+The markdown above doesn't seem that problematic. But "Paragraph 1" (with only one hyphen below) will be recognized as a _setext heading_.
+
+```html
+<h2>Paragraph 1</h2>
+<p>Paragraph 2</p>
+```
+
+A well-placed backslash character would prevent that...
+
+<!-- prettier-ignore -->
+```md
+Paragraph 1
+\-
+Paragraph 2
+```
+
+---
+
+How to configure escaping? Depending on the `EscapeMode` option, the markdown output is going to be different.
+
+```go
+opt = &md.Options{
+	EscapeMode: "basic", // default
+}
+```
+
+Let's try it out with this HTML input:
+
+|          |                                                       |
+| -------- | ----------------------------------------------------- |
+| input    | `<p>fake **bold** and real <strong>bold</strong></p>` |
+|          |                                                       |
+|          | **With EscapeMode "basic"**                           |
+| output   | `fake \*\*bold\*\* and real **bold**`                 |
+| rendered | fake \*\*bold\*\* and real **bold**                   |
+|          |                                                       |
+|          | **With EscapeMode "disabled"**                        |
+| output   | `fake **bold** and real **bold**`                     |
+| rendered | fake **bold** and real **bold**                       |
+
+With **basic** escaping, we get some escape characters (the backslash "\\") but it renders correctly.
+
+With escaping **disabled**, the fake and real bold can't be distinguished in the markdown. That means it is both going to render as bold.
+
+---
+
+So now you know the purpose of escaping. However, if you encounter some content where the escaping breaks, you can manually disable it. But please also open an issue!
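A drastically simplified version of what "basic" escaping does can be written with `strings.NewReplacer`. This is my own sketch of the idea, not the library's actual `escape` package, which additionally handles headings, lists, blockquotes, and dividers with regular expressions:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeInline escapes the inline markdown metacharacters so that literal
// text such as "**not bold**" survives a markdown round trip unchanged.
var escapeInline = strings.NewReplacer(
	`*`, `\*`,
	`_`, `\_`,
	"`", "\\`",
	`|`, `\|`,
).Replace

func main() {
	fmt.Println(escapeInline("fake **bold** and a | pipe"))
}
```

Running it prints `fake \*\*bold\*\* and a \| pipe`; a markdown renderer then displays the asterisks and pipe literally instead of interpreting them.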
+
+## Issues
+
+If you find HTML snippets (or even full websites) that don't produce the expected results, please open an issue!
+
+## Contributing & Testing
+
+Please first discuss the change you wish to make, by opening an issue. I'm also happy to guide you to where a change is most likely needed.
+
+_Note: The outside API should not change because of backwards compatibility..._
+
+You don't have to be afraid of breaking the converter, since there are many "Golden File Tests":
+
+Add your problematic HTML snippet to one of the `input.html` files in the `testdata` folder. Then run `go test -update` and have a look at which `.golden` files changed in GIT.
+
+You can now change the internal logic and inspect what impact your change has by running `go test -update` again.
+
+_Note: Before submitting your change as a PR, make sure that you run those tests and check the files into GIT..._
+
+## Related Projects
+
+- [turndown (js)](https://github.com/domchristie/turndown), a very good library written in javascript.
+- [lunny/html2md](https://github.com/lunny/html2md), which is using [regex instead of goquery](https://stackoverflow.com/a/1732454). I ran into a few edge cases when using it (leaving some HTML comments, ...) so I wrote my own.
+
+## License
+
+This project is licensed under the terms of the MIT license.

vendor/github.com/JohannesKaufmann/html-to-markdown/commonmark.go 🔗

@@ -0,0 +1,393 @@
+package md
+
+import (
+	"fmt"
+	"unicode"
+
+	"regexp"
+	"strconv"
+	"strings"
+	"unicode/utf8"
+
+	"github.com/JohannesKaufmann/html-to-markdown/escape"
+	"github.com/PuerkitoBio/goquery"
+)
+
+var multipleSpacesR = regexp.MustCompile(`  +`)
+
+var commonmark = []Rule{
+	{
+		Filter: []string{"ul", "ol"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			parent := selec.Parent()
+
+			// we have a nested list, where the ul/ol is inside a list item
+			// -> based on work done by @requilence from @anytypeio
+			if (parent.Is("li") || parent.Is("ul") || parent.Is("ol")) && parent.Children().Last().IsSelection(selec) {
+				// add a line break prefix if the parent's text node doesn't have it.
+				// that makes sure that every list item is on its own line
+				lastContentTextNode := strings.TrimRight(parent.Nodes[0].FirstChild.Data, " \t")
+				if !strings.HasSuffix(lastContentTextNode, "\n") {
+					content = "\n" + content
+				}
+
+				// remove empty lines between lists
+				trimmedSpaceContent := strings.TrimRight(content, " \t")
+				if strings.HasSuffix(trimmedSpaceContent, "\n") {
+					content = strings.TrimRightFunc(content, unicode.IsSpace)
+				}
+			} else {
+				content = "\n\n" + content + "\n\n"
+			}
+			return &content
+		},
+	},
+	{
+		Filter: []string{"li"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			if strings.TrimSpace(content) == "" {
+				return nil
+			}
+
+			// remove leading newlines
+			content = leadingNewlinesR.ReplaceAllString(content, "")
+			// replace trailing newlines with just a single one
+			content = trailingNewlinesR.ReplaceAllString(content, "\n")
+			// remove leading spaces
+			content = strings.TrimLeft(content, " ")
+
+			prefix := selec.AttrOr(attrListPrefix, "")
+
+			// `prefixCount` is not necessarily the length of the empty string `prefix`
+			// but how much space is reserved for the prefixes of the siblings.
+			prefixCount, previousPrefixCounts := countListParents(opt, selec)
+
+			// if the prefix is not needed, balance it by adding the usual prefix spaces
+			if prefix == "" {
+				prefix = strings.Repeat(" ", prefixCount)
+			}
+			// indent the prefix so that the nested links are represented
+			indent := strings.Repeat(" ", previousPrefixCounts)
+			prefix = indent + prefix
+
+			content = IndentMultiLineListItem(opt, content, prefixCount+previousPrefixCounts)
+
+			return String(prefix + content + "\n")
+		},
+	},
+	{
+		Filter: []string{"#text"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			text := selec.Text()
+			if trimmed := strings.TrimSpace(text); trimmed == "" {
+				return String("")
+			}
+			text = tabR.ReplaceAllString(text, " ")
+
+			// replace multiple spaces by one space: don't accidentally make
+			// normal text be indented and thus be a code block.
+			text = multipleSpacesR.ReplaceAllString(text, " ")
+
+			if opt.EscapeMode == "basic" {
+				text = escape.MarkdownCharacters(text)
+			}
+
+			// if it's inside a list, trim the spaces to not mess up the indentation
+			parent := selec.Parent()
+			next := selec.Next()
+			if IndexWithText(selec) == 0 &&
+				(parent.Is("li") || parent.Is("ol") || parent.Is("ul")) &&
+				(next.Is("ul") || next.Is("ol")) {
+				// trim only spaces and not new lines
+				text = strings.Trim(text, ` `)
+			}
+
+			return &text
+		},
+	},
+	{
+		Filter: []string{"p", "div"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			parent := goquery.NodeName(selec.Parent())
+			if IsInlineElement(parent) || parent == "li" {
+				content = "\n" + content + "\n"
+				return &content
+			}
+
+			// remove unnecessary spaces to have clean markdown
+			content = TrimpLeadingSpaces(content)
+
+			content = "\n\n" + content + "\n\n"
+			return &content
+		},
+	},
+	{
+		Filter: []string{"h1", "h2", "h3", "h4", "h5", "h6"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			if strings.TrimSpace(content) == "" {
+				return nil
+			}
+
+			content = strings.Replace(content, "\n", " ", -1)
+			content = strings.Replace(content, "\r", " ", -1)
+			content = strings.Replace(content, `#`, `\#`, -1)
+			content = strings.TrimSpace(content)
+
+			insideLink := selec.ParentsFiltered("a").Length() > 0
+			if insideLink {
+				text := opt.StrongDelimiter + content + opt.StrongDelimiter
+				text = AddSpaceIfNessesary(selec, text)
+				return &text
+			}
+
+			node := goquery.NodeName(selec)
+			level, err := strconv.Atoi(node[1:])
+			if err != nil {
+				return nil
+			}
+
+			if opt.HeadingStyle == "setext" && level < 3 {
+				line := "-"
+				if level == 1 {
+					line = "="
+				}
+
+				underline := strings.Repeat(line, len(content))
+				return String("\n\n" + content + "\n" + underline + "\n\n")
+			}
+
+			prefix := strings.Repeat("#", level)
+			text := "\n\n" + prefix + " " + content + "\n\n"
+			return &text
+		},
+	},
+	{
+		Filter: []string{"strong", "b"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			// only use one bold tag if they are nested
+			parent := selec.Parent()
+			if parent.Is("strong") || parent.Is("b") {
+				return &content
+			}
+
+			trimmed := strings.TrimSpace(content)
+			if trimmed == "" {
+				return &trimmed
+			}
+
+			// If there is a newline character between the start and end delimiter
+			// the delimiters won't be recognized. Either we remove all newline characters
+			// OR on _every_ line we put start & end delimiters.
+			trimmed = delimiterForEveryLine(trimmed, opt.StrongDelimiter)
+
+			// Always have a space to the side to recognize the delimiter
+			trimmed = AddSpaceIfNessesary(selec, trimmed)
+
+			return &trimmed
+		},
+	},
+	{
+		Filter: []string{"i", "em"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			// only use one italic tag if they are nested
+			parent := selec.Parent()
+			if parent.Is("i") || parent.Is("em") {
+				return &content
+			}
+
+			trimmed := strings.TrimSpace(content)
+			if trimmed == "" {
+				return &trimmed
+			}
+
+			// If there is a newline character between the start and end delimiter
+			// the delimiters won't be recognized. Either we remove all newline characters
+			// OR on _every_ line we put start & end delimiters.
+			trimmed = delimiterForEveryLine(trimmed, opt.EmDelimiter)
+
+			// Always have a space to the side to recognize the delimiter
+			trimmed = AddSpaceIfNessesary(selec, trimmed)
+
+			return &trimmed
+		},
+	},
+	{
+		Filter: []string{"img"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			src := selec.AttrOr("src", "")
+			src = strings.TrimSpace(src)
+			if src == "" {
+				return String("")
+			}
+
+			src = opt.GetAbsoluteURL(selec, src, opt.domain)
+
+			alt := selec.AttrOr("alt", "")
+			alt = strings.Replace(alt, "\n", " ", -1)
+
+			text := fmt.Sprintf("![%s](%s)", alt, src)
+			return &text
+		},
+	},
+	{
+		Filter: []string{"a"},
+		AdvancedReplacement: func(content string, selec *goquery.Selection, opt *Options) (AdvancedResult, bool) {
+			// if there is no href, no link is used. So just return the content inside the link
+			href, ok := selec.Attr("href")
+			if !ok || strings.TrimSpace(href) == "" || strings.TrimSpace(href) == "#" {
+				return AdvancedResult{
+					Markdown: content,
+				}, false
+			}
+
+			href = opt.GetAbsoluteURL(selec, href, opt.domain)
+
+			// having multiline content inside a link is a bit tricky
+			content = EscapeMultiLine(content)
+
+			var title string
+			if t, ok := selec.Attr("title"); ok {
+				t = strings.Replace(t, "\n", " ", -1)
+				// escape all quotes
+				t = strings.Replace(t, `"`, `\"`, -1)
+				title = fmt.Sprintf(` "%s"`, t)
+			}
+
+			// if there is no link content (for example because it contains an svg)
+			// the 'title' or 'aria-label' attribute is used instead.
+			if strings.TrimSpace(content) == "" {
+				content = selec.AttrOr("title", selec.AttrOr("aria-label", ""))
+			}
+
+			// a link without text won't be displayed anyway
+			if content == "" {
+				return AdvancedResult{}, true
+			}
+
+			if opt.LinkStyle == "inlined" {
+				md := fmt.Sprintf("[%s](%s%s)", content, href, title)
+				md = AddSpaceIfNessesary(selec, md)
+
+				return AdvancedResult{
+					Markdown: md,
+				}, false
+			}
+
+			var replacement string
+			var reference string
+
+			switch opt.LinkReferenceStyle {
+			case "collapsed":
+
+				replacement = "[" + content + "][]"
+				reference = "[" + content + "]: " + href + title
+			case "shortcut":
+				replacement = "[" + content + "]"
+				reference = "[" + content + "]: " + href + title
+
+			default:
+				id := selec.AttrOr("data-index", "")
+				replacement = "[" + content + "][" + id + "]"
+				reference = "[" + id + "]: " + href + title
+			}
+
+			replacement = AddSpaceIfNessesary(selec, replacement)
+			return AdvancedResult{Markdown: replacement, Footer: reference}, false
+		},
+	},
+	{
+		Filter: []string{"code", "kbd", "samp", "tt"},
+		Replacement: func(_ string, selec *goquery.Selection, opt *Options) *string {
+			code := getCodeContent(selec)
+
+			// Newlines in the text aren't great, since this is inline code and not a code block.
+			// Newlines will be stripped anyway in the browser, but it won't be recognized as code
+			// by the markdown parser when there is more than one newline.
+			// So limit it to at most one newline.
+			code = multipleNewLinesRegex.ReplaceAllString(code, "\n")
+
+			fenceChar := '`'
+			maxCount := calculateCodeFenceOccurrences(fenceChar, code)
+			maxCount++
+
+			fence := strings.Repeat(string(fenceChar), maxCount)
+
+			// code block contains a backtick as first character
+			if strings.HasPrefix(code, "`") {
+				code = " " + code
+			}
+			// code block contains a backtick as last character
+			if strings.HasSuffix(code, "`") {
+				code = code + " "
+			}
+
+			// TODO: configure delimiter in options?
+			text := fence + code + fence
+			text = AddSpaceIfNessesary(selec, text)
+			return &text
+		},
+	},
+	{
+		Filter: []string{"pre"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			codeElement := selec.Find("code")
+			language := codeElement.AttrOr("class", "")
+			language = strings.Replace(language, "language-", "", 1)
+
+			code := getCodeContent(selec)
+
+			fenceChar, _ := utf8.DecodeRuneInString(opt.Fence)
+			fence := CalculateCodeFence(fenceChar, code)
+
+			text := "\n\n" + fence + language + "\n" +
+				code +
+				"\n" + fence + "\n\n"
+			return &text
+		},
+	},
+	{
+		Filter: []string{"hr"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			// e.g. `## --- Heading` would look weird, so don't render a divider if inside a heading
+			insideHeading := selec.ParentsFiltered("h1,h2,h3,h4,h5,h6").Length() > 0
+			if insideHeading {
+				return String("")
+			}
+
+			text := "\n\n" + opt.HorizontalRule + "\n\n"
+			return &text
+		},
+	},
+	{
+		Filter: []string{"br"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			return String("\n\n")
+		},
+	},
+	{
+		Filter: []string{"blockquote"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			content = strings.TrimSpace(content)
+			if content == "" {
+				return nil
+			}
+
+			content = multipleNewLinesRegex.ReplaceAllString(content, "\n\n")
+
+			var beginningR = regexp.MustCompile(`(?m)^`)
+			content = beginningR.ReplaceAllString(content, "> ")
+
+			text := "\n\n" + content + "\n\n"
+			return &text
+		},
+	},
+	{
+		Filter: []string{"noscript"},
+		Replacement: func(content string, selec *goquery.Selection, opt *Options) *string {
+			// for now remove the contents of noscript. But in the future we could
+			// tell goquery to parse the contents of the tag.
+			// -> https://github.com/PuerkitoBio/goquery/issues/139#issuecomment-517526070
+			return nil
+		},
+	},
+}
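The inline `code`/`kbd`/`samp`/`tt` rule above picks a backtick fence one character longer than the longest backtick run inside the content, and pads with a space when the content itself starts or ends with a backtick. A minimal standalone sketch of that logic (hypothetical helper names `maxRun` and `inlineCode`, not the library's API):

```go
package main

import (
	"fmt"
	"strings"
)

// maxRun returns the length of the longest run of fenceChar in content.
func maxRun(fenceChar rune, content string) int {
	max, cur := 0, 0
	for _, r := range content {
		if r == fenceChar {
			cur++
			if cur > max {
				max = cur
			}
		} else {
			cur = 0
		}
	}
	return max
}

// inlineCode wraps code in a backtick fence one longer than any backtick
// run inside, padding with spaces when the content starts or ends with a
// backtick, mirroring the inline code rule above.
func inlineCode(code string) string {
	fence := strings.Repeat("`", maxRun('`', code)+1)
	if strings.HasPrefix(code, "`") {
		code = " " + code
	}
	if strings.HasSuffix(code, "`") {
		code = code + " "
	}
	return fence + code + fence
}

func main() {
	fmt.Println(inlineCode("a `tick` inside"))
}
```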

vendor/github.com/JohannesKaufmann/html-to-markdown/escape/escape.go 🔗

@@ -0,0 +1,65 @@
+// Package escape escapes characters that are commonly used in
+// markdown like the * for strong/italic.
+package escape
+
+import (
+	"regexp"
+	"strings"
+)
+
+var backslash = regexp.MustCompile(`\\(\S)`)
+var heading = regexp.MustCompile(`(?m)^(#{1,6} )`)
+var orderedList = regexp.MustCompile(`(?m)^(\W* {0,3})(\d+)\. `)
+var unorderedList = regexp.MustCompile(`(?m)^([^\\\w]*)[*+-] `)
+var horizontalDivider = regexp.MustCompile(`(?m)^([-*_] *){3,}$`)
+var blockquote = regexp.MustCompile(`(?m)^(\W* {0,3})> `)
+var link = regexp.MustCompile(`([\[\]])`)
+
+var replacer = strings.NewReplacer(
+	`*`, `\*`,
+	`_`, `\_`,
+	"`", "\\`",
+	`|`, `\|`,
+)
+
+// MarkdownCharacters escapes common markdown characters so that
+// `<p>**Not Bold**</p>` ends up as the correct markdown `\*\*Not Bold\*\*`.
+// Don't worry, the escaped characters will display fine, just without the formatting.
+func MarkdownCharacters(text string) string {
+	// Escape backslash escapes!
+	text = backslash.ReplaceAllString(text, `\\$1`)
+
+	// Escape headings
+	text = heading.ReplaceAllString(text, `\$1`)
+
+	// Escape hr
+	text = horizontalDivider.ReplaceAllStringFunc(text, func(t string) string {
+		if strings.Contains(t, "-") {
+			return strings.Replace(t, "-", `\-`, 3)
+		} else if strings.Contains(t, "_") {
+			return strings.Replace(t, "_", `\_`, 3)
+		}
+		return strings.Replace(t, "*", `\*`, 3)
+	})
+
+	// Escape ol bullet points
+	text = orderedList.ReplaceAllString(text, `$1$2\. `)
+
+	// Escape ul bullet points
+	text = unorderedList.ReplaceAllStringFunc(text, func(t string) string {
+		return regexp.MustCompile(`([*+-])`).ReplaceAllString(t, `\$1`)
+	})
+
+	// Escape blockquote indents
+	text = blockquote.ReplaceAllString(text, `$1\> `)
+
+	// Escape em/strong *
+	// Escape em/strong _
+	// Escape code _
+	text = replacer.Replace(text)
+
+	// Escape link & image brackets
+	text = link.ReplaceAllString(text, `\$1`)
+
+	return text
+}
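The character `replacer` above backslash-escapes the inline markdown delimiters (`*`, `_`, backtick, `|`) so that literal text from the HTML is not misread as formatting. A standalone sketch of just that replacer step (the full `MarkdownCharacters` also handles headings, lists, and blockquotes):

```go
package main

import (
	"fmt"
	"strings"
)

// replacer mirrors the character replacer in the escape package:
// it backslash-escapes the inline markdown delimiters.
var replacer = strings.NewReplacer(
	`*`, `\*`,
	`_`, `\_`,
	"`", "\\`",
	`|`, `\|`,
)

func main() {
	// literal asterisks survive as text instead of becoming bold
	fmt.Println(replacer.Replace("**Not Bold**"))
}
```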

vendor/github.com/JohannesKaufmann/html-to-markdown/from.go 🔗

@@ -0,0 +1,464 @@
+// Package md converts html to markdown.
+//
+//  converter := md.NewConverter("", true, nil)
+//
+//  html = `<strong>Important</strong>`
+//
+//  markdown, err := converter.ConvertString(html)
+//  if err != nil {
+//    log.Fatal(err)
+//  }
+//  fmt.Println("md ->", markdown)
+// Or if you are already using goquery:
+//  markdown, err := converter.Convert(selec)
+package md
+
+import (
+	"bytes"
+	"errors"
+	"fmt"
+	"io"
+	"log"
+	"net/http"
+	"net/url"
+	"regexp"
+	"strconv"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/PuerkitoBio/goquery"
+)
+
+type simpleRuleFunc func(content string, selec *goquery.Selection, options *Options) *string
+type ruleFunc func(content string, selec *goquery.Selection, options *Options) (res AdvancedResult, skip bool)
+
+// BeforeHook runs before the converter and can be used to transform the original html
+type BeforeHook func(selec *goquery.Selection)
+
+// Afterhook runs after the converter and can be used to transform the resulting markdown
+type Afterhook func(markdown string) string
+
+// Converter is initialized by NewConverter.
+type Converter struct {
+	mutex  sync.RWMutex
+	rules  map[string][]ruleFunc
+	keep   map[string]struct{}
+	remove map[string]struct{}
+
+	before []BeforeHook
+	after  []Afterhook
+
+	domain  string
+	options Options
+}
+
+func validate(val string, possible ...string) error {
+	for _, e := range possible {
+		if e == val {
+			return nil
+		}
+	}
+	return fmt.Errorf("field must be one of %v but got %s", possible, val)
+}
+func validateOptions(opt Options) error {
+	if err := validate(opt.HeadingStyle, "setext", "atx"); err != nil {
+		return err
+	}
+	if strings.Count(opt.HorizontalRule, "*") < 3 &&
+		strings.Count(opt.HorizontalRule, "_") < 3 &&
+		strings.Count(opt.HorizontalRule, "-") < 3 {
+		return errors.New("HorizontalRule must be at least 3 characters of '*', '_' or '-' but got " + opt.HorizontalRule)
+	}
+
+	if err := validate(opt.BulletListMarker, "-", "+", "*"); err != nil {
+		return err
+	}
+	if err := validate(opt.CodeBlockStyle, "indented", "fenced"); err != nil {
+		return err
+	}
+	if err := validate(opt.Fence, "```", "~~~"); err != nil {
+		return err
+	}
+	if err := validate(opt.EmDelimiter, "_", "*"); err != nil {
+		return err
+	}
+	if err := validate(opt.StrongDelimiter, "**", "__"); err != nil {
+		return err
+	}
+	if err := validate(opt.LinkStyle, "inlined", "referenced"); err != nil {
+		return err
+	}
+	if err := validate(opt.LinkReferenceStyle, "full", "collapsed", "shortcut"); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+var (
+	attrListPrefix = "data-converter-list-prefix"
+)
+
+// NewConverter initializes a new converter and holds all the rules.
+// - `domain` is used for links and images to convert relative urls ("/image.png") to absolute urls.
+// - CommonMark is the default set of rules. Set enableCommonmark to false if you want
+//   to customize everything using AddRules and DON'T want to fall back to the default rules.
+func NewConverter(domain string, enableCommonmark bool, options *Options) *Converter {
+	conv := &Converter{
+		domain: domain,
+		rules:  make(map[string][]ruleFunc),
+		keep:   make(map[string]struct{}),
+		remove: make(map[string]struct{}),
+	}
+
+	conv.before = append(conv.before, func(selec *goquery.Selection) {
+		selec.Find("a[href]").Each(func(i int, s *goquery.Selection) {
+			// TODO: don't hardcode "data-index" and rename it to avoid accidental conflicts
+			s.SetAttr("data-index", strconv.Itoa(i+1))
+		})
+	})
+	conv.before = append(conv.before, func(selec *goquery.Selection) {
+		selec.Find("li").Each(func(i int, s *goquery.Selection) {
+			prefix := getListPrefix(options, s)
+
+			s.SetAttr(attrListPrefix, prefix)
+		})
+	})
+	conv.after = append(conv.after, func(markdown string) string {
+		markdown = strings.TrimSpace(markdown)
+		markdown = multipleNewLinesRegex.ReplaceAllString(markdown, "\n\n")
+
+		// remove unnecessary trailing spaces to have clean markdown
+		markdown = TrimTrailingSpaces(markdown)
+
+		return markdown
+	})
+
+	if enableCommonmark {
+		conv.AddRules(commonmark...)
+		conv.remove["script"] = struct{}{}
+		conv.remove["style"] = struct{}{}
+		conv.remove["textarea"] = struct{}{}
+	}
+
+	// TODO: put domain in options?
+	if options == nil {
+		options = &Options{}
+	}
+	if options.HeadingStyle == "" {
+		options.HeadingStyle = "atx"
+	}
+	if options.HorizontalRule == "" {
+		options.HorizontalRule = "* * *"
+	}
+	if options.BulletListMarker == "" {
+		options.BulletListMarker = "-"
+	}
+	if options.CodeBlockStyle == "" {
+		options.CodeBlockStyle = "indented"
+	}
+	if options.Fence == "" {
+		options.Fence = "```"
+	}
+	if options.EmDelimiter == "" {
+		options.EmDelimiter = "_"
+	}
+	if options.StrongDelimiter == "" {
+		options.StrongDelimiter = "**"
+	}
+	if options.LinkStyle == "" {
+		options.LinkStyle = "inlined"
+	}
+	if options.LinkReferenceStyle == "" {
+		options.LinkReferenceStyle = "full"
+	}
+	if options.EscapeMode == "" {
+		options.EscapeMode = "basic"
+	}
+
+	// for now, store it in the options
+	options.domain = domain
+
+	if options.GetAbsoluteURL == nil {
+		options.GetAbsoluteURL = DefaultGetAbsoluteURL
+	}
+
+	conv.options = *options
+	err := validateOptions(conv.options)
+	if err != nil {
+		log.Println("markdown options are not valid:", err)
+	}
+
+	return conv
+}
+func (conv *Converter) getRuleFuncs(tag string) []ruleFunc {
+	conv.mutex.RLock()
+	defer conv.mutex.RUnlock()
+
+	r, ok := conv.rules[tag]
+	if !ok || len(r) == 0 {
+		if _, keep := conv.keep[tag]; keep {
+			return []ruleFunc{wrap(ruleKeep)}
+		}
+		if _, remove := conv.remove[tag]; remove {
+			return nil // TODO:
+		}
+
+		return []ruleFunc{wrap(ruleDefault)}
+	}
+
+	return r
+}
+
+func wrap(simple simpleRuleFunc) ruleFunc {
+	return func(content string, selec *goquery.Selection, opt *Options) (AdvancedResult, bool) {
+		res := simple(content, selec, opt)
+		if res == nil {
+			return AdvancedResult{}, true
+		}
+		return AdvancedResult{Markdown: *res}, false
+	}
+}
+
+// Before registers a hook that is run before the conversion. It
+// can be used to transform the original goquery html document.
+//
+// For example, the default before hook adds an index to every link,
+// so that the `a` tag rule (for "reference" "full") can have an incremental number.
+func (conv *Converter) Before(hooks ...BeforeHook) *Converter {
+	conv.mutex.Lock()
+	defer conv.mutex.Unlock()
+
+	for _, hook := range hooks {
+		conv.before = append(conv.before, hook)
+	}
+
+	return conv
+}
+
+// After registers a hook that is run after the conversion. It
+// can be used to transform the markdown document that is about to be returned.
+//
+// For example, the default after hook trims the returned markdown.
+func (conv *Converter) After(hooks ...Afterhook) *Converter {
+	conv.mutex.Lock()
+	defer conv.mutex.Unlock()
+
+	for _, hook := range hooks {
+		conv.after = append(conv.after, hook)
+	}
+
+	return conv
+}
+
+// ClearBefore clears the current before hooks (including the default before hooks).
+func (conv *Converter) ClearBefore() *Converter {
+	conv.mutex.Lock()
+	defer conv.mutex.Unlock()
+
+	conv.before = nil
+
+	return conv
+}
+
+// ClearAfter clears the current after hooks (including the default after hooks).
+func (conv *Converter) ClearAfter() *Converter {
+	conv.mutex.Lock()
+	defer conv.mutex.Unlock()
+
+	conv.after = nil
+
+	return conv
+}
+
+// AddRules adds the rules that are passed in to the converter.
+//
+// By default it overrides the rule for that html tag. You can
+// fall back to the default rule by returning nil.
+func (conv *Converter) AddRules(rules ...Rule) *Converter {
+	conv.mutex.Lock()
+	defer conv.mutex.Unlock()
+
+	for _, rule := range rules {
+		if len(rule.Filter) == 0 {
+			log.Println("you need to specify at least one filter for your rule")
+		}
+		for _, filter := range rule.Filter {
+			r := conv.rules[filter]
+
+			if rule.AdvancedReplacement != nil {
+				r = append(r, rule.AdvancedReplacement)
+			} else {
+				r = append(r, wrap(rule.Replacement))
+			}
+			conv.rules[filter] = r
+		}
+	}
+
+	return conv
+}
+
+// Keep certain html tags in the generated output.
+func (conv *Converter) Keep(tags ...string) *Converter {
+	conv.mutex.Lock()
+	defer conv.mutex.Unlock()
+
+	for _, tag := range tags {
+		conv.keep[tag] = struct{}{}
+	}
+	return conv
+}
+
+// Remove certain html tags from the source.
+func (conv *Converter) Remove(tags ...string) *Converter {
+	conv.mutex.Lock()
+	defer conv.mutex.Unlock()
+	for _, tag := range tags {
+		conv.remove[tag] = struct{}{}
+	}
+	return conv
+}
+
+// Plugin can be used to extend functionality beyond what
+// is offered by commonmark.
+type Plugin func(conv *Converter) []Rule
+
+// Use adds additional functionality to the converter. It is
+// used when rules alone are not sufficient, for example in plugins.
+func (conv *Converter) Use(plugins ...Plugin) *Converter {
+	for _, plugin := range plugins {
+		rules := plugin(conv)
+		conv.AddRules(rules...) // TODO: for better performance only use one lock for all plugins
+	}
+	return conv
+}
+
+// Timeout for the http client
+var Timeout = time.Second * 10
+var netClient = &http.Client{
+	Timeout: Timeout,
+}
+
+// DomainFromURL returns `u.Host` from the parsed url.
+func DomainFromURL(rawURL string) string {
+	rawURL = strings.TrimSpace(rawURL)
+
+	u, _ := url.Parse(rawURL)
+	if u != nil && u.Host != "" {
+		return u.Host
+	}
+
+	// let's try it again by adding a scheme
+	u, _ = url.Parse("http://" + rawURL)
+	if u != nil {
+		return u.Host
+	}
+
+	return ""
+}
+
+// Reduce many newline characters `\n` to at most 2 new line characters.
+var multipleNewLinesRegex = regexp.MustCompile(`[\n]{2,}`)
+
+// Convert returns the content from a goquery selection.
+// If you have a goquery document just pass in doc.Selection.
+func (conv *Converter) Convert(selec *goquery.Selection) string {
+	conv.mutex.RLock()
+	domain := conv.domain
+	options := conv.options
+	l := len(conv.rules)
+	if l == 0 {
+		log.Println("you have added no rules. either enable commonmark or add your own.")
+	}
+	before := conv.before
+	after := conv.after
+	conv.mutex.RUnlock()
+
+	// before hook
+	for _, hook := range before {
+		hook(selec)
+	}
+
+	res := conv.selecToMD(domain, selec, &options)
+	markdown := res.Markdown
+
+	if res.Header != "" {
+		markdown = res.Header + "\n\n" + markdown
+	}
+	if res.Footer != "" {
+		markdown += "\n\n" + res.Footer
+	}
+
+	// after hook
+	for _, hook := range after {
+		markdown = hook(markdown)
+	}
+
+	return markdown
+}
+
+// ConvertReader returns the content from a reader and returns a buffer.
+func (conv *Converter) ConvertReader(reader io.Reader) (bytes.Buffer, error) {
+	var buffer bytes.Buffer
+	doc, err := goquery.NewDocumentFromReader(reader)
+	if err != nil {
+		return buffer, err
+	}
+	buffer.WriteString(
+		conv.Convert(doc.Selection),
+	)
+
+	return buffer, nil
+}
+
+// ConvertResponse returns the content from a html response.
+func (conv *Converter) ConvertResponse(res *http.Response) (string, error) {
+	doc, err := goquery.NewDocumentFromResponse(res)
+	if err != nil {
+		return "", err
+	}
+	return conv.Convert(doc.Selection), nil
+}
+
+// ConvertString returns the content from a html string. If you
+// already have a goquery selection use `Convert`.
+func (conv *Converter) ConvertString(html string) (string, error) {
+	doc, err := goquery.NewDocumentFromReader(strings.NewReader(html))
+	if err != nil {
+		return "", err
+	}
+	return conv.Convert(doc.Selection), nil
+}
+
+// ConvertBytes returns the content from a html byte array.
+func (conv *Converter) ConvertBytes(bytes []byte) ([]byte, error) {
+	res, err := conv.ConvertString(string(bytes))
+	if err != nil {
+		return nil, err
+	}
+	return []byte(res), nil
+}
+
+// ConvertURL returns the content from the page with that url.
+func (conv *Converter) ConvertURL(url string) (string, error) {
+	// not using goquery.NewDocument directly because of the timeout
+	resp, err := netClient.Get(url)
+	if err != nil {
+		return "", err
+	}
+
+	if resp.StatusCode < 200 || resp.StatusCode > 299 {
+		return "", fmt.Errorf("expected a status code in the 2xx range but got %d", resp.StatusCode)
+	}
+
+	doc, err := goquery.NewDocumentFromResponse(resp)
+	if err != nil {
+		return "", err
+	}
+	domain := DomainFromURL(url)
+	if conv.domain != domain {
+		log.Printf("expected '%s' as the domain but got '%s' \n", conv.domain, domain)
+	}
+	return conv.Convert(doc.Selection), nil
+}
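`DomainFromURL` above needs its second parse attempt because `net/url` treats a bare host like `www.example.com/page` as a path, leaving `u.Host` empty; prefixing `http://` makes the host recognizable. A standalone sketch of the same fallback (hypothetical name `domainFromURL`, stdlib only):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// domainFromURL mirrors md.DomainFromURL: try a direct parse first,
// then retry with an "http://" scheme so bare hosts are recognized.
func domainFromURL(rawURL string) string {
	rawURL = strings.TrimSpace(rawURL)

	if u, err := url.Parse(rawURL); err == nil && u.Host != "" {
		return u.Host
	}
	// a bare "www.example.com/page" parses as a path, so add a scheme
	if u, err := url.Parse("http://" + rawURL); err == nil {
		return u.Host
	}
	return ""
}

func main() {
	fmt.Println(domainFromURL("www.example.com/page"))
}
```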

vendor/github.com/JohannesKaufmann/html-to-markdown/markdown.go 🔗

@@ -0,0 +1,212 @@
+package md
+
+import (
+	"bytes"
+	"log"
+	"net/url"
+	"regexp"
+	"strings"
+
+	"github.com/PuerkitoBio/goquery"
+	"golang.org/x/net/html"
+)
+
+var (
+	ruleDefault = func(content string, selec *goquery.Selection, opt *Options) *string {
+		return &content
+	}
+	ruleKeep = func(content string, selec *goquery.Selection, opt *Options) *string {
+		element := selec.Get(0)
+
+		var buf bytes.Buffer
+		err := html.Render(&buf, element)
+		if err != nil {
+			log.Println("[JohannesKaufmann/html-to-markdown] ruleKeep: error while rendering the element to html:", err)
+			return String("")
+		}
+
+		return String(buf.String())
+	}
+)
+
+var inlineElements = []string{ // -> https://developer.mozilla.org/de/docs/Web/HTML/Inline_elemente
+	"b", "big", "i", "small", "tt",
+	"abbr", "acronym", "cite", "code", "dfn", "em", "kbd", "strong", "samp", "var",
+	"a", "bdo", "br", "img", "map", "object", "q", "script", "span", "sub", "sup",
+	"button", "input", "label", "select", "textarea",
+}
+
+// IsInlineElement can be used to check whether a node name (goquery.NodeName) is
+// an html inline element and not a block element. Used in the rule for the
+// p tag to check whether the text is inside a block element.
+func IsInlineElement(e string) bool {
+	for _, element := range inlineElements {
+		if element == e {
+			return true
+		}
+	}
+	return false
+}
+
+// String is a helper function to return a pointer.
+func String(text string) *string {
+	return &text
+}
+
+// Options to customize the output. You can change stuff like
+// the character that is used for strong text.
+type Options struct {
+	// "setext" or "atx"
+	// default: "atx"
+	HeadingStyle string
+
+	// Any Thematic break
+	// default: "* * *"
+	HorizontalRule string
+
+	// "-", "+", or "*"
+	// default: "-"
+	BulletListMarker string
+
+	// "indented" or "fenced"
+	// default: "indented"
+	CodeBlockStyle string
+
+	// ``` or ~~~
+	// default: ```
+	Fence string
+
+	// _ or *
+	// default: _
+	EmDelimiter string
+
+	// ** or __
+	// default: **
+	StrongDelimiter string
+
+	// inlined or referenced
+	// default: inlined
+	LinkStyle string
+
+	// full, collapsed, or shortcut
+	// default: full
+	LinkReferenceStyle string
+
+	// basic, disabled
+	// default: basic
+	EscapeMode string
+
+	domain string
+
+	// GetAbsoluteURL parses the `rawURL` and adds the `domain` to convert relative (/page.html)
+	// urls to absolute urls (http://domain.com/page.html).
+	//
+	// The default is `DefaultGetAbsoluteURL`, unless you override it. That can also
+	// be useful if you want to proxy the images.
+	GetAbsoluteURL func(selec *goquery.Selection, rawURL string, domain string) string
+
+	// GetCodeBlockLanguage identifies the language for syntax highlighting
+	// of a code block. The default is `DefaultGetCodeBlockLanguage`, which
+	// only gets the attribute x from the selection.
+	//
+	// You can override it if you want more results, for example by using
+	// lexers.Analyse(content) from github.com/alecthomas/chroma
+	// TODO: implement
+	// GetCodeBlockLanguage func(s *goquery.Selection, content string) string
+}
+
+// DefaultGetAbsoluteURL is the default function and can be overridden through `GetAbsoluteURL` in the options.
+func DefaultGetAbsoluteURL(selec *goquery.Selection, rawURL string, domain string) string {
+	if domain == "" {
+		return rawURL
+	}
+
+	u, err := url.Parse(rawURL)
+	if err != nil {
+		// we can't do anything with this url because it is invalid
+		return rawURL
+	}
+
+	if u.Scheme == "data" {
+		// this is a data uri (for example an inline base64 image)
+		return rawURL
+	}
+
+	if u.Scheme == "" {
+		u.Scheme = "http"
+	}
+	if u.Host == "" {
+		u.Host = domain
+	}
+
+	return u.String()
+}
+
+// AdvancedResult is used for example for links. If you use LinkStyle:referenced
+// the link href is placed at the bottom of the generated markdown (Footer).
+type AdvancedResult struct {
+	Header   string
+	Markdown string
+	Footer   string
+}
+
+// Rule to convert certain html tags to markdown.
+//  md.Rule{
+//    Filter: []string{"del", "s", "strike"},
+//    Replacement: func(content string, selec *goquery.Selection, opt *md.Options) *string {
+//      // You need to return a pointer to a string (md.String is just a helper function).
+//      // If you return nil the next function for that html element
+//      // will be picked. For example you could only convert an element
+//      // if it has a certain class name and fallback if not.
+//      return md.String("~" + content + "~")
+//    },
+//  }
+type Rule struct {
+	Filter              []string
+	Replacement         func(content string, selec *goquery.Selection, options *Options) *string
+	AdvancedReplacement func(content string, selec *goquery.Selection, options *Options) (res AdvancedResult, skip bool)
+}
+
+var leadingNewlinesR = regexp.MustCompile(`^\n+`)
+var trailingNewlinesR = regexp.MustCompile(`\n+$`)
+
+var newlinesR = regexp.MustCompile(`\n+`)
+var tabR = regexp.MustCompile(`\t+`)
+var indentR = regexp.MustCompile(`(?m)\n`)
+
+func (conv *Converter) selecToMD(domain string, selec *goquery.Selection, opt *Options) AdvancedResult {
+	var result AdvancedResult
+
+	var builder strings.Builder
+	selec.Contents().Each(func(i int, s *goquery.Selection) {
+		name := goquery.NodeName(s)
+		rules := conv.getRuleFuncs(name)
+
+		for i := len(rules) - 1; i >= 0; i-- {
+			rule := rules[i]
+
+			content := conv.selecToMD(domain, s, opt)
+			if content.Header != "" {
+				result.Header += content.Header
+			}
+			if content.Footer != "" {
+				result.Footer += content.Footer
+			}
+
+			res, skip := rule(content.Markdown, s, opt)
+			if res.Header != "" {
+				result.Header += res.Header + "\n"
+			}
+			if res.Footer != "" {
+				result.Footer += res.Footer + "\n"
+			}
+
+			if !skip {
+				builder.WriteString(res.Markdown)
+				return
+			}
+		}
+	})
+	result.Markdown = builder.String()
+	return result
+}
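`DefaultGetAbsoluteURL` above leaves data URIs and unparseable input untouched, then fills in a missing scheme and host to turn relative urls like `/image.png` into absolute ones. A standalone sketch of that resolution step (hypothetical name `absoluteURL`, without the goquery selection parameter):

```go
package main

import (
	"fmt"
	"net/url"
)

// absoluteURL mirrors DefaultGetAbsoluteURL: leave data URIs and
// invalid input alone, then fill in a missing scheme and host.
func absoluteURL(rawURL, domain string) string {
	if domain == "" {
		return rawURL
	}
	u, err := url.Parse(rawURL)
	if err != nil {
		// we can't do anything with an invalid url
		return rawURL
	}
	if u.Scheme == "data" {
		// e.g. an inline base64 image
		return rawURL
	}
	if u.Scheme == "" {
		u.Scheme = "http"
	}
	if u.Host == "" {
		u.Host = domain
	}
	return u.String()
}

func main() {
	fmt.Println(absoluteURL("/image.png", "example.com"))
}
```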

vendor/github.com/JohannesKaufmann/html-to-markdown/utils.go 🔗

@@ -0,0 +1,533 @@
+package md
+
+import (
+	"bytes"
+	"fmt"
+	"regexp"
+	"strconv"
+	"strings"
+	"unicode"
+	"unicode/utf8"
+
+	"github.com/PuerkitoBio/goquery"
+	"golang.org/x/net/html"
+)
+
+/*
+WARNING: The functions from this file can be used externally
+but there is no guarantee that they will stay exported.
+*/
+
+// CollectText returns the text of the node and all its children
+func CollectText(n *html.Node) string {
+	text := &bytes.Buffer{}
+	collectText(n, text)
+	return text.String()
+}
+func collectText(n *html.Node, buf *bytes.Buffer) {
+	if n.Type == html.TextNode {
+		buf.WriteString(n.Data)
+	}
+	for c := n.FirstChild; c != nil; c = c.NextSibling {
+		collectText(c, buf)
+	}
+}
+
+func getName(node *html.Node) string {
+	selec := &goquery.Selection{Nodes: []*html.Node{node}}
+	return goquery.NodeName(selec)
+}
+
+// What elements automatically trim their content?
+// Don't add another space if the other element is going to add a
+// space already.
+func isTrimmedElement(name string) bool {
+	nodes := []string{
+		"a",
+		"strong", "b",
+		"i", "em",
+		"del", "s", "strike",
+		"code",
+	}
+
+	for _, node := range nodes {
+		if name == node {
+			return true
+		}
+	}
+	return false
+}
+
+func getPrevNodeText(node *html.Node) (string, bool) {
+	if node == nil {
+		return "", false
+	}
+
+	for ; node != nil; node = node.PrevSibling {
+		text := CollectText(node)
+
+		name := getName(node)
+		if name == "br" {
+			return "\n", true
+		}
+
+		// if the content is empty, try our luck with the previous node
+		if strings.TrimSpace(text) == "" {
+			continue
+		}
+
+		if isTrimmedElement(name) {
+			text = strings.TrimSpace(text)
+		}
+
+		return text, true
+	}
+	return "", false
+}
+func getNextNodeText(node *html.Node) (string, bool) {
+	if node == nil {
+		return "", false
+	}
+
+	for ; node != nil; node = node.NextSibling {
+		text := CollectText(node)
+
+		name := getName(node)
+		if name == "br" {
+			return "\n", true
+		}
+
+		// if the content is empty, try our luck with the next node
+		if strings.TrimSpace(text) == "" {
+			continue
+		}
+
+		// if you have "a a a", three elements that are trimmed, then only add
+		// a space to one side, since the others are also adding a space.
+		if isTrimmedElement(name) {
+			text = " "
+		}
+
+		return text, true
+	}
+	return "", false
+}
+
+// AddSpaceIfNessesary adds spaces to the text based on the neighbors.
+// That makes sure that there is always a space to the side, to recognize the delimiter.
+func AddSpaceIfNessesary(selec *goquery.Selection, markdown string) string {
+	if len(selec.Nodes) == 0 {
+		return markdown
+	}
+	rootNode := selec.Nodes[0]
+
+	prev, hasPrev := getPrevNodeText(rootNode.PrevSibling)
+	if hasPrev {
+		lastChar, size := utf8.DecodeLastRuneInString(prev)
+		if size > 0 && !unicode.IsSpace(lastChar) {
+			markdown = " " + markdown
+		}
+	}
+
+	next, hasNext := getNextNodeText(rootNode.NextSibling)
+	if hasNext {
+		firstChar, size := utf8.DecodeRuneInString(next)
+		if size > 0 && !unicode.IsSpace(firstChar) && !unicode.IsPunct(firstChar) {
+			markdown = markdown + " "
+		}
+	}
+
+	return markdown
+}
+
+func isLineCodeDelimiter(chars []rune) bool {
+	if len(chars) < 3 {
+		return false
+	}
+
+	// TODO: If it starts with 4 (instead of 3) fence characters, we should only end it
+	// if we see the same amount of ending fence characters.
+	return chars[0] == '`' && chars[1] == '`' && chars[2] == '`'
+}
+
+// TrimpLeadingSpaces removes spaces from the beginning of a line
+// but makes sure that list items and code blocks are not affected.
+func TrimpLeadingSpaces(text string) string {
+	var insideCodeBlock bool
+
+	lines := strings.Split(text, "\n")
+	for index := range lines {
+		chars := []rune(lines[index])
+
+		if isLineCodeDelimiter(chars) {
+			if !insideCodeBlock {
+				// start the code block
+				insideCodeBlock = true
+			} else {
+				// end the code block
+				insideCodeBlock = false
+			}
+		}
+		if insideCodeBlock {
+			// We are inside a code block and don't want to
+			// disturb that formatting (e.g. python indentation)
+			continue
+		}
+
+		var spaces int
+		for i := 0; i < len(chars); i++ {
+			if unicode.IsSpace(chars[i]) {
+				if chars[i] == '\t' {
+					spaces = spaces + 4
+				} else {
+					spaces++
+				}
+				continue
+			}
+
+			// this seems to be a list item
+			if chars[i] == '-' {
+				break
+			}
+
+			// this seems to be a code block
+			if spaces >= 4 {
+				break
+			}
+
+			// remove the space characters from the string
+			chars = chars[i:]
+			break
+		}
+		lines[index] = string(chars)
+	}
+
+	return strings.Join(lines, "\n")
+}
+
+// TrimTrailingSpaces removes unnecessary spaces from the end of lines.
+func TrimTrailingSpaces(text string) string {
+	parts := strings.Split(text, "\n")
+	for i := range parts {
+		parts[i] = strings.TrimRightFunc(parts[i], func(r rune) bool {
+			return unicode.IsSpace(r)
+		})
+
+	}
+
+	return strings.Join(parts, "\n")
+}
+
+// The same as `multipleNewLinesRegex`, but applies to escaped new lines inside a link `\n\`
+var multipleNewLinesInLinkRegex = regexp.MustCompile(`(\n\\){1,}`) // `([\n\r\s]\\)`
+
+// EscapeMultiLine deals with multiline content inside a link
+func EscapeMultiLine(content string) string {
+	content = strings.TrimSpace(content)
+	content = strings.Replace(content, "\n", `\`+"\n", -1)
+
+	content = multipleNewLinesInLinkRegex.ReplaceAllString(content, "\n\\")
+
+	return content
+}
+
+func calculateCodeFenceOccurrences(fenceChar rune, content string) int {
+	var occurrences []int
+
+	var charsTogether int
+	for _, char := range content {
+		// we encountered a fence character, now count how many
+		// are directly afterwards
+		if char == fenceChar {
+			charsTogether++
+		} else if charsTogether != 0 {
+			occurrences = append(occurrences, charsTogether)
+			charsTogether = 0
+		}
+	}
+
+	// if the last element in the content was a fenceChar
+	if charsTogether != 0 {
+		occurrences = append(occurrences, charsTogether)
+	}
+
+	return findMax(occurrences)
+}
+
+// CalculateCodeFence can be passed the content of a code block and it returns
+// how many fence characters (` or ~) should be used.
+//
+// This is useful if the html content includes the same fence characters
+// for example ```
+// -> https://stackoverflow.com/a/49268657
+func CalculateCodeFence(fenceChar rune, content string) string {
+	repeat := calculateCodeFenceOccurrences(fenceChar, content)
+
+	// the outer fence block always has to have
+	// at least one character more than any content inside
+	repeat++
+
+	// you have to have at least three fence characters
+	// to be recognized as a code block
+	if repeat < 3 {
+		repeat = 3
+	}
+
+	return strings.Repeat(string(fenceChar), repeat)
+}
+
+func findMax(a []int) (max int) {
+	for i, value := range a {
+		if i == 0 {
+			max = a[i]
+		}
+
+		if value > max {
+			max = value
+		}
+	}
+	return max
+}
+
+func getCodeWithoutTags(startNode *html.Node) []byte {
+	var buf bytes.Buffer
+
+	var f func(*html.Node)
+	f = func(n *html.Node) {
+		if n.Type == html.ElementNode && (n.Data == "style" || n.Data == "script" || n.Data == "textarea") {
+			return
+		}
+		if n.Type == html.ElementNode && (n.Data == "br" || n.Data == "div") {
+			buf.WriteString("\n")
+		}
+
+		if n.Type == html.TextNode {
+			buf.WriteString(n.Data)
+			return
+		}
+
+		for c := n.FirstChild; c != nil; c = c.NextSibling {
+			f(c)
+		}
+	}
+
+	f(startNode)
+
+	return buf.Bytes()
+}
+
+// getCodeContent gets the content of pre/code and unescapes the encoded characters.
+// Returns "" if there is an error.
+func getCodeContent(selec *goquery.Selection) string {
+	if len(selec.Nodes) == 0 {
+		return ""
+	}
+
+	code := getCodeWithoutTags(selec.Nodes[0])
+
+	return string(code)
+}
+
+// delimiterForEveryLine puts the delimiter not just at the start and end of the string
+// but if the text is divided on multiple lines, puts the delimiters on every line with content.
+//
+// Otherwise the bold/italic delimiters won't be recognized if it contains new line characters.
+func delimiterForEveryLine(text string, delimiter string) string {
+	lines := strings.Split(text, "\n")
+
+	for i, line := range lines {
+		line = strings.TrimSpace(line)
+		if line == "" {
+			// Skip empty lines
+			continue
+		}
+
+		lines[i] = delimiter + line + delimiter
+	}
+	return strings.Join(lines, "\n")
+}
+
+// isWrapperListItem returns whether the list item has its own
+// content or is just a wrapper for another list.
+// e.g. "<li><ul>..."
+func isWrapperListItem(s *goquery.Selection) bool {
+	directText := s.Contents().Not("ul").Not("ol").Text()
+
+	noOwnText := strings.TrimSpace(directText) == ""
+	childIsList := s.ChildrenFiltered("ul").Length() > 0 || s.ChildrenFiltered("ol").Length() > 0
+
+	return noOwnText && childIsList
+}
+
+// getListStart returns the integer from which the counting
+// for the list items should start.
+// -> https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ol#start
+func getListStart(parent *goquery.Selection) int {
+	val := parent.AttrOr("start", "")
+	if val == "" {
+		return 1
+	}
+
+	num, err := strconv.Atoi(val)
+	if err != nil {
+		return 1
+	}
+
+	if num < 0 {
+		return 1
+	}
+	return num
+}
+
+// getListPrefix returns the appropriate prefix for the list item.
+// For example "- ", "* ", "1. ", "01. ", ...
+func getListPrefix(opt *Options, s *goquery.Selection) string {
+	if isWrapperListItem(s) {
+		return ""
+	}
+
+	parent := s.Parent()
+	if parent.Is("ul") {
+		return opt.BulletListMarker + " "
+	} else if parent.Is("ol") {
+		start := getListStart(parent)
+		currentIndex := start + s.Index()
+
+		lastIndex := parent.Children().Last().Index() + 1
+		maxLength := len(strconv.Itoa(lastIndex))
+
+		// pad the numbers so that all prefix numbers in the list take up the same space
+		// `%02d.` -> "01. "
+		format := `%0` + strconv.Itoa(maxLength) + `d. `
+		return fmt.Sprintf(format, currentIndex)
+	}
+	// If the HTML is malformed and the list element isn't in a ul or ol, return no prefix
+	return ""
+}
+
+// countListParents counts how much space is reserved for the prefixes at all the parent lists.
+// This is useful to calculate the correct level of indentation for nested lists.
+func countListParents(opt *Options, selec *goquery.Selection) (int, int) {
+	var values []int
+	for n := selec.Parent(); n != nil; n = n.Parent() {
+		if n.Is("li") {
+			continue
+		}
+		if !n.Is("ul") && !n.Is("ol") {
+			break
+		}
+
+		prefix := n.Children().First().AttrOr(attrListPrefix, "")
+
+		values = append(values, len(prefix))
+	}
+
+	// how many spaces are reserved for the prefixes of my siblings
+	var prefixCount int
+
+	// how many spaces are reserved in total for all of the other
+	// list parents up the tree
+	var previousPrefixCounts int
+
+	for i, val := range values {
+		if i == 0 {
+			prefixCount = val
+			continue
+		}
+
+		previousPrefixCounts += val
+	}
+
+	return prefixCount, previousPrefixCounts
+}
+
+// IndentMultiLineListItem makes sure that multiline list items
+// are properly indented.
+func IndentMultiLineListItem(opt *Options, text string, spaces int) string {
+	parts := strings.Split(text, "\n")
+	for i := range parts {
+		// don't touch the first line since it's indented through the prefix
+		if i == 0 {
+			continue
+		}
+
+		if isListItem(opt, parts[i]) {
+			return strings.Join(parts, "\n")
+		}
+
+		indent := strings.Repeat(" ", spaces)
+		parts[i] = indent + parts[i]
+	}
+
+	return strings.Join(parts, "\n")
+}
+
+// isListItem checks whether the line is a markdown list item
+func isListItem(opt *Options, line string) bool {
+	b := []rune(line)
+
+	bulletMarker := []rune(opt.BulletListMarker)[0]
+
+	var hasNumber bool
+	var hasMarker bool
+	var hasSpace bool
+
+	for i := 0; i < len(b); i++ {
+		// A marker followed by a space qualifies as a list item
+		if hasMarker && hasSpace {
+			if b[i] == bulletMarker {
+				// But if another BulletListMarker is found, it
+				// might be a HorizontalRule
+				return false
+			}
+
+			if !unicode.IsSpace(b[i]) {
+				// Now we have some text
+				return true
+			}
+		}
+
+		if hasMarker {
+			if unicode.IsSpace(b[i]) {
+				hasSpace = true
+				continue
+			}
+			// A marker like "1." that is not immediately followed by a space
+			// is probably a false positive
+			return false
+		}
+
+		if b[i] == bulletMarker {
+			hasMarker = true
+			continue
+		}
+
+		if hasNumber && b[i] == '.' {
+			hasMarker = true
+			continue
+		}
+		if unicode.IsDigit(b[i]) {
+			hasNumber = true
+			continue
+		}
+
+		if unicode.IsSpace(b[i]) {
+			continue
+		}
+
+		// If we encounter any other character
+		// before finding an indicator, it's
+		// not a list item
+		return false
+	}
+	return false
+}
+
+// IndexWithText is similar to goquery's Index function but
+// returns the index of the current element while
+// NOT counting the empty elements beforehand.
+func IndexWithText(s *goquery.Selection) int {
+	return s.PrevAll().FilterFunction(func(i int, s *goquery.Selection) bool {
+		return strings.TrimSpace(s.Text()) != ""
+	}).Length()
+}

vendor/github.com/MakeNowJust/heredoc/LICENSE 🔗

@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2014-2019 TSUYUSATO Kitsune
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.

vendor/github.com/MakeNowJust/heredoc/README.md 🔗

@@ -0,0 +1,52 @@
+# heredoc
+
+[![Build Status](https://circleci.com/gh/MakeNowJust/heredoc.svg?style=svg)](https://circleci.com/gh/MakeNowJust/heredoc) [![GoDoc](https://godoc.org/github.com/MakeNowJust/heredoc?status.svg)](https://godoc.org/github.com/MakeNowJust/heredoc)
+
+## About
+
+Package heredoc provides here-documents while keeping indentation.
+
+## Install
+
+```console
+$ go get github.com/MakeNowJust/heredoc
+```
+
+## Import
+
+```go
+// usual
+import "github.com/MakeNowJust/heredoc"
+```
+
+## Example
+
+```go
+package main
+
+import (
+	"fmt"
+	"github.com/MakeNowJust/heredoc"
+)
+
+func main() {
+	fmt.Println(heredoc.Doc(`
+		Lorem ipsum dolor sit amet, consectetur adipisicing elit,
+		sed do eiusmod tempor incididunt ut labore et dolore magna
+		aliqua. Ut enim ad minim veniam, ...
+	`))
+	// Output:
+	// Lorem ipsum dolor sit amet, consectetur adipisicing elit,
+	// sed do eiusmod tempor incididunt ut labore et dolore magna
+	// aliqua. Ut enim ad minim veniam, ...
+	//
+}
+```
+
+## API Document
+
+ - [heredoc - GoDoc](https://godoc.org/github.com/MakeNowJust/heredoc)
+
+## License
+
+This software is released under the MIT License, see LICENSE.

vendor/github.com/MakeNowJust/heredoc/heredoc.go 🔗

@@ -0,0 +1,105 @@
+// Copyright (c) 2014-2019 TSUYUSATO Kitsune
+// This software is released under the MIT License.
+// http://opensource.org/licenses/mit-license.php
+
+// Package heredoc provides creation of here-documents from raw strings.
+//
+// Golang supports raw-string syntax.
+//
+//     doc := `
+//     	Foo
+//     	Bar
+//     `
+//
+// But raw-string cannot recognize indentation. Thus such content is an indented string, equivalent to
+//
+//     "\n\tFoo\n\tBar\n"
+//
+// I don't want this!
+//
+// However this problem is solved by package heredoc.
+//
+//     doc := heredoc.Doc(`
+//     	Foo
+//     	Bar
+//     `)
+//
+// Is equivalent to
+//
+//     "Foo\nBar\n"
+package heredoc
+
+import (
+	"fmt"
+	"strings"
+	"unicode"
+)
+
+const maxInt = int(^uint(0) >> 1)
+
+// Doc returns the unindented string as a here-document.
+func Doc(raw string) string {
+	skipFirstLine := false
+	if len(raw) > 0 && raw[0] == '\n' {
+		raw = raw[1:]
+	} else {
+		skipFirstLine = true
+	}
+
+	lines := strings.Split(raw, "\n")
+
+	minIndentSize := getMinIndent(lines, skipFirstLine)
+	lines = removeIndentation(lines, minIndentSize, skipFirstLine)
+
+	return strings.Join(lines, "\n")
+}
+
+// getMinIndent calculates the minimum indentation in lines, excluding empty lines.
+func getMinIndent(lines []string, skipFirstLine bool) int {
+	minIndentSize := maxInt
+
+	for i, line := range lines {
+		if i == 0 && skipFirstLine {
+			continue
+		}
+
+		indentSize := 0
+		for _, r := range []rune(line) {
+			if unicode.IsSpace(r) {
+				indentSize += 1
+			} else {
+				break
+			}
+		}
+
+		if len(line) == indentSize {
+			if i == len(lines)-1 && indentSize < minIndentSize {
+				lines[i] = ""
+			}
+		} else if indentSize < minIndentSize {
+			minIndentSize = indentSize
+		}
+	}
+	return minIndentSize
+}
+
+// removeIndentation removes n characters from the front of each line in lines.
+// Skips first line if skipFirstLine is true, skips empty lines.
+func removeIndentation(lines []string, n int, skipFirstLine bool) []string {
+	for i, line := range lines {
+		if i == 0 && skipFirstLine {
+			continue
+		}
+
+		if len(lines[i]) >= n {
+			lines[i] = line[n:]
+		}
+	}
+	return lines
+}
+
+// Docf returns the unindented and formatted string as a here-document.
+// Formatting is done as for fmt.Printf().
+func Docf(raw string, args ...interface{}) string {
+	return fmt.Sprintf(Doc(raw), args...)
+}

vendor/github.com/PuerkitoBio/goquery/LICENSE 🔗

@@ -0,0 +1,12 @@
+Copyright (c) 2012-2021, Martin Angers & Contributors
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+
+* Neither the name of the author nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/PuerkitoBio/goquery/README.md 🔗

@@ -0,0 +1,202 @@
+# goquery - a little like that j-thing, only in Go
+
+[![Build Status](https://github.com/PuerkitoBio/goquery/actions/workflows/test.yml/badge.svg?branch=master)](https://github.com/PuerkitoBio/goquery/actions)
+[![Go Reference](https://pkg.go.dev/badge/github.com/PuerkitoBio/goquery.svg)](https://pkg.go.dev/github.com/PuerkitoBio/goquery)
+[![Sourcegraph Badge](https://sourcegraph.com/github.com/PuerkitoBio/goquery/-/badge.svg)](https://sourcegraph.com/github.com/PuerkitoBio/goquery?badge)
+
+goquery brings a syntax and a set of features similar to [jQuery][] to the [Go language][go]. It is based on Go's [net/html package][html] and the CSS Selector library [cascadia][]. Since the net/html parser returns nodes, and not a full-featured DOM tree, jQuery's stateful manipulation functions (like height(), css(), detach()) have been left off.
+
+Also, because the net/html parser requires UTF-8 encoding, so does goquery: it is the caller's responsibility to ensure that the source document provides UTF-8 encoded HTML. See the [wiki][] for various options to do this.
+
+Syntax-wise, it is as close as possible to jQuery, with the same function names when possible, and that warm and fuzzy chainable interface. jQuery being the ultra-popular library that it is, I felt that writing a similar HTML-manipulating library was better to follow its API than to start anew (in the same spirit as Go's `fmt` package), even though some of its methods are less than intuitive (looking at you, [index()][index]...).
+
+## Table of Contents
+
+* [Installation](#installation)
+* [Changelog](#changelog)
+* [API](#api)
+* [Examples](#examples)
+* [Related Projects](#related-projects)
+* [Support](#support)
+* [License](#license)
+
+## Installation
+
+Please note that starting with version `v1.9.0` of goquery, Go 1.18+ is required due to the use of generics. For previous goquery versions, a Go version of 1.1+ was required because of the `net/html` dependency. Ongoing goquery development is tested on the latest 2 versions of Go.
+
+    $ go get github.com/PuerkitoBio/goquery
+
+(optional) To run unit tests:
+
+    $ cd $GOPATH/src/github.com/PuerkitoBio/goquery
+    $ go test
+
+(optional) To run benchmarks (warning: it runs for a few minutes):
+
+    $ cd $GOPATH/src/github.com/PuerkitoBio/goquery
+    $ go test -bench=".*"
+
+## Changelog
+
+**Note that goquery's API is now stable, and will not break.**
+
+*    **2024-04-29 (v1.9.2)** : Update `go.mod` dependencies.
+*    **2024-02-29 (v1.9.1)** : Improve allocation and performance of the `Map` function and `Selection.Map` method, better document the cascadia differences (thanks [@jwilsson](https://github.com/jwilsson)).
+*    **2024-02-22 (v1.9.0)** : Add a generic `Map` function, **goquery now requires Go version 1.18+** (thanks [@Fesaa](https://github.com/Fesaa)).
+*    **2023-02-18 (v1.8.1)** : Update `go.mod` dependencies, update CI workflow.
+*    **2021-10-25 (v1.8.0)** : Add `Render` function to render a `Selection` to an `io.Writer` (thanks [@anthonygedeon](https://github.com/anthonygedeon)).
+*    **2021-07-11 (v1.7.1)** : Update go.mod dependencies and add dependabot config (thanks [@jauderho](https://github.com/jauderho)).
+*    **2021-06-14 (v1.7.0)** : Add `Single` and `SingleMatcher` functions to optimize first-match selection (thanks [@gdollardollar](https://github.com/gdollardollar)).
+*    **2021-01-11 (v1.6.1)** : Fix panic when calling `{Prepend,Append,Set}Html` on a `Selection` that contains non-Element nodes.
+*    **2020-10-08 (v1.6.0)** : Parse html in context of the container node for all functions that deal with html strings (`AfterHtml`, `AppendHtml`, etc.). Thanks to [@thiemok][thiemok] and [@davidjwilkins][djw] for their work on this.
+*    **2020-02-04 (v1.5.1)** : Update module dependencies.
+*    **2018-11-15 (v1.5.0)** : Go module support (thanks @Zaba505).
+*    **2018-06-07 (v1.4.1)** : Add `NewDocumentFromReader` examples.
+*    **2018-03-24 (v1.4.0)** : Deprecate `NewDocument(url)` and `NewDocumentFromResponse(response)`.
+*    **2018-01-28 (v1.3.0)** : Add `ToEnd` constant to `Slice` until the end of the selection (thanks to @davidjwilkins for raising the issue).
+*    **2018-01-11 (v1.2.0)** : Add `AddBack*` and deprecate `AndSelf` (thanks to @davidjwilkins).
+*    **2017-02-12 (v1.1.0)** : Add `SetHtml` and `SetText` (thanks to @glebtv).
+*    **2016-12-29 (v1.0.2)** : Optimize allocations for `Selection.Text` (thanks to @radovskyb).
+*    **2016-08-28 (v1.0.1)** : Optimize performance for large documents.
+*    **2016-07-27 (v1.0.0)** : Tag version 1.0.0.
+*    **2016-06-15** : Invalid selector strings internally compile to a `Matcher` implementation that never matches any node (instead of a panic). So for example, `doc.Find("~")` returns an empty `*Selection` object.
+*    **2016-02-02** : Add `NodeName` utility function similar to the DOM's `nodeName` property. It returns the tag name of the first element in a selection, and other relevant values of non-element nodes (see [doc][] for details). Add `OuterHtml` utility function similar to the DOM's `outerHTML` property (named `OuterHtml` in small caps for consistency with the existing `Html` method on the `Selection`).
+*    **2015-04-20** : Add `AttrOr` helper method to return the attribute's value or a default value if absent. Thanks to [piotrkowalczuk][piotr].
+*    **2015-02-04** : Add more manipulation functions - Prepend* - thanks again to [Andrew Stone][thatguystone].
+*    **2014-11-28** : Add more manipulation functions - ReplaceWith*, Wrap* and Unwrap - thanks again to [Andrew Stone][thatguystone].
+*    **2014-11-07** : Add manipulation functions (thanks to [Andrew Stone][thatguystone]) and `*Matcher` functions, that receive compiled cascadia selectors instead of selector strings, thus avoiding potential panics thrown by goquery via `cascadia.MustCompile` calls. This results in better performance (selectors can be compiled once and reused) and more idiomatic error handling (you can handle cascadia's compilation errors, instead of recovering from panics, which had been bugging me for a long time). Note that the actual type expected is a `Matcher` interface, that `cascadia.Selector` implements. Other matcher implementations could be used.
+*    **2014-11-06** : Change import paths of net/html to golang.org/x/net/html (see https://groups.google.com/forum/#!topic/golang-nuts/eD8dh3T9yyA). Make sure to update your code to use the new import path too when you call goquery with `html.Node`s.
+*    **v0.3.2** : Add `NewDocumentFromReader()` (thanks jweir) which allows creating a goquery document from an io.Reader.
+*    **v0.3.1** : Add `NewDocumentFromResponse()` (thanks assassingj) which allows creating a goquery document from an http response.
+*    **v0.3.0** : Add `EachWithBreak()` which allows to break out of an `Each()` loop by returning false. This function was added instead of changing the existing `Each()` to avoid breaking compatibility.
+*    **v0.2.1** : Make go-getable, now that [go.net/html is Go1.0-compatible][gonet] (thanks to @matrixik for pointing this out).
+*    **v0.2.0** : Add support for negative indices in Slice(). **BREAKING CHANGE** `Document.Root` is removed, `Document` is now a `Selection` itself (a selection of one, the root element, just like `Document.Root` was before). Add jQuery's Closest() method.
+*    **v0.1.1** : Add benchmarks to use as baseline for refactorings, refactor Next...() and Prev...() methods to use the new html package's linked list features (Next/PrevSibling, FirstChild). Good performance boost (40+% in some cases).
+*    **v0.1.0** : Initial release.
+
+## API
+
+goquery exposes two structs, `Document` and `Selection`, and the `Matcher` interface. Unlike jQuery, which is loaded as part of a DOM document, and thus acts on its containing document, goquery doesn't know which HTML document to act upon. So it needs to be told, and that's what the `Document` type is for. It holds the root document node as the initial Selection value to manipulate.
+
+jQuery often has many variants for the same function (no argument, a selector string argument, a jQuery object argument, a DOM element argument, ...). Instead of exposing the same features in goquery as a single method with variadic empty interface arguments, statically-typed signatures are used following this naming convention:
+
+*    When the jQuery equivalent can be called with no argument, it has the same name as jQuery for the no argument signature (e.g.: `Prev()`), and the version with a selector string argument is called `XxxFiltered()` (e.g.: `PrevFiltered()`)
+*    When the jQuery equivalent **requires** one argument, the same name as jQuery is used for the selector string version (e.g.: `Is()`)
+*    The signatures accepting a jQuery object as argument are defined in goquery as `XxxSelection()` and take a `*Selection` object as argument (e.g.: `FilterSelection()`)
+*    The signatures accepting a DOM element as argument in jQuery are defined in goquery as `XxxNodes()` and take a variadic argument of type `*html.Node` (e.g.: `FilterNodes()`)
+*    The signatures accepting a function as argument in jQuery are defined in goquery as `XxxFunction()` and take a function as argument (e.g.: `FilterFunction()`)
+*    The goquery methods that can be called with a selector string have a corresponding version that take a `Matcher` interface and are defined as `XxxMatcher()` (e.g.: `IsMatcher()`)
+
+Utility functions that are not in jQuery but are useful in Go are implemented as functions (that take a `*Selection` as parameter), to avoid a potential naming clash on the `*Selection`'s methods (reserved for jQuery-equivalent behaviour).
+
+The complete [package reference documentation can be found here][doc].
+
+Please note that Cascadia's selectors do not necessarily match all supported selectors of jQuery (Sizzle). See the [cascadia project][cascadia] for details. Also, the selectors work more like the DOM's `querySelectorAll`, than jQuery's matchers - they have no concept of contextual matching (for some concrete examples of what that means, see [this ticket](https://github.com/andybalholm/cascadia/issues/61)). In practice, it doesn't matter very often but it's something worth mentioning. Invalid selector strings compile to a `Matcher` that fails to match any node. Behaviour of the various functions that take a selector string as argument follows from that fact, e.g. (where `~` is an invalid selector string):
+
+* `Find("~")` returns an empty selection because the selector string doesn't match anything.
+* `Add("~")` returns a new selection that holds the same nodes as the original selection, because it didn't add any node (selector string didn't match anything).
+* `ParentsFiltered("~")` returns an empty selection because the selector string doesn't match anything.
+* `ParentsUntil("~")` returns all parents of the selection because the selector string didn't match any element to stop before the top element.
+
+## Examples
+
+See some tips and tricks in the [wiki][].
+
+Adapted from example_test.go:
+
+```Go
+package main
+
+import (
+  "fmt"
+  "log"
+  "net/http"
+
+  "github.com/PuerkitoBio/goquery"
+)
+
+func ExampleScrape() {
+  // Request the HTML page.
+  res, err := http.Get("http://metalsucks.net")
+  if err != nil {
+    log.Fatal(err)
+  }
+  defer res.Body.Close()
+  if res.StatusCode != 200 {
+    log.Fatalf("status code error: %d %s", res.StatusCode, res.Status)
+  }
+
+  // Load the HTML document
+  doc, err := goquery.NewDocumentFromReader(res.Body)
+  if err != nil {
+    log.Fatal(err)
+  }
+
+  // Find the review items
+  doc.Find(".left-content article .post-title").Each(func(i int, s *goquery.Selection) {
+    // For each item found, get the title
+    title := s.Find("a").Text()
+    fmt.Printf("Review %d: %s\n", i, title)
+  })
+}
+
+func main() {
+  ExampleScrape()
+}
+```
+
+## Related Projects
+
+- [Goq][goq], an HTML deserialization and scraping library based on goquery and struct tags.
+- [andybalholm/cascadia][cascadia], the CSS selector library used by goquery.
+- [suntong/cascadia][cascadiacli], a command-line interface to the cascadia CSS selector library, useful to test selectors.
+- [gocolly/colly](https://github.com/gocolly/colly), a lightning fast and elegant Scraping Framework
+- [gnulnx/goperf](https://github.com/gnulnx/goperf), a website performance test tool that also fetches static assets.
+- [MontFerret/ferret](https://github.com/MontFerret/ferret), declarative web scraping.
+- [tacusci/berrycms](https://github.com/tacusci/berrycms), a modern simple to use CMS with easy to write plugins
+- [Dataflow kit](https://github.com/slotix/dataflowkit), Web Scraping framework for Gophers.
+- [Geziyor](https://github.com/geziyor/geziyor), a fast web crawling & scraping framework for Go. Supports JS rendering.
+- [Pagser](https://github.com/foolin/pagser), a simple, easy, extensible, configurable HTML parser to struct based on goquery and struct tags.
+- [stitcherd](https://github.com/vhodges/stitcherd), A server for doing server side includes using css selectors and DOM updates.
+- [goskyr](https://github.com/jakopako/goskyr), an easily configurable command-line scraper written in Go.
+- [goGetJS](https://github.com/davemolk/goGetJS), a tool for extracting, searching, and saving JavaScript files (with optional headless browser).
+- [fitter](https://github.com/PxyUp/fitter), a tool for selecting values from JSON, XML, HTML and XPath formatted pages.
+
+## Support
+
+There are a number of ways you can support the project:
+
+* Use it, star it, build something with it, spread the word!
+  - If you do build something open-source or otherwise publicly-visible, let me know so I can add it to the [Related Projects](#related-projects) section!
+* Raise issues to improve the project (note: doc typos and clarifications are issues too!)
+  - Please search existing issues before opening a new one - it may have already been addressed.
+* Pull requests: please discuss new code in an issue first, unless the fix is really trivial.
+  - Make sure new code is tested.
+  - Be mindful of existing code - PRs that break existing code have a high probability of being declined, unless it fixes a serious issue.
+* Sponsor the developer
+  - See the Github Sponsor button at the top of the repo on github
+  - or via BuyMeACoffee.com, below
+
+<a href="https://www.buymeacoffee.com/mna" target="_blank"><img src="https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png" alt="Buy Me A Coffee" style="height: 41px !important;width: 174px !important;box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;-webkit-box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;" ></a>
+
+## License
+
+The [BSD 3-Clause license][bsd], the same as the [Go language][golic]. Cascadia's license is [here][caslic].
+
+[jquery]: http://jquery.com/
+[go]: http://golang.org/
+[cascadia]: https://github.com/andybalholm/cascadia
+[cascadiacli]: https://github.com/suntong/cascadia
+[bsd]: http://opensource.org/licenses/BSD-3-Clause
+[golic]: http://golang.org/LICENSE
+[caslic]: https://github.com/andybalholm/cascadia/blob/master/LICENSE
+[doc]: https://pkg.go.dev/github.com/PuerkitoBio/goquery
+[index]: http://api.jquery.com/index/
+[gonet]: https://github.com/golang/net/
+[html]: https://pkg.go.dev/golang.org/x/net/html
+[wiki]: https://github.com/PuerkitoBio/goquery/wiki/Tips-and-tricks
+[thatguystone]: https://github.com/thatguystone
+[piotr]: https://github.com/piotrkowalczuk
+[goq]: https://github.com/andrewstuart/goq
+[thiemok]: https://github.com/thiemok
+[djw]: https://github.com/davidjwilkins

vendor/github.com/PuerkitoBio/goquery/array.go 🔗

@@ -0,0 +1,124 @@
+package goquery
+
+import (
+	"golang.org/x/net/html"
+)
+
+const (
+	maxUint = ^uint(0)
+	maxInt  = int(maxUint >> 1)
+
+	// ToEnd is a special index value that can be used as end index in a call
+	// to Slice so that all elements are selected until the end of the Selection.
+	// It is equivalent to passing (*Selection).Length().
+	ToEnd = maxInt
+)
+
+// First reduces the set of matched elements to the first in the set.
+// It returns a new Selection object, and an empty Selection object if
+// the selection is empty.
+func (s *Selection) First() *Selection {
+	return s.Eq(0)
+}
+
+// Last reduces the set of matched elements to the last in the set.
+// It returns a new Selection object, and an empty Selection object if
+// the selection is empty.
+func (s *Selection) Last() *Selection {
+	return s.Eq(-1)
+}
+
+// Eq reduces the set of matched elements to the one at the specified index.
+// If a negative index is given, it counts backwards starting at the end of the
+// set. It returns a new Selection object, and an empty Selection object if the
+// index is invalid.
+func (s *Selection) Eq(index int) *Selection {
+	if index < 0 {
+		index += len(s.Nodes)
+	}
+
+	if index >= len(s.Nodes) || index < 0 {
+		return newEmptySelection(s.document)
+	}
+
+	return s.Slice(index, index+1)
+}
+
+// Slice reduces the set of matched elements to a subset specified by a range
+// of indices. The start index is 0-based and indicates the index of the first
+// element to select. The end index is 0-based and indicates the index at which
+// the elements stop being selected (the end index is not selected).
+//
+// The indices may be negative, in which case they represent an offset from the
+// end of the selection.
+//
+// The special value ToEnd may be specified as end index, in which case all elements
+// until the end are selected. This works both for a positive and negative start
+// index.
+func (s *Selection) Slice(start, end int) *Selection {
+	if start < 0 {
+		start += len(s.Nodes)
+	}
+	if end == ToEnd {
+		end = len(s.Nodes)
+	} else if end < 0 {
+		end += len(s.Nodes)
+	}
+	return pushStack(s, s.Nodes[start:end])
+}
+
+// Get retrieves the underlying node at the specified index.
+// Get without parameter is not implemented, since the node array is available
+// on the Selection object.
+func (s *Selection) Get(index int) *html.Node {
+	if index < 0 {
+		index += len(s.Nodes) // Negative index gets from the end
+	}
+	return s.Nodes[index]
+}
+
+// Index returns the position of the first element within the Selection object
+// relative to its sibling elements.
+func (s *Selection) Index() int {
+	if len(s.Nodes) > 0 {
+		return newSingleSelection(s.Nodes[0], s.document).PrevAll().Length()
+	}
+	return -1
+}
+
+// IndexSelector returns the position of the first element within the
+// Selection object relative to the elements matched by the selector, or -1 if
+// not found.
+func (s *Selection) IndexSelector(selector string) int {
+	if len(s.Nodes) > 0 {
+		sel := s.document.Find(selector)
+		return indexInSlice(sel.Nodes, s.Nodes[0])
+	}
+	return -1
+}
+
+// IndexMatcher returns the position of the first element within the
+// Selection object relative to the elements matched by the matcher, or -1 if
+// not found.
+func (s *Selection) IndexMatcher(m Matcher) int {
+	if len(s.Nodes) > 0 {
+		sel := s.document.FindMatcher(m)
+		return indexInSlice(sel.Nodes, s.Nodes[0])
+	}
+	return -1
+}
+
+// IndexOfNode returns the position of the specified node within the Selection
+// object, or -1 if not found.
+func (s *Selection) IndexOfNode(node *html.Node) int {
+	return indexInSlice(s.Nodes, node)
+}
+
+// IndexOfSelection returns the position of the first node in the specified
+// Selection object within this Selection object, or -1 if not found.
+func (s *Selection) IndexOfSelection(sel *Selection) int {
+	if sel != nil && len(sel.Nodes) > 0 {
+		return indexInSlice(s.Nodes, sel.Nodes[0])
+	}
+	return -1
+}

vendor/github.com/PuerkitoBio/goquery/doc.go 🔗

@@ -0,0 +1,123 @@
+// Copyright (c) 2012-2016, Martin Angers & Contributors
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without modification,
+// are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation and/or
+// other materials provided with the distribution.
+// * Neither the name of the author nor the names of its contributors may be used to
+// endorse or promote products derived from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS
+// OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
+// AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
+// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
+// WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+/*
+Package goquery implements features similar to jQuery, including the chainable
+syntax, to manipulate and query an HTML document.
+
+It brings a syntax and a set of features similar to jQuery to the Go language.
+It is based on Go's net/html package and the CSS Selector library cascadia.
+Since the net/html parser returns nodes, and not a full-featured DOM
+tree, jQuery's stateful manipulation functions (like height(), css(), detach())
+have been left off.
+
+Also, because the net/html parser requires UTF-8 encoding, so does goquery: it is
+the caller's responsibility to ensure that the source document provides UTF-8 encoded HTML.
+See the repository's wiki for various options on how to do this.
+
+Syntax-wise, it is as close as possible to jQuery, with the same method names when
+possible, and that warm and fuzzy chainable interface. jQuery being the
+ultra-popular library that it is, writing a similar HTML-manipulating
+library was better to follow its API than to start anew (in the same spirit as
+Go's fmt package), even though some of its methods are less than intuitive (looking
+at you, index()...).
+
+It is hosted on GitHub, along with additional documentation in the README.md
+file: https://github.com/puerkitobio/goquery
+
+Please note that because of the net/html dependency, goquery requires Go1.1+.
+
+The various methods are split into files based on the category of behavior.
+The three dots (...) indicate that various "overloads" are available.
+
+* array.go : array-like positional manipulation of the selection.
+    - Eq()
+    - First()
+    - Get()
+    - Index...()
+    - Last()
+    - Slice()
+
+* expand.go : methods that expand or augment the selection's set.
+    - Add...()
+    - AndSelf()
+    - Union(), which is an alias for AddSelection()
+
+* filter.go : filtering methods, that reduce the selection's set.
+    - End()
+    - Filter...()
+    - Has...()
+    - Intersection(), which is an alias of FilterSelection()
+    - Not...()
+
+* iteration.go : methods to loop over the selection's nodes.
+    - Each()
+    - EachWithBreak()
+    - Map()
+
+* manipulation.go : methods for modifying the document
+    - After...()
+    - Append...()
+    - Before...()
+    - Clone()
+    - Empty()
+    - Prepend...()
+    - Remove...()
+    - ReplaceWith...()
+    - Unwrap()
+    - Wrap...()
+    - WrapAll...()
+    - WrapInner...()
+
+* property.go : methods that inspect and get the node's properties values.
+    - Attr*(), RemoveAttr(), SetAttr()
+    - AddClass(), HasClass(), RemoveClass(), ToggleClass()
+    - Html()
+    - Length()
+    - Size(), which is an alias for Length()
+    - Text()
+
+* query.go : methods that query, or reflect, a node's identity.
+    - Contains()
+    - Is...()
+
+* traversal.go : methods to traverse the HTML document tree.
+    - Children...()
+    - Contents()
+    - Find...()
+    - Next...()
+    - Parent[s]...()
+    - Prev...()
+    - Siblings...()
+
+* type.go : definition of the types exposed by goquery.
+    - Document
+    - Selection
+    - Matcher
+
+* utilities.go : definition of helper functions (and not methods on a *Selection)
+that are not part of jQuery, but are useful to goquery.
+    - NodeName
+    - OuterHtml
+*/
+package goquery
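
The chainable interface described in the package documentation works because every method returns a (new) Selection value. A hypothetical stdlib-only miniature, not goquery's actual implementation, makes the mechanics clear:

```go
package main

import (
	"fmt"
	"strings"
)

// Sel is a toy stand-in for goquery's Selection: each method returns a
// new value rather than mutating in place, which is what makes calls
// chainable.
type Sel struct{ items []string }

// Filter keeps only items containing substr and returns a new Sel.
func (s Sel) Filter(substr string) Sel {
	var kept []string
	for _, it := range s.items {
		if strings.Contains(it, substr) {
			kept = append(kept, it)
		}
	}
	return Sel{kept}
}

// Text joins the remaining items, loosely mirroring Selection.Text().
func (s Sel) Text() string { return strings.Join(s.items, ",") }

func main() {
	s := Sel{[]string{"<h1>", "<h2>", "<p>"}}
	// Methods chain because each call returns a fresh Sel.
	fmt.Println(s.Filter("h").Filter("2").Text()) // <h2>
}
```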

vendor/github.com/PuerkitoBio/goquery/expand.go

@@ -0,0 +1,70 @@
+package goquery
+
+import "golang.org/x/net/html"
+
+// Add adds the selector string's matching nodes to those in the current
+// selection and returns a new Selection object.
+// The selector string is run in the context of the document of the current
+// Selection object.
+func (s *Selection) Add(selector string) *Selection {
+	return s.AddNodes(findWithMatcher([]*html.Node{s.document.rootNode}, compileMatcher(selector))...)
+}
+
+// AddMatcher adds the matcher's matching nodes to those in the current
+// selection and returns a new Selection object.
+// The matcher is run in the context of the document of the current
+// Selection object.
+func (s *Selection) AddMatcher(m Matcher) *Selection {
+	return s.AddNodes(findWithMatcher([]*html.Node{s.document.rootNode}, m)...)
+}
+
+// AddSelection adds the specified Selection object's nodes to those in the
+// current selection and returns a new Selection object.
+func (s *Selection) AddSelection(sel *Selection) *Selection {
+	if sel == nil {
+		return s.AddNodes()
+	}
+	return s.AddNodes(sel.Nodes...)
+}
+
+// Union is an alias for AddSelection.
+func (s *Selection) Union(sel *Selection) *Selection {
+	return s.AddSelection(sel)
+}
+
+// AddNodes adds the specified nodes to those in the
+// current selection and returns a new Selection object.
+func (s *Selection) AddNodes(nodes ...*html.Node) *Selection {
+	return pushStack(s, appendWithoutDuplicates(s.Nodes, nodes, nil))
+}
+
+// AndSelf adds the previous set of elements on the stack to the current set.
+// It returns a new Selection object containing the current Selection combined
+// with the previous one.
+// Deprecated: This function has been deprecated and is now an alias for AddBack().
+func (s *Selection) AndSelf() *Selection {
+	return s.AddBack()
+}
+
+// AddBack adds the previous set of elements on the stack to the current set.
+// It returns a new Selection object containing the current Selection combined
+// with the previous one.
+func (s *Selection) AddBack() *Selection {
+	return s.AddSelection(s.prevSel)
+}
+
+// AddBackFiltered reduces the previous set of elements on the stack to those that
+// match the selector string, and adds them to the current set.
+// It returns a new Selection object containing the current Selection combined
+// with the filtered previous one.
+func (s *Selection) AddBackFiltered(selector string) *Selection {
+	return s.AddSelection(s.prevSel.Filter(selector))
+}
+
+// AddBackMatcher reduces the previous set of elements on the stack to those that match
+// the matcher, and adds them to the current set.
+// It returns a new Selection object containing the current Selection combined
+// with the filtered previous one.
+func (s *Selection) AddBackMatcher(m Matcher) *Selection {
+	return s.AddSelection(s.prevSel.FilterMatcher(m))
+}
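
AddNodes above delegates to an appendWithoutDuplicates helper so a node already present in the selection is not added twice. The idea can be sketched with a stdlib-only, order-preserving union over strings (a hypothetical simplification; the real helper works on *html.Node):

```go
package main

import "fmt"

// union appends items from b to a, skipping any already present,
// preserving first-seen order — the same idea as goquery's
// appendWithoutDuplicates, sketched over strings.
func union(a, b []string) []string {
	seen := make(map[string]bool, len(a))
	out := make([]string, 0, len(a)+len(b))
	for _, n := range a {
		if !seen[n] {
			seen[n] = true
			out = append(out, n)
		}
	}
	for _, n := range b {
		if !seen[n] {
			seen[n] = true
			out = append(out, n)
		}
	}
	return out
}

func main() {
	fmt.Println(union([]string{"a", "b"}, []string{"b", "c"})) // [a b c]
}
```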

vendor/github.com/PuerkitoBio/goquery/filter.go

@@ -0,0 +1,163 @@
+package goquery
+
+import "golang.org/x/net/html"
+
+// Filter reduces the set of matched elements to those that match the selector string.
+// It returns a new Selection object for this subset of matching elements.
+func (s *Selection) Filter(selector string) *Selection {
+	return s.FilterMatcher(compileMatcher(selector))
+}
+
+// FilterMatcher reduces the set of matched elements to those that match
+// the given matcher. It returns a new Selection object for this subset
+// of matching elements.
+func (s *Selection) FilterMatcher(m Matcher) *Selection {
+	return pushStack(s, winnow(s, m, true))
+}
+
+// Not removes elements from the Selection that match the selector string.
+// It returns a new Selection object with the matching elements removed.
+func (s *Selection) Not(selector string) *Selection {
+	return s.NotMatcher(compileMatcher(selector))
+}
+
+// NotMatcher removes elements from the Selection that match the given matcher.
+// It returns a new Selection object with the matching elements removed.
+func (s *Selection) NotMatcher(m Matcher) *Selection {
+	return pushStack(s, winnow(s, m, false))
+}
+
+// FilterFunction reduces the set of matched elements to those that pass the function's test.
+// It returns a new Selection object for this subset of elements.
+func (s *Selection) FilterFunction(f func(int, *Selection) bool) *Selection {
+	return pushStack(s, winnowFunction(s, f, true))
+}
+
+// NotFunction removes elements from the Selection that pass the function's test.
+// It returns a new Selection object with the matching elements removed.
+func (s *Selection) NotFunction(f func(int, *Selection) bool) *Selection {
+	return pushStack(s, winnowFunction(s, f, false))
+}
+
+// FilterNodes reduces the set of matched elements to those that match the specified nodes.
+// It returns a new Selection object for this subset of elements.
+func (s *Selection) FilterNodes(nodes ...*html.Node) *Selection {
+	return pushStack(s, winnowNodes(s, nodes, true))
+}
+
+// NotNodes removes elements from the Selection that match the specified nodes.
+// It returns a new Selection object with the matching elements removed.
+func (s *Selection) NotNodes(nodes ...*html.Node) *Selection {
+	return pushStack(s, winnowNodes(s, nodes, false))
+}
+
+// FilterSelection reduces the set of matched elements to those that match a
+// node in the specified Selection object.
+// It returns a new Selection object for this subset of elements.
+func (s *Selection) FilterSelection(sel *Selection) *Selection {
+	if sel == nil {
+		return pushStack(s, winnowNodes(s, nil, true))
+	}
+	return pushStack(s, winnowNodes(s, sel.Nodes, true))
+}
+
+// NotSelection removes elements from the Selection that match a node in the specified
+// Selection object. It returns a new Selection object with the matching elements removed.
+func (s *Selection) NotSelection(sel *Selection) *Selection {
+	if sel == nil {
+		return pushStack(s, winnowNodes(s, nil, false))
+	}
+	return pushStack(s, winnowNodes(s, sel.Nodes, false))
+}
+
+// Intersection is an alias for FilterSelection.
+func (s *Selection) Intersection(sel *Selection) *Selection {
+	return s.FilterSelection(sel)
+}
+
+// Has reduces the set of matched elements to those that have a descendant
+// that matches the selector.
+// It returns a new Selection object with the matching elements.
+func (s *Selection) Has(selector string) *Selection {
+	return s.HasSelection(s.document.Find(selector))
+}
+
+// HasMatcher reduces the set of matched elements to those that have a descendant
+// that matches the matcher.
+// It returns a new Selection object with the matching elements.
+func (s *Selection) HasMatcher(m Matcher) *Selection {
+	return s.HasSelection(s.document.FindMatcher(m))
+}
+
+// HasNodes reduces the set of matched elements to those that have a
+// descendant that matches one of the nodes.
+// It returns a new Selection object with the matching elements.
+func (s *Selection) HasNodes(nodes ...*html.Node) *Selection {
+	return s.FilterFunction(func(_ int, sel *Selection) bool {
+		// Add all nodes that contain one of the specified nodes
+		for _, n := range nodes {
+			if sel.Contains(n) {
+				return true
+			}
+		}
+		return false
+	})
+}
+
+// HasSelection reduces the set of matched elements to those that have a
+// descendant that matches one of the nodes of the specified Selection object.
+// It returns a new Selection object with the matching elements.
+func (s *Selection) HasSelection(sel *Selection) *Selection {
+	if sel == nil {
+		return s.HasNodes()
+	}
+	return s.HasNodes(sel.Nodes...)
+}
+
+// End ends the most recent filtering operation in the current chain and
+// returns the set of matched elements to its previous state.
+func (s *Selection) End() *Selection {
+	if s.prevSel != nil {
+		return s.prevSel
+	}
+	return newEmptySelection(s.document)
+}
+
+// Filter based on the matcher, and the indicator to keep (Filter) or
+// to get rid of (Not) the matching elements.
+func winnow(sel *Selection, m Matcher, keep bool) []*html.Node {
+	// Optimize if keep is requested
+	if keep {
+		return m.Filter(sel.Nodes)
+	}
+	// Use grep
+	return grep(sel, func(i int, s *Selection) bool {
+		return !m.Match(s.Get(0))
+	})
+}
+
+// Filter based on an array of nodes, and the indicator to keep (Filter) or
+// to get rid of (Not) the matching elements.
+func winnowNodes(sel *Selection, nodes []*html.Node, keep bool) []*html.Node {
+	if len(nodes)+len(sel.Nodes) < minNodesForSet {
+		return grep(sel, func(i int, s *Selection) bool {
+			return isInSlice(nodes, s.Get(0)) == keep
+		})
+	}
+
+	set := make(map[*html.Node]bool)
+	for _, n := range nodes {
+		set[n] = true
+	}
+	return grep(sel, func(i int, s *Selection) bool {
+		return set[s.Get(0)] == keep
+	})
+}
+
+// Filter based on a function test, and the indicator to keep (Filter) or
+// to get rid of (Not) the matching elements.
+func winnowFunction(sel *Selection, f func(int, *Selection) bool, keep bool) []*html.Node {
+	return grep(sel, func(i int, s *Selection) bool {
+		return f(i, s) == keep
+	})
+}
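
winnowNodes above switches strategies on input size: below minNodesForSet it does a linear scan per element, above it it builds a set for O(1) membership tests. A stdlib-only sketch of that keep/drop threshold pattern over ints (the real code operates on *html.Node; the constant value here is an assumption):

```go
package main

import "fmt"

const minNodesForSet = 1000 // switch-over threshold (illustrative value)

// contains does a linear scan — cheap enough for small slices.
func contains(ns []int, n int) bool {
	for _, v := range ns {
		if v == n {
			return true
		}
	}
	return false
}

// winnow keeps (keep == true) or drops (keep == false) the elements of
// sel that appear in nodes, building a map only when the inputs are
// large enough to justify it — mirroring winnowNodes above.
func winnow(sel, nodes []int, keep bool) []int {
	var out []int
	if len(nodes)+len(sel) < minNodesForSet {
		for _, n := range sel {
			if contains(nodes, n) == keep {
				out = append(out, n)
			}
		}
		return out
	}
	set := make(map[int]bool, len(nodes))
	for _, n := range nodes {
		set[n] = true
	}
	for _, n := range sel {
		if set[n] == keep {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	fmt.Println(winnow([]int{1, 2, 3, 4}, []int{2, 4}, true))  // [2 4]
	fmt.Println(winnow([]int{1, 2, 3, 4}, []int{2, 4}, false)) // [1 3]
}
```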

vendor/github.com/PuerkitoBio/goquery/iteration.go

@@ -0,0 +1,47 @@
+package goquery
+
+// Each iterates over a Selection object, executing a function for each
+// matched element. It returns the current Selection object. The function
+// f is called for each element in the selection with the index of the
+// element in that selection starting at 0, and a *Selection that contains
+// only that element.
+func (s *Selection) Each(f func(int, *Selection)) *Selection {
+	for i, n := range s.Nodes {
+		f(i, newSingleSelection(n, s.document))
+	}
+	return s
+}
+
+// EachWithBreak iterates over a Selection object, executing a function for each
+// matched element. It is identical to Each except that it is possible to break
+// out of the loop by returning false in the callback function. It returns the
+// current Selection object.
+func (s *Selection) EachWithBreak(f func(int, *Selection) bool) *Selection {
+	for i, n := range s.Nodes {
+		if !f(i, newSingleSelection(n, s.document)) {
+			return s
+		}
+	}
+	return s
+}
+
+// Map passes each element in the current matched set through a function,
+// producing a slice of string holding the returned values. The function
+// f is called for each element in the selection with the index of the
+// element in that selection starting at 0, and a *Selection that contains
+// only that element.
+func (s *Selection) Map(f func(int, *Selection) string) (result []string) {
+	return Map(s, f)
+}
+
+// Map is the generic version of Selection.Map, allowing any type to be
+// returned.
+func Map[E any](s *Selection, f func(int, *Selection) E) (result []E) {
+	result = make([]E, len(s.Nodes))
+
+	for i, n := range s.Nodes {
+		result[i] = f(i, newSingleSelection(n, s.document))
+	}
+
+	return result
+}
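
The generic Map above (index plus element in, one result out per element) is the standard type-parameterized map shape, available since Go 1.18. A self-contained sketch of the same pattern over plain slices:

```go
package main

import "fmt"

// mapSlice mirrors the shape of goquery's generic Map: it visits each
// element with its index and collects the callback's results into a
// slice of the (possibly different) result type.
func mapSlice[T, E any](in []T, f func(int, T) E) []E {
	out := make([]E, len(in))
	for i, v := range in {
		out[i] = f(i, v)
	}
	return out
}

func main() {
	lens := mapSlice([]string{"a", "bb", "ccc"}, func(_ int, s string) int {
		return len(s)
	})
	fmt.Println(lens) // [1 2 3]
}
```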

vendor/github.com/PuerkitoBio/goquery/manipulation.go

@@ -0,0 +1,679 @@
+package goquery
+
+import (
+	"strings"
+
+	"golang.org/x/net/html"
+)
+
+// After applies the selector from the root document and inserts the matched elements
+// after the elements in the set of matched elements.
+//
+// If one of the matched elements in the selection is not currently in the
+// document, it's impossible to insert nodes after it, so it will be ignored.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) After(selector string) *Selection {
+	return s.AfterMatcher(compileMatcher(selector))
+}
+
+// AfterMatcher applies the matcher from the root document and inserts the matched elements
+// after the elements in the set of matched elements.
+//
+// If one of the matched elements in the selection is not currently in the
+// document, it's impossible to insert nodes after it, so it will be ignored.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AfterMatcher(m Matcher) *Selection {
+	return s.AfterNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// AfterSelection inserts the elements in the selection after each element in the set of matched
+// elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AfterSelection(sel *Selection) *Selection {
+	return s.AfterNodes(sel.Nodes...)
+}
+
+// AfterHtml parses the html and inserts it after the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AfterHtml(htmlStr string) *Selection {
+	return s.eachNodeHtml(htmlStr, true, func(node *html.Node, nodes []*html.Node) {
+		nextSibling := node.NextSibling
+		for _, n := range nodes {
+			if node.Parent != nil {
+				node.Parent.InsertBefore(n, nextSibling)
+			}
+		}
+	})
+}
+
+// AfterNodes inserts the nodes after each element in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AfterNodes(ns ...*html.Node) *Selection {
+	return s.manipulateNodes(ns, true, func(sn *html.Node, n *html.Node) {
+		if sn.Parent != nil {
+			sn.Parent.InsertBefore(n, sn.NextSibling)
+		}
+	})
+}
+
+// Append appends the elements specified by the selector to the end of each element
+// in the set of matched elements, following those rules:
+//
+// 1) The selector is applied to the root document.
+//
+// 2) Elements that are part of the document will be moved to the new location.
+//
+// 3) If there are multiple locations to append to, cloned nodes will be
+// appended to all target locations except the last one, which will be moved
+// as noted in (2).
+func (s *Selection) Append(selector string) *Selection {
+	return s.AppendMatcher(compileMatcher(selector))
+}
+
+// AppendMatcher appends the elements specified by the matcher to the end of each element
+// in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AppendMatcher(m Matcher) *Selection {
+	return s.AppendNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// AppendSelection appends the elements in the selection to the end of each element
+// in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AppendSelection(sel *Selection) *Selection {
+	return s.AppendNodes(sel.Nodes...)
+}
+
+// AppendHtml parses the html and appends it to the set of matched elements.
+func (s *Selection) AppendHtml(htmlStr string) *Selection {
+	return s.eachNodeHtml(htmlStr, false, func(node *html.Node, nodes []*html.Node) {
+		for _, n := range nodes {
+			node.AppendChild(n)
+		}
+	})
+}
+
+// AppendNodes appends the specified nodes to each node in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AppendNodes(ns ...*html.Node) *Selection {
+	return s.manipulateNodes(ns, false, func(sn *html.Node, n *html.Node) {
+		sn.AppendChild(n)
+	})
+}
+
+// Before inserts the matched elements before each element in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) Before(selector string) *Selection {
+	return s.BeforeMatcher(compileMatcher(selector))
+}
+
+// BeforeMatcher inserts the matched elements before each element in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) BeforeMatcher(m Matcher) *Selection {
+	return s.BeforeNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// BeforeSelection inserts the elements in the selection before each element in the set of matched
+// elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) BeforeSelection(sel *Selection) *Selection {
+	return s.BeforeNodes(sel.Nodes...)
+}
+
+// BeforeHtml parses the html and inserts it before the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) BeforeHtml(htmlStr string) *Selection {
+	return s.eachNodeHtml(htmlStr, true, func(node *html.Node, nodes []*html.Node) {
+		for _, n := range nodes {
+			if node.Parent != nil {
+				node.Parent.InsertBefore(n, node)
+			}
+		}
+	})
+}
+
+// BeforeNodes inserts the nodes before each element in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) BeforeNodes(ns ...*html.Node) *Selection {
+	return s.manipulateNodes(ns, false, func(sn *html.Node, n *html.Node) {
+		if sn.Parent != nil {
+			sn.Parent.InsertBefore(n, sn)
+		}
+	})
+}
+
+// Clone creates a deep copy of the set of matched nodes. The new nodes will not be
+// attached to the document.
+func (s *Selection) Clone() *Selection {
+	ns := newEmptySelection(s.document)
+	ns.Nodes = cloneNodes(s.Nodes)
+	return ns
+}
+
+// Empty removes all children nodes from the set of matched elements.
+// It returns the children nodes in a new Selection.
+func (s *Selection) Empty() *Selection {
+	var nodes []*html.Node
+
+	for _, n := range s.Nodes {
+		for c := n.FirstChild; c != nil; c = n.FirstChild {
+			n.RemoveChild(c)
+			nodes = append(nodes, c)
+		}
+	}
+
+	return pushStack(s, nodes)
+}
+
+// Prepend prepends the elements specified by the selector to each element in
+// the set of matched elements, following the same rules as Append.
+func (s *Selection) Prepend(selector string) *Selection {
+	return s.PrependMatcher(compileMatcher(selector))
+}
+
+// PrependMatcher prepends the elements specified by the matcher to each
+// element in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) PrependMatcher(m Matcher) *Selection {
+	return s.PrependNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// PrependSelection prepends the elements in the selection to each element in
+// the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) PrependSelection(sel *Selection) *Selection {
+	return s.PrependNodes(sel.Nodes...)
+}
+
+// PrependHtml parses the html and prepends it to the set of matched elements.
+func (s *Selection) PrependHtml(htmlStr string) *Selection {
+	return s.eachNodeHtml(htmlStr, false, func(node *html.Node, nodes []*html.Node) {
+		firstChild := node.FirstChild
+		for _, n := range nodes {
+			node.InsertBefore(n, firstChild)
+		}
+	})
+}
+
+// PrependNodes prepends the specified nodes to each node in the set of
+// matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) PrependNodes(ns ...*html.Node) *Selection {
+	return s.manipulateNodes(ns, true, func(sn *html.Node, n *html.Node) {
+		// sn.FirstChild may be nil, in which case this functions like
+		// sn.AppendChild()
+		sn.InsertBefore(n, sn.FirstChild)
+	})
+}
+
+// Remove removes the set of matched elements from the document.
+// It returns the same selection, now consisting of nodes not in the document.
+func (s *Selection) Remove() *Selection {
+	for _, n := range s.Nodes {
+		if n.Parent != nil {
+			n.Parent.RemoveChild(n)
+		}
+	}
+
+	return s
+}
+
+// RemoveFiltered removes from the current set of matched elements those that
+// match the selector filter. It returns the Selection of removed nodes.
+//
+// For example if the selection s contains "<h1>", "<h2>" and "<h3>"
+// and s.RemoveFiltered("h2") is called, only the "<h2>" node is removed
+// (and returned), while "<h1>" and "<h3>" are kept in the document.
+func (s *Selection) RemoveFiltered(selector string) *Selection {
+	return s.RemoveMatcher(compileMatcher(selector))
+}
+
+// RemoveMatcher removes from the current set of matched elements those that
+// match the Matcher filter. It returns the Selection of removed nodes.
+// See RemoveFiltered for additional information.
+func (s *Selection) RemoveMatcher(m Matcher) *Selection {
+	return s.FilterMatcher(m).Remove()
+}
+
+// ReplaceWith replaces each element in the set of matched elements with the
+// nodes matched by the given selector.
+// It returns the removed elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) ReplaceWith(selector string) *Selection {
+	return s.ReplaceWithMatcher(compileMatcher(selector))
+}
+
+// ReplaceWithMatcher replaces each element in the set of matched elements with
+// the nodes matched by the given Matcher.
+// It returns the removed elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) ReplaceWithMatcher(m Matcher) *Selection {
+	return s.ReplaceWithNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// ReplaceWithSelection replaces each element in the set of matched elements with
+// the nodes from the given Selection.
+// It returns the removed elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) ReplaceWithSelection(sel *Selection) *Selection {
+	return s.ReplaceWithNodes(sel.Nodes...)
+}
+
+// ReplaceWithHtml replaces each element in the set of matched elements with
+// the parsed HTML.
+// It returns the removed elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) ReplaceWithHtml(htmlStr string) *Selection {
+	s.eachNodeHtml(htmlStr, true, func(node *html.Node, nodes []*html.Node) {
+		nextSibling := node.NextSibling
+		for _, n := range nodes {
+			if node.Parent != nil {
+				node.Parent.InsertBefore(n, nextSibling)
+			}
+		}
+	})
+	return s.Remove()
+}
+
+// ReplaceWithNodes replaces each element in the set of matched elements with
+// the given nodes.
+// It returns the removed elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) ReplaceWithNodes(ns ...*html.Node) *Selection {
+	s.AfterNodes(ns...)
+	return s.Remove()
+}
+
+// SetHtml sets the html content of each element in the selection to
+// specified html string.
+func (s *Selection) SetHtml(htmlStr string) *Selection {
+	for _, context := range s.Nodes {
+		for c := context.FirstChild; c != nil; c = context.FirstChild {
+			context.RemoveChild(c)
+		}
+	}
+	return s.eachNodeHtml(htmlStr, false, func(node *html.Node, nodes []*html.Node) {
+		for _, n := range nodes {
+			node.AppendChild(n)
+		}
+	})
+}
+
+// SetText sets the content of each element in the selection to specified content.
+// The provided text string is escaped.
+func (s *Selection) SetText(text string) *Selection {
+	return s.SetHtml(html.EscapeString(text))
+}
+
+// Unwrap removes the parents of the set of matched elements, leaving the matched
+// elements (and their siblings, if any) in their place.
+// It returns the original selection.
+func (s *Selection) Unwrap() *Selection {
+	s.Parent().Each(func(i int, ss *Selection) {
+		// For some reason, jquery allows unwrap to remove the <head> element, so
+		// allowing it here too. Same for <html>. Why it allows those elements to
+		// be unwrapped while not allowing body is a mystery to me.
+		if ss.Nodes[0].Data != "body" {
+			ss.ReplaceWithSelection(ss.Contents())
+		}
+	})
+
+	return s
+}
+
+// Wrap wraps each element in the set of matched elements inside the first
+// element matched by the given selector. The matched child is cloned before
+// being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) Wrap(selector string) *Selection {
+	return s.WrapMatcher(compileMatcher(selector))
+}
+
+// WrapMatcher wraps each element in the set of matched elements inside the
+// first element matched by the given matcher. The matched child is cloned
+// before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapMatcher(m Matcher) *Selection {
+	return s.wrapNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// WrapSelection wraps each element in the set of matched elements inside the
+// first element in the given Selection. The element is cloned before being
+// inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapSelection(sel *Selection) *Selection {
+	return s.wrapNodes(sel.Nodes...)
+}
+
+// WrapHtml wraps each element in the set of matched elements inside the
+// innermost child of the given HTML.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapHtml(htmlStr string) *Selection {
+	nodesMap := make(map[string][]*html.Node)
+	for _, context := range s.Nodes {
+		var parent *html.Node
+		if context.Parent != nil {
+			parent = context.Parent
+		} else {
+			parent = &html.Node{Type: html.ElementNode}
+		}
+		nodes, found := nodesMap[nodeName(parent)]
+		if !found {
+			nodes = parseHtmlWithContext(htmlStr, parent)
+			nodesMap[nodeName(parent)] = nodes
+		}
+		newSingleSelection(context, s.document).wrapAllNodes(cloneNodes(nodes)...)
+	}
+	return s
+}
+
+// WrapNode wraps each element in the set of matched elements inside the
+// innermost child of the given node. The given node is copied before being inserted
+// into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapNode(n *html.Node) *Selection {
+	return s.wrapNodes(n)
+}
+
+func (s *Selection) wrapNodes(ns ...*html.Node) *Selection {
+	s.Each(func(i int, ss *Selection) {
+		ss.wrapAllNodes(ns...)
+	})
+
+	return s
+}
+
+// WrapAll wraps a single HTML structure, matched by the given selector, around
+// all elements in the set of matched elements. The matched child is cloned
+// before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapAll(selector string) *Selection {
+	return s.WrapAllMatcher(compileMatcher(selector))
+}
+
+// WrapAllMatcher wraps a single HTML structure, matched by the given Matcher,
+// around all elements in the set of matched elements. The matched child is
+// cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapAllMatcher(m Matcher) *Selection {
+	return s.wrapAllNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// WrapAllSelection wraps a single HTML structure, the first node of the given
+// Selection, around all elements in the set of matched elements. The matched
+// child is cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapAllSelection(sel *Selection) *Selection {
+	return s.wrapAllNodes(sel.Nodes...)
+}
+
+// WrapAllHtml wraps the given HTML structure around all elements in the set of
+// matched elements. The matched child is cloned before being inserted into the
+// document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapAllHtml(htmlStr string) *Selection {
+	var context *html.Node
+	var nodes []*html.Node
+	if len(s.Nodes) > 0 {
+		context = s.Nodes[0]
+		if context.Parent != nil {
+			nodes = parseHtmlWithContext(htmlStr, context)
+		} else {
+			nodes = parseHtml(htmlStr)
+		}
+	}
+	return s.wrapAllNodes(nodes...)
+}
+
+func (s *Selection) wrapAllNodes(ns ...*html.Node) *Selection {
+	if len(ns) > 0 {
+		return s.WrapAllNode(ns[0])
+	}
+	return s
+}
+
+// WrapAllNode wraps the given node around the first element in the Selection,
+// making all other nodes in the Selection children of the given node. The node
+// is cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapAllNode(n *html.Node) *Selection {
+	if s.Size() == 0 {
+		return s
+	}
+
+	wrap := cloneNode(n)
+
+	first := s.Nodes[0]
+	if first.Parent != nil {
+		first.Parent.InsertBefore(wrap, first)
+		first.Parent.RemoveChild(first)
+	}
+
+	for c := getFirstChildEl(wrap); c != nil; c = getFirstChildEl(wrap) {
+		wrap = c
+	}
+
+	newSingleSelection(wrap, s.document).AppendSelection(s)
+
+	return s
+}
+
+// WrapInner wraps an HTML structure, matched by the given selector, around the
+// content of each element in the set of matched elements. The matched child is
+// cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapInner(selector string) *Selection {
+	return s.WrapInnerMatcher(compileMatcher(selector))
+}
+
+// WrapInnerMatcher wraps an HTML structure, matched by the given matcher,
+// around the content of each element in the set of matched elements. The
+// matched child is cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapInnerMatcher(m Matcher) *Selection {
+	return s.wrapInnerNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// WrapInnerSelection wraps the first element of the given Selection around
+// the content of each element in the set of matched elements. The element is
+// cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapInnerSelection(sel *Selection) *Selection {
+	return s.wrapInnerNodes(sel.Nodes...)
+}
+
+// WrapInnerHtml parses the given HTML and wraps it around the content of each
+// element in the set of matched elements. The parsed nodes are cloned before
+// being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapInnerHtml(htmlStr string) *Selection {
+	nodesMap := make(map[string][]*html.Node)
+	for _, context := range s.Nodes {
+		nodes, found := nodesMap[nodeName(context)]
+		if !found {
+			nodes = parseHtmlWithContext(htmlStr, context)
+			nodesMap[nodeName(context)] = nodes
+		}
+		newSingleSelection(context, s.document).wrapInnerNodes(cloneNodes(nodes)...)
+	}
+	return s
+}
+
+// WrapInnerNode wraps the given node around the content of each element in
+// the set of matched elements. The node is cloned before being inserted into
+// the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapInnerNode(n *html.Node) *Selection {
+	return s.wrapInnerNodes(n)
+}
+
+func (s *Selection) wrapInnerNodes(ns ...*html.Node) *Selection {
+	if len(ns) == 0 {
+		return s
+	}
+
+	s.Each(func(i int, s *Selection) {
+		contents := s.Contents()
+
+		if contents.Size() > 0 {
+			contents.wrapAllNodes(ns...)
+		} else {
+			s.AppendNodes(cloneNode(ns[0]))
+		}
+	})
+
+	return s
+}
+
+func parseHtml(h string) []*html.Node {
+	// Errors are only returned when the io.Reader returns any error besides
+	// EOF, but strings.Reader never will
+	nodes, err := html.ParseFragment(strings.NewReader(h), &html.Node{Type: html.ElementNode})
+	if err != nil {
+		panic("goquery: failed to parse HTML: " + err.Error())
+	}
+	return nodes
+}
+
+func parseHtmlWithContext(h string, context *html.Node) []*html.Node {
+	// Errors are only returned when the io.Reader returns any error besides
+	// EOF, but strings.Reader never will
+	nodes, err := html.ParseFragment(strings.NewReader(h), context)
+	if err != nil {
+		panic("goquery: failed to parse HTML: " + err.Error())
+	}
+	return nodes
+}
+
+// Get the first child that is an ElementNode
+func getFirstChildEl(n *html.Node) *html.Node {
+	c := n.FirstChild
+	for c != nil && c.Type != html.ElementNode {
+		c = c.NextSibling
+	}
+	return c
+}
+
+// Deep copy a slice of nodes.
+func cloneNodes(ns []*html.Node) []*html.Node {
+	cns := make([]*html.Node, 0, len(ns))
+
+	for _, n := range ns {
+		cns = append(cns, cloneNode(n))
+	}
+
+	return cns
+}
+
+// Deep copy a node. The new node has clones of all the original node's
+// children but none of its parents or siblings.
+func cloneNode(n *html.Node) *html.Node {
+	nn := &html.Node{
+		Type:     n.Type,
+		DataAtom: n.DataAtom,
+		Data:     n.Data,
+		Attr:     make([]html.Attribute, len(n.Attr)),
+	}
+
+	copy(nn.Attr, n.Attr)
+	for c := n.FirstChild; c != nil; c = c.NextSibling {
+		nn.AppendChild(cloneNode(c))
+	}
+
+	return nn
+}
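The recursive deep-copy shape of `cloneNode` can be shown in isolation on a simplified tree type (`node` below is a hypothetical stand-in for `html.Node`, not part of goquery):

```go
package main

import "fmt"

// node is a simplified stand-in for html.Node: data plus a child list.
type node struct {
	Data     string
	Children []*node
}

// clone deep-copies a node and its children. Like cloneNode above, the copy
// keeps no link back to the original's parents or siblings, so mutating one
// tree cannot affect the other.
func clone(n *node) *node {
	nn := &node{Data: n.Data}
	for _, c := range n.Children {
		nn.Children = append(nn.Children, clone(c))
	}
	return nn
}

func main() {
	root := &node{Data: "div", Children: []*node{{Data: "span"}}}
	cp := clone(root)
	cp.Children[0].Data = "em"
	fmt.Println(root.Children[0].Data, cp.Children[0].Data) // span em
}
```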
+
+func (s *Selection) manipulateNodes(ns []*html.Node, reverse bool,
+	f func(sn *html.Node, n *html.Node)) *Selection {
+
+	lasti := s.Size() - 1
+
+	// net/html doesn't provide document fragments for insertion, so to get
+	// things in the correct order with After() and Prepend(), the callback
+	// needs to be called on the reverse of the nodes.
+	if reverse {
+		for i, j := 0, len(ns)-1; i < j; i, j = i+1, j-1 {
+			ns[i], ns[j] = ns[j], ns[i]
+		}
+	}
+
+	for i, sn := range s.Nodes {
+		for _, n := range ns {
+			if i != lasti {
+				f(sn, cloneNode(n))
+			} else {
+				if n.Parent != nil {
+					n.Parent.RemoveChild(n)
+				}
+				f(sn, n)
+			}
+		}
+	}
+
+	return s
+}
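The in-place reversal used by `manipulateNodes` is the standard symmetric-swap idiom; a minimal sketch (the generic `reverse` helper is hypothetical, not part of goquery):

```go
package main

import "fmt"

// reverse flips a slice in place by swapping elements from both ends toward
// the middle — the same loop manipulateNodes runs over its node slice.
func reverse[T any](s []T) {
	for i, j := 0, len(s)-1; i < j; i, j = i+1, j-1 {
		s[i], s[j] = s[j], s[i]
	}
}

func main() {
	ns := []string{"a", "b", "c", "d"}
	reverse(ns)
	fmt.Println(ns) // [d c b a]
}
```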
+
+// eachNodeHtml parses the given html string and inserts the resulting nodes in the dom with the mergeFn.
+// The parsed nodes are inserted for each element of the selection.
+// isParent can be used to indicate that the elements of the selection should be treated as the parent for the parsed html.
+// A cache is used to avoid parsing the html multiple times should the elements of the selection result in the same context.
+func (s *Selection) eachNodeHtml(htmlStr string, isParent bool, mergeFn func(n *html.Node, nodes []*html.Node)) *Selection {
+	// cache to avoid parsing the html for the same context multiple times
+	nodeCache := make(map[string][]*html.Node)
+	var context *html.Node
+	for _, n := range s.Nodes {
+		if isParent {
+			context = n.Parent
+		} else {
+			if n.Type != html.ElementNode {
+				continue
+			}
+			context = n
+		}
+		if context != nil {
+			nodes, found := nodeCache[nodeName(context)]
+			if !found {
+				nodes = parseHtmlWithContext(htmlStr, context)
+				nodeCache[nodeName(context)] = nodes
+			}
+			mergeFn(n, cloneNodes(nodes))
+		}
+	}
+	return s
+}
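`eachNodeHtml` memoizes parse results per context element name, so a selection of, say, fifty `td` cells parses the HTML string once rather than fifty times. The caching pattern in isolation (`parseFor` is a hypothetical stand-in for `parseHtmlWithContext`; it counts invocations so the cache's effect is observable):

```go
package main

import "fmt"

var parses int

// parseFor stands in for parseHtmlWithContext.
func parseFor(htmlStr, contextName string) []string {
	parses++
	return []string{contextName + ":" + htmlStr}
}

// parseCached returns the cached parse for a context name, parsing only on
// the first request — the same map-keyed-by-nodeName scheme as eachNodeHtml.
func parseCached(cache map[string][]string, htmlStr, name string) []string {
	if nodes, ok := cache[name]; ok {
		return nodes
	}
	nodes := parseFor(htmlStr, name)
	cache[name] = nodes
	return nodes
}

func main() {
	cache := make(map[string][]string)
	for _, name := range []string{"td", "td", "div", "td"} {
		parseCached(cache, "<b>x</b>", name)
	}
	fmt.Println(parses) // 2: one parse per distinct context name
}
```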

vendor/github.com/PuerkitoBio/goquery/property.go

@@ -0,0 +1,275 @@
+package goquery
+
+import (
+	"bytes"
+	"regexp"
+	"strings"
+
+	"golang.org/x/net/html"
+)
+
+var rxClassTrim = regexp.MustCompile("[\t\r\n]")
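`rxClassTrim` normalizes tabs, carriage returns, and newlines inside a class attribute to spaces, so later class matching can treat the attribute as a plain space-delimited list. A minimal sketch of the normalization (the `normalizeClasses` helper name is hypothetical):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as rxClassTrim: any tab, carriage return, or newline inside
// a class attribute becomes a space before matching.
var rxClassTrim = regexp.MustCompile("[\t\r\n]")

// normalizeClasses pads the value with spaces and collapses whitespace
// characters to spaces, as getClassesAndAttr does.
func normalizeClasses(val string) string {
	return rxClassTrim.ReplaceAllString(" "+val+" ", " ")
}

func main() {
	fmt.Printf("%q\n", normalizeClasses("foo\tbar\nbaz")) // " foo bar baz "
}
```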
+
+// Attr gets the specified attribute's value for the first element in the
+// Selection. To get the value for each element individually, use a looping
+// construct such as the Each or Map method.
+func (s *Selection) Attr(attrName string) (val string, exists bool) {
+	if len(s.Nodes) == 0 {
+		return
+	}
+	return getAttributeValue(attrName, s.Nodes[0])
+}
+
+// AttrOr works like Attr but returns default value if attribute is not present.
+func (s *Selection) AttrOr(attrName, defaultValue string) string {
+	if len(s.Nodes) == 0 {
+		return defaultValue
+	}
+
+	val, exists := getAttributeValue(attrName, s.Nodes[0])
+	if !exists {
+		return defaultValue
+	}
+
+	return val
+}
+
+// RemoveAttr removes the named attribute from each element in the set of matched elements.
+func (s *Selection) RemoveAttr(attrName string) *Selection {
+	for _, n := range s.Nodes {
+		removeAttr(n, attrName)
+	}
+
+	return s
+}
+
+// SetAttr sets the given attribute on each element in the set of matched elements.
+func (s *Selection) SetAttr(attrName, val string) *Selection {
+	for _, n := range s.Nodes {
+		attr := getAttributePtr(attrName, n)
+		if attr == nil {
+			n.Attr = append(n.Attr, html.Attribute{Key: attrName, Val: val})
+		} else {
+			attr.Val = val
+		}
+	}
+
+	return s
+}
+
+// Text gets the combined text contents of each element in the set of matched
+// elements, including their descendants.
+func (s *Selection) Text() string {
+	var buf bytes.Buffer
+
+	// Slightly optimized vs calling Each: no single selection object created
+	var f func(*html.Node)
+	f = func(n *html.Node) {
+		if n.Type == html.TextNode {
+			// Keep newlines and spaces, like jQuery
+			buf.WriteString(n.Data)
+		}
+		if n.FirstChild != nil {
+			for c := n.FirstChild; c != nil; c = c.NextSibling {
+				f(c)
+			}
+		}
+	}
+	for _, n := range s.Nodes {
+		f(n)
+	}
+
+	return buf.String()
+}
+
+// Size is an alias for Length.
+func (s *Selection) Size() int {
+	return s.Length()
+}
+
+// Length returns the number of elements in the Selection object.
+func (s *Selection) Length() int {
+	return len(s.Nodes)
+}
+
+// Html gets the HTML contents of the first element in the set of matched
+// elements. It includes text and comment nodes.
+func (s *Selection) Html() (ret string, e error) {
+	// Since there is no .innerHtml, the HTML content must be re-created from
+	// the nodes using html.Render.
+	var buf bytes.Buffer
+
+	if len(s.Nodes) > 0 {
+		for c := s.Nodes[0].FirstChild; c != nil; c = c.NextSibling {
+			e = html.Render(&buf, c)
+			if e != nil {
+				return
+			}
+		}
+		ret = buf.String()
+	}
+
+	return
+}
+
+// AddClass adds the given class(es) to each element in the set of matched elements.
+// Multiple class names can be specified, separated by a space or via multiple arguments.
+func (s *Selection) AddClass(class ...string) *Selection {
+	classStr := strings.TrimSpace(strings.Join(class, " "))
+
+	if classStr == "" {
+		return s
+	}
+
+	tcls := getClassesSlice(classStr)
+	for _, n := range s.Nodes {
+		curClasses, attr := getClassesAndAttr(n, true)
+		for _, newClass := range tcls {
+			if !strings.Contains(curClasses, " "+newClass+" ") {
+				curClasses += newClass + " "
+			}
+		}
+
+		setClasses(n, attr, curClasses)
+	}
+
+	return s
+}
+
+// HasClass determines whether any of the matched elements are assigned the
+// given class.
+func (s *Selection) HasClass(class string) bool {
+	class = " " + class + " "
+	for _, n := range s.Nodes {
+		classes, _ := getClassesAndAttr(n, false)
+		if strings.Contains(classes, class) {
+			return true
+		}
+	}
+	return false
+}
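`HasClass` relies on the invariant that `getClassesAndAttr` returns the class list padded with leading and trailing spaces: padding the needle the same way makes a plain substring test match whole class names only, never prefixes. The trick in isolation (the `hasClass` helper below is a hypothetical reduction, not goquery's API):

```go
package main

import (
	"fmt"
	"strings"
)

// hasClass mirrors Selection.HasClass's space-padding trick: with both the
// class list and the needle wrapped in spaces, strings.Contains can only
// match a complete, space-delimited class name.
func hasClass(classAttr, class string) bool {
	padded := " " + classAttr + " "
	return strings.Contains(padded, " "+class+" ")
}

func main() {
	fmt.Println(hasClass("btn btn-primary", "btn")) // true
	fmt.Println(hasClass("button", "btn"))          // false: no partial match
}
```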
+
+// RemoveClass removes the given class(es) from each element in the set of matched elements.
+// Multiple class names can be specified, separated by a space or via multiple arguments.
+// If no class name is provided, all classes are removed.
+func (s *Selection) RemoveClass(class ...string) *Selection {
+	var rclasses []string
+
+	classStr := strings.TrimSpace(strings.Join(class, " "))
+	remove := classStr == ""
+
+	if !remove {
+		rclasses = getClassesSlice(classStr)
+	}
+
+	for _, n := range s.Nodes {
+		if remove {
+			removeAttr(n, "class")
+		} else {
+			classes, attr := getClassesAndAttr(n, true)
+			for _, rcl := range rclasses {
+				classes = strings.Replace(classes, " "+rcl+" ", " ", -1)
+			}
+
+			setClasses(n, attr, classes)
+		}
+	}
+
+	return s
+}
+
+// ToggleClass adds or removes the given class(es) for each element in the set of matched elements.
+// Multiple class names can be specified, separated by a space or via multiple arguments.
+func (s *Selection) ToggleClass(class ...string) *Selection {
+	classStr := strings.TrimSpace(strings.Join(class, " "))
+
+	if classStr == "" {
+		return s
+	}
+
+	tcls := getClassesSlice(classStr)
+
+	for _, n := range s.Nodes {
+		classes, attr := getClassesAndAttr(n, true)
+		for _, tcl := range tcls {
+			if strings.Contains(classes, " "+tcl+" ") {
+				classes = strings.Replace(classes, " "+tcl+" ", " ", -1)
+			} else {
+				classes += tcl + " "
+			}
+		}
+
+		setClasses(n, attr, classes)
+	}
+
+	return s
+}
+
+func getAttributePtr(attrName string, n *html.Node) *html.Attribute {
+	if n == nil {
+		return nil
+	}
+
+	for i, a := range n.Attr {
+		if a.Key == attrName {
+			return &n.Attr[i]
+		}
+	}
+	return nil
+}
+
+// Private function to get the specified attribute's value from a node.
+func getAttributeValue(attrName string, n *html.Node) (val string, exists bool) {
+	if a := getAttributePtr(attrName, n); a != nil {
+		val = a.Val
+		exists = true
+	}
+	return
+}
+
+// Get and normalize the "class" attribute from the node.
+func getClassesAndAttr(n *html.Node, create bool) (classes string, attr *html.Attribute) {
+	// Applies only to element nodes
+	if n.Type == html.ElementNode {
+		attr = getAttributePtr("class", n)
+		if attr == nil && create {
+			n.Attr = append(n.Attr, html.Attribute{
+				Key: "class",
+				Val: "",
+			})
+			attr = &n.Attr[len(n.Attr)-1]
+		}
+	}
+
+	if attr == nil {
+		classes = " "
+	} else {
+		classes = rxClassTrim.ReplaceAllString(" "+attr.Val+" ", " ")
+	}
+
+	return
+}
+
+func getClassesSlice(classes string) []string {
+	return strings.Split(rxClassTrim.ReplaceAllString(" "+classes+" ", " "), " ")
+}
+
+func removeAttr(n *html.Node, attrName string) {
+	for i, a := range n.Attr {
+		if a.Key == attrName {
+			n.Attr[i], n.Attr[len(n.Attr)-1], n.Attr =
+				n.Attr[len(n.Attr)-1], html.Attribute{}, n.Attr[:len(n.Attr)-1]
+			return
+		}
+	}
+}
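`removeAttr` deletes an attribute with the "swap with last and truncate" idiom: O(1) removal at the cost of element order, with the vacated slot zeroed so the backing array doesn't retain the old value. The same move on a plain struct slice (`attr` is a hypothetical stand-in for `html.Attribute`):

```go
package main

import "fmt"

type attr struct{ Key, Val string }

// removeAttr deletes the first attribute with the given key by overwriting
// it with the last element, zeroing the last slot, and truncating the slice
// by one — exactly the triple assignment used above.
func removeAttr(attrs []attr, key string) []attr {
	for i, a := range attrs {
		if a.Key == key {
			attrs[i], attrs[len(attrs)-1], attrs =
				attrs[len(attrs)-1], attr{}, attrs[:len(attrs)-1]
			break
		}
	}
	return attrs
}

func main() {
	as := []attr{{"id", "x"}, {"class", "a"}, {"href", "#"}}
	as = removeAttr(as, "id")
	fmt.Println(as) // [{href #} {class a}]
}
```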
+
+func setClasses(n *html.Node, attr *html.Attribute, classes string) {
+	classes = strings.TrimSpace(classes)
+	if classes == "" {
+		removeAttr(n, "class")
+		return
+	}
+
+	attr.Val = classes
+}

vendor/github.com/PuerkitoBio/goquery/query.go

@@ -0,0 +1,49 @@
+package goquery
+
+import "golang.org/x/net/html"
+
+// Is checks the current matched set of elements against a selector and
+// returns true if at least one of these elements matches.
+func (s *Selection) Is(selector string) bool {
+	return s.IsMatcher(compileMatcher(selector))
+}
+
+// IsMatcher checks the current matched set of elements against a matcher and
+// returns true if at least one of these elements matches.
+func (s *Selection) IsMatcher(m Matcher) bool {
+	if len(s.Nodes) > 0 {
+		if len(s.Nodes) == 1 {
+			return m.Match(s.Nodes[0])
+		}
+		return len(m.Filter(s.Nodes)) > 0
+	}
+
+	return false
+}
+
+// IsFunction checks the current matched set of elements against a predicate and
+// returns true if at least one of these elements matches.
+func (s *Selection) IsFunction(f func(int, *Selection) bool) bool {
+	return s.FilterFunction(f).Length() > 0
+}
+
+// IsSelection checks the current matched set of elements against a Selection object
+// and returns true if at least one of these elements matches.
+func (s *Selection) IsSelection(sel *Selection) bool {
+	return s.FilterSelection(sel).Length() > 0
+}
+
+// IsNodes checks the current matched set of elements against the specified nodes
+// and returns true if at least one of these elements matches.
+func (s *Selection) IsNodes(nodes ...*html.Node) bool {
+	return s.FilterNodes(nodes...).Length() > 0
+}
+
+// Contains returns true if the specified node is within, at any depth,
+// one of the nodes in the Selection object. It is NOT inclusive, to behave
+// like jQuery's implementation (and unlike JavaScript's .contains), so if
+// the contained node is itself in the selection, it returns false.
+func (s *Selection) Contains(n *html.Node) bool {
+	return sliceContains(s.Nodes, n)
+}

vendor/github.com/PuerkitoBio/goquery/traversal.go

@@ -0,0 +1,704 @@
+package goquery
+
+import "golang.org/x/net/html"
+
+type siblingType int
+
+// Sibling type, used internally when iterating over children at the same
+// level (siblings) to specify which nodes are requested.
+const (
+	siblingPrevUntil siblingType = iota - 3
+	siblingPrevAll
+	siblingPrev
+	siblingAll
+	siblingNext
+	siblingNextAll
+	siblingNextUntil
+	siblingAllIncludingNonElements
+)
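Starting the const block with `iota - 3` centers the enum on zero: the three prev-directed values come out negative and `siblingAll` lands on 0. The idiom in isolation (constant names below are illustrative, not goquery's):

```go
package main

import "fmt"

// A const block beginning with iota minus an offset shifts the whole enum:
// earlier values are negative, the middle value is zero, later ones positive.
const (
	prevUntil = iota - 3 // -3
	prevAll              // -2
	prev                 // -1
	all                  // 0
	next                 // 1
	nextAll              // 2
	nextUntil            // 3
)

func main() {
	fmt.Println(prevUntil, all, nextUntil) // -3 0 3
}
```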
+
+// Find gets the descendants of each element in the current set of matched
+// elements, filtered by a selector. It returns a new Selection object
+// containing these matched elements.
+//
+// Note that as for all methods accepting a selector string, the selector is
+// compiled and applied by the cascadia package and inherits its behavior and
+// constraints regarding supported selectors. See the note on cascadia in
+// the goquery documentation here:
+// https://github.com/PuerkitoBio/goquery?tab=readme-ov-file#api
+func (s *Selection) Find(selector string) *Selection {
+	return pushStack(s, findWithMatcher(s.Nodes, compileMatcher(selector)))
+}
+
+// FindMatcher gets the descendants of each element in the current set of matched
+// elements, filtered by the matcher. It returns a new Selection object
+// containing these matched elements.
+func (s *Selection) FindMatcher(m Matcher) *Selection {
+	return pushStack(s, findWithMatcher(s.Nodes, m))
+}
+
+// FindSelection gets the descendants of each element in the current
+// Selection, filtered by a Selection. It returns a new Selection object
+// containing these matched elements.
+func (s *Selection) FindSelection(sel *Selection) *Selection {
+	if sel == nil {
+		return pushStack(s, nil)
+	}
+	return s.FindNodes(sel.Nodes...)
+}
+
+// FindNodes gets the descendants of each element in the current
+// Selection, filtered by some nodes. It returns a new Selection object
+// containing these matched elements.
+func (s *Selection) FindNodes(nodes ...*html.Node) *Selection {
+	return pushStack(s, mapNodes(nodes, func(i int, n *html.Node) []*html.Node {
+		if sliceContains(s.Nodes, n) {
+			return []*html.Node{n}
+		}
+		return nil
+	}))
+}
+
+// Contents gets the children of each element in the Selection,
+// including text and comment nodes. It returns a new Selection object
+// containing these elements.
+func (s *Selection) Contents() *Selection {
+	return pushStack(s, getChildrenNodes(s.Nodes, siblingAllIncludingNonElements))
+}
+
+// ContentsFiltered gets the children of each element in the Selection,
+// filtered by the specified selector. It returns a new Selection
+// object containing these elements. Since selectors only act on Element nodes,
+// this function is an alias to ChildrenFiltered unless the selector is empty,
+// in which case it is an alias to Contents.
+func (s *Selection) ContentsFiltered(selector string) *Selection {
+	if selector != "" {
+		return s.ChildrenFiltered(selector)
+	}
+	return s.Contents()
+}
+
+// ContentsMatcher gets the children of each element in the Selection,
+// filtered by the specified matcher. It returns a new Selection
+// object containing these elements. Since matchers only act on Element nodes,
+// this function is an alias to ChildrenMatcher.
+func (s *Selection) ContentsMatcher(m Matcher) *Selection {
+	return s.ChildrenMatcher(m)
+}
+
+// Children gets the child elements of each element in the Selection.
+// It returns a new Selection object containing these elements.
+func (s *Selection) Children() *Selection {
+	return pushStack(s, getChildrenNodes(s.Nodes, siblingAll))
+}
+
+// ChildrenFiltered gets the child elements of each element in the Selection,
+// filtered by the specified selector. It returns a new
+// Selection object containing these elements.
+func (s *Selection) ChildrenFiltered(selector string) *Selection {
+	return filterAndPush(s, getChildrenNodes(s.Nodes, siblingAll), compileMatcher(selector))
+}
+
+// ChildrenMatcher gets the child elements of each element in the Selection,
+// filtered by the specified matcher. It returns a new
+// Selection object containing these elements.
+func (s *Selection) ChildrenMatcher(m Matcher) *Selection {
+	return filterAndPush(s, getChildrenNodes(s.Nodes, siblingAll), m)
+}
+
+// Parent gets the parent of each element in the Selection. It returns a
+// new Selection object containing the matched elements.
+func (s *Selection) Parent() *Selection {
+	return pushStack(s, getParentNodes(s.Nodes))
+}
+
+// ParentFiltered gets the parent of each element in the Selection filtered by a
+// selector. It returns a new Selection object containing the matched elements.
+func (s *Selection) ParentFiltered(selector string) *Selection {
+	return filterAndPush(s, getParentNodes(s.Nodes), compileMatcher(selector))
+}
+
+// ParentMatcher gets the parent of each element in the Selection filtered by a
+// matcher. It returns a new Selection object containing the matched elements.
+func (s *Selection) ParentMatcher(m Matcher) *Selection {
+	return filterAndPush(s, getParentNodes(s.Nodes), m)
+}
+
+// Closest gets the first element that matches the selector by testing the
+// element itself and traversing up through its ancestors in the DOM tree.
+func (s *Selection) Closest(selector string) *Selection {
+	cs := compileMatcher(selector)
+	return s.ClosestMatcher(cs)
+}
+
+// ClosestMatcher gets the first element that matches the matcher by testing the
+// element itself and traversing up through its ancestors in the DOM tree.
+func (s *Selection) ClosestMatcher(m Matcher) *Selection {
+	return pushStack(s, mapNodes(s.Nodes, func(i int, n *html.Node) []*html.Node {
+		// For each node in the selection, test the node itself, then each parent
+		// until a match is found.
+		for ; n != nil; n = n.Parent {
+			if m.Match(n) {
+				return []*html.Node{n}
+			}
+		}
+		return nil
+	}))
+}
+
+// ClosestNodes gets the first element that matches one of the nodes by testing the
+// element itself and traversing up through its ancestors in the DOM tree.
+func (s *Selection) ClosestNodes(nodes ...*html.Node) *Selection {
+	set := make(map[*html.Node]bool)
+	for _, n := range nodes {
+		set[n] = true
+	}
+	return pushStack(s, mapNodes(s.Nodes, func(i int, n *html.Node) []*html.Node {
+		// For each node in the selection, test the node itself, then each parent
+		// until a match is found.
+		for ; n != nil; n = n.Parent {
+			if set[n] {
+				return []*html.Node{n}
+			}
+		}
+		return nil
+	}))
+}
+
+// ClosestSelection gets the first element that matches one of the nodes in the
+// Selection by testing the element itself and traversing up through its ancestors
+// in the DOM tree.
+func (s *Selection) ClosestSelection(sel *Selection) *Selection {
+	if sel == nil {
+		return pushStack(s, nil)
+	}
+	return s.ClosestNodes(sel.Nodes...)
+}
+
+// Parents gets the ancestors of each element in the current Selection. It
+// returns a new Selection object with the matched elements.
+func (s *Selection) Parents() *Selection {
+	return pushStack(s, getParentsNodes(s.Nodes, nil, nil))
+}
+
+// ParentsFiltered gets the ancestors of each element in the current
+// Selection. It returns a new Selection object with the matched elements.
+func (s *Selection) ParentsFiltered(selector string) *Selection {
+	return filterAndPush(s, getParentsNodes(s.Nodes, nil, nil), compileMatcher(selector))
+}
+
+// ParentsMatcher gets the ancestors of each element in the current
+// Selection. It returns a new Selection object with the matched elements.
+func (s *Selection) ParentsMatcher(m Matcher) *Selection {
+	return filterAndPush(s, getParentsNodes(s.Nodes, nil, nil), m)
+}
+
+// ParentsUntil gets the ancestors of each element in the Selection, up to but
+// not including the element matched by the selector. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) ParentsUntil(selector string) *Selection {
+	return pushStack(s, getParentsNodes(s.Nodes, compileMatcher(selector), nil))
+}
+
+// ParentsUntilMatcher gets the ancestors of each element in the Selection, up to but
+// not including the element matched by the matcher. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) ParentsUntilMatcher(m Matcher) *Selection {
+	return pushStack(s, getParentsNodes(s.Nodes, m, nil))
+}
+
+// ParentsUntilSelection gets the ancestors of each element in the Selection,
+// up to but not including the elements in the specified Selection. It returns a
+// new Selection object containing the matched elements.
+func (s *Selection) ParentsUntilSelection(sel *Selection) *Selection {
+	if sel == nil {
+		return s.Parents()
+	}
+	return s.ParentsUntilNodes(sel.Nodes...)
+}
+
+// ParentsUntilNodes gets the ancestors of each element in the Selection,
+// up to but not including the specified nodes. It returns a
+// new Selection object containing the matched elements.
+func (s *Selection) ParentsUntilNodes(nodes ...*html.Node) *Selection {
+	return pushStack(s, getParentsNodes(s.Nodes, nil, nodes))
+}
+
+// ParentsFilteredUntil is like ParentsUntil, with the option to filter the
+// results based on a selector string. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) ParentsFilteredUntil(filterSelector, untilSelector string) *Selection {
+	return filterAndPush(s, getParentsNodes(s.Nodes, compileMatcher(untilSelector), nil), compileMatcher(filterSelector))
+}
+
+// ParentsFilteredUntilMatcher is like ParentsUntilMatcher, with the option to filter the
+// results based on a matcher. It returns a new Selection object containing the matched elements.
+func (s *Selection) ParentsFilteredUntilMatcher(filter, until Matcher) *Selection {
+	return filterAndPush(s, getParentsNodes(s.Nodes, until, nil), filter)
+}
+
+// ParentsFilteredUntilSelection is like ParentsUntilSelection, with the
+// option to filter the results based on a selector string. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) ParentsFilteredUntilSelection(filterSelector string, sel *Selection) *Selection {
+	return s.ParentsMatcherUntilSelection(compileMatcher(filterSelector), sel)
+}
+
+// ParentsMatcherUntilSelection is like ParentsUntilSelection, with the
+// option to filter the results based on a matcher. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) ParentsMatcherUntilSelection(filter Matcher, sel *Selection) *Selection {
+	if sel == nil {
+		return s.ParentsMatcher(filter)
+	}
+	return s.ParentsMatcherUntilNodes(filter, sel.Nodes...)
+}
+
+// ParentsFilteredUntilNodes is like ParentsUntilNodes, with the
+// option to filter the results based on a selector string. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) ParentsFilteredUntilNodes(filterSelector string, nodes ...*html.Node) *Selection {
+	return filterAndPush(s, getParentsNodes(s.Nodes, nil, nodes), compileMatcher(filterSelector))
+}
+
+// ParentsMatcherUntilNodes is like ParentsUntilNodes, with the
+// option to filter the results based on a matcher. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) ParentsMatcherUntilNodes(filter Matcher, nodes ...*html.Node) *Selection {
+	return filterAndPush(s, getParentsNodes(s.Nodes, nil, nodes), filter)
+}
+
+// Siblings gets the siblings of each element in the Selection. It returns
+// a new Selection object containing the matched elements.
+func (s *Selection) Siblings() *Selection {
+	return pushStack(s, getSiblingNodes(s.Nodes, siblingAll, nil, nil))
+}
+
+// SiblingsFiltered gets the siblings of each element in the Selection
+// filtered by a selector. It returns a new Selection object containing the
+// matched elements.
+func (s *Selection) SiblingsFiltered(selector string) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingAll, nil, nil), compileMatcher(selector))
+}
+
+// SiblingsMatcher gets the siblings of each element in the Selection
+// filtered by a matcher. It returns a new Selection object containing the
+// matched elements.
+func (s *Selection) SiblingsMatcher(m Matcher) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingAll, nil, nil), m)
+}
+
+// Next gets the immediately following sibling of each element in the
+// Selection. It returns a new Selection object containing the matched elements.
+func (s *Selection) Next() *Selection {
+	return pushStack(s, getSiblingNodes(s.Nodes, siblingNext, nil, nil))
+}
+
+// NextFiltered gets the immediately following sibling of each element in the
+// Selection filtered by a selector. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) NextFiltered(selector string) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNext, nil, nil), compileMatcher(selector))
+}
+
+// NextMatcher gets the immediately following sibling of each element in the
+// Selection filtered by a matcher. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) NextMatcher(m Matcher) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNext, nil, nil), m)
+}
+
+// NextAll gets all the following siblings of each element in the
+// Selection. It returns a new Selection object containing the matched elements.
+func (s *Selection) NextAll() *Selection {
+	return pushStack(s, getSiblingNodes(s.Nodes, siblingNextAll, nil, nil))
+}
+
+// NextAllFiltered gets all the following siblings of each element in the
+// Selection filtered by a selector. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) NextAllFiltered(selector string) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextAll, nil, nil), compileMatcher(selector))
+}
+
+// NextAllMatcher gets all the following siblings of each element in the
+// Selection filtered by a matcher. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) NextAllMatcher(m Matcher) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextAll, nil, nil), m)
+}
+
+// Prev gets the immediately preceding sibling of each element in the
+// Selection. It returns a new Selection object containing the matched elements.
+func (s *Selection) Prev() *Selection {
+	return pushStack(s, getSiblingNodes(s.Nodes, siblingPrev, nil, nil))
+}
+
+// PrevFiltered gets the immediately preceding sibling of each element in the
+// Selection filtered by a selector. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) PrevFiltered(selector string) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrev, nil, nil), compileMatcher(selector))
+}
+
+// PrevMatcher gets the immediately preceding sibling of each element in the
+// Selection filtered by a matcher. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) PrevMatcher(m Matcher) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrev, nil, nil), m)
+}
+
+// PrevAll gets all the preceding siblings of each element in the
+// Selection. It returns a new Selection object containing the matched elements.
+func (s *Selection) PrevAll() *Selection {
+	return pushStack(s, getSiblingNodes(s.Nodes, siblingPrevAll, nil, nil))
+}
+
+// PrevAllFiltered gets all the preceding siblings of each element in the
+// Selection filtered by a selector. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) PrevAllFiltered(selector string) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevAll, nil, nil), compileMatcher(selector))
+}
+
+// PrevAllMatcher gets all the preceding siblings of each element in the
+// Selection filtered by a matcher. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) PrevAllMatcher(m Matcher) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevAll, nil, nil), m)
+}
+
+// NextUntil gets all following siblings of each element up to but not
+// including the element matched by the selector. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) NextUntil(selector string) *Selection {
+	return pushStack(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+		compileMatcher(selector), nil))
+}
+
+// NextUntilMatcher gets all following siblings of each element up to but not
+// including the element matched by the matcher. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) NextUntilMatcher(m Matcher) *Selection {
+	return pushStack(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+		m, nil))
+}
+
+// NextUntilSelection gets all following siblings of each element up to but not
+// including the element matched by the Selection. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) NextUntilSelection(sel *Selection) *Selection {
+	if sel == nil {
+		return s.NextAll()
+	}
+	return s.NextUntilNodes(sel.Nodes...)
+}
+
+// NextUntilNodes gets all following siblings of each element up to but not
+// including the element matched by the nodes. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) NextUntilNodes(nodes ...*html.Node) *Selection {
+	return pushStack(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+		nil, nodes))
+}
+
+// PrevUntil gets all preceding siblings of each element up to but not
+// including the element matched by the selector. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) PrevUntil(selector string) *Selection {
+	return pushStack(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+		compileMatcher(selector), nil))
+}
+
+// PrevUntilMatcher gets all preceding siblings of each element up to but not
+// including the element matched by the matcher. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) PrevUntilMatcher(m Matcher) *Selection {
+	return pushStack(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+		m, nil))
+}
+
+// PrevUntilSelection gets all preceding siblings of each element up to but not
+// including the element matched by the Selection. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) PrevUntilSelection(sel *Selection) *Selection {
+	if sel == nil {
+		return s.PrevAll()
+	}
+	return s.PrevUntilNodes(sel.Nodes...)
+}
+
+// PrevUntilNodes gets all preceding siblings of each element up to but not
+// including the element matched by the nodes. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) PrevUntilNodes(nodes ...*html.Node) *Selection {
+	return pushStack(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+		nil, nodes))
+}
+
+// NextFilteredUntil is like NextUntil, with the option to filter
+// the results based on a selector string.
+// It returns a new Selection object containing the matched elements.
+func (s *Selection) NextFilteredUntil(filterSelector, untilSelector string) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+		compileMatcher(untilSelector), nil), compileMatcher(filterSelector))
+}
+
+// NextFilteredUntilMatcher is like NextUntilMatcher, with the option to filter
+// the results based on a matcher.
+// It returns a new Selection object containing the matched elements.
+func (s *Selection) NextFilteredUntilMatcher(filter, until Matcher) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+		until, nil), filter)
+}
+
+// NextFilteredUntilSelection is like NextUntilSelection, with the
+// option to filter the results based on a selector string. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) NextFilteredUntilSelection(filterSelector string, sel *Selection) *Selection {
+	return s.NextMatcherUntilSelection(compileMatcher(filterSelector), sel)
+}
+
+// NextMatcherUntilSelection is like NextUntilSelection, with the
+// option to filter the results based on a matcher. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) NextMatcherUntilSelection(filter Matcher, sel *Selection) *Selection {
+	if sel == nil {
+		return s.NextMatcher(filter)
+	}
+	return s.NextMatcherUntilNodes(filter, sel.Nodes...)
+}
+
+// NextFilteredUntilNodes is like NextUntilNodes, with the
+// option to filter the results based on a selector string. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) NextFilteredUntilNodes(filterSelector string, nodes ...*html.Node) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+		nil, nodes), compileMatcher(filterSelector))
+}
+
+// NextMatcherUntilNodes is like NextUntilNodes, with the
+// option to filter the results based on a matcher. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) NextMatcherUntilNodes(filter Matcher, nodes ...*html.Node) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+		nil, nodes), filter)
+}
+
+// PrevFilteredUntil is like PrevUntil, with the option to filter
+// the results based on a selector string.
+// It returns a new Selection object containing the matched elements.
+func (s *Selection) PrevFilteredUntil(filterSelector, untilSelector string) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+		compileMatcher(untilSelector), nil), compileMatcher(filterSelector))
+}
+
+// PrevFilteredUntilMatcher is like PrevUntilMatcher, with the option to filter
+// the results based on a matcher.
+// It returns a new Selection object containing the matched elements.
+func (s *Selection) PrevFilteredUntilMatcher(filter, until Matcher) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+		until, nil), filter)
+}
+
+// PrevFilteredUntilSelection is like PrevUntilSelection, with the
+// option to filter the results based on a selector string. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) PrevFilteredUntilSelection(filterSelector string, sel *Selection) *Selection {
+	return s.PrevMatcherUntilSelection(compileMatcher(filterSelector), sel)
+}
+
+// PrevMatcherUntilSelection is like PrevUntilSelection, with the
+// option to filter the results based on a matcher. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) PrevMatcherUntilSelection(filter Matcher, sel *Selection) *Selection {
+	if sel == nil {
+		return s.PrevMatcher(filter)
+	}
+	return s.PrevMatcherUntilNodes(filter, sel.Nodes...)
+}
+
+// PrevFilteredUntilNodes is like PrevUntilNodes, with the
+// option to filter the results based on a selector string. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) PrevFilteredUntilNodes(filterSelector string, nodes ...*html.Node) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+		nil, nodes), compileMatcher(filterSelector))
+}
+
+// PrevMatcherUntilNodes is like PrevUntilNodes, with the
+// option to filter the results based on a matcher. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) PrevMatcherUntilNodes(filter Matcher, nodes ...*html.Node) *Selection {
+	return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+		nil, nodes), filter)
+}
+
+// filterAndPush filters the nodes based on a matcher, and pushes the results
+// on the stack, with the srcSel as previous selection.
+func filterAndPush(srcSel *Selection, nodes []*html.Node, m Matcher) *Selection {
+	// Create a temporary Selection with the specified nodes to filter using winnow
+	sel := &Selection{nodes, srcSel.document, nil}
+	// Filter based on matcher and push on stack
+	return pushStack(srcSel, winnow(sel, m, true))
+}
+
+// Internal implementation of Find that returns raw nodes.
+func findWithMatcher(nodes []*html.Node, m Matcher) []*html.Node {
+	// Map nodes to find the matches within the children of each node
+	return mapNodes(nodes, func(i int, n *html.Node) (result []*html.Node) {
+		// Go down one level, because jQuery's Find selects only within descendants
+		for c := n.FirstChild; c != nil; c = c.NextSibling {
+			if c.Type == html.ElementNode {
+				result = append(result, m.MatchAll(c)...)
+			}
+		}
+		return
+	})
+}
+
+// Internal implementation to get all parent nodes, stopping at the specified
+// node (or nil if no stop).
+func getParentsNodes(nodes []*html.Node, stopm Matcher, stopNodes []*html.Node) []*html.Node {
+	return mapNodes(nodes, func(i int, n *html.Node) (result []*html.Node) {
+		for p := n.Parent; p != nil; p = p.Parent {
+			sel := newSingleSelection(p, nil)
+			if stopm != nil {
+				if sel.IsMatcher(stopm) {
+					break
+				}
+			} else if len(stopNodes) > 0 {
+				if sel.IsNodes(stopNodes...) {
+					break
+				}
+			}
+			if p.Type == html.ElementNode {
+				result = append(result, p)
+			}
+		}
+		return
+	})
+}
+
+// Internal implementation of sibling nodes that returns a raw slice of matches.
+func getSiblingNodes(nodes []*html.Node, st siblingType, untilm Matcher, untilNodes []*html.Node) []*html.Node {
+	var f func(*html.Node) bool
+
+	// If the requested siblings are ...Until, create the test function to
+	// determine if the until condition is reached (returns true if it is)
+	if st == siblingNextUntil || st == siblingPrevUntil {
+		f = func(n *html.Node) bool {
+			if untilm != nil {
+				// Matcher-based condition
+				sel := newSingleSelection(n, nil)
+				return sel.IsMatcher(untilm)
+			} else if len(untilNodes) > 0 {
+				// Nodes-based condition
+				sel := newSingleSelection(n, nil)
+				return sel.IsNodes(untilNodes...)
+			}
+			return false
+		}
+	}
+
+	return mapNodes(nodes, func(i int, n *html.Node) []*html.Node {
+		return getChildrenWithSiblingType(n.Parent, st, n, f)
+	})
+}
+
+// Gets the children nodes of each node in the specified slice of nodes,
+// based on the sibling type request.
+func getChildrenNodes(nodes []*html.Node, st siblingType) []*html.Node {
+	return mapNodes(nodes, func(i int, n *html.Node) []*html.Node {
+		return getChildrenWithSiblingType(n, st, nil, nil)
+	})
+}
+
+// Gets the children of the specified parent, based on the requested sibling
+// type, skipping a specified node if required.
+func getChildrenWithSiblingType(parent *html.Node, st siblingType, skipNode *html.Node,
+	untilFunc func(*html.Node) bool) (result []*html.Node) {
+
+	// Create the iterator function
+	var iter = func(cur *html.Node) (ret *html.Node) {
+		// Based on the sibling type requested, iterate the right way
+		for {
+			switch st {
+			case siblingAll, siblingAllIncludingNonElements:
+				if cur == nil {
+					// First iteration, start with first child of parent
+					// Skip node if required
+					if ret = parent.FirstChild; ret == skipNode && skipNode != nil {
+						ret = skipNode.NextSibling
+					}
+				} else {
+					// Skip node if required
+					if ret = cur.NextSibling; ret == skipNode && skipNode != nil {
+						ret = skipNode.NextSibling
+					}
+				}
+			case siblingPrev, siblingPrevAll, siblingPrevUntil:
+				if cur == nil {
+					// Start with previous sibling of the skip node
+					ret = skipNode.PrevSibling
+				} else {
+					ret = cur.PrevSibling
+				}
+			case siblingNext, siblingNextAll, siblingNextUntil:
+				if cur == nil {
+					// Start with next sibling of the skip node
+					ret = skipNode.NextSibling
+				} else {
+					ret = cur.NextSibling
+				}
+			default:
+				panic("Invalid sibling type.")
+			}
+			if ret == nil || ret.Type == html.ElementNode || st == siblingAllIncludingNonElements {
+				return
+			}
+			// Not a valid node, try again from this one
+			cur = ret
+		}
+	}
+
+	for c := iter(nil); c != nil; c = iter(c) {
+		// If this is an ...Until case, test before append (returns true
+		// if the until condition is reached)
+		if st == siblingNextUntil || st == siblingPrevUntil {
+			if untilFunc(c) {
+				return
+			}
+		}
+		result = append(result, c)
+		if st == siblingNext || st == siblingPrev {
+			// Only one node was requested (immediate next or previous), so exit
+			return
+		}
+	}
+	return
+}
+
+// Internal implementation of parent nodes that returns a raw slice of nodes.
+func getParentNodes(nodes []*html.Node) []*html.Node {
+	return mapNodes(nodes, func(i int, n *html.Node) []*html.Node {
+		if n.Parent != nil && n.Parent.Type == html.ElementNode {
+			return []*html.Node{n.Parent}
+		}
+		return nil
+	})
+}
+
+// Internal map function used by many traversing methods. Takes the source nodes
+// to iterate on and the mapping function that returns an array of nodes.
+// Returns an array of nodes mapped by calling the callback function once for
+// each node in the source nodes.
+func mapNodes(nodes []*html.Node, f func(int, *html.Node) []*html.Node) (result []*html.Node) {
+	set := make(map[*html.Node]bool)
+	for i, n := range nodes {
+		if vals := f(i, n); len(vals) > 0 {
+			result = appendWithoutDuplicates(result, vals, set)
+		}
+	}
+	return result
+}

vendor/github.com/PuerkitoBio/goquery/type.go 🔗

@@ -0,0 +1,203 @@
+package goquery
+
+import (
+	"errors"
+	"io"
+	"net/http"
+	"net/url"
+
+	"github.com/andybalholm/cascadia"
+	"golang.org/x/net/html"
+)
+
+// Document represents an HTML document to be manipulated. Unlike jQuery, which
+// is loaded as part of a DOM document, and thus acts upon its containing
+// document, GoQuery doesn't know which HTML document to act upon. So it needs
+// to be told, and that's what the Document class is for. It holds the root
+// document node to manipulate, and can make selections on this document.
+type Document struct {
+	*Selection
+	Url      *url.URL
+	rootNode *html.Node
+}
+
+// NewDocumentFromNode is a Document constructor that takes a root html Node
+// as argument.
+func NewDocumentFromNode(root *html.Node) *Document {
+	return newDocument(root, nil)
+}
+
+// NewDocument is a Document constructor that takes a string URL as argument.
+// It loads the specified document, parses it, and stores the root Document
+// node, ready to be manipulated.
+//
+// Deprecated: Use the net/http standard library package to make the request
+// and validate the response before calling goquery.NewDocumentFromReader
+// with the response's body.
+func NewDocument(url string) (*Document, error) {
+	// Load the URL
+	res, e := http.Get(url)
+	if e != nil {
+		return nil, e
+	}
+	return NewDocumentFromResponse(res)
+}
+
+// NewDocumentFromReader returns a Document from an io.Reader.
+// It returns an error as second value if the reader's data cannot be parsed
+// as html. It does not check if the reader is also an io.Closer, the
+// provided reader is never closed by this call. It is the responsibility
+// of the caller to close it if required.
+func NewDocumentFromReader(r io.Reader) (*Document, error) {
+	root, e := html.Parse(r)
+	if e != nil {
+		return nil, e
+	}
+	return newDocument(root, nil), nil
+}
+
+// NewDocumentFromResponse is another Document constructor that takes an http response as argument.
+// It loads the specified response's document, parses it, and stores the root Document
+// node, ready to be manipulated. The response's body is closed on return.
+//
+// Deprecated: Use goquery.NewDocumentFromReader with the response's body.
+func NewDocumentFromResponse(res *http.Response) (*Document, error) {
+	if res == nil {
+		return nil, errors.New("Response is nil")
+	}
+	defer res.Body.Close()
+	if res.Request == nil {
+		return nil, errors.New("Response.Request is nil")
+	}
+
+	// Parse the HTML into nodes
+	root, e := html.Parse(res.Body)
+	if e != nil {
+		return nil, e
+	}
+
+	// Create and fill the document
+	return newDocument(root, res.Request.URL), nil
+}
+
+// CloneDocument creates a deep-clone of a document.
+func CloneDocument(doc *Document) *Document {
+	return newDocument(cloneNode(doc.rootNode), doc.Url)
+}
+
+// Private constructor, make sure all fields are correctly filled.
+func newDocument(root *html.Node, url *url.URL) *Document {
+	// Create and fill the document
+	d := &Document{nil, url, root}
+	d.Selection = newSingleSelection(root, d)
+	return d
+}
+
+// Selection represents a collection of nodes matching some criteria. The
+// initial Selection can be created by using Document.Find, and then
+// manipulated using the jQuery-like chainable syntax and methods.
+type Selection struct {
+	Nodes    []*html.Node
+	document *Document
+	prevSel  *Selection
+}
+
+// Helper constructor to create an empty selection
+func newEmptySelection(doc *Document) *Selection {
+	return &Selection{nil, doc, nil}
+}
+
+// Helper constructor to create a selection of only one node
+func newSingleSelection(node *html.Node, doc *Document) *Selection {
+	return &Selection{[]*html.Node{node}, doc, nil}
+}
+
+// Matcher is an interface that defines the methods to match
+// HTML nodes against a compiled selector string. Cascadia's
+// Selector implements this interface.
+type Matcher interface {
+	Match(*html.Node) bool
+	MatchAll(*html.Node) []*html.Node
+	Filter([]*html.Node) []*html.Node
+}
+
+// Single compiles a selector string to a Matcher that stops after the first
+// match is found.
+//
+// By default, Selection.Find and other functions that accept a selector string
+// to select nodes will use all matches corresponding to that selector. By
+// using the Matcher returned by Single, at most the first match will be
+// selected.
+//
+// For example, those two statements are semantically equivalent:
+//
+//     sel1 := doc.Find("a").First()
+//     sel2 := doc.FindMatcher(goquery.Single("a"))
+//
+// The one using Single is optimized to be potentially much faster on large
+// documents.
+//
+// Only the behaviour of the MatchAll method of the Matcher interface is
+// altered compared to standard Matchers. This means that the single-selection
+// property of the Matcher only applies for Selection methods where the Matcher
+// is used to select nodes, not to filter or check if a node matches the
+// Matcher - in those cases, the behaviour of the Matcher is unchanged (e.g.
+// FilterMatcher(Single("div")) will still result in a Selection with multiple
+// "div"s if there were many "div"s in the Selection to begin with).
+func Single(selector string) Matcher {
+	return singleMatcher{compileMatcher(selector)}
+}
+
+// SingleMatcher returns a Matcher that matches the same nodes as m, but stops
+// after the first match is found.
+//
+// See the documentation of function Single for more details.
+func SingleMatcher(m Matcher) Matcher {
+	if _, ok := m.(singleMatcher); ok {
+		// m is already a singleMatcher
+		return m
+	}
+	return singleMatcher{m}
+}
+
+// compileMatcher compiles the selector string s and returns
+// the corresponding Matcher. If s is an invalid selector string,
+// it returns a Matcher that fails all matches.
+func compileMatcher(s string) Matcher {
+	cs, err := cascadia.Compile(s)
+	if err != nil {
+		return invalidMatcher{}
+	}
+	return cs
+}
+
+type singleMatcher struct {
+	Matcher
+}
+
+func (m singleMatcher) MatchAll(n *html.Node) []*html.Node {
+	// Optimized version - stops finding at the first match (cascadia-compiled
+	// matchers all use this code path).
+	if mm, ok := m.Matcher.(interface{ MatchFirst(*html.Node) *html.Node }); ok {
+		node := mm.MatchFirst(n)
+		if node == nil {
+			return nil
+		}
+		return []*html.Node{node}
+	}
+
+	// Fallback version, for e.g. test mocks that don't provide the MatchFirst
+	// method.
+	nodes := m.Matcher.MatchAll(n)
+	if len(nodes) > 0 {
+		return nodes[:1:1]
+	}
+	return nil
+}
+
+// invalidMatcher is a Matcher that always fails to match.
+type invalidMatcher struct{}
+
+func (invalidMatcher) Match(n *html.Node) bool             { return false }
+func (invalidMatcher) MatchAll(n *html.Node) []*html.Node  { return nil }
+func (invalidMatcher) Filter(ns []*html.Node) []*html.Node { return nil }

vendor/github.com/PuerkitoBio/goquery/utilities.go 🔗

@@ -0,0 +1,178 @@
+package goquery
+
+import (
+	"bytes"
+	"io"
+
+	"golang.org/x/net/html"
+)
+
+// used to determine if a set (map[*html.Node]bool) should be used
+// instead of iterating over a slice. The set uses more memory and
+// is slower than slice iteration for small N.
+const minNodesForSet = 1000
+
+var nodeNames = []string{
+	html.ErrorNode:    "#error",
+	html.TextNode:     "#text",
+	html.DocumentNode: "#document",
+	html.CommentNode:  "#comment",
+}
+
+// NodeName returns the node name of the first element in the selection.
+// It tries to behave in a similar way as the DOM's nodeName property
+// (https://developer.mozilla.org/en-US/docs/Web/API/Node/nodeName).
+//
+// Go's net/html package defines the following node types, listed with
+// the corresponding returned value from this function:
+//
+//     ErrorNode : #error
+//     TextNode : #text
+//     DocumentNode : #document
+//     ElementNode : the element's tag name
+//     CommentNode : #comment
+//     DoctypeNode : the name of the document type
+//
+func NodeName(s *Selection) string {
+	if s.Length() == 0 {
+		return ""
+	}
+	return nodeName(s.Get(0))
+}
+
+// nodeName returns the node name of the given html node.
+// See NodeName for additional details on behaviour.
+func nodeName(node *html.Node) string {
+	if node == nil {
+		return ""
+	}
+
+	switch node.Type {
+	case html.ElementNode, html.DoctypeNode:
+		return node.Data
+	default:
+		if int(node.Type) < len(nodeNames) {
+			return nodeNames[node.Type]
+		}
+		return ""
+	}
+}
+
+// Render renders the HTML of the first item in the selection and writes it to
+// the writer. It behaves the same as OuterHtml but writes to w instead of
+// returning the string.
+func Render(w io.Writer, s *Selection) error {
+	if s.Length() == 0 {
+		return nil
+	}
+	n := s.Get(0)
+	return html.Render(w, n)
+}
+
+// OuterHtml returns the outer HTML rendering of the first item in
+// the selection - that is, the HTML including the first element's
+// tag and attributes.
+//
+// Unlike Html, this is a function and not a method on the Selection,
+// because this is not a jQuery method (in javascript-land, this is
+// a property provided by the DOM).
+func OuterHtml(s *Selection) (string, error) {
+	var buf bytes.Buffer
+	if err := Render(&buf, s); err != nil {
+		return "", err
+	}
+	return buf.String(), nil
+}
+
+// Loop through all container nodes to search for the target node.
+func sliceContains(container []*html.Node, contained *html.Node) bool {
+	for _, n := range container {
+		if nodeContains(n, contained) {
+			return true
+		}
+	}
+
+	return false
+}
+
+// Checks if the contained node is within the container node.
+func nodeContains(container *html.Node, contained *html.Node) bool {
+	// Check if the parent of the contained node is the container node, traversing
+	// upward until the top is reached, or the container is found.
+	for contained = contained.Parent; contained != nil; contained = contained.Parent {
+		if container == contained {
+			return true
+		}
+	}
+	return false
+}
+
+// Checks if the target node is in the slice of nodes.
+func isInSlice(slice []*html.Node, node *html.Node) bool {
+	return indexInSlice(slice, node) > -1
+}
+
+// Returns the index of the target node in the slice, or -1.
+func indexInSlice(slice []*html.Node, node *html.Node) int {
+	if node != nil {
+		for i, n := range slice {
+			if n == node {
+				return i
+			}
+		}
+	}
+	return -1
+}
+
+// Appends the new nodes to the target slice, making sure no duplicate is added.
+// There is no check to the original state of the target slice, so it may still
+// contain duplicates. The target slice is returned because append() may create
+// a new underlying array. If targetSet is nil, a local set is created with the
+// target if len(target) + len(nodes) is greater than minNodesForSet.
+func appendWithoutDuplicates(target []*html.Node, nodes []*html.Node, targetSet map[*html.Node]bool) []*html.Node {
+	// if there are not that many nodes, don't use the map, faster to just use nested loops
+	// (unless a non-nil targetSet is passed, in which case the caller knows better).
+	if targetSet == nil && len(target)+len(nodes) < minNodesForSet {
+		for _, n := range nodes {
+			if !isInSlice(target, n) {
+				target = append(target, n)
+			}
+		}
+		return target
+	}
+
+	// if a targetSet is passed, then assume it is reliable, otherwise create one
+	// and initialize it with the current target contents.
+	if targetSet == nil {
+		targetSet = make(map[*html.Node]bool, len(target))
+		for _, n := range target {
+			targetSet[n] = true
+		}
+	}
+	for _, n := range nodes {
+		if !targetSet[n] {
+			target = append(target, n)
+			targetSet[n] = true
+		}
+	}
+
+	return target
+}
+
+// Loop through a selection, returning only those nodes that pass the predicate
+// function.
+func grep(sel *Selection, predicate func(i int, s *Selection) bool) (result []*html.Node) {
+	for i, n := range sel.Nodes {
+		if predicate(i, newSingleSelection(n, sel.document)) {
+			result = append(result, n)
+		}
+	}
+	return result
+}
+
+// Creates a new Selection object based on the specified nodes, and keeps the
+// source Selection object on the stack (linked list).
+func pushStack(fromSel *Selection, nodes []*html.Node) *Selection {
+	result := &Selection{nodes, fromSel.document, fromSel}
+	return result
+}

vendor/github.com/alecthomas/chroma/v2/.editorconfig 🔗

@@ -0,0 +1,17 @@
+root = true
+
+[*]
+indent_style = tab
+end_of_line = lf
+charset = utf-8
+trim_trailing_whitespace = true
+insert_final_newline = true
+
+[*.xml]
+indent_style = space
+indent_size = 2
+insert_final_newline = false
+
+[*.yml]
+indent_style = space
+indent_size = 2

vendor/github.com/alecthomas/chroma/v2/.gitignore 🔗

@@ -0,0 +1,25 @@
+# Binaries for programs and plugins
+.git
+.idea
+.vscode
+.hermit
+*.exe
+*.dll
+*.so
+*.dylib
+/cmd/chroma/chroma
+
+# Test binary, build with `go test -c`
+*.test
+
+# Output of the go coverage tool, specifically when used with LiteIDE
+*.out
+
+# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
+.glide/
+
+_models/
+
+_examples/
+*.min.*
+build/

vendor/github.com/alecthomas/chroma/v2/.golangci.yml 🔗

@@ -0,0 +1,95 @@
+run:
+  tests: true
+  skip-dirs:
+    - _examples
+
+output:
+  print-issued-lines: false
+
+linters:
+  enable-all: true
+  disable:
+    - maligned
+    - megacheck
+    - lll
+    - gocyclo
+    - dupl
+    - gochecknoglobals
+    - funlen
+    - godox
+    - wsl
+    - gomnd
+    - gocognit
+    - goerr113
+    - nolintlint
+    - testpackage
+    - godot
+    - nestif
+    - paralleltest
+    - nlreturn
+    - cyclop
+    - exhaustivestruct
+    - gci
+    - gofumpt
+    - errorlint
+    - exhaustive
+    - ifshort
+    - wrapcheck
+    - stylecheck
+    - thelper
+    - nonamedreturns
+    - revive
+    - dupword
+    - exhaustruct
+    - varnamelen
+    - forcetypeassert
+    - ireturn
+    - maintidx
+    - govet
+    - nosnakecase
+    - testableexamples
+    - musttag
+    - depguard
+    - goconst
+    - perfsprint
+    - mnd
+    - predeclared
+
+linters-settings:
+  govet:
+    check-shadowing: true
+  gocyclo:
+    min-complexity: 10
+  dupl:
+    threshold: 100
+  goconst:
+    min-len: 8
+    min-occurrences: 3
+  forbidigo:
+    #forbid:
+    #  - (Must)?NewLexer$
+    exclude_godoc_examples: false
+
+
+issues:
+  max-per-linter: 0
+  max-same: 0
+  exclude-use-default: false
+  exclude:
+    # Captured by errcheck.
+    - '^(G104|G204):'
+    # Very commonly not checked.
+    - 'Error return value of .(.*\.Help|.*\.MarkFlagRequired|(os\.)?std(out|err)\..*|.*Close|.*Flush|os\.Remove(All)?|.*printf?|os\.(Un)?Setenv). is not checked'
+    - 'exported method (.*\.MarshalJSON|.*\.UnmarshalJSON|.*\.EntityURN|.*\.GoString|.*\.Pos) should have comment or be unexported'
+    - 'composite literal uses unkeyed fields'
+    - 'declaration of "err" shadows declaration'
+    - 'should not use dot imports'
+    - 'Potential file inclusion via variable'
+    - 'should have comment or be unexported'
+    - 'comment on exported var .* should be of the form'
+    - 'at least one file in a package should have a package comment'
+    - 'string literal contains the Unicode'
+    - 'methods on the same type should have the same receiver name'
+    - '_TokenType_name should be _TokenTypeName'
+    - '`_TokenType_map` should be `_TokenTypeMap`'
+    - 'rewrite if-else to switch statement'

vendor/github.com/alecthomas/chroma/v2/.goreleaser.yml 🔗

@@ -0,0 +1,37 @@
+project_name: chroma
+release:
+  github:
+    owner: alecthomas
+    name: chroma
+brews:
+  -
+    install: bin.install "chroma"
+env:
+  - CGO_ENABLED=0
+builds:
+- goos:
+    - linux
+    - darwin
+    - windows
+  goarch:
+    - arm64
+    - amd64
+    - "386"
+  goarm:
+    - "6"
+  dir: ./cmd/chroma
+  main: .
+  ldflags: -s -w -X main.version={{.Version}} -X main.commit={{.Commit}} -X main.date={{.Date}}
+  binary: chroma
+archives:
+  -
+    format: tar.gz
+    name_template: '{{ .Binary }}-{{ .Version }}-{{ .Os }}-{{ .Arch }}{{ if .Arm }}v{{
+    .Arm }}{{ end }}'
+    files:
+      - COPYING
+      - README*
+snapshot:
+  name_template: SNAPSHOT-{{ .Commit }}
+checksum:
+  name_template: '{{ .ProjectName }}-{{ .Version }}-checksums.txt'

vendor/github.com/alecthomas/chroma/v2/Bitfile 🔗

@@ -0,0 +1,24 @@
+VERSION = %(git describe --tags --dirty  --always)%
+export CGOENABLED = 0
+
+tokentype_enumer.go: types.go
+  build: go generate
+
+# Regenerate the list of lexers in the README
+README.md: lexers/*.go lexers/*/*.xml table.py
+  build: ./table.py
+  -clean
+
+implicit %{1}%{2}.min.%{3}: **/*.{css,js}
+  build: esbuild --bundle %{IN} --minify --outfile=%{OUT}
+
+implicit build/%{1}: cmd/*
+  cd cmd/%{1}
+  inputs: cmd/%{1}/**/* **/*.go
+  build: go build -ldflags="-X 'main.version=%{VERSION}'" -o ../../build/%{1} .
+
+#upload: chromad
+#  build:
+#    scp chromad root@swapoff.org:
+#    ssh root@swapoff.org 'install -m755 ./chromad /srv/http/swapoff.org/bin && service chromad restart'
+#    touch upload

vendor/github.com/alecthomas/chroma/v2/COPYING 🔗

@@ -0,0 +1,19 @@
+Copyright (C) 2017 Alec Thomas
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of
+this software and associated documentation files (the "Software"), to deal in
+the Software without restriction, including without limitation the rights to
+use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
+of the Software, and to permit persons to whom the Software is furnished to do
+so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

vendor/github.com/alecthomas/chroma/v2/Makefile 🔗

@@ -0,0 +1,23 @@
+.PHONY: chromad upload all
+
+VERSION ?= $(shell git describe --tags --dirty  --always)
+export GOOS ?= linux
+export GOARCH ?= amd64
+
+all: README.md tokentype_string.go
+
+README.md: lexers/*/*.go
+	./table.py
+
+tokentype_string.go: types.go
+	go generate
+
+chromad:
+	rm -rf build
+	esbuild --bundle cmd/chromad/static/index.js --minify --outfile=cmd/chromad/static/index.min.js
+	esbuild --bundle cmd/chromad/static/index.css --minify --outfile=cmd/chromad/static/index.min.css
+	(export CGOENABLED=0 ; cd ./cmd/chromad && go build -ldflags="-X 'main.version=$(VERSION)'" -o ../../build/chromad .)
+
+upload: build/chromad
+	scp build/chromad root@swapoff.org: && \
+		ssh root@swapoff.org 'install -m755 ./chromad /srv/http/swapoff.org/bin && service chromad restart'

vendor/github.com/alecthomas/chroma/v2/README.md 🔗

@@ -0,0 +1,297 @@
+# Chroma — A general purpose syntax highlighter in pure Go
+
+[![Golang Documentation](https://godoc.org/github.com/alecthomas/chroma?status.svg)](https://godoc.org/github.com/alecthomas/chroma) [![CI](https://github.com/alecthomas/chroma/actions/workflows/ci.yml/badge.svg)](https://github.com/alecthomas/chroma/actions/workflows/ci.yml) [![Slack chat](https://img.shields.io/static/v1?logo=slack&style=flat&label=slack&color=green&message=gophers)](https://invite.slack.golangbridge.org/)
+
+Chroma takes source code and other structured text and converts it into syntax
+highlighted HTML, ANSI-coloured text, etc.
+
+Chroma is based heavily on [Pygments](http://pygments.org/), and includes
+translators for Pygments lexers and styles.
+
+## Table of Contents
+
+<!-- TOC -->
+
+1. [Supported languages](#supported-languages)
+2. [Try it](#try-it)
+3. [Using the library](#using-the-library)
+   1. [Quick start](#quick-start)
+   2. [Identifying the language](#identifying-the-language)
+   3. [Formatting the output](#formatting-the-output)
+   4. [The HTML formatter](#the-html-formatter)
+4. [More detail](#more-detail)
+   1. [Lexers](#lexers)
+   2. [Formatters](#formatters)
+   3. [Styles](#styles)
+5. [Command-line interface](#command-line-interface)
+6. [Testing lexers](#testing-lexers)
+7. [What's missing compared to Pygments?](#whats-missing-compared-to-pygments)
+
+<!-- /TOC -->
+
+## Supported languages
+
+| Prefix | Language                                                                                                                                                                                                                                            |
+| :----: | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+|   A    | ABAP, ABNF, ActionScript, ActionScript 3, Ada, Agda, AL, Alloy, Angular2, ANTLR, ApacheConf, APL, AppleScript, ArangoDB AQL, Arduino, ArmAsm, AutoHotkey, AutoIt, Awk                                                                               |
+|   B    | Ballerina, Bash, Bash Session, Batchfile, BibTeX, Bicep, BlitzBasic, BNF, BQN, Brainfuck                                                                                                                                                            |
+|   C    | C, C#, C++, Caddyfile, Caddyfile Directives, Cap'n Proto, Cassandra CQL, Ceylon, CFEngine3, cfstatement, ChaiScript, Chapel, Cheetah, Clojure, CMake, COBOL, CoffeeScript, Common Lisp, Coq, Crystal, CSS, Cython                                   |
+|   D    | D, Dart, Dax, Desktop Entry, Diff, Django/Jinja, dns, Docker, DTD, Dylan                                                                                                                                                                            |
+|   E    | EBNF, Elixir, Elm, EmacsLisp, Erlang                                                                                                                                                                                                                |
+|   F    | Factor, Fennel, Fish, Forth, Fortran, FortranFixed, FSharp                                                                                                                                                                                          |
+|   G    | GAS, GDScript, Genshi, Genshi HTML, Genshi Text, Gherkin, Gleam, GLSL, Gnuplot, Go, Go HTML Template, Go Text Template, GraphQL, Groff, Groovy                                                                                                      |
+|   H    | Handlebars, Hare, Haskell, Haxe, HCL, Hexdump, HLB, HLSL, HolyC, HTML, HTTP, Hy                                                                                                                                                                     |
+|   I    | Idris, Igor, INI, Io, ISCdhcpd                                                                                                                                                                                                                      |
+|   J    | J, Java, JavaScript, JSON, Jsonnet, Julia, Jungle                                                                                                                                                                                                   |
+|   K    | Kotlin                                                                                                                                                                                                                                              |
+|   L    | Lighttpd configuration file, LLVM, Lua                                                                                                                                                                                                              |
+|   M    | Makefile, Mako, markdown, Mason, Materialize SQL dialect, Mathematica, Matlab, MCFunction, Meson, Metal, MiniZinc, MLIR, Modula-2, MonkeyC, MorrowindScript, Myghty, MySQL                                                                          |
+|   N    | NASM, Natural, Newspeak, Nginx configuration file, Nim, Nix, NSIS                                                                                                                                                                                   |
+|   O    | Objective-C, OCaml, Octave, Odin, OnesEnterprise, OpenEdge ABL, OpenSCAD, Org Mode                                                                                                                                                                  |
+|   P    | PacmanConf, Perl, PHP, PHTML, Pig, PkgConfig, PL/pgSQL, plaintext, Plutus Core, Pony, PostgreSQL SQL dialect, PostScript, POVRay, PowerQuery, PowerShell, Prolog, PromQL, Promela, properties, Protocol Buffer, PRQL, PSL, Puppet, Python, Python 2 |
+|   Q    | QBasic, QML                                                                                                                                                                                                                                         |
+|   R    | R, Racket, Ragel, Raku, react, ReasonML, reg, Rego, reStructuredText, Rexx, RPMSpec, Ruby, Rust                                                                                                                                                     |
+|   S    | SAS, Sass, Scala, Scheme, Scilab, SCSS, Sed, Sieve, Smali, Smalltalk, Smarty, SNBT, Snobol, Solidity, SourcePawn, SPARQL, SQL, SquidConf, Standard ML, stas, Stylus, Svelte, Swift, SYSTEMD, systemverilog                                                |
+|   T    | TableGen, Tal, TASM, Tcl, Tcsh, Termcap, Terminfo, Terraform, TeX, Thrift, TOML, TradingView, Transact-SQL, Turing, Turtle, Twig, TypeScript, TypoScript, TypoScriptCssData, TypoScriptHtmlData, Typst                                              |
+|   V    | V, V shell, Vala, VB.net, verilog, VHDL, VHS, VimL, vue                                                                                                                                                                                             |
+|   W    | WDTE, WebGPU Shading Language, Whiley                                                                                                                                                                                                               |
+|   X    | XML, Xorg                                                                                                                                                                                                                                           |
+|   Y    | YAML, YANG                                                                                                                                                                                                                                          |
+|   Z    | Z80 Assembly, Zed, Zig                                                                                                                                                                                                                              |
+
+_I will attempt to keep this section up to date, but an authoritative list can be
+displayed with `chroma --list`._
+
+## Try it
+
+Try out various languages and styles on the [Chroma Playground](https://swapoff.org/chroma/playground/).
+
+## Using the library
+
+This is version 2 of Chroma; use the import path:
+
+```go
+import "github.com/alecthomas/chroma/v2"
+```
+
+Chroma, like Pygments, has the concepts of
+[lexers](https://github.com/alecthomas/chroma/tree/master/lexers),
+[formatters](https://github.com/alecthomas/chroma/tree/master/formatters) and
+[styles](https://github.com/alecthomas/chroma/tree/master/styles).
+
+Lexers convert source text into a stream of tokens, styles specify how token
+types are mapped to colours, and formatters convert tokens and styles into
+formatted output.
+
+A package exists for each of these concepts, containing a global `Registry`
+variable with all of the registered implementations. Each package also provides
+helper functions for using the registry, such as looking up lexers by name or
+matching filenames.
+
+In all cases, if a lexer, formatter or style cannot be determined, `nil` will
+be returned. In this situation you may want to default to the `Fallback`
+value in each respective package, which provides sane defaults.
+
+### Quick start
+
+A convenience function exists to format source text with minimal effort:
+
+```go
+err := quick.Highlight(os.Stdout, someSourceCode, "go", "html", "monokai")
+```
+
+### Identifying the language
+
+To highlight code, you'll first have to identify what language the code is
+written in. There are three primary ways to do that:
+
+1. Detect the language from its filename.
+
+   ```go
+   lexer := lexers.Match("foo.go")
+   ```
+
+2. Explicitly specify the language by its Chroma syntax ID (a full list is available from `lexers.Names()`).
+
+   ```go
+   lexer := lexers.Get("go")
+   ```
+
+3. Detect the language from its content.
+
+   ```go
+   lexer := lexers.Analyse("package main\n\nfunc main()\n{\n}\n")
+   ```
+
+In all cases, `nil` will be returned if the language cannot be identified.
+
+```go
+if lexer == nil {
+  lexer = lexers.Fallback
+}
+```
+
+Note that some lexers can be extremely chatty. To mitigate this, you can use
+the coalescing lexer to coalesce runs of identical token types into a single
+token:
+
+```go
+lexer = chroma.Coalesce(lexer)
+```
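Under the hood, coalescing is just run-length merging of adjacent tokens that share a type. A self-contained sketch of the technique, using a simplified token type rather than chroma's actual API:

```go
package main

import "fmt"

// token is a simplified stand-in for chroma's Token type.
type token struct {
	typ   string
	value string
}

// coalesce merges runs of adjacent tokens that share the same type.
func coalesce(in []token) []token {
	var out []token
	for _, t := range in {
		if n := len(out); n > 0 && out[n-1].typ == t.typ {
			out[n-1].value += t.value
			continue
		}
		out = append(out, t)
	}
	return out
}

func main() {
	toks := []token{{"Text", "a"}, {"Text", "b"}, {"Keyword", "if"}, {"Text", "c"}}
	fmt.Println(coalesce(toks)) // [{Text ab} {Keyword if} {Text c}]
}
```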
+
+### Formatting the output
+
+Once a language is identified you will need to pick a formatter and a style (theme).
+
+```go
+style := styles.Get("swapoff")
+if style == nil {
+  style = styles.Fallback
+}
+formatter := formatters.Get("html")
+if formatter == nil {
+  formatter = formatters.Fallback
+}
+```
+
+Then obtain an iterator over the tokens:
+
+```go
+contents, err := io.ReadAll(r)
+iterator, err := lexer.Tokenise(nil, string(contents))
+```
+
+And finally, format the tokens from the iterator:
+
+```go
+err := formatter.Format(w, style, iterator)
+```
+
+### The HTML formatter
+
+By default the `html` registered formatter generates standalone HTML with
+embedded CSS. More flexibility is available through the `formatters/html` package.
+
+Firstly, the output generated by the formatter can be customised with the
+following constructor options:
+
+- `Standalone()` - generate standalone HTML with embedded CSS.
+- `WithClasses()` - use classes rather than inlined style attributes.
+- `ClassPrefix(prefix)` - prefix each generated CSS class.
+- `TabWidth(width)` - Set the rendered tab width, in characters.
+- `WithLineNumbers()` - Render line numbers (style with `LineNumbers`).
+- `WithLinkableLineNumbers()` - Make each line number an anchor that links to itself.
+- `HighlightLines(ranges)` - Highlight lines in these ranges (style with `LineHighlight`).
+- `LineNumbersInTable()` - Use a table for formatting line numbers and code, rather than spans.
+
+If `WithClasses()` is used, the corresponding CSS can be obtained from the formatter with:
+
+```go
+formatter := html.New(html.WithClasses(true))
+err := formatter.WriteCSS(w, style)
+```
+
+## More detail
+
+### Lexers
+
+See the [Pygments documentation](http://pygments.org/docs/lexerdevelopment/)
+for details on implementing lexers. Most concepts apply directly to Chroma,
+but see existing lexer implementations for real examples.
+
+In many cases lexers can be automatically converted directly from Pygments by
+using the included Python 3 script `pygments2chroma_xml.py`. I use something like
+the following:
+
+```sh
+python3 _tools/pygments2chroma_xml.py \
+  pygments.lexers.jvm.KotlinLexer \
+  > lexers/embedded/kotlin.xml
+```
+
+See notes in [pygments-lexers.txt](https://github.com/alecthomas/chroma/blob/master/pygments-lexers.txt)
+for a list of lexers, and notes on some of the issues importing them.
+
+### Formatters
+
+Chroma supports HTML output, as well as terminal output in 8-colour, 256-colour, and true-colour modes.
+
+A `noop` formatter is included that outputs the token text only, and a `tokens`
+formatter outputs raw tokens. The latter is useful for debugging lexers.
+
+### Styles
+
+Chroma styles are defined in XML. The style entries use the
+[same syntax](http://pygments.org/docs/styles/) as Pygments.
+
+All Pygments styles have been converted to Chroma using the `_tools/style.py`
+script.
+
+When you work with one of [Chroma's styles](https://github.com/alecthomas/chroma/tree/master/styles),
+know that the `Background` token type provides the default style for tokens. It does so
+by defining a foreground color and background color.
+
+For example, this gives each token type not defined in the style a default color
+of `#f8f8f2` and uses `#000000` for the highlighted code block's background:
+
+```xml
+<entry type="Background" style="#f8f8f2 bg:#000000"/>
+```
+
+Also, token types in a style file are hierarchical. For instance, when `CommentSpecial` is not defined, Chroma uses the token style from `Comment`. So when several comment token types share a colour, you only need to define `Comment` and override the ones that differ.
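As an illustration (a hypothetical style fragment, not one of the shipped styles), colouring all comments grey while italicising only preprocessor comments needs just two comment entries:

```xml
<style name="example">
  <entry type="Background" style="#f8f8f2 bg:#272822"/>
  <!-- All Comment* token types inherit this entry unless overridden. -->
  <entry type="Comment" style="#75715e"/>
  <!-- Only preprocessor comments differ. -->
  <entry type="CommentPreproc" style="italic #75715e"/>
</style>
```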
+
+For a quick overview of the available styles and how they look, check out the [Chroma Style Gallery](https://xyproto.github.io/splash/docs/).
+
+## Command-line interface
+
+A command-line interface to Chroma is included.
+
+Binaries are available to install from [the releases page](https://github.com/alecthomas/chroma/releases).
+
+The CLI can be used as a preprocessor to colorise the output of `less(1)`;
+see the documentation for the `LESSOPEN` environment variable.
+
+The `--fail` flag can be used to suppress output and return exit status 1,
+making it easy to fall back to some other preprocessor when chroma cannot
+resolve a lexer for the given file. For example:
+
+```shell
+export LESSOPEN='| p() { chroma --fail "$1" || cat "$1"; }; p "%s"'
+```
+
+Replace `cat` with your favourite fallback preprocessor.
+
+When invoked as `.lessfilter`, the `--fail` flag is automatically turned
+on under the hood for easy integration with [lesspipe shipping with
+Debian and derivatives](https://manpages.debian.org/lesspipe#USER_DEFINED_FILTERS);
+for that setup the `chroma` executable can be just symlinked to `~/.lessfilter`.
+
+## Testing lexers
+
+If you edit some lexers and want to try them, open a shell in `cmd/chromad` and run:
+
+```shell
+go run . --csrf-key=securekey
+```
+
+A link will be printed. Open it in your browser to test your local changes on the Playground.
+
+If you want to run the tests for the lexers, open a shell in the root directory and run:
+
+```shell
+go test ./lexers
+```
+
+When updating or adding a lexer, please add tests. See [lexers/README.md](lexers/README.md) for more.
+
+## What's missing compared to Pygments?
+
+- Quite a few lexers, for various reasons (pull-requests welcome):
+  - Pygments lexers for complex languages often include custom code to
+    handle certain aspects, such as Raku's ability to nest code inside
+    regular expressions. These require time and effort to convert.
+  - I mostly only converted languages I had heard of, to reduce the porting cost.
+- Some more esoteric features of Pygments are omitted for simplicity.
+- Though the Chroma API supports content detection, very few lexers implement it.
+  I have plans to implement a statistical analyser at some point, but not enough time.

vendor/github.com/alecthomas/chroma/v2/coalesce.go 🔗

@@ -0,0 +1,35 @@
+package chroma
+
+// Coalesce is a Lexer interceptor that collapses runs of common types into a single token.
+func Coalesce(lexer Lexer) Lexer { return &coalescer{lexer} }
+
+type coalescer struct{ Lexer }
+
+func (d *coalescer) Tokenise(options *TokeniseOptions, text string) (Iterator, error) {
+	var prev Token
+	it, err := d.Lexer.Tokenise(options, text)
+	if err != nil {
+		return nil, err
+	}
+	return func() Token {
+		for token := it(); token != (EOF); token = it() {
+			if len(token.Value) == 0 {
+				continue
+			}
+			if prev == EOF {
+				prev = token
+			} else {
+				if prev.Type == token.Type && len(prev.Value) < 8192 {
+					prev.Value += token.Value
+				} else {
+					out := prev
+					prev = token
+					return out
+				}
+			}
+		}
+		out := prev
+		prev = EOF
+		return out
+	}, nil
+}

vendor/github.com/alecthomas/chroma/v2/colour.go 🔗

@@ -0,0 +1,192 @@
+package chroma
+
+import (
+	"fmt"
+	"math"
+	"strconv"
+	"strings"
+)
+
+// ANSI2RGB maps ANSI colour names, as supported by Chroma, to hex RGB values.
+var ANSI2RGB = map[string]string{
+	"#ansiblack":     "000000",
+	"#ansidarkred":   "7f0000",
+	"#ansidarkgreen": "007f00",
+	"#ansibrown":     "7f7fe0",
+	"#ansidarkblue":  "00007f",
+	"#ansipurple":    "7f007f",
+	"#ansiteal":      "007f7f",
+	"#ansilightgray": "e5e5e5",
+	// Normal
+	"#ansidarkgray":  "555555",
+	"#ansired":       "ff0000",
+	"#ansigreen":     "00ff00",
+	"#ansiyellow":    "ffff00",
+	"#ansiblue":      "0000ff",
+	"#ansifuchsia":   "ff00ff",
+	"#ansiturquoise": "00ffff",
+	"#ansiwhite":     "ffffff",
+
+	// Aliases without the "ansi" prefix, because...why?
+	"#black":     "000000",
+	"#darkred":   "7f0000",
+	"#darkgreen": "007f00",
+	"#brown":     "7f7fe0",
+	"#darkblue":  "00007f",
+	"#purple":    "7f007f",
+	"#teal":      "007f7f",
+	"#lightgray": "e5e5e5",
+	// Normal
+	"#darkgray":  "555555",
+	"#red":       "ff0000",
+	"#green":     "00ff00",
+	"#yellow":    "ffff00",
+	"#blue":      "0000ff",
+	"#fuchsia":   "ff00ff",
+	"#turquoise": "00ffff",
+	"#white":     "ffffff",
+}
+
+// Colour represents an RGB colour.
+type Colour int32
+
+// NewColour creates a Colour directly from RGB values.
+func NewColour(r, g, b uint8) Colour {
+	return ParseColour(fmt.Sprintf("%02x%02x%02x", r, g, b))
+}
+
+// Distance between this colour and another.
+//
+// This uses the approach described here (https://www.compuphase.com/cmetric.htm).
+// This is not as accurate as LAB, et. al. but is *vastly* simpler and sufficient for our needs.
+func (c Colour) Distance(e2 Colour) float64 {
+	ar, ag, ab := int64(c.Red()), int64(c.Green()), int64(c.Blue())
+	br, bg, bb := int64(e2.Red()), int64(e2.Green()), int64(e2.Blue())
+	rmean := (ar + br) / 2
+	r := ar - br
+	g := ag - bg
+	b := ab - bb
+	return math.Sqrt(float64((((512 + rmean) * r * r) >> 8) + 4*g*g + (((767 - rmean) * b * b) >> 8)))
+}
+
+// Brighten returns a copy of this colour with its brightness adjusted.
+//
+// If factor is negative, the colour is darkened.
+//
+// Uses approach described here (http://www.pvladov.com/2012/09/make-color-lighter-or-darker.html).
+func (c Colour) Brighten(factor float64) Colour {
+	r := float64(c.Red())
+	g := float64(c.Green())
+	b := float64(c.Blue())
+
+	if factor < 0 {
+		factor++
+		r *= factor
+		g *= factor
+		b *= factor
+	} else {
+		r = (255-r)*factor + r
+		g = (255-g)*factor + g
+		b = (255-b)*factor + b
+	}
+	return NewColour(uint8(r), uint8(g), uint8(b))
+}
+
+// BrightenOrDarken brightens a colour if it is < 0.5 brightness or darkens if > 0.5 brightness.
+func (c Colour) BrightenOrDarken(factor float64) Colour {
+	if c.Brightness() < 0.5 {
+		return c.Brighten(factor)
+	}
+	return c.Brighten(-factor)
+}
+
+// ClampBrightness returns a copy of this colour with its brightness adjusted such that
+// it falls within the range [min, max] (or very close to it due to rounding errors).
+// The supplied values use the same [0.0, 1.0] range as Brightness.
+func (c Colour) ClampBrightness(min, max float64) Colour {
+	if !c.IsSet() {
+		return c
+	}
+
+	min = math.Max(min, 0)
+	max = math.Min(max, 1)
+	current := c.Brightness()
+	target := math.Min(math.Max(current, min), max)
+	if current == target {
+		return c
+	}
+
+	r := float64(c.Red())
+	g := float64(c.Green())
+	b := float64(c.Blue())
+	rgb := r + g + b
+	if target > current {
+		// Solve for x: target == ((255-r)*x + r + (255-g)*x + g + (255-b)*x + b) / 255 / 3
+		return c.Brighten((target*255*3 - rgb) / (255*3 - rgb))
+	}
+	// Solve for x: target == (r*(x+1) + g*(x+1) + b*(x+1)) / 255 / 3
+	return c.Brighten((target*255*3)/rgb - 1)
+}
+
+// Brightness of the colour (roughly) in the range 0.0 to 1.0.
+func (c Colour) Brightness() float64 {
+	return (float64(c.Red()) + float64(c.Green()) + float64(c.Blue())) / 255.0 / 3.0
+}
+
+// ParseColour in the forms #rgb, #rrggbb, #ansi<colour>, or #<colour>.
+// Will return an "unset" colour if invalid.
+func ParseColour(colour string) Colour {
+	colour = normaliseColour(colour)
+	n, err := strconv.ParseUint(colour, 16, 32)
+	if err != nil {
+		return 0
+	}
+	return Colour(n + 1) //nolint:gosec
+}
+
+// MustParseColour is like ParseColour except it panics if the colour is invalid.
+//
+// Will panic if colour is in an invalid format.
+func MustParseColour(colour string) Colour {
+	parsed := ParseColour(colour)
+	if !parsed.IsSet() {
+		panic(fmt.Errorf("invalid colour %q", colour))
+	}
+	return parsed
+}
+
+// IsSet returns true if the colour is set.
+func (c Colour) IsSet() bool { return c != 0 }
+
+func (c Colour) String() string   { return fmt.Sprintf("#%06x", int(c-1)) }
+func (c Colour) GoString() string { return fmt.Sprintf("Colour(0x%06x)", int(c-1)) }
+
+// Red component of colour.
+func (c Colour) Red() uint8 { return uint8(((c - 1) >> 16) & 0xff) } //nolint:gosec
+
+// Green component of colour.
+func (c Colour) Green() uint8 { return uint8(((c - 1) >> 8) & 0xff) } //nolint:gosec
+
+// Blue component of colour.
+func (c Colour) Blue() uint8 { return uint8((c - 1) & 0xff) } //nolint:gosec
+
+// Colours is an orderable set of colours.
+type Colours []Colour
+
+func (c Colours) Len() int           { return len(c) }
+func (c Colours) Swap(i, j int)      { c[i], c[j] = c[j], c[i] }
+func (c Colours) Less(i, j int) bool { return c[i] < c[j] }
+
+// Convert colours to #rrggbb.
+func normaliseColour(colour string) string {
+	if ansi, ok := ANSI2RGB[colour]; ok {
+		return ansi
+	}
+	if strings.HasPrefix(colour, "#") {
+		colour = colour[1:]
+		if len(colour) == 3 {
+			return colour[0:1] + colour[0:1] + colour[1:2] + colour[1:2] + colour[2:3] + colour[2:3]
+		}
+	}
+	return colour
+}

vendor/github.com/alecthomas/chroma/v2/delegate.go 🔗

@@ -0,0 +1,152 @@
+package chroma
+
+import (
+	"bytes"
+)
+
+type delegatingLexer struct {
+	root     Lexer
+	language Lexer
+}
+
+// DelegatingLexer combines two lexers to handle the common case of a language embedded inside another, such as PHP
+// inside HTML or PHP inside plain text.
+//
+// It takes two lexers as arguments: a root lexer and a language lexer.  First everything is scanned using the language
+// lexer, which must return "Other" for unrecognised tokens. Then all "Other" tokens are lexed using the root lexer.
+// Finally, these two sets of tokens are merged.
+//
+// The lexers from the template lexer package use this base lexer.
+func DelegatingLexer(root Lexer, language Lexer) Lexer {
+	return &delegatingLexer{
+		root:     root,
+		language: language,
+	}
+}
+
+func (d *delegatingLexer) AnalyseText(text string) float32 {
+	return d.root.AnalyseText(text)
+}
+
+func (d *delegatingLexer) SetAnalyser(analyser func(text string) float32) Lexer {
+	d.root.SetAnalyser(analyser)
+	return d
+}
+
+func (d *delegatingLexer) SetRegistry(r *LexerRegistry) Lexer {
+	d.root.SetRegistry(r)
+	d.language.SetRegistry(r)
+	return d
+}
+
+func (d *delegatingLexer) Config() *Config {
+	return d.language.Config()
+}
+
+// An insertion is the character range where language tokens should be inserted.
+type insertion struct {
+	start, end int
+	tokens     []Token
+}
+
+func (d *delegatingLexer) Tokenise(options *TokeniseOptions, text string) (Iterator, error) { // nolint: gocognit
+	tokens, err := Tokenise(Coalesce(d.language), options, text)
+	if err != nil {
+		return nil, err
+	}
+	// Compute insertions and gather "Other" tokens.
+	others := &bytes.Buffer{}
+	insertions := []*insertion{}
+	var insert *insertion
+	offset := 0
+	var last Token
+	for _, t := range tokens {
+		if t.Type == Other {
+			if last != EOF && insert != nil && last.Type != Other {
+				insert.end = offset
+			}
+			others.WriteString(t.Value)
+		} else {
+			if last == EOF || last.Type == Other {
+				insert = &insertion{start: offset}
+				insertions = append(insertions, insert)
+			}
+			insert.tokens = append(insert.tokens, t)
+		}
+		last = t
+		offset += len(t.Value)
+	}
+
+	if len(insertions) == 0 {
+		return d.root.Tokenise(options, text)
+	}
+
+	// Lex the other tokens.
+	rootTokens, err := Tokenise(Coalesce(d.root), options, others.String())
+	if err != nil {
+		return nil, err
+	}
+
+	// Interleave the two sets of tokens.
+	var out []Token
+	offset = 0 // Offset into text.
+	tokenIndex := 0
+	nextToken := func() Token {
+		if tokenIndex >= len(rootTokens) {
+			return EOF
+		}
+		t := rootTokens[tokenIndex]
+		tokenIndex++
+		return t
+	}
+	insertionIndex := 0
+	nextInsertion := func() *insertion {
+		if insertionIndex >= len(insertions) {
+			return nil
+		}
+		i := insertions[insertionIndex]
+		insertionIndex++
+		return i
+	}
+	t := nextToken()
+	i := nextInsertion()
+	for t != EOF || i != nil {
+		// fmt.Printf("%d->%d:%q   %d->%d:%q\n", offset, offset+len(t.Value), t.Value, i.start, i.end, Stringify(i.tokens...))
+		if t == EOF || (i != nil && i.start < offset+len(t.Value)) {
+			var l Token
+			l, t = splitToken(t, i.start-offset)
+			if l != EOF {
+				out = append(out, l)
+				offset += len(l.Value)
+			}
+			out = append(out, i.tokens...)
+			offset += i.end - i.start
+			if t == EOF {
+				t = nextToken()
+			}
+			i = nextInsertion()
+		} else {
+			out = append(out, t)
+			offset += len(t.Value)
+			t = nextToken()
+		}
+	}
+	return Literator(out...), nil
+}
+
+func splitToken(t Token, offset int) (l Token, r Token) {
+	if t == EOF {
+		return EOF, EOF
+	}
+	if offset == 0 {
+		return EOF, t
+	}
+	if offset == len(t.Value) {
+		return t, EOF
+	}
+	l = t.Clone()
+	r = t.Clone()
+	l.Value = l.Value[:offset]
+	r.Value = r.Value[offset:]
+	return
+}

vendor/github.com/alecthomas/chroma/v2/doc.go 🔗

@@ -0,0 +1,7 @@
+// Package chroma takes source code and other structured text and converts it into syntax highlighted HTML, ANSI-
+// coloured text, etc.
+//
+// Chroma is based heavily on Pygments, and includes translators for Pygments lexers and styles.
+//
+// For more information, go here: https://github.com/alecthomas/chroma
+package chroma

vendor/github.com/alecthomas/chroma/v2/emitters.go 🔗

@@ -0,0 +1,218 @@
+package chroma
+
+import (
+	"fmt"
+)
+
+// An Emitter takes group matches and returns tokens.
+type Emitter interface {
+	// Emit tokens for the given regex groups.
+	Emit(groups []string, state *LexerState) Iterator
+}
+
+// SerialisableEmitter is an Emitter that can be serialised and deserialised to/from JSON.
+type SerialisableEmitter interface {
+	Emitter
+	EmitterKind() string
+}
+
+// EmitterFunc is a function that is an Emitter.
+type EmitterFunc func(groups []string, state *LexerState) Iterator
+
+// Emit tokens for groups.
+func (e EmitterFunc) Emit(groups []string, state *LexerState) Iterator {
+	return e(groups, state)
+}
+
+type Emitters []Emitter
+
+type byGroupsEmitter struct {
+	Emitters
+}
+
+// ByGroups emits a token for each matching group in the rule's regex.
+func ByGroups(emitters ...Emitter) Emitter {
+	return &byGroupsEmitter{Emitters: emitters}
+}
+
+func (b *byGroupsEmitter) EmitterKind() string { return "bygroups" }
+
+func (b *byGroupsEmitter) Emit(groups []string, state *LexerState) Iterator {
+	iterators := make([]Iterator, 0, len(groups)-1)
+	if len(b.Emitters) != len(groups)-1 {
+		iterators = append(iterators, Error.Emit(groups, state))
+		// panic(errors.Errorf("number of groups %q does not match number of emitters %v", groups, emitters))
+	} else {
+		for i, group := range groups[1:] {
+			if b.Emitters[i] != nil {
+				iterators = append(iterators, b.Emitters[i].Emit([]string{group}, state))
+			}
+		}
+	}
+	return Concaterator(iterators...)
+}
+
+// ByGroupNames emits a token for each named matching group in the rule's regex.
+func ByGroupNames(emitters map[string]Emitter) Emitter {
+	return EmitterFunc(func(groups []string, state *LexerState) Iterator {
+		iterators := make([]Iterator, 0, len(state.NamedGroups)-1)
+		if len(state.NamedGroups)-1 == 0 {
+			if emitter, ok := emitters[`0`]; ok {
+				iterators = append(iterators, emitter.Emit(groups, state))
+			} else {
+				iterators = append(iterators, Error.Emit(groups, state))
+			}
+		} else {
+			ruleRegex := state.Rules[state.State][state.Rule].Regexp
+			for i := 1; i < len(state.NamedGroups); i++ {
+				groupName := ruleRegex.GroupNameFromNumber(i)
+				group := state.NamedGroups[groupName]
+				if emitter, ok := emitters[groupName]; ok {
+					if emitter != nil {
+						iterators = append(iterators, emitter.Emit([]string{group}, state))
+					}
+				} else {
+					iterators = append(iterators, Error.Emit([]string{group}, state))
+				}
+			}
+		}
+		return Concaterator(iterators...)
+	})
+}
+
+// UsingByGroup emits tokens for the matched groups in the regex using a
+// sublexer. Used when lexing code blocks where the name of a sublexer is
+// contained within the block, for example on a Markdown text block or SQL
+// language block.
+//
+// An attempt to load the sublexer will be made using the captured value from
+// the text of the matched sublexerNameGroup. If a sublexer matching the
+// sublexerNameGroup is available, then tokens for the matched codeGroup will
+// be emitted using the sublexer. Otherwise, if no sublexer is available, then
+// tokens will be emitted from the passed emitter.
+//
+// Example:
+//
+//	var Markdown = internal.Register(MustNewLexer(
+//		&Config{
+//			Name:      "markdown",
+//			Aliases:   []string{"md", "mkd"},
+//			Filenames: []string{"*.md", "*.mkd", "*.markdown"},
+//			MimeTypes: []string{"text/x-markdown"},
+//		},
+//		Rules{
+//			"root": {
+//				{"^(```)(\\w+)(\\n)([\\w\\W]*?)(^```$)",
+//					UsingByGroup(
+//						2, 4,
+//						String, String, String, Text, String,
+//					),
+//					nil,
+//				},
+//			},
+//		},
+//	))
+//
+// See the lexers/markdown.go for the complete example.
+//
+// Note: panics if the number of emitters does not equal the number of matched
+// groups in the regex.
+func UsingByGroup(sublexerNameGroup, codeGroup int, emitters ...Emitter) Emitter {
+	return &usingByGroup{
+		SublexerNameGroup: sublexerNameGroup,
+		CodeGroup:         codeGroup,
+		Emitters:          emitters,
+	}
+}
+
+type usingByGroup struct {
+	SublexerNameGroup int      `xml:"sublexer_name_group"`
+	CodeGroup         int      `xml:"code_group"`
+	Emitters          Emitters `xml:"emitters"`
+}
+
+func (u *usingByGroup) EmitterKind() string { return "usingbygroup" }
+func (u *usingByGroup) Emit(groups []string, state *LexerState) Iterator {
+	// bounds check
+	if len(u.Emitters) != len(groups)-1 {
+		panic("UsingByGroup expects number of emitters to be the same as len(groups)-1")
+	}
+
+	// grab sublexer
+	sublexer := state.Registry.Get(groups[u.SublexerNameGroup])
+
+	// build iterators
+	iterators := make([]Iterator, len(groups)-1)
+	for i, group := range groups[1:] {
+		if i == u.CodeGroup-1 && sublexer != nil {
+			var err error
+			iterators[i], err = sublexer.Tokenise(nil, groups[u.CodeGroup])
+			if err != nil {
+				panic(err)
+			}
+		} else if u.Emitters[i] != nil {
+			iterators[i] = u.Emitters[i].Emit([]string{group}, state)
+		}
+	}
+	return Concaterator(iterators...)
+}
+
+// UsingLexer returns an Emitter that uses a given Lexer for parsing and emitting.
+//
+// This Emitter is not serialisable.
+func UsingLexer(lexer Lexer) Emitter {
+	return EmitterFunc(func(groups []string, _ *LexerState) Iterator {
+		it, err := lexer.Tokenise(&TokeniseOptions{State: "root", Nested: true}, groups[0])
+		if err != nil {
+			panic(err)
+		}
+		return it
+	})
+}
+
+type usingEmitter struct {
+	Lexer string `xml:"lexer,attr"`
+}
+
+func (u *usingEmitter) EmitterKind() string { return "using" }
+
+func (u *usingEmitter) Emit(groups []string, state *LexerState) Iterator {
+	if state.Registry == nil {
+		panic(fmt.Sprintf("no LexerRegistry available for Using(%q)", u.Lexer))
+	}
+	lexer := state.Registry.Get(u.Lexer)
+	if lexer == nil {
+		panic(fmt.Sprintf("no such lexer %q", u.Lexer))
+	}
+	it, err := lexer.Tokenise(&TokeniseOptions{State: "root", Nested: true}, groups[0])
+	if err != nil {
+		panic(err)
+	}
+	return it
+}
+
+// Using returns an Emitter that uses a given Lexer reference for parsing and emitting.
+//
+// The referenced lexer must be stored in the same LexerRegistry.
+func Using(lexer string) Emitter {
+	return &usingEmitter{Lexer: lexer}
+}
+
+type usingSelfEmitter struct {
+	State string `xml:"state,attr"`
+}
+
+func (u *usingSelfEmitter) EmitterKind() string { return "usingself" }
+
+func (u *usingSelfEmitter) Emit(groups []string, state *LexerState) Iterator {
+	it, err := state.Lexer.Tokenise(&TokeniseOptions{State: u.State, Nested: true}, groups[0])
+	if err != nil {
+		panic(err)
+	}
+	return it
+}
+
+// UsingSelf is like Using, but uses the current Lexer.
+func UsingSelf(stateName string) Emitter {
+	return &usingSelfEmitter{stateName}
+}

vendor/github.com/alecthomas/chroma/v2/formatter.go 🔗

@@ -0,0 +1,43 @@
+package chroma
+
+import (
+	"io"
+)
+
+// A Formatter for Chroma lexers.
+type Formatter interface {
+	// Format returns a formatting function for tokens.
+	//
+	// If the iterator panics, the Formatter should recover.
+	Format(w io.Writer, style *Style, iterator Iterator) error
+}
+
+// A FormatterFunc is a Formatter implemented as a function.
+//
+// Guards against iterator panics.
+type FormatterFunc func(w io.Writer, style *Style, iterator Iterator) error
+
+func (f FormatterFunc) Format(w io.Writer, s *Style, it Iterator) (err error) { // nolint
+	defer func() {
+		if perr := recover(); perr != nil {
+			err = perr.(error)
+		}
+	}()
+	return f(w, s, it)
+}
+
+type recoveringFormatter struct {
+	Formatter
+}
+
+func (r recoveringFormatter) Format(w io.Writer, s *Style, it Iterator) (err error) {
+	defer func() {
+		if perr := recover(); perr != nil {
+			err = perr.(error)
+		}
+	}()
+	return r.Formatter.Format(w, s, it)
+}
+
+// RecoveringFormatter wraps a formatter with panic recovery.
+func RecoveringFormatter(formatter Formatter) Formatter { return recoveringFormatter{formatter} }

vendor/github.com/alecthomas/chroma/v2/formatters/api.go 🔗

@@ -0,0 +1,57 @@
+package formatters
+
+import (
+	"io"
+	"sort"
+
+	"github.com/alecthomas/chroma/v2"
+	"github.com/alecthomas/chroma/v2/formatters/html"
+	"github.com/alecthomas/chroma/v2/formatters/svg"
+)
+
+var (
+	// NoOp formatter.
+	NoOp = Register("noop", chroma.FormatterFunc(func(w io.Writer, s *chroma.Style, iterator chroma.Iterator) error {
+		for t := iterator(); t != chroma.EOF; t = iterator() {
+			if _, err := io.WriteString(w, t.Value); err != nil {
+				return err
+			}
+		}
+		return nil
+	}))
+	// Default HTML formatter outputs self-contained HTML.
+	htmlFull = Register("html", html.New(html.Standalone(true), html.WithClasses(true))) // nolint
+	SVG      = Register("svg", svg.New(svg.EmbedFont("Liberation Mono", svg.FontLiberationMono, svg.WOFF)))
+)
+
+// Fallback formatter.
+var Fallback = NoOp
+
+// Registry of Formatters.
+var Registry = map[string]chroma.Formatter{}
+
+// Names of registered formatters.
+func Names() []string {
+	out := []string{}
+	for name := range Registry {
+		out = append(out, name)
+	}
+	sort.Strings(out)
+	return out
+}
+
+// Get formatter by name.
+//
+// If the given formatter is not found, the Fallback formatter will be returned.
+func Get(name string) chroma.Formatter {
+	if f, ok := Registry[name]; ok {
+		return f
+	}
+	return Fallback
+}
+
+// Register a named formatter.
+func Register(name string, formatter chroma.Formatter) chroma.Formatter {
+	Registry[name] = formatter
+	return formatter
+}
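The formatters package follows a simple register/lookup-with-fallback convention: `Register` stores a formatter under a name, `Get` falls back to `Fallback` for unknown names, and `Names` returns the sorted key set. A toy string-valued sketch of the same pattern (types and names here are illustrative, not chroma's):

```go
package main

import (
	"fmt"
	"sort"
)

// registry sketches the register/lookup-with-fallback pattern used by
// the formatters package, with plain strings standing in for Formatters.
type registry struct {
	entries  map[string]string
	fallback string
}

// register stores a value under a name and returns it, so registration
// can double as initialization of a package-level variable.
func (r *registry) register(name, v string) string {
	r.entries[name] = v
	return v
}

// get returns the named value, or the fallback if the name is unknown.
func (r *registry) get(name string) string {
	if v, ok := r.entries[name]; ok {
		return v
	}
	return r.fallback
}

// names returns the registered names in sorted order.
func (r *registry) names() []string {
	out := make([]string, 0, len(r.entries))
	for name := range r.entries {
		out = append(out, name)
	}
	sort.Strings(out)
	return out
}

func main() {
	r := &registry{entries: map[string]string{}, fallback: "noop"}
	r.register("html", "html-formatter")
	fmt.Println(r.get("html"), r.get("missing")) // html-formatter noop
}
```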

vendor/github.com/alecthomas/chroma/v2/formatters/html/html.go 🔗

@@ -0,0 +1,623 @@
+package html
+
+import (
+	"fmt"
+	"html"
+	"io"
+	"sort"
+	"strconv"
+	"strings"
+	"sync"
+
+	"github.com/alecthomas/chroma/v2"
+)
+
+// Option sets an option of the HTML formatter.
+type Option func(f *Formatter)
+
+// Standalone configures the HTML formatter for generating a standalone HTML document.
+func Standalone(b bool) Option { return func(f *Formatter) { f.standalone = b } }
+
+// ClassPrefix sets the CSS class prefix.
+func ClassPrefix(prefix string) Option { return func(f *Formatter) { f.prefix = prefix } }
+
+// WithClasses emits HTML using CSS classes, rather than inline styles.
+func WithClasses(b bool) Option { return func(f *Formatter) { f.Classes = b } }
+
+// WithAllClasses disables an optimisation that omits redundant CSS classes.
+func WithAllClasses(b bool) Option { return func(f *Formatter) { f.allClasses = b } }
+
+// WithCustomCSS sets user's custom CSS styles.
+func WithCustomCSS(css map[chroma.TokenType]string) Option {
+	return func(f *Formatter) {
+		f.customCSS = css
+	}
+}
+
+// TabWidth sets the number of characters for a tab. Defaults to 8.
+func TabWidth(width int) Option { return func(f *Formatter) { f.tabWidth = width } }
+
+// PreventSurroundingPre prevents the surrounding pre tags around the generated code.
+func PreventSurroundingPre(b bool) Option {
+	return func(f *Formatter) {
+		f.preventSurroundingPre = b
+
+		if b {
+			f.preWrapper = nopPreWrapper
+		} else {
+			f.preWrapper = defaultPreWrapper
+		}
+	}
+}
+
+// InlineCode creates inline code wrapped in a code tag.
+func InlineCode(b bool) Option {
+	return func(f *Formatter) {
+		f.inlineCode = b
+		f.preWrapper = preWrapper{
+			start: func(code bool, styleAttr string) string {
+				if code {
+					return fmt.Sprintf(`<code%s>`, styleAttr)
+				}
+
+				return ``
+			},
+			end: func(code bool) string {
+				if code {
+					return `</code>`
+				}
+
+				return ``
+			},
+		}
+	}
+}
+
+// WithPreWrapper allows control of the surrounding pre tags.
+func WithPreWrapper(wrapper PreWrapper) Option {
+	return func(f *Formatter) {
+		f.preWrapper = wrapper
+	}
+}
+
+// WrapLongLines wraps long lines.
+func WrapLongLines(b bool) Option {
+	return func(f *Formatter) {
+		f.wrapLongLines = b
+	}
+}
+
+// WithLineNumbers formats output with line numbers.
+func WithLineNumbers(b bool) Option {
+	return func(f *Formatter) {
+		f.lineNumbers = b
+	}
+}
+
+// LineNumbersInTable will, when combined with WithLineNumbers, separate the line numbers
+// and code in table td's, which makes them copy-and-paste friendly.
+func LineNumbersInTable(b bool) Option {
+	return func(f *Formatter) {
+		f.lineNumbersInTable = b
+	}
+}
+
+// WithLinkableLineNumbers decorates the line numbers HTML elements with an "id"
+// attribute so they can be linked.
+func WithLinkableLineNumbers(b bool, prefix string) Option {
+	return func(f *Formatter) {
+		f.linkableLineNumbers = b
+		f.lineNumbersIDPrefix = prefix
+	}
+}
+
+// HighlightLines highlights the given line ranges with the Highlight style.
+//
+// A range is the beginning and ending of a range as 1-based line numbers, inclusive.
+func HighlightLines(ranges [][2]int) Option {
+	return func(f *Formatter) {
+		f.highlightRanges = ranges
+		sort.Sort(f.highlightRanges)
+	}
+}
+
+// BaseLineNumber sets the initial number to start line numbering at. Defaults to 1.
+func BaseLineNumber(n int) Option {
+	return func(f *Formatter) {
+		f.baseLineNumber = n
+	}
+}
+
+// New HTML formatter.
+func New(options ...Option) *Formatter {
+	f := &Formatter{
+		baseLineNumber: 1,
+		preWrapper:     defaultPreWrapper,
+	}
+	f.styleCache = newStyleCache(f)
+	for _, option := range options {
+		option(f)
+	}
+	return f
+}
+
+// PreWrapper defines the operations supported in WithPreWrapper.
+type PreWrapper interface {
+	// Start is called to write a start <pre> element.
+	// The code flag tells whether this block surrounds
+	// highlighted code. This will be false when surrounding
+	// line numbers.
+	Start(code bool, styleAttr string) string
+
+	// End is called to write the end </pre> element.
+	End(code bool) string
+}
+
+type preWrapper struct {
+	start func(code bool, styleAttr string) string
+	end   func(code bool) string
+}
+
+func (p preWrapper) Start(code bool, styleAttr string) string {
+	return p.start(code, styleAttr)
+}
+
+func (p preWrapper) End(code bool) string {
+	return p.end(code)
+}
+
+var (
+	nopPreWrapper = preWrapper{
+		start: func(code bool, styleAttr string) string { return "" },
+		end:   func(code bool) string { return "" },
+	}
+	defaultPreWrapper = preWrapper{
+		start: func(code bool, styleAttr string) string {
+			if code {
+				return fmt.Sprintf(`<pre%s><code>`, styleAttr)
+			}
+
+			return fmt.Sprintf(`<pre%s>`, styleAttr)
+		},
+		end: func(code bool) string {
+			if code {
+				return `</code></pre>`
+			}
+
+			return `</pre>`
+		},
+	}
+)
+
+// Formatter that generates HTML.
+type Formatter struct {
+	styleCache            *styleCache
+	standalone            bool
+	prefix                string
+	Classes               bool // Exported field to detect when classes are being used
+	allClasses            bool
+	customCSS             map[chroma.TokenType]string
+	preWrapper            PreWrapper
+	inlineCode            bool
+	preventSurroundingPre bool
+	tabWidth              int
+	wrapLongLines         bool
+	lineNumbers           bool
+	lineNumbersInTable    bool
+	linkableLineNumbers   bool
+	lineNumbersIDPrefix   string
+	highlightRanges       highlightRanges
+	baseLineNumber        int
+}
+
+type highlightRanges [][2]int
+
+func (h highlightRanges) Len() int           { return len(h) }
+func (h highlightRanges) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
+func (h highlightRanges) Less(i, j int) bool { return h[i][0] < h[j][0] }
+
+func (f *Formatter) Format(w io.Writer, style *chroma.Style, iterator chroma.Iterator) (err error) {
+	return f.writeHTML(w, style, iterator.Tokens())
+}
+
+// We deliberately don't use html/template here because it is two orders of magnitude slower (benchmarked).
+//
+// OTOH we need to be super careful about correct escaping...
+func (f *Formatter) writeHTML(w io.Writer, style *chroma.Style, tokens []chroma.Token) (err error) { // nolint: gocyclo
+	css := f.styleCache.get(style, true)
+	if f.standalone {
+		fmt.Fprint(w, "<html>\n")
+		if f.Classes {
+			fmt.Fprint(w, "<style type=\"text/css\">\n")
+			err = f.WriteCSS(w, style)
+			if err != nil {
+				return err
+			}
+			fmt.Fprintf(w, "body { %s; }\n", css[chroma.Background])
+			fmt.Fprint(w, "</style>")
+		}
+		fmt.Fprintf(w, "<body%s>\n", f.styleAttr(css, chroma.Background))
+	}
+
+	wrapInTable := f.lineNumbers && f.lineNumbersInTable
+
+	lines := chroma.SplitTokensIntoLines(tokens)
+	lineDigits := len(strconv.Itoa(f.baseLineNumber + len(lines) - 1))
+	highlightIndex := 0
+
+	if wrapInTable {
+		// List line numbers in its own <td>
+		fmt.Fprintf(w, "<div%s>\n", f.styleAttr(css, chroma.PreWrapper))
+		fmt.Fprintf(w, "<table%s><tr>", f.styleAttr(css, chroma.LineTable))
+		fmt.Fprintf(w, "<td%s>\n", f.styleAttr(css, chroma.LineTableTD))
+		fmt.Fprintf(w, "%s", f.preWrapper.Start(false, f.styleAttr(css, chroma.PreWrapper)))
+		for index := range lines {
+			line := f.baseLineNumber + index
+			highlight, next := f.shouldHighlight(highlightIndex, line)
+			if next {
+				highlightIndex++
+			}
+			if highlight {
+				fmt.Fprintf(w, "<span%s>", f.styleAttr(css, chroma.LineHighlight))
+			}
+
+			fmt.Fprintf(w, "<span%s%s>%s\n</span>", f.styleAttr(css, chroma.LineNumbersTable), f.lineIDAttribute(line), f.lineTitleWithLinkIfNeeded(css, lineDigits, line))
+
+			if highlight {
+				fmt.Fprintf(w, "</span>")
+			}
+		}
+		fmt.Fprint(w, f.preWrapper.End(false))
+		fmt.Fprint(w, "</td>\n")
+		fmt.Fprintf(w, "<td%s>\n", f.styleAttr(css, chroma.LineTableTD, "width:100%"))
+	}
+
+	fmt.Fprintf(w, "%s", f.preWrapper.Start(true, f.styleAttr(css, chroma.PreWrapper)))
+
+	highlightIndex = 0
+	for index, tokens := range lines {
+		// 1-based line number.
+		line := f.baseLineNumber + index
+		highlight, next := f.shouldHighlight(highlightIndex, line)
+		if next {
+			highlightIndex++
+		}
+
+		if !(f.preventSurroundingPre || f.inlineCode) {
+			// Start of Line
+			fmt.Fprint(w, `<span`)
+
+			if highlight {
+				// Line + LineHighlight
+				if f.Classes {
+					fmt.Fprintf(w, ` class="%s %s"`, f.class(chroma.Line), f.class(chroma.LineHighlight))
+				} else {
+					fmt.Fprintf(w, ` style="%s %s"`, css[chroma.Line], css[chroma.LineHighlight])
+				}
+				fmt.Fprint(w, `>`)
+			} else {
+				fmt.Fprintf(w, "%s>", f.styleAttr(css, chroma.Line))
+			}
+
+			// Line number
+			if f.lineNumbers && !wrapInTable {
+				fmt.Fprintf(w, "<span%s%s>%s</span>", f.styleAttr(css, chroma.LineNumbers), f.lineIDAttribute(line), f.lineTitleWithLinkIfNeeded(css, lineDigits, line))
+			}
+
+			fmt.Fprintf(w, `<span%s>`, f.styleAttr(css, chroma.CodeLine))
+		}
+
+		for _, token := range tokens {
+			html := html.EscapeString(token.String())
+			attr := f.styleAttr(css, token.Type)
+			if attr != "" {
+				html = fmt.Sprintf("<span%s>%s</span>", attr, html)
+			}
+			fmt.Fprint(w, html)
+		}
+
+		if !(f.preventSurroundingPre || f.inlineCode) {
+			fmt.Fprint(w, `</span>`) // End of CodeLine
+
+			fmt.Fprint(w, `</span>`) // End of Line
+		}
+	}
+	fmt.Fprintf(w, "%s", f.preWrapper.End(true))
+
+	if wrapInTable {
+		fmt.Fprint(w, "</td></tr></table>\n")
+		fmt.Fprint(w, "</div>\n")
+	}
+
+	if f.standalone {
+		fmt.Fprint(w, "\n</body>\n")
+		fmt.Fprint(w, "</html>\n")
+	}
+
+	return nil
+}
+
+func (f *Formatter) lineIDAttribute(line int) string {
+	if !f.linkableLineNumbers {
+		return ""
+	}
+	return fmt.Sprintf(" id=\"%s\"", f.lineID(line))
+}
+
+func (f *Formatter) lineTitleWithLinkIfNeeded(css map[chroma.TokenType]string, lineDigits, line int) string {
+	title := fmt.Sprintf("%*d", lineDigits, line)
+	if !f.linkableLineNumbers {
+		return title
+	}
+	return fmt.Sprintf("<a%s href=\"#%s\">%s</a>", f.styleAttr(css, chroma.LineLink), f.lineID(line), title)
+}
+
+func (f *Formatter) lineID(line int) string {
+	return fmt.Sprintf("%s%d", f.lineNumbersIDPrefix, line)
+}
+
+func (f *Formatter) shouldHighlight(highlightIndex, line int) (bool, bool) {
+	next := false
+	for highlightIndex < len(f.highlightRanges) && line > f.highlightRanges[highlightIndex][1] {
+		highlightIndex++
+		next = true
+	}
+	if highlightIndex < len(f.highlightRanges) {
+		hrange := f.highlightRanges[highlightIndex]
+		if line >= hrange[0] && line <= hrange[1] {
+			return true, next
+		}
+	}
+	return false, next
+}
+
+func (f *Formatter) class(t chroma.TokenType) string {
+	for t != 0 {
+		if cls, ok := chroma.StandardTypes[t]; ok {
+			if cls != "" {
+				return f.prefix + cls
+			}
+			return ""
+		}
+		t = t.Parent()
+	}
+	if cls := chroma.StandardTypes[t]; cls != "" {
+		return f.prefix + cls
+	}
+	return ""
+}
+
+func (f *Formatter) styleAttr(styles map[chroma.TokenType]string, tt chroma.TokenType, extraCSS ...string) string {
+	if f.Classes {
+		cls := f.class(tt)
+		if cls == "" {
+			return ""
+		}
+		return fmt.Sprintf(` class="%s"`, cls)
+	}
+	if _, ok := styles[tt]; !ok {
+		tt = tt.SubCategory()
+		if _, ok := styles[tt]; !ok {
+			tt = tt.Category()
+			if _, ok := styles[tt]; !ok {
+				return ""
+			}
+		}
+	}
+	css := []string{styles[tt]}
+	css = append(css, extraCSS...)
+	return fmt.Sprintf(` style="%s"`, strings.Join(css, ";"))
+}
+
+func (f *Formatter) tabWidthStyle() string {
+	if f.tabWidth != 0 && f.tabWidth != 8 {
+		return fmt.Sprintf("-moz-tab-size: %[1]d; -o-tab-size: %[1]d; tab-size: %[1]d;", f.tabWidth)
+	}
+	return ""
+}
+
+// WriteCSS writes CSS style definitions (without any surrounding HTML).
+func (f *Formatter) WriteCSS(w io.Writer, style *chroma.Style) error {
+	css := f.styleCache.get(style, false)
+	// Special-case background as it is mapped to the outer ".chroma" class.
+	if _, err := fmt.Fprintf(w, "/* %s */ .%sbg { %s }\n", chroma.Background, f.prefix, css[chroma.Background]); err != nil {
+		return err
+	}
+	// Special-case PreWrapper as it is the ".chroma" class.
+	if _, err := fmt.Fprintf(w, "/* %s */ .%schroma { %s }\n", chroma.PreWrapper, f.prefix, css[chroma.PreWrapper]); err != nil {
+		return err
+	}
+	// Special-case code column of table to expand width.
+	if f.lineNumbers && f.lineNumbersInTable {
+		if _, err := fmt.Fprintf(w, "/* %s */ .%schroma .%s:last-child { width: 100%%; }",
+			chroma.LineTableTD, f.prefix, f.class(chroma.LineTableTD)); err != nil {
+			return err
+		}
+	}
+	// Special-case line number highlighting when targeted.
+	if f.lineNumbers || f.lineNumbersInTable {
+		targetedLineCSS := StyleEntryToCSS(style.Get(chroma.LineHighlight))
+		for _, tt := range []chroma.TokenType{chroma.LineNumbers, chroma.LineNumbersTable} {
+			fmt.Fprintf(w, "/* %s targeted by URL anchor */ .%schroma .%s:target { %s }\n", tt, f.prefix, f.class(tt), targetedLineCSS)
+		}
+	}
+	tts := []int{}
+	for tt := range css {
+		tts = append(tts, int(tt))
+	}
+	sort.Ints(tts)
+	for _, ti := range tts {
+		tt := chroma.TokenType(ti)
+		switch tt {
+		case chroma.Background, chroma.PreWrapper:
+			continue
+		}
+		class := f.class(tt)
+		if class == "" {
+			continue
+		}
+		styles := css[tt]
+		if _, err := fmt.Fprintf(w, "/* %s */ .%schroma .%s { %s }\n", tt, f.prefix, class, styles); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+func (f *Formatter) styleToCSS(style *chroma.Style) map[chroma.TokenType]string {
+	classes := map[chroma.TokenType]string{}
+	bg := style.Get(chroma.Background)
+	// Convert the style.
+	for t := range chroma.StandardTypes {
+		entry := style.Get(t)
+		if t != chroma.Background {
+			entry = entry.Sub(bg)
+		}
+
+		// Inherit from custom CSS provided by user
+		tokenCategory := t.Category()
+		tokenSubCategory := t.SubCategory()
+		if t != tokenCategory {
+			if css, ok := f.customCSS[tokenCategory]; ok {
+				classes[t] = css
+			}
+		}
+		if tokenCategory != tokenSubCategory {
+			if css, ok := f.customCSS[tokenSubCategory]; ok {
+				classes[t] += css
+			}
+		}
+		// Add custom CSS provided by user
+		if css, ok := f.customCSS[t]; ok {
+			classes[t] += css
+		}
+
+		if !f.allClasses && entry.IsZero() && classes[t] == `` {
+			continue
+		}
+
+		styleEntryCSS := StyleEntryToCSS(entry)
+		if styleEntryCSS != `` && classes[t] != `` {
+			styleEntryCSS += `;`
+		}
+		classes[t] = styleEntryCSS + classes[t]
+	}
+	classes[chroma.Background] += `;` + f.tabWidthStyle()
+	classes[chroma.PreWrapper] += classes[chroma.Background]
+	// Make PreWrapper a grid to show highlight style with full width.
+	if len(f.highlightRanges) > 0 && f.customCSS[chroma.PreWrapper] == `` {
+		classes[chroma.PreWrapper] += `display: grid;`
+	}
+	// Make PreWrapper wrap long lines.
+	if f.wrapLongLines {
+		classes[chroma.PreWrapper] += `white-space: pre-wrap; word-break: break-word;`
+	}
+	lineNumbersStyle := `white-space: pre; -webkit-user-select: none; user-select: none; margin-right: 0.4em; padding: 0 0.4em 0 0.4em;`
+	// All rules begin with default rules followed by user provided rules
+	classes[chroma.Line] = `display: flex;` + classes[chroma.Line]
+	classes[chroma.LineNumbers] = lineNumbersStyle + classes[chroma.LineNumbers]
+	classes[chroma.LineNumbersTable] = lineNumbersStyle + classes[chroma.LineNumbersTable]
+	classes[chroma.LineTable] = "border-spacing: 0; padding: 0; margin: 0; border: 0;" + classes[chroma.LineTable]
+	classes[chroma.LineTableTD] = "vertical-align: top; padding: 0; margin: 0; border: 0;" + classes[chroma.LineTableTD]
+	classes[chroma.LineLink] = "outline: none; text-decoration: none; color: inherit" + classes[chroma.LineLink]
+	return classes
+}
+
+// StyleEntryToCSS converts a chroma.StyleEntry to CSS attributes.
+func StyleEntryToCSS(e chroma.StyleEntry) string {
+	styles := []string{}
+	if e.Colour.IsSet() {
+		styles = append(styles, "color: "+e.Colour.String())
+	}
+	if e.Background.IsSet() {
+		styles = append(styles, "background-color: "+e.Background.String())
+	}
+	if e.Bold == chroma.Yes {
+		styles = append(styles, "font-weight: bold")
+	}
+	if e.Italic == chroma.Yes {
+		styles = append(styles, "font-style: italic")
+	}
+	if e.Underline == chroma.Yes {
+		styles = append(styles, "text-decoration: underline")
+	}
+	return strings.Join(styles, "; ")
+}
+
+// Compress CSS attributes - remove spaces, transform 6-digit colours to 3.
+func compressStyle(s string) string {
+	parts := strings.Split(s, ";")
+	out := []string{}
+	for _, p := range parts {
+		p = strings.Join(strings.Fields(p), " ")
+		p = strings.Replace(p, ": ", ":", 1)
+		if strings.Contains(p, "#") {
+			c := p[len(p)-6:]
+			if c[0] == c[1] && c[2] == c[3] && c[4] == c[5] {
+				p = p[:len(p)-6] + c[0:1] + c[2:3] + c[4:5]
+			}
+		}
+		out = append(out, p)
+	}
+	return strings.Join(out, ";")
+}
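The compression above does three things per `;`-separated declaration: collapse runs of whitespace, drop the space after the first `:`, and shorten a trailing 6-digit hex colour whose digit pairs are equal (`#ffffff`) to 3 digits (`#fff`). A self-contained restatement for experimentation — identical logic except for an added bounds guard on short values, which the vendored version omits:

```go
package main

import (
	"fmt"
	"strings"
)

// compressStyle restates the CSS-compression logic above: collapse
// whitespace, remove the space after ":", and shorten pairwise-equal
// 6-digit hex colours (#aabbcc) to 3 digits (#abc).
func compressStyle(s string) string {
	parts := strings.Split(s, ";")
	out := []string{}
	for _, p := range parts {
		p = strings.Join(strings.Fields(p), " ")
		p = strings.Replace(p, ": ", ":", 1)
		// Bounds guard added here; values like "#abc" are shorter than
		// six characters and must be left alone.
		if strings.Contains(p, "#") && len(p) >= 6 {
			c := p[len(p)-6:]
			if c[0] == c[1] && c[2] == c[3] && c[4] == c[5] {
				p = p[:len(p)-6] + c[0:1] + c[2:3] + c[4:5]
			}
		}
		out = append(out, p)
	}
	return strings.Join(out, ";")
}

func main() {
	fmt.Println(compressStyle("color: #ffffff; font-weight: bold"))
	// color:#fff;font-weight:bold
}
```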
+
+const styleCacheLimit = 32
+
+type styleCacheEntry struct {
+	style      *chroma.Style
+	compressed bool
+	cache      map[chroma.TokenType]string
+}
+
+type styleCache struct {
+	mu sync.Mutex
+	// LRU cache of compiled (and possibly compressed) styles. This is a slice
+	// because the cache size is small, and a slice is sufficiently fast for
+	// small N.
+	cache []styleCacheEntry
+	f     *Formatter
+}
+
+func newStyleCache(f *Formatter) *styleCache {
+	return &styleCache{f: f}
+}
+
+func (l *styleCache) get(style *chroma.Style, compress bool) map[chroma.TokenType]string {
+	l.mu.Lock()
+	defer l.mu.Unlock()
+
+	// Look for an existing entry.
+	for i := len(l.cache) - 1; i >= 0; i-- {
+		entry := l.cache[i]
+		if entry.style == style && entry.compressed == compress {
+			// Top of the cache, no need to adjust the order.
+			if i == len(l.cache)-1 {
+				return entry.cache
+			}
+			// Move this entry to the end of the LRU
+			copy(l.cache[i:], l.cache[i+1:])
+			l.cache[len(l.cache)-1] = entry
+			return entry.cache
+		}
+	}
+
+	// No entry, create one.
+	cached := l.f.styleToCSS(style)
+	if !l.f.Classes {
+		for t, style := range cached {
+			cached[t] = compressStyle(style)
+		}
+	}
+	if compress {
+		for t, style := range cached {
+			cached[t] = compressStyle(style)
+		}
+	}
+	// Evict the oldest entry.
+	if len(l.cache) >= styleCacheLimit {
+		l.cache = l.cache[0:copy(l.cache, l.cache[1:])]
+	}
+	l.cache = append(l.cache, styleCacheEntry{style: style, cache: cached, compressed: compress})
+	return cached
+}
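The `styleCache` above is an LRU implemented directly on a slice: lookups scan from the newest end, a hit is rotated to the back, and when the limit is reached the oldest entry at index 0 is shifted out. For the small fixed `styleCacheLimit`, this beats a map-plus-list LRU in both code size and constant factors. A standalone sketch of the same mechanics (names illustrative, string keys and int values standing in for styles):

```go
package main

import "fmt"

// lruEntry pairs a key with its cached value.
type lruEntry struct {
	key string
	val int
}

// sliceLRU is a slice-backed LRU: newest entries live at the end.
type sliceLRU struct {
	limit   int
	entries []lruEntry
}

// get scans from the most recent end and moves a hit to the back,
// marking it most recently used.
func (l *sliceLRU) get(key string) (int, bool) {
	for i := len(l.entries) - 1; i >= 0; i-- {
		e := l.entries[i]
		if e.key == key {
			copy(l.entries[i:], l.entries[i+1:])
			l.entries[len(l.entries)-1] = e
			return e.val, true
		}
	}
	return 0, false
}

// put appends a new entry, first evicting the oldest (index 0) when
// the limit is reached, exactly as styleCache.get does.
func (l *sliceLRU) put(key string, val int) {
	if len(l.entries) >= l.limit {
		l.entries = l.entries[0:copy(l.entries, l.entries[1:])]
	}
	l.entries = append(l.entries, lruEntry{key, val})
}

func main() {
	l := &sliceLRU{limit: 2}
	l.put("a", 1)
	l.put("b", 2)
	l.get("a")    // "a" becomes most recently used
	l.put("c", 3) // evicts "b", the least recently used
	_, ok := l.get("b")
	fmt.Println(ok) // false
}
```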

vendor/github.com/alecthomas/chroma/v2/formatters/json.go 🔗

@@ -0,0 +1,39 @@
+package formatters
+
+import (
+	"encoding/json"
+	"fmt"
+	"io"
+
+	"github.com/alecthomas/chroma/v2"
+)
+
+// JSON formatter outputs the raw token structures as JSON.
+var JSON = Register("json", chroma.FormatterFunc(func(w io.Writer, s *chroma.Style, it chroma.Iterator) error {
+	if _, err := fmt.Fprintln(w, "["); err != nil {
+		return err
+	}
+	i := 0
+	for t := it(); t != chroma.EOF; t = it() {
+		if i > 0 {
+			if _, err := fmt.Fprintln(w, ","); err != nil {
+				return err
+			}
+		}
+		i++
+		bytes, err := json.Marshal(t)
+		if err != nil {
+			return err
+		}
+		if _, err := fmt.Fprint(w, "  "+string(bytes)); err != nil {
+			return err
+		}
+	}
+	if _, err := fmt.Fprintln(w); err != nil {
+		return err
+	}
+	if _, err := fmt.Fprintln(w, "]"); err != nil {
+		return err
+	}
+	return nil
+}))

vendor/github.com/alecthomas/chroma/v2/formatters/svg/font_liberation_mono.go 🔗

@@ -0,0 +1,51 @@
+// Digitized data copyright (c) 2010 Google Corporation
+// 	with Reserved Font Arimo, Tinos and Cousine.
+// Copyright (c) 2012 Red Hat, Inc.
+// 	with Reserved Font Name Liberation.
+//
+// This Font Software is licensed under the SIL Open Font License, Version 1.1.
+// This license is copied below, and is also available with a FAQ at: http://scripts.sil.org/OFL
+//
+// -----------------------------------------------------------
+// SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
+// -----------------------------------------------------------
+//
+// PREAMBLE
+// The goals of the Open Font License (OFL) are to stimulate worldwide development of collaborative font projects, to support the font creation efforts of academic and linguistic communities, and to provide a free and open framework in which fonts may be shared and improved in partnership with others.
+//
+// The OFL allows the licensed fonts to be used, studied, modified and redistributed freely as long as they are not sold by themselves. The fonts, including any derivative works, can be bundled, embedded, redistributed and/or sold with any software provided that any reserved names are not used by derivative works. The fonts and derivatives, however, cannot be released under any other type of license. The requirement for fonts to remain under this license does not apply to any document created using the fonts or their derivatives.
+//
+// DEFINITIONS
+// "Font Software" refers to the set of files released by the Copyright Holder(s) under this license and clearly marked as such. This may include source files, build scripts and documentation.
+//
+// "Reserved Font Name" refers to any names specified as such after the copyright statement(s).
+//
+// "Original Version" refers to the collection of Font Software components as distributed by the Copyright Holder(s).
+//
+// "Modified Version" refers to any derivative made by adding to, deleting, or substituting -- in part or in whole -- any of the components of the Original Version, by changing formats or by porting the Font Software to a new environment.
+//
+// "Author" refers to any designer, engineer, programmer, technical writer or other person who contributed to the Font Software.
+//
+// PERMISSION & CONDITIONS
+// Permission is hereby granted, free of charge, to any person obtaining a copy of the Font Software, to use, study, copy, merge, embed, modify, redistribute, and sell modified and unmodified copies of the Font Software, subject to the following conditions:
+//
+// 1) Neither the Font Software nor any of its individual components, in Original or Modified Versions, may be sold by itself.
+//
+// 2) Original or Modified Versions of the Font Software may be bundled, redistributed and/or sold with any software, provided that each copy contains the above copyright notice and this license. These can be included either as stand-alone text files, human-readable headers or in the appropriate machine-readable metadata fields within text or binary files as long as those fields can be easily viewed by the user.
+//
+// 3) No Modified Version of the Font Software may use the Reserved Font Name(s) unless explicit written permission is granted by the corresponding Copyright Holder. This restriction only applies to the primary font name as presented to the users.
+//
+// 4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font Software shall not be used to promote, endorse or advertise any Modified Version, except to acknowledge the contribution(s) of the Copyright Holder(s) and the Author(s) or with their explicit written permission.
+//
+// 5) The Font Software, modified or unmodified, in part or in whole, must be distributed entirely under this license, and must not be distributed under any other license. The requirement for fonts to remain under this license does not apply to any document created using the Font Software.
+//
+// TERMINATION
+// This license becomes null and void if any of the above conditions are not met.
+//
+// DISCLAIMER
+// THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER DEALINGS IN THE FONT SOFTWARE.
+
+package svg
+
+// Liberation Mono as base64 encoded woff (SIL Open Font License)[https://en.wikipedia.org/wiki/Liberation_fonts]

vendor/github.com/alecthomas/chroma/v2/formatters/svg/svg.go 🔗

@@ -0,0 +1,222 @@
+// Package svg contains an SVG formatter.
+package svg
+
+import (
+	"encoding/base64"
+	"errors"
+	"fmt"
+	"io"
+	"os"
+	"path"
+	"strings"
+
+	"github.com/alecthomas/chroma/v2"
+)
+
+// Option sets an option of the SVG formatter.
+type Option func(f *Formatter)
+
+// FontFamily sets the font-family.
+func FontFamily(fontFamily string) Option { return func(f *Formatter) { f.fontFamily = fontFamily } }
+
+// EmbedFontFile embeds the given font file.
+func EmbedFontFile(fontFamily string, fileName string) (option Option, err error) {
+	var format FontFormat
+	switch path.Ext(fileName) {
+	case ".woff":
+		format = WOFF
+	case ".woff2":
+		format = WOFF2
+	case ".ttf":
+		format = TRUETYPE
+	default:
+		return nil, errors.New("unexpected font file suffix")
+	}
+
+	var content []byte
+	if content, err = os.ReadFile(fileName); err == nil {
+		option = EmbedFont(fontFamily, base64.StdEncoding.EncodeToString(content), format)
+	}
+	return
+}
+
+// EmbedFont embeds the given base64-encoded font.
+func EmbedFont(fontFamily string, font string, format FontFormat) Option {
+	return func(f *Formatter) { f.fontFamily = fontFamily; f.embeddedFont = font; f.fontFormat = format }
+}
+
+// New SVG formatter.
+func New(options ...Option) *Formatter {
+	f := &Formatter{fontFamily: "Consolas, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono, Courier New, monospace"}
+	for _, option := range options {
+		option(f)
+	}
+	return f
+}
+
+// Formatter that generates SVG.
+type Formatter struct {
+	fontFamily   string
+	embeddedFont string
+	fontFormat   FontFormat
+}
+
+func (f *Formatter) Format(w io.Writer, style *chroma.Style, iterator chroma.Iterator) (err error) {
+	f.writeSVG(w, style, iterator.Tokens())
+	return err
+}
+
+var svgEscaper = strings.NewReplacer(
+	`&`, "&amp;",
+	`<`, "&lt;",
+	`>`, "&gt;",
+	`"`, "&quot;",
+	` `, "&#160;",
+	`	`, "&#160;&#160;&#160;&#160;",
+)
+
+// escapeString escapes special characters.
+func escapeString(s string) string {
+	return svgEscaper.Replace(s)
+}
+
+func (f *Formatter) writeSVG(w io.Writer, style *chroma.Style, tokens []chroma.Token) { // nolint: gocyclo
+	svgStyles := f.styleToSVG(style)
+	lines := chroma.SplitTokensIntoLines(tokens)
+
+	fmt.Fprint(w, "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n")
+	fmt.Fprint(w, "<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.0//EN\" \"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd\">\n")
+	fmt.Fprintf(w, "<svg width=\"%dpx\" height=\"%dpx\" xmlns=\"http://www.w3.org/2000/svg\">\n", 8*maxLineWidth(lines), 10+int(16.8*float64(len(lines)+1)))
+
+	if f.embeddedFont != "" {
+		f.writeFontStyle(w)
+	}
+
+	fmt.Fprintf(w, "<rect width=\"100%%\" height=\"100%%\" fill=\"%s\"/>\n", style.Get(chroma.Background).Background.String())
+	fmt.Fprintf(w, "<g font-family=\"%s\" font-size=\"14px\" fill=\"%s\">\n", f.fontFamily, style.Get(chroma.Text).Colour.String())
+
+	f.writeTokenBackgrounds(w, lines, style)
+
+	for index, tokens := range lines {
+		fmt.Fprintf(w, "<text x=\"0\" y=\"%fem\" xml:space=\"preserve\">", 1.2*float64(index+1))
+
+		for _, token := range tokens {
+			text := escapeString(token.String())
+			attr := f.styleAttr(svgStyles, token.Type)
+			if attr != "" {
+				text = fmt.Sprintf("<tspan %s>%s</tspan>", attr, text)
+			}
+			fmt.Fprint(w, text)
+		}
+		fmt.Fprint(w, "</text>")
+	}
+
+	fmt.Fprint(w, "\n</g>\n")
+	fmt.Fprint(w, "</svg>\n")
+}
+
+func maxLineWidth(lines [][]chroma.Token) int {
+	maxWidth := 0
+	for _, tokens := range lines {
+		length := 0
+		for _, token := range tokens {
+			length += len(strings.ReplaceAll(token.String(), `	`, "    "))
+		}
+		if length > maxWidth {
+			maxWidth = length
+		}
+	}
+	return maxWidth
+}
+
+// There is no background attribute for text in SVG so simply calculate the position and text
+// of tokens with a background color that differs from the default and add a rectangle for each before
+// adding the token.
+func (f *Formatter) writeTokenBackgrounds(w io.Writer, lines [][]chroma.Token, style *chroma.Style) {
+	for index, tokens := range lines {
+		lineLength := 0
+		for _, token := range tokens {
+			length := len(strings.ReplaceAll(token.String(), `	`, "    "))
+			tokenBackground := style.Get(token.Type).Background
+			if tokenBackground.IsSet() && tokenBackground != style.Get(chroma.Background).Background {
+				fmt.Fprintf(w, "<rect id=\"%s\" x=\"%dch\" y=\"%fem\" width=\"%dch\" height=\"1.2em\" fill=\"%s\" />\n", escapeString(token.String()), lineLength, 1.2*float64(index)+0.25, length, style.Get(token.Type).Background.String())
+			}
+			lineLength += length
+		}
+	}
+}
+
+type FontFormat int
+
+// https://transfonter.org/formats
+const (
+	WOFF FontFormat = iota
+	WOFF2
+	TRUETYPE
+)
+
+var fontFormats = [...]string{
+	"woff",
+	"woff2",
+	"truetype",
+}
+
+func (f *Formatter) writeFontStyle(w io.Writer) {
+	fmt.Fprintf(w, `<style>
+@font-face {
+	font-family: '%s';
+	src: url(data:application/x-font-%s;charset=utf-8;base64,%s) format('%s');
+	font-weight: normal;
+	font-style: normal;
+}
+</style>`, f.fontFamily, fontFormats[f.fontFormat], f.embeddedFont, fontFormats[f.fontFormat])
+}
+
+func (f *Formatter) styleAttr(styles map[chroma.TokenType]string, tt chroma.TokenType) string {
+	if _, ok := styles[tt]; !ok {
+		tt = tt.SubCategory()
+		if _, ok := styles[tt]; !ok {
+			tt = tt.Category()
+			if _, ok := styles[tt]; !ok {
+				return ""
+			}
+		}
+	}
+	return styles[tt]
+}
+
+func (f *Formatter) styleToSVG(style *chroma.Style) map[chroma.TokenType]string {
+	converted := map[chroma.TokenType]string{}
+	bg := style.Get(chroma.Background)
+	// Convert the style.
+	for t := range chroma.StandardTypes {
+		entry := style.Get(t)
+		if t != chroma.Background {
+			entry = entry.Sub(bg)
+		}
+		if entry.IsZero() {
+			continue
+		}
+		converted[t] = StyleEntryToSVG(entry)
+	}
+	return converted
+}
+
+// StyleEntryToSVG converts a chroma.StyleEntry to SVG attributes.
+func StyleEntryToSVG(e chroma.StyleEntry) string {
+	var styles []string
+
+	if e.Colour.IsSet() {
+		styles = append(styles, "fill=\""+e.Colour.String()+"\"")
+	}
+	if e.Bold == chroma.Yes {
+		styles = append(styles, "font-weight=\"bold\"")
+	}
+	if e.Italic == chroma.Yes {
+		styles = append(styles, "font-style=\"italic\"")
+	}
+	if e.Underline == chroma.Yes {
+		styles = append(styles, "text-decoration=\"underline\"")
+	}
+	return strings.Join(styles, " ")
+}
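The attribute-building in `StyleEntryToSVG` can be exercised on its own. The sketch below uses a local stand-in struct (`entry` is not chroma's `StyleEntry`; it exists purely for illustration) to show how each set attribute becomes one SVG presentation attribute, joined with spaces:

```go
package main

import (
	"fmt"
	"strings"
)

// entry is a simplified local stand-in for chroma.StyleEntry.
type entry struct {
	colour                  string // empty means the colour is unset
	bold, italic, underline bool
}

// toSVG mirrors StyleEntryToSVG above: every attribute that is set
// contributes one SVG presentation attribute; the results are space-joined.
func toSVG(e entry) string {
	var styles []string
	if e.colour != "" {
		styles = append(styles, `fill="`+e.colour+`"`)
	}
	if e.bold {
		styles = append(styles, `font-weight="bold"`)
	}
	if e.italic {
		styles = append(styles, `font-style="italic"`)
	}
	if e.underline {
		styles = append(styles, `text-decoration="underline"`)
	}
	return strings.Join(styles, " ")
}

func main() {
	fmt.Println(toSVG(entry{colour: "#ff0000", bold: true}))
	// fill="#ff0000" font-weight="bold"
}
```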

vendor/github.com/alecthomas/chroma/v2/formatters/tokens.go 🔗

@@ -0,0 +1,18 @@
+package formatters
+
+import (
+	"fmt"
+	"io"
+
+	"github.com/alecthomas/chroma/v2"
+)
+
+// Tokens formatter outputs the raw token structures.
+var Tokens = Register("tokens", chroma.FormatterFunc(func(w io.Writer, s *chroma.Style, it chroma.Iterator) error {
+	for t := it(); t != chroma.EOF; t = it() {
+		if _, err := fmt.Fprintln(w, t.GoString()); err != nil {
+			return err
+		}
+	}
+	return nil
+}))

vendor/github.com/alecthomas/chroma/v2/formatters/tty_indexed.go 🔗

@@ -0,0 +1,284 @@
+package formatters
+
+import (
+	"io"
+	"math"
+
+	"github.com/alecthomas/chroma/v2"
+)
+
+type ttyTable struct {
+	foreground map[chroma.Colour]string
+	background map[chroma.Colour]string
+}
+
+var c = chroma.MustParseColour
+
+var ttyTables = map[int]*ttyTable{
+	8: {
+		foreground: map[chroma.Colour]string{
+			c("#000000"): "\033[30m", c("#7f0000"): "\033[31m", c("#007f00"): "\033[32m", c("#7f7fe0"): "\033[33m",
+			c("#00007f"): "\033[34m", c("#7f007f"): "\033[35m", c("#007f7f"): "\033[36m", c("#e5e5e5"): "\033[37m",
+			c("#555555"): "\033[1m\033[30m", c("#ff0000"): "\033[1m\033[31m", c("#00ff00"): "\033[1m\033[32m", c("#ffff00"): "\033[1m\033[33m",
+			c("#0000ff"): "\033[1m\033[34m", c("#ff00ff"): "\033[1m\033[35m", c("#00ffff"): "\033[1m\033[36m", c("#ffffff"): "\033[1m\033[37m",
+		},
+		background: map[chroma.Colour]string{
+			c("#000000"): "\033[40m", c("#7f0000"): "\033[41m", c("#007f00"): "\033[42m", c("#7f7fe0"): "\033[43m",
+			c("#00007f"): "\033[44m", c("#7f007f"): "\033[45m", c("#007f7f"): "\033[46m", c("#e5e5e5"): "\033[47m",
+			c("#555555"): "\033[1m\033[40m", c("#ff0000"): "\033[1m\033[41m", c("#00ff00"): "\033[1m\033[42m", c("#ffff00"): "\033[1m\033[43m",
+			c("#0000ff"): "\033[1m\033[44m", c("#ff00ff"): "\033[1m\033[45m", c("#00ffff"): "\033[1m\033[46m", c("#ffffff"): "\033[1m\033[47m",
+		},
+	},
+	16: {
+		foreground: map[chroma.Colour]string{
+			c("#000000"): "\033[30m", c("#7f0000"): "\033[31m", c("#007f00"): "\033[32m", c("#7f7fe0"): "\033[33m",
+			c("#00007f"): "\033[34m", c("#7f007f"): "\033[35m", c("#007f7f"): "\033[36m", c("#e5e5e5"): "\033[37m",
+			c("#555555"): "\033[90m", c("#ff0000"): "\033[91m", c("#00ff00"): "\033[92m", c("#ffff00"): "\033[93m",
+			c("#0000ff"): "\033[94m", c("#ff00ff"): "\033[95m", c("#00ffff"): "\033[96m", c("#ffffff"): "\033[97m",
+		},
+		background: map[chroma.Colour]string{
+			c("#000000"): "\033[40m", c("#7f0000"): "\033[41m", c("#007f00"): "\033[42m", c("#7f7fe0"): "\033[43m",
+			c("#00007f"): "\033[44m", c("#7f007f"): "\033[45m", c("#007f7f"): "\033[46m", c("#e5e5e5"): "\033[47m",
+			c("#555555"): "\033[100m", c("#ff0000"): "\033[101m", c("#00ff00"): "\033[102m", c("#ffff00"): "\033[103m",
+			c("#0000ff"): "\033[104m", c("#ff00ff"): "\033[105m", c("#00ffff"): "\033[106m", c("#ffffff"): "\033[107m",
+		},
+	},
+	256: {
+		foreground: map[chroma.Colour]string{
+			c("#000000"): "\033[38;5;0m", c("#800000"): "\033[38;5;1m", c("#008000"): "\033[38;5;2m", c("#808000"): "\033[38;5;3m",
+			c("#000080"): "\033[38;5;4m", c("#800080"): "\033[38;5;5m", c("#008080"): "\033[38;5;6m", c("#c0c0c0"): "\033[38;5;7m",
+			c("#808080"): "\033[38;5;8m", c("#ff0000"): "\033[38;5;9m", c("#00ff00"): "\033[38;5;10m", c("#ffff00"): "\033[38;5;11m",
+			c("#0000ff"): "\033[38;5;12m", c("#ff00ff"): "\033[38;5;13m", c("#00ffff"): "\033[38;5;14m", c("#ffffff"): "\033[38;5;15m",
+			c("#000000"): "\033[38;5;16m", c("#00005f"): "\033[38;5;17m", c("#000087"): "\033[38;5;18m", c("#0000af"): "\033[38;5;19m",
+			c("#0000d7"): "\033[38;5;20m", c("#0000ff"): "\033[38;5;21m", c("#005f00"): "\033[38;5;22m", c("#005f5f"): "\033[38;5;23m",
+			c("#005f87"): "\033[38;5;24m", c("#005faf"): "\033[38;5;25m", c("#005fd7"): "\033[38;5;26m", c("#005fff"): "\033[38;5;27m",
+			c("#008700"): "\033[38;5;28m", c("#00875f"): "\033[38;5;29m", c("#008787"): "\033[38;5;30m", c("#0087af"): "\033[38;5;31m",
+			c("#0087d7"): "\033[38;5;32m", c("#0087ff"): "\033[38;5;33m", c("#00af00"): "\033[38;5;34m", c("#00af5f"): "\033[38;5;35m",
+			c("#00af87"): "\033[38;5;36m", c("#00afaf"): "\033[38;5;37m", c("#00afd7"): "\033[38;5;38m", c("#00afff"): "\033[38;5;39m",
+			c("#00d700"): "\033[38;5;40m", c("#00d75f"): "\033[38;5;41m", c("#00d787"): "\033[38;5;42m", c("#00d7af"): "\033[38;5;43m",
+			c("#00d7d7"): "\033[38;5;44m", c("#00d7ff"): "\033[38;5;45m", c("#00ff00"): "\033[38;5;46m", c("#00ff5f"): "\033[38;5;47m",
+			c("#00ff87"): "\033[38;5;48m", c("#00ffaf"): "\033[38;5;49m", c("#00ffd7"): "\033[38;5;50m", c("#00ffff"): "\033[38;5;51m",
+			c("#5f0000"): "\033[38;5;52m", c("#5f005f"): "\033[38;5;53m", c("#5f0087"): "\033[38;5;54m", c("#5f00af"): "\033[38;5;55m",
+			c("#5f00d7"): "\033[38;5;56m", c("#5f00ff"): "\033[38;5;57m", c("#5f5f00"): "\033[38;5;58m", c("#5f5f5f"): "\033[38;5;59m",
+			c("#5f5f87"): "\033[38;5;60m", c("#5f5faf"): "\033[38;5;61m", c("#5f5fd7"): "\033[38;5;62m", c("#5f5fff"): "\033[38;5;63m",
+			c("#5f8700"): "\033[38;5;64m", c("#5f875f"): "\033[38;5;65m", c("#5f8787"): "\033[38;5;66m", c("#5f87af"): "\033[38;5;67m",
+			c("#5f87d7"): "\033[38;5;68m", c("#5f87ff"): "\033[38;5;69m", c("#5faf00"): "\033[38;5;70m", c("#5faf5f"): "\033[38;5;71m",
+			c("#5faf87"): "\033[38;5;72m", c("#5fafaf"): "\033[38;5;73m", c("#5fafd7"): "\033[38;5;74m", c("#5fafff"): "\033[38;5;75m",
+			c("#5fd700"): "\033[38;5;76m", c("#5fd75f"): "\033[38;5;77m", c("#5fd787"): "\033[38;5;78m", c("#5fd7af"): "\033[38;5;79m",
+			c("#5fd7d7"): "\033[38;5;80m", c("#5fd7ff"): "\033[38;5;81m", c("#5fff00"): "\033[38;5;82m", c("#5fff5f"): "\033[38;5;83m",
+			c("#5fff87"): "\033[38;5;84m", c("#5fffaf"): "\033[38;5;85m", c("#5fffd7"): "\033[38;5;86m", c("#5fffff"): "\033[38;5;87m",
+			c("#870000"): "\033[38;5;88m", c("#87005f"): "\033[38;5;89m", c("#870087"): "\033[38;5;90m", c("#8700af"): "\033[38;5;91m",
+			c("#8700d7"): "\033[38;5;92m", c("#8700ff"): "\033[38;5;93m", c("#875f00"): "\033[38;5;94m", c("#875f5f"): "\033[38;5;95m",
+			c("#875f87"): "\033[38;5;96m", c("#875faf"): "\033[38;5;97m", c("#875fd7"): "\033[38;5;98m", c("#875fff"): "\033[38;5;99m",
+			c("#878700"): "\033[38;5;100m", c("#87875f"): "\033[38;5;101m", c("#878787"): "\033[38;5;102m", c("#8787af"): "\033[38;5;103m",
+			c("#8787d7"): "\033[38;5;104m", c("#8787ff"): "\033[38;5;105m", c("#87af00"): "\033[38;5;106m", c("#87af5f"): "\033[38;5;107m",
+			c("#87af87"): "\033[38;5;108m", c("#87afaf"): "\033[38;5;109m", c("#87afd7"): "\033[38;5;110m", c("#87afff"): "\033[38;5;111m",
+			c("#87d700"): "\033[38;5;112m", c("#87d75f"): "\033[38;5;113m", c("#87d787"): "\033[38;5;114m", c("#87d7af"): "\033[38;5;115m",
+			c("#87d7d7"): "\033[38;5;116m", c("#87d7ff"): "\033[38;5;117m", c("#87ff00"): "\033[38;5;118m", c("#87ff5f"): "\033[38;5;119m",
+			c("#87ff87"): "\033[38;5;120m", c("#87ffaf"): "\033[38;5;121m", c("#87ffd7"): "\033[38;5;122m", c("#87ffff"): "\033[38;5;123m",
+			c("#af0000"): "\033[38;5;124m", c("#af005f"): "\033[38;5;125m", c("#af0087"): "\033[38;5;126m", c("#af00af"): "\033[38;5;127m",
+			c("#af00d7"): "\033[38;5;128m", c("#af00ff"): "\033[38;5;129m", c("#af5f00"): "\033[38;5;130m", c("#af5f5f"): "\033[38;5;131m",
+			c("#af5f87"): "\033[38;5;132m", c("#af5faf"): "\033[38;5;133m", c("#af5fd7"): "\033[38;5;134m", c("#af5fff"): "\033[38;5;135m",
+			c("#af8700"): "\033[38;5;136m", c("#af875f"): "\033[38;5;137m", c("#af8787"): "\033[38;5;138m", c("#af87af"): "\033[38;5;139m",
+			c("#af87d7"): "\033[38;5;140m", c("#af87ff"): "\033[38;5;141m", c("#afaf00"): "\033[38;5;142m", c("#afaf5f"): "\033[38;5;143m",
+			c("#afaf87"): "\033[38;5;144m", c("#afafaf"): "\033[38;5;145m", c("#afafd7"): "\033[38;5;146m", c("#afafff"): "\033[38;5;147m",
+			c("#afd700"): "\033[38;5;148m", c("#afd75f"): "\033[38;5;149m", c("#afd787"): "\033[38;5;150m", c("#afd7af"): "\033[38;5;151m",
+			c("#afd7d7"): "\033[38;5;152m", c("#afd7ff"): "\033[38;5;153m", c("#afff00"): "\033[38;5;154m", c("#afff5f"): "\033[38;5;155m",
+			c("#afff87"): "\033[38;5;156m", c("#afffaf"): "\033[38;5;157m", c("#afffd7"): "\033[38;5;158m", c("#afffff"): "\033[38;5;159m",
+			c("#d70000"): "\033[38;5;160m", c("#d7005f"): "\033[38;5;161m", c("#d70087"): "\033[38;5;162m", c("#d700af"): "\033[38;5;163m",
+			c("#d700d7"): "\033[38;5;164m", c("#d700ff"): "\033[38;5;165m", c("#d75f00"): "\033[38;5;166m", c("#d75f5f"): "\033[38;5;167m",
+			c("#d75f87"): "\033[38;5;168m", c("#d75faf"): "\033[38;5;169m", c("#d75fd7"): "\033[38;5;170m", c("#d75fff"): "\033[38;5;171m",
+			c("#d78700"): "\033[38;5;172m", c("#d7875f"): "\033[38;5;173m", c("#d78787"): "\033[38;5;174m", c("#d787af"): "\033[38;5;175m",
+			c("#d787d7"): "\033[38;5;176m", c("#d787ff"): "\033[38;5;177m", c("#d7af00"): "\033[38;5;178m", c("#d7af5f"): "\033[38;5;179m",
+			c("#d7af87"): "\033[38;5;180m", c("#d7afaf"): "\033[38;5;181m", c("#d7afd7"): "\033[38;5;182m", c("#d7afff"): "\033[38;5;183m",
+			c("#d7d700"): "\033[38;5;184m", c("#d7d75f"): "\033[38;5;185m", c("#d7d787"): "\033[38;5;186m", c("#d7d7af"): "\033[38;5;187m",
+			c("#d7d7d7"): "\033[38;5;188m", c("#d7d7ff"): "\033[38;5;189m", c("#d7ff00"): "\033[38;5;190m", c("#d7ff5f"): "\033[38;5;191m",
+			c("#d7ff87"): "\033[38;5;192m", c("#d7ffaf"): "\033[38;5;193m", c("#d7ffd7"): "\033[38;5;194m", c("#d7ffff"): "\033[38;5;195m",
+			c("#ff0000"): "\033[38;5;196m", c("#ff005f"): "\033[38;5;197m", c("#ff0087"): "\033[38;5;198m", c("#ff00af"): "\033[38;5;199m",
+			c("#ff00d7"): "\033[38;5;200m", c("#ff00ff"): "\033[38;5;201m", c("#ff5f00"): "\033[38;5;202m", c("#ff5f5f"): "\033[38;5;203m",
+			c("#ff5f87"): "\033[38;5;204m", c("#ff5faf"): "\033[38;5;205m", c("#ff5fd7"): "\033[38;5;206m", c("#ff5fff"): "\033[38;5;207m",
+			c("#ff8700"): "\033[38;5;208m", c("#ff875f"): "\033[38;5;209m", c("#ff8787"): "\033[38;5;210m", c("#ff87af"): "\033[38;5;211m",
+			c("#ff87d7"): "\033[38;5;212m", c("#ff87ff"): "\033[38;5;213m", c("#ffaf00"): "\033[38;5;214m", c("#ffaf5f"): "\033[38;5;215m",
+			c("#ffaf87"): "\033[38;5;216m", c("#ffafaf"): "\033[38;5;217m", c("#ffafd7"): "\033[38;5;218m", c("#ffafff"): "\033[38;5;219m",
+			c("#ffd700"): "\033[38;5;220m", c("#ffd75f"): "\033[38;5;221m", c("#ffd787"): "\033[38;5;222m", c("#ffd7af"): "\033[38;5;223m",
+			c("#ffd7d7"): "\033[38;5;224m", c("#ffd7ff"): "\033[38;5;225m", c("#ffff00"): "\033[38;5;226m", c("#ffff5f"): "\033[38;5;227m",
+			c("#ffff87"): "\033[38;5;228m", c("#ffffaf"): "\033[38;5;229m", c("#ffffd7"): "\033[38;5;230m", c("#ffffff"): "\033[38;5;231m",
+			c("#080808"): "\033[38;5;232m", c("#121212"): "\033[38;5;233m", c("#1c1c1c"): "\033[38;5;234m", c("#262626"): "\033[38;5;235m",
+			c("#303030"): "\033[38;5;236m", c("#3a3a3a"): "\033[38;5;237m", c("#444444"): "\033[38;5;238m", c("#4e4e4e"): "\033[38;5;239m",
+			c("#585858"): "\033[38;5;240m", c("#626262"): "\033[38;5;241m", c("#6c6c6c"): "\033[38;5;242m", c("#767676"): "\033[38;5;243m",
+			c("#808080"): "\033[38;5;244m", c("#8a8a8a"): "\033[38;5;245m", c("#949494"): "\033[38;5;246m", c("#9e9e9e"): "\033[38;5;247m",
+			c("#a8a8a8"): "\033[38;5;248m", c("#b2b2b2"): "\033[38;5;249m", c("#bcbcbc"): "\033[38;5;250m", c("#c6c6c6"): "\033[38;5;251m",
+			c("#d0d0d0"): "\033[38;5;252m", c("#dadada"): "\033[38;5;253m", c("#e4e4e4"): "\033[38;5;254m", c("#eeeeee"): "\033[38;5;255m",
+		},
+		background: map[chroma.Colour]string{
+			c("#000000"): "\033[48;5;0m", c("#800000"): "\033[48;5;1m", c("#008000"): "\033[48;5;2m", c("#808000"): "\033[48;5;3m",
+			c("#000080"): "\033[48;5;4m", c("#800080"): "\033[48;5;5m", c("#008080"): "\033[48;5;6m", c("#c0c0c0"): "\033[48;5;7m",
+			c("#808080"): "\033[48;5;8m", c("#ff0000"): "\033[48;5;9m", c("#00ff00"): "\033[48;5;10m", c("#ffff00"): "\033[48;5;11m",
+			c("#0000ff"): "\033[48;5;12m", c("#ff00ff"): "\033[48;5;13m", c("#00ffff"): "\033[48;5;14m", c("#ffffff"): "\033[48;5;15m",
+			c("#000000"): "\033[48;5;16m", c("#00005f"): "\033[48;5;17m", c("#000087"): "\033[48;5;18m", c("#0000af"): "\033[48;5;19m",
+			c("#0000d7"): "\033[48;5;20m", c("#0000ff"): "\033[48;5;21m", c("#005f00"): "\033[48;5;22m", c("#005f5f"): "\033[48;5;23m",
+			c("#005f87"): "\033[48;5;24m", c("#005faf"): "\033[48;5;25m", c("#005fd7"): "\033[48;5;26m", c("#005fff"): "\033[48;5;27m",
+			c("#008700"): "\033[48;5;28m", c("#00875f"): "\033[48;5;29m", c("#008787"): "\033[48;5;30m", c("#0087af"): "\033[48;5;31m",
+			c("#0087d7"): "\033[48;5;32m", c("#0087ff"): "\033[48;5;33m", c("#00af00"): "\033[48;5;34m", c("#00af5f"): "\033[48;5;35m",
+			c("#00af87"): "\033[48;5;36m", c("#00afaf"): "\033[48;5;37m", c("#00afd7"): "\033[48;5;38m", c("#00afff"): "\033[48;5;39m",
+			c("#00d700"): "\033[48;5;40m", c("#00d75f"): "\033[48;5;41m", c("#00d787"): "\033[48;5;42m", c("#00d7af"): "\033[48;5;43m",
+			c("#00d7d7"): "\033[48;5;44m", c("#00d7ff"): "\033[48;5;45m", c("#00ff00"): "\033[48;5;46m", c("#00ff5f"): "\033[48;5;47m",
+			c("#00ff87"): "\033[48;5;48m", c("#00ffaf"): "\033[48;5;49m", c("#00ffd7"): "\033[48;5;50m", c("#00ffff"): "\033[48;5;51m",
+			c("#5f0000"): "\033[48;5;52m", c("#5f005f"): "\033[48;5;53m", c("#5f0087"): "\033[48;5;54m", c("#5f00af"): "\033[48;5;55m",
+			c("#5f00d7"): "\033[48;5;56m", c("#5f00ff"): "\033[48;5;57m", c("#5f5f00"): "\033[48;5;58m", c("#5f5f5f"): "\033[48;5;59m",
+			c("#5f5f87"): "\033[48;5;60m", c("#5f5faf"): "\033[48;5;61m", c("#5f5fd7"): "\033[48;5;62m", c("#5f5fff"): "\033[48;5;63m",
+			c("#5f8700"): "\033[48;5;64m", c("#5f875f"): "\033[48;5;65m", c("#5f8787"): "\033[48;5;66m", c("#5f87af"): "\033[48;5;67m",
+			c("#5f87d7"): "\033[48;5;68m", c("#5f87ff"): "\033[48;5;69m", c("#5faf00"): "\033[48;5;70m", c("#5faf5f"): "\033[48;5;71m",
+			c("#5faf87"): "\033[48;5;72m", c("#5fafaf"): "\033[48;5;73m", c("#5fafd7"): "\033[48;5;74m", c("#5fafff"): "\033[48;5;75m",
+			c("#5fd700"): "\033[48;5;76m", c("#5fd75f"): "\033[48;5;77m", c("#5fd787"): "\033[48;5;78m", c("#5fd7af"): "\033[48;5;79m",
+			c("#5fd7d7"): "\033[48;5;80m", c("#5fd7ff"): "\033[48;5;81m", c("#5fff00"): "\033[48;5;82m", c("#5fff5f"): "\033[48;5;83m",
+			c("#5fff87"): "\033[48;5;84m", c("#5fffaf"): "\033[48;5;85m", c("#5fffd7"): "\033[48;5;86m", c("#5fffff"): "\033[48;5;87m",
+			c("#870000"): "\033[48;5;88m", c("#87005f"): "\033[48;5;89m", c("#870087"): "\033[48;5;90m", c("#8700af"): "\033[48;5;91m",
+			c("#8700d7"): "\033[48;5;92m", c("#8700ff"): "\033[48;5;93m", c("#875f00"): "\033[48;5;94m", c("#875f5f"): "\033[48;5;95m",
+			c("#875f87"): "\033[48;5;96m", c("#875faf"): "\033[48;5;97m", c("#875fd7"): "\033[48;5;98m", c("#875fff"): "\033[48;5;99m",
+			c("#878700"): "\033[48;5;100m", c("#87875f"): "\033[48;5;101m", c("#878787"): "\033[48;5;102m", c("#8787af"): "\033[48;5;103m",
+			c("#8787d7"): "\033[48;5;104m", c("#8787ff"): "\033[48;5;105m", c("#87af00"): "\033[48;5;106m", c("#87af5f"): "\033[48;5;107m",
+			c("#87af87"): "\033[48;5;108m", c("#87afaf"): "\033[48;5;109m", c("#87afd7"): "\033[48;5;110m", c("#87afff"): "\033[48;5;111m",
+			c("#87d700"): "\033[48;5;112m", c("#87d75f"): "\033[48;5;113m", c("#87d787"): "\033[48;5;114m", c("#87d7af"): "\033[48;5;115m",
+			c("#87d7d7"): "\033[48;5;116m", c("#87d7ff"): "\033[48;5;117m", c("#87ff00"): "\033[48;5;118m", c("#87ff5f"): "\033[48;5;119m",
+			c("#87ff87"): "\033[48;5;120m", c("#87ffaf"): "\033[48;5;121m", c("#87ffd7"): "\033[48;5;122m", c("#87ffff"): "\033[48;5;123m",
+			c("#af0000"): "\033[48;5;124m", c("#af005f"): "\033[48;5;125m", c("#af0087"): "\033[48;5;126m", c("#af00af"): "\033[48;5;127m",
+			c("#af00d7"): "\033[48;5;128m", c("#af00ff"): "\033[48;5;129m", c("#af5f00"): "\033[48;5;130m", c("#af5f5f"): "\033[48;5;131m",
+			c("#af5f87"): "\033[48;5;132m", c("#af5faf"): "\033[48;5;133m", c("#af5fd7"): "\033[48;5;134m", c("#af5fff"): "\033[48;5;135m",
+			c("#af8700"): "\033[48;5;136m", c("#af875f"): "\033[48;5;137m", c("#af8787"): "\033[48;5;138m", c("#af87af"): "\033[48;5;139m",
+			c("#af87d7"): "\033[48;5;140m", c("#af87ff"): "\033[48;5;141m", c("#afaf00"): "\033[48;5;142m", c("#afaf5f"): "\033[48;5;143m",
+			c("#afaf87"): "\033[48;5;144m", c("#afafaf"): "\033[48;5;145m", c("#afafd7"): "\033[48;5;146m", c("#afafff"): "\033[48;5;147m",
+			c("#afd700"): "\033[48;5;148m", c("#afd75f"): "\033[48;5;149m", c("#afd787"): "\033[48;5;150m", c("#afd7af"): "\033[48;5;151m",
+			c("#afd7d7"): "\033[48;5;152m", c("#afd7ff"): "\033[48;5;153m", c("#afff00"): "\033[48;5;154m", c("#afff5f"): "\033[48;5;155m",
+			c("#afff87"): "\033[48;5;156m", c("#afffaf"): "\033[48;5;157m", c("#afffd7"): "\033[48;5;158m", c("#afffff"): "\033[48;5;159m",
+			c("#d70000"): "\033[48;5;160m", c("#d7005f"): "\033[48;5;161m", c("#d70087"): "\033[48;5;162m", c("#d700af"): "\033[48;5;163m",
+			c("#d700d7"): "\033[48;5;164m", c("#d700ff"): "\033[48;5;165m", c("#d75f00"): "\033[48;5;166m", c("#d75f5f"): "\033[48;5;167m",
+			c("#d75f87"): "\033[48;5;168m", c("#d75faf"): "\033[48;5;169m", c("#d75fd7"): "\033[48;5;170m", c("#d75fff"): "\033[48;5;171m",
+			c("#d78700"): "\033[48;5;172m", c("#d7875f"): "\033[48;5;173m", c("#d78787"): "\033[48;5;174m", c("#d787af"): "\033[48;5;175m",
+			c("#d787d7"): "\033[48;5;176m", c("#d787ff"): "\033[48;5;177m", c("#d7af00"): "\033[48;5;178m", c("#d7af5f"): "\033[48;5;179m",
+			c("#d7af87"): "\033[48;5;180m", c("#d7afaf"): "\033[48;5;181m", c("#d7afd7"): "\033[48;5;182m", c("#d7afff"): "\033[48;5;183m",
+			c("#d7d700"): "\033[48;5;184m", c("#d7d75f"): "\033[48;5;185m", c("#d7d787"): "\033[48;5;186m", c("#d7d7af"): "\033[48;5;187m",
+			c("#d7d7d7"): "\033[48;5;188m", c("#d7d7ff"): "\033[48;5;189m", c("#d7ff00"): "\033[48;5;190m", c("#d7ff5f"): "\033[48;5;191m",
+			c("#d7ff87"): "\033[48;5;192m", c("#d7ffaf"): "\033[48;5;193m", c("#d7ffd7"): "\033[48;5;194m", c("#d7ffff"): "\033[48;5;195m",
+			c("#ff0000"): "\033[48;5;196m", c("#ff005f"): "\033[48;5;197m", c("#ff0087"): "\033[48;5;198m", c("#ff00af"): "\033[48;5;199m",
+			c("#ff00d7"): "\033[48;5;200m", c("#ff00ff"): "\033[48;5;201m", c("#ff5f00"): "\033[48;5;202m", c("#ff5f5f"): "\033[48;5;203m",
+			c("#ff5f87"): "\033[48;5;204m", c("#ff5faf"): "\033[48;5;205m", c("#ff5fd7"): "\033[48;5;206m", c("#ff5fff"): "\033[48;5;207m",
+			c("#ff8700"): "\033[48;5;208m", c("#ff875f"): "\033[48;5;209m", c("#ff8787"): "\033[48;5;210m", c("#ff87af"): "\033[48;5;211m",
+			c("#ff87d7"): "\033[48;5;212m", c("#ff87ff"): "\033[48;5;213m", c("#ffaf00"): "\033[48;5;214m", c("#ffaf5f"): "\033[48;5;215m",
+			c("#ffaf87"): "\033[48;5;216m", c("#ffafaf"): "\033[48;5;217m", c("#ffafd7"): "\033[48;5;218m", c("#ffafff"): "\033[48;5;219m",
+			c("#ffd700"): "\033[48;5;220m", c("#ffd75f"): "\033[48;5;221m", c("#ffd787"): "\033[48;5;222m", c("#ffd7af"): "\033[48;5;223m",
+			c("#ffd7d7"): "\033[48;5;224m", c("#ffd7ff"): "\033[48;5;225m", c("#ffff00"): "\033[48;5;226m", c("#ffff5f"): "\033[48;5;227m",
+			c("#ffff87"): "\033[48;5;228m", c("#ffffaf"): "\033[48;5;229m", c("#ffffd7"): "\033[48;5;230m", c("#ffffff"): "\033[48;5;231m",
+			c("#080808"): "\033[48;5;232m", c("#121212"): "\033[48;5;233m", c("#1c1c1c"): "\033[48;5;234m", c("#262626"): "\033[48;5;235m",
+			c("#303030"): "\033[48;5;236m", c("#3a3a3a"): "\033[48;5;237m", c("#444444"): "\033[48;5;238m", c("#4e4e4e"): "\033[48;5;239m",
+			c("#585858"): "\033[48;5;240m", c("#626262"): "\033[48;5;241m", c("#6c6c6c"): "\033[48;5;242m", c("#767676"): "\033[48;5;243m",
+			c("#808080"): "\033[48;5;244m", c("#8a8a8a"): "\033[48;5;245m", c("#949494"): "\033[48;5;246m", c("#9e9e9e"): "\033[48;5;247m",
+			c("#a8a8a8"): "\033[48;5;248m", c("#b2b2b2"): "\033[48;5;249m", c("#bcbcbc"): "\033[48;5;250m", c("#c6c6c6"): "\033[48;5;251m",
+			c("#d0d0d0"): "\033[48;5;252m", c("#dadada"): "\033[48;5;253m", c("#e4e4e4"): "\033[48;5;254m", c("#eeeeee"): "\033[48;5;255m",
+		},
+	},
+}
+
+func entryToEscapeSequence(table *ttyTable, entry chroma.StyleEntry) string {
+	out := ""
+	if entry.Bold == chroma.Yes {
+		out += "\033[1m"
+	}
+	if entry.Underline == chroma.Yes {
+		out += "\033[4m"
+	}
+	if entry.Italic == chroma.Yes {
+		out += "\033[3m"
+	}
+	if entry.Colour.IsSet() {
+		out += table.foreground[findClosest(table, entry.Colour)]
+	}
+	if entry.Background.IsSet() {
+		out += table.background[findClosest(table, entry.Background)]
+	}
+	return out
+}
+
+func findClosest(table *ttyTable, seeking chroma.Colour) chroma.Colour {
+	closestColour := chroma.Colour(0)
+	closest := float64(math.MaxFloat64)
+	for colour := range table.foreground {
+		distance := colour.Distance(seeking)
+		if distance < closest {
+			closest = distance
+			closestColour = colour
+		}
+	}
+	return closestColour
+}
+
+func styleToEscapeSequence(table *ttyTable, style *chroma.Style) map[chroma.TokenType]string {
+	style = clearBackground(style)
+	out := map[chroma.TokenType]string{}
+	for _, ttype := range style.Types() {
+		entry := style.Get(ttype)
+		out[ttype] = entryToEscapeSequence(table, entry)
+	}
+	return out
+}
+
+// Clear the background colour.
+func clearBackground(style *chroma.Style) *chroma.Style {
+	builder := style.Builder()
+	bg := builder.Get(chroma.Background)
+	bg.Background = 0
+	bg.NoInherit = true
+	builder.AddEntry(chroma.Background, bg)
+	style, _ = builder.Build()
+	return style
+}
+
+type indexedTTYFormatter struct {
+	table *ttyTable
+}
+
+func (c *indexedTTYFormatter) Format(w io.Writer, style *chroma.Style, it chroma.Iterator) (err error) {
+	theme := styleToEscapeSequence(c.table, style)
+	for token := it(); token != chroma.EOF; token = it() {
+		clr, ok := theme[token.Type]
+
+		// This search mimics how styles.Get() is used in tty_truecolour.go.
+		if !ok {
+			clr, ok = theme[token.Type.SubCategory()]
+			if !ok {
+				clr, ok = theme[token.Type.Category()]
+				if !ok {
+					clr, ok = theme[chroma.Text]
+					if !ok {
+						clr = theme[chroma.Background]
+					}
+				}
+			}
+		}
+
+		writeToken(w, clr, token.Value)
+	}
+	return nil
+}
+
+// TTY is an 8-colour terminal formatter.
+//
+// The Lab colour space is used to map RGB values to the most appropriate index colour.
+var TTY = Register("terminal", &indexedTTYFormatter{ttyTables[8]})
+
+// TTY8 is an 8-colour terminal formatter.
+//
+// The Lab colour space is used to map RGB values to the most appropriate index colour.
+var TTY8 = Register("terminal8", &indexedTTYFormatter{ttyTables[8]})
+
+// TTY16 is a 16-colour terminal formatter.
+//
+// It uses \033[3xm for normal colours and \033[90Xm for bright colours.
+//
+// The Lab colour space is used to map RGB values to the most appropriate index colour.
+var TTY16 = Register("terminal16", &indexedTTYFormatter{ttyTables[16]})
+
+// TTY256 is a 256-colour terminal formatter.
+//
+// The Lab colour space is used to map RGB values to the most appropriate index colour.
+var TTY256 = Register("terminal256", &indexedTTYFormatter{ttyTables[256]})
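The nearest-colour lookup in `findClosest` can be sketched standalone. Note one simplification: chroma's `Colour.Distance` works in the perceptual Lab colour space, while the stand-in below uses a plain Euclidean RGB distance purely to show the shape of the scan-and-keep-minimum algorithm; the types and names here are illustrative, not chroma's:

```go
package main

import (
	"fmt"
	"math"
)

type rgb struct{ r, g, b float64 }

// dist is a simple Euclidean RGB distance. The real formatter uses a
// perceptual Lab-space distance; this is a simplified stand-in.
func dist(a, b rgb) float64 {
	return math.Sqrt((a.r-b.r)*(a.r-b.r) + (a.g-b.g)*(a.g-b.g) + (a.b-b.b)*(a.b-b.b))
}

// findClosest mirrors the loop above: scan every palette key and keep
// the entry with the smallest distance to the colour we are seeking.
func findClosest(palette map[rgb]string, seeking rgb) string {
	best, bestDist := "", math.MaxFloat64
	for colour, seq := range palette {
		if d := dist(colour, seeking); d < bestDist {
			bestDist, best = d, seq
		}
	}
	return best
}

func main() {
	palette := map[rgb]string{
		{0, 0, 0}:       "black",
		{255, 0, 0}:     "red",
		{255, 255, 255}: "white",
	}
	fmt.Println(findClosest(palette, rgb{200, 10, 20})) // nearest is red
}
```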

vendor/github.com/alecthomas/chroma/v2/formatters/tty_truecolour.go 🔗

@@ -0,0 +1,76 @@
+package formatters
+
+import (
+	"fmt"
+	"io"
+	"regexp"
+
+	"github.com/alecthomas/chroma/v2"
+)
+
+// TTY16m is a true-colour terminal formatter.
+var TTY16m = Register("terminal16m", chroma.FormatterFunc(trueColourFormatter))
+
+var crOrCrLf = regexp.MustCompile(`\r?\n`)
+
+// Print the text with the given formatting, resetting the formatting at the end
+// of each line and resuming it on the next line.
+//
+// This way, a pager (like https://github.com/walles/moar for example) can show
+// any line in the output by itself, and it will get the right formatting.
+func writeToken(w io.Writer, formatting string, text string) {
+	if formatting == "" {
+		fmt.Fprint(w, text)
+		return
+	}
+
+	newlineIndices := crOrCrLf.FindAllStringIndex(text, -1)
+
+	afterLastNewline := 0
+	for _, indices := range newlineIndices {
+		newlineStart, afterNewline := indices[0], indices[1]
+		fmt.Fprint(w, formatting)
+		fmt.Fprint(w, text[afterLastNewline:newlineStart])
+		fmt.Fprint(w, "\033[0m")
+		fmt.Fprint(w, text[newlineStart:afterNewline])
+		afterLastNewline = afterNewline
+	}
+
+	if afterLastNewline < len(text) {
+		// Print whatever is left after the last newline
+		fmt.Fprint(w, formatting)
+		fmt.Fprint(w, text[afterLastNewline:])
+		fmt.Fprint(w, "\033[0m")
+	}
+}
+
+func trueColourFormatter(w io.Writer, style *chroma.Style, it chroma.Iterator) error {
+	style = clearBackground(style)
+	for token := it(); token != chroma.EOF; token = it() {
+		entry := style.Get(token.Type)
+		if entry.IsZero() {
+			fmt.Fprint(w, token.Value)
+			continue
+		}
+
+		formatting := ""
+		if entry.Bold == chroma.Yes {
+			formatting += "\033[1m"
+		}
+		if entry.Underline == chroma.Yes {
+			formatting += "\033[4m"
+		}
+		if entry.Italic == chroma.Yes {
+			formatting += "\033[3m"
+		}
+		if entry.Colour.IsSet() {
+			formatting += fmt.Sprintf("\033[38;2;%d;%d;%dm", entry.Colour.Red(), entry.Colour.Green(), entry.Colour.Blue())
+		}
+		if entry.Background.IsSet() {
+			formatting += fmt.Sprintf("\033[48;2;%d;%d;%dm", entry.Background.Red(), entry.Background.Green(), entry.Background.Blue())
+		}
+
+		writeToken(w, formatting, token.Value)
+	}
+	return nil
+}
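The per-line reset behaviour of `writeToken` is easiest to see on a string. The sketch below reimplements the same logic against a `string` return value (names are illustrative, not chroma's): the escape sequence is re-emitted at the start of each line and reset before each newline, so any single line rendered in isolation, e.g. by a pager, still carries complete formatting:

```go
package main

import (
	"fmt"
	"regexp"
)

var lineBreak = regexp.MustCompile(`\r?\n`)

// formatLines mirrors writeToken above: emit the formatting, the line's
// text, then a reset ("\033[0m") before every newline, and resume the
// formatting on the following line.
func formatLines(formatting, text string) string {
	if formatting == "" {
		return text
	}
	out := ""
	afterLast := 0
	for _, idx := range lineBreak.FindAllStringIndex(text, -1) {
		start, end := idx[0], idx[1]
		out += formatting + text[afterLast:start] + "\033[0m" + text[start:end]
		afterLast = end
	}
	if afterLast < len(text) {
		// Whatever remains after the last newline.
		out += formatting + text[afterLast:] + "\033[0m"
	}
	return out
}

func main() {
	// Bold "a\nb": both lines are independently bolded and reset.
	fmt.Printf("%q\n", formatLines("\033[1m", "a\nb"))
}
```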

vendor/github.com/alecthomas/chroma/v2/iterator.go 🔗

@@ -0,0 +1,76 @@
+package chroma
+
+import "strings"
+
+// An Iterator across tokens.
+//
+// EOF will be returned at the end of the Token stream.
+//
+// If an error occurs within an Iterator, it may propagate this in a panic. Formatters should recover.
+type Iterator func() Token
+
+// Tokens consumes all tokens from the iterator and returns them as a slice.
+func (i Iterator) Tokens() []Token {
+	var out []Token
+	for t := i(); t != EOF; t = i() {
+		out = append(out, t)
+	}
+	return out
+}
+
+// Concaterator concatenates tokens from a series of iterators.
+func Concaterator(iterators ...Iterator) Iterator {
+	return func() Token {
+		for len(iterators) > 0 {
+			t := iterators[0]()
+			if t != EOF {
+				return t
+			}
+			iterators = iterators[1:]
+		}
+		return EOF
+	}
+}
+
+// Literator converts a sequence of literal Tokens into an Iterator.
+func Literator(tokens ...Token) Iterator {
+	return func() Token {
+		if len(tokens) == 0 {
+			return EOF
+		}
+		token := tokens[0]
+		tokens = tokens[1:]
+		return token
+	}
+}
+
+// SplitTokensIntoLines splits tokens containing newlines in two.
+func SplitTokensIntoLines(tokens []Token) (out [][]Token) {
+	var line []Token // nolint: prealloc
+	for _, token := range tokens {
+		for strings.Contains(token.Value, "\n") {
+			parts := strings.SplitAfterN(token.Value, "\n", 2)
+			// Token becomes the tail.
+			token.Value = parts[1]
+
+			// Append the head to the line and flush the line.
+			clone := token.Clone()
+			clone.Value = parts[0]
+			line = append(line, clone)
+			out = append(out, line)
+			line = nil
+		}
+		line = append(line, token)
+	}
+	if len(line) > 0 {
+		out = append(out, line)
+	}
+	// Strip empty trailing token line.
+	if len(out) > 0 {
+		last := out[len(out)-1]
+		if len(last) == 1 && last[0].Value == "" {
+			out = out[:len(out)-1]
+		}
+	}
+	return
+}
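The splitting rule in `SplitTokensIntoLines` can be demonstrated with a minimal local token type (`tok` is a stand-in, not chroma's `Token`): any token containing newlines is cut so that each output line ends exactly at a `"\n"`, and an empty trailing line left by a final newline is stripped:

```go
package main

import (
	"fmt"
	"strings"
)

// tok is a minimal stand-in for chroma.Token (illustration only).
type tok struct{ val string }

// splitLines mirrors SplitTokensIntoLines above.
func splitLines(tokens []tok) (out [][]tok) {
	var line []tok
	for _, t := range tokens {
		for strings.Contains(t.val, "\n") {
			parts := strings.SplitAfterN(t.val, "\n", 2)
			// The head (ending in "\n") closes the current line.
			line = append(line, tok{parts[0]})
			out = append(out, line)
			line = nil
			// The tail carries over to the next line.
			t.val = parts[1]
		}
		line = append(line, t)
	}
	if len(line) > 0 {
		out = append(out, line)
	}
	// Strip the empty trailing line produced by a final "\n".
	if n := len(out); n > 0 {
		last := out[n-1]
		if len(last) == 1 && last[0].val == "" {
			out = out[:n-1]
		}
	}
	return
}

func main() {
	lines := splitLines([]tok{{"a\nb"}, {"c\n"}})
	fmt.Println(len(lines)) // 2 lines: ["a\n"] and ["b", "c\n"]
}
```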

vendor/github.com/alecthomas/chroma/v2/lexer.go 🔗

@@ -0,0 +1,162 @@
+package chroma
+
+import (
+	"fmt"
+	"strings"
+)
+
+var (
+	defaultOptions = &TokeniseOptions{
+		State:    "root",
+		EnsureLF: true,
+	}
+)
+
+// Config for a lexer.
+type Config struct {
+	// Name of the lexer.
+	Name string `xml:"name,omitempty"`
+
+	// Shortcuts for the lexer
+	Aliases []string `xml:"alias,omitempty"`
+
+	// File name globs
+	Filenames []string `xml:"filename,omitempty"`
+
+	// Secondary file name globs
+	AliasFilenames []string `xml:"alias_filename,omitempty"`
+
+	// MIME types
+	MimeTypes []string `xml:"mime_type,omitempty"`
+
+	// Regex matching is case-insensitive.
+	CaseInsensitive bool `xml:"case_insensitive,omitempty"`
+
+	// Regex matches all characters.
+	DotAll bool `xml:"dot_all,omitempty"`
+
+	// Regex does not match across lines ($ matches EOL).
+	//
+	// Defaults to multiline.
+	NotMultiline bool `xml:"not_multiline,omitempty"`
+
+	// Don't strip leading and trailing newlines from the input.
+	// DontStripNL bool
+
+	// Strip all leading and trailing whitespace from the input
+	// StripAll bool
+
+	// Make sure that the input ends with a newline. This
+	// is required for some lexers that consume input linewise.
+	EnsureNL bool `xml:"ensure_nl,omitempty"`
+
+	// If given and greater than 0, expand tabs in the input.
+	// TabSize int
+
+	// Priority of lexer.
+	//
+	// If this is 0 it will be treated as a default of 1.
+	Priority float32 `xml:"priority,omitempty"`
+
+	// Analyse is a list of regexes to match against the input.
+	//
+	// If a match is found, the score is returned if single attribute is set to true,
+	// otherwise the sum of all the score of matching patterns will be
+	// used as the final score.
+	Analyse *AnalyseConfig `xml:"analyse,omitempty"`
+}
+
+// AnalyseConfig defines the list of regexes analysers.
+type AnalyseConfig struct {
+	Regexes []RegexConfig `xml:"regex,omitempty"`
+	// If true, the first matching score is returned.
+	First bool `xml:"first,attr"`
+}
+
+// RegexConfig defines a single regex pattern and its score in case of match.
+type RegexConfig struct {
+	Pattern string  `xml:"pattern,attr"`
+	Score   float32 `xml:"score,attr"`
+}
+
+// Token output to formatter.
+type Token struct {
+	Type  TokenType `json:"type"`
+	Value string    `json:"value"`
+}
+
+func (t *Token) String() string   { return t.Value }
+func (t *Token) GoString() string { return fmt.Sprintf("&Token{%s, %q}", t.Type, t.Value) }
+
+// Clone returns a clone of the Token.
+func (t *Token) Clone() Token {
+	return *t
+}
+
+// EOF is returned by lexers at the end of input.
+var EOF Token
+
+// TokeniseOptions contains options for tokenisers.
+type TokeniseOptions struct {
+	// State to start tokenisation in. Defaults to "root".
+	State string
+	// Nested tokenisation.
+	Nested bool
+
+	// If true, all EOLs are converted into LF
+	// by replacing CRLF and CR
+	EnsureLF bool
+}
+
+// A Lexer for tokenising source code.
+type Lexer interface {
+	// Config describing the features of the Lexer.
+	Config() *Config
+	// Tokenise returns an Iterator over tokens in text.
+	Tokenise(options *TokeniseOptions, text string) (Iterator, error)
+	// SetRegistry sets the registry this Lexer is associated with.
+	//
+	// The registry should be used by the Lexer if it needs to look up other
+	// lexers.
+	SetRegistry(registry *LexerRegistry) Lexer
+	// SetAnalyser sets a function the Lexer should use for scoring how
+	// likely a fragment of text is to match this lexer, between 0.0 and 1.0.
+	// A value of 1 indicates high confidence.
+	//
+	// Lexers may ignore this if they implement their own analysers.
+	SetAnalyser(analyser func(text string) float32) Lexer
+	// AnalyseText scores how likely a fragment of text is to match
+	// this lexer, between 0.0 and 1.0. A value of 1 indicates high confidence.
+	AnalyseText(text string) float32
+}
+
+// Lexers is a slice of lexers sortable by name.
+type Lexers []Lexer
+
+func (l Lexers) Len() int      { return len(l) }
+func (l Lexers) Swap(i, j int) { l[i], l[j] = l[j], l[i] }
+func (l Lexers) Less(i, j int) bool {
+	return strings.ToLower(l[i].Config().Name) < strings.ToLower(l[j].Config().Name)
+}
+
+// PrioritisedLexers is a slice of lexers sortable by priority.
+type PrioritisedLexers []Lexer
+
+func (l PrioritisedLexers) Len() int      { return len(l) }
+func (l PrioritisedLexers) Swap(i, j int) { l[i], l[j] = l[j], l[i] }
+func (l PrioritisedLexers) Less(i, j int) bool {
+	ip := l[i].Config().Priority
+	if ip == 0 {
+		ip = 1
+	}
+	jp := l[j].Config().Priority
+	if jp == 0 {
+		jp = 1
+	}
+	return ip > jp
+}
+
+// Analyser determines how appropriate this lexer is for the given text.
+type Analyser interface {
+	AnalyseText(text string) float32
+}

vendor/github.com/alecthomas/chroma/v2/lexers/README.md

@@ -0,0 +1,46 @@
+# Chroma lexers
+
+All lexers in Chroma should now be defined in XML unless they require custom code.
+
+## Lexer tests
+
+The tests in this directory feed a known input `testdata/<name>.actual` into the lexer for `<name>` and check
+that its output matches `testdata/<name>.expected`.
+
+It is also possible to run several tests against the same lexer `<name>` by placing known inputs `*.actual` in a
+directory `testdata/<name>/`.
+
+### Running the tests
+
+Run the tests as normal:
+```sh
+go test ./lexers
+```
+
+### Update existing tests
+
+When you add a new test data file (`*.actual`), you need to regenerate the tests; that is how Chroma creates the corresponding `*.expected` file from the lexer's output.
+
+To regenerate all tests, type in your terminal:
+
+```sh
+RECORD=true go test ./lexers
+```
+
+This first sets the `RECORD` environment variable to `true`. Then it runs `go test` on the `./lexers` directory of the Chroma project.
+
+(That environment variable tells Chroma it needs to output test data. After running `go test ./lexers` you can remove or reset that variable.)
+
+#### Windows users
+
+Windows users will find that `RECORD=true go test ./lexers` fails in both the standard command prompt and in PowerShell, since neither shell supports prefixing a command with an environment variable assignment.
+
+Instead, perform the two steps separately:
+
+- Set the `RECORD` environment variable to `true`.
+	+ In the regular command prompt window, the `set` command sets an environment variable for the current session: `set RECORD=true`. See [this page](https://superuser.com/questions/212150/how-to-set-env-variable-in-windows-cmd-line) for more.
+	+ In PowerShell, you can use the `$env:RECORD = 'true'` command for that. See [this article](https://mcpmag.com/articles/2019/03/28/environment-variables-in-powershell.aspx) for more.
+	+ You can also make a persistent environment variable by hand in the Windows computer settings. See [this article](https://www.computerhope.com/issues/ch000549.htm) for how.
+- When the environment variable is set, run `go test ./lexers`.
+
+Chroma will now regenerate the test files and print its results to the console window.

vendor/github.com/alecthomas/chroma/v2/lexers/caddyfile.go

@@ -0,0 +1,275 @@
+package lexers
+
+import (
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+// Matcher token stub for docs, or
+// Named matcher: @name, or
+// Path matcher: /foo, or
+// Wildcard path matcher: *
+// nolint: gosec
+var caddyfileMatcherTokenRegexp = `(\[\<matcher\>\]|@[^\s]+|/[^\s]+|\*)`
+
+// Comment at start of line, or
+// Comment preceded by whitespace
+var caddyfileCommentRegexp = `(^|\s+)#.*\n`
+
+// caddyfileCommonRules returns the rules common to both of the lexer variants.
+func caddyfileCommonRules() Rules {
+	return Rules{
+		"site_block_common": {
+			Include("site_body"),
+			// Any other directive
+			{`[^\s#]+`, Keyword, Push("directive")},
+			Include("base"),
+		},
+		"site_body": {
+			// Import keyword
+			{`\b(import|invoke)\b( [^\s#]+)`, ByGroups(Keyword, Text), Push("subdirective")},
+			// Matcher definition
+			{`@[^\s]+(?=\s)`, NameDecorator, Push("matcher")},
+			// Matcher token stub for docs
+			{`\[\<matcher\>\]`, NameDecorator, Push("matcher")},
+			// These cannot have matchers but may have things that look like
+			// matchers in their arguments, so we just parse as a subdirective.
+			{`\b(try_files|tls|log|bind)\b`, Keyword, Push("subdirective")},
+			// These are special, they can nest more directives
+			{`\b(handle_errors|handle_path|handle_response|replace_status|handle|route)\b`, Keyword, Push("nested_directive")},
+			// uri directive has special syntax
+			{`\b(uri)\b`, Keyword, Push("uri_directive")},
+		},
+		"matcher": {
+			{`\{`, Punctuation, Push("block")},
+			// Not can be one-liner
+			{`not`, Keyword, Push("deep_not_matcher")},
+			// Heredoc for CEL expression
+			Include("heredoc"),
+			// Backtick for CEL expression
+			{"`", StringBacktick, Push("backticks")},
+			// Any other same-line matcher
+			{`[^\s#]+`, Keyword, Push("arguments")},
+			// Terminators
+			{`\s*\n`, Text, Pop(1)},
+			{`\}`, Punctuation, Pop(1)},
+			Include("base"),
+		},
+		"block": {
+			{`\}`, Punctuation, Pop(2)},
+			// Using double quotes doesn't stop at spaces
+			{`"`, StringDouble, Push("double_quotes")},
+			// Using backticks doesn't stop at spaces
+			{"`", StringBacktick, Push("backticks")},
+			// Not can be one-liner
+			{`not`, Keyword, Push("not_matcher")},
+			// Directives & matcher definitions
+			Include("site_body"),
+			// Any directive
+			{`[^\s#]+`, Keyword, Push("subdirective")},
+			Include("base"),
+		},
+		"nested_block": {
+			{`\}`, Punctuation, Pop(2)},
+			// Using double quotes doesn't stop at spaces
+			{`"`, StringDouble, Push("double_quotes")},
+			// Using backticks doesn't stop at spaces
+			{"`", StringBacktick, Push("backticks")},
+			// Not can be one-liner
+			{`not`, Keyword, Push("not_matcher")},
+			// Directives & matcher definitions
+			Include("site_body"),
+			// Any other subdirective
+			{`[^\s#]+`, Keyword, Push("directive")},
+			Include("base"),
+		},
+		"not_matcher": {
+			{`\}`, Punctuation, Pop(2)},
+			{`\{(?=\s)`, Punctuation, Push("block")},
+			{`[^\s#]+`, Keyword, Push("arguments")},
+			{`\s+`, Text, nil},
+		},
+		"deep_not_matcher": {
+			{`\}`, Punctuation, Pop(2)},
+			{`\{(?=\s)`, Punctuation, Push("block")},
+			{`[^\s#]+`, Keyword, Push("deep_subdirective")},
+			{`\s+`, Text, nil},
+		},
+		"directive": {
+			{`\{(?=\s)`, Punctuation, Push("block")},
+			{caddyfileMatcherTokenRegexp, NameDecorator, Push("arguments")},
+			{caddyfileCommentRegexp, CommentSingle, Pop(1)},
+			{`\s*\n`, Text, Pop(1)},
+			Include("base"),
+		},
+		"nested_directive": {
+			{`\{(?=\s)`, Punctuation, Push("nested_block")},
+			{caddyfileMatcherTokenRegexp, NameDecorator, Push("nested_arguments")},
+			{caddyfileCommentRegexp, CommentSingle, Pop(1)},
+			{`\s*\n`, Text, Pop(1)},
+			Include("base"),
+		},
+		"subdirective": {
+			{`\{(?=\s)`, Punctuation, Push("block")},
+			{caddyfileCommentRegexp, CommentSingle, Pop(1)},
+			{`\s*\n`, Text, Pop(1)},
+			Include("base"),
+		},
+		"arguments": {
+			{`\{(?=\s)`, Punctuation, Push("block")},
+			{caddyfileCommentRegexp, CommentSingle, Pop(2)},
+			{`\\\n`, Text, nil}, // Skip escaped newlines
+			{`\s*\n`, Text, Pop(2)},
+			Include("base"),
+		},
+		"nested_arguments": {
+			{`\{(?=\s)`, Punctuation, Push("nested_block")},
+			{caddyfileCommentRegexp, CommentSingle, Pop(2)},
+			{`\\\n`, Text, nil}, // Skip escaped newlines
+			{`\s*\n`, Text, Pop(2)},
+			Include("base"),
+		},
+		"deep_subdirective": {
+			{`\{(?=\s)`, Punctuation, Push("block")},
+			{caddyfileCommentRegexp, CommentSingle, Pop(3)},
+			{`\s*\n`, Text, Pop(3)},
+			Include("base"),
+		},
+		"uri_directive": {
+			{`\{(?=\s)`, Punctuation, Push("block")},
+			{caddyfileMatcherTokenRegexp, NameDecorator, nil},
+			{`(strip_prefix|strip_suffix|replace|path_regexp)`, NameConstant, Push("arguments")},
+			{caddyfileCommentRegexp, CommentSingle, Pop(1)},
+			{`\s*\n`, Text, Pop(1)},
+			Include("base"),
+		},
+		"double_quotes": {
+			Include("placeholder"),
+			{`\\"`, StringDouble, nil},
+			{`[^"]`, StringDouble, nil},
+			{`"`, StringDouble, Pop(1)},
+		},
+		"backticks": {
+			Include("placeholder"),
+			{"\\\\`", StringBacktick, nil},
+			{"[^`]", StringBacktick, nil},
+			{"`", StringBacktick, Pop(1)},
+		},
+		"optional": {
+			// Docs syntax for showing optional parts with [ ]
+			{`\[`, Punctuation, Push("optional")},
+			Include("name_constants"),
+			{`\|`, Punctuation, nil},
+			{`[^\[\]\|]+`, String, nil},
+			{`\]`, Punctuation, Pop(1)},
+		},
+		"heredoc": {
+			{`(<<([a-zA-Z0-9_-]+))(\n(.*|\n)*)(\s*)(\2)`, ByGroups(StringHeredoc, nil, String, String, String, StringHeredoc), nil},
+		},
+		"name_constants": {
+			{`\b(most_recently_modified|largest_size|smallest_size|first_exist|internal|disable_redirects|ignore_loaded_certs|disable_certs|private_ranges|first|last|before|after|on|off)\b(\||(?=\]|\s|$))`, ByGroups(NameConstant, Punctuation), nil},
+		},
+		"placeholder": {
+			// Placeholder with dots, colon for default value, brackets for args[0:]
+			{`\{[\w+.\[\]\:\$-]+\}`, StringEscape, nil},
+			// Handle opening brackets with no matching closing one
+			{`\{[^\}\s]*\b`, String, nil},
+		},
+		"base": {
+			{caddyfileCommentRegexp, CommentSingle, nil},
+			{`\[\<matcher\>\]`, NameDecorator, nil},
+			Include("name_constants"),
+			Include("heredoc"),
+			{`(https?://)?([a-z0-9.-]+)(:)([0-9]+)([^\s]*)`, ByGroups(Name, Name, Punctuation, NumberInteger, Name), nil},
+			{`\[`, Punctuation, Push("optional")},
+			{"`", StringBacktick, Push("backticks")},
+			{`"`, StringDouble, Push("double_quotes")},
+			Include("placeholder"),
+			{`[a-z-]+/[a-z-+]+`, String, nil},
+			{`[0-9]+([smhdk]|ns|us|µs|ms)?\b`, NumberInteger, nil},
+			{`[^\s\n#\{]+`, String, nil},
+			{`/[^\s#]*`, Name, nil},
+			{`\s+`, Text, nil},
+		},
+	}
+}
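The `placeholder` rule above is a plain regular expression, so its behaviour can be checked in isolation. A self-contained sketch reusing the same pattern (renamed `placeholderRe` here for illustration) against a few sample inputs:

```go
package main

import (
	"fmt"
	"regexp"
)

// placeholderRe mirrors the lexer's first "placeholder" rule: a Caddy
// placeholder such as {http.request.host} or {args[0:]} enclosed in braces.
// A brace that is never closed falls through to the second, fallback rule
// in the lexer; here it simply fails to match.
var placeholderRe = regexp.MustCompile(`\{[\w+.\[\]\:\$-]+\}`)

func main() {
	samples := []string{
		"{http.request.host}", // dotted placeholder: matches
		"{args[0:]}",          // brackets and colon for args slices: matches
		"{not closed",         // no closing brace: no match
	}
	for _, s := range samples {
		fmt.Printf("%-22s %v\n", s, placeholderRe.MatchString(s))
	}
}
```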
+
+// Caddyfile lexer.
+var Caddyfile = Register(MustNewLexer(
+	&Config{
+		Name:      "Caddyfile",
+		Aliases:   []string{"caddyfile", "caddy"},
+		Filenames: []string{"Caddyfile*"},
+		MimeTypes: []string{},
+	},
+	caddyfileRules,
+))
+
+func caddyfileRules() Rules {
+	return Rules{
+		"root": {
+			{caddyfileCommentRegexp, CommentSingle, nil},
+			// Global options block
+			{`^\s*(\{)\s*$`, ByGroups(Punctuation), Push("globals")},
+			// Top level import
+			{`(import)(\s+)([^\s]+)`, ByGroups(Keyword, Text, NameVariableMagic), nil},
+			// Snippets
+			{`(&?\([^\s#]+\))(\s*)(\{)`, ByGroups(NameVariableAnonymous, Text, Punctuation), Push("snippet")},
+			// Site label
+			{`[^#{(\s,]+`, GenericHeading, Push("label")},
+			// Site label with placeholder
+			{`\{[\w+.\[\]\:\$-]+\}`, StringEscape, Push("label")},
+			{`\s+`, Text, nil},
+		},
+		"globals": {
+			{`\}`, Punctuation, Pop(1)},
+			// Global options are parsed as subdirectives (no matcher)
+			{`[^\s#]+`, Keyword, Push("subdirective")},
+			Include("base"),
+		},
+		"snippet": {
+			{`\}`, Punctuation, Pop(1)},
+			Include("site_body"),
+			// Any other directive
+			{`[^\s#]+`, Keyword, Push("directive")},
+			Include("base"),
+		},
+		"label": {
+			// Allow multiple labels, comma separated, newlines after
+			// a comma means another label is coming
+			{`,\s*\n?`, Text, nil},
+			{` `, Text, nil},
+			// Site label with placeholder
+			Include("placeholder"),
+			// Site label
+			{`[^#{(\s,]+`, GenericHeading, nil},
+			// Comment after non-block label (hack because comments end in \n)
+			{`#.*\n`, CommentSingle, Push("site_block")},
+			// Note: if \n, we'll never pop out of the site_block, it's valid
+			{`\{(?=\s)|\n`, Punctuation, Push("site_block")},
+		},
+		"site_block": {
+			{`\}`, Punctuation, Pop(2)},
+			Include("site_block_common"),
+		},
+	}.Merge(caddyfileCommonRules())
+}
+
+// Caddyfile directive-only lexer.
+var CaddyfileDirectives = Register(MustNewLexer(
+	&Config{
+		Name:      "Caddyfile Directives",
+		Aliases:   []string{"caddyfile-directives", "caddyfile-d", "caddy-d"},
+		Filenames: []string{},
+		MimeTypes: []string{},
+	},
+	caddyfileDirectivesRules,
+))
+
+func caddyfileDirectivesRules() Rules {
+	return Rules{
+		// Same as "site_block" in Caddyfile
+		"root": {
+			Include("site_block_common"),
+		},
+	}.Merge(caddyfileCommonRules())
+}

vendor/github.com/alecthomas/chroma/v2/lexers/cl.go

@@ -0,0 +1,243 @@
+package lexers
+
+import (
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+var (
+	clBuiltinFunctions = []string{
+		"<", "<=", "=", ">", ">=", "-", "/", "/=", "*", "+", "1-", "1+",
+		"abort", "abs", "acons", "acos", "acosh", "add-method", "adjoin",
+		"adjustable-array-p", "adjust-array", "allocate-instance",
+		"alpha-char-p", "alphanumericp", "append", "apply", "apropos",
+		"apropos-list", "aref", "arithmetic-error-operands",
+		"arithmetic-error-operation", "array-dimension", "array-dimensions",
+		"array-displacement", "array-element-type", "array-has-fill-pointer-p",
+		"array-in-bounds-p", "arrayp", "array-rank", "array-row-major-index",
+		"array-total-size", "ash", "asin", "asinh", "assoc", "assoc-if",
+		"assoc-if-not", "atan", "atanh", "atom", "bit", "bit-and", "bit-andc1",
+		"bit-andc2", "bit-eqv", "bit-ior", "bit-nand", "bit-nor", "bit-not",
+		"bit-orc1", "bit-orc2", "bit-vector-p", "bit-xor", "boole",
+		"both-case-p", "boundp", "break", "broadcast-stream-streams",
+		"butlast", "byte", "byte-position", "byte-size", "caaaar", "caaadr",
+		"caaar", "caadar", "caaddr", "caadr", "caar", "cadaar", "cadadr",
+		"cadar", "caddar", "cadddr", "caddr", "cadr", "call-next-method", "car",
+		"cdaaar", "cdaadr", "cdaar", "cdadar", "cdaddr", "cdadr", "cdar",
+		"cddaar", "cddadr", "cddar", "cdddar", "cddddr", "cdddr", "cddr", "cdr",
+		"ceiling", "cell-error-name", "cerror", "change-class", "char", "char<",
+		"char<=", "char=", "char>", "char>=", "char/=", "character",
+		"characterp", "char-code", "char-downcase", "char-equal",
+		"char-greaterp", "char-int", "char-lessp", "char-name",
+		"char-not-equal", "char-not-greaterp", "char-not-lessp", "char-upcase",
+		"cis", "class-name", "class-of", "clear-input", "clear-output",
+		"close", "clrhash", "code-char", "coerce", "compile",
+		"compiled-function-p", "compile-file", "compile-file-pathname",
+		"compiler-macro-function", "complement", "complex", "complexp",
+		"compute-applicable-methods", "compute-restarts", "concatenate",
+		"concatenated-stream-streams", "conjugate", "cons", "consp",
+		"constantly", "constantp", "continue", "copy-alist", "copy-list",
+		"copy-pprint-dispatch", "copy-readtable", "copy-seq", "copy-structure",
+		"copy-symbol", "copy-tree", "cos", "cosh", "count", "count-if",
+		"count-if-not", "decode-float", "decode-universal-time", "delete",
+		"delete-duplicates", "delete-file", "delete-if", "delete-if-not",
+		"delete-package", "denominator", "deposit-field", "describe",
+		"describe-object", "digit-char", "digit-char-p", "directory",
+		"directory-namestring", "disassemble", "documentation", "dpb",
+		"dribble", "echo-stream-input-stream", "echo-stream-output-stream",
+		"ed", "eighth", "elt", "encode-universal-time", "endp",
+		"enough-namestring", "ensure-directories-exist",
+		"ensure-generic-function", "eq", "eql", "equal", "equalp", "error",
+		"eval", "evenp", "every", "exp", "export", "expt", "fboundp",
+		"fceiling", "fdefinition", "ffloor", "fifth", "file-author",
+		"file-error-pathname", "file-length", "file-namestring",
+		"file-position", "file-string-length", "file-write-date",
+		"fill", "fill-pointer", "find", "find-all-symbols", "find-class",
+		"find-if", "find-if-not", "find-method", "find-package", "find-restart",
+		"find-symbol", "finish-output", "first", "float", "float-digits",
+		"floatp", "float-precision", "float-radix", "float-sign", "floor",
+		"fmakunbound", "force-output", "format", "fourth", "fresh-line",
+		"fround", "ftruncate", "funcall", "function-keywords",
+		"function-lambda-expression", "functionp", "gcd", "gensym", "gentemp",
+		"get", "get-decoded-time", "get-dispatch-macro-character", "getf",
+		"gethash", "get-internal-real-time", "get-internal-run-time",
+		"get-macro-character", "get-output-stream-string", "get-properties",
+		"get-setf-expansion", "get-universal-time", "graphic-char-p",
+		"hash-table-count", "hash-table-p", "hash-table-rehash-size",
+		"hash-table-rehash-threshold", "hash-table-size", "hash-table-test",
+		"host-namestring", "identity", "imagpart", "import",
+		"initialize-instance", "input-stream-p", "inspect",
+		"integer-decode-float", "integer-length", "integerp",
+		"interactive-stream-p", "intern", "intersection",
+		"invalid-method-error", "invoke-debugger", "invoke-restart",
+		"invoke-restart-interactively", "isqrt", "keywordp", "last", "lcm",
+		"ldb", "ldb-test", "ldiff", "length", "lisp-implementation-type",
+		"lisp-implementation-version", "list", "list*", "list-all-packages",
+		"listen", "list-length", "listp", "load",
+		"load-logical-pathname-translations", "log", "logand", "logandc1",
+		"logandc2", "logbitp", "logcount", "logeqv", "logical-pathname",
+		"logical-pathname-translations", "logior", "lognand", "lognor",
+		"lognot", "logorc1", "logorc2", "logtest", "logxor", "long-site-name",
+		"lower-case-p", "machine-instance", "machine-type", "machine-version",
+		"macroexpand", "macroexpand-1", "macro-function", "make-array",
+		"make-broadcast-stream", "make-concatenated-stream", "make-condition",
+		"make-dispatch-macro-character", "make-echo-stream", "make-hash-table",
+		"make-instance", "make-instances-obsolete", "make-list",
+		"make-load-form", "make-load-form-saving-slots", "make-package",
+		"make-pathname", "make-random-state", "make-sequence", "make-string",
+		"make-string-input-stream", "make-string-output-stream", "make-symbol",
+		"make-synonym-stream", "make-two-way-stream", "makunbound", "map",
+		"mapc", "mapcan", "mapcar", "mapcon", "maphash", "map-into", "mapl",
+		"maplist", "mask-field", "max", "member", "member-if", "member-if-not",
+		"merge", "merge-pathnames", "method-combination-error",
+		"method-qualifiers", "min", "minusp", "mismatch", "mod",
+		"muffle-warning", "name-char", "namestring", "nbutlast", "nconc",
+		"next-method-p", "nintersection", "ninth", "no-applicable-method",
+		"no-next-method", "not", "notany", "notevery", "nreconc", "nreverse",
+		"nset-difference", "nset-exclusive-or", "nstring-capitalize",
+		"nstring-downcase", "nstring-upcase", "nsublis", "nsubst", "nsubst-if",
+		"nsubst-if-not", "nsubstitute", "nsubstitute-if", "nsubstitute-if-not",
+		"nth", "nthcdr", "null", "numberp", "numerator", "nunion", "oddp",
+		"open", "open-stream-p", "output-stream-p", "package-error-package",
+		"package-name", "package-nicknames", "packagep",
+		"package-shadowing-symbols", "package-used-by-list", "package-use-list",
+		"pairlis", "parse-integer", "parse-namestring", "pathname",
+		"pathname-device", "pathname-directory", "pathname-host",
+		"pathname-match-p", "pathname-name", "pathnamep", "pathname-type",
+		"pathname-version", "peek-char", "phase", "plusp", "position",
+		"position-if", "position-if-not", "pprint", "pprint-dispatch",
+		"pprint-fill", "pprint-indent", "pprint-linear", "pprint-newline",
+		"pprint-tab", "pprint-tabular", "prin1", "prin1-to-string", "princ",
+		"princ-to-string", "print", "print-object", "probe-file", "proclaim",
+		"provide", "random", "random-state-p", "rassoc", "rassoc-if",
+		"rassoc-if-not", "rational", "rationalize", "rationalp", "read",
+		"read-byte", "read-char", "read-char-no-hang", "read-delimited-list",
+		"read-from-string", "read-line", "read-preserving-whitespace",
+		"read-sequence", "readtable-case", "readtablep", "realp", "realpart",
+		"reduce", "reinitialize-instance", "rem", "remhash", "remove",
+		"remove-duplicates", "remove-if", "remove-if-not", "remove-method",
+		"remprop", "rename-file", "rename-package", "replace", "require",
+		"rest", "restart-name", "revappend", "reverse", "room", "round",
+		"row-major-aref", "rplaca", "rplacd", "sbit", "scale-float", "schar",
+		"search", "second", "set", "set-difference",
+		"set-dispatch-macro-character", "set-exclusive-or",
+		"set-macro-character", "set-pprint-dispatch", "set-syntax-from-char",
+		"seventh", "shadow", "shadowing-import", "shared-initialize",
+		"short-site-name", "signal", "signum", "simple-bit-vector-p",
+		"simple-condition-format-arguments", "simple-condition-format-control",
+		"simple-string-p", "simple-vector-p", "sin", "sinh", "sixth", "sleep",
+		"slot-boundp", "slot-exists-p", "slot-makunbound", "slot-missing",
+		"slot-unbound", "slot-value", "software-type", "software-version",
+		"some", "sort", "special-operator-p", "sqrt", "stable-sort",
+		"standard-char-p", "store-value", "stream-element-type",
+		"stream-error-stream", "stream-external-format", "streamp", "string",
+		"string<", "string<=", "string=", "string>", "string>=", "string/=",
+		"string-capitalize", "string-downcase", "string-equal",
+		"string-greaterp", "string-left-trim", "string-lessp",
+		"string-not-equal", "string-not-greaterp", "string-not-lessp",
+		"stringp", "string-right-trim", "string-trim", "string-upcase",
+		"sublis", "subseq", "subsetp", "subst", "subst-if", "subst-if-not",
+		"substitute", "substitute-if", "substitute-if-not", "subtypep", "svref",
+		"sxhash", "symbol-function", "symbol-name", "symbolp", "symbol-package",
+		"symbol-plist", "symbol-value", "synonym-stream-symbol", "syntax:",
+		"tailp", "tan", "tanh", "tenth", "terpri", "third",
+		"translate-logical-pathname", "translate-pathname", "tree-equal",
+		"truename", "truncate", "two-way-stream-input-stream",
+		"two-way-stream-output-stream", "type-error-datum",
+		"type-error-expected-type", "type-of", "typep", "unbound-slot-instance",
+		"unexport", "unintern", "union", "unread-char", "unuse-package",
+		"update-instance-for-different-class",
+		"update-instance-for-redefined-class", "upgraded-array-element-type",
+		"upgraded-complex-part-type", "upper-case-p", "use-package",
+		"user-homedir-pathname", "use-value", "values", "values-list", "vector",
+		"vectorp", "vector-pop", "vector-push", "vector-push-extend", "warn",
+		"wild-pathname-p", "write", "write-byte", "write-char", "write-line",
+		"write-sequence", "write-string", "write-to-string", "yes-or-no-p",
+		"y-or-n-p", "zerop",
+	}
+
+	clSpecialForms = []string{
+		"block", "catch", "declare", "eval-when", "flet", "function", "go", "if",
+		"labels", "lambda", "let", "let*", "load-time-value", "locally", "macrolet",
+		"multiple-value-call", "multiple-value-prog1", "progn", "progv", "quote",
+		"return-from", "setq", "symbol-macrolet", "tagbody", "the", "throw",
+		"unwind-protect",
+	}
+
+	clMacros = []string{
+		"and", "assert", "call-method", "case", "ccase", "check-type", "cond",
+		"ctypecase", "decf", "declaim", "defclass", "defconstant", "defgeneric",
+		"define-compiler-macro", "define-condition", "define-method-combination",
+		"define-modify-macro", "define-setf-expander", "define-symbol-macro",
+		"defmacro", "defmethod", "defpackage", "defparameter", "defsetf",
+		"defstruct", "deftype", "defun", "defvar", "destructuring-bind", "do",
+		"do*", "do-all-symbols", "do-external-symbols", "dolist", "do-symbols",
+		"dotimes", "ecase", "etypecase", "formatter", "handler-bind",
+		"handler-case", "ignore-errors", "incf", "in-package", "lambda", "loop",
+		"loop-finish", "make-method", "multiple-value-bind", "multiple-value-list",
+		"multiple-value-setq", "nth-value", "or", "pop",
+		"pprint-exit-if-list-exhausted", "pprint-logical-block", "pprint-pop",
+		"print-unreadable-object", "prog", "prog*", "prog1", "prog2", "psetf",
+		"psetq", "push", "pushnew", "remf", "restart-bind", "restart-case",
+		"return", "rotatef", "setf", "shiftf", "step", "time", "trace", "typecase",
+		"unless", "untrace", "when", "with-accessors", "with-compilation-unit",
+		"with-condition-restarts", "with-hash-table-iterator",
+		"with-input-from-string", "with-open-file", "with-open-stream",
+		"with-output-to-string", "with-package-iterator", "with-simple-restart",
+		"with-slots", "with-standard-io-syntax",
+	}
+
+	clLambdaListKeywords = []string{
+		"&allow-other-keys", "&aux", "&body", "&environment", "&key", "&optional",
+		"&rest", "&whole",
+	}
+
+	clDeclarations = []string{
+		"dynamic-extent", "ignore", "optimize", "ftype", "inline", "special",
+		"ignorable", "notinline", "type",
+	}
+
+	clBuiltinTypes = []string{
+		"atom", "boolean", "base-char", "base-string", "bignum", "bit",
+		"compiled-function", "extended-char", "fixnum", "keyword", "nil",
+		"signed-byte", "short-float", "single-float", "double-float", "long-float",
+		"simple-array", "simple-base-string", "simple-bit-vector", "simple-string",
+		"simple-vector", "standard-char", "unsigned-byte",
+
+		// Condition Types
+		"arithmetic-error", "cell-error", "condition", "control-error",
+		"division-by-zero", "end-of-file", "error", "file-error",
+		"floating-point-inexact", "floating-point-overflow",
+		"floating-point-underflow", "floating-point-invalid-operation",
+		"parse-error", "package-error", "print-not-readable", "program-error",
+		"reader-error", "serious-condition", "simple-condition", "simple-error",
+		"simple-type-error", "simple-warning", "stream-error", "storage-condition",
+		"style-warning", "type-error", "unbound-variable", "unbound-slot",
+		"undefined-function", "warning",
+	}
+
+	clBuiltinClasses = []string{
+		"array", "broadcast-stream", "bit-vector", "built-in-class", "character",
+		"class", "complex", "concatenated-stream", "cons", "echo-stream",
+		"file-stream", "float", "function", "generic-function", "hash-table",
+		"integer", "list", "logical-pathname", "method-combination", "method",
+		"null", "number", "package", "pathname", "ratio", "rational", "readtable",
+		"real", "random-state", "restart", "sequence", "standard-class",
+		"standard-generic-function", "standard-method", "standard-object",
+		"string-stream", "stream", "string", "structure-class", "structure-object",
+		"symbol", "synonym-stream", "t", "two-way-stream", "vector",
+	}
+)
+
+// Common Lisp lexer.
+var CommonLisp = Register(TypeRemappingLexer(MustNewXMLLexer(
+	embedded,
+	"embedded/common_lisp.xml",
+), TypeMapping{
+	{NameVariable, NameFunction, clBuiltinFunctions},
+	{NameVariable, Keyword, clSpecialForms},
+	{NameVariable, NameBuiltin, clMacros},
+	{NameVariable, Keyword, clLambdaListKeywords},
+	{NameVariable, Keyword, clDeclarations},
+	{NameVariable, KeywordType, clBuiltinTypes},
+	{NameVariable, NameClass, clBuiltinClasses},
+}))
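`TypeRemappingLexer` wraps the XML-defined lexer and retypes tokens whose value appears in one of the word lists above. A rough, self-contained sketch of that idea, using hypothetical string token types in place of chroma's `TokenType` values:

```go
package main

import "fmt"

// token is a hypothetical stand-in for a chroma token: a type name
// plus the matched text.
type token struct {
	typ, val string
}

// remap retypes every token of type `from` whose value is in `words`
// to type `to`, which is essentially what each TypeMapping entry does.
func remap(toks []token, from, to string, words []string) []token {
	set := make(map[string]bool, len(words))
	for _, w := range words {
		set[w] = true
	}
	for i, t := range toks {
		if t.typ == from && set[t.val] {
			toks[i].typ = to
		}
	}
	return toks
}

func main() {
	toks := []token{
		{"NameVariable", "mapcar"}, // a known builtin: retyped
		{"NameVariable", "my-var"}, // user symbol: left alone
	}
	toks = remap(toks, "NameVariable", "NameBuiltin", []string{"mapcar", "mapc"})
	fmt.Println(toks[0].typ, toks[1].typ) // → NameBuiltin NameVariable
}
```

This keeps the XML grammar simple (everything symbol-like is a `NameVariable`) while the word lists supply language-specific knowledge at registration time.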

vendor/github.com/alecthomas/chroma/v2/lexers/dns.go

@@ -0,0 +1,17 @@
+package lexers
+
+import (
+	"regexp"
+)
+
+// TODO(moorereason): can this be factored away?
+var zoneAnalyserRe = regexp.MustCompile(`(?m)^@\s+IN\s+SOA\s+`)
+
+func init() { // nolint: gochecknoinits
+	Get("dns").SetAnalyser(func(text string) float32 {
+		if zoneAnalyserRe.FindString(text) != "" {
+			return 1.0
+		}
+		return 0.0
+	})
+}
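The analyser registered here is just a multiline regular expression over the input, so its behaviour is easy to demonstrate. A self-contained sketch using the same pattern (renamed `soaRe` for illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

// soaRe is the same pattern as zoneAnalyserRe: an "@ IN SOA" record at the
// start of a line is a strong signal that the text is a DNS zone file, so
// the analyser returns full confidence (1.0) when it is present.
var soaRe = regexp.MustCompile(`(?m)^@\s+IN\s+SOA\s+`)

func main() {
	zone := "$TTL 86400\n" +
		"@ IN SOA ns1.example.com. admin.example.com. (\n" +
		"    1 ; serial\n" +
		")\n"
	fmt.Println(soaRe.MatchString(zone))            // → true
	fmt.Println(soaRe.MatchString("package main\n")) // → false
}
```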

vendor/github.com/alecthomas/chroma/v2/lexers/emacs.go

@@ -0,0 +1,533 @@
+package lexers
+
+import (
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+var (
+	emacsMacros = []string{
+		"atomic-change-group", "case", "block", "cl-block", "cl-callf", "cl-callf2",
+		"cl-case", "cl-decf", "cl-declaim", "cl-declare",
+		"cl-define-compiler-macro", "cl-defmacro", "cl-defstruct",
+		"cl-defsubst", "cl-deftype", "cl-defun", "cl-destructuring-bind",
+		"cl-do", "cl-do*", "cl-do-all-symbols", "cl-do-symbols", "cl-dolist",
+		"cl-dotimes", "cl-ecase", "cl-etypecase", "eval-when", "cl-eval-when", "cl-flet",
+		"cl-flet*", "cl-function", "cl-incf", "cl-labels", "cl-letf",
+		"cl-letf*", "cl-load-time-value", "cl-locally", "cl-loop",
+		"cl-macrolet", "cl-multiple-value-bind", "cl-multiple-value-setq",
+		"cl-progv", "cl-psetf", "cl-psetq", "cl-pushnew", "cl-remf",
+		"cl-return", "cl-return-from", "cl-rotatef", "cl-shiftf",
+		"cl-symbol-macrolet", "cl-tagbody", "cl-the", "cl-typecase",
+		"combine-after-change-calls", "condition-case-unless-debug", "decf",
+		"declaim", "declare", "declare-function", "def-edebug-spec",
+		"defadvice", "defclass", "defcustom", "defface", "defgeneric",
+		"defgroup", "define-advice", "define-alternatives",
+		"define-compiler-macro", "define-derived-mode", "define-generic-mode",
+		"define-global-minor-mode", "define-globalized-minor-mode",
+		"define-minor-mode", "define-modify-macro",
+		"define-obsolete-face-alias", "define-obsolete-function-alias",
+		"define-obsolete-variable-alias", "define-setf-expander",
+		"define-skeleton", "defmacro", "defmethod", "defsetf", "defstruct",
+		"defsubst", "deftheme", "deftype", "defun", "defvar-local",
+		"delay-mode-hooks", "destructuring-bind", "do", "do*",
+		"do-all-symbols", "do-symbols", "dolist", "dont-compile", "dotimes",
+		"dotimes-with-progress-reporter", "ecase", "ert-deftest", "etypecase",
+		"eval-and-compile", "eval-when-compile", "flet", "ignore-errors",
+		"incf", "labels", "lambda", "letrec", "lexical-let", "lexical-let*",
+		"loop", "multiple-value-bind", "multiple-value-setq", "noreturn",
+		"oref", "oref-default", "oset", "oset-default", "pcase",
+		"pcase-defmacro", "pcase-dolist", "pcase-exhaustive", "pcase-let",
+		"pcase-let*", "pop", "psetf", "psetq", "push", "pushnew", "remf",
+		"return", "rotatef", "rx", "save-match-data", "save-selected-window",
+		"save-window-excursion", "setf", "setq-local", "shiftf",
+		"track-mouse", "typecase", "unless", "use-package", "when",
+		"while-no-input", "with-case-table", "with-category-table",
+		"with-coding-priority", "with-current-buffer", "with-demoted-errors",
+		"with-eval-after-load", "with-file-modes", "with-local-quit",
+		"with-output-to-string", "with-output-to-temp-buffer",
+		"with-parsed-tramp-file-name", "with-selected-frame",
+		"with-selected-window", "with-silent-modifications", "with-slots",
+		"with-syntax-table", "with-temp-buffer", "with-temp-file",
+		"with-temp-message", "with-timeout", "with-tramp-connection-property",
+		"with-tramp-file-property", "with-tramp-progress-reporter",
+		"with-wrapper-hook", "load-time-value", "locally", "macrolet", "progv",
+		"return-from",
+	}
+
+	emacsSpecialForms = []string{
+		"and", "catch", "cond", "condition-case", "defconst", "defvar",
+		"function", "if", "interactive", "let", "let*", "or", "prog1",
+		"prog2", "progn", "quote", "save-current-buffer", "save-excursion",
+		"save-restriction", "setq", "setq-default", "subr-arity",
+		"unwind-protect", "while",
+	}
+
+	emacsBuiltinFunction = []string{
+		"%", "*", "+", "-", "/", "/=", "1+", "1-", "<", "<=", "=", ">", ">=",
+		"Snarf-documentation", "abort-recursive-edit", "abs",
+		"accept-process-output", "access-file", "accessible-keymaps", "acos",
+		"active-minibuffer-window", "add-face-text-property",
+		"add-name-to-file", "add-text-properties", "all-completions",
+		"append", "apply", "apropos-internal", "aref", "arrayp", "aset",
+		"ash", "asin", "assoc", "assoc-string", "assq", "atan", "atom",
+		"autoload", "autoload-do-load", "backtrace", "backtrace--locals",
+		"backtrace-debug", "backtrace-eval", "backtrace-frame",
+		"backward-char", "backward-prefix-chars", "barf-if-buffer-read-only",
+		"base64-decode-region", "base64-decode-string",
+		"base64-encode-region", "base64-encode-string", "beginning-of-line",
+		"bidi-find-overridden-directionality", "bidi-resolved-levels",
+		"bitmap-spec-p", "bobp", "bolp", "bool-vector",
+		"bool-vector-count-consecutive", "bool-vector-count-population",
+		"bool-vector-exclusive-or", "bool-vector-intersection",
+		"bool-vector-not", "bool-vector-p", "bool-vector-set-difference",
+		"bool-vector-subsetp", "bool-vector-union", "boundp",
+		"buffer-base-buffer", "buffer-chars-modified-tick",
+		"buffer-enable-undo", "buffer-file-name", "buffer-has-markers-at",
+		"buffer-list", "buffer-live-p", "buffer-local-value",
+		"buffer-local-variables", "buffer-modified-p", "buffer-modified-tick",
+		"buffer-name", "buffer-size", "buffer-string", "buffer-substring",
+		"buffer-substring-no-properties", "buffer-swap-text", "bufferp",
+		"bury-buffer-internal", "byte-code", "byte-code-function-p",
+		"byte-to-position", "byte-to-string", "byteorder",
+		"call-interactively", "call-last-kbd-macro", "call-process",
+		"call-process-region", "cancel-kbd-macro-events", "capitalize",
+		"capitalize-region", "capitalize-word", "car", "car-less-than-car",
+		"car-safe", "case-table-p", "category-docstring",
+		"category-set-mnemonics", "category-table", "category-table-p",
+		"ccl-execute", "ccl-execute-on-string", "ccl-program-p", "cdr",
+		"cdr-safe", "ceiling", "char-after", "char-before",
+		"char-category-set", "char-charset", "char-equal", "char-or-string-p",
+		"char-resolve-modifiers", "char-syntax", "char-table-extra-slot",
+		"char-table-p", "char-table-parent", "char-table-range",
+		"char-table-subtype", "char-to-string", "char-width", "characterp",
+		"charset-after", "charset-id-internal", "charset-plist",
+		"charset-priority-list", "charsetp", "check-coding-system",
+		"check-coding-systems-region", "clear-buffer-auto-save-failure",
+		"clear-charset-maps", "clear-face-cache", "clear-font-cache",
+		"clear-image-cache", "clear-string", "clear-this-command-keys",
+		"close-font", "clrhash", "coding-system-aliases",
+		"coding-system-base", "coding-system-eol-type", "coding-system-p",
+		"coding-system-plist", "coding-system-priority-list",
+		"coding-system-put", "color-distance", "color-gray-p",
+		"color-supported-p", "combine-after-change-execute",
+		"command-error-default-function", "command-remapping", "commandp",
+		"compare-buffer-substrings", "compare-strings",
+		"compare-window-configurations", "completing-read",
+		"compose-region-internal", "compose-string-internal",
+		"composition-get-gstring", "compute-motion", "concat", "cons",
+		"consp", "constrain-to-field", "continue-process",
+		"controlling-tty-p", "coordinates-in-window-p", "copy-alist",
+		"copy-category-table", "copy-file", "copy-hash-table", "copy-keymap",
+		"copy-marker", "copy-sequence", "copy-syntax-table", "copysign",
+		"cos", "current-active-maps", "current-bidi-paragraph-direction",
+		"current-buffer", "current-case-table", "current-column",
+		"current-global-map", "current-idle-time", "current-indentation",
+		"current-input-mode", "current-local-map", "current-message",
+		"current-minor-mode-maps", "current-time", "current-time-string",
+		"current-time-zone", "current-window-configuration",
+		"cygwin-convert-file-name-from-windows",
+		"cygwin-convert-file-name-to-windows", "daemon-initialized",
+		"daemonp", "dbus--init-bus", "dbus-get-unique-name",
+		"dbus-message-internal", "debug-timer-check", "declare-equiv-charset",
+		"decode-big5-char", "decode-char", "decode-coding-region",
+		"decode-coding-string", "decode-sjis-char", "decode-time",
+		"default-boundp", "default-file-modes", "default-printer-name",
+		"default-toplevel-value", "default-value", "define-category",
+		"define-charset-alias", "define-charset-internal",
+		"define-coding-system-alias", "define-coding-system-internal",
+		"define-fringe-bitmap", "define-hash-table-test", "define-key",
+		"define-prefix-command", "delete",
+		"delete-all-overlays", "delete-and-extract-region", "delete-char",
+		"delete-directory-internal", "delete-field", "delete-file",
+		"delete-frame", "delete-other-windows-internal", "delete-overlay",
+		"delete-process", "delete-region", "delete-terminal",
+		"delete-window-internal", "delq", "describe-buffer-bindings",
+		"describe-vector", "destroy-fringe-bitmap", "detect-coding-region",
+		"detect-coding-string", "ding", "directory-file-name",
+		"directory-files", "directory-files-and-attributes", "discard-input",
+		"display-supports-face-attributes-p", "do-auto-save", "documentation",
+		"documentation-property", "downcase", "downcase-region",
+		"downcase-word", "draw-string", "dump-colors", "dump-emacs",
+		"dump-face", "dump-frame-glyph-matrix", "dump-glyph-matrix",
+		"dump-glyph-row", "dump-redisplay-history", "dump-tool-bar-row",
+		"elt", "emacs-pid", "encode-big5-char", "encode-char",
+		"encode-coding-region", "encode-coding-string", "encode-sjis-char",
+		"encode-time", "end-kbd-macro", "end-of-line", "eobp", "eolp", "eq",
+		"eql", "equal", "equal-including-properties", "erase-buffer",
+		"error-message-string", "eval", "eval-buffer", "eval-region",
+		"event-convert-list", "execute-kbd-macro", "exit-recursive-edit",
+		"exp", "expand-file-name", "expt", "external-debugging-output",
+		"face-attribute-relative-p", "face-attributes-as-vector", "face-font",
+		"fboundp", "fceiling", "fetch-bytecode", "ffloor",
+		"field-beginning", "field-end", "field-string",
+		"field-string-no-properties", "file-accessible-directory-p",
+		"file-acl", "file-attributes", "file-attributes-lessp",
+		"file-directory-p", "file-executable-p", "file-exists-p",
+		"file-locked-p", "file-modes", "file-name-absolute-p",
+		"file-name-all-completions", "file-name-as-directory",
+		"file-name-completion", "file-name-directory",
+		"file-name-nondirectory", "file-newer-than-file-p", "file-readable-p",
+		"file-regular-p", "file-selinux-context", "file-symlink-p",
+		"file-system-info", "file-system-info", "file-writable-p",
+		"fillarray", "find-charset-region", "find-charset-string",
+		"find-coding-systems-region-internal", "find-composition-internal",
+		"find-file-name-handler", "find-font", "find-operation-coding-system",
+		"float", "float-time", "floatp", "floor", "fmakunbound",
+		"following-char", "font-at", "font-drive-otf", "font-face-attributes",
+		"font-family-list", "font-get", "font-get-glyphs",
+		"font-get-system-font", "font-get-system-normal-font", "font-info",
+		"font-match-p", "font-otf-alternates", "font-put",
+		"font-shape-gstring", "font-spec", "font-variation-glyphs",
+		"font-xlfd-name", "fontp", "fontset-font", "fontset-info",
+		"fontset-list", "fontset-list-all", "force-mode-line-update",
+		"force-window-update", "format", "format-mode-line",
+		"format-network-address", "format-time-string", "forward-char",
+		"forward-comment", "forward-line", "forward-word",
+		"frame-border-width", "frame-bottom-divider-width",
+		"frame-can-run-window-configuration-change-hook", "frame-char-height",
+		"frame-char-width", "frame-face-alist", "frame-first-window",
+		"frame-focus", "frame-font-cache", "frame-fringe-width", "frame-list",
+		"frame-live-p", "frame-or-buffer-changed-p", "frame-parameter",
+		"frame-parameters", "frame-pixel-height", "frame-pixel-width",
+		"frame-pointer-visible-p", "frame-right-divider-width",
+		"frame-root-window", "frame-scroll-bar-height",
+		"frame-scroll-bar-width", "frame-selected-window", "frame-terminal",
+		"frame-text-cols", "frame-text-height", "frame-text-lines",
+		"frame-text-width", "frame-total-cols", "frame-total-lines",
+		"frame-visible-p", "framep", "frexp", "fringe-bitmaps-at-pos",
+		"fround", "fset", "ftruncate", "funcall", "funcall-interactively",
+		"function-equal", "functionp", "gap-position", "gap-size",
+		"garbage-collect", "gc-status", "generate-new-buffer-name", "get",
+		"get-buffer", "get-buffer-create", "get-buffer-process",
+		"get-buffer-window", "get-byte", "get-char-property",
+		"get-char-property-and-overlay", "get-file-buffer", "get-file-char",
+		"get-internal-run-time", "get-load-suffixes", "get-pos-property",
+		"get-process", "get-screen-color", "get-text-property",
+		"get-unicode-property-internal", "get-unused-category",
+		"get-unused-iso-final-char", "getenv-internal", "gethash",
+		"gfile-add-watch", "gfile-rm-watch", "global-key-binding",
+		"gnutls-available-p", "gnutls-boot", "gnutls-bye", "gnutls-deinit",
+		"gnutls-error-fatalp", "gnutls-error-string", "gnutls-errorp",
+		"gnutls-get-initstage", "gnutls-peer-status",
+		"gnutls-peer-status-warning-describe", "goto-char", "gpm-mouse-start",
+		"gpm-mouse-stop", "group-gid", "group-real-gid",
+		"handle-save-session", "handle-switch-frame", "hash-table-count",
+		"hash-table-p", "hash-table-rehash-size",
+		"hash-table-rehash-threshold", "hash-table-size", "hash-table-test",
+		"hash-table-weakness", "iconify-frame", "identity", "image-flush",
+		"image-mask-p", "image-metadata", "image-size", "imagemagick-types",
+		"imagep", "indent-to", "indirect-function", "indirect-variable",
+		"init-image-library", "inotify-add-watch", "inotify-rm-watch",
+		"input-pending-p", "insert", "insert-and-inherit",
+		"insert-before-markers", "insert-before-markers-and-inherit",
+		"insert-buffer-substring", "insert-byte", "insert-char",
+		"insert-file-contents", "insert-startup-screen", "int86",
+		"integer-or-marker-p", "integerp", "interactive-form", "intern",
+		"intern-soft", "internal--track-mouse", "internal-char-font",
+		"internal-complete-buffer", "internal-copy-lisp-face",
+		"internal-default-process-filter",
+		"internal-default-process-sentinel", "internal-describe-syntax-value",
+		"internal-event-symbol-parse-modifiers",
+		"internal-face-x-get-resource", "internal-get-lisp-face-attribute",
+		"internal-lisp-face-attribute-values", "internal-lisp-face-empty-p",
+		"internal-lisp-face-equal-p", "internal-lisp-face-p",
+		"internal-make-lisp-face", "internal-make-var-non-special",
+		"internal-merge-in-global-face",
+		"internal-set-alternative-font-family-alist",
+		"internal-set-alternative-font-registry-alist",
+		"internal-set-font-selection-order",
+		"internal-set-lisp-face-attribute",
+		"internal-set-lisp-face-attribute-from-resource",
+		"internal-show-cursor", "internal-show-cursor-p", "interrupt-process",
+		"invisible-p", "invocation-directory", "invocation-name", "isnan",
+		"iso-charset", "key-binding", "key-description",
+		"keyboard-coding-system", "keymap-parent", "keymap-prompt", "keymapp",
+		"keywordp", "kill-all-local-variables", "kill-buffer", "kill-emacs",
+		"kill-local-variable", "kill-process", "last-nonminibuffer-frame",
+		"lax-plist-get", "lax-plist-put", "ldexp", "length",
+		"libxml-parse-html-region", "libxml-parse-xml-region",
+		"line-beginning-position", "line-end-position", "line-pixel-height",
+		"list", "list-fonts", "list-system-processes", "listp", "load",
+		"load-average", "local-key-binding", "local-variable-if-set-p",
+		"local-variable-p", "locale-info", "locate-file-internal",
+		"lock-buffer", "log", "logand", "logb", "logior", "lognot", "logxor",
+		"looking-at", "lookup-image", "lookup-image-map", "lookup-key",
+		"lower-frame", "lsh", "macroexpand", "make-bool-vector",
+		"make-byte-code", "make-category-set", "make-category-table",
+		"make-char", "make-char-table", "make-directory-internal",
+		"make-frame-invisible", "make-frame-visible", "make-hash-table",
+		"make-indirect-buffer", "make-keymap", "make-list",
+		"make-local-variable", "make-marker", "make-network-process",
+		"make-overlay", "make-serial-process", "make-sparse-keymap",
+		"make-string", "make-symbol", "make-symbolic-link", "make-temp-name",
+		"make-terminal-frame", "make-variable-buffer-local",
+		"make-variable-frame-local", "make-vector", "makunbound",
+		"map-char-table", "map-charset-chars", "map-keymap",
+		"map-keymap-internal", "mapatoms", "mapc", "mapcar", "mapconcat",
+		"maphash", "mark-marker", "marker-buffer", "marker-insertion-type",
+		"marker-position", "markerp", "match-beginning", "match-data",
+		"match-end", "matching-paren", "max", "max-char", "md5", "member",
+		"memory-info", "memory-limit", "memory-use-counts", "memq", "memql",
+		"menu-bar-menu-at-x-y", "menu-or-popup-active-p",
+		"menu-or-popup-active-p", "merge-face-attribute", "message",
+		"message-box", "message-or-box", "min",
+		"minibuffer-completion-contents", "minibuffer-contents",
+		"minibuffer-contents-no-properties", "minibuffer-depth",
+		"minibuffer-prompt", "minibuffer-prompt-end",
+		"minibuffer-selected-window", "minibuffer-window", "minibufferp",
+		"minor-mode-key-binding", "mod", "modify-category-entry",
+		"modify-frame-parameters", "modify-syntax-entry",
+		"mouse-pixel-position", "mouse-position", "move-overlay",
+		"move-point-visually", "move-to-column", "move-to-window-line",
+		"msdos-downcase-filename", "msdos-long-file-names", "msdos-memget",
+		"msdos-memput", "msdos-mouse-disable", "msdos-mouse-enable",
+		"msdos-mouse-init", "msdos-mouse-p", "msdos-remember-default-colors",
+		"msdos-set-keyboard", "msdos-set-mouse-buttons",
+		"multibyte-char-to-unibyte", "multibyte-string-p", "narrow-to-region",
+		"natnump", "nconc", "network-interface-info",
+		"network-interface-list", "new-fontset", "newline-cache-check",
+		"next-char-property-change", "next-frame", "next-overlay-change",
+		"next-property-change", "next-read-file-uses-dialog-p",
+		"next-single-char-property-change", "next-single-property-change",
+		"next-window", "nlistp", "nreverse", "nth", "nthcdr", "null",
+		"number-or-marker-p", "number-to-string", "numberp",
+		"open-dribble-file", "open-font", "open-termscript",
+		"optimize-char-table", "other-buffer", "other-window-for-scrolling",
+		"overlay-buffer", "overlay-end", "overlay-get", "overlay-lists",
+		"overlay-properties", "overlay-put", "overlay-recenter",
+		"overlay-start", "overlayp", "overlays-at", "overlays-in",
+		"parse-partial-sexp", "play-sound-internal", "plist-get",
+		"plist-member", "plist-put", "point", "point-marker", "point-max",
+		"point-max-marker", "point-min", "point-min-marker",
+		"pos-visible-in-window-p", "position-bytes", "posix-looking-at",
+		"posix-search-backward", "posix-search-forward", "posix-string-match",
+		"posn-at-point", "posn-at-x-y", "preceding-char",
+		"prefix-numeric-value", "previous-char-property-change",
+		"previous-frame", "previous-overlay-change",
+		"previous-property-change", "previous-single-char-property-change",
+		"previous-single-property-change", "previous-window", "prin1",
+		"prin1-to-string", "princ", "print", "process-attributes",
+		"process-buffer", "process-coding-system", "process-command",
+		"process-connection", "process-contact", "process-datagram-address",
+		"process-exit-status", "process-filter", "process-filter-multibyte-p",
+		"process-id", "process-inherit-coding-system-flag", "process-list",
+		"process-mark", "process-name", "process-plist",
+		"process-query-on-exit-flag", "process-running-child-p",
+		"process-send-eof", "process-send-region", "process-send-string",
+		"process-sentinel", "process-status", "process-tty-name",
+		"process-type", "processp", "profiler-cpu-log",
+		"profiler-cpu-running-p", "profiler-cpu-start", "profiler-cpu-stop",
+		"profiler-memory-log", "profiler-memory-running-p",
+		"profiler-memory-start", "profiler-memory-stop", "propertize",
+		"purecopy", "put", "put-text-property",
+		"put-unicode-property-internal", "puthash", "query-font",
+		"query-fontset", "quit-process", "raise-frame", "random", "rassoc",
+		"rassq", "re-search-backward", "re-search-forward", "read",
+		"read-buffer", "read-char", "read-char-exclusive",
+		"read-coding-system", "read-command", "read-event",
+		"read-from-minibuffer", "read-from-string", "read-function",
+		"read-key-sequence", "read-key-sequence-vector",
+		"read-no-blanks-input", "read-non-nil-coding-system", "read-string",
+		"read-variable", "recent-auto-save-p", "recent-doskeys",
+		"recent-keys", "recenter", "recursion-depth", "recursive-edit",
+		"redirect-debugging-output", "redirect-frame-focus", "redisplay",
+		"redraw-display", "redraw-frame", "regexp-quote", "region-beginning",
+		"region-end", "register-ccl-program", "register-code-conversion-map",
+		"remhash", "remove-list-of-text-properties", "remove-text-properties",
+		"rename-buffer", "rename-file", "replace-match",
+		"reset-this-command-lengths", "resize-mini-window-internal",
+		"restore-buffer-modified-p", "resume-tty", "reverse", "round",
+		"run-hook-with-args", "run-hook-with-args-until-failure",
+		"run-hook-with-args-until-success", "run-hook-wrapped", "run-hooks",
+		"run-window-configuration-change-hook", "run-window-scroll-functions",
+		"safe-length", "scan-lists", "scan-sexps", "scroll-down",
+		"scroll-left", "scroll-other-window", "scroll-right", "scroll-up",
+		"search-backward", "search-forward", "secure-hash", "select-frame",
+		"select-window", "selected-frame", "selected-window",
+		"self-insert-command", "send-string-to-terminal", "sequencep",
+		"serial-process-configure", "set", "set-buffer",
+		"set-buffer-auto-saved", "set-buffer-major-mode",
+		"set-buffer-modified-p", "set-buffer-multibyte", "set-case-table",
+		"set-category-table", "set-char-table-extra-slot",
+		"set-char-table-parent", "set-char-table-range", "set-charset-plist",
+		"set-charset-priority", "set-coding-system-priority",
+		"set-cursor-size", "set-default", "set-default-file-modes",
+		"set-default-toplevel-value", "set-file-acl", "set-file-modes",
+		"set-file-selinux-context", "set-file-times", "set-fontset-font",
+		"set-frame-height", "set-frame-position", "set-frame-selected-window",
+		"set-frame-size", "set-frame-width", "set-fringe-bitmap-face",
+		"set-input-interrupt-mode", "set-input-meta-mode", "set-input-mode",
+		"set-keyboard-coding-system-internal", "set-keymap-parent",
+		"set-marker", "set-marker-insertion-type", "set-match-data",
+		"set-message-beep", "set-minibuffer-window",
+		"set-mouse-pixel-position", "set-mouse-position",
+		"set-network-process-option", "set-output-flow-control",
+		"set-process-buffer", "set-process-coding-system",
+		"set-process-datagram-address", "set-process-filter",
+		"set-process-filter-multibyte",
+		"set-process-inherit-coding-system-flag", "set-process-plist",
+		"set-process-query-on-exit-flag", "set-process-sentinel",
+		"set-process-window-size", "set-quit-char",
+		"set-safe-terminal-coding-system-internal", "set-screen-color",
+		"set-standard-case-table", "set-syntax-table",
+		"set-terminal-coding-system-internal", "set-terminal-local-value",
+		"set-terminal-parameter", "set-text-properties", "set-time-zone-rule",
+		"set-visited-file-modtime", "set-window-buffer",
+		"set-window-combination-limit", "set-window-configuration",
+		"set-window-dedicated-p", "set-window-display-table",
+		"set-window-fringes", "set-window-hscroll", "set-window-margins",
+		"set-window-new-normal", "set-window-new-pixel",
+		"set-window-new-total", "set-window-next-buffers",
+		"set-window-parameter", "set-window-point", "set-window-prev-buffers",
+		"set-window-redisplay-end-trigger", "set-window-scroll-bars",
+		"set-window-start", "set-window-vscroll", "setcar", "setcdr",
+		"setplist", "show-face-resources", "signal", "signal-process", "sin",
+		"single-key-description", "skip-chars-backward", "skip-chars-forward",
+		"skip-syntax-backward", "skip-syntax-forward", "sleep-for", "sort",
+		"sort-charsets", "special-variable-p", "split-char",
+		"split-window-internal", "sqrt", "standard-case-table",
+		"standard-category-table", "standard-syntax-table", "start-kbd-macro",
+		"start-process", "stop-process", "store-kbd-macro-event", "string",
+		"string-as-multibyte", "string-as-unibyte", "string-bytes",
+		"string-collate-equalp", "string-collate-lessp", "string-equal",
+		"string-lessp", "string-make-multibyte", "string-make-unibyte",
+		"string-match", "string-to-char", "string-to-multibyte",
+		"string-to-number", "string-to-syntax", "string-to-unibyte",
+		"string-width", "stringp", "subr-name", "subrp",
+		"subst-char-in-region", "substitute-command-keys",
+		"substitute-in-file-name", "substring", "substring-no-properties",
+		"suspend-emacs", "suspend-tty", "suspicious-object", "sxhash",
+		"symbol-function", "symbol-name", "symbol-plist", "symbol-value",
+		"symbolp", "syntax-table", "syntax-table-p", "system-groups",
+		"system-move-file-to-trash", "system-name", "system-users", "tan",
+		"terminal-coding-system", "terminal-list", "terminal-live-p",
+		"terminal-local-value", "terminal-name", "terminal-parameter",
+		"terminal-parameters", "terpri", "test-completion",
+		"text-char-description", "text-properties-at", "text-property-any",
+		"text-property-not-all", "this-command-keys",
+		"this-command-keys-vector", "this-single-command-keys",
+		"this-single-command-raw-keys", "time-add", "time-less-p",
+		"time-subtract", "tool-bar-get-system-style", "tool-bar-height",
+		"tool-bar-pixel-width", "top-level", "trace-redisplay",
+		"trace-to-stderr", "translate-region-internal", "transpose-regions",
+		"truncate", "try-completion", "tty-display-color-cells",
+		"tty-display-color-p", "tty-no-underline",
+		"tty-suppress-bold-inverse-default-colors", "tty-top-frame",
+		"tty-type", "type-of", "undo-boundary", "unencodable-char-position",
+		"unhandled-file-name-directory", "unibyte-char-to-multibyte",
+		"unibyte-string", "unicode-property-table-internal", "unify-charset",
+		"unintern", "unix-sync", "unlock-buffer", "upcase", "upcase-initials",
+		"upcase-initials-region", "upcase-region", "upcase-word",
+		"use-global-map", "use-local-map", "user-full-name",
+		"user-login-name", "user-real-login-name", "user-real-uid",
+		"user-uid", "variable-binding-locus", "vconcat", "vector",
+		"vector-or-char-table-p", "vectorp", "verify-visited-file-modtime",
+		"vertical-motion", "visible-frame-list", "visited-file-modtime",
+		"w16-get-clipboard-data", "w16-selection-exists-p",
+		"w16-set-clipboard-data", "w32-battery-status",
+		"w32-default-color-map", "w32-define-rgb-color",
+		"w32-display-monitor-attributes-list", "w32-frame-menu-bar-size",
+		"w32-frame-rect", "w32-get-clipboard-data",
+		"w32-get-codepage-charset", "w32-get-console-codepage",
+		"w32-get-console-output-codepage", "w32-get-current-locale-id",
+		"w32-get-default-locale-id", "w32-get-keyboard-layout",
+		"w32-get-locale-info", "w32-get-valid-codepages",
+		"w32-get-valid-keyboard-layouts", "w32-get-valid-locale-ids",
+		"w32-has-winsock", "w32-long-file-name", "w32-reconstruct-hot-key",
+		"w32-register-hot-key", "w32-registered-hot-keys",
+		"w32-selection-exists-p", "w32-send-sys-command",
+		"w32-set-clipboard-data", "w32-set-console-codepage",
+		"w32-set-console-output-codepage", "w32-set-current-locale",
+		"w32-set-keyboard-layout", "w32-set-process-priority",
+		"w32-shell-execute", "w32-short-file-name", "w32-toggle-lock-key",
+		"w32-unload-winsock", "w32-unregister-hot-key", "w32-window-exists-p",
+		"w32notify-add-watch", "w32notify-rm-watch",
+		"waiting-for-user-input-p", "where-is-internal", "widen",
+		"widget-apply", "widget-get", "widget-put",
+		"window-absolute-pixel-edges", "window-at", "window-body-height",
+		"window-body-width", "window-bottom-divider-width", "window-buffer",
+		"window-combination-limit", "window-configuration-frame",
+		"window-configuration-p", "window-dedicated-p",
+		"window-display-table", "window-edges", "window-end", "window-frame",
+		"window-fringes", "window-header-line-height", "window-hscroll",
+		"window-inside-absolute-pixel-edges", "window-inside-edges",
+		"window-inside-pixel-edges", "window-left-child",
+		"window-left-column", "window-line-height", "window-list",
+		"window-list-1", "window-live-p", "window-margins",
+		"window-minibuffer-p", "window-mode-line-height", "window-new-normal",
+		"window-new-pixel", "window-new-total", "window-next-buffers",
+		"window-next-sibling", "window-normal-size", "window-old-point",
+		"window-parameter", "window-parameters", "window-parent",
+		"window-pixel-edges", "window-pixel-height", "window-pixel-left",
+		"window-pixel-top", "window-pixel-width", "window-point",
+		"window-prev-buffers", "window-prev-sibling",
+		"window-redisplay-end-trigger", "window-resize-apply",
+		"window-resize-apply-total", "window-right-divider-width",
+		"window-scroll-bar-height", "window-scroll-bar-width",
+		"window-scroll-bars", "window-start", "window-system",
+		"window-text-height", "window-text-pixel-size", "window-text-width",
+		"window-top-child", "window-top-line", "window-total-height",
+		"window-total-width", "window-use-time", "window-valid-p",
+		"window-vscroll", "windowp", "write-char", "write-region",
+		"x-backspace-delete-keys-p", "x-change-window-property",
+		"x-change-window-property", "x-close-connection",
+		"x-close-connection", "x-create-frame", "x-create-frame",
+		"x-delete-window-property", "x-delete-window-property",
+		"x-disown-selection-internal", "x-display-backing-store",
+		"x-display-backing-store", "x-display-color-cells",
+		"x-display-color-cells", "x-display-grayscale-p",
+		"x-display-grayscale-p", "x-display-list", "x-display-list",
+		"x-display-mm-height", "x-display-mm-height", "x-display-mm-width",
+		"x-display-mm-width", "x-display-monitor-attributes-list",
+		"x-display-pixel-height", "x-display-pixel-height",
+		"x-display-pixel-width", "x-display-pixel-width", "x-display-planes",
+		"x-display-planes", "x-display-save-under", "x-display-save-under",
+		"x-display-screens", "x-display-screens", "x-display-visual-class",
+		"x-display-visual-class", "x-family-fonts", "x-file-dialog",
+		"x-file-dialog", "x-file-dialog", "x-focus-frame", "x-frame-geometry",
+		"x-frame-geometry", "x-get-atom-name", "x-get-resource",
+		"x-get-selection-internal", "x-hide-tip", "x-hide-tip",
+		"x-list-fonts", "x-load-color-file", "x-menu-bar-open-internal",
+		"x-menu-bar-open-internal", "x-open-connection", "x-open-connection",
+		"x-own-selection-internal", "x-parse-geometry", "x-popup-dialog",
+		"x-popup-menu", "x-register-dnd-atom", "x-select-font",
+		"x-select-font", "x-selection-exists-p", "x-selection-owner-p",
+		"x-send-client-message", "x-server-max-request-size",
+		"x-server-max-request-size", "x-server-vendor", "x-server-vendor",
+		"x-server-version", "x-server-version", "x-show-tip", "x-show-tip",
+		"x-synchronize", "x-synchronize", "x-uses-old-gtk-dialog",
+		"x-window-property", "x-window-property", "x-wm-set-size-hint",
+		"xw-color-defined-p", "xw-color-defined-p", "xw-color-values",
+		"xw-color-values", "xw-display-color-p", "xw-display-color-p",
+		"yes-or-no-p", "zlib-available-p", "zlib-decompress-region",
+		"forward-point",
+	}
+
+	emacsBuiltinFunctionHighlighted = []string{
+		"defvaralias", "provide", "require",
+		"with-no-warnings", "define-widget", "with-electric-help",
+		"throw", "defalias", "featurep",
+	}
+
+	emacsLambdaListKeywords = []string{
+		"&allow-other-keys", "&aux", "&body", "&environment", "&key", "&optional",
+		"&rest", "&whole",
+	}
+
+	emacsErrorKeywords = []string{
+		"cl-assert", "cl-check-type", "error", "signal",
+		"user-error", "warn",
+	}
+)
+
+// EmacsLisp lexer.
+var EmacsLisp = Register(TypeRemappingLexer(MustNewXMLLexer(
+	embedded,
+	"embedded/emacslisp.xml",
+), TypeMapping{
+	{NameVariable, NameFunction, emacsBuiltinFunction},
+	{NameVariable, NameBuiltin, emacsSpecialForms},
+	{NameVariable, NameException, emacsErrorKeywords},
+	{NameVariable, NameBuiltin, append(emacsBuiltinFunctionHighlighted, emacsMacros...)},
+	{NameVariable, KeywordPseudo, emacsLambdaListKeywords},
+}))

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/abap.xml 🔗

@@ -0,0 +1,154 @@
+<lexer>
+  <config>
+    <name>ABAP</name>
+    <alias>abap</alias>
+    <filename>*.abap</filename>
+    <filename>*.ABAP</filename>
+    <mime_type>text/x-abap</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="common">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="^\*.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\&#34;.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="##\w+">
+        <token type="CommentSpecial"/>
+      </rule>
+    </state>
+    <state name="variable-names">
+      <rule pattern="&lt;\S+&gt;">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\w[\w~]*(?:(\[\])|-&gt;\*)?">
+        <token type="NameVariable"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="common"/>
+      </rule>
+      <rule pattern="CALL\s+(?:BADI|CUSTOMER-FUNCTION|FUNCTION)">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(CALL\s+(?:DIALOG|SCREEN|SUBSCREEN|SELECTION-SCREEN|TRANSACTION|TRANSFORMATION))\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(FORM|PERFORM)(\s+)(\w+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(PERFORM)(\s+)(\()(\w+)(\))">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="NameVariable"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(MODULE)(\s+)(\S+)(\s+)(INPUT|OUTPUT)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(METHOD)(\s+)([\w~]+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s+)([\w\-]+)([=\-]&gt;)([\w\-~]+)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameVariable"/>
+          <token type="Operator"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?&lt;=(=|-)&gt;)([\w\-~]+)(?=\()">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="(TEXT)(-)(\d{3})">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Punctuation"/>
+          <token type="LiteralNumberInteger"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(TEXT)(-)(\w{3})">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Punctuation"/>
+          <token type="NameVariable"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(ADD-CORRESPONDING|AUTHORITY-CHECK|CLASS-DATA|CLASS-EVENTS|CLASS-METHODS|CLASS-POOL|DELETE-ADJACENT|DIVIDE-CORRESPONDING|EDITOR-CALL|ENHANCEMENT-POINT|ENHANCEMENT-SECTION|EXIT-COMMAND|FIELD-GROUPS|FIELD-SYMBOLS|FUNCTION-POOL|INTERFACE-POOL|INVERTED-DATE|LOAD-OF-PROGRAM|LOG-POINT|MESSAGE-ID|MOVE-CORRESPONDING|MULTIPLY-CORRESPONDING|NEW-LINE|NEW-PAGE|NEW-SECTION|NO-EXTENSION|OUTPUT-LENGTH|PRINT-CONTROL|SELECT-OPTIONS|START-OF-SELECTION|SUBTRACT-CORRESPONDING|SYNTAX-CHECK|SYSTEM-EXCEPTIONS|TYPE-POOL|TYPE-POOLS|NO-DISPLAY)\b">
+        <token type="Keyword"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/abnf.xml 🔗

@@ -0,0 +1,66 @@
+<lexer>
+  <config>
+    <name>ABNF</name>
+    <alias>abnf</alias>
+    <filename>*.abnf</filename>
+    <mime_type>text/x-abnf</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern=";.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(%[si])?&#34;[^&#34;]*&#34;">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="%b[01]+\-[01]+\b">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="%b[01]+(\.[01]+)*\b">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="%d[0-9]+\-[0-9]+\b">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="%d[0-9]+(\.[0-9]+)*\b">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="%x[0-9a-fA-F]+\-[0-9a-fA-F]+\b">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="%x[0-9a-fA-F]+(\.[0-9a-fA-F]+)*\b">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="\b[0-9]+\*[0-9]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\b[0-9]+\*">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\b[0-9]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\*">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(HEXDIG|DQUOTE|DIGIT|VCHAR|OCTET|ALPHA|CHAR|CRLF|HTAB|LWSP|BIT|CTL|WSP|LF|SP|CR)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="[a-zA-Z][a-zA-Z0-9-]+\b">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(=/|=|/)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[\[\]()]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/actionscript.xml 🔗

@@ -0,0 +1,68 @@
+<lexer>
+  <config>
+    <name>ActionScript</name>
+    <alias>as</alias>
+    <alias>actionscript</alias>
+    <filename>*.as</filename>
+    <mime_type>application/x-actionscript</mime_type>
+    <mime_type>text/x-actionscript</mime_type>
+    <mime_type>text/actionscript</mime_type>
+    <dot_all>true</dot_all>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/(\\\\|\\/|[^/\n])*/[gim]*">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[~^*!%&amp;&lt;&gt;|+=:;,/?\\-]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[{}\[\]();.]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(instanceof|arguments|continue|default|typeof|switch|return|catch|break|while|throw|each|this|with|else|case|var|new|for|try|if|do|in)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(implements|protected|namespace|interface|intrinsic|override|function|internal|private|package|extends|dynamic|import|native|return|public|static|class|const|super|final|get|set)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(true|false|null|NaN|Infinity|-Infinity|undefined|Void)\b">
+        <token type="KeywordConstant"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/actionscript_3.xml

@@ -0,0 +1,163 @@
+<lexer>
+  <config>
+    <name>ActionScript 3</name>
+    <alias>as3</alias>
+    <alias>actionscript3</alias>
+    <filename>*.as</filename>
+    <mime_type>application/x-actionscript3</mime_type>
+    <mime_type>text/x-actionscript3</mime_type>
+    <mime_type>text/actionscript3</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="funcparams">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(\s*)(\.\.\.)?([$a-zA-Z_]\w*)(\s*)(:)(\s*)([$a-zA-Z_]\w*(?:\.&lt;\w+&gt;)?|\*)(\s*)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Name"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="defval"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Operator"/>
+        <push state="type"/>
+      </rule>
+    </state>
+    <state name="type">
+      <rule pattern="(\s*)(:)(\s*)([$a-zA-Z_]\w*(?:\.&lt;\w+&gt;)?|\*)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+        </bygroups>
+        <pop depth="2"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+        <pop depth="2"/>
+      </rule>
+      <rule>
+        <pop depth="2"/>
+      </rule>
+    </state>
+    <state name="defval">
+      <rule pattern="(=)(\s*)([^(),]+)(\s*)(,?)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <usingself state="root"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Operator"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(function\s+)([$a-zA-Z_]\w*)(\s*)(\()">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+        <push state="funcparams"/>
+      </rule>
+      <rule pattern="(var|const)(\s+)([$a-zA-Z_]\w*)(\s*)(:)(\s*)([$a-zA-Z_]\w*(?:\.&lt;\w+&gt;)?)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+          <token type="Name"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(import|package)(\s+)((?:[$a-zA-Z_]\w*|\.)+)(\s*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(new)(\s+)([$a-zA-Z_]\w*(?:\.&lt;\w+&gt;)?)(\s*)(\()">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/(\\\\|\\/|[^\n])*/[gisx]*">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="(\.)([$a-zA-Z_]\w*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameAttribute"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(case|default|for|each|in|while|do|break|return|continue|if|else|throw|try|catch|with|new|typeof|arguments|instanceof|this|switch|import|include|as|is)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(class|public|final|internal|native|override|private|protected|static|import|extends|implements|interface|intrinsic|return|super|dynamic|function|const|get|namespace|package|set)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(true|false|null|NaN|Infinity|-Infinity|undefined|void)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(decodeURI|decodeURIComponent|encodeURI|escape|eval|isFinite|isNaN|isXMLName|clearInterval|fscommand|getTimer|getURL|getVersion|isFinite|parseFloat|parseInt|setInterval|trace|updateAfterEvent|unescape)\b">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="[$a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-f]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="[~^*!%&amp;&lt;&gt;|+=:;,/?\\{}\[\]().-]+">
+        <token type="Operator"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ada.xml

@@ -0,0 +1,321 @@
+<lexer>
+  <config>
+    <name>Ada</name>
+    <alias>ada</alias>
+    <alias>ada95</alias>
+    <alias>ada2005</alias>
+    <filename>*.adb</filename>
+    <filename>*.ads</filename>
+    <filename>*.ada</filename>
+    <mime_type>text/x-ada</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="end">
+      <rule pattern="(if|case|record|loop|select)">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="&#34;[^&#34;]+&#34;|[\w.]+">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="array_def">
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(\w+)(\s+)(range)">
+        <bygroups>
+          <token type="KeywordType"/>
+          <token type="Text"/>
+          <token type="KeywordReserved"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="package_instantiation">
+      <rule pattern="(&#34;[^&#34;]+&#34;|\w+)(\s+)(=&gt;)">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[\w.\&#39;&#34;]">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="subprogram">
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="#pop" state="formal_part"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="is\b">
+        <token type="KeywordReserved"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="&#34;[^&#34;]+&#34;|\w+">
+        <token type="NameFunction"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="type_def">
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="formal_part"/>
+      </rule>
+      <rule pattern="with|and|use">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="array\b">
+        <token type="KeywordReserved"/>
+        <push state="#pop" state="array_def"/>
+      </rule>
+      <rule pattern="record\b">
+        <token type="KeywordReserved"/>
+        <push state="record_def"/>
+      </rule>
+      <rule pattern="(null record)(;)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="import">
+      <rule pattern="[\w.]+">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="formal_part">
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\w+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern=",|:[^=]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(in|not|null|out|access)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="package">
+      <rule pattern="body">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="is\s+new|renames">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="is">
+        <token type="KeywordReserved"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="package_instantiation"/>
+      </rule>
+      <rule pattern="([\w.]+)">
+        <token type="NameClass"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="attribute">
+      <rule pattern="(&#39;)(\w+)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameAttribute"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="record_def">
+      <rule pattern="end record">
+        <token type="KeywordReserved"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="--.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="function|procedure|entry">
+        <token type="KeywordDeclaration"/>
+        <push state="subprogram"/>
+      </rule>
+      <rule pattern="(subtype|type)(\s+)(\w+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+        </bygroups>
+        <push state="type_def"/>
+      </rule>
+      <rule pattern="task|protected">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(subtype)(\s+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(end)(\s+)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="end"/>
+      </rule>
+      <rule pattern="(pragma)(\s+)(\w+)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(Short_Short_Integer|Short_Short_Float|Long_Long_Integer|Long_Long_Float|Wide_Character|Reference_Type|Short_Integer|Long_Integer|Wide_String|Short_Float|Controlled|Long_Float|Character|Generator|File_Type|File_Mode|Positive|Duration|Boolean|Natural|Integer|Address|Cursor|String|Count|Float|Byte)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(and(\s+then)?|in|mod|not|or(\s+else)|rem)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="generic|private">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="package">
+        <token type="KeywordDeclaration"/>
+        <push state="package"/>
+      </rule>
+      <rule pattern="array\b">
+        <token type="KeywordReserved"/>
+        <push state="array_def"/>
+      </rule>
+      <rule pattern="(with|use)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="import"/>
+      </rule>
+      <rule pattern="(\w+)(\s*)(:)(\s*)(constant)">
+        <bygroups>
+          <token type="NameConstant"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="KeywordReserved"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&lt;&lt;\w+&gt;&gt;">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="(\w+)(\s*)(:)(\s*)(declare|begin|loop|for|while)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="KeywordReserved"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(synchronized|overriding|terminate|interface|exception|protected|separate|constant|abstract|renames|reverse|subtype|aliased|declare|requeue|limited|return|tagged|access|record|select|accept|digits|others|pragma|entry|elsif|delta|delay|array|until|range|raise|while|begin|abort|else|loop|when|type|null|then|body|task|goto|case|exit|end|for|abs|xor|all|new|out|is|of|if|or|do|at)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="&#34;[^&#34;]*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule>
+        <include state="attribute"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+      <rule pattern="&#39;[^&#39;]&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="(\w+)(\s*|[(,])">
+        <bygroups>
+          <token type="Name"/>
+          <usingself state="root"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;&gt;|=&gt;|:=|[()|:;,.&#39;])">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[*&lt;&gt;+=/&amp;-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\n+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="numbers">
+      <rule pattern="[0-9_]+#[0-9a-f]+#">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9_]+\.[0-9_]*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9_]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/agda.xml

@@ -0,0 +1,66 @@
+<lexer>
+  <config>
+    <name>Agda</name>
+    <alias>agda</alias>
+    <filename>*.agda</filename>
+    <mime_type>text/x-agda</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^(\s*)([^\s(){}]+)(\s*)(:)(\s*)"><bygroups><token type="TextWhitespace"/><token type="NameFunction"/><token type="TextWhitespace"/><token type="OperatorWord"/><token type="TextWhitespace"/></bygroups></rule>
+      <rule pattern="--(?![!#$%&amp;*+./&lt;=&gt;?@^|_~:\\]).*?$"><token type="CommentSingle"/></rule>
+      <rule pattern="\{-"><token type="CommentMultiline"/><push state="comment"/></rule>
+      <rule pattern="\{!"><token type="CommentMultiline"/><push state="hole"/></rule>
+      <rule pattern="\b(abstract|codata|coinductive|constructor|data|do|eta-equality|field|forall|hiding|in|inductive|infix|infixl|infixr|instance|interleaved|let|macro|mutual|no-eta-equality|open|overlap|pattern|postulate|primitive|private|quote|quoteTerm|record|renaming|rewrite|syntax|tactic|unquote|unquoteDecl|unquoteDef|using|variable|where|with)(?!\&#x27;)\b"><token type="KeywordReserved"/></rule>
+      <rule pattern="(import|module)(\s+)"><bygroups><token type="KeywordReserved"/><token type="TextWhitespace"/></bygroups><push state="module"/></rule>
+      <rule pattern="\b(Set|Prop)[\u2080-\u2089]*\b"><token type="KeywordType"/></rule>
+      <rule pattern="(\(|\)|\{|\})"><token type="Operator"/></rule>
+      <rule pattern="(\.{1,3}|\||\u03BB|\u2200|\u2192|:|=|-&gt;)"><token type="OperatorWord"/></rule>
+      <rule pattern="\d+[eE][+-]?\d+"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="\d+\.\d+([eE][+-]?\d+)?"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="0[xX][\da-fA-F]+"><token type="LiteralNumberHex"/></rule>
+      <rule pattern="\d+"><token type="LiteralNumberInteger"/></rule>
+      <rule pattern="&#x27;"><token type="LiteralStringChar"/><push state="character"/></rule>
+      <rule pattern="&quot;"><token type="LiteralString"/><push state="string"/></rule>
+      <rule pattern="[^\s(){}]+"><token type="Text"/></rule>
+      <rule pattern="\s+?"><token type="TextWhitespace"/></rule>
+    </state>
+    <state name="hole">
+      <rule pattern="[^!{}]+"><token type="CommentMultiline"/></rule>
+      <rule pattern="\{!"><token type="CommentMultiline"/><push/></rule>
+      <rule pattern="!\}"><token type="CommentMultiline"/><pop depth="1"/></rule>
+      <rule pattern="[!{}]"><token type="CommentMultiline"/></rule>
+    </state>
+    <state name="module">
+      <rule pattern="\{-"><token type="CommentMultiline"/><push state="comment"/></rule>
+      <rule pattern="[a-zA-Z][\w.\&#x27;]*"><token type="Name"/><pop depth="1"/></rule>
+      <rule pattern="[\W0-9_]+"><token type="Text"/></rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^-{}]+"><token type="CommentMultiline"/></rule>
+      <rule pattern="\{-"><token type="CommentMultiline"/><push/></rule>
+      <rule pattern="-\}"><token type="CommentMultiline"/><pop depth="1"/></rule>
+      <rule pattern="[-{}]"><token type="CommentMultiline"/></rule>
+    </state>
+    <state name="character">
+      <rule pattern="[^\\&#x27;]&#x27;"><token type="LiteralStringChar"/><pop depth="1"/></rule>
+      <rule pattern="\\"><token type="LiteralStringEscape"/><push state="escape"/></rule>
+      <rule pattern="&#x27;"><token type="LiteralStringChar"/><pop depth="1"/></rule>
+    </state>
+    <state name="string">
+      <rule pattern="[^\\&quot;]+"><token type="LiteralString"/></rule>
+      <rule pattern="\\"><token type="LiteralStringEscape"/><push state="escape"/></rule>
+      <rule pattern="&quot;"><token type="LiteralString"/><pop depth="1"/></rule>
+    </state>
+    <state name="escape">
+      <rule pattern="[abfnrtv&quot;\&#x27;&amp;\\]"><token type="LiteralStringEscape"/><pop depth="1"/></rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/al.xml

@@ -0,0 +1,75 @@
+<lexer>
+  <config>
+    <name>AL</name>
+    <alias>al</alias>
+    <filename>*.al</filename>
+    <filename>*.dal</filename>
+    <mime_type>text/x-al</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="(?s)\/\*.*?\\*\*\/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="(?s)//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\&#34;([^\&#34;])*\&#34;">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&#39;([^&#39;])*&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\b(?i:(ARRAY|ASSERTERROR|BEGIN|BREAK|CASE|DO|DOWNTO|ELSE|END|EVENT|EXIT|FOR|FOREACH|FUNCTION|IF|IMPLEMENTS|IN|INDATASET|INTERFACE|INTERNAL|LOCAL|OF|PROCEDURE|PROGRAM|PROTECTED|REPEAT|RUNONCLIENT|SECURITYFILTERING|SUPPRESSDISPOSE|TEMPORARY|THEN|TO|TRIGGER|UNTIL|VAR|WHILE|WITH|WITHEVENTS))\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(?i:(AND|DIV|MOD|NOT|OR|XOR))\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="\b(?i:(AVERAGE|CONST|COUNT|EXIST|FIELD|FILTER|LOOKUP|MAX|MIN|ORDER|SORTING|SUM|TABLEDATA|UPPERLIMIT|WHERE|ASCENDING|DESCENDING))\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(?i:(CODEUNIT|PAGE|PAGEEXTENSION|PAGECUSTOMIZATION|DOTNET|ENUM|ENUMEXTENSION|VALUE|QUERY|REPORT|TABLE|TABLEEXTENSION|XMLPORT|PROFILE|CONTROLADDIN|REPORTEXTENSION|INTERFACE|PERMISSIONSET|PERMISSIONSETEXTENSION|ENTITLEMENT))\b">
+        <token type="Keyword"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/alloy.xml

@@ -0,0 +1,58 @@
+
+<lexer>
+  <config>
+    <name>Alloy</name>
+    <alias>alloy</alias>
+    <filename>*.als</filename>
+    <mime_type>text/x-alloy</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="sig">
+      <rule pattern="(extends)\b"><token type="Keyword"/><pop depth="1"/></rule>
+      <rule pattern="[a-zA-Z_][\w]*&quot;*"><token type="Name"/></rule>
+      <rule pattern="[^\S\n]+"><token type="TextWhitespace"/></rule>
+      <rule pattern=","><token type="Punctuation"/></rule>
+      <rule pattern="\{"><token type="Operator"/><pop depth="1"/></rule>
+    </state>
+    <state name="module">
+      <rule pattern="[^\S\n]+"><token type="TextWhitespace"/></rule>
+      <rule pattern="[a-zA-Z_][\w]*&quot;*"><token type="Name"/><pop depth="1"/></rule>
+    </state>
+    <state name="fun">
+      <rule pattern="[^\S\n]+"><token type="TextWhitespace"/></rule>
+      <rule pattern="\{"><token type="Operator"/><pop depth="1"/></rule>
+      <rule pattern="[a-zA-Z_][\w]*&quot;*"><token type="Name"/><pop depth="1"/></rule>
+    </state>
+    <state name="fact">
+      <rule><include state="fun"/></rule>
+      <rule pattern="&quot;\b(\\\\|\\[^\\]|[^&quot;\\])*&quot;"><token type="LiteralString"/><pop depth="1"/></rule>
+    </state>
+    <state name="root">
+      <rule pattern="--.*?$"><token type="CommentSingle"/></rule>
+      <rule pattern="//.*?$"><token type="CommentSingle"/></rule>
+      <rule pattern="/\*.*?\*/"><token type="CommentMultiline"/></rule>
+      <rule pattern="[^\S\n]+"><token type="TextWhitespace"/></rule>
+      <rule pattern="(module|open)(\s+)"><bygroups><token type="KeywordNamespace"/><token type="TextWhitespace"/></bygroups><push state="module"/></rule>
+      <rule pattern="(sig|enum)(\s+)"><bygroups><token type="KeywordDeclaration"/><token type="TextWhitespace"/></bygroups><push state="sig"/></rule>
+      <rule pattern="(iden|univ|none)\b"><token type="KeywordConstant"/></rule>
+      <rule pattern="(int|Int)\b"><token type="KeywordType"/></rule>
+      <rule pattern="(var|this|abstract|extends|set|seq|one|lone|let)\b"><token type="Keyword"/></rule>
+      <rule pattern="(all|some|no|sum|disj|when|else)\b"><token type="Keyword"/></rule>
+      <rule pattern="(run|check|for|but|exactly|expect|as|steps)\b"><token type="Keyword"/></rule>
+      <rule pattern="(always|after|eventually|until|release)\b"><token type="Keyword"/></rule>
+      <rule pattern="(historically|before|once|since|triggered)\b"><token type="Keyword"/></rule>
+      <rule pattern="(and|or|implies|iff|in)\b"><token type="OperatorWord"/></rule>
+      <rule pattern="(fun|pred|assert)(\s+)"><bygroups><token type="Keyword"/><token type="TextWhitespace"/></bygroups><push state="fun"/></rule>
+      <rule pattern="(fact)(\s+)"><bygroups><token type="Keyword"/><token type="TextWhitespace"/></bygroups><push state="fact"/></rule>
+      <rule pattern="!|#|&amp;&amp;|\+\+|&lt;&lt;|&gt;&gt;|&gt;=|&lt;=&gt;|&lt;=|\.\.|\.|-&gt;"><token type="Operator"/></rule>
+      <rule pattern="[-+/*%=&lt;&gt;&amp;!^|~{}\[\]().\&#x27;;]"><token type="Operator"/></rule>
+      <rule pattern="[a-zA-Z_][\w]*&quot;*"><token type="Name"/></rule>
+      <rule pattern="[:,]"><token type="Punctuation"/></rule>
+      <rule pattern="[0-9]+"><token type="LiteralNumberInteger"/></rule>
+      <rule pattern="&quot;\b(\\\\|\\[^\\]|[^&quot;\\])*&quot;"><token type="LiteralString"/></rule>
+      <rule pattern="\n"><token type="TextWhitespace"/></rule>
+    </state>
+  </rules>
+</lexer>
+

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/angular2.xml

@@ -0,0 +1,108 @@
+<lexer>
+  <config>
+    <name>Angular2</name>
+    <alias>ng2</alias>
+  </config>
+  <rules>
+    <state name="attr">
+      <rule pattern="&#34;.*?&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="&#39;.*?&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\s&gt;]+">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[^{([*#]+">
+        <token type="Other"/>
+      </rule>
+      <rule pattern="(\{\{)(\s*)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="ngExpression"/>
+      </rule>
+      <rule pattern="([([]+)([\w:.-]+)([\])]+)(\s*)(=)(\s*)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameAttribute"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="attr"/>
+      </rule>
+      <rule pattern="([([]+)([\w:.-]+)([\])]+)(\s*)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameAttribute"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([*#])([\w:.-]+)(\s*)(=)(\s*)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameAttribute"/>
+          <token type="Punctuation"/>
+          <token type="Operator"/>
+        </bygroups>
+        <push state="attr"/>
+      </rule>
+      <rule pattern="([*#])([\w:.-]+)(\s*)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameAttribute"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="ngExpression">
+      <rule pattern="\s+(\|\s+)?">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\}\}">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=":?(true|false)">
+        <token type="LiteralStringBoolean"/>
+      </rule>
+      <rule pattern=":?&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern=":?&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|0[xX][0-9a-fA-F]+[Ll]?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[a-zA-Z][\w-]*(\(.*\))?">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\.[\w-]+(\(.*\))?">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(\?)(\s*)([^}\s]+)(\s*)(:)(\s*)([^}\s]+)(\s*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/antlr.xml

@@ -0,0 +1,317 @@
+<lexer>
+  <config>
+    <name>ANTLR</name>
+    <alias>antlr</alias>
+  </config>
+  <rules>
+    <state name="nested-arg-action">
+      <rule pattern="([^$\[\]\&#39;&#34;/]+|&#34;(\\\\|\\&#34;|[^&#34;])*&#34;|&#39;(\\\\|\\&#39;|[^&#39;])*&#39;|//.*$\n?|/\*(.|\n)*?\*/|/(?!\*)(\\\\|\\/|[^/])*/|/)+">
+        <token type="Other"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\]">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(\$[a-zA-Z]+)(\.?)(text|value)?">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Punctuation"/>
+          <token type="NameProperty"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\\\\|\\\]|\\\[|[^\[\]])+">
+        <token type="Other"/>
+      </rule>
+    </state>
+    <state name="exception">
+      <rule pattern="\n">
+        <token type="TextWhitespace"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\s">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="Punctuation"/>
+        <push state="nested-arg-action"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push state="action"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule pattern="(lexer|parser|tree)?(\s*)(grammar\b)(\s*)([A-Za-z]\w*)(;)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="NameClass"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="options\b">
+        <token type="Keyword"/>
+        <push state="options"/>
+      </rule>
+      <rule pattern="tokens\b">
+        <token type="Keyword"/>
+        <push state="tokens"/>
+      </rule>
+      <rule pattern="(scope)(\s*)([A-Za-z]\w*)(\s*)(\{)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="NameVariable"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="action"/>
+      </rule>
+      <rule pattern="(catch|finally)\b">
+        <token type="Keyword"/>
+        <push state="exception"/>
+      </rule>
+      <rule pattern="(@[A-Za-z]\w*)(\s*)(::)?(\s*)([A-Za-z]\w*)(\s*)(\{)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+          <token type="TextWhitespace"/>
+          <token type="NameLabel"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="action"/>
+      </rule>
+      <rule pattern="((?:protected|private|public|fragment)\b)?(\s*)([A-Za-z]\w*)(!)?">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="NameLabel"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="rule-alts" state="rule-prelims"/>
+      </rule>
+    </state>
+    <state name="tokens">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="([A-Z]\w*)(\s*)(=)?(\s*)(\&#39;(?:\\\\|\\\&#39;|[^\&#39;]*)\&#39;)?(\s*)(;)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+          <token type="TextWhitespace"/>
+          <token type="LiteralString"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="options">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="([A-Za-z]\w*)(\s*)(=)(\s*)([A-Za-z]\w*|\&#39;(?:\\\\|\\\&#39;|[^\&#39;]*)\&#39;|[0-9]+|\*)(\s*)(;)">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+          <token type="TextWhitespace"/>
+          <token type="Text"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="rule-alts">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule pattern="options\b">
+        <token type="Keyword"/>
+        <push state="options"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&lt;&lt;([^&gt;]|&gt;[^&gt;])&gt;&gt;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\$?[A-Z_]\w*">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="\$?[a-z_]\w*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(\+|\||-&gt;|=&gt;|=|\(|\)|\.\.|\.|\?|\*|\^|!|\#|~)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="Punctuation"/>
+        <push state="nested-arg-action"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push state="action"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="rule-prelims">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule pattern="returns\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="Punctuation"/>
+        <push state="nested-arg-action"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push state="action"/>
+      </rule>
+      <rule pattern="(throws)(\s+)([A-Za-z]\w*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="NameLabel"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(,)(\s*)([A-Za-z]\w*)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="TextWhitespace"/>
+          <token type="NameLabel"/>
+        </bygroups>
+      </rule>
+      <rule pattern="options\b">
+        <token type="Keyword"/>
+        <push state="options"/>
+      </rule>
+      <rule pattern="(scope)(\s+)(\{)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="action"/>
+      </rule>
+      <rule pattern="(scope)(\s+)([A-Za-z]\w*)(\s*)(;)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="NameLabel"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(@[A-Za-z]\w*)(\s*)(\{)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="action"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="action">
+      <rule pattern="([^${}\&#39;&#34;/\\]+|&#34;(\\\\|\\&#34;|[^&#34;])*&#34;|&#39;(\\\\|\\&#39;|[^&#39;])*&#39;|//.*$\n?|/\*(.|\n)*?\*/|/(?!\*)(\\\\|\\/|[^/])*/|\\(?!%)|/)+">
+        <token type="Other"/>
+      </rule>
+      <rule pattern="(\\)(%)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Other"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\$[a-zA-Z]+)(\.?)(text|value)?">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Punctuation"/>
+          <token type="NameProperty"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="comments">
+      <rule pattern="//.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="/\*(.|\n)*?\*/">
+        <token type="Comment"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/apacheconf.xml

@@ -0,0 +1,74 @@
+<lexer>
+  <config>
+    <name>ApacheConf</name>
+    <alias>apacheconf</alias>
+    <alias>aconf</alias>
+    <alias>apache</alias>
+    <filename>.htaccess</filename>
+    <filename>apache.conf</filename>
+    <filename>apache2.conf</filename>
+    <mime_type>text/x-apacheconf</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(#.*?)$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(&lt;[^\s&gt;]+)(?:(\s+)(.*?))?(&gt;)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([a-z]\w*)(\s+)">
+        <bygroups>
+          <token type="NameBuiltin"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="value"/>
+      </rule>
+      <rule pattern="\.+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="value">
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="$">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\d+\.\d+\.\d+\.\d+(?:/\d+)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="/([a-z0-9][\w./-]+)">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="(on|off|none|any|all|double|email|dns|min|minimal|os|productonly|full|emerg|alert|crit|error|warn|notice|info|debug|registry|script|inetd|standalone|user|group)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="&#34;([^&#34;\\]*(?:\\.[^&#34;\\]*)*)&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="[^\s&#34;\\]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/apl.xml

@@ -0,0 +1,59 @@
+<lexer>
+  <config>
+    <name>APL</name>
+    <alias>apl</alias>
+    <filename>*.apl</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[⍝#].*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\&#39;((\&#39;\&#39;)|[^\&#39;])*\&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="&#34;((&#34;&#34;)|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="[⋄◇()]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[\[\];]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="⎕[A-Za-zΔ∆⍙][A-Za-zΔ∆⍙_¯0-9]*">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="[A-Za-zΔ∆⍙_][A-Za-zΔ∆⍙_¯0-9]*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="¯?(0[Xx][0-9A-Fa-f]+|[0-9]*\.?[0-9]+([Ee][+¯]?[0-9]+)?|¯|∞)([Jj]¯?(0[Xx][0-9A-Fa-f]+|[0-9]*\.?[0-9]+([Ee][+¯]?[0-9]+)?|¯|∞))?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[\.\\/⌿⍀¨⍣⍨⍠⍤∘⍥@⌺⌶⍢]">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="[+\-×÷⌈⌊∣|⍳?*⍟○!⌹&lt;≤=&gt;≥≠≡≢∊⍷∪∩~∨∧⍱⍲⍴,⍪⌽⊖⍉↑↓⊂⊃⌷⍋⍒⊤⊥⍕⍎⊣⊢⍁⍂≈⌸⍯↗⊆⍸]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="⍬">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="[⎕⍞]">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="[←→]">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="[⍺⍵⍶⍹∇:]">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="[{}]">
+        <token type="KeywordType"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/applescript.xml

@@ -0,0 +1,130 @@
+<lexer>
+  <config>
+    <name>AppleScript</name>
+    <alias>applescript</alias>
+    <filename>*.applescript</filename>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="¬\n">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&#39;s\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(--|#).*?$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\(\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="[(){}!,.:]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(«)([^»]+)(»)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameBuiltin"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b((?:considering|ignoring)\s*)(application responses|case|diacriticals|hyphens|numeric strings|punctuation|white space)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="NameBuiltin"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(-|\*|\+|&amp;|≠|&gt;=?|&lt;=?|=|≥|≤|/|÷|\^)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\b(and|or|is equal|equals|(is )?equal to|is not|isn&#39;t|isn&#39;t equal( to)?|is not equal( to)?|doesn&#39;t equal|does not equal|(is )?greater than|comes after|is not less than or equal( to)?|isn&#39;t less than or equal( to)?|(is )?less than|comes before|is not greater than or equal( to)?|isn&#39;t greater than or equal( to)?|(is  )?greater than or equal( to)?|is not less than|isn&#39;t less than|does not come before|doesn&#39;t come before|(is )?less than or equal( to)?|is not greater than|isn&#39;t greater than|does not come after|doesn&#39;t come after|starts? with|begins? with|ends? with|contains?|does not contain|doesn&#39;t contain|is in|is contained by|is not in|is not contained by|isn&#39;t contained by|div|mod|not|(a  )?(ref( to)?|reference to)|is|does)\b">
+        <token type="OperatorWord"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/arangodb_aql.xml

@@ -0,0 +1,174 @@
+<lexer>
+  <config>
+    <name>ArangoDB AQL</name>
+    <alias>aql</alias>
+    <filename>*.aql</filename>
+    <mime_type>text/x-aql</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <dot_all>true</dot_all>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="comments-and-whitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="multiline-comment"/>
+      </rule>
+    </state>
+    <state name="multiline-comment">
+      <rule pattern="[^*]+">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\*">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="double-quote">
+      <rule pattern="\\.">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="[^&#34;\\]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="single-quote">
+      <rule pattern="\\.">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="[^&#39;\\]+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="backtick">
+      <rule pattern="\\.">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[^`\\]+">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="`">
+        <token type="Name"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="forwardtick">
+      <rule pattern="\\.">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[^´\\]+">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="´">
+        <token type="Name"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="identifier">
+      <rule pattern="(?:\$?|_+)[a-z]+[_a-z0-9]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="`">
+        <token type="Name"/>
+        <push state="backtick"/>
+      </rule>
+      <rule pattern="´">
+        <token type="Name"/>
+        <push state="forwardtick"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="comments-and-whitespace"/>
+      </rule>
+      <rule pattern="0b[01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0x[0-9a-f]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="(?:0|[1-9][0-9]*)(?![\.e])">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="(?:(?:0|[1-9][0-9]*)(?:\.[0-9]+)?|\.[0-9]+)(?:e[\-\+]?[0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="@@(?:_+[a-z0-9]+[a-z0-9_]*|[a-z0-9][a-z0-9_]*)">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="@(?:_+[a-z0-9]+[a-z0-9_]*|[a-z0-9][a-z0-9_]*)">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="=~|!~|[=!&lt;&gt;]=?|[%?:/*+-]|\.\.|&amp;&amp;|\|\|">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[.,(){}\[\]]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[a-zA-Z0-9][a-zA-Z0-9_]*(?:::[a-zA-Z0-9_]+)+(?=\s*\()">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="(WITH)(\s+)(COUNT)(\s+)(INTO)\b">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+          <token type="KeywordPseudo"/>
+          <token type="Text"/>
+          <token type="KeywordReserved"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?:KEEP|PRUNE|SEARCH|TO)\b">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="OPTIONS(?=\s*\{)">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="(?:AGGREGATE|ALL|ALL_SHORTEST_PATHS|AND|ANY|ASC|AT LEAST|COLLECT|DESC|DISTINCT|FILTER|FOR|GRAPH|IN|INBOUND|INSERT|INTO|K_PATHS|K_SHORTEST_PATHS|LIKE|LIMIT|NONE|NOT|OR|OUTBOUND|REMOVE|REPLACE|RETURN|SHORTEST_PATH|SORT|UPDATE|UPSERT|WITH|WINDOW)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="LET\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(?:true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(?-i)(?:CURRENT|NEW|OLD)\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/arduino.xml

@@ -0,0 +1,309 @@
+<lexer>
+  <config>
+    <name>Arduino</name>
+    <alias>arduino</alias>
+    <filename>*.ino</filename>
+    <mime_type>text/x-arduino</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="whitespace">
+      <rule pattern="^#if\s+0">
+        <token type="CommentPreproc"/>
+        <push state="if0"/>
+      </rule>
+      <rule pattern="^#">
+        <token type="CommentPreproc"/>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#if\s+0)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <push state="if0"/>
+      </rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//(\n|[\w\W]*?[^\\]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\([\\abfnrtv&#34;\&#39;]|x[a-fA-F0-9]{2,4}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="macro">
+      <rule pattern="(include)(\s*(?:/[*].*?[*]/\s*)?)([^\n]+)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="CommentPreprocFile"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^/\n]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="/[*](.|\n)*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="/">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(?&lt;=\\)\n">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="statements">
+      <rule pattern="(reinterpret_cast|static_assert|dynamic_cast|thread_local|static_cast|const_cast|protected|constexpr|namespace|restrict|noexcept|override|operator|typename|template|explicit|decltype|nullptr|private|alignof|virtual|mutable|alignas|typeid|friend|throws|export|public|delete|final|using|throw|catch|this|try|new)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="char(16_t|32_t)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(class)\b">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="classname"/>
+      </rule>
+      <rule pattern="(R)(&#34;)([^\\()\s]{,16})(\()((?:.|\n)*?)(\)\3)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralString"/>
+          <token type="LiteralStringDelimiter"/>
+          <token type="LiteralStringDelimiter"/>
+          <token type="LiteralString"/>
+          <token type="LiteralStringDelimiter"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(u8|u|U)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralString"/>
+        </bygroups>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(L?)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralString"/>
+        </bygroups>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(L?)(&#39;)(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\&#39;\n])(&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringChar"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[LlUu]*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+[fF])[fF]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+[LlUu]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[0-7]+[LlUu]*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="\d+[LlUu]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="Error"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[()\[\],.]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(restricted|volatile|continue|register|default|typedef|struct|extern|switch|sizeof|static|return|union|while|const|break|goto|enum|else|case|auto|for|asm|if|do)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(_Bool|_Complex|_Imaginary|array|atomic_bool|atomic_char|atomic_int|atomic_llong|atomic_long|atomic_schar|atomic_short|atomic_uchar|atomic_uint|atomic_ullong|atomic_ulong|atomic_ushort|auto|bool|boolean|BooleanVariables|Byte|byte|Char|char|char16_t|char32_t|class|complex|Const|const|const_cast|delete|double|dynamic_cast|enum|explicit|extern|Float|float|friend|inline|Int|int|int16_t|int32_t|int64_t|int8_t|Long|long|new|NULL|null|operator|private|PROGMEM|protected|public|register|reinterpret_cast|short|signed|sizeof|Static|static|static_cast|String|struct|typedef|uint16_t|uint32_t|uint64_t|uint8_t|union|unsigned|virtual|Void|void|Volatile|volatile|word)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(and|final|If|Loop|loop|not|or|override|setup|Setup|throw|try|xor)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(ANALOG_MESSAGE|BIN|CHANGE|DEC|DEFAULT|DIGITAL_MESSAGE|EXTERNAL|FALLING|FIRMATA_STRING|HALF_PI|HEX|HIGH|INPUT|INPUT_PULLUP|INTERNAL|INTERNAL1V1|INTERNAL1V1|INTERNAL2V56|INTERNAL2V56|LED_BUILTIN|LED_BUILTIN_RX|LED_BUILTIN_TX|LOW|LSBFIRST|MSBFIRST|OCT|OUTPUT|PI|REPORT_ANALOG|REPORT_DIGITAL|RISING|SET_PIN_MODE|SYSEX_START|SYSTEM_RESET|TWO_PI)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(boolean|const|byte|word|string|String|array)\b">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(Keyboard|KeyboardController|MouseController|SoftwareSerial|EthernetServer|EthernetClient|LiquidCrystal|RobotControl|GSMVoiceCall|EthernetUDP|EsploraTFT|HttpClient|RobotMotor|WiFiClient|GSMScanner|FileSystem|Scheduler|GSMServer|YunClient|YunServer|IPAddress|GSMClient|GSMModem|Keyboard|Ethernet|Console|GSMBand|Esplora|Stepper|Process|WiFiUDP|GSM_SMS|Mailbox|USBHost|Firmata|PImage|Client|Server|GSMPIN|FileIO|Bridge|Serial|EEPROM|Stream|Mouse|Audio|Servo|File|Task|GPRS|WiFi|Wire|TFT|GSM|SPI|SD)\b">
+        <token type="NameClass"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/armasm.xml

@@ -0,0 +1,126 @@
+<lexer>
+  <config>
+    <name>ArmAsm</name>
+    <alias>armasm</alias>
+    <filename>*.s</filename>
+    <filename>*.S</filename>
+    <mime_type>text/x-armasm</mime_type>
+    <mime_type>text/x-asm</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="root">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="(\.\w+)([ \t]+\w+\s+?)?">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="NameLabel"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\w+)(:)(\s+\.\w+\s+)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="Punctuation"/>
+          <token type="KeywordNamespace"/>
+        </bygroups>
+        <push state="literal"/>
+      </rule>
+      <rule pattern="(\w+)(:)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="svc\s+\w+">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="[a-zA-Z]+">
+        <token type="Text"/>
+        <push state="opcode"/>
+      </rule>
+    </state>
+    <state name="commentsandwhitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[@;].*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="literal">
+      <rule pattern="0b[01]+">
+        <token type="LiteralNumberBin"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="0x\w{1,8}">
+        <token type="LiteralNumberHex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="0\d+">
+        <token type="LiteralNumberOct"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\d+?\.\d+?">
+        <token type="LiteralNumberFloat"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(&#34;)(.+)(&#34;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="LiteralStringDouble"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(&#39;)(.{1}|\\.{1})(&#39;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="LiteralStringChar"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="opcode">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(@|;).*\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(\s+|,)">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[rapcfxwbhsdqv]\d{1,2}">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="=0x\w+">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameLabel"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(=)(\w+)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameLabel"/>
+        </bygroups>
+      </rule>
+      <rule pattern="#">
+        <token type="Text"/>
+        <push state="literal"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/atl.xml

@@ -0,0 +1,165 @@
+<lexer>
+  <config>
+    <name>ATL</name>
+    <alias>atl</alias>
+    <filename>*.atl</filename>
+    <mime_type>text/x-atl</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(--.*?)(\n)">
+        <bygroups>
+          <token type="CommentSingle" />
+          <token type="TextWhitespace" />
+        </bygroups>
+      </rule>
+      <rule pattern="(and|distinct|endif|else|for|foreach|if|implies|in|let|not|or|self|super|then|thisModule|xor)\b">
+        <token type="Keyword" />
+      </rule>
+      <rule pattern="(OclUndefined|true|false|#\w+)\b">
+        <token type="KeywordConstant" />
+      </rule>
+      <rule pattern="(module|query|library|create|from|to|uses)\b">
+         <token type="KeywordNamespace" />
+      </rule>
+      <rule pattern="(do)(\s*)({)">
+        <bygroups>
+          <token type="KeywordNamespace" />
+          <token type="TextWhitespace" />
+          <token type="Punctuation" />
+        </bygroups>
+      </rule>
+      <rule pattern="(abstract|endpoint|entrypoint|lazy|unique)(\s+)">
+        <bygroups>
+          <token type="KeywordDeclaration" />
+          <token type="TextWhitespace" />
+        </bygroups>
+      </rule>
+      <rule pattern="(rule)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace" />
+          <token type="TextWhitespace" />
+        </bygroups>
+      </rule>
+      <rule pattern="(helper)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace" />
+          <token type="TextWhitespace" />
+        </bygroups>
+      </rule>
+      <rule pattern="(context)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace" />
+          <token type="TextWhitespace" />
+        </bygroups>
+      </rule>
+      <rule pattern="(def)(\s*)(:)(\s*)">
+        <bygroups>
+          <token type="KeywordNamespace" />
+          <token type="TextWhitespace" />
+          <token type="Punctuation" />
+          <token type="TextWhitespace" />
+        </bygroups>
+      </rule>
+      <rule pattern="(Bag|Boolean|Integer|OrderedSet|Real|Sequence|Set|String|Tuple)">
+        <token type="KeywordType" />
+      </rule>
+      <rule pattern="(\w+)(\s*)(&lt;-|&lt;:=)">
+        <bygroups>
+          <token type="NameNamespace" />
+          <token type="TextWhitespace" />
+          <token type="Punctuation" />
+        </bygroups>
+      </rule>
+      <rule pattern="#&quot;">
+        <token type="KeywordConstant" />
+        <push state="quotedenumliteral" />
+      </rule>
+      <rule pattern="&quot;">
+        <token type="NameNamespace" />
+        <push state="quotedname" />
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="TextWhitespace" />
+      </rule>
+      <rule pattern="&#x27;">
+        <token type="LiteralString" />
+        <push state="string" />
+      </rule>
+      <rule
+        pattern="[0-9]*\.[0-9]+">
+        <token type="LiteralNumberFloat" />
+      </rule>
+      <rule pattern="0|[1-9][0-9]*">
+        <token type="LiteralNumberInteger" />
+      </rule>
+      <rule pattern="[*&lt;&gt;+=/-]">
+        <token type="Operator" />
+      </rule>
+      <rule pattern="([{}();:.,!|]|-&gt;)">
+        <token type="Punctuation" />
+      </rule>
+      <rule pattern="\n">
+        <token type="TextWhitespace" />
+      </rule>
+      <rule pattern="\w+">
+        <token type="NameNamespace" />
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="[^\\&#x27;]+">
+        <token type="LiteralString" />
+      </rule>
+      <rule pattern="\\\\">
+        <token type="LiteralString" />
+      </rule>
+      <rule pattern="\\&#x27;">
+        <token type="LiteralString" />
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString" />
+      </rule>
+      <rule pattern="&#x27;">
+        <token type="LiteralString" />
+        <pop depth="1" />
+      </rule>
+    </state>
+    <state name="quotedname">
+      <rule pattern="[^\\&quot;]+">
+        <token type="NameNamespace" />
+      </rule>
+      <rule pattern="\\\\">
+        <token type="NameNamespace" />
+      </rule>
+      <rule pattern="\\&quot;">
+        <token type="NameNamespace" />
+      </rule>
+      <rule pattern="\\">
+        <token type="NameNamespace" />
+      </rule>
+      <rule pattern="&quot;">
+        <token type="NameNamespace" />
+        <pop depth="1" />
+      </rule>
+    </state>
+    <state name="quotedenumliteral">
+      <rule pattern="[^\\&quot;]+">
+        <token type="KeywordConstant" />
+      </rule>
+      <rule pattern="\\\\">
+        <token type="KeywordConstant" />
+      </rule>
+      <rule pattern="\\&quot;">
+        <token type="KeywordConstant" />
+      </rule>
+      <rule pattern="\\">
+        <token type="KeywordConstant" />
+      </rule>
+      <rule pattern="&quot;">
+        <token type="KeywordConstant" />
+        <pop depth="1" />
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/autohotkey.xml

@@ -0,0 +1,78 @@
+
+<lexer>
+  <config>
+    <name>AutoHotkey</name>
+    <alias>autohotkey</alias>
+    <alias>ahk</alias>
+    <filename>*.ahk</filename>
+    <filename>*.ahkl</filename>
+    <mime_type>text/x-autohotkey</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^(\s*)(/\*)"><bygroups><token type="Text"/><token type="CommentMultiline"/></bygroups><push state="incomment"/></rule>
+      <rule pattern="^(\s*)(\()"><bygroups><token type="Text"/><token type="Generic"/></bygroups><push state="incontinuation"/></rule>
+      <rule pattern="\s+;.*?$"><token type="CommentSingle"/></rule>
+      <rule pattern="^;.*?$"><token type="CommentSingle"/></rule>
+      <rule pattern="[]{}(),;[]"><token type="Punctuation"/></rule>
+      <rule pattern="(in|is|and|or|not)\b"><token type="OperatorWord"/></rule>
+      <rule pattern="\%[a-zA-Z_#@$][\w#@$]*\%"><token type="NameVariable"/></rule>
+      <rule pattern="!=|==|:=|\.=|&lt;&lt;|&gt;&gt;|[-~+/*%=&lt;&gt;&amp;^|?:!.]"><token type="Operator"/></rule>
+      <rule><include state="commands"/></rule>
+      <rule><include state="labels"/></rule>
+      <rule><include state="builtInFunctions"/></rule>
+      <rule><include state="builtInVariables"/></rule>
+      <rule pattern="&quot;"><token type="LiteralString"/><combined state="stringescape" state="dqs"/></rule>
+      <rule><include state="numbers"/></rule>
+      <rule pattern="[a-zA-Z_#@$][\w#@$]*"><token type="Name"/></rule>
+      <rule pattern="\\|\&#x27;"><token type="Text"/></rule>
+      <rule pattern="\`([,%`abfnrtv\-+;])"><token type="LiteralStringEscape"/></rule>
+      <rule><include state="garbage"/></rule>
+    </state>
+    <state name="incomment">
+      <rule pattern="^\s*\*/"><token type="CommentMultiline"/><pop depth="1"/></rule>
+      <rule pattern="[^*]+"><token type="CommentMultiline"/></rule>
+      <rule pattern="\*"><token type="CommentMultiline"/></rule>
+    </state>
+    <state name="incontinuation">
+      <rule pattern="^\s*\)"><token type="Generic"/><pop depth="1"/></rule>
+      <rule pattern="[^)]"><token type="Generic"/></rule>
+      <rule pattern="[)]"><token type="Generic"/></rule>
+    </state>
+    <state name="commands">

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/autoit.xml

@@ -0,0 +1,70 @@
+
+<lexer>
+  <config>
+    <name>AutoIt</name>
+    <alias>autoit</alias>
+    <filename>*.au3</filename>
+    <mime_type>text/x-autoit</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern=";.*\n"><token type="CommentSingle"/></rule>
+      <rule pattern="(#comments-start|#cs)(.|\n)*?(#comments-end|#ce)"><token type="CommentMultiline"/></rule>
+      <rule pattern="[\[\]{}(),;]"><token type="Punctuation"/></rule>
+      <rule pattern="(and|or|not)\b"><token type="OperatorWord"/></rule>
+      <rule pattern="[$|@][a-zA-Z_]\w*"><token type="NameVariable"/></rule>
+      <rule pattern="!=|==|:=|\.=|&lt;&lt;|&gt;&gt;|[-~+/*%=&lt;&gt;&amp;^|?:!.]"><token type="Operator"/></rule>
+      <rule><include state="commands"/></rule>
+      <rule><include state="labels"/></rule>
+      <rule><include state="builtInFunctions"/></rule>
+      <rule><include state="builtInMarcros"/></rule>
+      <rule pattern="&quot;"><token type="LiteralString"/><combined state="stringescape" state="dqs"/></rule>
+      <rule pattern="&#x27;"><token type="LiteralString"/><push state="sqs"/></rule>
+      <rule><include state="numbers"/></rule>
+      <rule pattern="[a-zA-Z_#@$][\w#@$]*"><token type="Name"/></rule>
+      <rule pattern="\\|\&#x27;"><token type="Text"/></rule>
+      <rule pattern="\`([,%`abfnrtv\-+;])"><token type="LiteralStringEscape"/></rule>
+      <rule pattern="_\n"><token type="Text"/></rule>
+      <rule><include state="garbage"/></rule>
+    </state>
+    <state name="commands">
+      <rule pattern="(?i)(\s*)(#include-once|#include|#endregion|#forcedef|#forceref|#region|and|byref|case|continueloop|dim|do|else|elseif|endfunc|endif|endselect|exit|exitloop|for|func|global|if|local|next|not|or|return|select|step|then|to|until|wend|while|exit)\b"><bygroups><token type="Text"/><token type="NameBuiltin"/></bygroups></rule>
+    </state>
+    <state name="builtInFunctions">

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/awk.xml

@@ -0,0 +1,95 @@
+<lexer>
+  <config>
+    <name>Awk</name>
+    <alias>awk</alias>
+    <alias>gawk</alias>
+    <alias>mawk</alias>
+    <alias>nawk</alias>
+    <filename>*.awk</filename>
+    <mime_type>application/x-awk</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^(?=\s|/)">
+        <token type="Text"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="\+\+|--|\|\||&amp;&amp;|in\b|\$|!?~|\|&amp;|(\*\*|[-&lt;&gt;+*%\^/!=|])=?">
+        <token type="Operator"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[{(\[;,]">
+        <token type="Punctuation"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[})\].]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(break|continue|do|while|exit|for|if|else|return|switch|case|default)\b">
+        <token type="Keyword"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="function\b">
+        <token type="KeywordDeclaration"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(atan2|cos|exp|int|log|rand|sin|sqrt|srand|gensub|gsub|index|length|match|split|patsplit|sprintf|sub|substr|tolower|toupper|close|fflush|getline|next(file)|print|printf|strftime|systime|mktime|delete|system|strtonum|and|compl|lshift|or|rshift|asorti?|isarray|bindtextdomain|dcn?gettext|@(include|load|namespace))\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(ARGC|ARGIND|ARGV|BEGIN(FILE)?|BINMODE|CONVFMT|ENVIRON|END(FILE)?|ERRNO|FIELDWIDTHS|FILENAME|FNR|FPAT|FS|IGNORECASE|LINT|NF|NR|OFMT|OFS|ORS|PROCINFO|RLENGTH|RS|RSTART|RT|SUBSEP|TEXTDOMAIN)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[@$a-zA-Z_]\w*">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+    <state name="commentsandwhitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="#.*$">
+        <token type="CommentSingle"/>
+      </rule>
+    </state>
+    <state name="slashstartsregex">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/\B">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?=/)">
+        <token type="Text"/>
+        <push state="#pop" state="badregex"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="badregex">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ballerina.xml

@@ -0,0 +1,97 @@
+<lexer>
+  <config>
+    <name>Ballerina</name>
+    <alias>ballerina</alias>
+    <filename>*.bal</filename>
+    <mime_type>text/x-ballerina</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="(break|catch|continue|done|else|finally|foreach|forever|fork|if|lock|match|return|throw|transaction|try|while)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="((?:(?:[^\W\d]|\$)[\w.\[\]$&lt;&gt;]*\s+)+?)((?:[^\W\d]|\$)[\w$]*)(\s*)(\()">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="@[^\W\d][\w.]*">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="(annotation|bind|but|endpoint|error|function|object|private|public|returns|service|type|var|with|worker)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(boolean|byte|decimal|float|int|json|map|nil|record|string|table|xml)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(import)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="import"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;\\.&#39;|&#39;[^\\]&#39;|&#39;\\u[0-9a-fA-F]{4}&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="(\.)((?:[^\W\d]|\$)[\w$]*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameAttribute"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^\s*([^\W\d]|\$)[\w$]*:">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="([^\W\d]|\$)[\w$]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="([0-9][0-9_]*\.([0-9][0-9_]*)?|\.[0-9][0-9_]*)([eE][+\-]?[0-9][0-9_]*)?[fFdD]?|[0-9][eE][+\-]?[0-9][0-9_]*[fFdD]?|[0-9]([eE][+\-]?[0-9][0-9_]*)?[fFdD]|0[xX]([0-9a-fA-F][0-9a-fA-F_]*\.?|([0-9a-fA-F][0-9a-fA-F_]*)?\.[0-9a-fA-F][0-9a-fA-F_]*)[pP][+\-]?[0-9][0-9_]*[fFdD]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F][0-9a-fA-F_]*[lL]?">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[bB][01][01_]*[lL]?">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0[0-7_]+[lL]?">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0|[1-9][0-9_]*[lL]?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[~^*!%&amp;\[\](){}&lt;&gt;|+=:;,./?-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="import">
+      <rule pattern="[\w.]+">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/bash.xml

@@ -0,0 +1,220 @@
+<lexer>
+  <config>
+    <name>Bash</name>
+    <alias>bash</alias>
+    <alias>sh</alias>
+    <alias>ksh</alias>
+    <alias>zsh</alias>
+    <alias>shell</alias>
+    <filename>*.sh</filename>
+    <filename>*.ksh</filename>
+    <filename>*.bash</filename>
+    <filename>*.ebuild</filename>
+    <filename>*.eclass</filename>
+    <filename>.env</filename>
+    <filename>*.env</filename>
+    <filename>*.exheres-0</filename>
+    <filename>*.exlib</filename>
+    <filename>*.zsh</filename>
+    <filename>*.zshrc</filename>
+    <filename>.bashrc</filename>
+    <filename>bashrc</filename>
+    <filename>.bash_*</filename>
+    <filename>bash_*</filename>
+    <filename>zshrc</filename>
+    <filename>.zshrc</filename>
+    <filename>PKGBUILD</filename>
+    <mime_type>application/x-sh</mime_type>
+    <mime_type>application/x-shellscript</mime_type>
+    <analyse first="true" >
+      <regex pattern="(?m)^#!.*/bin/(?:env |)(?:bash|zsh|sh|ksh)" score="1.0" />
+    </analyse>
+  </config>
+  <rules>
+    <state name="data">
+      <rule pattern="(?s)\$?&#34;(\\\\|\\[0-7]+|\\.|[^&#34;\\$])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(?s)\$&#39;(\\\\|\\[0-7]+|\\.|[^&#39;\\])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="(?s)&#39;.*?&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&amp;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\|">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\d+(?= |$)">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[^=\s\[\]{}()$&#34;\&#39;`\\&lt;&amp;|;]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&lt;">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?s)(\\\\|\\[0-7]+|\\.|[^&#34;\\$])+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule>
+        <include state="interp"/>
+      </rule>
+    </state>
+    <state name="interp">
+      <rule pattern="\$\(\(">
+        <token type="Keyword"/>
+        <push state="math"/>
+      </rule>
+      <rule pattern="\$\(">
+        <token type="Keyword"/>
+        <push state="paren"/>
+      </rule>
+      <rule pattern="\$\{#?">
+        <token type="LiteralStringInterpol"/>
+        <push state="curly"/>
+      </rule>
+      <rule pattern="\$[a-zA-Z_]\w*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\$(?:\d+|[#$?!_*@-])">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="paren">
+      <rule pattern="\)">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="math">
+      <rule pattern="\)\)">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[-+*/%^|&amp;]|\*\*|\|\|">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\d+#\d+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+#(?! )">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="backticks">
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="basic"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <push state="backticks"/>
+      </rule>
+      <rule>
+        <include state="data"/>
+      </rule>
+      <rule>
+        <include state="interp"/>
+      </rule>
+    </state>
+    <state name="basic">
+      <rule pattern="\b(if|fi|else|while|do|done|for|then|return|function|case|select|continue|until|esac|elif)(\s*)\b">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(alias|bg|bind|break|builtin|caller|cd|command|compgen|complete|declare|dirs|disown|echo|enable|eval|exec|exit|export|false|fc|fg|getopts|hash|help|history|jobs|kill|let|local|logout|popd|printf|pushd|pwd|read|readonly|set|shift|shopt|source|suspend|test|time|times|trap|true|type|typeset|ulimit|umask|unalias|unset|wait)(?=[\s)`])">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="\A#!.+\n">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="#.*(\S|$)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\\[\w\W]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="(\b\w+)(\s*)(\+?=)">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[\[\]{}()=]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&lt;&lt;&lt;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&lt;&lt;-?\s*(\&#39;?)\\?(\w+)[\w\W]+?\2">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&amp;&amp;|\|\|">
+        <token type="Operator"/>
+      </rule>
+    </state>
+    <state name="curly">
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=":-">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\w+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[^}:&#34;\&#39;`$\\]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/bash_session.xml

@@ -0,0 +1,25 @@
+<lexer>
+  <config>
+    <name>Bash Session</name>
+    <alias>bash-session</alias>
+    <alias>console</alias>
+    <alias>shell-session</alias>
+    <filename>*.sh-session</filename>
+    <mime_type>text/x-sh</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^((?:\[[^]]+@[^]]+\]\s?)?[#$%&gt;])(\s*)(.*\n?)">
+        <bygroups>
+          <token type="GenericPrompt"/>
+          <token type="Text"/>
+          <using lexer="bash"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^.+\n?">
+        <token type="GenericOutput"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/batchfile.xml

@@ -0,0 +1,660 @@
+<lexer>
+  <config>
+    <name>Batchfile</name>
+    <alias>bat</alias>
+    <alias>batch</alias>
+    <alias>dosbatch</alias>
+    <alias>winbatch</alias>
+    <filename>*.bat</filename>
+    <filename>*.cmd</filename>
+    <mime_type>application/x-dos-batch</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="arithmetic">
+      <rule pattern="0[0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0x[\da-f]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[(),]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="([=+\-*/!~]|%|\^\^)+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="((?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|(\^[\n\x1a]?)?[^()=+\-*/!~%^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0]|\^[\n\x1a\t\v\f\r ,;=\xa0]?[\w\W])+">
+        <usingself state="variable"/>
+      </rule>
+      <rule pattern="(?=[\x00|&amp;])">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="follow"/>
+      </rule>
+    </state>
+    <state name="else?">
+      <rule pattern="(?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)">
+        <usingself state="text"/>
+      </rule>
+      <rule pattern="else(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a])">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="sqstring">
+      <rule>
+        <include state="variable-or-escape"/>
+      </rule>
+      <rule pattern="[^%]+|%">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\)((?=\()|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))(?:(?:[^\n\x1a^]|\^[\n\x1a]?[\w\W])*)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(?=((?:(?&lt;=^[^:])|^[^:]?)[\t\v\f\r ,;=\xa0]*)(:))">
+        <token type="Text"/>
+        <push state="follow"/>
+      </rule>
+      <rule pattern="(?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)">
+        <usingself state="text"/>
+      </rule>
+      <rule>
+        <include state="redirect"/>
+      </rule>
+      <rule pattern="[\n\x1a]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="root/compound"/>
+      </rule>
+      <rule pattern="@+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="((?:for|if|rem)(?:(?=(?:\^[\n\x1a]?)?/)|(?:(?!\^)|(?&lt;=m))(?:(?=\()|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?(?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0])+)?(?:\^[\n\x1a]?)?/(?:\^[\n\x1a]?)?\?)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+        </bygroups>
+        <push state="follow"/>
+      </rule>
+      <rule pattern="(goto(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(]))((?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|[^&#34;%\n\x1a&amp;&lt;&gt;|])*(?:\^[\n\x1a]?)?/(?:\^[\n\x1a]?)?\?(?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|[^&#34;%\n\x1a&amp;&lt;&gt;|])*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+        </bygroups>
+        <push state="follow"/>
+      </rule>
+      <rule pattern="(setlocal|endlocal|prompt|verify|rename|mklink|rmdir|shift|start|color|dpath|title|chdir|erase|pushd|ftype|break|pause|mkdir|assoc|date|path|time|popd|keys|exit|type|copy|echo|move|dir|del|ren|ver|cls|vol|rd|md|cd)(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(])">
+        <token type="Keyword"/>
+        <push state="follow"/>
+      </rule>
+      <rule pattern="(call)((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?)(:)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="call"/>
+      </rule>
+      <rule pattern="call(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(])">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(for(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a])(?!\^))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+))(/f(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+        </bygroups>
+        <push state="for/f" state="for"/>
+      </rule>
+      <rule pattern="(for(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a])(?!\^))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+))(/l(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+        </bygroups>
+        <push state="for/l" state="for"/>
+      </rule>
+      <rule pattern="for(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a])(?!\^)">
+        <token type="Keyword"/>
+        <push state="for2" state="for"/>
+      </rule>
+      <rule pattern="(goto(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(]))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?)(:?)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="label"/>
+      </rule>
+      <rule pattern="(if(?:(?=\()|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))(?!\^))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?)((?:/i(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))?)((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?)((?:not(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))?)((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+        </bygroups>
+        <push state="(?" state="if"/>
+      </rule>
+      <rule pattern="rem(((?=\()|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))(?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?(?:[&amp;&lt;&gt;|]+|(?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|(?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0])+))+)?.*|(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(])(?:(?:[^\n\x1a^]|\^[\n\x1a]?[\w\W])*))">
+        <token type="CommentSingle"/>
+        <push state="follow"/>
+      </rule>
+      <rule pattern="(set(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(]))((?:(?:\^[\n\x1a]?)?[^\S\n])*)(/a)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+        </bygroups>
+        <push state="arithmetic"/>
+      </rule>
+      <rule pattern="(set(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(]))((?:(?:\^[\n\x1a]?)?[^\S\n])*)((?:/p)?)((?:(?:\^[\n\x1a]?)?[^\S\n])*)((?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|^=]|\^[\n\x1a]?[^&#34;=])+)?)((?:(?:\^[\n\x1a]?)?=)?)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <usingself state="variable"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="follow"/>
+      </rule>
+      <rule>
+        <push state="follow"/>
+      </rule>
+    </state>
+    <state name="follow">
+      <rule pattern="((?:(?&lt;=^[^:])|^[^:]?)[\t\v\f\r ,;=\xa0]*)(:)([\t\v\f\r ,;=\xa0]*)((?:(?:[^\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0+:^]|\^[\n\x1a]?[\w\W])*))(.*)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameLabel"/>
+          <token type="CommentSingle"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="redirect"/>
+      </rule>
+      <rule pattern="(?=[\n\x1a])">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\|\|?|&amp;&amp;?">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="text"/>
+      </rule>
+    </state>
+    <state name="bqstring">
+      <rule>
+        <include state="variable-or-escape"/>
+      </rule>
+      <rule pattern="[^%]+|%">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+    </state>
+    <state name="for2">
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+))(do(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))">
+        <bygroups>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[\n\x1a]+">
+        <token type="Text"/>
+      </rule>
+      <rule>
+        <include state="follow"/>
+      </rule>
+    </state>
+    <state name="label/compound">
+      <rule pattern="(?=\))">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="((?:(?:[^\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0+:^)]|\^[\n\x1a]?[^)])*)?)((?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|\^[\n\x1a]?[^)]|[^&#34;%^\n\x1a&amp;&lt;&gt;|)])*)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="CommentSingle"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="for">
+      <rule pattern="((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+))(in)((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+))(\()">
+        <bygroups>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="follow"/>
+      </rule>
+    </state>
+    <state name="redirect/compound">
+      <rule pattern="((?:(?&lt;=[\n\x1a\t\v\f\r ,;=\xa0])\d)?)(&gt;&gt;?&amp;|&lt;&amp;)([\n\x1a\t\v\f\r ,;=\xa0]*)(\d)">
+        <bygroups>
+          <token type="LiteralNumberInteger"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="LiteralNumberInteger"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((?:(?&lt;=[\n\x1a\t\v\f\r ,;=\xa0])(?&lt;!\^[\n\x1a])\d)?)(&gt;&gt;?|&lt;)((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?(?:[&amp;&lt;&gt;|]+|(?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|(?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0)])+))+))">
+        <bygroups>
+          <token type="LiteralNumberInteger"/>
+          <token type="Punctuation"/>
+          <usingself state="text"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="if">
+      <rule pattern="((?:cmdextversion|errorlevel)(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+))(\d+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="LiteralNumberInteger"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(defined(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+))((?:[&amp;&lt;&gt;|]+|(?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|(?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0])+))+))">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <usingself state="variable"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(exist(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)(?:[&amp;&lt;&gt;|]+|(?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|(?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0])+))+))">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="((?:-?(?:0[0-7]+|0x[\da-f]+|\d+)(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))(?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+))((?:equ|geq|gtr|leq|lss|neq))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)(?:-?(?:0[0-7]+|0x[\da-f]+|\d+)(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a])))">
+        <bygroups>
+          <usingself state="arithmetic"/>
+          <token type="OperatorWord"/>
+          <usingself state="arithmetic"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?:[&amp;&lt;&gt;|]+|(?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|(?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0])+))+)">
+        <usingself state="text"/>
+        <push state="#pop" state="if2"/>
+      </rule>
+    </state>
+    <state name="root/compound">
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?=((?:(?&lt;=^[^:])|^[^:]?)[\t\v\f\r ,;=\xa0]*)(:))">
+        <token type="Text"/>
+        <push state="follow/compound"/>
+      </rule>
+      <rule pattern="(?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)">
+        <usingself state="text"/>
+      </rule>
+      <rule>
+        <include state="redirect/compound"/>
+      </rule>
+      <rule pattern="[\n\x1a]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="root/compound"/>
+      </rule>
+      <rule pattern="@+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="((?:for|if|rem)(?:(?=(?:\^[\n\x1a]?)?/)|(?:(?!\^)|(?&lt;=m))(?:(?=\()|(?:(?=\))|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a])))))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?(?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0)])+)?(?:\^[\n\x1a]?)?/(?:\^[\n\x1a]?)?\?)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+        </bygroups>
+        <push state="follow/compound"/>
+      </rule>
+      <rule pattern="(goto(?:(?=\))|(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(])))((?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|[^&#34;%\n\x1a&amp;&lt;&gt;|)])*(?:\^[\n\x1a]?)?/(?:\^[\n\x1a]?)?\?(?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|[^&#34;%\n\x1a&amp;&lt;&gt;|)])*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+        </bygroups>
+        <push state="follow/compound"/>
+      </rule>
+      <rule pattern="(setlocal|endlocal|prompt|verify|rename|mklink|rmdir|shift|start|color|dpath|title|chdir|erase|pushd|ftype|break|pause|mkdir|assoc|date|path|time|popd|keys|exit|type|copy|echo|move|dir|del|ren|ver|cls|vol|rd|md|cd)(?:(?=\))|(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(]))">
+        <token type="Keyword"/>
+        <push state="follow/compound"/>
+      </rule>
+      <rule pattern="(call)((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?)(:)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="call/compound"/>
+      </rule>
+      <rule pattern="call(?:(?=\))|(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(]))">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(for(?:(?=\))|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))(?!\^))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+))(/f(?:(?=\))|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a])))">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+        </bygroups>
+        <push state="for/f" state="for"/>
+      </rule>
+      <rule pattern="(for(?:(?=\))|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))(?!\^))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+))(/l(?:(?=\))|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a])))">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+        </bygroups>
+        <push state="for/l" state="for"/>
+      </rule>
+      <rule pattern="for(?:(?=\))|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a]))(?!\^)">
+        <token type="Keyword"/>
+        <push state="for2" state="for"/>
+      </rule>
+      <rule pattern="(goto(?:(?=\))|(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(])))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?)(:?)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="label/compound"/>
+      </rule>
+      <rule pattern="(if(?:(?=\()|(?:(?=\))|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a])))(?!\^))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?)((?:/i(?:(?=\))|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a])))?)((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?)((?:not(?:(?=\))|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a])))?)((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+        </bygroups>
+        <push state="(?" state="if"/>
+      </rule>
+      <rule pattern="rem(((?=\()|(?:(?=\))|(?=\^?[\t\v\f\r ,;=\xa0]|[&amp;&lt;&gt;|\n\x1a])))(?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?(?:[&amp;&lt;&gt;|]+|(?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|(?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0])+))+)?.*|(?:(?=\))|(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(]))(?:(?:[^\n\x1a^)]|\^[\n\x1a]?[^)])*))">
+        <token type="CommentSingle"/>
+        <push state="follow/compound"/>
+      </rule>
+      <rule pattern="(set(?:(?=\))|(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(])))((?:(?:\^[\n\x1a]?)?[^\S\n])*)(/a)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+        </bygroups>
+        <push state="arithmetic/compound"/>
+      </rule>
+      <rule pattern="(set(?:(?=\))|(?=(?:\^[\n\x1a]?)?[\t\v\f\r ,;=\xa0+./:[\\\]]|[\n\x1a&amp;&lt;&gt;|(])))((?:(?:\^[\n\x1a]?)?[^\S\n])*)((?:/p)?)((?:(?:\^[\n\x1a]?)?[^\S\n])*)((?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|^=)]|\^[\n\x1a]?[^&#34;=])+)?)((?:(?:\^[\n\x1a]?)?=)?)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <token type="Keyword"/>
+          <usingself state="text"/>
+          <usingself state="variable"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="follow/compound"/>
+      </rule>
+      <rule>
+        <push state="follow/compound"/>
+      </rule>
+    </state>
+    <state name="follow/compound">
+      <rule pattern="(?=\))">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="((?:(?&lt;=^[^:])|^[^:]?)[\t\v\f\r ,;=\xa0]*)(:)([\t\v\f\r ,;=\xa0]*)((?:(?:[^\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0+:^)]|\^[\n\x1a]?[^)])*))(.*)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameLabel"/>
+          <token type="CommentSingle"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="redirect/compound"/>
+      </rule>
+      <rule pattern="(?=[\n\x1a])">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\|\|?|&amp;&amp;?">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="text"/>
+      </rule>
+    </state>
+    <state name="text">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule>
+        <include state="variable-or-escape"/>
+      </rule>
+      <rule pattern="[^&#34;%^\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0\d)]+|.">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="redirect">
+      <rule pattern="((?:(?&lt;=[\n\x1a\t\v\f\r ,;=\xa0])\d)?)(&gt;&gt;?&amp;|&lt;&amp;)([\n\x1a\t\v\f\r ,;=\xa0]*)(\d)">
+        <bygroups>
+          <token type="LiteralNumberInteger"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="LiteralNumberInteger"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((?:(?&lt;=[\n\x1a\t\v\f\r ,;=\xa0])(?&lt;!\^[\n\x1a])\d)?)(&gt;&gt;?|&lt;)((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?(?:[&amp;&lt;&gt;|]+|(?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|(?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0])+))+))">
+        <bygroups>
+          <token type="LiteralNumberInteger"/>
+          <token type="Punctuation"/>
+          <usingself state="text"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="label">
+      <rule pattern="((?:(?:[^\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0+:^]|\^[\n\x1a]?[\w\W])*)?)((?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|\^[\n\x1a]?[\w\W]|[^&#34;%^\n\x1a&amp;&lt;&gt;|])*)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="CommentSingle"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="arithmetic/compound">
+      <rule pattern="(?=\))">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="0[0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0x[\da-f]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[(),]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="([=+\-*/!~]|%|\^\^)+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="((?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|(\^[\n\x1a]?)?[^()=+\-*/!~%^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0]|\^[\n\x1a\t\v\f\r ,;=\xa0]?[^)])+">
+        <usingself state="variable"/>
+      </rule>
+      <rule pattern="(?=[\x00|&amp;])">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="follow"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\^!|%%">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^&#34;%^\n\x1a]+|[%^]">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="variable">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule>
+        <include state="variable-or-escape"/>
+      </rule>
+      <rule pattern="[^&#34;%^\n\x1a]+|.">
+        <token type="NameVariable"/>
+      </rule>
+    </state>
+    <state name="call/compound">
+      <rule pattern="(?=\))">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(:?)((?:(?:[^\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0+:^)]|\^[\n\x1a]?[^)])*))">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameLabel"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="for/f">
+      <rule pattern="(&#34;)((?:(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|[^&#34;])*?&#34;)([\n\x1a\t\v\f\r ,;=\xa0]*)(\))">
+        <bygroups>
+          <token type="LiteralStringDouble"/>
+          <usingself state="string"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="#pop" state="for2" state="string"/>
+      </rule>
+      <rule pattern="(&#39;(?:%%|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|[\w\W])*?&#39;)([\n\x1a\t\v\f\r ,;=\xa0]*)(\))">
+        <bygroups>
+          <usingself state="sqstring"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(`(?:%%|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|[\w\W])*?`)([\n\x1a\t\v\f\r ,;=\xa0]*)(\))">
+        <bygroups>
+          <usingself state="bqstring"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="for2"/>
+      </rule>
+    </state>
+    <state name="for/l">
+      <rule pattern="-?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule>
+        <include state="for2"/>
+      </rule>
+    </state>
+    <state name="if2">
+      <rule pattern="((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?)(==)((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)?(?:[&amp;&lt;&gt;|]+|(?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|(?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0])+))+))">
+        <bygroups>
+          <usingself state="text"/>
+          <token type="Operator"/>
+          <usingself state="text"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+))((?:equ|geq|gtr|leq|lss|neq))((?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)(?:[&amp;&lt;&gt;|]+|(?:(?:&#34;[^\n\x1a&#34;]*(?:&#34;|(?=[\n\x1a])))|(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))|(?:(?:(?:\^[\n\x1a]?)?[^&#34;\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0])+))+))">
+        <bygroups>
+          <usingself state="text"/>
+          <token type="OperatorWord"/>
+          <usingself state="text"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="(?">
+      <rule pattern="(?:(?:(?:\^[\n\x1a])?[\t\v\f\r ,;=\xa0])+)">
+        <usingself state="text"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="#pop" state="else?" state="root/compound"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="call">
+      <rule pattern="(:?)((?:(?:[^\n\x1a&amp;&lt;&gt;|\t\v\f\r ,;=\xa0+:^]|\^[\n\x1a]?[\w\W])*))">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameLabel"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="variable-or-escape">
+      <rule pattern="(?:(?:%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|[^%:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%\n\x1a^]|\^[^%\n\x1a])[^=\n\x1a]*=(?:[^%\n\x1a^]|\^[^%\n\x1a])*)?)?%))|(?:\^?![^!:\n\x1a]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^!\n\x1a^]|\^[^!\n\x1a])[^=\n\x1a]*=(?:[^!\n\x1a^]|\^[^!\n\x1a])*)?)?\^?!))">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="%%|\^[\n\x1a]?(\^!|[\w\W])">
+        <token type="LiteralStringEscape"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/beef.xml

@@ -0,0 +1,120 @@
+<lexer>
+  <config>
+    <name>Beef</name>
+    <alias>beef</alias>
+    <filename>*.bf</filename>
+    <mime_type>text/x-beef</mime_type>
+    <dot_all>true</dot_all>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^\s*\[.*?\]">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="///[^\n\r]*">
+        <token type="CommentSpecial"/>
+      </rule>
+      <rule pattern="//[^\n\r]*">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/[*].*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*()+=|\[\]:;,.&lt;&gt;/?-]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[{}]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="@&#34;(&#34;&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\$@?&#34;(&#34;&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;\n])*[&#34;\n]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;\\.&#39;|&#39;[^\\]&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]+[Ll]?|\d[_\d]*(\.\d*)?([eE][+-]?\d+)?[flFLdD]?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="#[ \t]*(if|endif|else|elif|define|undef|line|error|warning|region|endregion|pragma|nullable)\b">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\b(extern)(\s+)(alias)\b">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(as|await|base|break|by|case|catch|checked|continue|default|delegate|else|event|finally|fixed|for|repeat|goto|if|in|init|is|let|lock|new|scope|on|out|params|readonly|ref|return|sizeof|stackalloc|switch|this|throw|try|typeof|unchecked|virtual|void|while|get|set|new|yield|add|remove|value|alias|ascending|descending|from|group|into|orderby|select|thenby|where|join|equals)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(global)(::)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(abstract|async|const|enum|explicit|extern|implicit|internal|operator|override|partial|extension|private|protected|public|static|sealed|unsafe|volatile)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(bool|byte|char8|char16|char32|decimal|double|float|int|int8|int16|int32|int64|long|object|sbyte|short|string|uint|uint8|uint16|uint32|uint64|uint|let|var)\b\??">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(class|struct|record|interface)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="class"/>
+      </rule>
+      <rule pattern="(namespace|using)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="namespace"/>
+      </rule>
+      <rule pattern="@?[_a-zA-Z]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="class">
+      <rule pattern="@?[_a-zA-Z]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="namespace">
+      <rule pattern="(?=\()">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(@?[_a-zA-Z]\w*|\.)+">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/bibtex.xml

@@ -0,0 +1,152 @@
+<lexer>
+  <config>
+    <name>BibTeX</name>
+    <alias>bib</alias>
+    <alias>bibtex</alias>
+    <filename>*.bib</filename>
+    <mime_type>text/x-bibtex</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="closing-brace">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="[})]">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="braced-string">
+      <rule pattern="\{">
+        <token type="LiteralString"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\{\}]+">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="value">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="[a-z_@!$&amp;*+\-./:;&lt;&gt;?\[\\\]^`|~][\w@!$&amp;*+\-./:;&lt;&gt;?\[\\\]^`|~]*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="quoted-string"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="LiteralString"/>
+        <push state="braced-string"/>
+      </rule>
+      <rule pattern="[\d]+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="#">
+        <token type="Punctuation"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="quoted-string">
+      <rule pattern="\{">
+        <token type="LiteralString"/>
+        <push state="braced-string"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\{\&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="@comment">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="@preamble">
+        <token type="NameClass"/>
+        <push state="closing-brace" state="value" state="opening-brace"/>
+      </rule>
+      <rule pattern="@string">
+        <token type="NameClass"/>
+        <push state="closing-brace" state="field" state="opening-brace"/>
+      </rule>
+      <rule pattern="@[a-z_@!$&amp;*+\-./:;&lt;&gt;?\[\\\]^`|~][\w@!$&amp;*+\-./:;&lt;&gt;?\[\\\]^`|~]*">
+        <token type="NameClass"/>
+        <push state="closing-brace" state="command-body" state="opening-brace"/>
+      </rule>
+      <rule pattern=".+">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="command-body">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="[^\s\,\}]+">
+        <token type="NameLabel"/>
+        <push state="#pop" state="fields"/>
+      </rule>
+    </state>
+    <state name="fields">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+        <push state="field"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="=">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="=">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="field">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="[a-z_@!$&amp;*+\-./:;&lt;&gt;?\[\\\]^`|~][\w@!$&amp;*+\-./:;&lt;&gt;?\[\\\]^`|~]*">
+        <token type="NameAttribute"/>
+        <push state="value" state="="/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="opening-brace">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="[{(]">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/bicep.xml

@@ -0,0 +1,84 @@
+<lexer>
+  <config>
+    <name>Bicep</name>
+    <alias>bicep</alias>
+    <filename>*.bicep</filename>
+  </config>
+  <rules>
+    <state name="interp">
+      <rule pattern="'">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\$\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interp-inside"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[^'\\$]+">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="interp-inside">
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="commentsandwhitespace">
+      <rule pattern="//[^\n\r]+">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="'''.*?'''">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="'">
+        <token type="LiteralString"/>
+        <push state="interp"/>
+      </rule>
+      <rule pattern="#[\w-]+\b">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="[\w_]+(?=\()">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="\b(metadata|targetScope|resource|module|param|var|output|for|in|if|existing|import|as|type|with|using|func|assert)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="\b(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(&gt;=|&gt;|&lt;=|&lt;|==|!=|=~|!~|::|&amp;&amp;|\?\?|!|-|%|\*|\/|\+)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(\(|\)|\[|\]|\.|:|\?|{|}|@|,|\||=&gt;|=)">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[\w_]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/blitzbasic.xml

@@ -0,0 +1,141 @@
+<lexer>
+  <config>
+    <name>BlitzBasic</name>
+    <alias>blitzbasic</alias>
+    <alias>b3d</alias>
+    <alias>bplus</alias>
+    <filename>*.bb</filename>
+    <filename>*.decls</filename>
+    <mime_type>text/x-bb</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="&#34;&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;C?">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^&#34;]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[ \t]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=";.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="[0-9]+\.[0-9]*(?!\.)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\.[0-9]+(?!\.)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\$[0-9a-f]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\%[10]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="\b(Before|Handle|After|First|Float|Last|Sgn|Abs|Not|And|Int|Mod|Str|Sar|Shr|Shl|Or)\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="([+\-*/~=&lt;&gt;^])">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[(),:\[\]\\]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\.([ \t]*)([a-z]\w*)">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="\b(New)\b([ \t]+)([a-z]\w*)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(Gosub|Goto)\b([ \t]+)([a-z]\w*)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+          <token type="NameLabel"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(Object)\b([ \t]*)([.])([ \t]*)([a-z]\w*)\b">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b([a-z]\w*)(?:([ \t]*)(@{1,2}|[#$%])|([ \t]*)([.])([ \t]*)(?:([a-z]\w*)))?\b([ \t]*)(\()">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(Function)\b([ \t]+)([a-z]\w*)(?:([ \t]*)(@{1,2}|[#$%])|([ \t]*)([.])([ \t]*)(?:([a-z]\w*)))?">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(Type)([ \t]+)([a-z]\w*)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(Pi|True|False|Null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="\b(Local|Global|Const|Field|Dim)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="\b(Function|Restore|Default|Forever|Include|Return|Repeat|ElseIf|Delete|Insert|Select|EndIf|Until|While|Gosub|Type|Goto|Else|Data|Next|Step|Each|Case|Wend|Exit|Read|Then|For|New|Asc|Len|Chr|End|To|If)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="([a-z]\w*)(?:([ \t]*)(@{1,2}|[#$%])|([ \t]*)([.])([ \t]*)(?:([a-z]\w*)))?">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/bnf.xml

@@ -0,0 +1,28 @@
+<lexer>
+  <config>
+    <name>BNF</name>
+    <alias>bnf</alias>
+    <filename>*.bnf</filename>
+    <mime_type>text/x-bnf</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(&lt;)([ -;=?-~]+)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameClass"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="::=">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[^&lt;&gt;:]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/bqn.xml

@@ -0,0 +1,83 @@
+<lexer>
+  <config>
+    <name>BQN</name>
+    <alias>bqn</alias>
+    <filename>*.bqn</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="\A#!.+$">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="#.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="&quot;(?:[^&quot;]|&quot;&quot;)*&quot;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[{}]">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="[⟨⟩\[\]‿]">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="[()]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[:;?]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[⋄,]">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="[←⇐↩→]">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="'.'">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="[˙˜˘¨⌜⁼´˝`]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[∘○⊸⟜⌾⊘◶⎉⚇⍟⎊]">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="[𝔽𝔾𝕎𝕏𝕊+\-×÷⋆√⌊⌈|¬∧∨&lt;&gt;≠=≤≥≡≢⊣⊢⥊∾≍⋈↑↓↕«»⌽⍉/⍋⍒⊏⊑⊐⊒∊⍷⊔!⍕⍎]">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="[𝕗𝕘𝕨𝕩𝕤]">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="·">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="@">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="\d+(?:\.\d+)?[eE]¯?\d+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[¯∞π]?(?:\d*\.?\b\d+(?:e[+¯]?\d+|E[+¯]?\d+)?|¯|∞|π)(?:j¯?(?:(?:\d+(?:\.\d+)?|\.\d+)(?:e[+¯]?\d+|E[+¯]?\d+)?|¯|∞|π))?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="(•?[a-z][A-Z_a-z0-9π∞¯]*|𝕣)">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="•?[A-Z][A-Z_a-z0-9π∞¯]*">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="(•?_[A-Za-z][A-Z_a-z0-9π∞¯]*|_𝕣)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(•?_[A-Za-z][A-Z_a-z0-9π∞¯]*_|_𝕣_)">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="\.">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/brainfuck.xml

@@ -0,0 +1,51 @@
+<lexer>
+  <config>
+    <name>Brainfuck</name>
+    <alias>brainfuck</alias>
+    <alias>bf</alias>
+    <filename>*.bf</filename>
+    <filename>*.b</filename>
+    <mime_type>application/x-brainfuck</mime_type>
+  </config>
+  <rules>
+    <state name="common">
+      <rule pattern="[.,]+">
+        <token type="NameTag"/>
+      </rule>
+      <rule pattern="[+-]+">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[&lt;&gt;]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[^.,+\-&lt;&gt;\[\]]+">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\[">
+        <token type="Keyword"/>
+        <push state="loop"/>
+      </rule>
+      <rule pattern="\]">
+        <token type="Error"/>
+      </rule>
+      <rule>
+        <include state="common"/>
+      </rule>
+    </state>
+    <state name="loop">
+      <rule pattern="\[">
+        <token type="Keyword"/>
+        <push/>
+      </rule>
+      <rule pattern="\]">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="common"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/c#.xml

@@ -0,0 +1,121 @@
+<lexer>
+  <config>
+    <name>C#</name>
+    <alias>csharp</alias>
+    <alias>c#</alias>
+    <filename>*.cs</filename>
+    <mime_type>text/x-csharp</mime_type>
+    <dot_all>true</dot_all>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^\s*\[.*?\]">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="///[^\n\r]*">
+        <token type="CommentSpecial"/>
+      </rule>
+      <rule pattern="//[^\n\r]*">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/[*].*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*()+=|\[\]:;,.&lt;&gt;/?-]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[{}]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="@&#34;(&#34;&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\$@?&#34;(&#34;&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;\n])*[&#34;\n]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;\\.&#39;|&#39;[^\\]&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]+[Ll]?|\d[_\d]*(\.\d*)?([eE][+-]?\d+)?[flFLdD]?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="#[ \t]*(if|endif|else|elif|define|undef|line|error|warning|region|endregion|pragma|nullable)\b">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\b(extern)(\s+)(alias)\b">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(as|await|base|break|by|case|catch|checked|continue|default|delegate|do|else|event|finally|fixed|for|foreach|goto|if|in|init|is|let|lock|new|on|out|params|readonly|ref|return|sizeof|stackalloc|switch|this|throw|try|typeof|unchecked|virtual|void|while|get|set|new|yield|add|remove|value|alias|ascending|descending|from|group|into|orderby|select|thenby|where|join|equals)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(global)(::)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(abstract|async|const|enum|explicit|extern|implicit|internal|operator|override|partial|private|protected|public|static|sealed|unsafe|volatile)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(bool|byte|char|decimal|double|dynamic|float|int|long|object|sbyte|short|string|uint|ulong|ushort|var)\b\??">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(class|struct|record|interface)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="class"/>
+      </rule>
+      <rule pattern="(namespace|using)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="namespace"/>
+      </rule>
+      <rule pattern="@?[_a-zA-Z]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="class">
+      <rule pattern="@?[_a-zA-Z]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="namespace">
+      <rule pattern="(?=\()">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(@?[_a-zA-Z]\w*|\.)+">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/c++.xml

@@ -0,0 +1,331 @@
+<lexer>
+  <config>
+    <name>C++</name>
+    <alias>cpp</alias>
+    <alias>c++</alias>
+    <filename>*.cpp</filename>
+    <filename>*.hpp</filename>
+    <filename>*.c++</filename>
+    <filename>*.h++</filename>
+    <filename>*.cc</filename>
+    <filename>*.hh</filename>
+    <filename>*.cxx</filename>
+    <filename>*.hxx</filename>
+    <filename>*.C</filename>
+    <filename>*.H</filename>
+    <filename>*.cp</filename>
+    <filename>*.CPP</filename>
+    <filename>*.tpp</filename>
+    <mime_type>text/x-c++hdr</mime_type>
+    <mime_type>text/x-c++src</mime_type>
+    <ensure_nl>true</ensure_nl>
+    <analyse first="true">
+      <regex pattern="#include &lt;[a-z_]+>" score="0.2" />
+      <regex pattern="using namespace " score="0.4" />
+    </analyse>
+  </config>
+  <rules>
+    <state name="classname">
+      <rule pattern="(\[\[.+\]\])(\s*)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\s*(?=[&gt;{])">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="^#if\s+0">
+        <token type="CommentPreproc"/>
+        <push state="if0"/>
+      </rule>
+      <rule pattern="^#">
+        <token type="CommentPreproc"/>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#if\s+0)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <push state="if0"/>
+      </rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//(\n|[\w\W]*?[^\\]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="macro">
+      <rule pattern="(include)(\s+)(&quot;[^&quot;]+?&quot;|&lt;[^&gt;]+?&gt;)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="CommentPreprocFile"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^/\n]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="/[*](.|\n)*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="/">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(?&lt;=\\)\n">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="statements">
+      <rule pattern="(reinterpret_cast|static_assert|thread_local|dynamic_cast|static_cast|const_cast|co_return|protected|namespace|consteval|constexpr|typename|co_await|co_yield|operator|restrict|explicit|template|override|noexcept|requires|decltype|alignof|private|alignas|virtual|mutable|nullptr|concept|export|friend|typeid|throws|public|delete|final|throw|catch|using|this|new|try)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(enum)\b(\s+)(class)\b(\s*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="classname"/>
+      </rule>
+      <rule pattern="(class|struct|enum|union)\b(\s*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="classname"/>
+      </rule>
+      <rule pattern="\[\[.+\]\]">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="(R)(&#34;)([^\\()\s]{,16})(\()((?:.|\n)*?)(\)\3)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralString"/>
+          <token type="LiteralStringDelimiter"/>
+          <token type="LiteralStringDelimiter"/>
+          <token type="LiteralString"/>
+          <token type="LiteralStringDelimiter"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(u8|u|U)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralString"/>
+        </bygroups>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(L?)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralString"/>
+        </bygroups>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(L?)(&#39;)(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\&#39;\n])(&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringChar"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[LlUu]*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+[fF])[fF]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[xX]([0-9A-Fa-f](&#39;?[0-9A-Fa-f]+)*)[LlUu]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0(&#39;?[0-7]+)+[LlUu]*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[Bb][01](&#39;?[01]+)*[LlUu]*">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="[0-9](&#39;?[0-9]+)*[LlUu]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="Error"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[()\[\],.]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(restricted|volatile|continue|register|default|typedef|struct|extern|switch|sizeof|static|return|union|while|const|break|goto|enum|else|case|auto|for|asm|if|do)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(bool|int|long|float|short|double|char((8|16|32)_t)?|wchar_t|unsigned|signed|void|u?int(_fast|_least|)(8|16|32|64)_t)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(typename|__inline|restrict|_inline|thread|inline|naked)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(__m(128i|128d|128|64))\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="__(forceinline|identifier|unaligned|declspec|fastcall|stdcall|finally|except|assume|int32|cdecl|int64|based|leave|int16|raise|noop|int8|w64|try|asm)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(true|false|NULL)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*)(\s*)(:)(?!:)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="function">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\([\\abfnrtv&#34;\&#39;]|x[a-fA-F0-9]{2,4}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="if0">
+      <rule pattern="^\s*#if.*?(?&lt;!\\)\n">
+        <token type="CommentPreproc"/>
+        <push/>
+      </rule>
+      <rule pattern="^\s*#el(?:se|if).*\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="^\s*#endif.*?(?&lt;!\\)\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".*?\n">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="((?:[\w*\s])+?(?:\s|[*]))([a-zA-Z_]\w*)(\s*\([^;]*?\))([^;{]*)(\{)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <usingself state="root"/>
+          <usingself state="root"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="function"/>
+      </rule>
+      <rule pattern="((?:[\w*\s])+?(?:\s|[*]))([a-zA-Z_]\w*)(\s*\([^;]*?\))([^;]*)(;)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <usingself state="root"/>
+          <usingself state="root"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <push state="statement"/>
+      </rule>
+      <rule pattern="__(multiple_inheritance|virtual_inheritance|single_inheritance|interface|uuidof|super|event)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="__(offload|blockingoffload|outer)\b">
+        <token type="KeywordPseudo"/>
+      </rule>
+    </state>
+    <state name="statement">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule pattern="[{]">
+        <token type="Punctuation"/>
+        <push state="root"/>
+      </rule>
+      <rule pattern="[;}]">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/c.xml

@@ -0,0 +1,260 @@
+<lexer>
+  <config>
+    <name>C</name>
+    <alias>c</alias>
+    <filename>*.c</filename>
+    <filename>*.h</filename>
+    <filename>*.idc</filename>
+    <filename>*.x[bp]m</filename>
+    <mime_type>text/x-chdr</mime_type>
+    <mime_type>text/x-csrc</mime_type>
+    <mime_type>image/x-xbitmap</mime_type>
+    <mime_type>image/x-xpixmap</mime_type>
+    <ensure_nl>true</ensure_nl>
+    <analyse first="true" >
+      <regex pattern="(?m)^\s*#include &lt;" score="0.1" />
+      <regex pattern="(?m)^\s*#ifn?def " score="0.1" />
+    </analyse>
+  </config>
+  <rules>
+    <state name="statement">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule pattern="[{}]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="function">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\([\\abfnrtv&#34;\&#39;]|x[a-fA-F0-9]{2,4}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="macro">
+      <rule pattern="(include)(\s*(?:/[*].*?[*]/\s*)?)([^\n]+)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="CommentPreprocFile"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^/\n]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="/[*](.|\n)*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="/">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(?&lt;=\\)\n">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="if0">
+      <rule pattern="^\s*#if.*?(?&lt;!\\)\n">
+        <token type="CommentPreproc"/>
+        <push/>
+      </rule>
+      <rule pattern="^\s*#el(?:se|if).*\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="^\s*#endif.*?(?&lt;!\\)\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".*?\n">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="^#if\s+0">
+        <token type="CommentPreproc"/>
+        <push state="if0"/>
+      </rule>
+      <rule pattern="^#">
+        <token type="CommentPreproc"/>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#if\s+0)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <push state="if0"/>
+      </rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//(\n|[\w\W]*?[^\\]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="statements">
+      <rule pattern="(L?)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralString"/>
+        </bygroups>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(L?)(&#39;)(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\&#39;\n])(&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringChar"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[LlUu]*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+[fF])[fF]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+[LlUu]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[0-7]+[LlUu]*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="\d+[LlUu]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="Error"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[()\[\],.]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(restricted|volatile|continue|register|default|typedef|struct|extern|switch|sizeof|static|return|union|while|const|break|goto|enum|else|case|auto|for|asm|if|do)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(bool|int|long|float|short|double|char((8|16|32)_t)?|unsigned|signed|void|u?int(_fast|_least|)(8|16|32|64)_t)\b|\b[a-z]\w*_t\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(typename|__inline|restrict|_inline|thread|inline|naked)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(__m(128i|128d|128|64))\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="__(forceinline|identifier|unaligned|declspec|fastcall|finally|stdcall|wchar_t|assume|except|int32|cdecl|int16|leave|based|raise|int64|noop|int8|w64|try|asm)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(true|false|NULL)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*)(\s*)(:)(?!:)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b[A-Za-z_]\w*(?=\s*\()">
+          <token type="NameFunction"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="((?:[\w*\s])+?(?:\s|[*]))([a-zA-Z_]\w*)(\s*\([^;]*?\))([^;{]*)(\{)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <usingself state="root"/>
+          <usingself state="root"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="function"/>
+      </rule>
+      <rule pattern="((?:[\w*\s])+?(?:\s|[*]))([a-zA-Z_]\w*)(\s*\([^;]*?\))([^;]*)(;)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <usingself state="root"/>
+          <usingself state="root"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <push state="statement"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cap_n_proto.xml 🔗

@@ -0,0 +1,122 @@
+<lexer>
+  <config>
+    <name>Cap&#39;n Proto</name>
+    <alias>capnp</alias>
+    <filename>*.capnp</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="#.*?$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="@[0-9a-zA-Z]*">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="=">
+        <token type="Literal"/>
+        <push state="expression"/>
+      </rule>
+      <rule pattern=":">
+        <token type="NameClass"/>
+        <push state="type"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="NameAttribute"/>
+        <push state="annotation"/>
+      </rule>
+      <rule pattern="(struct|enum|interface|union|import|using|const|annotation|extends|in|of|on|as|with|from|fixed)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="[\w.]+">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[^#@=:$\w]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="type">
+      <rule pattern="[^][=;,(){}$]+">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="[[(]">
+        <token type="NameClass"/>
+        <push state="parentype"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="parentype">
+      <rule pattern="[^][;()]+">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="[[(]">
+        <token type="NameClass"/>
+        <push/>
+      </rule>
+      <rule pattern="[])]">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="expression">
+      <rule pattern="[^][;,(){}$]+">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="[[(]">
+        <token type="Literal"/>
+        <push state="parenexp"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="parenexp">
+      <rule pattern="[^][;()]+">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="[[(]">
+        <token type="Literal"/>
+        <push/>
+      </rule>
+      <rule pattern="[])]">
+        <token type="Literal"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="annotation">
+      <rule pattern="[^][;,(){}=:]+">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="[[(]">
+        <token type="NameAttribute"/>
+        <push state="annexp"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="annexp">
+      <rule pattern="[^][;()]+">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="[[(]">
+        <token type="NameAttribute"/>
+        <push/>
+      </rule>
+      <rule pattern="[])]">
+        <token type="NameAttribute"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cassandra_cql.xml 🔗

@@ -0,0 +1,137 @@
+<lexer>
+  <config>
+    <name>Cassandra CQL</name>
+    <alias>cassandra</alias>
+    <alias>cql</alias>
+    <filename>*.cql</filename>
+    <mime_type>text/x-cql</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="[^&#39;]+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="&#39;&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="quoted-ident">
+      <rule pattern="[^&#34;]+">
+        <token type="LiteralStringName"/>
+      </rule>
+      <rule pattern="&#34;&#34;">
+        <token type="LiteralStringName"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringName"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="dollar-string">
+      <rule pattern="[^\$]+">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+      <rule pattern="\$\$">
+        <token type="LiteralStringHeredoc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="(--|\/\/).*\n?">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="multiline-comments"/>
+      </rule>
+      <rule pattern="(ascii|bigint|blob|boolean|counter|date|decimal|double|float|frozen|inet|int|list|map|set|smallint|text|time|timestamp|timeuuid|tinyint|tuple|uuid|varchar|varint)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(DURABLE_WRITES|LOCAL_QUORUM|MATERIALIZED|COLUMNFAMILY|REPLICATION|NORECURSIVE|NOSUPERUSER|PERMISSIONS|EACH_QUORUM|CONSISTENCY|PERMISSION|CLUSTERING|WRITETIME|SUPERUSER|KEYSPACES|AUTHORIZE|LOCAL_ONE|AGGREGATE|FINALFUNC|PARTITION|FILTERING|UNLOGGED|CONTAINS|DISTINCT|FUNCTION|LANGUAGE|INFINITY|INITCOND|TRUNCATE|KEYSPACE|PASSWORD|REPLACE|OPTIONS|TRIGGER|STORAGE|ENTRIES|RETURNS|COMPACT|PRIMARY|EXISTS|STATIC|PAGING|UPDATE|CUSTOM|VALUES|INSERT|DELETE|MODIFY|CREATE|SELECT|SCHEMA|LOGGED|REVOKE|RENAME|QUORUM|CALLED|STYPE|ORDER|ALTER|BATCH|BEGIN|COUNT|ROLES|APPLY|WHERE|SFUNC|LEVEL|INPUT|LOGIN|INDEX|TABLE|THREE|ALLOW|TOKEN|LIMIT|USING|USERS|GRANT|FROM|KEYS|JSON|USER|INTO|ROLE|TYPE|VIEW|DESC|WITH|DROP|FULL|ASC|TTL|OFF|PER|KEY|USE|ADD|NAN|ONE|ALL|ANY|TWO|AND|NOT|AS|IN|IF|OF|IS|ON|TO|BY|OR)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="[+*/&lt;&gt;=~!@#%^&amp;|`?-]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(?s)(java|javascript)(\s+)(AS)(\s+)(&#39;|\$\$)(.*?)(\5)">
+        <usingbygroup>
+          <sublexer_name_group>1</sublexer_name_group>
+          <code_group>6</code_group>
+          <emitters>
+            <token type="NameBuiltin"/>
+            <token type="TextWhitespace"/>
+            <token type="Keyword"/>
+            <token type="TextWhitespace"/>
+            <token type="LiteralStringHeredoc"/>
+            <token type="LiteralStringHeredoc"/>
+            <token type="LiteralStringHeredoc"/>
+          </emitters>
+        </usingbygroup>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="0x[0-9a-f]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\.[0-9]+(e[+-]?[0-9]+)?">
+        <token type="Error"/>
+      </rule>
+      <rule pattern="-?[0-9]+(\.[0-9])?(e[+-]?[0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringName"/>
+        <push state="quoted-ident"/>
+      </rule>
+      <rule pattern="\$\$">
+        <token type="LiteralStringHeredoc"/>
+        <push state="dollar-string"/>
+      </rule>
+      <rule pattern="[a-z_]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern=":([&#39;&#34;]?)[a-z]\w*\b\1">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[;:()\[\]\{\},.]">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="multiline-comments">
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="multiline-comments"/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^/*]+">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="[/*]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ceylon.xml 🔗

@@ -0,0 +1,151 @@
+<lexer>
+  <config>
+    <name>Ceylon</name>
+    <alias>ceylon</alias>
+    <filename>*.ceylon</filename>
+    <mime_type>text/x-ceylon</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="class">
+      <rule pattern="[A-Za-z_]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="import">
+      <rule pattern="[a-z][\w.]*">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^*/]">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[*/]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="^(\s*(?:[a-zA-Z_][\w.\[\]]*\s+)+?)([a-zA-Z_]\w*)(\s*)(\()">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="(shared|abstract|formal|default|actual|variable|deprecated|small|late|literal|doc|by|see|throws|optional|license|tagged|final|native|annotation|sealed)\b">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="(break|case|catch|continue|else|finally|for|in|if|return|switch|this|throw|try|while|is|exists|dynamic|nonempty|then|outer|assert|let)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(abstracts|extends|satisfies|super|given|of|out|assign)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(function|value|void|new)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(assembly|module|package)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(class|interface|object|alias)(\s+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="class"/>
+      </rule>
+      <rule pattern="(import)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="import"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;\\.&#39;|&#39;[^\\]&#39;|&#39;\\\{#[0-9a-fA-F]{4}\}&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="&#34;.*``.*``.*&#34;">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="(\.)([a-z_]\w*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameAttribute"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*:">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[~^*!%&amp;\[\](){}&lt;&gt;|+=:;,./?-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\d{1,3}(_\d{3})+\.\d{1,3}(_\d{3})+[kMGTPmunpf]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d{1,3}(_\d{3})+\.[0-9]+([eE][+-]?[0-9]+)?[kMGTPmunpf]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9][0-9]*\.\d{1,3}(_\d{3})+[kMGTPmunpf]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9][0-9]*\.[0-9]+([eE][+-]?[0-9]+)?[kMGTPmunpf]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="#([0-9a-fA-F]{4})(_[0-9a-fA-F]{4})+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="#[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\$([01]{4})(_[01]{4})+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="\$[01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="\d{1,3}(_\d{3})+[kMGTP]?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[0-9]+[kMGTP]?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cfengine3.xml 🔗

@@ -0,0 +1,197 @@
+<lexer>
+  <config>
+    <name>CFEngine3</name>
+    <alias>cfengine3</alias>
+    <alias>cf3</alias>
+    <filename>*.cf</filename>
+  </config>
+  <rules>
+    <state name="interpol">
+      <rule pattern="\$[{(]">
+        <token type="LiteralStringInterpol"/>
+        <push/>
+      </rule>
+      <rule pattern="[})]">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^${()}]+">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+    </state>
+    <state name="arglist">
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\w+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="#.*?\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="^@.*?\n">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(body)(\s+)(\S+)(\s+)(control)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(body|bundle|promise)(\s+)(\S+)(\s+)(\w+)(\()">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="arglist"/>
+      </rule>
+      <rule pattern="(body|bundle|promise)(\s+)(\S+)(\s+)(\w+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\S+)(\s*)(=&gt;)(\s*)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([\w.!&amp;|()&#34;&#36;]+)(::)">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="doublequotestring"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <push state="singlequotestring"/>
+      </rule>
+      <rule pattern="&#96;">
+        <token type="LiteralString"/>
+        <push state="backtickstring"/>
+      </rule>
+      <rule pattern="(\w+)(\()">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\w+)(:)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="@[{(][^)}]+[})]">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\$[(][^)]+[)]">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[(){},;]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="=&gt;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="-&gt;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\d+\.\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\w+">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="doublequotestring">
+      <rule pattern="\$[{(]">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpol"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern=".">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="singlequotestring">
+      <rule pattern="\$[{(]">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpol"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern=".">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="backtickstring">
+      <rule pattern="\$[{(]">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpol"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&#96;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern=".">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cfstatement.xml 🔗

@@ -0,0 +1,92 @@
+<lexer>
+  <config>
+    <name>cfstatement</name>
+    <alias>cfs</alias>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*(?:.|\n)*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\+\+|--">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[-+*/^&amp;=!]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&lt;=|&gt;=|&lt;|&gt;|==">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="mod\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(eq|lt|gt|lte|gte|not|is|and|or)\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\|\||&amp;&amp;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="&#39;.*?&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="(if|else|len|var|xml|default|break|switch|component|property|function|do|try|catch|in|continue|for|return|while|required|any|array|binary|boolean|component|date|guid|numeric|query|string|struct|uuid|case)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(application|session|client|cookie|super|this|variables|arguments)\b">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="([a-z_$][\w.]*)(\s*)(\()">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-z_$][\w.]*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[()\[\]{};:,.\\]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="#.+?#">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="[^&#34;#]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="#">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/chaiscript.xml 🔗

@@ -0,0 +1,134 @@
+<lexer>
+  <config>
+    <name>ChaiScript</name>
+    <alias>chai</alias>
+    <alias>chaiscript</alias>
+    <filename>*.chai</filename>
+    <mime_type>text/x-chaiscript</mime_type>
+    <mime_type>application/x-chaiscript</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="dqstring">
+      <rule pattern="\$\{[^&#34;}]+?\}">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\\\\">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\\&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="[^\\&#34;$]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="commentsandwhitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="^\#.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+    </state>
+    <state name="slashstartsregex">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/([gim]+\b|\B)">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?=/)">
+        <token type="Text"/>
+        <push state="#pop" state="badregex"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="badregex">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\+\+|--|~|&amp;&amp;|\?|:|\|\||\\(?=\n)|\.\.(&lt;&lt;|&gt;&gt;&gt;?|==?|!=?|[-&lt;&gt;+*%&amp;|^/])=?">
+        <token type="Operator"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[{(\[;,]">
+        <token type="Punctuation"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[})\].]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[=+\-*/]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(for|in|while|do|break|return|continue|if|else|throw|try|catch)\b">
+        <token type="Keyword"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(var)\b">
+        <token type="KeywordDeclaration"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(attr|def|fun)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(true|false)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(eval|throw)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="`\S+`">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[$a-zA-Z_]\w*">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="dqstring"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/chapel.xml 🔗

@@ -0,0 +1,143 @@
+<lexer>
+  <config>
+    <name>Chapel</name>
+    <alias>chapel</alias>
+    <alias>chpl</alias>
+    <filename>*.chpl</filename>
+  </config>
+  <rules>
+    <state name="procname">
+      <rule pattern="([a-zA-Z_][.\w$]*|\~[a-zA-Z_][.\w$]*|[+*/!~%&lt;&gt;=&amp;^|\-:]{1,2})">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="receivertype"/>
+      </rule>
+      <rule pattern="\)+\.">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="receivertype">
+      <rule pattern="(unmanaged|borrowed|atomic|single|shared|owned|sync)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(complex|nothing|opaque|string|locale|bytes|range|imag|real|bool|uint|void|int)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="[^()]*">
+        <token type="NameOther"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//(.*?)\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*](.|\n)*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="(config|const|inout|param|type|out|ref|var|in)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(false|none|true|nil)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(complex|nothing|opaque|string|locale|bytes|range|imag|real|bool|uint|void|int)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(implements|forwarding|prototype|otherwise|subdomain|primitive|unmanaged|override|borrowed|lifetime|coforall|continue|private|require|dmapped|cobegin|foreach|lambda|sparse|shared|domain|pragma|reduce|except|export|extern|throws|forall|delete|return|noinit|single|import|select|public|inline|serial|atomic|defer|break|local|index|throw|catch|label|begin|where|while|align|yield|owned|only|this|sync|with|scan|else|enum|init|when|then|let|for|try|use|new|zip|if|by|as|on|do)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(iter)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+        <push state="procname"/>
+      </rule>
+      <rule pattern="(proc)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+        <push state="procname"/>
+      </rule>
+      <rule pattern="(operator)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+        <push state="procname"/>
+      </rule>
+      <rule pattern="(class|interface|module|record|union)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+        <push state="classname"/>
+      </rule>
+      <rule pattern="\d+i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+\.\d*([Ee][-+]\d+)?i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\.\d+([Ee][-+]\d+)?i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+[Ee][-+]\d+i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="(\d*\.\d+)([eE][+-]?[0-9]+)?i?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+[eE][+-]?[0-9]+i?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[bB][01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[oO][0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(=|\+=|-=|\*=|/=|\*\*=|%=|&amp;=|\|=|\^=|&amp;&amp;=|\|\|=|&lt;&lt;=|&gt;&gt;=|&lt;=&gt;|&lt;~&gt;|\.\.|by|#|\.\.\.|&amp;&amp;|\|\||!|&amp;|\||\^|~|&lt;&lt;|&gt;&gt;|==|!=|&lt;=|&gt;=|&lt;|&gt;|[+\-*/%]|\*\*)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[:;,.?()\[\]{}]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[a-zA-Z_][\w$]*">
+        <token type="NameOther"/>
+      </rule>
+    </state>
+    <state name="classname">
+      <rule pattern="[a-zA-Z_][\w$]*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cheetah.xml

@@ -0,0 +1,55 @@
+<lexer>
+  <config>
+    <name>Cheetah</name>
+    <alias>cheetah</alias>
+    <alias>spitfire</alias>
+    <filename>*.tmpl</filename>
+    <filename>*.spt</filename>
+    <mime_type>application/x-cheetah</mime_type>
+    <mime_type>application/x-spitfire</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(##[^\n]*)$">
+        <bygroups>
+          <token type="Comment"/>
+        </bygroups>
+      </rule>
+      <rule pattern="#[*](.|\n)*?[*]#">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="#end[^#\n]*(?:#|$)">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="#slurp$">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(#[a-zA-Z]+)([^#\n]*)(#|$)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <using lexer="Python"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\$)([a-zA-Z_][\w.]*\w)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <using lexer="Python"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\$\{!?)(.*?)(\})(?s)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <using lexer="Python"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?sx)&#xA;                (.+?)               # anything, followed by:&#xA;                (?:&#xA;                 (?=\#[#a-zA-Z]*) | # an eval comment&#xA;                 (?=\$[a-zA-Z_{]) | # a substitution&#xA;                 \Z                 # end of string&#xA;                )&#xA;            ">
+        <token type="Other"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/clojure.xml

@@ -0,0 +1,71 @@
+<lexer>
+  <config>
+    <name>Clojure</name>
+    <alias>clojure</alias>
+    <alias>clj</alias>
+    <alias>edn</alias>
+    <filename>*.clj</filename>
+    <filename>*.edn</filename>
+    <mime_type>text/x-clojure</mime_type>
+    <mime_type>application/x-clojure</mime_type>
+    <mime_type>application/edn</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern=";.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="[,\s]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="-?\d+\.\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="-?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="0x-?[abcdef\d]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;(?!#)[\w!$%*+&lt;=&gt;?/.#-]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="\\(.|[a-z]+)">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="::?#?(?!#)[\w!$%*+&lt;=&gt;?/.#-]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="~@|[`\&#39;#^~&amp;@]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(quote|loop|new|var|let|def|if|do|fn|\.) ">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(definterface|defprotocol|defproject|defstruct|definline|defmethod|defrecord|defmulti|defmacro|defonce|declare|deftype|defn-|def-|defn|ns) ">
+        <token type="KeywordDeclaration"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cmake.xml

@@ -0,0 +1,90 @@
+<lexer>
+  <config>
+    <name>CMake</name>
+    <alias>cmake</alias>
+    <filename>*.cmake</filename>
+    <filename>CMakeLists.txt</filename>
+    <mime_type>text/x-cmake</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\b(\w+)([ \t]*)(\()">
+        <bygroups>
+          <token type="NameBuiltin"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="args"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule>
+        <include state="ws"/>
+      </rule>
+    </state>
+    <state name="args">
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(\$\{)(.+?)(\})">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameVariable"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\$ENV\{)(.+?)(\})">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameVariable"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\$&lt;)(.+?)(&gt;)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameVariable"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?s)&#34;.*?&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\\\S+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[^)$&#34;# \t\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule>
+        <include state="ws"/>
+      </rule>
+    </state>
+    <state name="string"/>
+    <state name="keywords">
+      <rule pattern="\b(WIN32|UNIX|APPLE|CYGWIN|BORLAND|MINGW|MSVC|MSVC_IDE|MSVC60|MSVC70|MSVC71|MSVC80|MSVC90)\b">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="ws">
+      <rule pattern="[ \t]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="#.*\n">
+        <token type="Comment"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cobol.xml

@@ -0,0 +1,90 @@
+<lexer>
+  <config>
+    <name>COBOL</name>
+    <alias>cobol</alias>
+    <filename>*.cob</filename>
+    <filename>*.COB</filename>
+    <filename>*.cpy</filename>
+    <filename>*.CPY</filename>
+    <mime_type>text/x-cobol</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="strings">
+      <rule pattern="&#34;[^&#34;\n]*(&#34;|\n)">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;[^&#39;\n]*(&#39;|\n)">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+    <state name="nums">
+      <rule pattern="\d+(\s*|\.$|$)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[+-]?\d*\.\d+(E[-+]?\d+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[+-]?\d+\.\d*(E[-+]?\d+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="comment"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+      <rule>
+        <include state="core"/>
+      </rule>
+      <rule>
+        <include state="nums"/>
+      </rule>
+      <rule pattern="[a-z0-9]([\w\-]*[a-z0-9]+)?">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[ \t]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="(^.{6}[*/].*\n|^.{6}|\*&gt;.*\n)">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="core">
+      <rule pattern="(^|(?&lt;=[^\w\-]))(ALL\s+)?((ZEROES)|(HIGH-VALUE|LOW-VALUE|QUOTE|SPACE|ZERO)(S)?)\s*($|(?=[^\w\-]))">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="(^|(?&lt;=[^\w\-]))(WORKING-STORAGE|IDENTIFICATION|LOCAL-STORAGE|CONFIGURATION|END-EVALUATE|FILE-CONTROL|END-UNSTRING|END-SUBTRACT|END-MULTIPLY|INPUT-OUTPUT|END-PERFORM|END-DISPLAY|END-OF-PAGE|END-COMPUTE|ENVIRONMENT|I-O-CONTROL|END-REWRITE|END-RETURN|INITIALIZE|END-ACCEPT|END-DIVIDE|PROGRAM-ID|END-STRING|END-DELETE|END-SEARCH|END-WRITE|PROCEDURE|END-START|TERMINATE|END-READ|MULTIPLY|CONTINUE|SUPPRESS|SUBTRACT|INITIATE|UNSTRING|DIVISION|VALIDATE|END-CALL|ALLOCATE|GENERATE|EVALUATE|PERFORM|FOREVER|LINKAGE|END-ADD|REWRITE|INSPECT|SECTION|RELEASE|COMPUTE|DISPLAY|END-IF|GOBACK|INVOKE|CANCEL|UNLOCK|SCREEN|SEARCH|DELETE|STRING|DIVIDE|ACCEPT|RETURN|RESUME|START|RAISE|MERGE|CLOSE|WRITE|FILE|STOP|FREE|READ|ELSE|THEN|SORT|EXIT|OPEN|CALL|MOVE|DATA|END|SET|ADD|USE|GO|FD|SD|IF)\s*($|(?=[^\w\-]))">
+        <token type="KeywordReserved"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/coffeescript.xml

@@ -0,0 +1,210 @@
+<lexer>
+  <config>
+    <name>CoffeeScript</name>
+    <alias>coffee-script</alias>
+    <alias>coffeescript</alias>
+    <alias>coffee</alias>
+    <filename>*.coffee</filename>
+    <mime_type>text/coffeescript</mime_type>
+    <dot_all>true</dot_all>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="commentsandwhitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="###[^#].*?###">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="#(?!##[^#]).*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+    </state>
+    <state name="multilineregex">
+      <rule pattern="[^/#]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="///([gim]+\b|\B)">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="#\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpoling_string"/>
+      </rule>
+      <rule pattern="[/#]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="slashstartsregex">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="///">
+        <token type="LiteralStringRegex"/>
+        <push state="#pop" state="multilineregex"/>
+      </rule>
+      <rule pattern="/(?! )(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/([gim]+\b|\B)">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="/">
+        <token type="Operator"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="tsqs">
+      <rule pattern="&#39;&#39;&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="#|\\.|\&#39;|&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+    </state>
+    <state name="dqs">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\.|\&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="#\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpoling_string"/>
+      </rule>
+      <rule pattern="#">
+        <token type="LiteralString"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+    </state>
+    <state name="sqs">
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="#|\\.|&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+    </state>
+    <state name="tdqs">
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\.|\&#39;|&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="#\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpoling_string"/>
+      </rule>
+      <rule pattern="#">
+        <token type="LiteralString"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="^(?=\s|/)">
+        <token type="Text"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="\+\+|~|&amp;&amp;|\band\b|\bor\b|\bis\b|\bisnt\b|\bnot\b|\?|:|\|\||\\(?=\n)|(&lt;&lt;|&gt;&gt;&gt;?|==?(?!&gt;)|!=?|=(?!&gt;)|-(?!&gt;)|[&lt;&gt;+*`%&amp;\|\^/])=?">
+        <token type="Operator"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(?:\([^()]*\))?\s*[=-]&gt;">
+        <token type="NameFunction"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[{(\[;,]">
+        <token type="Punctuation"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[})\].]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(?&lt;![.$])(for|own|in|of|while|until|loop|break|return|continue|switch|when|then|if|unless|else|throw|try|catch|finally|new|delete|typeof|instanceof|super|extends|this|class|by)\b">
+        <token type="Keyword"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(?&lt;![.$])(true|false|yes|no|on|off|null|NaN|Infinity|undefined)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(Array|Boolean|Date|Error|Function|Math|netscape|Number|Object|Packages|RegExp|String|sun|decodeURI|decodeURIComponent|encodeURI|encodeURIComponent|eval|isFinite|isNaN|parseFloat|parseInt|document|window)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[$a-zA-Z_][\w.:$]*\s*[:=]\s">
+        <token type="NameVariable"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="@[$a-zA-Z_][\w.:$]*\s*[:=]\s">
+        <token type="NameVariableInstance"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="@">
+        <token type="NameOther"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="@?[$a-zA-Z_][\w$]*">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <push state="tdqs"/>
+      </rule>
+      <rule pattern="&#39;&#39;&#39;">
+        <token type="LiteralString"/>
+        <push state="tsqs"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="dqs"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <push state="sqs"/>
+      </rule>
+    </state>
+    <state name="interpoling_string">
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="strings">
+      <rule pattern="[^#\\\&#39;&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/common_lisp.xml

@@ -0,0 +1,184 @@
+<lexer>
+  <config>
+    <name>Common Lisp</name>
+    <alias>common-lisp</alias>
+    <alias>cl</alias>
+    <alias>lisp</alias>
+    <filename>*.cl</filename>
+    <filename>*.lisp</filename>
+    <mime_type>text/x-common-lisp</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="body">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=";.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="#\|">
+        <token type="CommentMultiline"/>
+        <push state="multiline-comment"/>
+      </rule>
+      <rule pattern="#\d*Y.*$">
+        <token type="CommentSpecial"/>
+      </rule>
+      <rule pattern="&#34;(\\.|\\\n|[^&#34;\\])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern=":(\|[^|]+\||(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~]|[#.:])*)">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="::(\|[^|]+\||(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~]|[#.:])*)">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern=":#(\|[^|]+\||(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~]|[#.:])*)">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="&#39;(\|[^|]+\||(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~]|[#.:])*)">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="`">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[-+]?\d+\.?(?=[ &#34;()\&#39;\n,;`])">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[-+]?\d+/\d+(?=[ &#34;()\&#39;\n,;`])">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[-+]?(\d*\.\d+([defls][-+]?\d+)?|\d+(\.\d*)?[defls][-+]?\d+)(?=[ &#34;()\&#39;\n,;`])">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="#\\.(?=[ &#34;()\&#39;\n,;`])">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="#\\(\|[^|]+\||(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~]|[#.:])*)">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="#\(">
+        <token type="Operator"/>
+        <push state="body"/>
+      </rule>
+      <rule pattern="#\d*\*[01]*">
+        <token type="LiteralOther"/>
+      </rule>
+      <rule pattern="#:(\|[^|]+\||(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~]|[#.:])*)">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="#[.,]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="#\&#39;">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="#b[+-]?[01]+(/[01]+)?">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="#o[+-]?[0-7]+(/[0-7]+)?">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="#x[+-]?[0-9a-f]+(/[0-9a-f]+)?">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="#\d+r[+-]?[0-9a-z]+(/[0-9a-z]+)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="(#c)(\()">
+        <bygroups>
+          <token type="LiteralNumber"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="body"/>
+      </rule>
+      <rule pattern="(#\d+a)(\()">
+        <bygroups>
+          <token type="LiteralOther"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="body"/>
+      </rule>
+      <rule pattern="(#s)(\()">
+        <bygroups>
+          <token type="LiteralOther"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="body"/>
+      </rule>
+      <rule pattern="#p?&#34;(\\.|[^&#34;])*&#34;">
+        <token type="LiteralOther"/>
+      </rule>
+      <rule pattern="#\d+=">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="#\d+#">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="#+nil(?=[ &#34;()\&#39;\n,;`])\s*\(">
+        <token type="CommentPreproc"/>
+        <push state="commented-form"/>
+      </rule>
+      <rule pattern="#[+-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(,@|,|\.)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(t|nil)(?=[ &#34;()\&#39;\n,;`])">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="\*(\|[^|]+\||(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~]|[#.:])*)\*">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="(\|[^|]+\||(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@\[\]^{}~]|[#.:])*)">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="body"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <push state="body"/>
+      </rule>
+    </state>
+    <state name="multiline-comment">
+      <rule pattern="#\|">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="\|#">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^|#]+">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="[|#]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="commented-form">
+      <rule pattern="\(">
+        <token type="CommentPreproc"/>
+        <push/>
+      </rule>
+      <rule pattern="\)">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^()]+">
+        <token type="CommentPreproc"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/coq.xml

@@ -0,0 +1,136 @@
+<lexer>
+  <config>
+    <name>Coq</name>
+    <alias>coq</alias>
+    <filename>*.v</filename>
+    <mime_type>text/x-coq</mime_type>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="[^&#34;]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="dotted">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\.">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[A-Z][\w\&#39;]*(?=\s*\.)">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="[A-Z][\w\&#39;]*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[a-z][a-z0-9_\&#39;]*">
+        <token type="Name"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="false|true|\(\)|\[\]">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="\(\*">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="\b(Projections|Monomorphic|Polymorphic|Proposition|CoInductive|Hypothesis|CoFixpoint|Contextual|Definition|Parameters|Hypotheses|Structure|Inductive|Corollary|Implicits|Parameter|Variables|Arguments|Canonical|Printing|Coercion|Reserved|Universe|Notation|Instance|Fixpoint|Variable|Morphism|Relation|Existing|Implicit|Example|Theorem|Delimit|Defined|Rewrite|outside|Require|Resolve|Section|Context|Prenex|Strict|Module|Import|Export|Global|inside|Remark|Tactic|Search|Record|Scope|Unset|Check|Local|Close|Class|Graph|Proof|Lemma|Print|Axiom|Show|Goal|Open|Fact|Hint|Bind|Ltac|Save|View|Let|Set|All|End|Qed)\b">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern="\b(exists2|nosimpl|struct|exists|return|forall|match|cofix|then|with|else|for|fix|let|fun|end|is|of|if|in|as)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(Type|Prop)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\b(native_compute|setoid_rewrite|etransitivity|econstructor|transitivity|autorewrite|constructor|cutrewrite|vm_compute|bool_congr|generalize|inversion|induction|injection|nat_congr|intuition|destruct|suffices|erewrite|symmetry|nat_norm|replace|rewrite|compute|pattern|trivial|without|assert|unfold|change|eapply|intros|unlock|revert|rename|refine|eauto|tauto|after|right|congr|split|field|simpl|intro|clear|apply|using|subst|case|left|suff|loss|wlog|have|fold|ring|move|lazy|elim|pose|auto|red|cbv|hnf|cut|set)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(contradiction|discriminate|reflexivity|assumption|congruence|romega|omega|exact|solve|tauto|done|by)\b">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="\b(repeat|first|idtac|last|try|do)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="\b([A-Z][\w\&#39;]*)">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="(λ|Π|\|\}|\{\||\\/|/\\|=&gt;|~|\}|\|]|\||\{&lt;|\{|`|_|]|\[\||\[&gt;|\[&lt;|\[|\?\?|\?|&gt;\}|&gt;]|&gt;|=|&lt;-&gt;|&lt;-|&lt;|;;|;|:&gt;|:=|::|:|\.\.|\.|-&gt;|-\.|-|,|\+|\*|\)|\(|&amp;&amp;|&amp;|#|!=)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="([=&lt;&gt;@^|&amp;+\*/$%-]|[!?~])?[!$%&amp;*+\./:&lt;=&gt;?@^|~-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\b(unit|nat|bool|string|ascii|list)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="[^\W\d][\w&#39;]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\d[\d_]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="0[xX][\da-fA-F][\da-fA-F_]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[oO][0-7][0-7_]*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[bB][01][01_]*">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="-?\d[\d_]*(.[\d_]*)?([eE][+\-]?\d[\d_]*)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="&#39;(?:(\\[\\\&#34;&#39;ntbr ])|(\\[0-9]{3})|(\\x[0-9a-fA-F]{2}))&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="&#39;.&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="[~?][a-z][\w\&#39;]*:">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^(*)]+">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\(\*">
+        <token type="Comment"/>
+        <push/>
+      </rule>
+      <rule pattern="\*\)">
+        <token type="Comment"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[(*)]">
+        <token type="Comment"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/crystal.xml

@@ -0,0 +1,762 @@
+<lexer>
+  <config>
+    <name>Crystal</name>
+    <alias>cr</alias>
+    <alias>crystal</alias>
+    <filename>*.cr</filename>
+    <mime_type>text/x-crystal</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="pa-intp-string">
+      <rule pattern="\\[\(]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="\)">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[\\#()]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#()]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="ab-regex">
+      <rule pattern="\\[\\&lt;&gt;]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="&lt;">
+        <token type="LiteralStringRegex"/>
+        <push/>
+      </rule>
+      <rule pattern="&gt;[imsx]*">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="[\\#&lt;&gt;]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[^\\#&lt;&gt;]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="cb-regex">
+      <rule pattern="\\[\\{}]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="LiteralStringRegex"/>
+        <push/>
+      </rule>
+      <rule pattern="\}[imsx]*">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="[\\#{}]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[^\\#{}]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="simple-backtick">
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[^\\`#]+">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="[\\#]">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string-intp">
+      <rule pattern="#\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="in-intp"/>
+      </rule>
+    </state>
+    <state name="interpolated-regex">
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="[\\#]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[^\\#]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="cb-string">
+      <rule pattern="\\[\\{}]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[\\#{}]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#{}]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="in-macro-control">
+      <rule pattern="\{%">
+        <token type="LiteralStringInterpol"/>
+        <push/>
+      </rule>
+      <rule pattern="%\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="for\b|in\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="interpolated-string">
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="[\\#]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="in-macro-expr">
+      <rule pattern="\{\{">
+        <token type="LiteralStringInterpol"/>
+        <push/>
+      </rule>
+      <rule pattern="\}\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="simple-string">
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[^\\&#34;#]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="[\\#]">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="cb-intp-string">
+      <rule pattern="\\[\{]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[\\#{}]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#{}]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="string-intp-escaped">
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule>
+        <include state="string-escaped"/>
+      </rule>
+    </state>
+    <state name="sb-regex">
+      <rule pattern="\\[\\\[\]]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="LiteralStringRegex"/>
+        <push/>
+      </rule>
+      <rule pattern="\][imsx]*">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="[\\#\[\]]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[^\\#\[\]]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="classname">
+      <rule pattern="[A-Z_]\w*">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(\()(\s*)([A-Z_]\w*)(\s*)(\))">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string-escaped">
+      <rule pattern="\\([\\befnstv#&#34;\&#39;]|x[a-fA-F0-9]{1,2}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+    </state>
+    <state name="sb-intp-string">
+      <rule pattern="\\[\[]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="\]">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[\\#\[\]]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#\[\]]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="pa-regex">
+      <rule pattern="\\[\\()]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="LiteralStringRegex"/>
+        <push/>
+      </rule>
+      <rule pattern="\)[imsx]*">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="[\\#()]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[^\\#()]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="in-attr">
+      <rule pattern="\[">
+        <token type="Operator"/>
+        <push/>
+      </rule>
+      <rule pattern="\]">
+        <token type="Operator"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="ab-intp-string">
+      <rule pattern="\\[&lt;]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="&lt;">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="&gt;">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[\\#&lt;&gt;]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#&lt;&gt;]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="in-intp">
+      <rule pattern="\{">
+        <token type="LiteralStringInterpol"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="end-part">
+      <rule pattern=".+">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="#.*?$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(instance_sizeof|pointerof|protected|abstract|require|private|include|unless|typeof|sizeof|return|extend|ensure|rescue|ifdef|super|break|begin|until|while|elsif|yield|next|when|else|then|case|with|end|asm|if|do|as|of)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(false|true|nil)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(module|lib)(\s+)([a-zA-Z_]\w*(?:::[a-zA-Z_]\w*)*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(def|fun|macro)(\s+)((?:[a-zA-Z_]\w*::)*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+        <push state="funcname"/>
+      </rule>
+      <rule pattern="def(?=[*%&amp;^`~+-/\[&lt;&gt;=])">
+        <token type="Keyword"/>
+        <push state="funcname"/>
+      </rule>
+      <rule pattern="(class|struct|union|type|alias|enum)(\s+)((?:[a-zA-Z_]\w*::)*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+        <push state="classname"/>
+      </rule>
+      <rule pattern="(self|out|uninitialized)\b|(is_a|responds_to)\?">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="(def_equals_and_hash|assert_responds_to|forward_missing_to|def_equals|property|def_hash|parallel|delegate|debugger|getter|record|setter|spawn|pp)\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="getter[!?]|property[!?]|__(DIR|FILE|LINE)__\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(get_stack_top|StaticArray|Concurrent|with_color|Reference|Scheduler|read_line|Exception|at_exit|Pointer|Channel|Float64|sprintf|Float32|Process|Object|Struct|caller|UInt16|UInt32|UInt64|system|future|Number|printf|String|Symbol|Int32|Range|Slice|Regex|Mutex|sleep|Array|Class|raise|Tuple|Deque|delay|Float|Int16|print|abort|Value|UInt8|Int64|puts|Proc|File|Void|exit|fork|Bool|Char|gets|lazy|loop|main|rand|Enum|Int8|Time|Hash|Set|Box|Nil|Dir|Int|p)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?&lt;!\w)(&lt;&lt;-?)([&#34;`\&#39;]?)([a-zA-Z_]\w*)(\2)(.*?\n)">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+      <rule pattern="(&lt;&lt;-?)(&#34;|\&#39;)()(\2)(.*?\n)">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+      <rule pattern="__END__">
+        <token type="CommentPreproc"/>
+        <push state="end-part"/>
+      </rule>
+      <rule pattern="(?:^|(?&lt;=[=&lt;&gt;~!:])|(?&lt;=(?:\s|;)when\s)|(?&lt;=(?:\s|;)or\s)|(?&lt;=(?:\s|;)and\s)|(?&lt;=\.index\s)|(?&lt;=\.scan\s)|(?&lt;=\.sub\s)|(?&lt;=\.sub!\s)|(?&lt;=\.gsub\s)|(?&lt;=\.gsub!\s)|(?&lt;=\.match\s)|(?&lt;=(?:\s|;)if\s)|(?&lt;=(?:\s|;)elsif\s)|(?&lt;=^when\s)|(?&lt;=^index\s)|(?&lt;=^scan\s)|(?&lt;=^sub\s)|(?&lt;=^gsub\s)|(?&lt;=^sub!\s)|(?&lt;=^gsub!\s)|(?&lt;=^match\s)|(?&lt;=^if\s)|(?&lt;=^elsif\s))(\s*)(/)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringRegex"/>
+        </bygroups>
+        <push state="multiline-regex"/>
+      </rule>
+      <rule pattern="(?&lt;=\(|,|\[)/">
+        <token type="LiteralStringRegex"/>
+        <push state="multiline-regex"/>
+      </rule>
+      <rule pattern="(\s+)(/)(?![\s=])">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringRegex"/>
+        </bygroups>
+        <push state="multiline-regex"/>
+      </rule>
+      <rule pattern="(0o[0-7]+(?:_[0-7]+)*(?:_?[iu][0-9]+)?)\b(\s*)([/?])?">
+        <bygroups>
+          <token type="LiteralNumberOct"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(0x[0-9A-Fa-f]+(?:_[0-9A-Fa-f]+)*(?:_?[iu][0-9]+)?)\b(\s*)([/?])?">
+        <bygroups>
+          <token type="LiteralNumberHex"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(0b[01]+(?:_[01]+)*(?:_?[iu][0-9]+)?)\b(\s*)([/?])?">
+        <bygroups>
+          <token type="LiteralNumberBin"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((?:0(?![0-9])|[1-9][\d_]*)(?:\.\d[\d_]*)(?:e[+-]?[0-9]+)?(?:_?f[0-9]+)?)(\s*)([/?])?">
+        <bygroups>
+          <token type="LiteralNumberFloat"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((?:0(?![0-9])|[1-9][\d_]*)(?:\.\d[\d_]*)?(?:e[+-]?[0-9]+)(?:_?f[0-9]+)?)(\s*)([/?])?">
+        <bygroups>
+          <token type="LiteralNumberFloat"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((?:0(?![0-9])|[1-9][\d_]*)(?:\.\d[\d_]*)?(?:e[+-]?[0-9]+)?(?:_?f[0-9]+))(\s*)([/?])?">
+        <bygroups>
+          <token type="LiteralNumberFloat"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(0\b|[1-9][\d]*(?:_\d+)*(?:_?[iu][0-9]+)?)\b(\s*)([/?])?">
+        <bygroups>
+          <token type="LiteralNumberInteger"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="@@[a-zA-Z_]\w*">
+        <token type="NameVariableClass"/>
+      </rule>
+      <rule pattern="@[a-zA-Z_]\w*">
+        <token type="NameVariableInstance"/>
+      </rule>
+      <rule pattern="\$\w+">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="\$[!@&amp;`\&#39;+~=/\\,;.&lt;&gt;_*$?:&#34;^-]">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="\$-[0adFiIlpvw]">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="::">
+        <token type="Operator"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+      <rule pattern="\?(\\[MC]-)*(\\([\\befnrtv#&#34;\&#39;]|x[a-fA-F0-9]{1,2}|[0-7]{1,3})|\S)(?!\w)">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="[A-Z][A-Z_]+\b">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="\{%">
+        <token type="LiteralStringInterpol"/>
+        <push state="in-macro-control"/>
+      </rule>
+      <rule pattern="\{\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="in-macro-expr"/>
+      </rule>
+      <rule pattern="(@\[)(\s*)([A-Z]\w*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="NameDecorator"/>
+        </bygroups>
+        <push state="in-attr"/>
+      </rule>
+      <rule pattern="(\.|::)(\[\]\?|&lt;=&gt;|===|\[\]=|&gt;&gt;|&amp;&amp;|\*\*|\[\]|\|\||&gt;=|=~|!~|&lt;&lt;|&lt;=|!=|==|&lt;|/|=|-|\+|&gt;|\*|&amp;|%|\^|!|\||~)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameOperator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\.|::)([a-zA-Z_]\w*[!?]?|[*%&amp;^`~+\-/\[&lt;&gt;=])">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Name"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*(?:[!?](?!=))?">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="(\[|\]\??|\*\*|&lt;=&gt;?|&gt;=|&lt;&lt;?|&gt;&gt;?|=~|===|!~|&amp;&amp;?|\|\||\.{1,3})">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[-+/*%=&lt;&gt;&amp;!^|~]=?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[(){};,/?:\\]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="multiline-regex">
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="\\\\">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="\\/">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[\\#]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[^\\/#]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="/[imsx]*">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="ab-string">
+      <rule pattern="\\[\\&lt;&gt;]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="&lt;">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="&gt;">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[\\#&lt;&gt;]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#&lt;&gt;]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="pa-string">
+      <rule pattern="\\[\\()]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="\)">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[\\#()]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#()]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="strings">
+      <rule pattern="\:@{0,2}[a-zA-Z_]\w*[!?]?">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="\:@{0,2}(\[\]\?|&lt;=&gt;|===|\[\]=|&gt;&gt;|&amp;&amp;|\*\*|\[\]|\|\||&gt;=|=~|!~|&lt;&lt;|&lt;=|!=|==|&lt;|/|=|-|\+|&gt;|\*|&amp;|%|\^|!|\||~)">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern=":&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;]|\\[^&#39;\\]+)&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern=":&#34;">
+        <token type="LiteralStringSymbol"/>
+        <push state="simple-sym"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*)(:)(?!:)">
+        <bygroups>
+          <token type="LiteralStringSymbol"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="simple-string"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)`">
+        <token type="LiteralStringBacktick"/>
+        <push state="simple-backtick"/>
+      </rule>
+      <rule pattern="%\{">
+        <token type="LiteralStringOther"/>
+        <push state="cb-intp-string"/>
+      </rule>
+      <rule pattern="%[wi]\{">
+        <token type="LiteralStringOther"/>
+        <push state="cb-string"/>
+      </rule>
+      <rule pattern="%r\{">
+        <token type="LiteralStringRegex"/>
+        <push state="cb-regex"/>
+      </rule>
+      <rule pattern="%\[">
+        <token type="LiteralStringOther"/>
+        <push state="sb-intp-string"/>
+      </rule>
+      <rule pattern="%[wi]\[">
+        <token type="LiteralStringOther"/>
+        <push state="sb-string"/>
+      </rule>
+      <rule pattern="%r\[">
+        <token type="LiteralStringRegex"/>
+        <push state="sb-regex"/>
+      </rule>
+      <rule pattern="%\(">
+        <token type="LiteralStringOther"/>
+        <push state="pa-intp-string"/>
+      </rule>
+      <rule pattern="%[wi]\(">
+        <token type="LiteralStringOther"/>
+        <push state="pa-string"/>
+      </rule>
+      <rule pattern="%r\(">
+        <token type="LiteralStringRegex"/>
+        <push state="pa-regex"/>
+      </rule>
+      <rule pattern="%&lt;">
+        <token type="LiteralStringOther"/>
+        <push state="ab-intp-string"/>
+      </rule>
+      <rule pattern="%[wi]&lt;">
+        <token type="LiteralStringOther"/>
+        <push state="ab-string"/>
+      </rule>
+      <rule pattern="%r&lt;">
+        <token type="LiteralStringRegex"/>
+        <push state="ab-regex"/>
+      </rule>
+      <rule pattern="(%r([\W_]))((?:\\\2|(?!\2).)*)(\2[imsx]*)">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(%[wi]([\W_]))((?:\\\2|(?!\2).)*)(\2)">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(?&lt;=[-+/*%=&lt;&gt;&amp;!^|~,(])(\s*)(%([\t ])(?:(?:\\\3|(?!\3).)*)\3)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringOther"/>
+          <token type="None"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)(%([\t ])(?:(?:\\\3|(?!\3).)*)\3)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringOther"/>
+          <token type="None"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(%([\[{(&lt;]))((?:\\\2|(?!\2).)*)(\2)">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="sb-string">
+      <rule pattern="\\[\\\[\]]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="\]">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[\\#\[\]]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#\[\]]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="funcname">
+      <rule pattern="(?:([a-zA-Z_]\w*)(\.))?([a-zA-Z_]\w*[!?]?|\*\*?|[-+]@?|[/%&amp;|^`~]|\[\]=?|&lt;&lt;|&gt;&gt;|&lt;=?&gt;|&gt;=?|===?)">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Operator"/>
+          <token type="NameFunction"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="simple-sym">
+      <rule>
+        <include state="string-escaped"/>
+      </rule>
+      <rule pattern="[^\\&#34;#]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="[\\#]">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringSymbol"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/css.xml 🔗

@@ -0,0 +1,323 @@
+<lexer>
+  <config>
+    <name>CSS</name>
+    <alias>css</alias>
+    <filename>*.css</filename>
+    <mime_type>text/css</mime_type>
+  </config>
+  <rules>
+    <state name="numeric-end">
+      <rule pattern="(vmin|grad|vmax|turn|dppx|dpcm|kHz|dpi|rad|rem|deg|vw|vh|ch|px|mm|cm|in|pt|pc|Hz|ex|em|ms|q|s)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="%">
+        <token type="KeywordType"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="atrule">
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push state="atcontent"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="basics"/>
+      </rule>
+    </state>
+    <state name="atcontent">
+      <rule>
+        <include state="basics"/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="2"/>
+      </rule>
+    </state>
+    <state name="common-values">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(-webkit-|-khtml-|prince-|-atsc-|-moz-|-rim-|-wap-|-ms-|-xv-|mso-|-ah-|-hp-|-ro-|-tc-|-o-)">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule>
+        <include state="urls"/>
+      </rule>
+      <rule pattern="(attr|blackness|blend|blenda|blur|brightness|calc|circle|color-mod|contrast|counter|cubic-bezier|device-cmyk|drop-shadow|ellipse|gray|grayscale|hsl|hsla|hue|hue-rotate|hwb|image|inset|invert|lightness|linear-gradient|matrix|matrix3d|opacity|perspective|polygon|radial-gradient|rect|repeating-linear-gradient|repeating-radial-gradient|rgb|rgba|rotate|rotate3d|rotateX|rotateY|rotateZ|saturate|saturation|scale|scale3d|scaleX|scaleY|scaleZ|sepia|shade|skewX|skewY|steps|tint|toggle|translate|translate3d|translateX|translateY|translateZ|whiteness)(\()">
+        <bygroups>
+          <token type="NameBuiltin"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="function-start"/>
+      </rule>
+      <rule pattern="([a-zA-Z_][\w-]+)(\()">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="function-start"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/csv.xml 🔗

@@ -0,0 +1,53 @@
+<!--
+Lexer for RFC-4180 compliant CSV subject to the following additions:
+- UTF-8 encoding is accepted (the RFC requires 7-bit ASCII)
+- The line terminator character can be LF or CRLF (the RFC allows CRLF only)
+
+Link to the RFC-4180 specification: https://tools.ietf.org/html/rfc4180
+
+Additions inspired by:
+https://github.com/frictionlessdata/datapackage/issues/204#issuecomment-193242077
+
+Future improvements:
+- Identify non-quoted numbers as LiteralNumber
+- Identify y as an error in "x"y. Currently it's identified as another string
+  literal.
+-->
+
+<lexer>
+    <config>
+        <name>CSV</name>
+        <alias>csv</alias>
+        <filename>*.csv</filename>
+        <mime_type>text/csv</mime_type>
+    </config>
+    <rules>
+        <state name="root">
+            <rule pattern="\r?\n">
+                <token type="Punctuation" />
+            </rule>
+            <rule pattern=",">
+                <token type="Punctuation" />
+            </rule>
+            <rule pattern="&quot;">
+                <token type="LiteralStringDouble" />
+                <push state="escaped" />
+            </rule>
+            <rule pattern="[^\r\n,]+">
+                <token type="LiteralString" />
+            </rule>
+        </state>
+        <state name="escaped">
+            <rule pattern="&quot;&quot;">
+                <token type="LiteralStringEscape"/>
+            </rule>
+            <rule pattern="&quot;">
+                <token type="LiteralStringDouble" />
+                <pop depth="1"/>
+            </rule>
+            <rule pattern="[^&quot;]+">
+                <token type="LiteralStringDouble" />
+            </rule>
+        </state>
+    </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cue.xml 🔗

@@ -0,0 +1,85 @@
+<lexer>
+  <config>
+    <name>CUE</name>
+    <alias>cue</alias>
+    <filename>*.cue</filename>
+    <mime_type>text/x-cue</mime_type>
+    <dot_all>true</dot_all>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//[^\n\r]+">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(\+|&amp;&amp;|==|&lt;|=|-|\|\||!=|&gt;|:|\*|&amp;|=~|&lt;=|\?|\[|\]|,|/|\||!~|&gt;=|!|_\|_|\.\.\.)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="#*&#34;+">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="'(\\\\|\\'|[^'\n])*['\n]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="0[boxX][0-9a-fA-F][_0-9a-fA-F]*|(\.\d+|\d[_\d]*(\.\d*)?)([eE][+-]?\d+)?[KMGTP]?i?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*()+=|\[\]:;,.&lt;&gt;/?-]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[{}]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(import|for|if|in|let|package)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(bool|float|int|string|uint|ulong|ushort)\b\??">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(true|false|null|_)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="[@#]?[_a-zA-Z$]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="\\#*\(">
+        <token type="LiteralStringInterpol"/>
+        <push state="string-intp"/>
+      </rule>
+      <rule pattern="&#34;+#*">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\[&#39;&#34;\\nrt]|\\x[0-9a-fA-F]{2}|\\[0-7]{1,3}|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="string-intp">
+      <rule pattern="\)">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/cython.xml 🔗

@@ -0,0 +1,372 @@
+<lexer>
+  <config>
+    <name>Cython</name>
+    <alias>cython</alias>
+    <alias>pyx</alias>
+    <alias>pyrex</alias>
+    <filename>*.pyx</filename>
+    <filename>*.pxd</filename>
+    <filename>*.pxi</filename>
+    <mime_type>text/x-cython</mime_type>
+    <mime_type>application/x-cython</mime_type>
+  </config>
+  <rules>
+    <state name="funcname">
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="^(\s*)(&#34;&#34;&#34;(?:.|\n)*?&#34;&#34;&#34;)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringDoc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)(&#39;&#39;&#39;(?:.|\n)*?&#39;&#39;&#39;)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringDoc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="#.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="[]{}:(),;[]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(in|is|and|or|not)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="(&lt;)([a-zA-Z0-9.?]+)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="KeywordType"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="!=|==|&lt;&lt;|&gt;&gt;|[-~+/*%=&lt;&gt;&amp;^|.?]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(from)(\d+)(&lt;=)(\s+)(&lt;)(\d+)(:)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="LiteralNumberInteger"/>
+          <token type="Operator"/>
+          <token type="Name"/>
+          <token type="Operator"/>
+          <token type="Name"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule pattern="(def|property)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="funcname"/>
+      </rule>
+      <rule pattern="(cp?def)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="cdef"/>
+      </rule>
+      <rule pattern="(cdef)(:)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(class|struct)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="classname"/>
+      </rule>
+      <rule pattern="(from)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="fromimport"/>
+      </rule>
+      <rule pattern="(c?import)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="import"/>
+      </rule>
+      <rule>
+        <include state="builtins"/>
+      </rule>
+      <rule>
+        <include state="backtick"/>
+      </rule>
+      <rule pattern="(?:[rR]|[uU][rR]|[rR][uU])&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <push state="tdqs"/>
+      </rule>
+      <rule pattern="(?:[rR]|[uU][rR]|[rR][uU])&#39;&#39;&#39;">
+        <token type="LiteralString"/>
+        <push state="tsqs"/>
+      </rule>
+      <rule pattern="(?:[rR]|[uU][rR]|[rR][uU])&#34;">
+        <token type="LiteralString"/>
+        <push state="dqs"/>
+      </rule>
+      <rule pattern="(?:[rR]|[uU][rR]|[rR][uU])&#39;">
+        <token type="LiteralString"/>
+        <push state="sqs"/>
+      </rule>
+      <rule pattern="[uU]?&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <combined state="stringescape" state="tdqs"/>
+      </rule>
+      <rule pattern="[uU]?&#39;&#39;&#39;">
+        <token type="LiteralString"/>
+        <combined state="stringescape" state="tsqs"/>
+      </rule>
+      <rule pattern="[uU]?&#34;">
+        <token type="LiteralString"/>
+        <combined state="stringescape" state="dqs"/>
+      </rule>
+      <rule pattern="[uU]?&#39;">
+        <token type="LiteralString"/>
+        <combined state="stringescape" state="sqs"/>
+      </rule>
+      <rule>
+        <include state="name"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+    </state>
+    <state name="stringescape">
+      <rule pattern="\\([\\abfnrtv&#34;\&#39;]|\n|N\{.*?\}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8}|x[a-fA-F0-9]{2}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+    </state>
+    <state name="strings">
+      <rule pattern="%(\([a-zA-Z0-9]+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?[hlL]?[E-GXc-giorsux%]">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="[^\\\&#39;&#34;%\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[\&#39;&#34;\\]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="%">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="backtick">
+      <rule pattern="`.*?`">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+    </state>
+    <state name="numbers">
+      <rule pattern="(\d+\.?\d*|\d*\.\d+)([eE][+-]?[0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0\d+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[xX][a-fA-F0-9]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d+L">
+        <token type="LiteralNumberIntegerLong"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(continue|ctypedef|except\?|include|finally|global|return|lambda|assert|except|print|nogil|while|fused|yield|break|raise|exec|else|elif|pass|with|gil|for|try|del|by|as|if)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(DEF|IF|ELIF|ELSE)\b">
+        <token type="CommentPreproc"/>
+      </rule>
+    </state>
+    <state name="fromimport">
+      <rule pattern="(\s+)(c?import)\b">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[a-zA-Z_.][\w.]*">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="nl">
+      <rule pattern="\n">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="dqs">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\\|\\&#34;|\\\n">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+    </state>
+    <state name="tsqs">
+      <rule pattern="&#39;&#39;&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+      <rule>
+        <include state="nl"/>
+      </rule>
+    </state>
+    <state name="import">
+      <rule pattern="(\s+)(as)(\s+)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z_][\w.]*">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="(\s*)(,)(\s*)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="name">
+      <rule pattern="@\w+">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="cdef">
+      <rule pattern="(public|readonly|extern|api|inline)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(struct|enum|union|class)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*)(\s*)(?=[(:#=]|$)">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*)(\s*)(,)">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="from\b">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="as\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?=[&#34;\&#39;])">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="classname">
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="sqs">
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\\|\\&#39;|\\\n">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+    </state>
+    <state name="tdqs">
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+      <rule>
+        <include state="nl"/>
+      </rule>
+    </state>
+    <state name="builtins">
+      <rule pattern="(?&lt;!\.)(staticmethod|classmethod|__import__|issubclass|isinstance|basestring|bytearray|raw_input|frozenset|enumerate|property|unsigned|reversed|callable|execfile|hasattr|compile|complex|delattr|setattr|unicode|globals|getattr|reload|divmod|xrange|unichr|filter|reduce|buffer|intern|coerce|sorted|locals|object|round|input|range|super|tuple|bytes|float|slice|apply|bool|long|exit|vars|file|next|type|iter|open|dict|repr|hash|list|eval|oct|map|zip|int|hex|set|sum|chr|cmp|any|str|pow|ord|dir|len|min|all|abs|max|bin|id)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(self|None|Ellipsis|NotImplemented|False|True|NULL)\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(PendingDeprecationWarning|UnicodeTranslateError|NotImplementedError|FloatingPointError|DeprecationWarning|UnicodeDecodeError|UnicodeEncodeError|UnboundLocalError|KeyboardInterrupt|ZeroDivisionError|IndentationError|EnvironmentError|OverflowWarning|ArithmeticError|RuntimeWarning|UnicodeWarning|AttributeError|AssertionError|NotImplemented|ReferenceError|StopIteration|SyntaxWarning|OverflowError|GeneratorExit|FutureWarning|BaseException|ImportWarning|StandardError|RuntimeError|UnicodeError|LookupError|ImportError|SyntaxError|MemoryError|SystemError|UserWarning|SystemExit|ValueError|IndexError|NameError|TypeError|Exception|KeyError|EOFError|TabError|OSError|Warning|IOError)\b">
+        <token type="NameException"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/d.xml

@@ -0,0 +1,133 @@
+<lexer>
+  <config>
+    <name>D</name>
+    <alias>d</alias>
+    <filename>*.d</filename>
+    <filename>*.di</filename>
+    <mime_type>text/x-d</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/\+.*?\+/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="(asm|assert|body|break|case|cast|catch|continue|default|debug|delete|do|else|finally|for|foreach|foreach_reverse|goto|if|in|invariant|is|macro|mixin|new|out|pragma|return|super|switch|this|throw|try|typeid|typeof|version|while|with)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="__(FILE|FILE_FULL_PATH|MODULE|LINE|FUNCTION|PRETTY_FUNCTION|DATE|EOF|TIME|TIMESTAMP|VENDOR|VERSION)__\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="__(traits|vector|parameters)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="((?:(?:[^\W\d]|\$)[\w.\[\]$&lt;&gt;]*\s+)+?)((?:[^\W\d]|\$)[\w$]*)(\s*)(\()">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="@[\w.]*">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="(abstract|auto|alias|align|const|delegate|deprecated|enum|export|extern|final|function|immutable|inout|lazy|nothrow|override|package|private|protected|public|pure|ref|scope|shared|static|synchronized|template|unittest|__gshared)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(void|bool|byte|ubyte|short|ushort|int|uint|long|ulong|cent|ucent|float|double|real|ifloat|idouble|ireal|cfloat|cdouble|creal|char|wchar|dchar)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(size_t|ptrdiff_t|noreturn|string|wstring|dstring|Object|Throwable|Exception|Error|imported)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(module)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="import"/>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(class|interface|struct|template|union)(\s+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="class"/>
+      </rule>
+      <rule pattern="(import)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="import"/>
+      </rule>
+      <rule pattern="[qr]?&#34;(\\\\|\\&#34;|[^&#34;])*&#34;[cwd]?">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(`)([^`]*)(`)[cwd]?">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;\\.&#39;|&#39;[^\\]&#39;|&#39;\\u[0-9a-fA-F]{4}&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="(\.)((?:[^\W\d]|\$)[\w$]*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameAttribute"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^\s*([^\W\d]|\$)[\w$]*:">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="([0-9][0-9_]*\.([0-9][0-9_]*)?|\.[0-9][0-9_]*)([eE][+\-]?[0-9][0-9_]*)?[fFL]?i?|[0-9][eE][+\-]?[0-9][0-9_]*[fFL]?|[0-9]([eE][+\-]?[0-9][0-9_]*)?[fFL]|0[xX]([0-9a-fA-F][0-9a-fA-F_]*\.?|([0-9a-fA-F][0-9a-fA-F_]*)?\.[0-9a-fA-F][0-9a-fA-F_]*)[pP][+\-]?[0-9][0-9_]*[fFL]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F][0-9a-fA-F_]*[lL]?">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[bB][01][01_]*[lL]?">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0[0-7_]+[lL]?">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0|[1-9][0-9_]*[lL]?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="([~^*!%&amp;\[\](){}&lt;&gt;|+=:;,./?-]|q{)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="([^\W\d]|\$)[\w$]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="class">
+      <rule pattern="([^\W\d]|\$)[\w$]*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="import">
+      <rule pattern="[\w.]+\*?">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/dart.xml

@@ -0,0 +1,213 @@
+<lexer>
+  <config>
+    <name>Dart</name>
+    <alias>dart</alias>
+    <filename>*.dart</filename>
+    <mime_type>text/x-dart</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="string_double_multiline">
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^&#34;$\\]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule>
+        <include state="string_common"/>
+      </rule>
+      <rule pattern="(\$|\&#34;)+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="class">
+      <rule pattern="[a-zA-Z_$]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="import_decl">
+      <rule>
+        <include state="string_literal"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\b(as|show|hide)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="[a-zA-Z_$]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\,">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\;">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string_single_multiline">
+      <rule pattern="&#39;&#39;&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\&#39;$\\]+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule>
+        <include state="string_common"/>
+      </rule>
+      <rule pattern="(\$|\&#39;)+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="string_literal"/>
+      </rule>
+      <rule pattern="#!(.*?)$">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\b(import|export)\b">
+        <token type="Keyword"/>
+        <push state="import_decl"/>
+      </rule>
+      <rule pattern="\b(library|source|part of|part)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\b(class)\b(\s+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="class"/>
+      </rule>
+      <rule pattern="\b(assert|break|case|catch|continue|default|do|else|finally|for|if|in|is|new|return|super|switch|this|throw|try|while)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(abstract|async|await|const|extends|factory|final|get|implements|native|operator|required|set|static|sync|typedef|var|with|yield)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="\b(bool|double|dynamic|int|num|Object|String|void)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\b(false|null|true)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]|as\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[a-zA-Z_$]\w*:">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="[a-zA-Z_$]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[(){}\[\],.;]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d+(\.\d*)?([eE][+-]?\d+)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\.\d+([eE][+-]?\d+)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="string_literal">
+      <rule pattern="r&#34;&#34;&#34;([\w\W]*?)&#34;&#34;&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="r&#39;&#39;&#39;([\w\W]*?)&#39;&#39;&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="r&#34;(.*?)&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="r&#39;(.*?)&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string_double_multiline"/>
+      </rule>
+      <rule pattern="&#39;&#39;&#39;">
+        <token type="LiteralStringSingle"/>
+        <push state="string_single_multiline"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string_double"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <push state="string_single"/>
+      </rule>
+    </state>
+    <state name="string_common">
+      <rule pattern="\\(x[0-9A-Fa-f]{2}|u[0-9A-Fa-f]{4}|u\{[0-9A-Fa-f]*\}|[a-z&#39;\&#34;$\\])">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="(\$)([a-zA-Z_]\w*)">
+        <bygroups>
+          <token type="LiteralStringInterpol"/>
+          <token type="Name"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\$\{)(.*?)(\})">
+        <bygroups>
+          <token type="LiteralStringInterpol"/>
+          <usingself state="root"/>
+          <token type="LiteralStringInterpol"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="string_double">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^&#34;$\\\n]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule>
+        <include state="string_common"/>
+      </rule>
+      <rule pattern="\$+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="string_single">
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^&#39;$\\\n]+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule>
+        <include state="string_common"/>
+      </rule>
+      <rule pattern="\$+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/dax.xml

@@ -0,0 +1,39 @@
+<lexer>
+  <config>
+    <name>Dax</name>
+    <alias>dax</alias>
+    <filename>*.dax</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="--.*\n?"><token type="CommentSingle"/></rule>
+      <rule pattern="//.*\n?"><token type="CommentSingle"/></rule>
+      <rule pattern="/\*"><token type="CommentMultiline"/><push state="multiline-comments"/></rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/desktop_entry.xml

@@ -0,0 +1,17 @@
+<lexer>
+  <config>
+    <name>Desktop file</name>
+    <alias>desktop</alias>
+    <alias>desktop_entry</alias>
+    <filename>*.desktop</filename>
+    <mime_type>application/x-desktop</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^[ \t]*\n"><token type="TextWhitespace"/></rule>
+      <rule pattern="^(#.*)(\n)"><bygroups><token type="CommentSingle"/><token type="TextWhitespace"/></bygroups></rule>
+      <rule pattern="(\[[^\]\n]+\])(\n)"><bygroups><token type="Keyword"/><token type="TextWhitespace"/></bygroups></rule>
+      <rule pattern="([-A-Za-z0-9]+)(\[[^\] \t=]+\])?([ \t]*)(=)([ \t]*)([^\n]*)([ \t\n]*\n)"><bygroups><token type="NameAttribute"/><token type="NameNamespace"/><token type="TextWhitespace"/><token type="Operator"/><token type="TextWhitespace"/><token type="LiteralString"/><token type="TextWhitespace"/></bygroups></rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/diff.xml

@@ -0,0 +1,52 @@
+<lexer>
+  <config>
+    <name>Diff</name>
+    <alias>diff</alias>
+    <alias>udiff</alias>
+    <filename>*.diff</filename>
+    <filename>*.patch</filename>
+    <mime_type>text/x-diff</mime_type>
+    <mime_type>text/x-patch</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern=" .*\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\d+(,\d+)?(a|c|d)\d+(,\d+)?\n">
+        <token type="GenericSubheading"/>
+      </rule>
+      <rule pattern="---\n">
+        <token type="GenericStrong"/>
+      </rule>
+      <rule pattern="&lt; .*\n">
+        <token type="GenericDeleted"/>
+      </rule>
+      <rule pattern="&gt; .*\n">
+        <token type="GenericInserted"/>
+      </rule>
+      <rule pattern="\+.*\n">
+        <token type="GenericInserted"/>
+      </rule>
+      <rule pattern="-.*\n">
+        <token type="GenericDeleted"/>
+      </rule>
+      <rule pattern="!.*\n">
+        <token type="GenericStrong"/>
+      </rule>
+      <rule pattern="@.*\n">
+        <token type="GenericSubheading"/>
+      </rule>
+      <rule pattern="([Ii]ndex|diff).*\n">
+        <token type="GenericHeading"/>
+      </rule>
+      <rule pattern="=.*\n">
+        <token type="GenericHeading"/>
+      </rule>
+      <rule pattern=".*\n">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/django_jinja.xml

@@ -0,0 +1,153 @@
+<lexer>
+  <config>
+    <name>Django/Jinja</name>
+    <alias>django</alias>
+    <alias>jinja</alias>
+    <mime_type>application/x-django-templating</mime_type>
+    <mime_type>application/x-jinja</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="var">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(-?)(\}\})">
+        <bygroups>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="varnames"/>
+      </rule>
+    </state>
+    <state name="block">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(-?)(%\})">
+        <bygroups>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="varnames"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[^{]+">
+        <token type="Other"/>
+      </rule>
+      <rule pattern="\{\{">
+        <token type="CommentPreproc"/>
+        <push state="var"/>
+      </rule>
+      <rule pattern="\{[*#].*?[*#]\}">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(\{%)(-?\s*)(comment)(\s*-?)(%\})(.*?)(\{%)(-?\s*)(endcomment)(\s*-?)(%\})">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+          <token type="Comment"/>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\{%)(-?\s*)(raw)(\s*-?)(%\})(.*?)(\{%)(-?\s*)(endraw)(\s*-?)(%\})">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\{%)(-?\s*)(filter)(\s+)([a-zA-Z_]\w*)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+        <push state="block"/>
+      </rule>
+      <rule pattern="(\{%)(-?\s*)([a-zA-Z_]\w*)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+        <push state="block"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Other"/>
+      </rule>
+    </state>
+    <state name="varnames">
+      <rule pattern="(\|)(\s*)([a-zA-Z_]\w*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(is)(\s+)(not)?(\s+)?([a-zA-Z_]\w*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(_|true|false|none|True|False|None)\b">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="(in|as|reversed|recursive|not|and|or|is|if|else|import|with(?:(?:out)?\s*context)?|scoped|ignore\s+missing)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(loop|block|super|forloop)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[a-zA-Z_][\w-]*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\.\w+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern=":?&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern=":?&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="([{}()\[\]+\-*/,:~]|[&gt;&lt;=]=?)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|0[xX][0-9a-fA-F]+[Ll]?">
+        <token type="LiteralNumber"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/dns.xml

@@ -0,0 +1,44 @@
+<?xml version="1.0"?>
+<lexer>
+  <config>
+    <name>dns</name>
+    <alias>zone</alias>
+    <alias>bind</alias>
+    <filename>*.zone</filename>
+    <mime_type>text/dns</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\b(IN|A|AAAA|AFSDB|APL|CAA|CDNSKEY|CDS|CERT|CNAME|DHCID|DLV|DNAME|DNSKEY|DS|HIP|IPSECKEY|KEY|KX|LOC|MX|NAPTR|NS|NSEC|NSEC3|NSEC3PARAM|PTR|RRSIG|RP|SIG|SOA|SRV|SSHFP|TA|TKEY|TLSA|TSIG|TXT)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern=";.*(\S|$)">
+        <token type="Comment"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/docker.xml

@@ -0,0 +1,57 @@
+<lexer>
+  <config>
+    <name>Docker</name>
+    <alias>docker</alias>
+    <alias>dockerfile</alias>
+    <filename>Dockerfile</filename>
+    <filename>Dockerfile.*</filename>
+    <filename>*.Dockerfile</filename>
+    <filename>*.docker</filename>
+    <mime_type>text/x-dockerfile-config</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="#.*">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(ONBUILD)((?:\s*\\?\s*))">
+        <bygroups>
+          <token type="Keyword"/>
+          <using lexer="Bash"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(HEALTHCHECK)(((?:\s*\\?\s*)--\w+=\w+(?:\s*\\?\s*))*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <using lexer="Bash"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(VOLUME|ENTRYPOINT|CMD|SHELL)((?:\s*\\?\s*))(\[.*?\])">
+        <bygroups>
+          <token type="Keyword"/>
+          <using lexer="Bash"/>
+          <using lexer="JSON"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(LABEL|ENV|ARG)((?:(?:\s*\\?\s*)\w+=\w+(?:\s*\\?\s*))*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <using lexer="Bash"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((?:FROM|MAINTAINER|EXPOSE|WORKDIR|USER|STOPSIGNAL)|VOLUME)\b(.*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((?:RUN|CMD|ENTRYPOINT|ENV|ARG|LABEL|ADD|COPY))">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(.*\\\n)*.+">
+        <using lexer="Bash"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/dtd.xml

@@ -0,0 +1,168 @@
+<lexer>
+  <config>
+    <name>DTD</name>
+    <alias>dtd</alias>
+    <filename>*.dtd</filename>
+    <mime_type>application/xml-dtd</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="common">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(%|&amp;)[^;]*;">
+        <token type="NameEntity"/>
+      </rule>
+      <rule pattern="&lt;!--">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="[(|)*,?+]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&#34;[^&#34;]*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\&#39;[^\&#39;]*\&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^-]+">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="--&gt;">
+        <token type="Comment"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="-">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="element">
+      <rule>
+        <include state="common"/>
+      </rule>
+      <rule pattern="EMPTY|ANY|#PCDATA">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="[^&gt;\s|()?+*,]+">
+        <token type="NameTag"/>
+      </rule>
+      <rule pattern="&gt;">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="attlist">
+      <rule>
+        <include state="common"/>
+      </rule>
+      <rule pattern="CDATA|IDREFS|IDREF|ID|NMTOKENS|NMTOKEN|ENTITIES|ENTITY|NOTATION">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="#REQUIRED|#IMPLIED|#FIXED">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="xml:space|xml:lang">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="[^&gt;\s|()?+*,]+">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="&gt;">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="entity">
+      <rule>
+        <include state="common"/>
+      </rule>
+      <rule pattern="SYSTEM|PUBLIC|NDATA">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="[^&gt;\s|()?+*,]+">
+        <token type="NameEntity"/>
+      </rule>
+      <rule pattern="&gt;">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="notation">
+      <rule>
+        <include state="common"/>
+      </rule>
+      <rule pattern="SYSTEM|PUBLIC">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="[^&gt;\s|()?+*,]+">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="&gt;">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="common"/>
+      </rule>
+      <rule pattern="(&lt;!ELEMENT)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameTag"/>
+        </bygroups>
+        <push state="element"/>
+      </rule>
+      <rule pattern="(&lt;!ATTLIST)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameTag"/>
+        </bygroups>
+        <push state="attlist"/>
+      </rule>
+      <rule pattern="(&lt;!ENTITY)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameEntity"/>
+        </bygroups>
+        <push state="entity"/>
+      </rule>
+      <rule pattern="(&lt;!NOTATION)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameTag"/>
+        </bygroups>
+        <push state="notation"/>
+      </rule>
+      <rule pattern="(&lt;!\[)([^\[\s]+)(\s*)(\[)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="NameEntity"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;!DOCTYPE)(\s+)([^&gt;\s]+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="PUBLIC|SYSTEM">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="[\[\]&gt;]">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/dylan.xml

@@ -0,0 +1,176 @@
+<lexer>
+  <config>
+    <name>Dylan</name>
+    <alias>dylan</alias>
+    <filename>*.dylan</filename>
+    <filename>*.dyl</filename>
+    <filename>*.intr</filename>
+    <mime_type>text/x-dylan</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\([\\abfnrtv&#34;\&#39;]|x[a-f0-9]{2,4}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="([a-z0-9-]+:)([ \t]*)(.*(?:\n[ \t].+)*)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="TextWhitespace"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <push state="code"/>
+      </rule>
+    </state>
+    <state name="code">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="&#39;(\\.|\\[0-7]{1,3}|\\x[a-f0-9]{1,2}|[^\\\&#39;\n])&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="#b[01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="#o[0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="[-+]?(\d*\.\d+([ed][-+]?\d+)?|\d+(\.\d*)?e[-+]?\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[-+]?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="#x[0-9a-f]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="(\?\\?)([\w!&amp;*&lt;&gt;|^$%@+~?/=-]+)(:)(token|name|variable|expression|body|case-body|\*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameVariable"/>
+          <token type="Operator"/>
+          <token type="NameBuiltin"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\?)(:)(token|name|variable|expression|body|case-body|\*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Operator"/>
+          <token type="NameVariable"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\?\\?)([\w!&amp;*&lt;&gt;|^$%@+~?/=-]+)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameVariable"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(=&gt;|::|#\(|#\[|##|\?\?|\?=|\?|[(){}\[\],.;])">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern=":=">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="#[tf]">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="#&#34;">
+        <token type="LiteralStringSymbol"/>
+        <push state="symbol"/>
+      </rule>
+      <rule pattern="#[a-z0-9-]+">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="#(all-keys|include|key|next|rest)">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="[\w!&amp;*&lt;&gt;|^$%@+~?/=-]+:">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="&lt;[\w!&amp;*&lt;&gt;|^$%@+~?/=-]+&gt;">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="\*[\w!&amp;*&lt;&gt;|^$%@+~?/=-]+\*">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="\$[\w!&amp;*&lt;&gt;|^$%@+~?/=-]+">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="(let|method|function)([ \t]+)([\w!&amp;*&lt;&gt;|^$%@+~?/=-]+)">
+        <bygroups>
+          <token type="NameBuiltin"/>
+          <token type="TextWhitespace"/>
+          <token type="NameVariable"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(error|signal|return|break)">
+        <token type="NameException"/>
+      </rule>
+      <rule pattern="(\\?)([\w!&amp;*&lt;&gt;|^$%@+~?/=-]+)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Name"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^*/]">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[*/]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="symbol">
+      <rule pattern="&#34;">
+        <token type="LiteralStringSymbol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ebnf.xml

@@ -0,0 +1,90 @@
+<lexer>
+  <config>
+    <name>EBNF</name>
+    <alias>ebnf</alias>
+    <filename>*.ebnf</filename>
+    <mime_type>text/x-ebnf</mime_type>
+  </config>
+  <rules>
+    <state name="comment">
+      <rule pattern="[^*)]">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule>
+        <include state="comment_start"/>
+      </rule>
+      <rule pattern="\*\)">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[*)]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="identifier">
+      <rule pattern="([a-zA-Z][\w \-]*)">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="comment_start"/>
+      </rule>
+      <rule>
+        <include state="identifier"/>
+      </rule>
+      <rule pattern="=">
+        <token type="Operator"/>
+        <push state="production"/>
+      </rule>
+    </state>
+    <state name="production">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="comment_start"/>
+      </rule>
+      <rule>
+        <include state="identifier"/>
+      </rule>
+      <rule pattern="&#34;[^&#34;]*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;[^&#39;]*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="(\?[^?]*\?)">
+        <token type="NameEntity"/>
+      </rule>
+      <rule pattern="[\[\]{}(),|]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="-">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\.">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="comment_start">
+      <rule pattern="\(\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/elixir.xml

@@ -0,0 +1,744 @@
+<lexer>
+  <config>
+    <name>Elixir</name>
+    <alias>elixir</alias>
+    <alias>ex</alias>
+    <alias>exs</alias>
+    <filename>*.ex</filename>
+    <filename>*.eex</filename>
+    <filename>*.exs</filename>
+    <mime_type>text/x-elixir</mime_type>
+  </config>
+  <rules>
+    <state name="cb-intp">
+      <rule pattern="[^#\}\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\}[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="triquot-end">
+      <rule pattern="[a-zA-Z]+">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="apos-no-intp">
+      <rule pattern="[^&#39;\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="&#39;[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="slas-no-intp">
+      <rule pattern="[^/\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="/[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="pipe-no-intp">
+      <rule pattern="[^\|\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\|[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="apos-intp">
+      <rule pattern="[^#&#39;\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="&#39;[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="cb-no-intp">
+      <rule pattern="[^\}\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\}[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="heredoc_double">
+      <rule pattern="^\s*&#34;&#34;&#34;">
+        <token type="LiteralStringHeredoc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="heredoc_interpol"/>
+      </rule>
+    </state>
+    <state name="triapos-end">
+      <rule pattern="[a-zA-Z]+">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="interpol_string">
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="triquot-intp">
+      <rule pattern="^\s*&#34;&#34;&#34;">
+        <token type="LiteralStringHeredoc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="heredoc_interpol"/>
+      </rule>
+    </state>
+    <state name="interpol">
+      <rule pattern="#\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpol_string"/>
+      </rule>
+    </state>
+    <state name="pa-no-intp">
+      <rule pattern="[^\)\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\)[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="map_key">
+      <rule>
+        <include state="root"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+        <push state="map_val"/>
+      </rule>
+      <rule pattern="=&gt;">
+        <token type="Punctuation"/>
+        <push state="map_val"/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="pa-intp">
+      <rule pattern="[^#\)\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\)[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="tuple">
+      <rule>
+        <include state="root"/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="#.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(\?)(\\x\{)([\da-fA-F]+)(\})">
+        <bygroups>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringEscape"/>
+          <token type="LiteralNumberHex"/>
+          <token type="LiteralStringEscape"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\?)(\\x[\da-fA-F]{1,2})">
+        <bygroups>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringEscape"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\?)(\\[abdefnrstv])">
+        <bygroups>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringEscape"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\?\\?.">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern=":::">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="::">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern=":(?:\.\.\.|&lt;&lt;&gt;&gt;|%\{\}|%|\{\})">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern=":(?:(?:\.\.\.|[a-z_]\w*[!?]?)|[A-Z]\w*(?:\.[A-Z]\w*)*|(?:\&lt;\&lt;\&lt;|\&gt;\&gt;\&gt;|\|\|\||\&amp;\&amp;\&amp;|\^\^\^|\~\~\~|\=\=\=|\!\=\=|\~\&gt;\&gt;|\&lt;\~\&gt;|\|\~\&gt;|\&lt;\|\&gt;|\=\=|\!\=|\&lt;\=|\&gt;\=|\&amp;\&amp;|\|\||\&lt;\&gt;|\+\+|\-\-|\|\&gt;|\=\~|\-\&gt;|\&lt;\-|\||\.|\=|\~\&gt;|\&lt;\~|\&lt;|\&gt;|\+|\-|\*|\/|\!|\^|\&amp;))">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern=":&#34;">
+        <token type="LiteralStringSymbol"/>
+        <push state="string_double_atom"/>
+      </rule>
+      <rule pattern=":&#39;">
+        <token type="LiteralStringSymbol"/>
+        <push state="string_single_atom"/>
+      </rule>
+      <rule pattern="((?:\.\.\.|&lt;&lt;&gt;&gt;|%\{\}|%|\{\})|(?:(?:\.\.\.|[a-z_]\w*[!?]?)|[A-Z]\w*(?:\.[A-Z]\w*)*|(?:\&lt;\&lt;\&lt;|\&gt;\&gt;\&gt;|\|\|\||\&amp;\&amp;\&amp;|\^\^\^|\~\~\~|\=\=\=|\!\=\=|\~\&gt;\&gt;|\&lt;\~\&gt;|\|\~\&gt;|\&lt;\|\&gt;|\=\=|\!\=|\&lt;\=|\&gt;\=|\&amp;\&amp;|\|\||\&lt;\&gt;|\+\+|\-\-|\|\&gt;|\=\~|\-\&gt;|\&lt;\-|\||\.|\=|\~\&gt;|\&lt;\~|\&lt;|\&gt;|\+|\-|\*|\/|\!|\^|\&amp;)))(:)(?=\s|\n)">
+        <bygroups>
+          <token type="LiteralStringSymbol"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(fn|do|end|after|else|rescue|catch)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(not|and|or|when|in)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="(case|cond|for|if|unless|try|receive|raise|quote|unquote|unquote_splicing|throw|super|while)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(def|defp|defmodule|defprotocol|defmacro|defmacrop|defdelegate|defexception|defstruct|defimpl|defcallback)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(import|require|use|alias)\b">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern="(nil|true|false)\b">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="(_|__MODULE__|__DIR__|__ENV__|__CALLER__)\b">
+        <token type="NamePseudo"/>
+      </rule>
+      <rule pattern="@(?:\.\.\.|[a-z_]\w*[!?]?)">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="(?:\.\.\.|[a-z_]\w*[!?]?)">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="(%?)([A-Z]\w*(?:\.[A-Z]\w*)*)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\&lt;\&lt;\&lt;|\&gt;\&gt;\&gt;|\|\|\||\&amp;\&amp;\&amp;|\^\^\^|\~\~\~|\=\=\=|\!\=\=|\~\&gt;\&gt;|\&lt;\~\&gt;|\|\~\&gt;|\&lt;\|\&gt;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\=\=|\!\=|\&lt;\=|\&gt;\=|\&amp;\&amp;|\|\||\&lt;\&gt;|\+\+|\-\-|\|\&gt;|\=\~|\-\&gt;|\&lt;\-|\||\.|\=|\~\&gt;|\&lt;\~">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\\\\|\&lt;\&lt;|\&gt;\&gt;|\=\&gt;|\(|\)|\:|\;|\,|\[|\]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&amp;\d">
+        <token type="NameEntity"/>
+      </rule>
+      <rule pattern="\&lt;|\&gt;|\+|\-|\*|\/|\!|\^|\&amp;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="0b[01](_?[01])*">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0o[0-7](_?[0-7])*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0x[\da-fA-F](_?[\dA-Fa-f])*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d(_?\d)*\.\d(_?\d)*([eE][-+]?\d(_?\d)*)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d(_?\d)*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;\s*">
+        <token type="LiteralStringHeredoc"/>
+        <push state="heredoc_double"/>
+      </rule>
+      <rule pattern="&#39;&#39;&#39;\s*$">
+        <token type="LiteralStringHeredoc"/>
+        <push state="heredoc_single"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string_double"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <push state="string_single"/>
+      </rule>
+      <rule>
+        <include state="sigils"/>
+      </rule>
+      <rule pattern="%\{">
+        <token type="Punctuation"/>
+        <push state="map_key"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push state="tuple"/>
+      </rule>
+    </state>
+    <state name="sigils">
+      <rule pattern="(~[a-z])(&#34;&#34;&#34;)">
+        <bygroups>
+          <token type="LiteralStringOther"/>
+          <token type="LiteralStringHeredoc"/>
+        </bygroups>
+        <push state="triquot-end" state="triquot-intp"/>
+      </rule>
+      <rule pattern="(~[A-Z])(&#34;&#34;&#34;)">
+        <bygroups>
+          <token type="LiteralStringOther"/>
+          <token type="LiteralStringHeredoc"/>
+        </bygroups>
+        <push state="triquot-end" state="triquot-no-intp"/>
+      </rule>
+      <rule pattern="(~[a-z])(&#39;&#39;&#39;)">
+        <bygroups>
+          <token type="LiteralStringOther"/>
+          <token type="LiteralStringHeredoc"/>
+        </bygroups>
+        <push state="triapos-end" state="triapos-intp"/>
+      </rule>
+      <rule pattern="(~[A-Z])(&#39;&#39;&#39;)">
+        <bygroups>
+          <token type="LiteralStringOther"/>
+          <token type="LiteralStringHeredoc"/>
+        </bygroups>
+        <push state="triapos-end" state="triapos-no-intp"/>
+      </rule>
+      <rule pattern="~[a-z]\{">
+        <token type="LiteralStringOther"/>
+        <push state="cb-intp"/>
+      </rule>
+      <rule pattern="~[A-Z]\{">
+        <token type="LiteralStringOther"/>
+        <push state="cb-no-intp"/>
+      </rule>
+      <rule pattern="~[a-z]\[">
+        <token type="LiteralStringOther"/>
+        <push state="sb-intp"/>
+      </rule>
+      <rule pattern="~[A-Z]\[">
+        <token type="LiteralStringOther"/>
+        <push state="sb-no-intp"/>
+      </rule>
+      <rule pattern="~[a-z]\(">
+        <token type="LiteralStringOther"/>
+        <push state="pa-intp"/>
+      </rule>
+      <rule pattern="~[A-Z]\(">
+        <token type="LiteralStringOther"/>
+        <push state="pa-no-intp"/>
+      </rule>
+      <rule pattern="~[a-z]&lt;">
+        <token type="LiteralStringOther"/>
+        <push state="ab-intp"/>
+      </rule>
+      <rule pattern="~[A-Z]&lt;">
+        <token type="LiteralStringOther"/>
+        <push state="ab-no-intp"/>
+      </rule>
+      <rule pattern="~[a-z]/">
+        <token type="LiteralStringOther"/>
+        <push state="slas-intp"/>
+      </rule>
+      <rule pattern="~[A-Z]/">
+        <token type="LiteralStringOther"/>
+        <push state="slas-no-intp"/>
+      </rule>
+      <rule pattern="~[a-z]\|">
+        <token type="LiteralStringOther"/>
+        <push state="pipe-intp"/>
+      </rule>
+      <rule pattern="~[A-Z]\|">
+        <token type="LiteralStringOther"/>
+        <push state="pipe-no-intp"/>
+      </rule>
+      <rule pattern="~[a-z]&#34;">
+        <token type="LiteralStringOther"/>
+        <push state="quot-intp"/>
+      </rule>
+      <rule pattern="~[A-Z]&#34;">
+        <token type="LiteralStringOther"/>
+        <push state="quot-no-intp"/>
+      </rule>
+      <rule pattern="~[a-z]&#39;">
+        <token type="LiteralStringOther"/>
+        <push state="apos-intp"/>
+      </rule>
+      <rule pattern="~[A-Z]&#39;">
+        <token type="LiteralStringOther"/>
+        <push state="apos-no-intp"/>
+      </rule>
+    </state>
+    <state name="triapos-intp">
+      <rule pattern="^\s*&#39;&#39;&#39;">
+        <token type="LiteralStringHeredoc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="heredoc_interpol"/>
+      </rule>
+    </state>
+    <state name="string_single_atom">
+      <rule pattern="[^#&#39;\\]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="(&#39;)">
+        <bygroups>
+          <token type="LiteralStringSymbol"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="quot-intp">
+      <rule pattern="[^#&#34;\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="&#34;[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="sb-no-intp">
+      <rule pattern="[^\]\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\][a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="slas-intp">
+      <rule pattern="[^#/\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="/[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="sb-intp">
+      <rule pattern="[^#\]\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\][a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="heredoc_no_interpol">
+      <rule pattern="[^\\\n]+">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+      <rule pattern="\n+">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+    </state>
+    <state name="pipe-intp">
+      <rule pattern="[^#\|\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\|[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="map_val">
+      <rule>
+        <include state="root"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?=\})">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="heredoc_single">
+      <rule pattern="^\s*&#39;&#39;&#39;">
+        <token type="LiteralStringHeredoc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="heredoc_interpol"/>
+      </rule>
+    </state>
+    <state name="heredoc_interpol">
+      <rule pattern="[^#\\\n]+">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+      <rule pattern="\n+">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="string_single">
+      <rule pattern="[^#&#39;\\]+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="(&#39;)">
+        <bygroups>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="string_double_atom">
+      <rule pattern="[^#&#34;\\]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="(&#34;)">
+        <bygroups>
+          <token type="LiteralStringSymbol"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="ab-no-intp">
+      <rule pattern="[^&gt;\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="&gt;[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="ab-intp">
+      <rule pattern="[^#&gt;\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="&gt;[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="quot-no-intp">
+      <rule pattern="[^&#34;\\]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="&#34;[a-zA-Z]*">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="triapos-no-intp">
+      <rule pattern="^\s*&#39;&#39;&#39;">
+        <token type="LiteralStringHeredoc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="heredoc_no_interpol"/>
+      </rule>
+    </state>
+    <state name="string_double">
+      <rule pattern="[^#&#34;\\]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="(&#34;)">
+        <bygroups>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="interpol"/>
+      </rule>
+    </state>
+    <state name="escapes">
+      <rule pattern="(\\x\{)([\da-fA-F]+)(\})">
+        <bygroups>
+          <token type="LiteralStringEscape"/>
+          <token type="LiteralNumberHex"/>
+          <token type="LiteralStringEscape"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\\x[\da-fA-F]{1,2})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="(\\[abdefnrstv])">
+        <token type="LiteralStringEscape"/>
+      </rule>
+    </state>
+    <state name="triquot-no-intp">
+      <rule pattern="^\s*&#34;&#34;&#34;">
+        <token type="LiteralStringHeredoc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="heredoc_no_interpol"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/elm.xml

@@ -0,0 +1,119 @@
+<lexer>
+  <config>
+    <name>Elm</name>
+    <alias>elm</alias>
+    <filename>*.elm</filename>
+    <mime_type>text/x-elm</mime_type>
+  </config>
+  <rules>
+    <state name="shader">
+      <rule pattern="\|(?!\])">
+        <token type="NameEntity"/>
+      </rule>
+      <rule pattern="\|\]">
+        <token type="NameEntity"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".*\n">
+        <token type="NameEntity"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\{-">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="--.*">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="doublequote"/>
+      </rule>
+      <rule pattern="^\s*module\s*">
+        <token type="KeywordNamespace"/>
+        <push state="imports"/>
+      </rule>
+      <rule pattern="^\s*import\s*">
+        <token type="KeywordNamespace"/>
+        <push state="imports"/>
+      </rule>
+      <rule pattern="\[glsl\|.*">
+        <token type="NameEntity"/>
+        <push state="shader"/>
+      </rule>
+      <rule pattern="(import|module|alias|where|port|else|type|case|then|let|as|of|if|in)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="[A-Z]\w*">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="^main ">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="\((&lt;-|\|\||\|&gt;|&amp;&amp;|\+\+|-&gt;|\.\.|//|&gt;&gt;|&gt;=|/=|==|::|&lt;~|&lt;\||&lt;=|&lt;&lt;|~|&lt;|=|:|&gt;|&#39;|/|\\|\.|\^|-|`|\+|\*|\||%)\)">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="(&lt;-|\|\||\|&gt;|&amp;&amp;|\+\+|-&gt;|\.\.|//|&gt;&gt;|&gt;=|/=|==|::|&lt;~|&lt;\||&lt;=|&lt;&lt;|~|&lt;|=|:|&gt;|&#39;|/|\\|\.|\^|-|`|\+|\*|\||%)">
+        <token type="NameFunction"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+      <rule pattern="[a-z_][a-zA-Z_\&#39;]*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[,()\[\]{}]">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="-(?!\})">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\{-">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="[^-}]">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="-\}">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="doublequote">
+      <rule pattern="\\u[0-9a-fA-F]{4}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\[nrfvb\\&#34;]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^&#34;]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="imports">
+      <rule pattern="\w+(\.\w+)*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="numbers">
+      <rule pattern="_?\d+\.(?=\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="_?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/emacslisp.xml

@@ -0,0 +1,132 @@
+<lexer>
+  <config>
+    <name>EmacsLisp</name>
+    <alias>emacs</alias>
+    <alias>elisp</alias>
+    <alias>emacs-lisp</alias>
+    <filename>*.el</filename>
+    <mime_type>text/x-elisp</mime_type>
+    <mime_type>application/x-elisp</mime_type>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="[^&#34;\\`]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="`((?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|]|[#.:])*)\&#39;">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <push state="body"/>
+      </rule>
+    </state>
+    <state name="body">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=";.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="\?([^\\]|\\.)">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern=":((?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|]|[#.:])*)">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="::((?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|]|[#.:])*)">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="&#39;((?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|]|[#.:])*)">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="`">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[-+]?\d+\.?(?=[ &#34;()\]\&#39;\n,;`])">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[-+]?\d+/\d+(?=[ &#34;()\]\&#39;\n,;`])">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[-+]?(\d*\.\d+([defls][-+]?\d+)?|\d+(\.\d*)?[defls][-+]?\d+)(?=[ &#34;()\]\&#39;\n,;`])">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\[|\]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="#:((?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|]|[#.:])*)">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="#\^\^?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="#\&#39;">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="#[bB][+-]?[01]+(/[01]+)?">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="#[oO][+-]?[0-7]+(/[0-7]+)?">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="#[xX][+-]?[0-9a-fA-F]+(/[0-9a-fA-F]+)?">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="#\d+r[+-]?[0-9a-zA-Z]+(/[0-9a-zA-Z]+)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="#\d+=">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="#\d+#">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(,@|,|\.|:)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(t|nil)(?=[ &#34;()\]\&#39;\n,;`])">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="\*((?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|]|[#.:])*)\*">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="((?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|])(?:\\.|[\w!$%&amp;*+-/&lt;=&gt;?@^{}~|]|[#.:])*)">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="#\(">
+        <token type="Operator"/>
+        <push state="body"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="body"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/erlang.xml

@@ -0,0 +1,166 @@
+<lexer>
+  <config>
+    <name>Erlang</name>
+    <alias>erlang</alias>
+    <filename>*.erl</filename>
+    <filename>*.hrl</filename>
+    <filename>*.es</filename>
+    <filename>*.escript</filename>
+    <mime_type>text/x-erlang</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="%.*\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(receive|after|begin|catch|query|case|cond|when|let|fun|end|try|of|if)\b">
+        <token type="Keyword"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/factor.xml

@@ -0,0 +1,412 @@
+<lexer>
+  <config>
+    <name>Factor</name>
+    <alias>factor</alias>
+    <filename>*.factor</filename>
+    <mime_type>text/x-factor</mime_type>
+  </config>
+  <rules>
+    <state name="base">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="((?:MACRO|MEMO|TYPED)?:[:]?)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(M:[:]?)(\s+)(\S+)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(C:)(\s+)(\S+)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(GENERIC:)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(HOOK:|GENERIC#)(\s+)(\S+)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\(\s">
+        <token type="NameFunction"/>
+        <push state="stackeffect"/>
+      </rule>
+      <rule pattern=";\s">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(USING:)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="vocabs"/>
+      </rule>
+      <rule pattern="(USE:|UNUSE:|IN:|QUALIFIED:)(\s+)(\S+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(QUALIFIED-WITH:)(\s+)(\S+)(\s+)(\S+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(FROM:|EXCLUDE:)(\s+)(\S+)(\s+=&gt;\s)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="words"/>
+      </rule>
+      <rule pattern="(RENAME:)(\s+)(\S+)(\s+)(\S+)(\s+=&gt;\s+)(\S+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(ALIAS:|TYPEDEF:)(\s+)(\S+)(\s+)(\S+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(DEFER:|FORGET:|POSTPONE:)(\s+)(\S+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(TUPLE:|ERROR:)(\s+)(\S+)(\s+&lt;\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+        <push state="slots"/>
+      </rule>
+      <rule pattern="(TUPLE:|ERROR:|BUILTIN:)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+        <push state="slots"/>
+      </rule>
+      <rule pattern="(MIXIN:|UNION:|INTERSECTION:)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(PREDICATE:)(\s+)(\S+)(\s+&lt;\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(C:)(\s+)(\S+)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(INSTANCE:)(\s+)(\S+)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(SLOT:)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(SINGLETON:)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="SINGLETONS:">
+        <token type="Keyword"/>
+        <push state="classes"/>
+      </rule>
+      <rule pattern="(CONSTANT:|SYMBOL:|MAIN:|HELP:)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="SYMBOLS:\s">
+        <token type="Keyword"/>
+        <push state="words"/>
+      </rule>
+      <rule pattern="SYNTAX:\s">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="ALIEN:\s">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(STRUCT:)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(FUNCTION:)(\s+\S+\s+)(\S+)(\s+\(\s+[^)]+\)\s)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(FUNCTION-ALIAS:)(\s+)(\S+)(\s+\S+\s+)(\S+)(\s+\(\s+[^)]+\)\s)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?:&lt;PRIVATE|PRIVATE&gt;)\s">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;\s+(?:.|\n)*?\s+&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;(?:\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\S+&#34;\s+(?:\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="CHAR:\s+(?:\\[\\abfnrstv]|[^\\]\S*)\s">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="!\s+.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="#!\s+.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="/\*\s+(?:.|\n)*?\s\*/\s">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="[tf]\s">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="[\\$]\s+\S+">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="M\\\s+\S+\s+\S+">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="[+-]?(?:[\d,]*\d)?\.(?:\d([\d,]*\d)?)?(?:[eE][+-]?\d+)?\s">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[+-]?\d(?:[\d,]*\d)?(?:[eE][+-]?\d+)?\s">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="0x[a-fA-F\d](?:[a-fA-F\d,]*[a-fA-F\d])?(?:p\d([\d,]*\d)?)?\s">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="NAN:\s+[a-fA-F\d](?:[a-fA-F\d,]*[a-fA-F\d])?(?:p\d([\d,]*\d)?)?\s">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="0b[01]+\s">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0o[0-7]+\s">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="(?:\d([\d,]*\d)?)?\+\d(?:[\d,]*\d)?/\d(?:[\d,]*\d)?\s">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="(?:\-\d([\d,]*\d)?)?\-\d(?:[\d,]*\d)?/\d(?:[\d,]*\d)?\s">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="(?:deprecated|final|foldable|flushable|inline|recursive)\s">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(identity-hashcode|callstack&gt;array|identity-tuple\?|identity-tuple|retainstack|callstack\?|tri-curry\*|tri-curry@|tri-curry|&lt;wrapper&gt;|datastack|bi-curry@|bi-curry\*|hashcode\*|callstack|\?execute|hashcode|boolean\?|compose\?|&gt;boolean|wrapper\?|bi-curry|unless\*|boolean|assert\?|\(clone\)|either\?|prepose|assert=|execute|wrapper|compose|3curry|assert|2curry|curry\?|object|equal\?|tuple\?|unless|build|3drop|same\?|2tri\*|2tri@|both\?|3keep|4drop|throw|2over|swapd|clear|2keep|2drop|until|curry|4keep|clone|while|tuple|when\*|-rot|tri@|dupd|drop|tri\*|call|when|with|4dup|4dip|3tri|3dup|3dip|2tri|keep|loop|most|2nip|swap|2dup|null|2dip|2bi\*|2bi@|pick|over|and|rot|not|nip|new|if\*|tri|2bi|boa|eq\?|dup|3bi|dip|die|bi\*|bi@|\?if|xor|bi|do|if|or|\?|=)\s">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(assoc-clone-like|assoc-filter-as|assoc-partition|assoc-intersect|assoc-hashcode|assoc-combine|assoc-filter!|assoc-subset\?|assoc-union!|maybe-set-at|extract-keys|assoc-map-as|assoc-differ|assoc-refine|assoc-empty\?|assoc-filter|assoc-diff!|sift-values|assoc-union|assoc-stack|clear-assoc|assoc-all\?|delete-at\*|assoc-find|substitute|assoc-each|assoc-size|assoc-diff|assoc-any\?|assoc-like|rename-at|sift-keys|new-assoc|map&gt;assoc|value-at\*|assoc-map|delete-at|change-at|assoc&gt;map|value-at|push-at|assoc=|values|set-at|&lt;enum&gt;|inc-at|2cache|value\?|assoc\?|&gt;alist|cache|enum\?|assoc|unzip|key\?|enum|keys|\?at|\?of|zip|at\+|at\*|at|of)\s">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(shallow-spread&gt;quot|recursive-hashcode|linear-case-quot|deep-spread&gt;quot|to-fixed-point|execute-effect|wrong-values\?|4cleave&gt;quot|2cleave&gt;quot|wrong-values|3cleave&gt;quot|cleave&gt;quot|call-effect|alist&gt;quot|case&gt;quot|case-find|cond&gt;quot|no-case\?|no-cond\?|no-case|no-cond|4cleave|3cleave|2cleave|cleave|spread|cond|case)\s">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(log2-expects-positive\?|integer&gt;fixnum-strict|log2-expects-positive|out-of-fixnum-range\?|out-of-fixnum-range|find-last-integer|next-power-of-2|\(all-integers\?\)|integer&gt;fixnum|\(find-integer\)|\(each-integer\)|imaginary-part|fp-nan-payload|all-integers\?|find-integer|each-integer|fp-infinity\?|fp-special\?|fp-bitwise=|bits&gt;double|double&gt;bits|power-of-2\?|unless-zero|denominator|next-float|bits&gt;float|float&gt;bits|prev-float|unordered\?|real-part|when-zero|numerator|rational\?|&gt;integer|rational|complex\?|&lt;fp-nan&gt;|fp-qnan\?|fp-snan\?|integer\?|number=|bignum\?|integer|&gt;fixnum|fp-sign|fp-nan\?|fixnum\?|number\?|complex|if-zero|&gt;bignum|bignum|number|fixnum|float\?|bitxor|ratio\?|bitnot|bitand|&gt;float|real\?|bitor|zero\?|even\?|times|shift|float|recip|align|ratio|neg\?|real|log2|bit\?|odd\?|/mod|\?1\+|mod|rem|neg|sgn|u&lt;=|u&gt;=|abs|u&gt;|2/|2\^|/i|/f|sq|&lt;=|u&lt;|&gt;=|-|\+|&lt;|\*|/|&gt;)\s">
+        <token type="NameBuiltin"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/fennel.xml

@@ -0,0 +1,68 @@
+<lexer>
+  <config>
+    <name>Fennel</name>
+    <alias>fennel</alias>
+    <alias>fnl</alias>
+    <filename>*.fennel</filename>
+    <mime_type>text/x-fennel</mime_type>
+    <mime_type>application/x-fennel</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern=";.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="-?\d+\.\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="-?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="0x-?[abcdef\d]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;(?!#)[\w!$%*+&lt;=&gt;?/.#-]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="\\(.|[a-z]+)">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="::?#?(?!#)[\w!$%*+&lt;=&gt;?/.#-]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="~@|[`\&#39;#^~&amp;@]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(require-macros|set-forcibly!|import-macros|eval-compiler|pick-values|accumulate|macrodebug|pick-args|with-open|icollect|partial|comment|include|collect|hashfn|rshift|values|length|lshift|quote|match|while|doto|band|when|bnot|bxor|not=|tset|-\?&gt;&gt;|each|-&gt;&gt;|let|doc|for|and|set|not|-\?&gt;|bor|lua|\?\.|do|&gt;=|&lt;=|//|\.\.|-&gt;|or|if|~=|\^|&gt;|=|&lt;|:|/|\.|-|\+|\*|%|#) ">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(global|lambda|macros|local|macro|var|fn|λ) ">
+        <token type="KeywordDeclaration"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/fish.xml

@@ -0,0 +1,159 @@
+<lexer>
+  <config>
+    <name>Fish</name>
+    <alias>fish</alias>
+    <alias>fishshell</alias>
+    <filename>*.fish</filename>
+    <filename>*.load</filename>
+    <mime_type>application/x-fish</mime_type>
+  </config>
+  <rules>
+    <state name="paren">
+      <rule pattern="\)">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="math">
+      <rule pattern="\)\)">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[-+*/%^|&amp;]|\*\*|\|\|">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\d+#\d+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+#(?! )">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="basic"/>
+      </rule>
+      <rule>
+        <include state="interp"/>
+      </rule>
+      <rule>
+        <include state="data"/>
+      </rule>
+    </state>
+    <state name="interp">
+      <rule pattern="\$\(\(">
+        <token type="Keyword"/>
+        <push state="math"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Keyword"/>
+        <push state="paren"/>
+      </rule>
+      <rule pattern="\$#?(\w+|.)">
+        <token type="NameVariable"/>
+      </rule>
+    </state>
+    <state name="basic">
+      <rule pattern="(?&lt;=(?:^|\A|;|&amp;&amp;|\|\||\||\b(continue|function|return|switch|begin|while|break|count|false|block|echo|case|true|else|exit|test|set|cdh|and|pwd|for|end|not|if|cd|or)\b)\s*)(continue|function|return|switch|begin|while|break|count|false|block|test|case|true|echo|exit|else|set|cdh|and|pwd|for|end|not|if|cd|or)(?=;?\b)">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(?&lt;=for\s+\S+\s+)in\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(fish_update_completions|fish_command_not_found|fish_breakpoint_prompt|fish_status_to_signal|fish_right_prompt|fish_is_root_user|fish_mode_prompt|fish_vcs_prompt|fish_key_reader|fish_svn_prompt|fish_git_prompt|fish_hg_prompt|fish_greeting|fish_add_path|commandline|fish_prompt|fish_indent|fish_config|fish_pager|breakpoint|fish_title|prompt_pwd|functions|set_color|realpath|funcsave|contains|complete|argparse|fish_opt|history|builtin|getopts|suspend|command|mimedb|printf|ulimit|disown|string|source|funced|status|random|isatty|fishd|prevd|vared|umask|nextd|alias|pushd|emit|jobs|popd|help|psub|wait|fish|read|time|exec|eval|math|trap|type|dirs|dirh|abbr|kill|bind|hash|open|fc|bg|fg)\s*\b(?!\.)">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="#!.*\n">
+        <token type="CommentHashbang"/>
+      </rule>
+      <rule pattern="#.*\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\\[\w\W]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="(\b\w+)(\s*)(=)">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[\[\]()={}]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(?&lt;=\[[^\]]+)\.\.|-(?=[^\[]+\])">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&lt;&lt;-?\s*(\&#39;?)\\?(\w+)[\w\W]+?\2">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(?&lt;=set\s+(?:--?[^\d\W][\w-]*\s+)?)\w+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(?&lt;=for\s+)\w[\w-]*(?=\s+in)">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(?&lt;=function\s+)\w(?:[^\n])*?(?= *[-\n])">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="(?&lt;=(?:^|\b(?:and|or|sudo)\b|;|\|\||&amp;&amp;|\||\(|(?:\b\w+\s*=\S+\s)) *)\w[\w-]*">
+        <token type="NameFunction"/>
+      </rule>
+    </state>
+    <state name="data">
+      <rule pattern="(?s)\$?&#34;(\\\\|\\[0-7]+|\\.|[^&#34;\\$])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(?s)\$&#39;(\\\\|\\[0-7]+|\\.|[^&#39;\\])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="(?s)&#39;.*?&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&amp;&amp;|\|\||&amp;|\||\^|&lt;|&gt;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\b\d+\b">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="(?&lt;=\s+)--?[^\d][\w-]*">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern=".+?">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?s)(\\\\|\\[0-7]+|\\.|[^&#34;\\$])+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule>
+        <include state="interp"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/forth.xml 🔗

@@ -0,0 +1,78 @@
+<lexer>
+  <config>
+    <name>Forth</name>
+    <alias>forth</alias>
+    <filename>*.frt</filename>
+    <filename>*.fth</filename>
+    <filename>*.fs</filename>
+    <mime_type>application/x-forth</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\([\s].*?\)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(:|variable|constant|value|buffer:)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="worddef"/>
+      </rule>
+      <rule pattern="([.sc]&#34;)(\s+?)">
+        <bygroups>
+          <token type="LiteralString"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="stringdef"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/fortran.xml 🔗

@@ -0,0 +1,102 @@
+<lexer>
+  <config>
+    <name>Fortran</name>
+    <alias>fortran</alias>
+    <alias>f90</alias>
+    <filename>*.f03</filename>
+    <filename>*.f90</filename>
+    <filename>*.f95</filename>
+    <filename>*.F03</filename>
+    <filename>*.F90</filename>
+    <filename>*.F95</filename>
+    <mime_type>text/x-fortran</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="core">
+      <rule pattern="\b(DO)(\s+)(CONCURRENT)\b">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(GO)(\s*)(TO)\b">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/fortranfixed.xml 🔗

@@ -0,0 +1,71 @@
+<lexer>
+  <config>
+    <name>FortranFixed</name>
+    <alias>fortranfixed</alias>
+    <filename>*.f</filename>
+    <filename>*.F</filename>
+    <mime_type>text/x-fortran</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="cont-char">
+      <rule pattern=" ">
+        <token type="TextWhitespace"/>
+        <push state="code"/>
+      </rule>
+      <rule pattern=".">
+        <token type="GenericStrong"/>
+        <push state="code"/>
+      </rule>
+    </state>
+    <state name="code">
+      <rule pattern="(.{66})(.*)(\n)">
+        <bygroups>
+          <using lexer="Fortran"/>
+          <token type="Comment"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+        <push state="root"/>
+      </rule>
+      <rule pattern="(.*)(!.*)(\n)">
+        <bygroups>
+          <using lexer="Fortran"/>
+          <token type="Comment"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+        <push state="root"/>
+      </rule>
+      <rule pattern="(.*)(\n)">
+        <bygroups>
+          <using lexer="Fortran"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+        <push state="root"/>
+      </rule>
+      <rule>
+        <mutators>
+          <push state="root"/>
+        </mutators>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[C*].*\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="#.*\n">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern=" {0,4}!.*\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(.{5})">
+        <token type="NameLabel"/>
+        <push state="cont-char"/>
+      </rule>
+      <rule pattern=".*\n">
+        <using lexer="Fortran"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/fsharp.xml 🔗

@@ -0,0 +1,245 @@
+<lexer>
+  <config>
+    <name>FSharp</name>
+    <alias>fsharp</alias>
+    <filename>*.fs</filename>
+    <filename>*.fsi</filename>
+    <mime_type>text/x-fsharp</mime_type>
+  </config>
+  <rules>
+    <state name="comment">
+      <rule pattern="[^(*)@&#34;]+">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\(\*">
+        <token type="Comment"/>
+        <push/>
+      </rule>
+      <rule pattern="\*\)">
+        <token type="Comment"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="@&#34;">
+        <token type="LiteralString"/>
+        <push state="lstring"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <push state="tqs"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="[(*)@]">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule>
+        <include state="escape-sequence"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;B?">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="lstring">
+      <rule pattern="[^&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;B?">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="tqs">
+      <rule pattern="[^&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;B?">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="escape-sequence">
+      <rule pattern="\\[\\&#34;\&#39;ntbrafv]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\[0-9]{3}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\u[0-9a-fA-F]{4}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\U[0-9a-fA-F]{8}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\(\)|\[\]">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="\b(?&lt;!\.)([A-Z][\w\&#39;]*)(?=\s*\.)">
+        <token type="NameNamespace"/>
+        <push state="dotted"/>
+      </rule>
+      <rule pattern="\b([A-Z][\w\&#39;]*)">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="///.*?\n">
+        <token type="LiteralStringDoc"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\(\*(?!\))">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="@&#34;">
+        <token type="LiteralString"/>
+        <push state="lstring"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <push state="tqs"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="\b(open type|open|module)(\s+)([\w.]+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(let!?)(\s+)(\w+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameVariable"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(type)(\s+)(\w+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(member|override)(\s+)(\w+)(\.)(\w+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Name"/>
+          <token type="Punctuation"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(abstract|as|assert|base|begin|class|default|delegate|do!|do|done|downcast|downto|elif|else|end|exception|extern|false|finally|for|function|fun|global|if|inherit|inline|interface|internal|in|lazy|let!|let|match|member|module|mutable|namespace|new|null|of|open|override|private|public|rec|return!|return|select|static|struct|then|to|true|try|type|upcast|use!|use|val|void|when|while|with|yield!|yield|atomic|break|checked|component|const|constraint|constructor|continue|eager|event|external|fixed|functor|include|method|mixin|object|parallel|process|protected|pure|sealed|tailcall|trait|virtual|volatile)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="``([^`\n\r\t]|`[^`\n\r\t])+``">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="#[ \t]*(if|endif|else|line|nowarn|light|r|\d+)\b">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(!=|#|&amp;&amp;|&amp;|\(|\)|\*|\+|,|-\.|-&gt;|-|\.\.|\.|::|:=|:&gt;|:|;;|;|&lt;-|&lt;\]|&lt;|&gt;\]|&gt;|\?\?|\?|\[&lt;|\[\||\[|\]|_|`|\{|\|\]|\||\}|~|&lt;@@|&lt;@|=|@&gt;|@@&gt;)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="([=&lt;&gt;@^|&amp;+\*/$%-]|[!?~])?[!$%&amp;*+\./:&lt;=&gt;?@^|~-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\b(and|or|not)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="\b(sbyte|byte|char|nativeint|unativeint|float32|single|float|double|int8|uint8|int16|uint16|int32|uint32|int64|uint64|decimal|unit|bool|string|list|exn|obj|enum)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="[^\W\d][\w&#39;]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\d[\d_]*[uU]?[yslLnQRZINGmM]?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="0[xX][\da-fA-F][\da-fA-F_]*[uU]?[yslLn]?[fF]?">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[oO][0-7][0-7_]*[uU]?[yslLn]?">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[bB][01][01_]*[uU]?[yslLn]?">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="-?\d[\d_]*(.[\d_]*)?([eE][+\-]?\d[\d_]*)[fFmM]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="&#39;(?:(\\[\\\&#34;&#39;ntbr ])|(\\[0-9]{3})|(\\x[0-9a-fA-F]{2}))&#39;B?">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="&#39;.&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="@?&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="[~?][a-z][\w\&#39;]*:">
+        <token type="NameVariable"/>
+      </rule>
+    </state>
+    <state name="dotted">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\.">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[A-Z][\w\&#39;]*(?=\s*\.)">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="[A-Z][\w\&#39;]*">
+        <token type="Name"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[a-z_][\w\&#39;]*">
+        <token type="Name"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/gas.xml 🔗

@@ -0,0 +1,150 @@
+<lexer>
+  <config>
+    <name>GAS</name>
+    <alias>gas</alias>
+    <alias>asm</alias>
+    <filename>*.s</filename>
+    <filename>*.S</filename>
+    <mime_type>text/x-gas</mime_type>
+    <priority>0.1</priority>
+  </config>
+  <rules>
+    <state name="punctuation">
+      <rule pattern="[-*,.()\[\]!:]+">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="(?:[a-zA-Z$_][\w$.@-]*|\.[\w$.@-]+):">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="\.(?:[a-zA-Z$_][\w$.@-]*|\.[\w$.@-]+)">
+        <token type="NameAttribute"/>
+        <push state="directive-args"/>
+      </rule>
+      <rule pattern="lock|rep(n?z)?|data\d+">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="(?:[a-zA-Z$_][\w$.@-]*|\.[\w$.@-]+)">
+        <token type="NameFunction"/>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="[\r\n]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="directive-args">
+      <rule pattern="(?:[a-zA-Z$_][\w$.@-]*|\.[\w$.@-]+)">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="&#34;(\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="@(?:[a-zA-Z$_][\w$.@-]*|\.[\w$.@-]+)">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="(?:0[xX][a-zA-Z0-9]+|\d+)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="%(?:[a-zA-Z$_][\w$.@-]*|\.[\w$.@-]+)">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[\r\n]+">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([;#]|//).*?\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="/[*].*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/[*].*?\n[\w\W]*?[*]/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="punctuation"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+    </state>
+    <state name="instruction-args">
+      <rule pattern="([a-z0-9]+)( )(&lt;)((?:[a-zA-Z$_][\w$.@-]*|\.[\w$.@-]+))(&gt;)">
+        <bygroups>
+          <token type="LiteralNumberHex"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="NameConstant"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([a-z0-9]+)( )(&lt;)((?:[a-zA-Z$_][\w$.@-]*|\.[\w$.@-]+))([-+])((?:0[xX][a-zA-Z0-9]+|\d+))(&gt;)">
+        <bygroups>
+          <token type="LiteralNumberHex"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="NameConstant"/>
+          <token type="Punctuation"/>
+          <token type="LiteralNumberInteger"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?:[a-zA-Z$_][\w$.@-]*|\.[\w$.@-]+)">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="(?:0[xX][a-zA-Z0-9]+|\d+)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="%(?:[a-zA-Z$_][\w$.@-]*|\.[\w$.@-]+)">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="$(?:0[xX][a-zA-Z0-9]+|\d+)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="$&#39;(.|\\&#39;)&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="[\r\n]+">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([;#]|//).*?\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="/[*].*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/[*].*?\n[\w\W]*?[*]/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="punctuation"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="([;#]|//).*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/[*][\w\W]*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/gdscript.xml 🔗

@@ -0,0 +1,259 @@
+<lexer>
+  <config>
+    <name>GDScript</name>
+    <alias>gdscript</alias>
+    <alias>gd</alias>
+    <filename>*.gd</filename>
+    <mime_type>text/x-gdscript</mime_type>
+    <mime_type>application/x-gdscript</mime_type>
+    <priority>0.1</priority>
+    <analyse>
+      <regex pattern="^export" score="0.1"/>
+    </analyse>
+  </config>
+  <rules>
+    <state name="dqs">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="strings_double"/>
+      </rule>
+    </state>
+    <state name="tdqs">
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="strings_double"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(?&lt;!\w)(PI|TAU|NAN|INF|true|false)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(?&lt;!\w)(is|in|as|not|or|and)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="(?&lt;!\w)(var|const|enum|signal|static)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(?&lt;!\w)(if|elif|else|for|while|match|break|continue|pass|return|breakpoint|await|yield|super)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(?&lt;!\w)(self)\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+    </state>
+    <state name="builtin_funcs">
+      <rule pattern="(?&lt;!\w)(assert|char|convert|dict_to_inst|get_stack|inst_to_dict|is_instance_of|len|load|preload|print_debug|print_stack|range|type_exists)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?&lt;!\w)(abs[fi]?|acos|asin|atan2?|bezier_(derivative|interpolate)|bytes_to_var(_with_objects)?|ceil[fi]?|clamp[fi]?|cosh?|cubic_interpolate(_angle)?(_in_time)?|db_to_linear|deg_to_rad|ease|error_string|exp|floor[fi]?|fmod|fposmod|hash|instance_from_id|inverse_lerp|is_equal_approx|is_finite|is_instance(_id)?_valid|is_nan|is_same|is_zero_approx|lerp|lerp_angle|lerpf|linear_to_db|log|max[fi]?|min[fi]?|move_toward|nearest_po2|pingpong|posmod|pow|print|print_rich|print_verbose|printerr|printraw|prints|printt|push_error|push_warning|rad_to_deg|rand_from_seed|randf|randf_range|randfn|randi|randi_range|randomize|remap|rid_allocate_id|rid_from_int64|round[fi]?|seed|sign[fi]?|sinh?|smoothstep|snapped[fi]?|sqrt|step_decimals|str|str_to_var|tanh?|typeof|var_to_bytes(_with_objects)?|var_to_str|weakref|wrap[fi]?)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+    </state>
+    <state name="tsqs">
+      <rule pattern="&#39;&#39;&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="strings_single"/>
+      </rule>
+    </state>
+    <state name="strings_single">
+      <rule>
+        <include state="strings"/>
+      </rule>
+      <rule pattern="\{[^\\\&#39;\n]+\}">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="[^\\\&#39;\{%]+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="%">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="{">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+    <state name="funcname">
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="numbers">
+      <rule pattern="(\d+\.\d*|\d*\.\d+)([eE][+-]?[0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+[eE][+-]?[0-9]+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[a-fA-F0-9]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0b[01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="sqs">
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="strings_single"/>
+      </rule>
+    </state>
+    <state name="classname">
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="annotations">
+      <rule pattern="^\s*@export(_category|_color_no_alpha|_dir|_enum|_exp_easing|_file|_flags((_2d|_3d)(_navigation|_physics|_render)|_avoidance)?|_global(_file|_dir)|_group|_multiline|_node_path|_placeholder|_range|_subgroup)?">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="^\s*@(icon|onready|rpc|tool|warning_ignore)">
+        <token type="NameDecorator"/>
+      </rule>
+    </state>
+    <state name="types">
+      <rule pattern="(?&lt;!\w)(null|void|bool|int|float)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(?&lt;!\w)(String(Name)?|NodePath|Vector[234]i?|Rect2|Transform[23]D|Plane|Quaternion|AABB|Basis|Color8?|RID|Object|(Packed(Byte|Int(32|64)|Float(32|64)|String|Vector(2|3)|Color))?Array|Dictionary|Signal|Callable)\b">
+        <token type="NameClass"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/gdscript3.xml 🔗

@@ -0,0 +1,270 @@
+<lexer>
+  <config>
+    <name>GDScript3</name>
+    <alias>gdscript3</alias>
+    <alias>gd3</alias>
+    <filename>*.gd</filename>
+    <mime_type>text/x-gdscript</mime_type>
+    <mime_type>application/x-gdscript</mime_type>
+    <analyse>
+      <regex pattern="func (_ready|_init|_input|_process|_unhandled_input)" score="0.8"/>
+      <regex pattern="(extends |class_name |onready |preload|load|setget|func [^_])" score="0.4"/>
+      <regex pattern="(var|const|enum|export|signal|tool)" score="0.2"/>
+    </analyse>
+  </config>
+  <rules>
+    <state name="builtins">
+      <rule pattern="(?&lt;!\.)(instance_from_id|nearest_po2|print_stack|type_exist|rand_range|linear2db|var2bytes|dict2inst|randomize|bytes2var|rand_seed|db2linear|inst2dict|printerr|printraw|decimals|preload|deg2rad|str2var|stepify|var2str|convert|weakref|fposmod|funcref|rad2deg|dectime|printt|is_inf|is_nan|assert|Color8|typeof|ColorN|prints|floor|atan2|yield|randf|print|range|clamp|round|randi|sqrt|tanh|cosh|ceil|ease|acos|load|fmod|lerp|seed|sign|atan|sinh|hash|asin|sin|str|cos|tan|pow|exp|min|abs|log|max)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(self|false|true|PI|NAN|INF)\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/gherkin.xml 🔗

@@ -0,0 +1,263 @@
+<lexer>
+  <config>
+    <name>Gherkin</name>
+    <alias>cucumber</alias>
+    <alias>Cucumber</alias>
+    <alias>gherkin</alias>
+    <alias>Gherkin</alias>
+    <filename>*.feature</filename>
+    <filename>*.FEATURE</filename>
+    <mime_type>text/x-gherkin</mime_type>
+  </config>
+  <rules>
+    <state name="comments">
+      <rule pattern="\s*#.*$">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="featureElementsOnStack">

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/gleam.xml 🔗

@@ -0,0 +1,117 @@
+<lexer>
+  <config>
+    <name>Gleam</name>
+    <alias>gleam</alias>
+    <filename>*.gleam</filename>
+    <mime_type>text/x-gleam</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="///(.*?)\n">
+        <token type="LiteralStringDoc"/>
+      </rule>
+      <rule pattern="//(.*?)\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(as|assert|case|opaque|panic|pub|todo)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(import|use)\b">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern="(auto|const|delegate|derive|echo|else|if|implement|macro|test)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(let)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(fn)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(type)\b">
+        <token type="Keyword"/>
+        <push state="typename"/>
+      </rule>
+      <rule pattern="(True|False)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="0[bB][01](_?[01])*">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0[oO][0-7](_?[0-7])*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[xX][\da-fA-F](_?[\dA-Fa-f])*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d(_?\d)*\.\d(_?\d)*([eE][-+]?\d(_?\d)*)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d(_?\d)*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="@([a-z_]\w*[!?]?)">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="[{}()\[\],]|[#(]|\.\.|&lt;&gt;|&lt;&lt;|&gt;&gt;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[+\-*/%!=&lt;&gt;&amp;|.]|&lt;-">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern=":|-&gt;">
+        <token type="Operator"/>
+        <push state="typename"/>
+      </rule>
+      <rule pattern="([a-z_][A-Za-z0-9_]*)(\()">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([A-Z][A-Za-z0-9_]*)(\()">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([a-z_]\w*[!?]?)">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="typename">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="[A-Z][A-Za-z0-9_]*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\[&#34;\\fnrt]|\\u\{[\da-fA-F]{1,6}\}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/glsl.xml 🔗

@@ -0,0 +1,65 @@
+<lexer>
+  <config>
+    <name>GLSL</name>
+    <alias>glsl</alias>
+    <filename>*.vert</filename>
+    <filename>*.frag</filename>
+    <filename>*.geo</filename>
+    <mime_type>text/x-glslsrc</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^#.*">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="//.*">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*](.|\n)*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\+|-|~|!=?|\*|/|%|&lt;&lt;|&gt;&gt;|&lt;=?|&gt;=?|==?|&amp;&amp;?|\^|\|\|?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[?:]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\bdefined\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[;{}(),\[\]]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[+-]?\d*\.\d+([eE][-+]?\d+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[+-]?\d+\.\d*([eE][-+]?\d+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[0-7]*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="[1-9][0-9]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\b(sampler3DsamplerCube|sampler2DShadow|sampler1DShadow|invariant|sampler1D|sampler2D|attribute|mat3mat4|centroid|continue|varying|uniform|discard|mat4x4|mat3x3|mat2x3|mat4x2|mat3x2|mat2x2|mat2x4|mat3x4|struct|return|mat4x3|bvec4|false|ivec4|ivec3|const|float|inout|ivec2|break|while|bvec3|bvec2|vec3|else|true|void|bool|vec2|vec4|mat2|for|out|int|in|do|if)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(sampler2DRectShadow|sampler2DRect|sampler3DRect|namespace|precision|interface|volatile|template|unsigned|external|noinline|mediump|typedef|default|switch|static|extern|inline|sizeof|output|packed|double|public|fvec3|class|union|short|highp|fixed|input|fvec4|hvec2|hvec3|hvec4|dvec2|dvec3|dvec4|fvec2|using|long|this|enum|lowp|cast|goto|half|asm)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\.">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/gnuplot.xml 🔗

@@ -0,0 +1,289 @@
+<lexer>
+  <config>
+    <name>Gnuplot</name>
+    <alias>gnuplot</alias>
+    <filename>*.plot</filename>
+    <filename>*.plt</filename>
+    <mime_type>text/x-gnuplot</mime_type>
+  </config>
+  <rules>
+    <state name="whitespace">
+      <rule pattern="#">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="[ \t\v\f]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="save">
+      <rule pattern="functions\b|function\b|functio\b|functi\b|funct\b|func\b|fun\b|fu\b|f\b|set\b|se\b|s\b|terminal\b|termina\b|termin\b|termi\b|term\b|ter\b|te\b|t\b|variables\b|variable\b|variabl\b|variab\b|varia\b|vari\b|var\b|va\b|v\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule>
+        <include state="genericargs"/>
+      </rule>
+    </state>
+    <state name="pause">
+      <rule pattern="(mouse|any|button1|button2|button3)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="keypress\b|keypres\b|keypre\b|keypr\b|keyp\b|key\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule>
+        <include state="genericargs"/>
+      </rule>
+    </state>
+    <state name="plot">
+      <rule pattern="axes\b|axe\b|ax\b|axis\b|axi\b|binary\b|binar\b|bina\b|bin\b|every\b|ever\b|eve\b|ev\b|index\b|inde\b|ind\b|in\b|i\b|matrix\b|matri\b|matr\b|mat\b|smooth\b|smoot\b|smoo\b|smo\b|sm\b|s\b|thru\b|title\b|titl\b|tit\b|ti\b|t\b|notitle\b|notitl\b|notit\b|noti\b|not\b|using\b|usin\b|usi\b|us\b|u\b|with\b|wit\b|wi\b|w\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule>
+        <include state="genericargs"/>
+      </rule>
+    </state>
+    <state name="if">
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="genericargs"/>
+      </rule>
+    </state>
+    <state name="genericargs">
+      <rule>
+        <include state="noargs"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="dqstring"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <push state="sqstring"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="-?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[,.~!%^&amp;*+=|?:&lt;&gt;/-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[{}()\[\]]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(eq|ne)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*)(\s*)(\()">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="@[a-zA-Z_]\w*">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^\\\n]">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="Comment"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="bind\b|bin\b|bi\b">
+        <token type="Keyword"/>
+        <push state="bind"/>
+      </rule>
+      <rule pattern="exit\b|exi\b|ex\b|quit\b|qui\b|qu\b|q\b">
+        <token type="Keyword"/>
+        <push state="quit"/>
+      </rule>
+      <rule pattern="fit\b|fi\b|f\b">
+        <token type="Keyword"/>
+        <push state="fit"/>
+      </rule>
+      <rule pattern="(if)(\s*)(\()">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="if"/>
+      </rule>
+      <rule pattern="else\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="pause\b|paus\b|pau\b|pa\b">
+        <token type="Keyword"/>
+        <push state="pause"/>
+      </rule>
+      <rule pattern="plot\b|plo\b|pl\b|p\b|replot\b|replo\b|repl\b|rep\b|splot\b|splo\b|spl\b|sp\b">
+        <token type="Keyword"/>
+        <push state="plot"/>
+      </rule>
+      <rule pattern="save\b|sav\b|sa\b">
+        <token type="Keyword"/>
+        <push state="save"/>
+      </rule>
+      <rule pattern="set\b|se\b">
+        <token type="Keyword"/>
+        <push state="genericargs" state="optionarg"/>
+      </rule>
+      <rule pattern="show\b|sho\b|sh\b|unset\b|unse\b|uns\b">
+        <token type="Keyword"/>
+        <push state="noargs" state="optionarg"/>
+      </rule>
+      <rule pattern="lower\b|lowe\b|low\b|raise\b|rais\b|rai\b|ra\b|call\b|cal\b|ca\b|cd\b|clear\b|clea\b|cle\b|cl\b|help\b|hel\b|he\b|h\b|\?\b|history\b|histor\b|histo\b|hist\b|his\b|hi\b|load\b|loa\b|lo\b|l\b|print\b|prin\b|pri\b|pr\b|pwd\b|reread\b|rerea\b|rere\b|rer\b|re\b|reset\b|rese\b|res\b|screendump\b|screendum\b|screendu\b|screend\b|screen\b|scree\b|scre\b|scr\b|shell\b|shel\b|she\b|system\b|syste\b|syst\b|sys\b|sy\b|update\b|updat\b|upda\b|upd\b|up\b">
+        <token type="Keyword"/>
+        <push state="genericargs"/>
+      </rule>
+      <rule pattern="pwd\b|reread\b|rerea\b|rere\b|rer\b|re\b|reset\b|rese\b|res\b|screendump\b|screendum\b|screendu\b|screend\b|screen\b|scree\b|scre\b|scr\b|shell\b|shel\b|she\b|test\b">
+        <token type="Keyword"/>
+        <push state="noargs"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*)(\s*)(=)">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+        <push state="genericargs"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*)(\s*\(.*?\)\s*)(=)">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+        <push state="genericargs"/>
+      </rule>
+      <rule pattern="@[a-zA-Z_]\w*">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="dqstring">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\([\\abfnrtv&#34;\&#39;]|x[a-fA-F0-9]{2,4}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="optionarg">
+      <rule>
+        <include state="whitespace"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/go_template.xml 🔗

@@ -0,0 +1,114 @@
+<lexer>
+  <config>
+    <name>Go Template</name>
+    <alias>go-template</alias>
+    <filename>*.gotmpl</filename>
+    <filename>*.go.tmpl</filename>
+  </config>
+  <rules>
+    <state name="template">
+      <rule pattern="[-]?}}">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?=}})">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Operator"/>
+        <push state="subexpression"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule>
+        <include state="expression"/>
+      </rule>
+    </state>
+    <state name="subexpression">
+      <rule pattern="\)">
+        <token type="Operator"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="expression"/>
+      </rule>
+    </state>
+    <state name="expression">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Operator"/>
+        <push state="subexpression"/>
+      </rule>
+      <rule pattern="(range|if|else|while|with|template|end|true|false|nil|and|call|html|index|js|len|not|or|print|printf|println|urlquery|eq|ne|lt|le|gt|ge|block|break|continue|define|slice)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\||:?=|,">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[$]?[^\W\d]\w*">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="\$|[$]?\.(?:[^\W\d]\w*)?">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="-?\d+i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="-?\d+\.\d*([Ee][-+]\d+)?i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\.\d+([Ee][-+]\d+)?i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="-?\d+[Ee][-+]\d+i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="-?\d+(\.\d+[eE][+\-]?\d+|\.\d*|[eE][+\-]?\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="-?\.\d+([eE][+\-]?\d+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="-?0[0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="-?0[xX][0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="-?0b[01_]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="-?(0|[1-9][0-9]*)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#39;(\\[&#39;&#34;\\abfnrtv]|\\x[0-9a-fA-F]{2}|\\[0-7]{1,3}|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|[^\\])&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="`[^`]*`">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="{{(- )?/\*(.|\n)*?\*/( -)?}}">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="{{[-]?">
+        <token type="CommentPreproc"/>
+        <push state="template"/>
+      </rule>
+      <rule pattern="[^{]+">
+        <token type="Other"/>
+      </rule>
+      <rule pattern="{">
+        <token type="Other"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/graphql.xml 🔗

@@ -0,0 +1,88 @@
+<lexer>
+  <config>
+    <name>GraphQL</name>
+    <alias>graphql</alias>
+    <alias>graphqls</alias>
+    <alias>gql</alias>
+    <filename>*.graphql</filename>
+    <filename>*.graphqls</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(query|mutation|subscription|fragment|scalar|implements|interface|union|enum|input|type)">
+        <token type="KeywordDeclaration"/>
+        <push state="type"/>
+      </rule>
+      <rule pattern="(on|extend|schema|directive|\.\.\.)">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(QUERY|MUTATION|SUBSCRIPTION|FIELD|FRAGMENT_DEFINITION|FRAGMENT_SPREAD|INLINE_FRAGMENT|SCHEMA|SCALAR|OBJECT|FIELD_DEFINITION|ARGUMENT_DEFINITION|INTERFACE|UNION|ENUM|ENUM_VALUE|INPUT_OBJECT|INPUT_FIELD_DEFINITION)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="[^\W\d]\w*">
+        <token type="NameProperty"/>
+      </rule>
+      <rule pattern="\@\w+">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+        <push state="type"/>
+      </rule>
+      <rule pattern="[\(\)\{\}\[\],!\|=]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\$\w+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\d+i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+\.\d*([Ee][-+]\d+)?i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\.\d+([Ee][-+]\d+)?i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+[Ee][-+]\d+i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+(\.\d+[eE][+\-]?\d+|\.\d*|[eE][+\-]?\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\.\d+([eE][+\-]?\d+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(0|[1-9][0-9]*)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;[\x00-\x7F]*?&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;(\\[&#34;\\abfnrtv]|\\x[0-9a-fA-F]{2}|\\[0-7]{1,3}|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|[^\\])&#34;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;(true|false|null)*&#34;">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="[\r\n\s]+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="#[^\r\n]*">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="type">
+      <rule pattern="[^\W\d]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/groff.xml 🔗

@@ -0,0 +1,90 @@
+<lexer>
+  <config>
+    <name>Groff</name>
+    <alias>groff</alias>
+    <alias>nroff</alias>
+    <alias>man</alias>
+    <filename>*.[1-9]</filename>
+    <filename>*.1p</filename>
+    <filename>*.3pm</filename>
+    <filename>*.man</filename>
+    <mime_type>application/x-troff</mime_type>
+    <mime_type>text/troff</mime_type>
+  </config>
+  <rules>
+    <state name="request">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="&#34;[^\n&#34;]+&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\S+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="(\.)(\w+)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+        <push state="request"/>
+      </rule>
+      <rule pattern="\.">
+        <token type="Punctuation"/>
+        <push state="request"/>
+      </rule>
+      <rule pattern="[^\\\n]+">
+        <token type="Text"/>
+        <push state="textline"/>
+      </rule>
+      <rule>
+        <push state="textline"/>
+      </rule>
+    </state>
+    <state name="textline">
+      <rule>
+        <include state="escapes"/>
+      </rule>
+      <rule pattern="[^\\\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="escapes">
+      <rule pattern="\\&#34;[^\n]*">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\\[fn]\w">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\\(.{2}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\.\[.*\]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+        <push state="request"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/groovy.xml 🔗

@@ -0,0 +1,135 @@
+<lexer>
+  <config>
+    <name>Groovy</name>
+    <alias>groovy</alias>
+    <filename>*.groovy</filename>
+    <filename>*.gradle</filename>
+    <mime_type>text/x-groovy</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="#!(.*?)$">
+        <token type="CommentPreproc"/>
+        <push state="base"/>
+      </rule>
+      <rule>
+        <push state="base"/>
+      </rule>
+    </state>
+    <state name="base">
+      <rule pattern="^(\s*(?:[a-zA-Z_][\w.\[\]]*\s+)+?)([a-zA-Z_]\w*)(\s*)(\()">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="@[a-zA-Z_][\w.]*">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="(as|assert|break|case|catch|continue|default|do|else|finally|for|if|in|goto|instanceof|new|return|switch|this|throw|try|while|in|as)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(abstract|const|extends|final|implements|native|private|protected|public|static|strictfp|super|synchronized|throws|transient|volatile)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(def|var|boolean|byte|char|double|float|int|long|short|void)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(package)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(class|interface|enum|trait|record)(\s+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="class"/>
+      </rule>
+      <rule pattern="(import)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="import"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;.*?&#34;&#34;&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;&#39;&#39;.*?&#39;&#39;&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="\$/((?!/\$).)*/\$">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="/(\\\\|\\&#34;|[^/])*/">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;\\.&#39;|&#39;[^\\]&#39;|&#39;\\u[0-9a-fA-F]{4}&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="(\.)([a-zA-Z_]\w*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameAttribute"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*:">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="[a-zA-Z_$]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[~^*!%&amp;\[\](){}&lt;&gt;|+=:;,./?-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+L?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="class">
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="import">
+      <rule pattern="[\w.]+\*?">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/handlebars.xml 🔗

@@ -0,0 +1,147 @@
+<lexer>
+  <config>
+    <name>Handlebars</name>
+    <alias>handlebars</alias>
+    <alias>hbs</alias>
+    <filename>*.handlebars</filename>
+    <filename>*.hbs</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="[^{]+">
+        <token type="Other"/>
+      </rule>
+      <rule pattern="\{\{!.*\}\}">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(\{\{\{)(\s*)">
+        <bygroups>
+          <token type="CommentSpecial"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="tag"/>
+      </rule>
+      <rule pattern="(\{\{)(\s*)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="tag"/>
+      </rule>
+    </state>
+    <state name="tag">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\}\}\}">
+        <token type="CommentSpecial"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\}\}">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([#/]*)(each|if|unless|else|with|log|in(?:line)?)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="#\*inline">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="([#/])([\w-]+)">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([\w-]+)(=)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&gt;)(\s*)(@partial-block)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(#?&gt;)(\s*)([\w-]+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameVariable"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&gt;)(\s*)(\()">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="dynamic-partial"/>
+      </rule>
+      <rule>
+        <include state="generic"/>
+      </rule>
+    </state>
+    <state name="dynamic-partial">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(lookup)(\s+)(\.|this)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameVariable"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(lookup)(\s+)(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <usingself state="variable"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[\w-]+">
+        <token type="NameFunction"/>
+      </rule>
+      <rule>
+        <include state="generic"/>
+      </rule>
+    </state>
+    <state name="variable">
+      <rule pattern="[a-zA-Z][\w-]*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\.[\w-]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(this\/|\.\/|(\.\.\/)+)[\w-]+">
+        <token type="NameVariable"/>
+      </rule>
+    </state>
+    <state name="generic">
+      <rule>
+        <include state="variable"/>
+      </rule>
+      <rule pattern=":?&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern=":?&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|0[xX][0-9a-fA-F]+[Ll]?">
+        <token type="LiteralNumber"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/hare.xml 🔗

@@ -0,0 +1,98 @@
+<lexer>
+  <config>
+    <name>Hare</name>
+    <alias>hare</alias>
+    <filename>*.ha</filename>
+    <mime_type>text/x-hare</mime_type>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="&quot;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\([\\0abfnrtv&quot;']|x[a-fA-F0-9]{2}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&quot;\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[\s\n]+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="@[a-z]+">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="//.*\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="&quot;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="`[^`]*`">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="'(\\[\\0abfnrtv&quot;']||\\(x[a-fA-F0-9]{2}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8})|[^\\'])'">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="(0|[1-9]\d*)\.\d+([eE][+-]?\d+)?(f32|f64)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(0|[1-9]\d*)([eE][+-]?\d+)?(f32|f64)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+\.[0-9a-fA-F]+([pP][+-]?\d+(f32|f64)?)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+[pP][+-]?\d+(f32|f64)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+(z|[iu](8|16|32|64)?)?">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0o[0-7]+(z|[iu](8|16|32|64)?)?">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0b[01]+(z|[iu](8|16|32|64)?)?">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="(0|[1-9]\d*)([eE][+-]?\d+)?(z|[iu](8|16|32|64)?)?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]|[ai]s\b|\.\.\.">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[()\[\],.{};]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="use\b">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern="(_|align|break|const|continue|else|enum|export|for|if|return|static|struct|offset|union|fn|free|assert|abort|alloc|let|len|def|type|match|switch|case|append|delete|insert|defer|yield|vastart|vaarg|vaend)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(size)([\s\n]*)(\()">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="TextWhitespace" />
+          <token type="Punctuation" />
+        </bygroups>
+      </rule>
+      <rule pattern="(str|size|rune|bool|int|uint|uintptr|u8|u16|u32|u64|i8|i16|i32|i64|f32|f64|null|void|done|nullable|valist|opaque|never)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(true|false)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/haskell.xml 🔗

@@ -0,0 +1,275 @@
+<lexer>
+  <config>
+    <name>Haskell</name>
+    <alias>haskell</alias>
+    <alias>hs</alias>
+    <filename>*.hs</filename>
+    <mime_type>text/x-haskell</mime_type>
+  </config>
+  <rules>
+    <state name="escape">
+      <rule pattern="[abfnrtv&#34;\&#39;&amp;\\]">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\^[][\p{Lu}@^_]">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="NUL|SOH|[SE]TX|EOT|ENQ|ACK|BEL|BS|HT|LF|VT|FF|CR|S[OI]|DLE|DC[1-4]|NAK|SYN|ETB|CAN|EM|SUB|ESC|[FGRU]S|SP|DEL">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="o[0-7]+">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="x[\da-fA-F]+">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\s+\\">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="--(?![!#$%&amp;*+./&lt;=&gt;?@^|_~:\\]).*?$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\{-">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="\bimport\b">
+        <token type="KeywordReserved"/>
+        <push state="import"/>
+      </rule>
+      <rule pattern="\bmodule\b">
+        <token type="KeywordReserved"/>
+        <push state="module"/>
+      </rule>
+      <rule pattern="\berror\b">
+        <token type="NameException"/>
+      </rule>
+      <rule pattern="\b(case|class|data|default|deriving|do|else|family|if|in|infix[lr]?|instance|let|newtype|of|then|type|where|_)(?!\&#39;)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="&#39;[^\\]&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="^[_\p{Ll}][\w\&#39;]*">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="&#39;?[_\p{Ll}][\w&#39;]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="(&#39;&#39;)?[\p{Lu}][\w\&#39;]*">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(&#39;)[\p{Lu}][\w\&#39;]*">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(&#39;)\[[^\]]*\]">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(&#39;)\([^)]*\)">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\\(?![:!#$%&amp;*+.\\/&lt;=&gt;?@^|~-]+)">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="(&lt;-|::|-&gt;|=&gt;|=|'([:!#$%&amp;*+.\\/&lt;=&gt;?@^|~-]+))(?![:!#$%&amp;*+.\\/&lt;=&gt;?@^|~-]+)">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern=":[:!#$%&amp;*+.\\/&lt;=&gt;?@^|~-]*">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="[:!#$%&amp;*+.\\/&lt;=&gt;?@^|~-]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\d+_*[eE][+-]?\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+(_+[\d]+)*\.\d+(_+[\d]+)*([eE][+-]?\d+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[oO](_*[0-7])+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[xX](_*[\da-fA-F])+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[bB](_*[01])+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="\d+(_*[\d])*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringChar"/>
+        <push state="character"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="\[\]">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\(\)">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[][(),;`{}]">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="import">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="qualified\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="([\p{Lu}][\w.]*)(\s+)(as)(\s+)([\p{Lu}][\w.]*)">
+        <bygroups>
+          <token type="NameNamespace"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Name"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([\p{Lu}][\w.]*)(\s+)(hiding)(\s+)(\()">
+        <bygroups>
+          <token type="NameNamespace"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="funclist"/>
+      </rule>
+      <rule pattern="([\p{Lu}][\w.]*)(\s+)(\()">
+        <bygroups>
+          <token type="NameNamespace"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="funclist"/>
+      </rule>
+      <rule pattern="[\w.]+">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="module">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="([\p{Lu}][\w.]*)(\s+)(\()">
+        <bygroups>
+          <token type="NameNamespace"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="funclist"/>
+      </rule>
+      <rule pattern="[\p{Lu}][\w.]*">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="funclist">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[\p{Lu}]\w*">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(_[\w\&#39;]+|[\p{Ll}][\w\&#39;]*)">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="--(?![!#$%&amp;*+./&lt;=&gt;?@^|_~:\\]).*?$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\{-">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[:!#$%&amp;*+.\\/&lt;=&gt;?@^|~-]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="funclist" state="funclist"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="2"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^-{}]+">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\{-">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="-\}">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[-{}]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="character">
+      <rule pattern="[^\\&#39;]&#39;">
+        <token type="LiteralStringChar"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralStringEscape"/>
+        <push state="escape"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringChar"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralStringEscape"/>
+        <push state="escape"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/hcl.xml 🔗

@@ -0,0 +1,143 @@
+<lexer>
+  <config>
+    <name>HCL</name>
+    <alias>hcl</alias>
+    <filename>*.hcl</filename>
+    <mime_type>application/x-hcl</mime_type>
+  </config>
+  <rules>
+    <state name="punctuation">
+      <rule pattern="[\[\](),.]">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="(&#34;.*&#34;)">
+        <bygroups>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="string"/>
+      </rule>
+      <rule>
+        <include state="punctuation"/>
+      </rule>
+      <rule>
+        <include state="curly"/>
+      </rule>
+      <rule>
+        <include state="basic"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumber"/>
+      </rule>
+    </state>
+    <state name="basic">
+      <rule pattern="\b(false|true)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\s*/\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="\s*#.*\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(.*?)(\s*)(=)">
+        <bygroups>
+          <token type="Name"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\b\w+\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\$\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="var_builtin"/>
+      </rule>
+    </state>
+    <state name="curly">
+      <rule pattern="\{">
+        <token type="TextPunctuation"/>
+      </rule>
+      <rule pattern="\}">
+        <token type="TextPunctuation"/>
+      </rule>
+    </state>
+    <state name="function">
+      <rule pattern="(\s+)(&#34;.*&#34;)(\s+)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="punctuation"/>
+      </rule>
+      <rule>
+        <include state="curly"/>
+      </rule>
+    </state>
+    <state name="var_builtin">
+      <rule pattern="\$\{">
+        <token type="LiteralStringInterpol"/>
+        <push/>
+      </rule>
+      <rule pattern="\b(element|concat|lookup|file|join)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule>
+        <include state="string"/>
+      </rule>
+      <rule>
+        <include state="punctuation"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^*/]">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[*/]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/hexdump.xml 🔗

@@ -0,0 +1,189 @@
+<lexer>
+  <config>
+    <name>Hexdump</name>
+    <alias>hexdump</alias>
+  </config>
+  <rules>
+    <state name="offset">
+      <rule pattern="^([0-9A-Ha-h]+)(:)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="offset-mode"/>
+      </rule>
+      <rule pattern="^[0-9A-Ha-h]+">
+        <token type="NameLabel"/>
+      </rule>
+    </state>
+    <state name="offset-mode">
+      <rule pattern="\s">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[0-9A-Ha-h]+">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="piped-strings">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule>
+        <include state="offset"/>
+      </rule>
+      <rule pattern="[0-9A-Ha-h]{2}">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="(\s{2,3})(\|)(.{1,16})(\|)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\s">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="^\*">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="bracket-strings">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule>
+        <include state="offset"/>
+      </rule>
+      <rule pattern="[0-9A-Ha-h]{2}">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="(\s{2,3})(\&gt;)(.{1,16})(\&lt;)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\s">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="^\*">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="nonpiped-strings">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule>
+        <include state="offset"/>
+      </rule>
+      <rule pattern="([0-9A-Ha-h]{2})(\-)([0-9A-Ha-h]{2})">
+        <bygroups>
+          <token type="LiteralNumberHex"/>
+          <token type="Punctuation"/>
+          <token type="LiteralNumberHex"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[0-9A-Ha-h]{2}">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="(\s{19,})(.{1,20}?)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s{2,3})(.{1,20})$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\s">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="^\*">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule>
+        <include state="offset"/>
+      </rule>
+      <rule pattern="([0-9A-Ha-h]{2})(\-)([0-9A-Ha-h]{2})">
+        <bygroups>
+          <token type="LiteralNumberHex"/>
+          <token type="Punctuation"/>
+          <token type="LiteralNumberHex"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[0-9A-Ha-h]{2}">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="(\s{2,3})(\&gt;)(.{16})(\&lt;)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="bracket-strings"/>
+      </rule>
+      <rule pattern="(\s{2,3})(\|)(.{16})(\|)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="piped-strings"/>
+      </rule>
+      <rule pattern="(\s{2,3})(\&gt;)(.{1,15})(\&lt;)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s{2,3})(\|)(.{1,15})(\|)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s{2,3})(.{1,15})$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s{2,3})(.{16}|.{20})$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+        </bygroups>
+        <push state="nonpiped-strings"/>
+      </rule>
+      <rule pattern="\s">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="^\*">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/hlb.xml 🔗

@@ -0,0 +1,149 @@
+<lexer>
+  <config>
+    <name>HLB</name>
+    <alias>hlb</alias>
+    <filename>*.hlb</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(#.*)">
+        <bygroups>
+          <token type="CommentSingle"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((\b(0(b|B|o|O|x|X)[a-fA-F0-9]+)\b)|(\b(0|[1-9][0-9]*)\b))">
+        <bygroups>
+          <token type="LiteralNumber"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((\b(true|false)\b))">
+        <bygroups>
+          <token type="NameBuiltin"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\bstring\b|\bint\b|\bbool\b|\bfs\b|\boption\b)">
+        <bygroups>
+          <token type="KeywordType"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\b[a-zA-Z_][a-zA-Z0-9]*\b)(\()">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="params"/>
+      </rule>
+      <rule pattern="(\{)">
+        <bygroups>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="block"/>
+      </rule>
+      <rule pattern="(\n|\r|\r\n)">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="block">
+      <rule pattern="(\})">
+        <bygroups>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(#.*)">
+        <bygroups>
+          <token type="CommentSingle"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((\b(0(b|B|o|O|x|X)[a-fA-F0-9]+)\b)|(\b(0|[1-9][0-9]*)\b))">
+        <bygroups>
+          <token type="LiteralNumber"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((\b(true|false)\b))">
+        <bygroups>
+          <token type="KeywordConstant"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(with)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(as)([\t ]+)(\b[a-zA-Z_][a-zA-Z0-9]*\b)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\bstring\b|\bint\b|\bbool\b|\bfs\b|\boption\b)([\t ]+)(\{)">
+        <bygroups>
+          <token type="KeywordType"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="block"/>
+      </rule>
+      <rule pattern="(?!\b(?:scratch|image|resolve|http|checksum|chmod|filename|git|keepGitDir|local|includePatterns|excludePatterns|followPaths|generate|frontendInput|shell|run|readonlyRootfs|env|dir|user|network|security|host|ssh|secret|mount|target|localPath|uid|gid|mode|readonly|tmpfs|sourcePath|cache|mkdir|createParents|chown|createdTime|mkfile|rm|allowNotFound|allowWildcards|copy|followSymlinks|contentsOnly|unpack|createDestPath)\b)(\b[a-zA-Z_][a-zA-Z0-9]*\b)">
+        <bygroups>
+          <token type="NameOther"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\n|\r|\r\n)">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="params">
+      <rule pattern="(\))">
+        <bygroups>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(variadic)">
+        <bygroups>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\bstring\b|\bint\b|\bbool\b|\bfs\b|\boption\b)">
+        <bygroups>
+          <token type="KeywordType"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\b[a-zA-Z_][a-zA-Z0-9]*\b)">
+        <bygroups>
+          <token type="NameOther"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\n|\r|\r\n)">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/hlsl.xml 🔗

@@ -0,0 +1,110 @@
+<lexer>
+  <config>
+    <name>HLSL</name>
+    <alias>hlsl</alias>
+    <filename>*.hlsl</filename>
+    <filename>*.hlsli</filename>
+    <filename>*.cginc</filename>
+    <filename>*.fx</filename>
+    <filename>*.fxh</filename>
+    <mime_type>text/x-hlsl</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^#.*$">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="//.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*](.|\n)*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\+|-|~|!=?|\*|/|%|&lt;&lt;|&gt;&gt;|&lt;=?|&gt;=?|==?|&amp;&amp;?|\^|\|\|?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[?:]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\bdefined\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[;{}(),.\[\]]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[+-]?\d*\.\d+([eE][-+]?\d+)?f?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[+-]?\d+\.\d*([eE][-+]?\d+)?f?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[0-7]*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="[1-9][0-9]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&quot;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="\b(asm|asm_fragment|break|case|cbuffer|centroid|class|column_major|compile|compile_fragment|const|continue|default|discard|do|else|export|extern|for|fxgroup|globallycoherent|groupshared|if|in|inline|inout|interface|line|lineadj|linear|namespace|nointerpolation|noperspective|NULL|out|packoffset|pass|pixelfragment|point|precise|return|register|row_major|sample|sampler|shared|stateblock|stateblock_state|static|struct|switch|tbuffer|technique|technique10|technique11|texture|typedef|triangle|triangleadj|uniform|vertexfragment|volatile|while)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(true|false)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="\b(auto|catch|char|const_cast|delete|dynamic_cast|enum|explicit|friend|goto|long|mutable|new|operator|private|protected|public|reinterpret_cast|short|signed|sizeof|static_cast|template|this|throw|try|typename|union|unsigned|using|virtual)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="\b(dword|matrix|snorm|string|unorm|unsigned|void|vector|BlendState|Buffer|ByteAddressBuffer|ComputeShader|DepthStencilState|DepthStencilView|DomainShader|GeometryShader|HullShader|InputPatch|LineStream|OutputPatch|PixelShader|PointStream|RasterizerState|RenderTargetView|RasterizerOrderedBuffer|RasterizerOrderedByteAddressBuffer|RasterizerOrderedStructuredBuffer|RasterizerOrderedTexture1D|RasterizerOrderedTexture1DArray|RasterizerOrderedTexture2D|RasterizerOrderedTexture2DArray|RasterizerOrderedTexture3D|RWBuffer|RWByteAddressBuffer|RWStructuredBuffer|RWTexture1D|RWTexture1DArray|RWTexture2D|RWTexture2DArray|RWTexture3D|SamplerState|SamplerComparisonState|StructuredBuffer|Texture1D|Texture1DArray|Texture2D|Texture2DArray|Texture2DMS|Texture2DMSArray|Texture3D|TextureCube|TextureCubeArray|TriangleStream|VertexShader)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\b(bool|double|float|int|half|min16float|min10float|min16int|min12int|min16uint|uint)([1-4](x[1-4])?)?\b">
+        <token type="KeywordType"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/holyc.xml 🔗

@@ -0,0 +1,252 @@
+<lexer>
+  <config>
+    <name>HolyC</name>
+    <alias>holyc</alias>
+    <filename>*.HC</filename>
+    <filename>*.hc</filename>
+    <filename>*.HH</filename>
+    <filename>*.hh</filename>
+    <filename>*.hc.z</filename>
+    <filename>*.HC.Z</filename>
+    <mime_type>text/x-chdr</mime_type>
+    <mime_type>text/x-csrc</mime_type>
+    <mime_type>image/x-xbitmap</mime_type>
+    <mime_type>image/x-xpixmap</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="statement">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule pattern="[{}]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="function">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\([\\abfnrtv&#34;\&#39;]|x[a-fA-F0-9]{2,4}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="macro">
+      <rule pattern="(include)(\s*(?:/[*].*?[*]/\s*)?)([^\n]+)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="CommentPreprocFile"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^/\n]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="/[*](.|\n)*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="/">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(?&lt;=\\)\n">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="if0">
+      <rule pattern="^\s*#if.*?(?&lt;!\\)\n">
+        <token type="CommentPreproc"/>
+        <push/>
+      </rule>
+      <rule pattern="^\s*#el(?:se|if).*\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="^\s*#endif.*?(?&lt;!\\)\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".*?\n">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="^#if\s+0">
+        <token type="CommentPreproc"/>
+        <push state="if0"/>
+      </rule>
+      <rule pattern="^#">
+        <token type="CommentPreproc"/>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#if\s+0)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <push state="if0"/>
+      </rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//(\n|[\w\W]*?[^\\]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="statements">
+      <rule pattern="(L?)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralString"/>
+        </bygroups>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(L?)(&#39;)(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\&#39;\n])(&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringChar"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[LlUu]*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+[fF])[fF]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+[LlUu]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[0-7]+[LlUu]*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="\d+[LlUu]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="Error"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[()\[\],.]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(break|case|continue|default|do|else|for|goto|if|return|switch|while|throw|try|catch|extern|MOV|CALL|PUSH|LEAVE|RET|SUB|SHR|ADD|RETF|CMP|JNE|BTS|INT|XOR|JC|JZ|LOOP|POP|TEST|SHL|ADC|SBB|JMP|INC)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(U0|I8|U8|I16|U16|I32|U32|I64|U64|F64|Bool|class|union|DU8|DU16|DU32|DU64|RAX|RCX|RDX|RBX|RSP|RBP|RSI|RDI|EAX|ECX|EDX|EBX|ESP|EBP|ESI|EDI|AX|CX|DX|BX|SP|BP|SI|DI|SS|CS|DS|ES|FS|GS|CH|asm|const|extern|register|restrict|static|volatile|inline|_extern|_import|IMPORT|public)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="__()\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(NULL|TRUE|FALSE|ON|OFF)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*)(\s*)(:)(?!:)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b[A-Za-z_]\w*(?=\s*\()">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="((?:[\w*\s])+?(?:\s|[*]))([a-zA-Z_]\w*)(\s*\([^;]*?\))([^;{]*)(\{)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <usingself state="root"/>
+          <usingself state="root"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="function"/>
+      </rule>
+      <rule pattern="((?:[\w*\s])+?(?:\s|[*]))([a-zA-Z_]\w*)(\s*\([^;]*?\))([^;]*)(;)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <usingself state="root"/>
+          <usingself state="root"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <push state="statement"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/html.xml 🔗

@@ -0,0 +1,159 @@
+<lexer>
+  <config>
+    <name>HTML</name>
+    <alias>html</alias>
+    <filename>*.html</filename>
+    <filename>*.htm</filename>
+    <filename>*.xhtml</filename>
+    <filename>*.xslt</filename>
+    <mime_type>text/html</mime_type>
+    <mime_type>application/xhtml+xml</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <dot_all>true</dot_all>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="script-content">
+      <rule pattern="(&lt;)(\s*)(/)(\s*)(script)(\s*)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameTag"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".+?(?=&lt;\s*/\s*script\s*&gt;)">
+        <using lexer="Javascript"/>
+      </rule>
+    </state>
+    <state name="style-content">
+      <rule pattern="(&lt;)(\s*)(/)(\s*)(style)(\s*)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameTag"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".+?(?=&lt;\s*/\s*style\s*&gt;)">
+        <using lexer="CSS"/>
+      </rule>
+    </state>
+    <state name="attr">
+      <rule pattern="&#34;.*?&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="&#39;.*?&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\s&gt;]+">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[^&lt;&amp;]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&amp;\S*?;">
+        <token type="NameEntity"/>
+      </rule>
+      <rule pattern="\&lt;\!\[CDATA\[.*?\]\]\&gt;">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="&lt;!--">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="&lt;\?.*?\?&gt;">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="&lt;![^&gt;]*&gt;">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(&lt;)(\s*)(script)(\s*)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameTag"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="script-content" state="tag"/>
+      </rule>
+      <rule pattern="(&lt;)(\s*)(style)(\s*)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameTag"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="style-content" state="tag"/>
+      </rule>
+      <rule pattern="(&lt;)(\s*)([\w:.-]+)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameTag"/>
+        </bygroups>
+        <push state="tag"/>
+      </rule>
+      <rule pattern="(&lt;)(\s*)(/)(\s*)([\w:.-]+)(\s*)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameTag"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^-]+">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="--&gt;">
+        <token type="Comment"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="-">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="tag">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="([\w:-]+\s*)(=)(\s*)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="attr"/>
+      </rule>
+      <rule pattern="[\w:-]+">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="(/?)(\s*)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/hy.xml

@@ -0,0 +1,104 @@
+<lexer>
+  <config>
+    <name>Hy</name>
+    <alias>hylang</alias>
+    <filename>*.hy</filename>
+    <mime_type>text/x-hy</mime_type>
+    <mime_type>application/x-hy</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern=";.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="[,\s]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="-?\d+\.\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="-?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="0[0-7]+j?">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[xX][a-fA-F0-9]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;(?!#)[\w!$%*+&lt;=&gt;?/.#-]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="\\(.|[a-z]+)">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="^(\s*)([rRuU]{,2}&#34;&#34;&#34;(?:.|\n)*?&#34;&#34;&#34;)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringDoc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)([rRuU]{,2}&#39;&#39;&#39;(?:.|\n)*?&#39;&#39;&#39;)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringDoc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="::?(?!#)[\w!$%*+&lt;=&gt;?/.#-]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="~@|[`\&#39;#^~&amp;@]">
+        <token type="Operator"/>
+      </rule>
+      <rule>
+        <include state="py-keywords"/>
+      </rule>
+      <rule>
+        <include state="py-builtins"/>
+      </rule>
+      <rule pattern="(eval-when-compile|eval-and-compile|with-decorator|unquote-splice|quasiquote|list_comp|unquote|foreach|kwapply|import|not-in|unless|is-not|quote|progn|slice|assoc|first|while|when|rest|cond|&lt;&lt;=|-&gt;&gt;|for|get|&gt;&gt;=|let|cdr|car|is|-&gt;|do|in|\||~|,) ">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(defmacro|defclass|lambda|defun|defn|setv|def|fn) ">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(repeatedly|take_while|iterator\?|iterable\?|instance\?|distinct|take_nth|numeric\?|iterate|filter|repeat|remove|even\?|none\?|cycle|zero\?|odd\?|pos\?|neg\?|take|drop|inc|dec|nth) ">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?&lt;=\()(?!#)[\w!$%*+&lt;=&gt;?/.#-]+">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="(?!#)[\w!$%*+&lt;=&gt;?/.#-]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(\[|\])">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(\{|\})">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(\(|\))">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="py-keywords">
+      <rule pattern="(yield from|continue|finally|lambda|assert|global|except|return|print|yield|while|break|raise|elif|pass|exec|else|with|try|for|del|as|if)\b">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="py-builtins">
+      <rule pattern="(?&lt;!\.)(staticmethod|classmethod|__import__|isinstance|basestring|issubclass|frozenset|raw_input|bytearray|enumerate|property|callable|reversed|execfile|hasattr|setattr|compile|complex|delattr|unicode|globals|getattr|unichr|reduce|xrange|buffer|intern|filter|locals|divmod|coerce|sorted|reload|object|slice|round|float|super|input|bytes|apply|tuple|range|iter|dict|long|type|hash|vars|next|file|exit|open|repr|eval|bool|list|bin|pow|zip|ord|oct|min|set|any|max|map|all|len|sum|int|dir|hex|chr|abs|cmp|str|id)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(self|None|Ellipsis|NotImplemented|False|True|cls)\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(PendingDeprecationWarning|UnicodeTranslateError|NotImplementedError|UnicodeEncodeError|UnicodeDecodeError|DeprecationWarning|FloatingPointError|UnboundLocalError|KeyboardInterrupt|ZeroDivisionError|EnvironmentError|IndentationError|ArithmeticError|OverflowWarning|ReferenceError|RuntimeWarning|AttributeError|AssertionError|NotImplemented|UnicodeWarning|FutureWarning|BaseException|StopIteration|SyntaxWarning|OverflowError|StandardError|ImportWarning|GeneratorExit|RuntimeError|WindowsError|UnicodeError|LookupError|SyntaxError|SystemError|ImportError|MemoryError|UserWarning|ValueError|IndexError|SystemExit|Exception|TypeError|NameError|EOFError|VMSError|KeyError|TabError|IOError|OSError|Warning)\b">
+        <token type="NameException"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/idris.xml

@@ -0,0 +1,216 @@
+<lexer>
+  <config>
+    <name>Idris</name>
+    <alias>idris</alias>
+    <alias>idr</alias>
+    <filename>*.idr</filename>
+    <mime_type>text/x-idris</mime_type>
+  </config>
+  <rules>
+    <state name="escape">
+      <rule pattern="[abfnrtv&#34;\&#39;&amp;\\]">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\^[][A-Z@^_]">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="NUL|SOH|[SE]TX|EOT|ENQ|ACK|BEL|BS|HT|LF|VT|FF|CR|S[OI]|DLE|DC[1-4]|NAK|SYN|ETB|CAN|EM|SUB|ESC|[FGRU]S|SP|DEL">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="o[0-7]+">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="x[\da-fA-F]+">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\s+\\">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="^(\s*)(%lib|link|flag|include|hide|freeze|access|default|logging|dynamic|name|error_handlers|language)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="KeywordReserved"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s*)(--(?![!#$%&amp;*+./&lt;=&gt;?@^|_~:\\]).*?)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="CommentSingle"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s*)(\|{3}.*?)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="CommentSingle"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s*)(\{-)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="CommentMultiline"/>
+        </bygroups>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="^(\s*)([^\s(){}]+)(\s*)(:)(\s*)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="OperatorWord"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(case|class|data|default|using|do|else|if|in|infix[lr]?|instance|rewrite|auto|namespace|codata|mutual|private|public|abstract|total|partial|let|proof|of|then|static|where|_|with|pattern|term|syntax|prefix|postulate|parameters|record|dsl|impossible|implicit|tactics|intros|intro|compute|refine|exact|trivial)(?!\&#39;)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(import|module)(\s+)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="module"/>
+      </rule>
+      <rule pattern="(&#39;&#39;)?[A-Z][\w\&#39;]*">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="[a-z][\w\&#39;]*">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(&lt;-|::|-&gt;|=&gt;|=)">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="([(){}\[\]:!#$%&amp;*+.\\/&lt;=&gt;?@^|~-]+)">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="\d+[eE][+-]?\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+\.\d+([eE][+-]?\d+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[xX][\da-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringChar"/>
+        <push state="character"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="[^\s(){}]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+?">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="module">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="([A-Z][\w.]*)(\s+)(\()">
+        <bygroups>
+          <token type="NameNamespace"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="funclist"/>
+      </rule>
+      <rule pattern="[A-Z][\w.]*">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="funclist">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[A-Z]\w*">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(_[\w\&#39;]+|[a-z][\w\&#39;]*)">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="--.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\{-">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[:!#$%&amp;*+.\\/&lt;=&gt;?@^|~-]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="funclist" state="funclist"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="2"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^-{}]+">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\{-">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="-\}">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[-{}]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="character">
+      <rule pattern="[^\\&#39;]">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralStringEscape"/>
+        <push state="escape"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringChar"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralStringEscape"/>
+        <push state="escape"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/igor.xml

@@ -0,0 +1,47 @@
+<lexer>
+  <config>
+    <name>Igor</name>
+    <alias>igor</alias>
+    <alias>igorpro</alias>
+    <filename>*.ipf</filename>
+    <mime_type>text/ipf</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="//.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="&#34;([^&#34;\\]|\\.)*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\b(AbortOnValue|AbortOnRTE|strswitch|endswitch|continue|default|endfor|endtry|switch|return|elseif|while|catch|endif|break|else|case|for|try|do|if)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(strconstant|constant|variable|funcref|string|uint64|uint32|uint16|STRUCT|double|dfref|uchar|int16|int32|int64|float|WAVE|SVAR|NVAR|char)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\b(EndStructure|MultiThread|ThreadSafe|Structure|EndMacro|function|DoPrompt|override|Picture|SubMenu|window|Prompt|static|macro|Proc|Menu|end)\b">
+        <token type="KeywordReserved"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ini.xml

@@ -0,0 +1,45 @@
+<lexer>
+  <config>
+    <name>INI</name>
+    <alias>ini</alias>
+    <alias>cfg</alias>
+    <alias>dosini</alias>
+    <filename>*.ini</filename>
+    <filename>*.cfg</filename>
+    <filename>*.inf</filename>
+    <filename>*.service</filename>
+    <filename>*.socket</filename>
+    <filename>.gitconfig</filename>
+    <filename>.editorconfig</filename>
+    <filename>pylintrc</filename>
+    <filename>.pylintrc</filename>
+    <mime_type>text/x-ini</mime_type>
+    <mime_type>text/inf</mime_type>
+    <priority>0.1</priority> <!-- higher priority than Inform 6 -->
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[;#].*">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\[.*?\]$">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(.*?)([ \t]*)(=)([ \t]*)(.*(?:\n[ \t].+)*)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(.+?)$">
+        <token type="NameAttribute"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/io.xml

@@ -0,0 +1,71 @@
+<lexer>
+  <config>
+    <name>Io</name>
+    <alias>io</alias>
+    <filename>*.io</filename>
+    <mime_type>text/x-iosrc</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//(.*?)\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="#(.*?)\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*](.|\n)*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/\+">
+        <token type="CommentMultiline"/>
+        <push state="nestedcomment"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="::=|:=|=|\(|\)|;|,|\*|-|\+|&gt;|&lt;|@|!|/|\||\^|\.|%|&amp;|\[|\]|\{|\}">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(clone|do|doFile|doString|method|for|if|else|elseif|then)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(nil|false|true)\b">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="(Object|list|List|Map|args|Sequence|Coroutine|File)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="(\d+\.?\d*|\d*\.\d+)([eE][+-]?[0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="nestedcomment">
+      <rule pattern="[^+/]+">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/\+">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="\+/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[+/]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/iscdhcpd.xml

@@ -0,0 +1,96 @@
+<lexer>
+  <config>
+    <name>ISCdhcpd</name>
+    <alias>iscdhcpd</alias>
+    <filename>dhcpd.conf</filename>
+  </config>
+  <rules>
+    <state name="interpol">
+      <rule pattern="\$[{(]">
+        <token type="LiteralStringInterpol"/>
+        <push/>
+      </rule>
+      <rule pattern="[})]">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^${()}]+">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="#.*?\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(hardware|packet|leased-address|host-decl-name|lease-time|max-lease-time|client-state|config-option|option|filename|next-server|allow|deny|match|ignore)\b">
+         <token type="Keyword"/>
+      </rule>
+      <rule pattern="(include|group|host|subnet|subnet6|netmask|class|subclass|pool|failover|include|shared-network|range|range6|prefix6)\b">
+         <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(on|off|true|false|none)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(if|elsif|else)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(exists|known|static)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(and|or|not)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="(==|!=|~=|~~|=)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[{},;\)]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\/\d{1,2}">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[a-fA-F0-9]{1,2}:[a-fA-F0-9]{1,2}:[a-fA-F0-9]{1,2}:[a-fA-F0-9]{1,2}:[a-fA-F0-9]{1,2}:[a-fA-F0-9]{1,2}">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="doublequotestring"/>
+      </rule>
+      <rule pattern="([\w\-.]+)(\s*)(\()">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[\w\-.]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="doublequotestring">
+      <rule pattern="\$[{(]">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpol"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern=".">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/j.xml

@@ -0,0 +1,157 @@
+<lexer>
+  <config>
+    <name>J</name>
+    <alias>j</alias>
+    <filename>*.ijs</filename>
+    <mime_type>text/x-j</mime_type>
+  </config>
+  <rules>
+    <state name="singlequote">
+      <rule pattern="[^&#39;]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="#!.*$">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="NB\..*">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\n+\s*Note">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="\s*Note.*">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <push state="singlequote"/>
+      </rule>
+      <rule pattern="0\s+:\s*0|noun\s+define\s*$">
+        <token type="NameEntity"/>
+        <push state="nounDefinition"/>
+      </rule>
+      <rule pattern="(([1-4]|13)\s+:\s*0|(adverb|conjunction|dyad|monad|verb)\s+define)\b">
+        <token type="NameFunction"/>
+        <push state="explicitDefinition"/>
+      </rule>
+      <rule pattern="(label_|goto_|for_)\b[a-zA-Z]\w*\.">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="(continue|select|return|assert|catchd|catcht|elseif|whilst|break|catch|fcase|while|throw|else|case|end|try|for|do|if)\.">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="\b[a-zA-Z]\w*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(timespacex|fixdotdot|nameclass|namelist|file2url|tmoutput|ucpcount|boxxopen|smoutput|JVERSION|datatype|toupper|tolower|alpha17|alpha27|getargs|evtloop|boxopen|fliprgb|inverse|scriptd|iospath|cutopen|isatty|toCRLF|toHOST|isutf8|getenv|stdout|script|usleep|sminfo|expand|stderr|clear|fetch|every|erase|empty|Debug|EMPTY|split|names|timex|cutLF|stdin|apply|items|table|exit|Note|list|take|leaf|type|bind|drop|rows|each|echo|sign|CRLF|utf8|sort|pick|ARGV|uucp|ucp|DEL|inv|hfd|dfh|def|LF2|EAV|toJ|TAB|nl|FF|LF|bx|nc|CR|on)">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="=[.:]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[-=+*#$%@!~`^&amp;&#34;;:.,&lt;&gt;{}\[\]\\|/]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[abCdDeEfHiIjLMoprtT]\.">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="[aDiLpqsStux]\:">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(_[0-9])\:">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="parentheses"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^)]">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="^\)">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[)]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="explicitDefinition">
+      <rule pattern="\b[nmuvxy]\b">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+      <rule pattern="[^)]">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="^\)">
+        <token type="NameLabel"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[)]">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="numbers">
+      <rule pattern="\b_{1,2}\b">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="_?\d+(\.\d+)?(\s*[ejr]\s*)_?\d+(\.?=\d+)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="_?\d+\.(?=\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="_?\d+x">
+        <token type="LiteralNumberIntegerLong"/>
+      </rule>
+      <rule pattern="_?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="nounDefinition">
+      <rule pattern="[^)]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="^\)">
+        <token type="NameLabel"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[)]">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="parentheses">
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="explicitDefinition"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/java.xml

@@ -0,0 +1,193 @@
+<lexer>
+  <config>
+    <name>Java</name>
+    <alias>java</alias>
+    <filename>*.java</filename>
+    <mime_type>text/x-java</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(^\s*)((?:(?:public|private|protected|static|strictfp)(?:\s+))*)(record)\b">
+        <bygroups>
+          <token type="TextWhitespace" />
+          <usingself state="root" />
+          <token type="KeywordDeclaration" />
+        </bygroups>
+        <push state="class" />
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="TextWhitespace" />
+      </rule>
+      <rule pattern="(//.*?)(\n)">
+        <bygroups>
+          <token type="CommentSingle" />
+          <token type="TextWhitespace" />
+        </bygroups>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline" />
+      </rule>
+      <rule
+        pattern="(assert|break|case|catch|continue|default|do|else|finally|for|if|goto|instanceof|new|return|switch|this|throw|try|while)\b">
+        <token type="Keyword" />
+      </rule>
+      <rule pattern="((?:(?:[^\W\d]|\$)[\w.\[\]$&lt;&gt;]*\s+)+?)((?:[^\W\d]|\$)[\w$]*)(\s*)(\()">
+        <bygroups>
+          <usingself state="root" />
+          <token type="NameFunction" />
+          <token type="TextWhitespace" />
+          <token type="Punctuation" />
+        </bygroups>
+      </rule>
+      <rule pattern="@[^\W\d][\w.]*">
+        <token type="NameDecorator" />
+      </rule>
+      <rule
+        pattern="(abstract|const|enum|extends|final|implements|native|private|protected|public|sealed|static|strictfp|super|synchronized|throws|transient|volatile|yield)\b">
+        <token type="KeywordDeclaration" />
+      </rule>
+      <rule pattern="(boolean|byte|char|double|float|int|long|short|void)\b">
+        <token type="KeywordType" />
+      </rule>
+      <rule pattern="(package)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace" />
+          <token type="TextWhitespace" />
+        </bygroups>
+        <push state="import" />
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant" />
+      </rule>
+      <rule pattern="(class|interface)\b">
+        <token type="KeywordDeclaration" />
+        <push state="class" />
+      </rule>
+      <rule pattern="(var)(\s+)">
+        <bygroups>
+          <token type="KeywordDeclaration" />
+          <token type="TextWhitespace" />
+        </bygroups>
+        <push state="var" />
+      </rule>
+      <rule pattern="(import(?:\s+static)?)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace" />
+          <token type="TextWhitespace" />
+        </bygroups>
+        <push state="import" />
+      </rule>
+      <rule pattern="&quot;&quot;&quot;\n">
+        <token type="LiteralString" />
+        <push state="multiline_string" />
+      </rule>
+      <rule pattern="&quot;">
+        <token type="LiteralString" />
+        <push state="string" />
+      </rule>
+      <rule pattern="&#x27;\\.&#x27;|&#x27;[^\\]&#x27;|&#x27;\\u[0-9a-fA-F]{4}&#x27;">
+        <token type="LiteralStringChar" />
+      </rule>
+      <rule pattern="(\.)((?:[^\W\d]|\$)[\w$]*)">
+        <bygroups>
+          <token type="Punctuation" />
+          <token type="NameAttribute" />
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)(default)(:)">
+        <bygroups>
+          <token type="TextWhitespace" />
+          <token type="Keyword" />
+          <token type="Punctuation" />
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)((?:[^\W\d]|\$)[\w$]*)(:)">
+        <bygroups>
+          <token type="TextWhitespace" />
+          <token type="NameLabel" />
+          <token type="Punctuation" />
+        </bygroups>
+      </rule>
+      <rule pattern="([^\W\d]|\$)[\w$]*">
+        <token type="Name" />
+      </rule>
+      <rule
+        pattern="([0-9][0-9_]*\.([0-9][0-9_]*)?|\.[0-9][0-9_]*)([eE][+\-]?[0-9][0-9_]*)?[fFdD]?|[0-9][eE][+\-]?[0-9][0-9_]*[fFdD]?|[0-9]([eE][+\-]?[0-9][0-9_]*)?[fFdD]|0[xX]([0-9a-fA-F][0-9a-fA-F_]*\.?|([0-9a-fA-F][0-9a-fA-F_]*)?\.[0-9a-fA-F][0-9a-fA-F_]*)[pP][+\-]?[0-9][0-9_]*[fFdD]?">
+        <token type="LiteralNumberFloat" />
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F][0-9a-fA-F_]*[lL]?">
+        <token type="LiteralNumberHex" />
+      </rule>
+      <rule pattern="0[bB][01][01_]*[lL]?">
+        <token type="LiteralNumberBin" />
+      </rule>
+      <rule pattern="0[0-7_]+[lL]?">
+        <token type="LiteralNumberOct" />
+      </rule>
+      <rule pattern="0|[1-9][0-9_]*[lL]?">
+        <token type="LiteralNumberInteger" />
+      </rule>
+      <rule pattern="[~^*!%&amp;\[\]&lt;&gt;|+=/?-]">
+        <token type="Operator" />
+      </rule>
+      <rule pattern="[{}();:.,]">
+        <token type="Punctuation" />
+      </rule>
+      <rule pattern="\n">
+        <token type="TextWhitespace" />
+      </rule>
+    </state>
+    <state name="class">
+      <rule pattern="\s+">
+        <token type="Text" />
+      </rule>
+      <rule pattern="([^\W\d]|\$)[\w$]*">
+        <token type="NameClass" />
+        <pop depth="1" />
+      </rule>
+    </state>
+    <state name="var">
+      <rule pattern="([^\W\d]|\$)[\w$]*">
+        <token type="Name" />
+        <pop depth="1" />
+      </rule>
+    </state>
+    <state name="import">
+      <rule pattern="[\w.]+\*?">
+        <token type="NameNamespace" />
+        <pop depth="1" />
+      </rule>
+    </state>
+    <state name="multiline_string">
+      <rule pattern="&quot;&quot;&quot;">
+        <token type="LiteralString" />
+        <pop depth="1" />
+      </rule>
+      <rule pattern="&quot;">
+        <token type="LiteralString" />
+      </rule>
+      <rule>
+        <include state="string" />
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="[^\\&quot;]+">
+        <token type="LiteralString" />
+      </rule>
+      <rule pattern="\\\\">
+        <token type="LiteralString" />
+      </rule>
+      <rule pattern="\\&quot;">
+        <token type="LiteralString" />
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString" />
+      </rule>
+      <rule pattern="&quot;">
+        <token type="LiteralString" />
+        <pop depth="1" />
+      </rule>
+    </state>
+  </rules>
+</lexer>
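The character-literal rule in the Java lexer above (`'\\.'|'[^\\]'|'\\u[0-9a-fA-F]{4}'` once the XML entities are decoded) is plain enough to be valid RE2 as well. As a quick sanity sketch, not part of the vendored code, the pattern can be anchored and probed with Go's standard `regexp` package (variable names here are our own):

```go
package main

import (
	"fmt"
	"regexp"
)

// Character-literal pattern from the Java lexer, with XML entities
// decoded and ^...$ anchors added so it must match the whole input.
var javaChar = regexp.MustCompile(`^('\\.'|'[^\\]'|'\\u[0-9a-fA-F]{4}')$`)

func main() {
	// 'a' and '\u0041' should match; 'ab' is two characters and should not.
	for _, s := range []string{`'a'`, `'\n'`, `'\u0041'`, `'ab'`} {
		fmt.Printf("%-10s %v\n", s, javaChar.MatchString(s))
	}
}
```

Note that Chroma compiles its patterns with its own regex engine, so this only demonstrates the simple rules that happen to overlap with RE2 syntax.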

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/javascript.xml 🔗

@@ -0,0 +1,160 @@
+<lexer>
+  <config>
+    <name>JavaScript</name>
+    <alias>js</alias>
+    <alias>javascript</alias>
+    <filename>*.js</filename>
+    <filename>*.jsm</filename>
+    <filename>*.mjs</filename>
+    <filename>*.cjs</filename>
+    <mime_type>application/javascript</mime_type>
+    <mime_type>application/x-javascript</mime_type>
+    <mime_type>text/x-javascript</mime_type>
+    <mime_type>text/javascript</mime_type>
+    <dot_all>true</dot_all>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="interp">
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\\">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="\\`">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="\\[^`\\]">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="\$\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interp-inside"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="[^`\\$]+">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+    </state>
+    <state name="interp-inside">
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="commentsandwhitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&lt;!--">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="slashstartsregex">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/([gimuy]+\b|\B)">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?=/)">
+        <token type="Text"/>
+        <push state="#pop" state="badregex"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="badregex">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\A#! ?/.*?\n">
+        <token type="CommentHashbang"/>
+      </rule>
+      <rule pattern="^(?=\s|/|&lt;!--)">
+        <token type="Text"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="\d+(\.\d*|[eE][+\-]?\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[bB][01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0[oO][0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9][0-9_]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\.\.\.|=&gt;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\+\+|--|~|&amp;&amp;|\?|:|\|\||\\(?=\n)|(&lt;&lt;|&gt;&gt;&gt;?|==?|!=?|[-&lt;&gt;+*%&amp;|^/])=?">
+        <token type="Operator"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[{(\[;,]">
+        <token type="Punctuation"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[})\].]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(for|in|while|do|break|return|continue|switch|case|default|if|else|throw|try|catch|finally|new|delete|typeof|instanceof|void|yield|this|of)\b">
+        <token type="Keyword"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(var|let|with|function)\b">
+        <token type="KeywordDeclaration"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(abstract|async|await|boolean|byte|char|class|const|debugger|double|enum|export|extends|final|float|goto|implements|import|int|interface|long|native|package|private|protected|public|short|static|super|synchronized|throws|transient|volatile)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(true|false|null|NaN|Infinity|undefined)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(Array|Boolean|Date|Error|Function|Math|netscape|Number|Object|Packages|RegExp|String|Promise|Proxy|sun|decodeURI|decodeURIComponent|encodeURI|encodeURIComponent|Error|eval|isFinite|isNaN|isSafeInteger|parseFloat|parseInt|document|this|window)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?:[$_\p{L}\p{N}]|\\u[a-fA-F0-9]{4})(?:(?:[$\p{L}\p{N}]|\\u[a-fA-F0-9]{4}))*">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <push state="interp"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>
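The `slashstartsregex` state above exists because `/` is ambiguous in JavaScript: after an operator or keyword it starts a regex literal, after a value it means division. The regex-literal pattern itself requires a non-empty body, which is also what keeps `//` lexing as a comment. A small illustrative check, with our own anchoring and naming, using Go's stdlib `regexp` (the pattern avoids lookarounds, so RE2 accepts it unchanged):

```go
package main

import (
	"fmt"
	"regexp"
)

// Regex-literal pattern from the "slashstartsregex" state, anchored at the
// start to mimic the lexer matching at the current position.
var jsRegexLit = regexp.MustCompile(`^/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/([gimuy]+\b|\B)`)

func main() {
	// A "/" inside a character class is allowed; an empty body ("//...") is not,
	// so line comments can never be mistaken for regex literals.
	for _, s := range []string{`/a+b/g`, `/[a/]/i`, `//not a regex`} {
		fmt.Printf("%-14s %v\n", s, jsRegexLit.MatchString(s))
	}
}
```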

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/json.xml 🔗

@@ -0,0 +1,112 @@
+<lexer>
+  <config>
+    <name>JSON</name>
+    <alias>json</alias>
+    <filename>*.json</filename>
+    <filename>*.jsonc</filename>
+    <filename>*.avsc</filename>
+    <mime_type>application/json</mime_type>
+    <dot_all>true</dot_all>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="root">
+      <rule>
+        <include state="value"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+    </state>
+    <state name="simplevalue">
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="-?(0|[1-9]\d*)(\.\d+[eE](\+|-)?\d+|[eE](\+|-)?\d+|\.\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="-?(0|[1-9]\d*)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="objectattribute">
+      <rule>
+        <include state="value"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="2"/>
+      </rule>
+    </state>
+    <state name="objectvalue">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="comment"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="NameTag"/>
+        <push state="objectattribute"/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="arrayvalue">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="value"/>
+      </rule>
+      <rule>
+        <include state="comment"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\]">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="value">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="simplevalue"/>
+      </rule>
+      <rule>
+        <include state="comment"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push state="objectvalue"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="Punctuation"/>
+        <push state="arrayvalue"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>
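The `simplevalue` state above splits numbers into two rules: a float must carry a fraction or exponent, and the shared `(0|[1-9]\d*)` integer part rejects leading zeros, matching the JSON grammar. A minimal sketch of that behavior (anchors and variable names are ours), again via Go's stdlib `regexp`:

```go
package main

import (
	"fmt"
	"regexp"
)

// Number patterns copied from the JSON lexer's "simplevalue" state,
// anchored so they must match the whole input.
var (
	jsonFloat = regexp.MustCompile(`^-?(0|[1-9]\d*)(\.\d+[eE](\+|-)?\d+|[eE](\+|-)?\d+|\.\d+)$`)
	jsonInt   = regexp.MustCompile(`^-?(0|[1-9]\d*)$`)
)

func main() {
	// "42" is an integer, not a float; "007" fails both (leading zeros).
	for _, s := range []string{"3.14", "1e10", "-0.5e-3", "42", "007"} {
		fmt.Printf("%-8s float=%-5v int=%v\n", s, jsonFloat.MatchString(s), jsonInt.MatchString(s))
	}
}
```

The ordering in the lexer matters for the same reason the anchors do here: the float rule precedes the integer rule so that `3.14` is not consumed as `3` followed by stray punctuation.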

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/jsonata.xml 🔗

@@ -0,0 +1,83 @@
+<lexer>
+  <config>
+    <name>JSONata</name>
+    <alias>jsonata</alias>
+    <filename>*.jsonata</filename>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="[{}()\[\]:;,\.=]">
+        <token type="Punctuation"/>
+      </rule>
+       <rule pattern="\.\."> // Spread operator
+        <token type="Operator"/>
+      </rule> 
+      <rule pattern="\^(?=\()"> // Sort operator
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\*\*|\*(?=\.)|\*"> // Descendant | Wildcard | Multiplication
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\/(?!\*)"> // Division
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[&lt;&gt;!]=?"> // Comparison operators
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="~>">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\b(and|or|in)\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[%@#&amp;?]|\+(?!\d)|\-(?!\d)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\$[a-zA-Z0-9_]*(?![\w\(])">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\$\w*(?=\()">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(true|false)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="\b(function)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(\+|-)?(0|[1-9]\d*)(\.\d+[eE](\+|-)?\d+|[eE](\+|-)?\d+|\.\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule> 
+      <rule pattern="(\+|-)?(0|[1-9]\d*)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <!-- NOTE: This expression matches all object keys (NameTags), which are essentially strings with double quotes
+      that should only be captured on the left side of a colon (:) within a JSON-like object.
+      Therefore, this expression must precede the one for all LiteralStringDouble -->
+      <rule pattern="&#34;(\\.|[^\\&#34;\r\n])*&#34;(?=\s*:)">
+        <token type="NameTag"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="`.*`">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <!-- NOTE: This expression matches everything remaining, which should be only JSONata names.
+      Therefore, it has been left as last intentionally -->
+      <rule pattern="[a-zA-Z0-9_]*"> 
+        <token type="Name"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/jsonnet.xml 🔗

@@ -0,0 +1,138 @@
+
+<lexer>
+  <config>
+    <name>Jsonnet</name>
+    <alias>jsonnet</alias>
+    <filename>*.jsonnet</filename>
+    <filename>*.libsonnet</filename>
+  </config>
+  <rules>
+    <state name="_comments">
+      <rule pattern="(//|#).*\n"><token type="CommentSingle"/></rule>
+      <rule pattern="/\*\*([^/]|/(?!\*))*\*/"><token type="LiteralStringDoc"/></rule>
+      <rule pattern="/\*([^/]|/(?!\*))*\*/"><token type="Comment"/></rule>
+    </state>
+    <state name="root">
+      <rule><include state="_comments"/></rule>
+      <rule pattern="@&#x27;.*&#x27;"><token type="LiteralString"/></rule>
+      <rule pattern="@&quot;.*&quot;"><token type="LiteralString"/></rule>
+      <rule pattern="&#x27;"><token type="LiteralString"/><push state="singlestring"/></rule>
+      <rule pattern="&quot;"><token type="LiteralString"/><push state="doublestring"/></rule>
+      <rule pattern="\|\|\|(.|\n)*\|\|\|"><token type="LiteralString"/></rule>
+      <rule pattern="[+-]?[0-9]+(.[0-9])?"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="[!$~+\-&amp;|^=&lt;&gt;*/%]"><token type="Operator"/></rule>
+      <rule pattern="\{"><token type="Punctuation"/><push state="object"/></rule>
+      <rule pattern="\["><token type="Punctuation"/><push state="array"/></rule>
+      <rule pattern="local\b"><token type="Keyword"/><push state="local_name"/></rule>
+      <rule pattern="assert\b"><token type="Keyword"/><push state="assert"/></rule>
+      <rule pattern="(assert|else|error|false|for|if|import|importstr|in|null|tailstrict|then|self|super|true)\b"><token type="Keyword"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="function(?=\()"><token type="Keyword"/><push state="function_params"/></rule>
+      <rule pattern="std\.[^\W\d]\w*(?=\()"><token type="NameBuiltin"/><push state="function_args"/></rule>
+      <rule pattern="[^\W\d]\w*(?=\()"><token type="NameFunction"/><push state="function_args"/></rule>
+      <rule pattern="[^\W\d]\w*"><token type="NameVariable"/></rule>
+      <rule pattern="[\.()]"><token type="Punctuation"/></rule>
+    </state>
+    <state name="singlestring">
+      <rule pattern="[^&#x27;\\]"><token type="LiteralString"/></rule>
+      <rule pattern="\\."><token type="LiteralStringEscape"/></rule>
+      <rule pattern="&#x27;"><token type="LiteralString"/><pop depth="1"/></rule>
+    </state>
+    <state name="doublestring">
+      <rule pattern="[^&quot;\\]"><token type="LiteralString"/></rule>
+      <rule pattern="\\."><token type="LiteralStringEscape"/></rule>
+      <rule pattern="&quot;"><token type="LiteralString"/><pop depth="1"/></rule>
+    </state>
+    <state name="array">
+      <rule pattern=","><token type="Punctuation"/></rule>
+      <rule pattern="\]"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule><include state="root"/></rule>
+    </state>
+    <state name="local_name">
+      <rule pattern="[^\W\d]\w*(?=\()"><token type="NameFunction"/><push state="function_params"/></rule>
+      <rule pattern="[^\W\d]\w*"><token type="NameVariable"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="(?==)"><token type="TextWhitespace"/><push state="#pop" state="local_value"/></rule>
+    </state>
+    <state name="local_value">
+      <rule pattern="="><token type="Operator"/></rule>
+      <rule pattern=";"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule><include state="root"/></rule>
+    </state>
+    <state name="assert">
+      <rule pattern=":"><token type="Punctuation"/></rule>
+      <rule pattern=";"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule><include state="root"/></rule>
+    </state>
+    <state name="function_params">
+      <rule pattern="[^\W\d]\w*"><token type="NameVariable"/></rule>
+      <rule pattern="\("><token type="Punctuation"/></rule>
+      <rule pattern="\)"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule pattern=","><token type="Punctuation"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="="><token type="Operator"/><push state="function_param_default"/></rule>
+    </state>
+    <state name="function_args">
+      <rule pattern="\("><token type="Punctuation"/></rule>
+      <rule pattern="\)"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule pattern=","><token type="Punctuation"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule><include state="root"/></rule>
+    </state>
+    <state name="object">
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="local\b"><token type="Keyword"/><push state="object_local_name"/></rule>
+      <rule pattern="assert\b"><token type="Keyword"/><push state="object_assert"/></rule>
+      <rule pattern="\["><token type="Operator"/><push state="field_name_expr"/></rule>
+      <rule pattern="(?=[^\W\d]\w*)"><token type="Text"/><push state="field_name"/></rule>
+      <rule pattern="\}"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule pattern="&quot;"><token type="NameVariable"/><push state="double_field_name"/></rule>
+      <rule pattern="&#x27;"><token type="NameVariable"/><push state="single_field_name"/></rule>
+      <rule><include state="_comments"/></rule>
+    </state>
+    <state name="field_name">
+      <rule pattern="[^\W\d]\w*(?=\()"><token type="NameFunction"/><push state="field_separator" state="function_params"/></rule>
+      <rule pattern="[^\W\d]\w*"><token type="NameVariable"/><push state="field_separator"/></rule>
+    </state>
+    <state name="double_field_name">
+      <rule pattern="([^&quot;\\]|\\.)*&quot;"><token type="NameVariable"/><push state="field_separator"/></rule>
+    </state>
+    <state name="single_field_name">
+      <rule pattern="([^&#x27;\\]|\\.)*&#x27;"><token type="NameVariable"/><push state="field_separator"/></rule>
+    </state>
+    <state name="field_name_expr">
+      <rule pattern="\]"><token type="Operator"/><push state="field_separator"/></rule>
+      <rule><include state="root"/></rule>
+    </state>
+    <state name="function_param_default">
+      <rule pattern="(?=[,\)])"><token type="TextWhitespace"/><pop depth="1"/></rule>
+      <rule><include state="root"/></rule>
+    </state>
+    <state name="field_separator">
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="\+?::?:?"><token type="Punctuation"/><push state="#pop" state="#pop" state="field_value"/></rule>
+      <rule><include state="_comments"/></rule>
+    </state>
+    <state name="field_value">
+      <rule pattern=","><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule pattern="\}"><token type="Punctuation"/><pop depth="2"/></rule>
+      <rule><include state="root"/></rule>
+    </state>
+    <state name="object_assert">
+      <rule pattern=":"><token type="Punctuation"/></rule>
+      <rule pattern=","><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule><include state="root"/></rule>
+    </state>
+    <state name="object_local_name">
+      <rule pattern="[^\W\d]\w*"><token type="NameVariable"/><push state="#pop" state="object_local_value"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+    </state>
+    <state name="object_local_value">
+      <rule pattern="="><token type="Operator"/></rule>
+      <rule pattern=","><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule pattern="\}"><token type="Punctuation"/><pop depth="2"/></rule>
+      <rule><include state="root"/></rule>
+    </state>
+  </rules>
+</lexer>
+

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/julia.xml 🔗

@@ -0,0 +1,400 @@
+<lexer>
+  <config>
+    <name>Julia</name>
+    <alias>julia</alias>
+    <alias>jl</alias>
+    <filename>*.jl</filename>
+    <mime_type>text/x-julia</mime_type>
+    <mime_type>application/x-julia</mime_type>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="(&#34;)((?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*)|\d+)?">
+        <bygroups>
+          <token type="LiteralString"/>
+          <token type="LiteralStringAffix"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\([\\&#34;\&#39;$nrbtfav]|(x|u|U)[a-fA-F0-9]+|\d+)">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule>
+        <include state="interp"/>
+      </rule>
+      <rule pattern="%[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?[hlL]?[E-GXc-giorsux%]">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="[^&#34;$%\\]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern=".">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="curly">
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*)">
+        <token type="KeywordType"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="rawstring">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\&#34;">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="([^&#34;\\]|\\[^&#34;])+">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="tqcommand">
+      <rule pattern="(```)((?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*)|\d+)?">
+        <bygroups>
+          <token type="LiteralStringBacktick"/>
+          <token type="LiteralStringAffix"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\$">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule>
+        <include state="interp"/>
+      </rule>
+      <rule pattern="[^\\`$]+">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern=".">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+    </state>
+    <state name="in-intp">
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="tqstring">
+      <rule pattern="(&#34;&#34;&#34;)((?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*)|\d+)?">
+        <bygroups>
+          <token type="LiteralString"/>
+          <token type="LiteralStringAffix"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\([\\&#34;\&#39;$nrbtfav]|(x|u|U)[a-fA-F0-9]+|\d+)">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule>
+        <include state="interp"/>
+      </rule>
+      <rule pattern="[^&#34;$%\\]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern=".">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="interp">
+      <rule pattern="\$(?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*)">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="(\$)(\()">
+        <bygroups>
+          <token type="LiteralStringInterpol"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="in-intp"/>
+      </rule>
+    </state>
+    <state name="tqregex">
+      <rule pattern="(&#34;&#34;&#34;)([imsxa]*)?">
+        <bygroups>
+          <token type="LiteralStringRegex"/>
+          <token type="LiteralStringAffix"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^&#34;]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="#=">
+        <token type="CommentMultiline"/>
+        <push state="blockcomment"/>
+      </rule>
+      <rule pattern="#.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="[\[\](),;]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="((?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*))(\s*)(:)((?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*))">
+        <bygroups>
+          <token type="Name"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Name"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?&lt;![\]):&lt;&gt;\d.])(:(?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*))">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="(?&lt;=::)(\s*)((?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*))\b(?![(\[])">
+        <bygroups>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*))(\s*)([&lt;&gt;]:)(\s*)((?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*))\b(?![(\[])">
+        <bygroups>
+          <token type="KeywordType"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([&lt;&gt;]:)(\s*)((?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*))\b(?![(\[])">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b((?:[a-zA-Z_¡-􏿿][a-zA-Z_0-9!¡-􏿿]*))(\s*)([&lt;&gt;]:)">
+        <bygroups>
+          <token type="KeywordType"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/jungle.xml 🔗

@@ -0,0 +1,98 @@
+<lexer>
+  <config>
+    <name>Jungle</name>
+    <alias>jungle</alias>
+    <filename>*.jungle</filename>
+    <mime_type>text/x-jungle</mime_type>
+  </config>
+  <rules>
+    <state name="var">
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\b(((re)?source|barrel)Path|excludeAnnotations|annotations|lang)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="\bbase\b">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="\b(ind|zsm|hrv|ces|dan|dut|eng|fin|fre|deu|gre|hun|ita|nob|po[lr]|rus|sl[ov]|spa|swe|ara|heb|zh[st]|jpn|kor|tha|vie|bul|tur)">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="\b((semi)?round|rectangle)(-\d+x\d+)?\b">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="[\.;\[\]\(\$]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="#(\n|[\w\W]*?[^#]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="^(?=\S)">
+        <token type="None"/>
+        <push state="instruction"/>
+      </rule>
+      <rule pattern="[\.;\[\]\(\)\$]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="instruction">
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="=">
+        <token type="Operator"/>
+        <push state="value"/>
+      </rule>
+      <rule pattern="(?=\S)">
+        <token type="None"/>
+        <push state="var"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="value">
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\$\(">
+        <token type="Punctuation"/>
+        <push state="var"/>
+      </rule>
+      <rule pattern="[;\[\]\(\)\$]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="#(\n|[\w\W]*?[^#]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="[\w_\-\.\/\\]+">
+        <token type="Text"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/kotlin.xml 🔗

@@ -0,0 +1,223 @@
+<lexer>
+  <config>
+    <name>Kotlin</name>
+    <alias>kotlin</alias>
+    <filename>*.kt</filename>
+    <mime_type>text/x-kotlin</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="\\[tbnr&#39;&#34;\\\$]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\u[0-9a-fA-F]{4}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-interpol"/>
+      </rule>
+      <rule pattern="[^\n\\&#34;$]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="package">
+      <rule pattern="\S+">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="class">
+      <rule pattern="\x60[^\x60]+?\x60">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?:[_\p{L}][\p{L}\p{N}]*|`@?[_\p{L}][\p{L}\p{N}]+`)">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="property">
+      <rule pattern="\x60[^\x60]+?\x60">
+        <token type="NameProperty"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?:[_\p{L}][\p{L}\p{N}]*|`@?[_\p{L}][\p{L}\p{N}]+`)">
+        <token type="NameProperty"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string-interpol">
+      <rule pattern="\$(?:[_\p{L}][\p{L}\p{N}]*|`@?[_\p{L}][\p{L}\p{N}]+`)">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="\${[^}\n]*}">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+    </state>
+    <state name="generics-specification">
+      <rule pattern="&lt;">
+        <token type="Punctuation"/>
+        <push state="generics-specification"/>
+      </rule>
+      <rule pattern="&gt;">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[,:*?]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(in|out|reified)">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\x60[^\x60]+?\x60">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(?:[_\p{L}][\p{L}\p{N}]*|`@?[_\p{L}][\p{L}\p{N}]+`)">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="^\s*\[.*?\]">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//[^\n]*\n?">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/[*].*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="!==|!in|!is|===">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="%=|&amp;&amp;|\*=|\+\+|\+=|--|-=|-&gt;|\.\.|\/=|::|&lt;=|==|&gt;=|!!|!=|\|\||\?[:.]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*()+=|\[\]:;,.&lt;&gt;\/?-]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[{}]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <push state="rawstring"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(&#39;)(\\u[0-9a-fA-F]{4})(&#39;)">
+        <bygroups>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringEscape"/>
+          <token type="LiteralStringChar"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&#39;\\.&#39;|&#39;[^\\]&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]+[Uu]?[Ll]?|[0-9]+(\.[0-9]*)?([eE][+-][0-9]+)?[fF]?[Uu]?[Ll]?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="(companion)(\s+)(object)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(class|interface|object)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="class"/>
+      </rule>
+      <rule pattern="(package|import)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="package"/>
+      </rule>
+      <rule pattern="(val|var)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="property"/>
+      </rule>
+      <rule pattern="(fun)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="function"/>
+      </rule>
+      <rule pattern="(abstract|actual|annotation|as|as\?|break|by|catch|class|companion|const|constructor|continue|crossinline|data|delegate|do|dynamic|else|enum|expect|external|false|field|file|final|finally|for|fun|get|if|import|in|infix|init|inline|inner|interface|internal|is|it|lateinit|noinline|null|object|open|operator|out|override|package|param|private|property|protected|public|receiver|reified|return|sealed|set|setparam|super|suspend|tailrec|this|throw|true|try|typealias|typeof|val|value|var|vararg|when|where|while)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="@(?:[_\p{L}][\p{L}\p{N}]*|`@?[_\p{L}][\p{L}\p{N}]+`)">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="(?:\p{Lu}[_\p{L}]*)(?=\.)">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(?:[_\p{L}][\p{L}\p{N}]*|`@?[_\p{L}][\p{L}\p{N}]+`)">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="function">
+      <rule pattern="&lt;">
+        <token type="Punctuation"/>
+        <push state="generics-specification"/>
+      </rule>
+      <rule pattern="\x60[^\x60]+?\x60">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?:[_\p{L}][\p{L}\p{N}]*|`@?[_\p{L}][\p{L}\p{N}]+`)">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="rawstring">
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?:[^$&#34;]+|\&#34;{1,2}[^&#34;])+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule>
+        <include state="string-interpol"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/lighttpd_configuration_file.xml 🔗

@@ -0,0 +1,42 @@
+<lexer>
+  <config>
+    <name>Lighttpd configuration file</name>
+    <alias>lighty</alias>
+    <alias>lighttpd</alias>
+    <mime_type>text/x-lighttpd-conf</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="#.*\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\S*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[a-zA-Z._-]+">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\d+\.\d+\.\d+\.\d+(?:/\d+)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="=&gt;|=~|\+=|==|=|\+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\$[A-Z]+">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[(){}\[\],]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&#34;([^&#34;\\]*(?:\\.[^&#34;\\]*)*)&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/llvm.xml 🔗

@@ -0,0 +1,73 @@
+<lexer>
+  <config>
+    <name>LLVM</name>
+    <alias>llvm</alias>
+    <filename>*.ll</filename>
+    <mime_type>text/x-llvm</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="([-a-zA-Z$._][\w\-$.]*|&#34;[^&#34;]*?&#34;)\s*:">
+        <token type="NameLabel"/>
+      </rule>
+      <rule>
+        <include state="keyword"/>
+      </rule>
+      <rule pattern="%([-a-zA-Z$._][\w\-$.]*|&#34;[^&#34;]*?&#34;)">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="@([-a-zA-Z$._][\w\-$.]*|&#34;[^&#34;]*?&#34;)">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="%\d+">
+        <token type="NameVariableAnonymous"/>
+      </rule>
+      <rule pattern="@\d+">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="#\d+">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="!([-a-zA-Z$._][\w\-$.]*|&#34;[^&#34;]*?&#34;)">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="!\d+">
+        <token type="NameVariableAnonymous"/>
+      </rule>
+      <rule pattern="c?&#34;[^&#34;]*?&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="0[xX][a-fA-F0-9]+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="-?\d+(?:[.]\d+)?(?:[eE][-+]?\d+(?:[.]\d+)?)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[=&lt;&gt;{}\[\]()*.,!]|x\b">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="(\n|\s)+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=";.*?\n">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="keyword">

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/lua.xml 🔗

@@ -0,0 +1,158 @@
+<lexer>
+  <config>
+    <name>Lua</name>
+    <alias>lua</alias>
+    <filename>*.lua</filename>
+    <filename>*.wlua</filename>
+    <mime_type>text/x-lua</mime_type>
+    <mime_type>application/x-lua</mime_type>
+  </config>
+  <rules>
+    <state name="funcname">
+      <rule>
+        <include state="ws"/>
+      </rule>
+      <rule pattern="[.:]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(?:[^\W\d]\w*)(?=(?:(?:--\[(=*)\[[\w\W]*?\](\2)\])|(?:--.*$)|(?:\s+))*[.:])">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(?:[^\W\d]\w*)">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="label">
+      <rule>
+        <include state="ws"/>
+      </rule>
+      <rule pattern="::">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?:[^\W\d]\w*)">
+        <token type="NameLabel"/>
+      </rule>
+    </state>
+    <state name="dqs">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="#!.*">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule>
+        <push state="base"/>
+      </rule>
+    </state>
+    <state name="ws">
+      <rule pattern="(?:--\[(=*)\[[\w\W]*?\](\1)\])">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="(?:--.*$)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(?:\s+)">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="goto">
+      <rule>
+        <include state="ws"/>
+      </rule>
+      <rule pattern="(?:[^\W\d]\w*)">
+        <token type="NameLabel"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="sqs">
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\\&#39;]+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+    <state name="base">
+      <rule>
+        <include state="ws"/>
+      </rule>
+      <rule pattern="(?i)0x[\da-f]*(\.[\da-f]*)?(p[+-]?\d+)?">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="(?i)(\d*\.\d+|\d+\.\d*)(e[+-]?\d+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(?i)\d+e[+-]?\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="(?s)\[(=*)\[.*?\]\1\]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="::">
+        <token type="Punctuation"/>
+        <push state="label"/>
+      </rule>
+      <rule pattern="\.{3}">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[=&lt;&gt;|~&amp;+\-*/%#^]+|\.\.">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[\[\]{}().,:;]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(and|or|not)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="(break|do|else|elseif|end|for|if|in|repeat|return|then|until|while)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="goto\b">
+        <token type="KeywordReserved"/>
+        <push state="goto"/>
+      </rule>
+      <rule pattern="(local)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(true|false|nil)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(function)\b">
+        <token type="KeywordReserved"/>
+        <push state="funcname"/>
+      </rule>
+      <rule pattern="[A-Za-z_]\w*(\.[A-Za-z_]\w*)?">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <combined state="stringescape" state="sqs"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <combined state="stringescape" state="dqs"/>
+      </rule>
+    </state>
+    <state name="stringescape">
+      <rule pattern="\\([abfnrtv\\&#34;\&#39;]|[\r\n]{1,2}|z\s*|x[0-9a-fA-F]{2}|\d{1,3}|u\{[0-9a-fA-F]+\})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/makefile.xml 🔗

@@ -0,0 +1,131 @@
+<lexer>
+  <config>
+    <name>Makefile</name>
+    <alias>make</alias>
+    <alias>makefile</alias>
+    <alias>mf</alias>
+    <alias>bsdmake</alias>
+    <filename>*.mak</filename>
+    <filename>*.mk</filename>
+    <filename>Makefile</filename>
+    <filename>makefile</filename>
+    <filename>Makefile.*</filename>
+    <filename>GNUmakefile</filename>
+    <filename>BSDmakefile</filename>
+    <filename>Justfile</filename>
+    <filename>justfile</filename>
+    <filename>.justfile</filename>
+    <mime_type>text/x-makefile</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^(?:[\t ]+.*\n|\n)+">
+        <using lexer="Bash"/>
+      </rule>
+      <rule pattern="\$[&lt;@$+%?|*]">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="#.*?\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(export)(\s+)(?=[\w${}\t -]+\n)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="export"/>
+      </rule>
+      <rule pattern="export\s+">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="([\w${}().-]+)(\s*)([!?:+]?=)([ \t]*)((?:.*\\\n)+|.*\n)">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <using lexer="Bash"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?s)&#34;(\\\\|\\.|[^&#34;\\])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="(?s)&#39;(\\\\|\\.|[^&#39;\\])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="([^\n:]+)(:+)([ \t]*)">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="block-header"/>
+      </rule>
+      <rule pattern="\$\(">
+        <token type="Keyword"/>
+        <push state="expansion"/>
+      </rule>
+    </state>
+    <state name="expansion">
+      <rule pattern="[^$a-zA-Z_()]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Keyword"/>
+        <push/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="export">
+      <rule pattern="[\w${}-]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="block-header">
+      <rule pattern="[,|]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="#.*?\n">
+        <token type="Comment"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\$\(">
+        <token type="Keyword"/>
+        <push state="expansion"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]+">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/mako.xml 🔗

@@ -0,0 +1,120 @@
+<lexer>
+  <config>
+    <name>Mako</name>
+    <alias>mako</alias>
+    <filename>*.mao</filename>
+    <mime_type>application/x-mako</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(\s*)(%)(\s*end(?:\w+))(\n|\Z)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+          <token type="Keyword"/>
+          <token type="Other"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s*)(%)([^\n]*)(\n|\Z)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+          <using lexer="Python"/>
+          <token type="Other"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s*)(##[^\n]*)(\n|\Z)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+          <token type="Other"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?s)&lt;%doc&gt;.*?&lt;/%doc&gt;">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(&lt;%)([\w.:]+)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="NameBuiltin"/>
+        </bygroups>
+        <push state="tag"/>
+      </rule>
+      <rule pattern="(&lt;/%)([\w.:]+)(&gt;)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="NameBuiltin"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&lt;%(?=([\w.:]+))">
+        <token type="CommentPreproc"/>
+        <push state="ondeftags"/>
+      </rule>
+      <rule pattern="(&lt;%(?:!?))(.*?)(%&gt;)(?s)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <using lexer="Python"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\$\{)(.*?)(\})">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <using lexer="Python"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?sx)&#xA;                (.+?)                # anything, followed by:&#xA;                (?:&#xA;                 (?&lt;=\n)(?=%|\#\#) | # an eval or comment line&#xA;                 (?=\#\*) |          # multiline comment&#xA;                 (?=&lt;/?%) |          # a python block&#xA;                                     # call start or end&#xA;                 (?=\$\{) |          # a substitution&#xA;                 (?&lt;=\n)(?=\s*%) |&#xA;                                     # - don&#39;t consume&#xA;                 (\\\n) |            # an escaped newline&#xA;                 \Z                  # end of string&#xA;                )&#xA;            ">
+        <bygroups>
+          <token type="Other"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="ondeftags">
+      <rule pattern="&lt;%">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(?&lt;=&lt;%)(include|inherit|namespace|page)">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule>
+        <include state="tag"/>
+      </rule>
+    </state>
+    <state name="tag">
+      <rule pattern="((?:\w+)\s*=)(\s*)(&#34;.*?&#34;)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="/?\s*&gt;">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="attr">
+      <rule pattern="&#34;.*?&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="&#39;.*?&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\s&gt;]+">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/mason.xml 🔗

@@ -0,0 +1,89 @@
+<lexer>
+  <config>
+    <name>Mason</name>
+    <alias>mason</alias>
+    <filename>*.m</filename>
+    <filename>*.mhtml</filename>
+    <filename>*.mc</filename>
+    <filename>*.mi</filename>
+    <filename>autohandler</filename>
+    <filename>dhandler</filename>
+    <mime_type>application/x-mason</mime_type>
+    <priority>0.1</priority>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(&lt;%doc&gt;)(.*?)(&lt;/%doc&gt;)(?s)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="CommentMultiline"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;%(?:def|method))(\s*)(.*?)(&gt;)(.*?)(&lt;/%\2\s*&gt;)(?s)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="NameTag"/>
+          <usingself state="root"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;%\w+)(.*?)(&gt;)(.*?)(&lt;/%\2\s*&gt;)(?s)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="NameFunction"/>
+          <token type="NameTag"/>
+          <using lexer="Perl"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;&amp;[^|])(.*?)(,.*?)?(&amp;&gt;)(?s)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="NameFunction"/>
+          <using lexer="Perl"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;&amp;\|)(.*?)(,.*?)?(&amp;&gt;)(?s)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="NameFunction"/>
+          <using lexer="Perl"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&lt;/&amp;&gt;">
+        <token type="NameTag"/>
+      </rule>
+      <rule pattern="(&lt;%!?)(.*?)(%&gt;)(?s)">
+        <bygroups>
+          <token type="NameTag"/>
+          <using lexer="Perl"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?&lt;=^)#[^\n]*(\n|\Z)">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(?&lt;=^)(%)([^\n]*)(\n|\Z)">
+        <bygroups>
+          <token type="NameTag"/>
+          <using lexer="Perl"/>
+          <token type="Other"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?sx)&#xA;                 (.+?)               # anything, followed by:&#xA;                 (?:&#xA;                  (?&lt;=\n)(?=[%#]) |  # an eval or comment line&#xA;                  (?=&lt;/?[%&amp;]) |      # a substitution or block or&#xA;                                     # call start or end&#xA;                                     # - don&#39;t consume&#xA;                  (\\\n) |           # an escaped newline&#xA;                  \Z                 # end of string&#xA;                 )">
+        <bygroups>
+          <using lexer="HTML"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/materialize_sql_dialect.xml 🔗

@@ -0,0 +1,155 @@
+<lexer>
+  <config>
+    <name>Materialize SQL dialect</name>
+    <mime_type>text/x-materializesql</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+    <alias>materialize</alias>
+    <alias>mzsql</alias>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text" />
+      </rule>
+      <rule pattern="--.*\n?">
+        <token type="CommentSingle" />
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline" />
+        <push state="multiline-comments" />
+      </rule>
+      <rule pattern="(bigint|bigserial|bit|bit\s+varying|bool|boolean|box|bytea|char|character|character\s+varying|cidr|circle|date|decimal|double\s+precision|float4|float8|inet|int|int2|int4|int8|integer|interval|json|jsonb|line|lseg|macaddr|money|numeric|path|pg_lsn|point|polygon|real|serial|serial2|serial4|serial8|smallint|smallserial|text|time|timestamp|timestamptz|timetz|tsquery|tsvector|txid_snapshot|uuid|varbit|varchar|with\s+time\s+zone|without\s+time\s+zone|xml|anyarray|anyelement|anyenum|anynonarray|anyrange|cstring|fdw_handler|internal|language_handler|opaque|record|void)\b">
+        <token type="NameBuiltin" />
+      </rule>
+      <rule pattern="(?s)(DO)(\s+)(?:(LANGUAGE)?(\s+)('?)(\w+)?('?)(\s+))?(\$)([^$]*)(\$)(.*?)(\$)(\10)(\$)">
+        <usingbygroup>
+          <sublexer_name_group>6</sublexer_name_group>
+          <code_group>12</code_group>
+          <emitters>
+            <token type="Keyword" />
+            <token type="Text" />
+            <token type="Keyword" />
+            <token type="Text" />
+            <token type="LiteralStringSingle" />
+            <token type="LiteralStringSingle" />
+            <token type="LiteralStringSingle" />
+            <token type="Text" />
+            <token type="LiteralStringHeredoc" />
+            <token type="LiteralStringHeredoc" />
+            <token type="LiteralStringHeredoc" />
+            <token type="LiteralStringHeredoc" />
+            <token type="LiteralStringHeredoc" />
+            <token type="LiteralStringHeredoc" />
+            <token type="LiteralStringHeredoc" />
+          </emitters>
+        </usingbygroup>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/mathematica.xml 🔗

@@ -0,0 +1,60 @@
+<lexer>
+  <config>
+    <name>Mathematica</name>
+    <alias>mathematica</alias>
+    <alias>mma</alias>
+    <alias>nb</alias>
+    <filename>*.cdf</filename>
+    <filename>*.m</filename>
+    <filename>*.ma</filename>
+    <filename>*.mt</filename>
+    <filename>*.mx</filename>
+    <filename>*.nb</filename>
+    <filename>*.nbp</filename>
+    <filename>*.wl</filename>
+    <mime_type>application/mathematica</mime_type>
+    <mime_type>application/vnd.wolfram.mathematica</mime_type>
+    <mime_type>application/vnd.wolfram.mathematica.package</mime_type>
+    <mime_type>application/vnd.wolfram.cdf</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(?s)\(\*.*?\*\)">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="([a-zA-Z]+[A-Za-z0-9]*`)">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="([A-Za-z0-9]*_+[A-Za-z0-9]*)">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="#\d*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="([a-zA-Z]+[a-zA-Z0-9]*)">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="-?\d+\.\d*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="-?\d*\.\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="-?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="(!===|@@@|===|/;|:=|-&gt;|:&gt;|/\.|=\.|~~|&lt;=|@@|/@|&amp;&amp;|\|\||//|&lt;&gt;|;;|&gt;=|-|@|!|\^|/|\*|\?|\+|&amp;|&lt;|&gt;|=|\|)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(,|;|\(|\)|\[|\]|\{|\})">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&#34;.*?&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/matlab.xml 🔗

@@ -0,0 +1,114 @@
+<lexer>
+  <config>
+    <name>Matlab</name>
+    <alias>matlab</alias>
+    <filename>*.m</filename>
+    <mime_type>text/matlab</mime_type>
+  </config>
+  <rules>
+    <state name="blockcomment">
+      <rule pattern="^\s*%\}">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="^.*\n">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern=".">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="deffunc">
+      <rule pattern="(\s*)(?:(.+)(\s*)(=)(\s*))?(.+)(\()(.*)(\))(\s*)">
+        <bygroups>
+          <token type="TextWhitespace"/>
+          <token type="Text"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+          <token type="TextWhitespace"/>
+          <token type="NameFunction"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(\s*)([a-zA-Z_]\w*)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="^!.*">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="%\{\s*\n">
+        <token type="CommentMultiline"/>
+        <push state="blockcomment"/>
+      </rule>
+      <rule pattern="%.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="^\s*function">
+        <token type="Keyword"/>
+        <push state="deffunc"/>
+      </rule>
+      <rule pattern="(properties|persistent|enumerated|otherwise|continue|function|classdef|methods|elseif|events|switch|return|global|parfor|catch|break|while|else|spmd|case|try|end|for|if)\b">
+        <token type="Keyword"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/mcfunction.xml

@@ -0,0 +1,138 @@
+
+<lexer>
+  <config>
+    <name>MCFunction</name>
+    <alias>mcfunction</alias>
+    <alias>mcf</alias>
+    <filename>*.mcfunction</filename>
+    <mime_type>text/mcfunction</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule><include state="names"/></rule>
+      <rule><include state="comments"/></rule>
+      <rule><include state="literals"/></rule>
+      <rule><include state="whitespace"/></rule>
+      <rule><include state="property"/></rule>
+      <rule><include state="operators"/></rule>
+      <rule><include state="selectors"/></rule>
+    </state>
+    <state name="names">
+      <rule pattern="^(\s*)([a-z_]+)"><bygroups><token type="TextWhitespace"/><token type="NameBuiltin"/></bygroups></rule>
+      <rule pattern="(?&lt;=run)\s+[a-z_]+"><token type="NameBuiltin"/></rule>
+      <rule pattern="\b[0-9a-fA-F]+(?:-[0-9a-fA-F]+){4}\b"><token type="NameVariable"/></rule>
+      <rule><include state="resource-name"/></rule>
+      <rule pattern="[A-Za-z_][\w.#%$]+"><token type="KeywordConstant"/></rule>
+      <rule pattern="[#%$][\w.#%$]+"><token type="NameVariableMagic"/></rule>
+    </state>
+    <state name="resource-name">
+      <rule pattern="#?[a-z_][a-z_.-]*:[a-z0-9_./-]+"><token type="NameFunction"/></rule>
+      <rule pattern="#?[a-z0-9_\.\-]+\/[a-z0-9_\.\-\/]+"><token type="NameFunction"/></rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+    </state>
+    <state name="comments">
+      <rule pattern="^\s*(#[&gt;!])"><token type="CommentMultiline"/><push state="comments.block" state="comments.block.emphasized"/></rule>
+      <rule pattern="#.*$"><token type="CommentSingle"/></rule>
+    </state>
+    <state name="comments.block">
+      <rule pattern="^\s*#[&gt;!]"><token type="CommentMultiline"/><push state="comments.block.emphasized"/></rule>
+      <rule pattern="^\s*#"><token type="CommentMultiline"/><push state="comments.block.normal"/></rule>
+      <rule><pop depth="1"/></rule>
+    </state>
+    <state name="comments.block.normal">
+      <rule><include state="comments.block.special"/></rule>
+      <rule pattern="\S+"><token type="CommentMultiline"/></rule>
+      <rule pattern="\n"><token type="Text"/><pop depth="1"/></rule>
+      <rule><include state="whitespace"/></rule>
+    </state>
+    <state name="comments.block.emphasized">
+      <rule><include state="comments.block.special"/></rule>
+      <rule pattern="\S+"><token type="LiteralStringDoc"/></rule>
+      <rule pattern="\n"><token type="Text"/><pop depth="1"/></rule>
+      <rule><include state="whitespace"/></rule>
+    </state>
+    <state name="comments.block.special">
+      <rule pattern="@\S+"><token type="NameDecorator"/></rule>
+      <rule><include state="resource-name"/></rule>
+      <rule pattern="[#%$][\w.#%$]+"><token type="NameVariableMagic"/></rule>
+    </state>
+    <state name="operators">
+      <rule pattern="[\-~%^?!+*&lt;&gt;\\/|&amp;=.]"><token type="Operator"/></rule>
+    </state>
+    <state name="literals">
+      <rule pattern="\.\."><token type="Literal"/></rule>
+      <rule pattern="(true|false)"><token type="KeywordPseudo"/></rule>
+      <rule pattern="[A-Za-z_]+"><token type="NameVariableClass"/></rule>
+      <rule pattern="[0-7]b"><token type="LiteralNumberByte"/></rule>
+      <rule pattern="[+-]?\d*\.?\d+([eE]?[+-]?\d+)?[df]?\b"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="[+-]?\d+\b"><token type="LiteralNumberInteger"/></rule>
+      <rule pattern="&quot;"><token type="LiteralStringDouble"/><push state="literals.string-double"/></rule>
+      <rule pattern="&#x27;"><token type="LiteralStringSingle"/><push state="literals.string-single"/></rule>
+    </state>
+    <state name="literals.string-double">
+      <rule pattern="\\."><token type="LiteralStringEscape"/></rule>
+      <rule pattern="[^\\&quot;\n]+"><token type="LiteralStringDouble"/></rule>
+      <rule pattern="&quot;"><token type="LiteralStringDouble"/><pop depth="1"/></rule>
+    </state>
+    <state name="literals.string-single">
+      <rule pattern="\\."><token type="LiteralStringEscape"/></rule>
+      <rule pattern="[^\\&#x27;\n]+"><token type="LiteralStringSingle"/></rule>
+      <rule pattern="&#x27;"><token type="LiteralStringSingle"/><pop depth="1"/></rule>
+    </state>
+    <state name="selectors">
+      <rule pattern="@[a-z]"><token type="NameVariable"/></rule>
+    </state>
+    <state name="property">
+      <rule pattern="\{"><token type="Punctuation"/><push state="property.curly" state="property.key"/></rule>
+      <rule pattern="\["><token type="Punctuation"/><push state="property.square" state="property.key"/></rule>
+    </state>
+    <state name="property.curly">
+      <rule><include state="whitespace"/></rule>
+      <rule><include state="property"/></rule>
+      <rule pattern="\}"><token type="Punctuation"/><pop depth="1"/></rule>
+    </state>
+    <state name="property.square">
+      <rule><include state="whitespace"/></rule>
+      <rule><include state="property"/></rule>
+      <rule pattern="\]"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule pattern=","><token type="Punctuation"/></rule>
+    </state>
+    <state name="property.key">
+      <rule><include state="whitespace"/></rule>
+      <rule pattern="#?[a-z_][a-z_\.\-]*\:[a-z0-9_\.\-/]+(?=\s*\=)"><token type="NameAttribute"/><push state="property.delimiter"/></rule>
+      <rule pattern="#?[a-z_][a-z0-9_\.\-/]+"><token type="NameAttribute"/><push state="property.delimiter"/></rule>
+      <rule pattern="[A-Za-z_\-\+]+"><token type="NameAttribute"/><push state="property.delimiter"/></rule>
+      <rule pattern="&quot;"><token type="NameAttribute"/><push state="property.delimiter"/></rule>
+      <rule pattern="&#x27;"><token type="NameAttribute"/><push state="property.delimiter"/></rule>
+      <rule pattern="-?\d+"><token type="LiteralNumberInteger"/><push state="property.delimiter"/></rule>
+      <rule><pop depth="1"/></rule>
+    </state>
+    <state name="property.key.string-double">
+      <rule pattern="\\."><token type="LiteralStringEscape"/></rule>
+      <rule pattern="[^\\&quot;\n]+"><token type="NameAttribute"/></rule>
+      <rule pattern="&quot;"><token type="NameAttribute"/><pop depth="1"/></rule>
+    </state>
+    <state name="property.key.string-single">
+      <rule pattern="\\."><token type="LiteralStringEscape"/></rule>
+      <rule pattern="[^\\&#x27;\n]+"><token type="NameAttribute"/></rule>
+      <rule pattern="&#x27;"><token type="NameAttribute"/><pop depth="1"/></rule>
+    </state>
+    <state name="property.delimiter">
+      <rule><include state="whitespace"/></rule>
+      <rule pattern="[:=]!?"><token type="Punctuation"/><push state="property.value"/></rule>
+      <rule pattern=","><token type="Punctuation"/></rule>
+      <rule><pop depth="1"/></rule>
+    </state>
+    <state name="property.value">
+      <rule><include state="whitespace"/></rule>
+      <rule pattern="#?[a-z_][a-z_\.\-]*\:[a-z0-9_\.\-/]+"><token type="NameTag"/></rule>
+      <rule pattern="#?[a-z_][a-z0-9_\.\-/]+"><token type="NameTag"/></rule>
+      <rule><include state="literals"/></rule>
+      <rule><include state="property"/></rule>
+      <rule><pop depth="1"/></rule>
+    </state>
+  </rules>
+</lexer>
+

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/meson.xml

@@ -0,0 +1,85 @@
+<lexer>
+  <config>
+    <name>Meson</name>
+    <alias>meson</alias>
+    <alias>meson.build</alias>
+    <filename>meson.build</filename>
+    <filename>meson_options.txt</filename>
+    <mime_type>text/x-meson</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="#.*?$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="&#39;&#39;&#39;.*&#39;&#39;&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="[1-9][0-9]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="0o[0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0x[a-fA-F0-9]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule>
+        <include state="string"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule>
+        <include state="expr"/>
+      </rule>
+      <rule pattern="[a-zA-Z_][a-zA-Z_0-9]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="[&#39;]{3}([&#39;]{0,2}([^\\&#39;]|\\(.|\n)))*[&#39;]{3}">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;.*?(?&lt;!\\)(\\\\)*?&#39;">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(endforeach|continue|foreach|break|endif|else|elif|if)\b">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="expr">
+      <rule pattern="(in|and|or|not)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="(\*=|/=|%=|\+=|-=|==|!=|\+|-|=)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[\[\]{}:().,?]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(false|true)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule>
+        <include state="builtins"/>
+      </rule>
+      <rule pattern="(target_machine|build_machine|host_machine|meson)\b">
+        <token type="NameVariableMagic"/>
+      </rule>
+    </state>
+    <state name="builtins">
+      <rule pattern="(?&lt;!\.)(add_project_link_arguments|add_global_link_arguments|add_project_arguments|add_global_arguments|include_directories|configuration_data|declare_dependency|install_headers|both_libraries|install_subdir|add_test_setup|configure_file|static_library|shared_library|custom_target|add_languages|shared_module|set_variable|get_variable|find_library|find_program|build_target|install_data|environment|is_disabler|run_command|subdir_done|install_man|is_variable|subproject|dependency|join_paths|get_option|executable|generator|benchmark|disabler|project|message|library|summary|vcs_tag|warning|assert|subdir|range|files|error|test|jar)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)import\b">
+        <token type="NameNamespace"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/metal.xml

@@ -0,0 +1,270 @@
+<lexer>
+  <config>
+    <name>Metal</name>
+    <alias>metal</alias>
+    <filename>*.metal</filename>
+    <mime_type>text/x-metal</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="function">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="macro">
+      <rule pattern="(include)(\s*(?:/[*].*?[*]/\s*)?)([^\n]+)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="CommentPreprocFile"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^/\n]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="/[*](.|\n)*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="/">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(?&lt;=\\)\n">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="if0">
+      <rule pattern="^\s*#if.*?(?&lt;!\\)\n">
+        <token type="CommentPreproc"/>
+        <push/>
+      </rule>
+      <rule pattern="^\s*#el(?:se|if).*\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="^\s*#endif.*?(?&lt;!\\)\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".*?\n">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="statements">
+      <rule pattern="(namespace|constexpr|operator|template|using|this)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(enum)\b(\s+)(class)\b(\s*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="classname"/>
+      </rule>
+      <rule pattern="(class|struct|enum|union)\b(\s*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="classname"/>
+      </rule>
+      <rule pattern="\[\[.+\]\]">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[LlUu]*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+[fF])[fF]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[xX]([0-9A-Fa-f](&#39;?[0-9A-Fa-f]+)*)[LlUu]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0(&#39;?[0-7]+)+[LlUu]*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[Bb][01](&#39;?[01]+)*[LlUu]*">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="[0-9](&#39;?[0-9]+)*[LlUu]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="Error"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[()\[\],.]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(continue|typedef|sizeof|extern|static|switch|struct|return|union|const|break|while|enum|else|case|for|do|if)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(bool|float|half|long|ptrdiff_t|size_t|unsigned|u?char|u?int((8|16|32|64)_t)?|u?short)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(bool|float|half|u?(char|int|long|short))(2|3|4)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="packed_(float|half|long|u?(char|int|short))(2|3|4)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(float|half)(2|3|4)x(2|3|4)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="atomic_u?int\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(rg?(8|16)(u|s)norm|rgba(8|16)(u|s)norm|srgba8unorm|rgb10a2|rg11b10f|rgb9e5)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(array|depth(2d|cube)(_array)?|depth2d_ms(_array)?|sampler|texture_buffer|texture(1|2)d(_array)?|texture2d_ms(_array)?|texture3d|texturecube(_array)?|uniform|visible_function_table)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(true|false|NULL)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(threadgroup_imageblock|threadgroup|constant|ray_data|device|thread)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*)(\s*)(:)(?!:)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="(fragment|kernel|vertex)?((?:[\w*\s])+?(?:\s|[*]))([a-zA-Z_]\w*)(\s*\([^;]*?\))([^;{]*)(\{)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <usingself state="root"/>
+          <usingself state="root"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="function"/>
+      </rule>
+      <rule pattern="(fragment|kernel|vertex)?((?:[\w*\s])+?(?:\s|[*]))([a-zA-Z_]\w*)(\s*\([^;]*?\))([^;]*)(;)">
+        <bygroups>
+          <token type="Keyword"/>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <usingself state="root"/>
+          <usingself state="root"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <push state="statement"/>
+      </rule>
+    </state>
+    <state name="classname">
+      <rule pattern="(\[\[.+\]\])(\s*)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\s*(?=[&gt;{])">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="^#if\s+0">
+        <token type="CommentPreproc"/>
+        <push state="if0"/>
+      </rule>
+      <rule pattern="^#">
+        <token type="CommentPreproc"/>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#if\s+0)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <push state="if0"/>
+      </rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//(\n|[\w\W]*?[^\\]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="statement">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule pattern="[{]">
+        <token type="Punctuation"/>
+        <push state="root"/>
+      </rule>
+      <rule pattern="[;}]">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/minizinc.xml

@@ -0,0 +1,82 @@
+<lexer>
+  <config>
+    <name>MiniZinc</name>
+    <alias>minizinc</alias>
+    <alias>MZN</alias>
+    <alias>mzn</alias>
+    <filename>*.mzn</filename>
+    <filename>*.dzn</filename>
+    <filename>*.fzn</filename>
+    <mime_type>text/minizinc</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\%(.*?)\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*](.|\n)*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\b(annotation|constraint|predicate|minimize|function|maximize|satisfy|include|record|output|solve|test|list|type|ann|par|any|var|op|of)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(string|tuple|float|array|bool|enum|int|set)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\b(forall|where|endif|then|else|for|if)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(array_intersect|index_set_2of3|index_set_1of3|index_set_3of3|index_set_1of2|index_set_2of2|array_union|show_float|dom_array|int2float|set2array|index_set|dom_size|lb_array|is_fixed|ub_array|bool2int|show_int|array4d|array2d|array1d|array5d|array6d|array3d|product|length|assert|concat|trace|acosh|round|abort|log10|floor|sinh|tanh|atan|sqrt|asin|show|log2|card|ceil|cosh|join|pow|cos|max|log|exp|dom|sin|abs|fix|sum|tan|min|lb|ln|ub)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(not|&lt;-&gt;|-&gt;|&lt;-|\\/|xor|/\\)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(&lt;|&gt;|&lt;=|&gt;=|==|=|!=)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(\+|-|\*|/|div|mod)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\b(intersect|superset|symdiff|subset|union|diff|in)\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(\\|\.\.|\+\+)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[|()\[\]{},:;]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(true|false)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="([+-]?)\d+(\.(?!\.)\d*)?([eE][-+]?\d+)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="::\s*([^\W\d]\w*)(\s*\([^\)]*\))?">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="\b([^\W\d]\w*)\b(\()">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^\W\d]\w*">
+        <token type="NameOther"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/mlir.xml

@@ -0,0 +1,73 @@
+<lexer>
+  <config>
+    <name>MLIR</name>
+    <alias>mlir</alias>
+    <filename>*.mlir</filename>
+    <mime_type>text/x-mlir</mime_type>
+  </config>
+  <rules>
+    <state name="whitespace">
+      <rule pattern="(\n|\s)+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="keyword">
+      <rule pattern="(constant|return)">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(memref|tensor|vector|func|loc)">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="bf16|f16|f32|f64|index">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="i[1-9]\d*">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="c?&#34;[^&#34;]*?&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\^([-a-zA-Z$._][\w\-$.0-9]*)\s*">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="([\w\d_$.]+)\s*=">
+        <token type="NameLabel"/>
+      </rule>
+      <rule>
+        <include state="keyword"/>
+      </rule>
+      <rule pattern="-&gt;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="@([\w_][\w\d_$.]*)">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="[%#][\w\d_$.]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="([1-9?][\d?]*\s*x)+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="0[xX][a-fA-F0-9]+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="-?\d+(?:[.]\d+)?(?:[eE][-+]?\d+(?:[.]\d+)?)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[=&lt;&gt;{}\[\]()*.,!:]|x\b">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[\w\d]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/modula-2.xml

@@ -0,0 +1,245 @@
+<lexer>
+  <config>
+    <name>Modula-2</name>
+    <alias>modula2</alias>
+    <alias>m2</alias>
+    <filename>*.def</filename>
+    <filename>*.mod</filename>
+    <mime_type>text/x-modula2</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="dialecttags">
+      <rule pattern="\(\*!m2pim\*\)">
+        <token type="CommentSpecial"/>
+      </rule>
+      <rule pattern="\(\*!m2iso\*\)">
+        <token type="CommentSpecial"/>
+      </rule>
+      <rule pattern="\(\*!m2r10\*\)">
+        <token type="CommentSpecial"/>
+      </rule>
+      <rule pattern="\(\*!objm2\*\)">
+        <token type="CommentSpecial"/>
+      </rule>
+      <rule pattern="\(\*!m2iso\+aglet\*\)">
+        <token type="CommentSpecial"/>
+      </rule>
+      <rule pattern="\(\*!m2pim\+gm2\*\)">
+        <token type="CommentSpecial"/>
+      </rule>
+      <rule pattern="\(\*!m2iso\+p1\*\)">
+        <token type="CommentSpecial"/>
+      </rule>
+      <rule pattern="\(\*!m2iso\+xds\*\)">
+        <token type="CommentSpecial"/>
+      </rule>
+    </state>
+    <state name="unigraph_operators">
+      <rule pattern="[+-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[*/]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[=#&lt;&gt;]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\^">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="@">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&amp;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="~">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="`">
+        <token type="Operator"/>
+      </rule>
+    </state>
+    <state name="string_literals">
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="identifiers">
+      <rule pattern="([a-zA-Z_$][\w$]*)">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="pragmas">
+      <rule pattern="&lt;\*.*?\*&gt;">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\(\*\$.*?\*\)">
+        <token type="CommentPreproc"/>
+      </rule>
+    </state>
+    <state name="comments">
+      <rule pattern="^//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\(\*([^$].*?)\*\)">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/\*(.*?)\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\n+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="suffixed_number_literals">
+      <rule pattern="[0-7]+B">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="[0-7]+C">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="[0-9A-F]+H">
+        <token type="LiteralNumberHex"/>
+      </rule>
+    </state>
+    <state name="plain_number_literals">
+      <rule pattern="[0-9]+(\&#39;[0-9]+)*\.[0-9]+(\&#39;[0-9]+)*[eE][+-]?[0-9]+(\&#39;[0-9]+)*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9]+(\&#39;[0-9]+)*\.[0-9]+(\&#39;[0-9]+)*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9]+(\&#39;[0-9]+)*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="digraph_punctuation">
+      <rule pattern="\.\.">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&lt;&lt;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&gt;&gt;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="-&gt;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\|#">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="##">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\|\*">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="unigraph_punctuation">
+      <rule pattern="[()\[\]{},.:;|]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="!">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\?">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="dialecttags"/>
+      </rule>
+      <rule>
+        <include state="pragmas"/>
+      </rule>
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule>
+        <include state="identifiers"/>
+      </rule>
+      <rule>
+        <include state="suffixed_number_literals"/>
+      </rule>
+      <rule>
+        <include state="prefixed_number_literals"/>
+      </rule>
+      <rule>
+        <include state="plain_number_literals"/>
+      </rule>
+      <rule>
+        <include state="string_literals"/>
+      </rule>
+      <rule>
+        <include state="digraph_punctuation"/>
+      </rule>
+      <rule>
+        <include state="digraph_operators"/>
+      </rule>
+      <rule>
+        <include state="unigraph_punctuation"/>
+      </rule>
+      <rule>
+        <include state="unigraph_operators"/>
+      </rule>
+    </state>
+    <state name="prefixed_number_literals">
+      <rule pattern="0b[01]+(\&#39;[01]+)*">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0[ux][0-9A-F]+(\&#39;[0-9A-F]+)*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+    </state>
+    <state name="digraph_operators">
+      <rule pattern="\*\.">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\+&gt;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&lt;&gt;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&lt;=">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&gt;=">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="==">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="::">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern=":=">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\+\+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="--">
+        <token type="Operator"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/monkeyc.xml

@@ -0,0 +1,153 @@
+<lexer>
+  <config>
+    <name>MonkeyC</name>
+    <alias>monkeyc</alias>
+    <filename>*.mc</filename>
+    <mime_type>text/x-monkeyc</mime_type>
+  </config>
+  <rules>
+    <state name="class">
+      <rule pattern="([a-zA-Z_][\w_\.]*)(?:(\s+)(extends)(\s+)([a-zA-Z_][\w_\.]*))?">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="function">
+      <rule pattern="initialize">
+        <token type="NameFunctionMagic"/>
+      </rule>
+      <rule pattern="[a-zA-Z_][\w_\.]*">
+        <token type="NameFunction"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="module">
+      <rule pattern="[a-zA-Z_][\w_\.]*">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//(\n|[\w\W]*?[^\\]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern=":[a-zA-Z_][\w_\.]*">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="[{}\[\]\(\),;:\.]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[&amp;~\|\^!+\-*\/%=?]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="=&gt;|[+-]=|&amp;&amp;|\|\||&gt;&gt;|&lt;&lt;|[&lt;&gt;]=?|[!=]=">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\b(and|or|instanceof|has|extends|new)">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="(false|null|true|NaN)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(using)((?:\s|\\\\s)+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="import"/>
+      </rule>
+      <rule pattern="(class)((?:\s|\\\\s)+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="class"/>
+      </rule>
+      <rule pattern="(function)((?:\s|\\\\s)+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="function"/>
+      </rule>
+      <rule pattern="(module)((?:\s|\\\\s)+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="module"/>
+      </rule>
+      <rule pattern="\b(if|else|for|switch|case|while|break|continue|default|do|try|catch|finally|return|throw|extends|function)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(const|enum|hidden|public|protected|private|static)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\bvar\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="\b(Activity(Monitor|Recording)?|Ant(Plus)?|Application|Attention|Background|Communications|Cryptography|FitContributor|Graphics|Gregorian|Lang|Math|Media|Persisted(Content|Locations)|Position|Properties|Sensor(History|Logging)?|Storage|StringUtil|System|Test|Time(r)?|Toybox|UserProfile|WatchUi|Rez|Drawables|Strings|Fonts|method)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="\b(me|self|\$)\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="-?(0x[0-9a-fA-F]+l?)">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="-?([0-9]+(\.[0-9]+[df]?|[df]))\b">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="-?([0-9]+l?)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="import">
+      <rule pattern="([a-zA-Z_][\w_\.]*)(?:(\s+)(as)(\s+)([a-zA-Z_][\w_]*))?">
+        <bygroups>
+          <token type="NameNamespace"/>
+          <token type="Text"/>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/morrowindscript.xml

@@ -0,0 +1,90 @@
+<lexer>
+  <config>
+    <name>MorrowindScript</name>
+    <alias>morrowind</alias>
+    <alias>mwscript</alias>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=";.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="([&#34;&#39;])(?:(?=(\\?))\2.)*?\1">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[0-9]+\.[0-9]*(?!\.)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule>
+        <include state="types"/>
+      </rule>
+      <rule>
+        <include state="builtins"/>
+      </rule>
+      <rule>
+        <include state="punct"/>
+      </rule>
+      <rule>
+        <include state="operators"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\S+\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[a-zA-Z0-9_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(?i)(begin|if|else|elseif|endif|while|endwhile|return|to)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(?i)(end)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(?i)(end)\w+.*$">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[\w+]-&gt;[\w+]">
+        <token type="Operator"/>
+      </rule>
+    </state>
+    <state name="builtins">

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/myghty.xml

@@ -0,0 +1,77 @@
+<lexer>
+  <config>
+    <name>Myghty</name>
+    <alias>myghty</alias>
+    <filename>*.myt</filename>
+    <filename>autodelegate</filename>
+    <mime_type>application/x-myghty</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(&lt;%(?:def|method))(\s*)(.*?)(&gt;)(.*?)(&lt;/%\2\s*&gt;)(?s)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+          <token type="NameTag"/>
+          <usingself state="root"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;%\w+)(.*?)(&gt;)(.*?)(&lt;/%\2\s*&gt;)(?s)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="NameFunction"/>
+          <token type="NameTag"/>
+          <using lexer="Python2"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;&amp;[^|])(.*?)(,.*?)?(&amp;&gt;)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="NameFunction"/>
+          <using lexer="Python2"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;&amp;\|)(.*?)(,.*?)?(&amp;&gt;)(?s)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="NameFunction"/>
+          <using lexer="Python2"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&lt;/&amp;&gt;">
+        <token type="NameTag"/>
+      </rule>
+      <rule pattern="(&lt;%!?)(.*?)(%&gt;)(?s)">
+        <bygroups>
+          <token type="NameTag"/>
+          <using lexer="Python2"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?&lt;=^)#[^\n]*(\n|\Z)">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(?&lt;=^)(%)([^\n]*)(\n|\Z)">
+        <bygroups>
+          <token type="NameTag"/>
+          <using lexer="Python2"/>
+          <token type="Other"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?sx)&#xA;                 (.+?)               # anything, followed by:&#xA;                 (?:&#xA;                  (?&lt;=\n)(?=[%#]) |  # an eval or comment line&#xA;                  (?=&lt;/?[%&amp;]) |      # a substitution or block or&#xA;                                     # call start or end&#xA;                                     # - don&#39;t consume&#xA;                  (\\\n) |           # an escaped newline&#xA;                  \Z                 # end of string&#xA;                 )">
+        <bygroups>
+          <token type="Other"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/mysql.xml

@@ -0,0 +1,121 @@
+<lexer>
+  <config>
+    <name>MySQL</name>
+    <alias>mysql</alias>
+    <alias>mariadb</alias>
+    <filename>*.sql</filename>
+    <mime_type>text/x-mysql</mime_type>
+    <mime_type>text/x-mariadb</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="[^&#39;]+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="&#39;&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="double-string">
+      <rule pattern="[^&#34;]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="(#|--\s+).*\n?">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="multiline-comments"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[0-9]*\.[0-9]+(e[+-][0-9]+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="((?:_[a-z0-9]+)?)(&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <push state="string"/>
+      </rule>
+      <rule pattern="((?:_[a-z0-9]+)?)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <push state="double-string"/>
+      </rule>
+      <rule pattern="[+*/&lt;&gt;=~!@#%^&amp;|`?-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\b(tinyint|smallint|mediumint|int|integer|bigint|date|datetime|time|bit|bool|tinytext|mediumtext|longtext|text|tinyblob|mediumblob|longblob|blob|float|double|double\s+precision|real|numeric|dec|decimal|timestamp|year|char|varchar|varbinary|varcharacter|enum|set)(\b\s*)(\()?">
+        <bygroups>
+          <token type="KeywordType"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/nasm.xml

@@ -0,0 +1,126 @@
+<lexer>
+  <config>
+    <name>NASM</name>
+    <alias>nasm</alias>
+    <filename>*.asm</filename>
+    <filename>*.ASM</filename>
+    <filename>*.nasm</filename>
+    <mime_type>text/x-nasm</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <priority>1.0</priority> <!-- TASM uses the same file endings, but TASM is not as common as NASM, so we prioritize NASM higher by default. -->
+  </config>
+  <rules>
+    <state name="punctuation">
+      <rule pattern="[,():\[\]]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[&amp;|^&lt;&gt;+*/%~-]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[$]+">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="seg|wrt|strict">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="byte|[dq]?word">
+        <token type="KeywordType"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="^\s*%">
+        <token type="CommentPreproc"/>
+        <push state="preproc"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="[a-z$._?][\w$.?#@~]*:">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="([a-z$._?][\w$.?#@~]*)(\s+)(equ)">
+        <bygroups>
+          <token type="NameConstant"/>
+          <token type="KeywordDeclaration"/>
+          <token type="KeywordDeclaration"/>
+        </bygroups>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="BITS|USE16|USE32|SECTION|SEGMENT|ABSOLUTE|EXTERN|GLOBAL|ORG|ALIGN|STRUC|ENDSTRUC|COMMON|CPU|GROUP|UPPERCASE|IMPORT|EXPORT|LIBRARY|MODULE">
+        <token type="Keyword"/>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="(?:res|d)[bwdqt]|times">
+        <token type="KeywordDeclaration"/>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="[a-z$._?][\w$.?#@~]*">
+        <token type="NameFunction"/>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="[\r\n]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="instruction-args">
+      <rule pattern="&#34;(\\&#34;|[^&#34;\n])*&#34;|&#39;(\\&#39;|[^&#39;\n])*&#39;|`(\\`|[^`\n])*`">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(?:0x[0-9a-f]+|$0[0-9a-f]*|[0-9]+[0-9a-f]*h)">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-7]+q">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="[01]+b">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="[0-9]+\.e?[0-9]+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule>
+        <include state="punctuation"/>
+      </rule>
+      <rule pattern="r[0-9][0-5]?[bwd]|[a-d][lh]|[er]?[a-d]x|[er]?[sb]p|[er]?[sd]i|[c-gs]s|st[0-7]|mm[0-7]|cr[0-4]|dr[0-367]|tr[3-7]">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[a-z$._?][\w$.?#@~]*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[\r\n]+">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+    </state>
+    <state name="preproc">
+      <rule pattern="[^;\n]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern=";.*?\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[ \t]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=";.*">
+        <token type="CommentSingle"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/natural.xml

@@ -0,0 +1,143 @@
+<lexer>
+  <config>
+    <name>Natural</name>
+    <alias>natural</alias>
+    <filename>*.NSN</filename>
+    <filename>*.NSP</filename>
+    <filename>*.NSS</filename>
+    <filename>*.NSH</filename>
+    <filename>*.NSG</filename>
+    <filename>*.NSL</filename>
+    <filename>*.NSA</filename>
+    <filename>*.NSM</filename>
+    <filename>*.NSC</filename>
+    <filename>*.NS7</filename>
+    <mime_type>text/x-natural</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="common">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="^\*.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*$">
+        <token type="CommentSingle"/>
+      </rule>
+    </state>
+    <state name="variable-names">
+      <rule pattern="[#+]?[\w\-\d]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\([a-zA-z]\d*\)">
+        <token type="Other"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="common"/>
+      </rule>
+      <rule pattern="(?:END-DEFINE|END-IF|END-FOR|END-SUBROUTINE|END-ERROR|END|IGNORE)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(?:INIT|CONST)\s*&lt;\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(FORM)(\s+)(\w+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(DEFINE)(\s+)(SUBROUTINE)(\s+)([#+]?[\w\-\d]+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(PERFORM)(\s+)([#+]?[\w\-\d]+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(METHOD)(\s+)([\w~]+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s+)([\w\-]+)([=\-]&gt;)([\w\-~]+)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameVariable"/>
+          <token type="Operator"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?&lt;=(=|-)&gt;)([\w\-~]+)(?=\()">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="(TEXT)(-)(\d{3})">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Punctuation"/>
+          <token type="LiteralNumberInteger"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(TEXT)(-)(\w{3})">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Punctuation"/>
+          <token type="NameVariable"/>
+        </bygroups>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ndisasm.xml

@@ -0,0 +1,123 @@
+<lexer>
+  <config>
+    <name>NDISASM</name>
+    <alias>ndisasm</alias>
+    <mime_type>text/x-disasm</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <priority>0.5</priority> <!-- Lower than NASM -->
+  </config>
+  <rules>
+    <state name="root">
+        <rule pattern="^[0-9A-Za-z]+">
+            <token type="CommentSpecial"/>
+            <push state="offset"/>
+        </rule>
+    </state>
+    <state name="offset">
+        <rule pattern="[0-9A-Za-z]+">
+            <token type="CommentSpecial"/>
+            <push state="assembly"/>
+        </rule>
+        <rule>
+            <include state="whitespace"/>
+        </rule>
+    </state>
+    <state name="punctuation">
+      <rule pattern="[,():\[\]]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[&amp;|^&lt;&gt;+*/%~-]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[$]+">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="seg|wrt|strict">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="byte|[dq]?word">
+        <token type="KeywordType"/>
+      </rule>
+    </state>
+    <state name="assembly">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="[a-z$._?][\w$.?#@~]*:">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="([a-z$._?][\w$.?#@~]*)(\s+)(equ)">
+        <bygroups>
+          <token type="NameConstant"/>
+          <token type="KeywordDeclaration"/>
+          <token type="KeywordDeclaration"/>
+        </bygroups>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="BITS|USE16|USE32|SECTION|SEGMENT|ABSOLUTE|EXTERN|GLOBAL|ORG|ALIGN|STRUC|ENDSTRUC|COMMON|CPU|GROUP|UPPERCASE|IMPORT|EXPORT|LIBRARY|MODULE">
+        <token type="Keyword"/>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="(?:res|d)[bwdqt]|times">
+        <token type="KeywordDeclaration"/>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="[a-z$._?][\w$.?#@~]*">
+        <token type="NameFunction"/>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="[\r\n]+">
+        <token type="Text"/>
+        <pop depth="2"/>
+      </rule>
+    </state>
+    <state name="instruction-args">
+      <rule pattern="&#34;(\\&#34;|[^&#34;\n])*&#34;|&#39;(\\&#39;|[^&#39;\n])*&#39;|`(\\`|[^`\n])*`">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(?:0x[0-9a-f]+|$0[0-9a-f]*|[0-9]+[0-9a-f]*h)">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-7]+q">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="[01]+b">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="[0-9]+\.e?[0-9]+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule>
+        <include state="punctuation"/>
+      </rule>
+      <rule pattern="r[0-9][0-5]?[bwd]|[a-d][lh]|[er]?[a-d]x|[er]?[sb]p|[er]?[sd]i|[c-gs]s|st[0-7]|mm[0-7]|cr[0-4]|dr[0-367]|tr[3-7]">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[a-z$._?][\w$.?#@~]*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[\r\n]+">
+        <token type="Text"/>
+        <pop depth="3"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="2"/>
+      </rule>
+      <rule pattern="[ \t]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=";.*">
+        <token type="CommentSingle"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/newspeak.xml

@@ -0,0 +1,121 @@
+<lexer>
+  <config>
+    <name>Newspeak</name>
+    <alias>newspeak</alias>
+    <filename>*.ns2</filename>
+    <mime_type>text/x-newspeak</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\b(Newsqueak2)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="&#39;[^&#39;]*&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\b(class)(\s+)(\w+)(\s*)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(mixin|self|super|private|public|protected|nil|true|false)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(\w+\:)(\s*)([a-zA-Z_]\w+)">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="NameVariable"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\w+)(\s*)(=)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&lt;\w+&gt;">
+        <token type="CommentSpecial"/>
+      </rule>
+      <rule>
+        <include state="expressionstat"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+    </state>
+    <state name="expressionstat">
+      <rule pattern="(\d+\.\d*|\.\d+|\d+[fF])[fF]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern=":\w+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(\w+)(::)">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\w+:">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="\w+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\(|\)">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\[|\]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\{|\}">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(\^|\+|\/|~|\*|&lt;|&gt;|=|@|%|\||&amp;|\?|!|,|-|:)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\.|;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="literals"/>
+      </rule>
+    </state>
+    <state name="literals">
+      <rule pattern="\$.">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;[^&#39;]*&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="#&#39;[^&#39;]*&#39;">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="#\w+:?">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="#(\+|\/|~|\*|&lt;|&gt;|=|@|%|\||&amp;|\?|!|,|-)+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&#34;[^&#34;]*&#34;">
+        <token type="Comment"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/nginx_configuration_file.xml

@@ -0,0 +1,98 @@
+<lexer>
+  <config>
+    <name>Nginx configuration file</name>
+    <alias>nginx</alias>
+    <filename>nginx.conf</filename>
+    <mime_type>text/x-nginx-conf</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(include)(\s+)([^\s;]+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Name"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^\s;#]+">
+        <token type="Keyword"/>
+        <push state="stmt"/>
+      </rule>
+      <rule>
+        <include state="base"/>
+      </rule>
+    </state>
+    <state name="block">
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="2"/>
+      </rule>
+      <rule pattern="[^\s;#]+">
+        <token type="KeywordNamespace"/>
+        <push state="stmt"/>
+      </rule>
+      <rule>
+        <include state="base"/>
+      </rule>
+    </state>
+    <state name="stmt">
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push state="block"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="base"/>
+      </rule>
+    </state>
+    <state name="base">
+      <rule pattern="#.*\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="on|off">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="\$[^\s;#()]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="([a-z0-9.-]+)(:)([0-9]+)">
+        <bygroups>
+          <token type="Name"/>
+          <token type="Punctuation"/>
+          <token type="LiteralNumberInteger"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-z-]+/[a-z-+]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[0-9]+[km]?\b">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="(~)(\s*)([^\s{]+)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="LiteralStringRegex"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[:=~]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[^\s;#{}$]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="/[^\s;#]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[$;]">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/nim.xml

@@ -0,0 +1,211 @@
+<lexer>
+  <config>
+    <name>Nim</name>
+    <alias>nim</alias>
+    <alias>nimrod</alias>
+    <filename>*.nim</filename>
+    <filename>*.nimrod</filename>
+    <mime_type>text/x-nim</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="dqs">
+      <rule pattern="\\([\\abcefnrtvl&#34;\&#39;]|\n|x[a-f0-9]{2}|[0-9]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+    </state>
+    <state name="tdqs">
+      <rule pattern="&#34;&#34;&#34;(?!&#34;)">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+      <rule>
+        <include state="nl"/>
+      </rule>
+    </state>
+    <state name="funcname">
+      <rule pattern="((?![\d_])\w)(((?!_)\w)|(_(?!_)\w))*">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="`.+`">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="int-suffix">
+      <rule pattern="\&#39;(i|u)(32|64)">
+        <token type="LiteralNumberIntegerLong"/>
+      </rule>
+      <rule pattern="\&#39;(u|(i|u)(8|16))">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="float-suffix">
+      <rule pattern="\&#39;(f|d|f(32|64))">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="strings">
+      <rule pattern="(?&lt;!\$)\$(\d+|#|\w+)+">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="[^\\\&#39;&#34;$\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[\&#39;&#34;\\]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="nl">
+      <rule pattern="\n">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="chars">
+      <rule pattern="\\([\\abcefnrtvl&#34;\&#39;]|x[a-f0-9]{2}|[0-9]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringChar"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".">
+        <token type="LiteralStringChar"/>
+      </rule>
+    </state>
+    <state name="rdqs">
+      <rule pattern="&#34;(?!&#34;)">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="&#34;&#34;">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+    </state>
+    <state name="float-number">
+      <rule pattern="\.(?!\.)[0-9_]*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="e[+-]?[0-9][0-9_]*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="#\[[\s\S]*?\]#">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="##.*$">
+        <token type="LiteralStringDoc"/>
+      </rule>
+      <rule pattern="#.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="[*=&gt;&lt;+\-/@$~&amp;%!?|\\\[\]]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\.\.|\.|,|\[\.|\.\]|\{\.|\.\}|\(\.|\.\)|\{|\}|\(|\)|:|\^|`|;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(?:[\w]+)&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <push state="tdqs"/>
+      </rule>
+      <rule pattern="(?:[\w]+)&#34;">
+        <token type="LiteralString"/>
+        <push state="rdqs"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <push state="tdqs"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="dqs"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringChar"/>
+        <push state="chars"/>
+      </rule>
+      <rule pattern="(a_?n_?d_?|o_?r_?|n_?o_?t_?|x_?o_?r_?|s_?h_?l_?|s_?h_?r_?|d_?i_?v_?|m_?o_?d_?|i_?n_?|n_?o_?t_?i_?n_?|i_?s_?|i_?s_?n_?o_?t_?)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="(p_?r_?o_?c_?\s)(?![(\[\]])">
+        <token type="Keyword"/>
+        <push state="funcname"/>
+      </rule>
+      <rule pattern="(a_?d_?d_?r_?|a_?n_?d_?|a_?s_?|a_?s_?m_?|a_?t_?o_?m_?i_?c_?|b_?i_?n_?d_?|b_?l_?o_?c_?k_?|b_?r_?e_?a_?k_?|c_?a_?s_?e_?|c_?a_?s_?t_?|c_?o_?n_?c_?e_?p_?t_?|c_?o_?n_?s_?t_?|c_?o_?n_?t_?i_?n_?u_?e_?|c_?o_?n_?v_?e_?r_?t_?e_?r_?|d_?e_?f_?e_?r_?|d_?i_?s_?c_?a_?r_?d_?|d_?i_?s_?t_?i_?n_?c_?t_?|d_?i_?v_?|d_?o_?|e_?l_?i_?f_?|e_?l_?s_?e_?|e_?n_?d_?|e_?n_?u_?m_?|e_?x_?c_?e_?p_?t_?|e_?x_?p_?o_?r_?t_?|f_?i_?n_?a_?l_?l_?y_?|f_?o_?r_?|f_?u_?n_?c_?|i_?f_?|i_?n_?|y_?i_?e_?l_?d_?|i_?n_?t_?e_?r_?f_?a_?c_?e_?|i_?s_?|i_?s_?n_?o_?t_?|i_?t_?e_?r_?a_?t_?o_?r_?|l_?e_?t_?|m_?a_?c_?r_?o_?|m_?e_?t_?h_?o_?d_?|m_?i_?x_?i_?n_?|m_?o_?d_?|n_?o_?t_?|n_?o_?t_?i_?n_?|o_?b_?j_?e_?c_?t_?|o_?f_?|o_?r_?|o_?u_?t_?|p_?r_?o_?c_?|p_?t_?r_?|r_?a_?i_?s_?e_?|r_?e_?f_?|r_?e_?t_?u_?r_?n_?|s_?h_?a_?r_?e_?d_?|s_?h_?l_?|s_?h_?r_?|s_?t_?a_?t_?i_?c_?|t_?e_?m_?p_?l_?a_?t_?e_?|t_?r_?y_?|t_?u_?p_?l_?e_?|t_?y_?p_?e_?|w_?h_?e_?n_?|w_?h_?i_?l_?e_?|w_?i_?t_?h_?|w_?i_?t_?h_?o_?u_?t_?|x_?o_?r_?)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(f_?r_?o_?m_?|i_?m_?p_?o_?r_?t_?|i_?n_?c_?l_?u_?d_?e_?)\b">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern="(v_?a_?r)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(i_?n_?t_?|i_?n_?t_?8_?|i_?n_?t_?1_?6_?|i_?n_?t_?3_?2_?|i_?n_?t_?6_?4_?|f_?l_?o_?a_?t_?|f_?l_?o_?a_?t_?3_?2_?|f_?l_?o_?a_?t_?6_?4_?|b_?o_?o_?l_?|c_?h_?a_?r_?|r_?a_?n_?g_?e_?|a_?r_?r_?a_?y_?|s_?e_?q_?|s_?e_?t_?|s_?t_?r_?i_?n_?g_?)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(n_?i_?l_?|t_?r_?u_?e_?|f_?a_?l_?s_?e_?)\b">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="\b_\b">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\b((?![_\d])\w)(((?!_)\w)|(_(?!_)\w))*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[0-9][0-9_]*(?=([e.]|\&#39;(f|d|f(32|64))))">
+        <token type="LiteralNumberFloat"/>
+        <push state="float-suffix" state="float-number"/>
+      </rule>
+      <rule pattern="0x[a-f0-9][a-f0-9_]*">
+        <token type="LiteralNumberHex"/>
+        <push state="int-suffix"/>
+      </rule>
+      <rule pattern="0b[01][01_]*">
+        <token type="LiteralNumberBin"/>
+        <push state="int-suffix"/>
+      </rule>
+      <rule pattern="0o[0-7][0-7_]*">
+        <token type="LiteralNumberOct"/>
+        <push state="int-suffix"/>
+      </rule>
+      <rule pattern="[0-9][0-9_]*">
+        <token type="LiteralNumberInteger"/>
+        <push state="int-suffix"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=".+$">
+        <token type="Error"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/nix.xml

@@ -0,0 +1,258 @@
+<lexer>
+  <config>
+    <name>Nix</name>
+    <alias>nixos</alias>
+    <alias>nix</alias>
+    <filename>*.nix</filename>
+    <mime_type>text/x-nix</mime_type>
+  </config>
+  <rules>
+    <state name="space">
+      <rule pattern="[ \t\r\n]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="paren">
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="scope">
+      <rule pattern="}:">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="in(?![a-zA-Z0-9_&#39;-])">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\${">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpol"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+      <rule pattern="(=|\?|,)">
+        <token type="Operator"/>
+      </rule>
+    </state>
+    <state name="builtins">
+      <rule pattern="throw(?![a-zA-Z0-9_&#39;-])">
+        <token type="NameException"/>
+      </rule>
+      <rule pattern="(dependencyClosure|fetchTarball|filterSource|currentTime|removeAttrs|baseNameOf|derivation|toString|builtins|getAttr|hasAttr|getEnv|isNull|abort|dirOf|toXML|map)(?![a-zA-Z0-9_&#39;-])">
+        <token type="NameBuiltin"/>
+      </rule>
+    </state>
+    <state name="literals">
+      <rule pattern="(false|true|null)(?![a-zA-Z0-9_&#39;-])">
+        <token type="NameConstant"/>
+      </rule>
+      <rule>
+        <include state="uri"/>
+      </rule>
+      <rule>
+        <include state="path"/>
+      </rule>
+      <rule>
+        <include state="int"/>
+      </rule>
+      <rule>
+        <include state="float"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="import(?![a-zA-Z0-9_&#39;-])">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern="(inherit|assert|with|then|else|rec|if)(?![a-zA-Z0-9_&#39;-])">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="list">
+      <rule pattern="\]">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="operators">
+      <rule pattern=" [/-] ">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(\.)(\${)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="LiteralStringInterpol"/>
+        </bygroups>
+        <push state="interpol"/>
+      </rule>
+      <rule pattern="(\?)(\s*)(\${)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="LiteralStringInterpol"/>
+        </bygroups>
+        <push state="interpol"/>
+      </rule>
+      <rule pattern="(&amp;&amp;|&gt;=|&lt;=|\+\+|-&gt;|!=|=|\|\||//|==|@|!|\+|\?|&lt;|\.|&gt;|\*)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[;:]">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="\*/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".|\n">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="interpol">
+      <rule pattern="}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="path">
+      <rule pattern="[a-zA-Z0-9._+-]*(/[a-zA-Z0-9._+-]+)+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="~(/[a-zA-Z0-9._+-]+)+/?">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="&lt;[a-zA-Z0-9._+-]+(/[a-zA-Z0-9._+-]+)*&gt;">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="float">
+      <rule pattern="-?(([1-9][0-9]*\.[0-9]*)|(0?\.[0-9]+))([Ee][+-]?[0-9]+)?(?![a-zA-Z0-9_&#39;-])">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule>
+        <include state="builtins"/>
+      </rule>
+      <rule>
+        <include state="literals"/>
+      </rule>
+      <rule>
+        <include state="operators"/>
+      </rule>
+      <rule pattern="#.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="paren"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="Punctuation"/>
+        <push state="list"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="qstring"/>
+      </rule>
+      <rule pattern="&#39;&#39;">
+        <token type="LiteralStringSingle"/>
+        <push state="istring"/>
+      </rule>
+      <rule pattern="{">
+        <token type="Punctuation"/>
+        <push state="scope"/>
+      </rule>
+      <rule pattern="let(?![a-zA-Z0-9_&#39;-])">
+        <token type="Keyword"/>
+        <push state="scope"/>
+      </rule>
+      <rule>
+        <include state="id"/>
+      </rule>
+      <rule>
+        <include state="space"/>
+      </rule>
+    </state>
+    <state name="int">
+      <rule pattern="-?[0-9]+(?![a-zA-Z0-9_&#39;-])">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="uri">
+      <rule pattern="[a-zA-Z][a-zA-Z0-9+.-]*:[a-zA-Z0-9%/?:@&amp;=+$,_.!~*&#39;-]+">
+        <token type="LiteralStringDoc"/>
+      </rule>
+    </state>
+    <state name="qstring">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\${">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpol"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern=".|\n">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="istring">
+      <rule pattern="&#39;&#39;\$">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&#39;&#39;&#39;">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&#39;&#39;\\.">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&#39;&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\${">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpol"/>
+      </rule>
+      <rule pattern="\$.">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern=".|\n">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+    <state name="id">
+      <rule pattern="[a-zA-Z_][a-zA-Z0-9_&#39;-]*">
+        <token type="Name"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/nsis.xml

@@ -0,0 +1,59 @@
+<lexer>
+  <config>
+    <name>NSIS</name>
+    <alias>nsis</alias>
+    <alias>nsi</alias>
+    <alias>nsh</alias>
+    <filename>*.nsi</filename>
+    <filename>*.nsh</filename>
+    <mime_type>text/x-nsis</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="([;#].*)(\n)"><bygroups><token type="Comment"/><token type="TextWhitespace"/></bygroups></rule>
+      <rule pattern="&#x27;.*?&#x27;"><token type="LiteralStringSingle"/></rule>
+      <rule pattern="&quot;"><token type="LiteralStringDouble"/><push state="str_double"/></rule>
+      <rule pattern="`"><token type="LiteralStringBacktick"/><push state="str_backtick"/></rule>
+      <rule><include state="macro"/></rule>
+      <rule><include state="interpol"/></rule>
+      <rule><include state="basic"/></rule>
+      <rule pattern="\$\{[a-z_|][\w|]*\}"><token type="KeywordPseudo"/></rule>
+      <rule pattern="/[a-z_]\w*"><token type="NameAttribute"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="[\w.]+"><token type="Text"/></rule>
+    </state>
+    <state name="basic">
+      <rule pattern="(\n)(Function)(\s+)([._a-z][.\w]*)\b"><bygroups><token type="TextWhitespace"/><token type="Keyword"/><token type="TextWhitespace"/><token type="NameFunction"/></bygroups></rule>
+      <rule pattern="\b([_a-z]\w*)(::)([a-z][a-z0-9]*)\b"><bygroups><token type="KeywordNamespace"/><token type="Punctuation"/><token type="NameFunction"/></bygroups></rule>
+      <rule pattern="\b([_a-z]\w*)(:)"><bygroups><token type="NameLabel"/><token type="Punctuation"/></bygroups></rule>
+      <rule pattern="(\b[ULS]|\B)([!&lt;&gt;=]?=|\&lt;\&gt;?|\&gt;)\B"><token type="Operator"/></rule>
+      <rule pattern="[|+-]"><token type="Operator"/></rule>
+      <rule pattern="\\"><token type="Punctuation"/></rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/objective-c.xml

@@ -0,0 +1,510 @@
+<lexer>
+  <config>
+    <name>Objective-C</name>
+    <alias>objective-c</alias>
+    <alias>objectivec</alias>
+    <alias>obj-c</alias>
+    <alias>objc</alias>
+    <filename>*.m</filename>
+    <filename>*.h</filename>
+    <mime_type>text/x-objective-c</mime_type>
+  </config>
+  <rules>
+    <state name="macro">
+      <rule pattern="(include)(\s*(?:/[*].*?[*]/\s*)?)([^\n]+)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="CommentPreprocFile"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^/\n]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="/[*](.|\n)*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="/">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(?&lt;=\\)\n">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="literal_number">
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="literal_number_inner"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Literal"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="statement"/>
+      </rule>
+    </state>
+    <state name="if0">
+      <rule pattern="^\s*#if.*?(?&lt;!\\)\n">
+        <token type="CommentPreproc"/>
+        <push/>
+      </rule>
+      <rule pattern="^\s*#el(?:se|if).*\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="^\s*#endif.*?(?&lt;!\\)\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".*?\n">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="^([-+])(\s*)(\(.*?\))?(\s*)([a-zA-Z$_][\w$]*:?)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <usingself state="root"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+        <push state="method"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="((?:[\w*\s])+?(?:\s|[*]))([a-zA-Z_]\w*)(\s*\([^;]*?\))([^;{]*)(\{)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <usingself state="root"/>
+          <usingself state="root"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="function"/>
+      </rule>
+      <rule pattern="((?:[\w*\s])+?(?:\s|[*]))([a-zA-Z_]\w*)(\s*\([^;]*?\))([^;]*)(;)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <usingself state="root"/>
+          <usingself state="root"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <push state="statement"/>
+      </rule>
+    </state>
+    <state name="statements">
+      <rule pattern="@&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="@(YES|NO)">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="@&#39;(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\&#39;\n])&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="@(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[lL]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="@(\d+\.\d*|\.\d+|\d+[fF])[fF]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="@0x[0-9a-fA-F]+[Ll]?">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="@0[0-7]+[Ll]?">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="@\d+[Ll]?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="@\(">
+        <token type="Literal"/>
+        <push state="literal_number"/>
+      </rule>
+      <rule pattern="@\[">
+        <token type="Literal"/>
+        <push state="literal_array"/>
+      </rule>
+      <rule pattern="@\{">
+        <token type="Literal"/>
+        <push state="literal_dictionary"/>
+      </rule>
+      <rule pattern="(unsafe_unretained|__bridge_transfer|@autoreleasepool|__autoreleasing|@synchronized|@synthesize|@protected|@selector|@required|@optional|readwrite|@property|nonatomic|@finally|__bridge|@dynamic|__strong|readonly|@private|__block|@public|@encode|release|assign|retain|atomic|@throw|@catch|__weak|setter|getter|typeof|strong|inout|class|@try|@end|weak|copy|out|in)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(instancetype|IBOutlet|IBAction|unichar|Class|BOOL|IMP|SEL|id)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="@(true|false|YES|NO)\n">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(YES|NO|nil|self|super)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(Boolean|UInt8|SInt8|UInt16|SInt16|UInt32|SInt32)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(TRUE|FALSE)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(@interface|@implementation)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="#pop" state="oc_classname"/>
+      </rule>
+      <rule pattern="(@class|@protocol)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="#pop" state="oc_forward_classname"/>
+      </rule>
+      <rule pattern="@">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(L?)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralString"/>
+        </bygroups>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(L?)(&#39;)(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\&#39;\n])(&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringChar"/>
+          <token type="LiteralStringChar"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[LlUu]*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+[fF])[fF]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+[LlUu]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[0-7]+[LlUu]*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="\d+[LlUu]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="Error"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[()\[\],.]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(restricted|volatile|continue|register|default|typedef|struct|extern|switch|sizeof|static|return|union|while|const|break|goto|enum|else|case|auto|for|asm|if|do)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(bool|int|long|float|short|double|char|unsigned|signed|void)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(typename|__inline|restrict|_inline|thread|inline|naked)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(__m(128i|128d|128|64))\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="__(forceinline|identifier|unaligned|declspec|fastcall|finally|stdcall|wchar_t|assume|except|int32|cdecl|int16|leave|based|raise|int64|noop|int8|w64|try|asm)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(true|false|NULL)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*)(\s*)(:)(?!:)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="method">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\.\.\.">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(\(.*?\))(\s*)([a-zA-Z$_][\w$]*)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="Text"/>
+          <token type="NameVariable"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z$_][\w$]*:">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push state="function"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="literal_array">
+      <rule pattern="\[">
+        <token type="Punctuation"/>
+        <push state="literal_array_inner"/>
+      </rule>
+      <rule pattern="\]">
+        <token type="Literal"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="statement"/>
+      </rule>
+    </state>
+    <state name="oc_classname">
+      <rule pattern="([a-zA-Z$_][\w$]*)(\s*:\s*)([a-zA-Z$_][\w$]*)?(\s*)(\{)">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="#pop" state="oc_ivars"/>
+      </rule>
+      <rule pattern="([a-zA-Z$_][\w$]*)(\s*:\s*)([a-zA-Z$_][\w$]*)?">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([a-zA-Z$_][\w$]*)(\s*)(\([a-zA-Z$_][\w$]*\))(\s*)(\{)">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="NameLabel"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="#pop" state="oc_ivars"/>
+      </rule>
+      <rule pattern="([a-zA-Z$_][\w$]*)(\s*)(\([a-zA-Z$_][\w$]*\))">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="NameLabel"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([a-zA-Z$_][\w$]*)(\s*)(\{)">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="#pop" state="oc_ivars"/>
+      </rule>
+      <rule pattern="([a-zA-Z$_][\w$]*)">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="function">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="^#if\s+0">
+        <token type="CommentPreproc"/>
+        <push state="if0"/>
+      </rule>
+      <rule pattern="^#">
+        <token type="CommentPreproc"/>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#if\s+0)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <push state="if0"/>
+      </rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//(\n|[\w\W]*?[^\\]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="literal_number_inner">
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="statement"/>
+      </rule>
+    </state>
+    <state name="statement">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule pattern="[{}]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="oc_forward_classname">
+      <rule pattern="([a-zA-Z$_][\w$]*)(\s*,\s*)">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="oc_forward_classname"/>
+      </rule>
+      <rule pattern="([a-zA-Z$_][\w$]*)(\s*;?)">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Text"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="literal_array_inner">
+      <rule pattern="\[">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\]">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="statement"/>
+      </rule>
+    </state>
+    <state name="literal_dictionary">
+      <rule pattern="\}">
+        <token type="Literal"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="statement"/>
+      </rule>
+    </state>
+    <state name="oc_ivars">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\([\\abfnrtv&#34;\&#39;]|x[a-fA-F0-9]{2,4}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/objectpascal.xml 🔗

@@ -0,0 +1,145 @@
+<lexer>
+  <config>
+    <name>ObjectPascal</name>
+    <alias>objectpascal</alias>
+    <filename>*.pas</filename>
+    <filename>*.pp</filename>
+    <filename>*.inc</filename>
+    <filename>*.dpr</filename>
+    <filename>*.dpk</filename>
+    <filename>*.lpr</filename>
+    <filename>*.lpk</filename>
+    <mime_type>text/x-pascal</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <!-- TextWhitespace -->
+      <rule pattern="[^\S\n]+">
+        <token type="TextWhitespace" />
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <!-- Magic Number (BOM) -->
+      <rule pattern="[^\u0000-\u007F]+">
+        <token type="Text"/>
+      </rule>
+      <!-- Compiler Directive -->
+      <rule pattern="\{[$].*?\}|\{[-](NOD|EXT|OBJ).*?\}|\([*][$].*?[*]\)">
+        <token type="CommentPreproc" />
+      </rule>
+      <!-- Comment Single -->
+      <rule pattern="(//.*?)(\n)">
+        <bygroups>
+          <token type="CommentSingle" />
+          <token type="TextWhitespace" />
+        </bygroups>
+      </rule>
+      <!-- Comment Multiline Block -->
+      <rule pattern="\([*](.|\n)*?[*]\)">
+        <token type="CommentMultiline"/>
+      </rule>
+      <!-- Comment Multiline Source Documentation -->
+      <rule pattern="[{](.|\n)*?[}]">
+        <token type="CommentMultiline"/>
+      </rule>
+      <!-- Range Indicator -->
+      <rule pattern="(?i:(\.\.))">
+        <token type="Operator" />
+      </rule>
+      <!-- Control Character -->
+      <rule pattern="[\#][0-9a-fA-F]*|[0-9]+[xX][0-9a-fA-F]*">
+        <token type="LiteralStringEscape" />
+      </rule>
+      <!-- Numbers -->
+      <rule pattern="[\$][0-9a-fA-F]*[xX][0-9a-fA-F]*|[\$][0-9a-fA-F]*|([0-9]+[0-9a-fA-F]+(?=[hH]))">
+        <token type="LiteralNumberHex" />
+      </rule>
+      <rule pattern="[0-9]+(\&#39;[0-9]+)*\.[0-9]+(\&#39;[0-9]+)*[eE][+-]?[0-9]+(\&#39;[0-9]+)*|[0-9]+(\&#39;[0-9]+)*\.[0-9]+(\&#39;[0-9]+)*|\d+[eE][+-]?[0-9]+">
+        <token type="LiteralNumberFloat" />
+      </rule>
+      <rule pattern="0|[1-9][0-9_]*?">
+        <token type="LiteralNumberInteger" />
+      </rule>
+      <!-- Multiline string Literal -->
+      <rule pattern="(&#39;&#39;&#39;\s*\n)(.|\n)*?(&#39;&#39;&#39;)(?=\s*&#59;)">
+        <token type="LiteralString" />
+      </rule>
+      <!-- string -->
+      <rule pattern="(?i:(\')).*?(?i:(\'))">
+        <token type="LiteralString" />
+      </rule>
+      <!-- string (Special case for Delphi Assembler)-->
+      <rule pattern="(?i:(&#34;)).*?(?i:(&#34;))">
+        <token type="LiteralString" />
+      </rule>
+      <!-- Simple Types -->
+      <rule pattern="\b(?!=\.)(?i:(NativeInt|NativeUInt|LongInt|LongWord|Integer|Int64|Cardinal|UInt64|ShortInt|SmallInt|FixedInt|Byte|Word|FixedUInt|Int8|Int16|Int32|UInt8|UInt16|UInt32|Real48|Single|Double|Real|Extended|Comp|Currency|Char|AnsiChar|WideChar|UCS2Char|UCS4Char|string|ShortString|AnsiString|UnicodeString|WideString|RawByteString|UTF8String|File|TextFile|Text|Boolean|ByteBool|WordBool|LongBool|Pointer|Variant|OleVariant))\b(?![&#60;\/(])">
+        <token type="KeywordType" />
+      </rule>
+      <!-- T Types -->
+      <rule pattern="\b(?!=\.)(?i:(TSingleRec|TDoubleRec|TExtended80Rec|TByteArray|TTextBuf|TVarRec|TWordArray))\b(?![&#60;\/(])">
+        <token type="KeywordType" />
+      </rule>
+      <!-- Pointer Types -->
+      <rule pattern="\b(?!=\.)(?i:(PChar|PAnsiChar|PWideChar|PRawByteString|PUnicodeString|PString|PAnsiString|PShortString|PTextBuf|PWideString|PByte|PShortInt|PWord|PSmallInt|PCardinal|PLongWord|PFixedUInt|PLongint|PFixedInt|PUInt64|PInt64|PNativeUInt|PNativeInt|PByteArray|PCurrency|PDouble|PExtended|PSingle|PInteger|POleVariant|PVarRec|PVariant|PWordArray|PBoolean|PWordBool|PLongBool|PPointer))\b(?![&#60;\/(])">
+        <token type="KeywordType" />
+      </rule>
+      <!-- More Types -->
+      <rule pattern="\b(?!=\.)(?i:(IntPtr|UIntPtr|Float32|Float64|_ShortStr|_ShortString|_AnsiStr|_AnsiString|_AnsiChr|_AnsiChar|_WideStr|_WideString|_PAnsiChr|_PAnsiChar|UTF8Char|_AnsiChar|PUTF8Char|_PAnsiChar|MarshaledString|MarshaledAString))\b(?![&#60;\/(])">
+        <token type="KeywordType" />
+      </rule>
+      <!-- Result -->
+      <rule pattern="\b(?!=\.)(?i:(Result))\b(?![&#60;\/(])">
+        <token type="GenericEmph" />
+      </rule>      
+      <!-- Boolean Constants -->
+      <rule pattern="\b(?!=\.)(?i:(True|False))\b(?![&#60;\/(])">
+        <token type="NameConstant" />
+      </rule>
+      <!-- Operator (Assign) -->
+      <rule pattern=":=">
+        <token type="Operator" />
+      </rule>      
+      <!-- Operators (Arithmetic, Unary Arithmetic, String, Pointer, Set, Relational, Address) -->
+      <rule pattern="[\+\-\*\/\^&#60;&#62;\=\@]">
+        <token type="Operator" />
+      </rule>
+      <!-- Operators (Arithmetic, Boolean, Logical (Bitwise), Set) -->
+      <rule pattern="\b(?i:(div|mod|not|and|or|xor|shl|shr|in))\b">
+        <token type="OperatorWord" />
+      </rule>
+      <!-- Special Symbols (Escape, Literal Chr, Hex Value, Binary Numeral Expression Indicator) -->
+      <rule pattern="[&#38;\#\$\%]">
+        <token type="Operator" />
+      </rule>
+      <!-- Special Symbols (Punctuation) -->
+      <rule pattern="[\(\)\,\.\:\;\[\]]">
+        <token type="Punctuation" />
+      </rule>
+      <!-- Reserved Words -->
+      <rule pattern="\b(?!=\.)(?i:(and|end|interface|record|var|array|except|is|repeat|while|as|exports|label|resourcestring|with|asm|file|library|set|xor|begin|finalization|mod|shl|case|finally|nil|shr|class|for|not|string|const|function|object|then|constructor|goto|of|threadvar|destructor|if|or|to|dispinterface|implementation|packed|try|div|in|procedure|type|do|inherited|program|unit|downto|initialization|property|until|else|inline|raise|uses))\b(?![&#60;\/(])">
+        <token type="KeywordReserved" />
+      </rule>
+      <!-- Directives -->
+      <rule pattern="\b(?!=\.)(?i:(absolute|export|name|public|stdcall|abstract|external|published|strict|assembler|nodefault|read|stored|automated|final|operator|readonly|unsafe|cdecl|forward|out|reference|varargs|contains|helper|overload|register|virtual|default|implements|override|reintroduce|winapi|delayed|index|package|requires|write|deprecated|inline|pascal|writeonly|dispid|library|platform|safecall|dynamic|local|private|sealed|experimental|message|protected|static))\b(?![&#60;\/(])">
+        <token type="Keyword" />
+      </rule>
+      <!-- Directives obsolete -->
+      <rule pattern="\b(?!=\.)(?i:(near|far|resident))\b(?![&#60;\/(])">
+        <token type="Keyword" />
+      </rule>
+      <!-- Constant Expressions -->
+      <rule pattern="\b(?!=\.)(?i:(Abs|High|Low|Pred|Succ|Chr|Length|Odd|Round|Swap|Hi|Lo|Ord|SizeOf|Trunc))\b(?![&#60;\/(])">
+        <token type="KeywordConstant" />
+      </rule>
+      <!-- everything else -->
+      <rule pattern="([^\W\d]|\$)[\w$]*">
+        <token type="Text" />
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ocaml.xml 🔗

@@ -0,0 +1,145 @@
+<lexer>
+  <config>
+    <name>OCaml</name>
+    <alias>ocaml</alias>
+    <filename>*.ml</filename>
+    <filename>*.mli</filename>
+    <filename>*.mll</filename>
+    <filename>*.mly</filename>
+    <mime_type>text/x-ocaml</mime_type>
+  </config>
+  <rules>
+    <state name="escape-sequence">
+      <rule pattern="\\[\\&#34;\&#39;ntbr]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\[0-9]{3}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\x[0-9a-fA-F]{2}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="false|true|\(\)|\[\]">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="\b([A-Z][\w\&#39;]*)(?=\s*\.)">
+        <token type="NameNamespace"/>
+        <push state="dotted"/>
+      </rule>
+      <rule pattern="\b([A-Z][\w\&#39;]*)">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="\(\*(?![)])">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="\b(as|assert|begin|class|constraint|do|done|downto|else|end|exception|external|false|for|fun|function|functor|if|in|include|inherit|initializer|lazy|let|match|method|module|mutable|new|object|of|open|private|raise|rec|sig|struct|then|to|true|try|type|value|val|virtual|when|while|with)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(~|\}|\|]|\||\{&lt;|\{|`|_|]|\[\||\[&gt;|\[&lt;|\[|\?\?|\?|&gt;\}|&gt;]|&gt;|=|&lt;-|&lt;|;;|;|:&gt;|:=|::|:|\.\.|\.|-&gt;|-\.|-|,|\+|\*|\)|\(|&amp;&amp;|&amp;|#|!=)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="([=&lt;&gt;@^|&amp;+\*/$%-]|[!?~])?[!$%&amp;*+\./:&lt;=&gt;?@^|~-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\b(and|asr|land|lor|lsl|lxor|mod|or)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="\b(unit|int|float|bool|string|char|list|array)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="[^\W\d][\w&#39;]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="-?\d[\d_]*(.[\d_]*)?([eE][+\-]?\d[\d_]*)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[xX][\da-fA-F][\da-fA-F_]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[oO][0-7][0-7_]*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[bB][01][01_]*">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="\d[\d_]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#39;(?:(\\[\\\&#34;&#39;ntbr ])|(\\[0-9]{3})|(\\x[0-9a-fA-F]{2}))&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="&#39;.&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="[~?][a-z][\w\&#39;]*:">
+        <token type="NameVariable"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^(*)]+">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\(\*">
+        <token type="Comment"/>
+        <push/>
+      </rule>
+      <rule pattern="\*\)">
+        <token type="Comment"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[(*)]">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule>
+        <include state="escape-sequence"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="dotted">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\.">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[A-Z][\w\&#39;]*(?=\s*\.)">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="[A-Z][\w\&#39;]*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[a-z_][\w\&#39;]*">
+        <token type="Name"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/octave.xml 🔗

@@ -0,0 +1,101 @@
+<lexer>
+  <config>
+    <name>Octave</name>
+    <alias>octave</alias>
+    <filename>*.m</filename>
+    <mime_type>text/octave</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="[%#].*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="^\s*function">
+        <token type="Keyword"/>
+        <push state="deffunc"/>
+      </rule>
+      <rule pattern="(unwind_protect_cleanup|end_unwind_protect|unwind_protect|end_try_catch|endproperties|endclassdef|endfunction|persistent|properties|endmethods|otherwise|endevents|endswitch|__FILE__|continue|classdef|__LINE__|endwhile|function|methods|elseif|return|static|events|global|endfor|switch|until|endif|while|catch|break|case|else|set|end|try|for|get|do|if)\b">
+        <token type="Keyword"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/odin.xml 🔗

@@ -0,0 +1,113 @@
+<lexer>
+  <config>
+    <name>Odin</name>
+    <alias>odin</alias>
+    <filename>*.odin</filename>
+    <mime_type>text/odin</mime_type>
+  </config>
+  <rules>
+    <state name="NestedComment">
+        <rule pattern = "/[*]">
+            <token type = "CommentMultiline"/>
+            <push/>
+        </rule>
+        <rule pattern = "[*]/">
+            <token type = "CommentMultiline"/>
+            <pop depth = "1"/>
+        </rule>
+        <rule pattern = "[\s\S]">
+            <token type = "CommentMultiline"/>
+        </rule>
+    </state>
+    <state name="root">
+        <rule pattern = "\n">
+            <token type = "TextWhitespace"/>
+        </rule>
+        <rule pattern = "\s+">
+            <token type = "TextWhitespace"/>
+        </rule>
+        <rule pattern = "//.*?\n">
+            <token type = "CommentSingle"/>
+        </rule>
+        <rule pattern = "/[*]">
+            <token type = "CommentMultiline"/>
+            <push state="NestedComment"/>
+        </rule>
+        <rule pattern = "(import|package)\b">
+            <token type = "KeywordNamespace"/>
+        </rule>
+        <rule pattern = "(proc|struct|map|enum|union)\b">
+            <token type = "KeywordDeclaration"/>
+        </rule>
+        <rule pattern = "(asm|auto_cast|bit_set|break|case|cast|context|continue|defer|distinct|do|dynamic|else|enum|fallthrough|for|foreign|if|import|in|map|not_in|or_else|or_return|package|proc|return|struct|switch|transmute|typeid|union|using|when|where|panic|real|imag|len|cap|append|copy|delete|new|make|clear)\b">
+            <token type = "Keyword"/>
+        </rule>
+        <rule pattern = "(true|false|nil)\b">
+            <token type = "KeywordConstant"/>
+        </rule>
+        <rule pattern = "(uint|u8|u16|u32|u64|int|i8|i16|i32|i64|i16le|i32le|i64le|i128le|u16le|u32le|u64le|u128le|i16be|i32be|i64be|i128be|u16be|u32be|u64be|u128be|f16|f32|f64|complex32|complex64|complex128|quaternion64|quaternion128|quaternion256|byte|rune|string|cstring|typeid|any|bool|b8|b16|b32|b64|uintptr|rawptr)\b">
+            <token type = "KeywordType"/>
+        </rule>
+        <rule pattern = "\#[a-zA-Z_]+\b">
+            <token type = "NameDecorator"/>
+        </rule>
+        <rule pattern = "\@(\([a-zA-Z_]+\b\s*.*\)|\(?[a-zA-Z_]+\)?)">
+            <token type = "NameAttribute"/>
+        </rule>
+        <rule pattern="([a-zA-Z_]\w*)(\s*)(\()">
+            <bygroups>
+                <token type="NameFunction"/>
+                <token type="TextWhitespace"/>
+                <token type="Punctuation"/>
+            </bygroups>
+        </rule>
+        <rule pattern="[a-zA-Z_]\w*">
+            <token type="Name"/>
+        </rule>
+        <rule pattern="[^\W\d]\w*">
+            <token type="NameOther"/>
+        </rule>
+        <rule pattern = "\d+i">
+            <token type = "LiteralNumber"/>
+        </rule>
+        <rule pattern = "\d+\.\d*([Ee][-+]\d+)?i">
+            <token type = "LiteralNumber"/>
+        </rule>
+        <rule pattern = "\.\d+([Ee][-+]\d+)?i">
+            <token type = "LiteralNumber"/>
+        </rule>
+        <rule pattern = "\d+[Ee][-+]\d+i">
+            <token type = "LiteralNumber"/>
+        </rule>
+        <rule pattern = "\d+(\.\d+[eE][+\-]?\d+|\.\d*|[eE][+\-]?\d+)">
+            <token type = "LiteralNumberFloat"/>
+        </rule>
+        <rule pattern = "\.\d+([eE][+\-]?\d+)?">
+            <token type = "LiteralNumberFloat"/>
+        </rule>
+        <rule pattern = "0o[0-7]+">
+            <token type = "LiteralNumberOct"/>
+        </rule>
+        <rule pattern = "0x[0-9a-fA-F_]+">
+            <token type = "LiteralNumberHex"/>
+        </rule>
+        <rule pattern = "0b[01_]+">
+            <token type = "LiteralNumberBin"/>
+        </rule>
+        <rule pattern = "(0|[1-9][0-9_]*)">
+            <token type = "LiteralNumberInteger"/>
+        </rule>
+        <rule pattern = "'(\\['&quot;\\abfnrtv]|\\x[0-9a-fA-F]{2}|\\[0-7]{1,3}|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|[^\\])'" >
+            <token type = "LiteralStringChar"/>
+        </rule>
+        <rule pattern = "(`)([^`]*)(`)" >
+            <token type = "LiteralString"/>
+        </rule>
+        <rule pattern = "&quot;(\\\\|\\&quot;|[^&quot;])*&quot;" >
+            <token type = "LiteralString"/>
+        </rule>
+        <rule pattern = "(&lt;&lt;=|&gt;&gt;=|&lt;&lt;|&gt;&gt;|&lt;=|&gt;=|&amp;=|&amp;|\+=|-=|\*=|/=|%=|\||\^|=|&amp;&amp;|\|\||--|-&gt;|=|==|!=|:=|:|::|\.\.\&lt;|\.\.=|[&lt;&gt;+\-*/%&amp;])" >
+            <token type = "Operator"/>
+        </rule>
+        <rule pattern="[{}()\[\],.;]">
+            <token type="Punctuation"/>
+        </rule>
+    </state>  
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/onesenterprise.xml 🔗

@@ -0,0 +1,92 @@
+<lexer>
+  <config>
+    <name>OnesEnterprise</name>
+    <alias>ones</alias>
+    <alias>onesenterprise</alias>
+    <alias>1S</alias>
+    <alias>1S:Enterprise</alias>
+    <filename>*.EPF</filename>
+    <filename>*.epf</filename>
+    <filename>*.ERF</filename>
+    <filename>*.erf</filename>
+    <mime_type>application/octet-stream</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//(.*?)\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(#область|#region|#конецобласти|#endregion|#если|#if|#иначе|#else|#конецесли|#endif).*">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(&amp;наклиенте|&amp;atclient|&amp;насервере|&amp;atserver|&amp;насерверебезконтекста|&amp;atservernocontext|&amp;наклиентенасерверебезконтекста|&amp;atclientatservernocontext).*">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(&gt;=|&lt;=|&lt;&gt;|\+|-|=|&gt;|&lt;|\*|/|%)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(;|,|\)|\(|\.)">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(истина|ложь|или|false|true|не|and|not|и|or)\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(иначеесли|конецесли|иначе|тогда|если|elsif|endif|else|then|if)\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(конеццикла|каждого|цикл|пока|для|while|enddo|по|each|из|for|do|in|to)\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(продолжить|прервать|возврат|перейти|continue|return|break|goto)\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(конецпроцедуры|конецфункции|процедура|функция|endprocedure|endfunction|procedure|function)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(экспорт|новый|перем|знач|export|new|val|var)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(вызватьисключение|конецпопытки|исключение|попытка|endtry|except|raise|try)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(выполнить|вычислить|execute|eval)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="[_а-яА-Я0-9][а-яА-Я0-9]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[_\w][\w]*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;C?">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/openedge_abl.xml 🔗

@@ -0,0 +1,101 @@
+<lexer>
+  <config>
+    <name>OpenEdge ABL</name>
+    <alias>openedge</alias>
+    <alias>abl</alias>
+    <alias>progress</alias>
+    <alias>openedgeabl</alias>
+    <filename>*.p</filename>
+    <filename>*.cls</filename>
+    <filename>*.w</filename>
+    <filename>*.i</filename>
+    <mime_type>text/x-openedge</mime_type>
+    <mime_type>application/x-openedge</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="//.*?$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="CommentPreproc"/>
+        <push state="preprocessor"/>
+      </rule>
+      <rule pattern="\s*&amp;.*">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]+[LlUu]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="(?i)(DEFINE|DEF|DEFI|DEFIN)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(?i)(^|(?&lt;=[^\w\-]))(CHARACTER|CHAR|CHARA|CHARAC|CHARACT|CHARACTE|COM-HANDLE|DATE|DATETIME|DATETIME-TZ|DECIMAL|DEC|DECI|DECIM|DECIMA|HANDLE|INT64|INTEGER|INT|INTE|INTEG|INTEGE|LOGICAL|LONGCHAR|MEMPTR|RAW|RECID|ROWID)\s*($|(?=[^\w\-]))">
+        <token type="KeywordType"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/openscad.xml 🔗

@@ -0,0 +1,96 @@
+<lexer>
+  <config>
+    <name>OpenSCAD</name>
+    <alias>openscad</alias>
+    <filename>*.scad</filename>
+    <mime_type>text/x-scad</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//(\n|[\w\W]*?[^\\]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="[{}\[\]\(\),;:]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[*!#%\-+=?/]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&lt;|&lt;=|==|!=|&gt;=|&gt;|&amp;&amp;|\|\|">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\$(f[asn]|t|vp[rtd]|children)">
+        <token type="NameVariableMagic"/>
+      </rule>
+      <rule pattern="(undef|PI)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(use|include)((?:\s|\\\\s)+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="includes"/>
+      </rule>
+      <rule pattern="(module)(\s*)([^\s\(]+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(function)(\s*)([^\s\(]+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(true|false)\b">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="\b(function|module|include|use|for|intersection_for|if|else|return)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(circle|square|polygon|text|sphere|cube|cylinder|polyhedron|translate|rotate|scale|resize|mirror|multmatrix|color|offset|hull|minkowski|union|difference|intersection|abs|sign|sin|cos|tan|acos|asin|atan|atan2|floor|round|ceil|ln|log|pow|sqrt|exp|rands|min|max|concat|lookup|str|chr|search|version|version_num|norm|cross|parent_module|echo|import|import_dxf|dxf_linear_extrude|linear_extrude|rotate_extrude|surface|projection|render|dxf_cross|dxf_dim|let|assign|len)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="\bchildren\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="-?\d+(\.\d+)?(e[+-]?\d+)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="includes">
+      <rule pattern="(&lt;)([^&gt;]*)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="CommentPreprocFile"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/org_mode.xml 🔗

@@ -0,0 +1,329 @@
+<lexer>
+  <config>
+    <name>Org Mode</name>
+    <alias>org</alias>
+    <alias>orgmode</alias>
+    <filename>*.org</filename>
+    <mime_type>text/org</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^# .*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="^(\*)( COMMENT)( .*)$">
+        <bygroups>
+          <token type="GenericHeading"/>
+          <token type="NameEntity"/>
+          <token type="GenericStrong"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\*\*+)( COMMENT)( .*)$">
+        <bygroups>
+          <token type="GenericSubheading"/>
+          <token type="NameEntity"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\*)( DONE)( .*)$">
+        <bygroups>
+          <token type="GenericHeading"/>
+          <token type="LiteralStringRegex"/>
+          <token type="GenericStrong"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\*\*+)( DONE)( .*)$">
+        <bygroups>
+          <token type="GenericSubheading"/>
+          <token type="LiteralStringRegex"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\*)( TODO)( .*)$">
+        <bygroups>
+          <token type="GenericHeading"/>
+          <token type="Error"/>
+          <token type="GenericStrong"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\*\*+)( TODO)( .*)$">
+        <bygroups>
+          <token type="GenericSubheading"/>
+          <token type="Error"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\*)( .+?)( :[a-zA-Z0-9_@:]+:)$">
+        <bygroups>
+          <token type="GenericHeading"/>
+          <token type="GenericStrong"/>
+          <token type="GenericEmph"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\*)( .+)$">
+        <bygroups>
+          <token type="GenericHeading"/>
+          <token type="GenericStrong"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\*\*+)( .+?)( :[a-zA-Z0-9_@:]+:)$">
+        <bygroups>
+          <token type="GenericSubheading"/>
+          <token type="Text"/>
+          <token type="GenericEmph"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\*\*+)( .+)$">
+        <bygroups>
+          <token type="GenericSubheading"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^( *)([+-] )(\[[ X]\])( .+)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Keyword"/>
+          <usingself state="inline"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^( +)(\* )(\[[ X]\])( .+)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Keyword"/>
+          <usingself state="inline"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^( *)([+-] )([^ \n]+ ::)( .+)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Keyword"/>
+          <usingself state="inline"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^( +)(\* )([^ \n]+ ::)( .+)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Keyword"/>
+          <usingself state="inline"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^( *)([+-] )(.+)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <usingself state="inline"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^( +)(\* )(.+)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <usingself state="inline"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^( *)([0-9]+[.)])( \[@[0-9]+\])( .+)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="GenericEmph"/>
+          <usingself state="inline"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^( *)([0-9]+[.)])( .+)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <usingself state="inline"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?i)^( *#\+begin: )([^ ]+)([\w\W]*?\n)([\w\W]*?)(^ *#\+end: *$)">
+        <bygroups>
+          <token type="Comment"/>
+          <token type="CommentSpecial"/>
+          <token type="Comment"/>
+          <usingself state="inline"/>
+          <token type="Comment"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?i)^( *#\+begin_comment *\n)([\w\W]*?)(^ *#\+end_comment *$)">
+        <bygroups>
+          <token type="Comment"/>
+          <token type="Comment"/>
+          <token type="Comment"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?i)^( *#\+begin_src )([^ \n]+)(.*?\n)([\w\W]*?)(^ *#\+end_src *$)">
+        <usingbygroup>
+          <sublexer_name_group>2</sublexer_name_group>
+          <code_group>4</code_group>
+          <emitters>
+            <token type="Comment"/>
+            <token type="CommentSpecial"/>
+            <token type="Comment"/>
+            <token type="Text"/>
+            <token type="Comment"/>
+          </emitters>
+        </usingbygroup>
+      </rule>
+      <rule pattern="(?i)^( *#\+begin_export )(\w+)( *\n)([\w\W]*?)(^ *#\+end_export *$)">
+        <usingbygroup>
+          <sublexer_name_group>2</sublexer_name_group>
+          <code_group>4</code_group>
+          <emitters>
+            <token type="Comment"/>
+            <token type="CommentSpecial"/>
+            <token type="Text"/>
+            <token type="Text"/>
+            <token type="Comment"/>
+          </emitters>
+        </usingbygroup>
+      </rule>
+      <rule pattern="(?i)^( *#\+begin_)(\w+)( *\n)([\w\W]*?)(^ *#\+end_\2)( *$)">
+        <bygroups>
+          <token type="Comment"/>
+          <token type="Comment"/>
+          <token type="Text"/>
+          <token type="Text"/>
+          <token type="Comment"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(#\+\w+)(:.*)$">
+        <bygroups>
+          <token type="CommentSpecial"/>
+          <token type="Comment"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?i)^( *:\w+: *\n)([\w\W]*?)(^ *:end: *$)">
+        <bygroups>
+          <token type="Comment"/>
+          <token type="CommentSpecial"/>
+          <token type="Comment"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(.*)(\\\\)$">
+        <bygroups>
+          <usingself state="inline"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?i)^( *(?:DEADLINE|SCHEDULED): )(&lt;[^&lt;&gt;]+?&gt; *)$">
+        <bygroups>
+          <token type="Comment"/>
+          <token type="CommentSpecial"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?i)^( *CLOSED: )(\[[^][]+?\] *)$">
+        <bygroups>
+          <token type="Comment"/>
+          <token type="CommentSpecial"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="inline"/>
+      </rule>
+    </state>
+    <state name="inline">
+      <rule pattern="(\s*)(\*[^ \n*][^*]+?[^ \n*]\*)((?=\W|\n|$))">
+        <bygroups>
+          <token type="Text"/>
+          <token type="GenericStrong"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s*)(/[^/]+?/)((?=\W|\n|$))">
+        <bygroups>
+          <token type="Text"/>
+          <token type="GenericEmph"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s*)(=[^\n=]+?=)((?=\W|\n|$))">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameClass"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s*)(~[^\n~]+?~)((?=\W|\n|$))">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameClass"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s*)(\+[^+]+?\+)((?=\W|\n|$))">
+        <bygroups>
+          <token type="Text"/>
+          <token type="GenericDeleted"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\s*)(_[^_]+?_)((?=\W|\n|$))">
+        <bygroups>
+          <token type="Text"/>
+          <token type="GenericUnderline"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;)([^&lt;&gt;]+?)(&gt;)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[{]{3}[^}]+[}]{3}">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="([^[])(\[fn:)([^]]+?)(\])([^]])">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameBuiltinPseudo"/>
+          <token type="LiteralString"/>
+          <token type="NameBuiltinPseudo"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\[\[)([^][]+?)(\]\[)([^][]+)(\]\])">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+          <token type="NameTag"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\[\[)([^][]+?)(\]\])">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;&lt;)([^&lt;&gt;]+?)(&gt;&gt;)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^( *)(\|[ -].*?[ -]\|)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/pacmanconf.xml

@@ -0,0 +1,37 @@
+<lexer>
+  <config>
+    <name>PacmanConf</name>
+    <alias>pacmanconf</alias>
+    <filename>pacman.conf</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="#.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="^\s*\[.*?\]\s*$">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(\w+)(\s*)(=)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)(\w+)(\s*)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\$repo|\$arch|%o|%u)\b">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/perl.xml

@@ -0,0 +1,400 @@
+<lexer>
+  <config>
+    <name>Perl</name>
+    <alias>perl</alias>
+    <alias>pl</alias>
+    <filename>*.pl</filename>
+    <filename>*.pm</filename>
+    <filename>*.t</filename>
+    <mime_type>text/x-perl</mime_type>
+    <mime_type>application/x-perl</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\A\#!.+?$">
+        <token type="CommentHashbang"/>
+      </rule>
+      <rule pattern="\#.*?$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="^=[a-zA-Z0-9]+\s+.*?\n=cut">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="(continue|foreach|unless|return|elsif|CHECK|while|BEGIN|reset|print|until|next|else|INIT|then|last|redo|case|our|new|for|END|if|do|my)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(format)(\s+)(\w+)(\s*)(=)(\s*\n)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Name"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="format"/>
+      </rule>
+      <rule pattern="(eq|lt|gt|le|ge|ne|not|and|or|cmp)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="s/(\\\\|\\[^\\]|[^\\/])*/(\\\\|\\[^\\]|[^\\/])*/[egimosx]*">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="s!(\\\\|\\!|[^!])*!(\\\\|\\!|[^!])*![egimosx]*">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="s\\(\\\\|[^\\])*\\(\\\\|[^\\])*\\[egimosx]*">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="s@(\\\\|\\[^\\]|[^\\@])*@(\\\\|\\[^\\]|[^\\@])*@[egimosx]*">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="s%(\\\\|\\[^\\]|[^\\%])*%(\\\\|\\[^\\]|[^\\%])*%[egimosx]*">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="s\{(\\\\|\\[^\\]|[^\\}])*\}\s*">
+        <token type="LiteralStringRegex"/>
+        <push state="balanced-regex"/>
+      </rule>
+      <rule pattern="s&lt;(\\\\|\\[^\\]|[^\\&gt;])*&gt;\s*">
+        <token type="LiteralStringRegex"/>
+        <push state="balanced-regex"/>
+      </rule>
+      <rule pattern="s\[(\\\\|\\[^\\]|[^\\\]])*\]\s*">
+        <token type="LiteralStringRegex"/>
+        <push state="balanced-regex"/>
+      </rule>
+      <rule pattern="s\((\\\\|\\[^\\]|[^\\)])*\)\s*">
+        <token type="LiteralStringRegex"/>
+        <push state="balanced-regex"/>
+      </rule>
+      <rule pattern="m?/(\\\\|\\[^\\]|[^\\/\n])*/[gcimosx]*">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="m(?=[/!\\{&lt;\[(@%$])">
+        <token type="LiteralStringRegex"/>
+        <push state="balanced-regex"/>
+      </rule>
+      <rule pattern="((?&lt;==~)|(?&lt;=\())\s*/(\\\\|\\[^\\]|[^\\/])*/[gcimosx]*">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/php.xml

@@ -0,0 +1,212 @@
+<lexer>
+  <config>
+    <name>PHP</name>
+    <alias>php</alias>
+    <alias>php3</alias>
+    <alias>php4</alias>
+    <alias>php5</alias>
+    <filename>*.php</filename>
+    <filename>*.php[345]</filename>
+    <filename>*.inc</filename>
+    <mime_type>text/x-php</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <dot_all>true</dot_all>
+    <ensure_nl>true</ensure_nl>
+    <priority>3</priority>
+  </config>
+  <rules>
+    <state name="magicfuncs">
+      <rule pattern="(__callStatic|__set_state|__construct|__debugInfo|__toString|__destruct|__invoke|__wakeup|__clone|__sleep|__isset|__unset|__call|__get|__set)\b">
+        <token type="NameFunctionMagic"/>
+      </rule>
+    </state>
+    <state name="magicconstants">
+      <rule pattern="(__NAMESPACE__|__FUNCTION__|__METHOD__|__CLASS__|__TRAIT__|__LINE__|__FILE__|__DIR__)\b">
+        <token type="NameConstant"/>
+      </rule>
+    </state>
+    <state name="classname">
+      <rule pattern="(?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w]|[^\x00-\x7f])*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="functionname">
+      <rule>
+        <include state="magicfuncs"/>
+      </rule>
+      <rule pattern="(?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w]|[^\x00-\x7f])*">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^{$&#34;\\]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\\([nrt&#34;$\\]|[0-7]{1,3}|x[0-9a-f]{1,2})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\$(?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w]|[^\x00-\x7f])*(\[\S+?\]|-&gt;(?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w]|[^\x00-\x7f])*)?">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="(\{\$\{)(.*?)(\}\})">
+        <bygroups>
+          <token type="LiteralStringInterpol"/>
+          <usingself state="root"/>
+          <token type="LiteralStringInterpol"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\{)(\$.*?)(\})">
+        <bygroups>
+          <token type="LiteralStringInterpol"/>
+          <usingself state="root"/>
+          <token type="LiteralStringInterpol"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\$\{)(\S+)(\})">
+        <bygroups>
+          <token type="LiteralStringInterpol"/>
+          <token type="NameVariable"/>
+          <token type="LiteralStringInterpol"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[${\\]">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\?&gt;">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(&lt;&lt;&lt;)([\&#39;&#34;]?)((?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w]|[^\x00-\x7f])*)(\2\n.*?\n\s*)(\3)(;?)(\n)">
+        <bygroups>
+          <token type="LiteralString"/>
+          <token type="LiteralString"/>
+          <token type="LiteralStringDelimiter"/>
+          <token type="LiteralString"/>
+          <token type="LiteralStringDelimiter"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="#.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/\*\*.*?\*/">
+        <token type="LiteralStringDoc"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="(-&gt;|::)(\s*)((?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w]|[^\x00-\x7f])*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="NameAttribute"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[~!%^&amp;*+=|:.&lt;&gt;/@-]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[\[\]{}();,]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(class)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="classname"/>
+      </rule>
+      <rule pattern="(function)(\s*)(?=\()">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(function)(\s+)(&amp;?)(\s*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="functionname"/>
+      </rule>
+      <rule pattern="(const)(\s+)((?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w]|[^\x00-\x7f])*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameConstant"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(and|E_PARSE|old_function|E_ERROR|or|as|E_WARNING|parent|eval|PHP_OS|break|exit|case|extends|PHP_VERSION|cfunction|FALSE|print|for|require|continue|foreach|require_once|declare|return|default|static|do|switch|die|stdClass|echo|else|TRUE|elseif|var|empty|if|xor|enddeclare|include|virtual|endfor|include_once|while|endforeach|global|endif|list|endswitch|new|endwhile|not|array|E_ALL|NULL|final|php_user_filter|interface|implements|public|private|protected|abstract|clone|try|catch|throw|this|use|namespace|trait|yield|finally)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule>
+        <include state="magicconstants"/>
+      </rule>
+      <rule pattern="\$\{\$+(?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w]|[^\x00-\x7f])*\}">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\$+(?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w]|[^\x00-\x7f])*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w]|[^\x00-\x7f])*">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\d*\.\d+)(e[+-]?[0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+e[+-]?[0-9]+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0x[a-f0-9_]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d[\d_]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="0b[01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="&#39;([^&#39;\\]*(?:\\.[^&#39;\\]*)*)&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="`([^`\\]*(?:\\.[^`\\]*)*)`">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/pig.xml

@@ -0,0 +1,105 @@
+<lexer>
+  <config>
+    <name>Pig</name>
+    <alias>pig</alias>
+    <filename>*.pig</filename>
+    <mime_type>text/x-pig</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="--.*">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="/\*[\w\W]*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\&#39;(?:\\[ntbrf\\\&#39;]|\\u[0-9a-f]{4}|[^\&#39;\\\n\r])*\&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule>
+        <include state="types"/>
+      </rule>
+      <rule>
+        <include state="builtins"/>
+      </rule>
+      <rule>
+        <include state="punct"/>
+      </rule>
+      <rule>
+        <include state="operators"/>
+      </rule>
+      <rule pattern="[0-9]*\.[0-9]+(e[0-9]+)?[fd]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-f]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+L?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="([a-z_]\w*)(\s*)(\()">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[()#:]">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[^(:#\&#39;&#34;)\s]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\S+\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(assert|and|any|all|arrange|as|asc|bag|by|cache|CASE|cat|cd|cp|%declare|%default|define|dense|desc|describe|distinct|du|dump|eval|exex|explain|filter|flatten|foreach|full|generate|group|help|if|illustrate|import|inner|input|into|is|join|kill|left|limit|load|ls|map|matches|mkdir|mv|not|null|onschema|or|order|outer|output|parallel|pig|pwd|quit|register|returns|right|rm|rmf|rollup|run|sample|set|ship|split|stderr|stdin|stdout|store|stream|through|union|using|void)\b">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="builtins">
+      <rule pattern="(AVG|BinStorage|cogroup|CONCAT|copyFromLocal|copyToLocal|COUNT|cross|DIFF|MAX|MIN|PigDump|PigStorage|SIZE|SUM|TextLoader|TOKENIZE)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+    </state>
+    <state name="types">
+      <rule pattern="(bytearray|BIGINTEGER|BIGDECIMAL|chararray|datetime|double|float|int|long|tuple)\b">
+        <token type="KeywordType"/>
+      </rule>
+    </state>
+    <state name="punct">
+      <rule pattern="[;(){}\[\]]">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="operators">
+      <rule pattern="[#=,./%+\-?]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(eq|gt|lt|gte|lte|neq|matches)\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(==|&lt;=|&lt;|&gt;=|&gt;|!=)">
+        <token type="Operator"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/pkgconfig.xml

@@ -0,0 +1,73 @@
+<lexer>
+  <config>
+    <name>PkgConfig</name>
+    <alias>pkgconfig</alias>
+    <filename>*.pc</filename>
+  </config>
+  <rules>
+    <state name="curly">
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\w+">
+        <token type="NameAttribute"/>
+      </rule>
+    </state>
+    <state name="spvalue">
+      <rule>
+        <include state="interp"/>
+      </rule>
+      <rule pattern="#.*$">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^${}#\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="#.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="^(\w+)(=)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^([\w.]+)(:)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="spvalue"/>
+      </rule>
+      <rule>
+        <include state="interp"/>
+      </rule>
+      <rule pattern="[^${}#=:\n.]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="interp">
+      <rule pattern="\$\$">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\$\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="curly"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/pl_pgsql.xml

@@ -0,0 +1,119 @@
+<lexer>
+  <config>
+    <name>PL/pgSQL</name>
+    <alias>plpgsql</alias>
+    <mime_type>text/x-plpgsql</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\%[a-z]\w*\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern=":=">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\&lt;\&lt;[a-z]\w*\&gt;\&gt;">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="\#[a-z]\w*\b">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="--.*\n?">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="multiline-comments"/>
+      </rule>
+      <rule pattern="(bigint|bigserial|bit|bit\s+varying|bool|boolean|box|bytea|char|character|character\s+varying|cidr|circle|date|decimal|double\s+precision|float4|float8|inet|int|int2|int4|int8|integer|interval|json|jsonb|line|lseg|macaddr|money|numeric|path|pg_lsn|point|polygon|real|serial|serial2|serial4|serial8|smallint|smallserial|text|time|timestamp|timestamptz|timetz|tsquery|tsvector|txid_snapshot|uuid|varbit|varchar|with\s+time\s+zone|without\s+time\s+zone|xml|anyarray|anyelement|anyenum|anynonarray|anyrange|cstring|fdw_handler|internal|language_handler|opaque|record|void)\b">
+        <token type="NameBuiltin"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/plaintext.xml

@@ -0,0 +1,21 @@
+<lexer>
+  <config>
+    <name>plaintext</name>
+    <alias>text</alias>
+    <alias>plain</alias>
+    <alias>no-highlight</alias>
+    <filename>*.txt</filename>
+    <mime_type>text/plain</mime_type>
+    <priority>-1</priority>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern=".+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/plutus_core.xml

@@ -0,0 +1,105 @@
+<lexer>
+  <config>
+    <name>Plutus Core</name>
+    <alias>plutus-core</alias>
+    <alias>plc</alias>
+    <filename>*.plc</filename>
+    <mime_type>text/x-plutus-core</mime_type>
+    <mime_type>application/x-plutus-core</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(\(|\))">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(\[|\])">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="({|})">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="([+-]?\d+)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="(#([a-fA-F0-9][a-fA-F0-9])+)">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(\(\))">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="(True|False)">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="(con |abs |iwrap |unwrap |lam |builtin |delay |force |error)">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(fun |all |ifix |lam |con )">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(type|fun )">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(program )(\S+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(unit|bool|integer|bytestring|string)">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(addInteger |subtractInteger |multiplyInteger |divideInteger |quotientInteger |remainderInteger |modInteger |equalsInteger |lessThanInteger |lessThanEqualsInteger )">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(appendByteString |consByteString |sliceByteString |lengthOfByteString |indexByteString |equalsByteString |lessThanByteString |lessThanEqualsByteString )">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(sha2_256 |sha3_256 |blake2b_256 |verifySignature )">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(appendString |equalsString |encodeUtf8 |decodeUtf8 )">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(ifThenElse )">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(chooseUnit )">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(trace )">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(fstPair |sndPair )">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(chooseList |mkCons |headList |tailList |nullList )">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(chooseData |constrData |mapData |listData |iData |bData |unConstrData |unMapData |unListData |unIData |unBData |equalsData )">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(mkPairData |mkNilData |mkNilPairData )">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="([a-zA-Z][a-zA-Z0-9_&#39;]*)">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/pony.xml

@@ -0,0 +1,135 @@
+<lexer>
+  <config>
+    <name>Pony</name>
+    <alias>pony</alias>
+    <filename>*.pony</filename>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="nested_comment"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;(?:.|\n)*?&#34;&#34;&#34;">
+        <token type="LiteralStringDoc"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="\&#39;.*\&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="=&gt;|[]{}:().~;,|&amp;!^?[]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(addressof|digestof|consume|isnt|and|not|as|is|or)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="!=|==|&lt;&lt;|&gt;&gt;|[-+/*%=&lt;&gt;]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(compile_intrinsic|compile_error|continue|recover|return|repeat|lambda|elseif|object|#share|match|#send|#read|ifdef|until|embed|while|where|error|break|with|else|#any|this|then|tag|for|trn|try|ref|use|var|val|let|end|iso|box|in|if|do)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(actor|class|struct|primitive|interface|trait|type)((?:\s)+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="typename"/>
+      </rule>
+      <rule pattern="(new|fun|be)((?:\s)+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="methodname"/>
+      </rule>
+      <rule pattern="(DisposableActor|NullablePointer|AsioEventNotify|UnsignedInteger|RuntimeOptions|DoNotOptimise|FloatingPoint|SignedInteger|ReadElement|ArrayValues|StringBytes|StringRunes|InputNotify|InputStream|AsioEventID|ByteSeqIter|AmbientAuth|Comparable|ArrayPairs|Stringable|OutStream|SourceLoc|ArrayKeys|StdStream|Equatable|AsioEvent|Iterator|Platform|Unsigned|Greater|Compare|Integer|Pointer|ReadSeq|ByteSeq|String|Number|Signed|Float|USize|Stdin|ILong|ISize|HasEq|Array|ULong|Equal|I128|U128|Bool|Less|Real|None|Seq|I64|Any|F32|F64|U64|U32|I32|Int|I16|U16|Env|I8|U8)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="_?[A-Z]\w*">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="string\(\)">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="(true|false)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="_\d*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="_?[a-z][\w\&#39;_]*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="typename">
+      <rule pattern="(iso|trn|ref|val|box|tag)?((?:\s)*)(_?[A-Z]\w*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="methodname">
+      <rule pattern="(iso|trn|ref|val|box|tag)?((?:\s)*)(_?[a-z]\w*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="nested_comment">
+      <rule pattern="[^*/]+">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[*/]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/postgresql_sql_dialect.xml 🔗

@@ -0,0 +1,155 @@
+<lexer>
+  <config>
+    <name>PostgreSQL SQL dialect</name>
+    <alias>postgresql</alias>
+    <alias>postgres</alias>
+    <mime_type>text/x-postgresql</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="--.*\n?">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="multiline-comments"/>
+      </rule>
+      <rule pattern="(bigint|bigserial|bit|bit\s+varying|bool|boolean|box|bytea|char|character|character\s+varying|cidr|circle|date|decimal|double\s+precision|float4|float8|inet|int|int2|int4|int8|integer|interval|json|jsonb|line|lseg|macaddr|money|numeric|path|pg_lsn|point|polygon|real|serial|serial2|serial4|serial8|smallint|smallserial|text|time|timestamp|timestamptz|timetz|tsquery|tsvector|txid_snapshot|uuid|varbit|varchar|with\s+time\s+zone|without\s+time\s+zone|xml|anyarray|anyelement|anyenum|anynonarray|anyrange|cstring|fdw_handler|internal|language_handler|opaque|record|void)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?s)(DO)(\s+)(?:(LANGUAGE)?(\s+)(&#39;?)(\w+)?(&#39;?)(\s+))?(\$)([^$]*)(\$)(.*?)(\$)(\10)(\$)">
+        <usingbygroup>
+          <sublexer_name_group>6</sublexer_name_group>
+          <code_group>12</code_group>
+          <emitters>
+            <token type="Keyword"/>
+            <token type="Text"/>
+            <token type="Keyword"/>
+            <token type="Text"/>
+            <token type="LiteralStringSingle"/>
+            <token type="LiteralStringSingle"/>
+            <token type="LiteralStringSingle"/>
+            <token type="Text"/>
+            <token type="LiteralStringHeredoc"/>
+            <token type="LiteralStringHeredoc"/>
+            <token type="LiteralStringHeredoc"/>
+            <token type="LiteralStringHeredoc"/>
+            <token type="LiteralStringHeredoc"/>
+            <token type="LiteralStringHeredoc"/>
+            <token type="LiteralStringHeredoc"/>
+          </emitters>
+        </usingbygroup>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/postscript.xml 🔗

@@ -0,0 +1,89 @@
+<lexer>
+  <config>
+    <name>PostScript</name>
+    <alias>postscript</alias>
+    <alias>postscr</alias>
+    <filename>*.ps</filename>
+    <filename>*.eps</filename>
+    <mime_type>application/postscript</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^%!.+\n">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="%%.*\n">
+        <token type="CommentSpecial"/>
+      </rule>
+      <rule pattern="(^%.*\n){2,}">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="%.*\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="LiteralString"/>
+        <push state="stringliteral"/>
+      </rule>
+      <rule pattern="[{}&lt;&gt;\[\]]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&lt;[0-9A-Fa-f]+&gt;(?=[()&lt;&gt;\[\]{}/%\s])">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+\#(\-|\+)?([0-9]+\.?|[0-9]*\.[0-9]+|[0-9]+\.[0-9]*)((e|E)[0-9]+)?(?=[()&lt;&gt;\[\]{}/%\s])">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="(\-|\+)?([0-9]+\.?|[0-9]*\.[0-9]+|[0-9]+\.[0-9]*)((e|E)[0-9]+)?(?=[()&lt;&gt;\[\]{}/%\s])">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(\-|\+)?[0-9]+(?=[()&lt;&gt;\[\]{}/%\s])">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\/[^()&lt;&gt;\[\]{}/%\s]+(?=[()&lt;&gt;\[\]{}/%\s])">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[^()&lt;&gt;\[\]{}/%\s]+(?=[()&lt;&gt;\[\]{}/%\s])">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="(false|true)(?=[()&lt;&gt;\[\]{}/%\s])">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(eq|ne|g[et]|l[et]|and|or|not|if(?:else)?|for(?:all)?)(?=[()&lt;&gt;\[\]{}/%\s])">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(dictstackoverflow|undefinedfilename|currentlinewidth|undefinedresult|currentmatrix|defaultmatrix|invertmatrix|concatmatrix|currentpoint|setlinewidth|syntaxerror|idtransform|identmatrix|setrgbcolor|stringwidth|setlinejoin|getinterval|itransform|strokepath|pathforall|rangecheck|setlinecap|dtransform|transform|translate|setmatrix|typecheck|undefined|scalefont|closepath|findfont|showpage|rcurveto|grestore|truncate|pathbbox|charpath|rlineto|rmoveto|ceiling|newpath|setdash|setfont|restore|curveto|setgray|stroke|pstack|matrix|length|lineto|repeat|rotate|moveto|shfill|concat|gsave|aload|scale|array|round|stack|index|begin|print|floor|exch|quit|clip|copy|bind|loop|idiv|fill|show|roll|exit|load|dict|save|arcn|sqrt|exec|rand|atan|end|div|abs|run|def|cvs|exp|cvi|sin|cos|get|dup|mod|put|sub|pop|add|neg|mul|arc|log|ln|gt)(?=[()&lt;&gt;\[\]{}/%\s])">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="stringliteral">
+      <rule pattern="[^()\\]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralStringEscape"/>
+        <push state="escape"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="LiteralString"/>
+        <push/>
+      </rule>
+      <rule pattern="\)">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="escape">
+      <rule pattern="[0-8]{3}|n|r|t|b|f|\\|\(|\)">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/povray.xml 🔗

@@ -0,0 +1,58 @@
+<lexer>
+  <config>
+    <name>POVRay</name>
+    <alias>pov</alias>
+    <filename>*.pov</filename>
+    <filename>*.inc</filename>
+    <mime_type>text/x-povray</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="/\*[\w\W]*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="//.*\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(?s)&#34;(?:\\.|[^&#34;\\])+&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="#(statistics|include|version|declare|default|warning|define|elseif|ifndef|switch|fclose|render|fopen|undef|error|debug|while|local|macro|range|ifdef|break|write|else|case|read|for|end|if)\b">
+        <token type="CommentPreproc"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/powerquery.xml 🔗

@@ -0,0 +1,51 @@
+<lexer>
+  <config>
+    <name>PowerQuery</name>
+    <alias>powerquery</alias>
+    <alias>pq</alias>
+    <filename>*.pq</filename>
+    <mime_type>text/x-powerquery</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(and|as|each|else|error|false|if|in|is|let|meta|not|null|or|otherwise|section|shared|then|true|try|type)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(#binary|#date|#datetime|#datetimezone|#duration|#infinity|#nan|#sections|#shared|#table|#time)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(([a-zA-Z]|_)[\w|._]*|#&#34;[^&#34;]+&#34;)">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F][0-9a-fA-F_]*[lL]?">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="([0-9]+\.[0-9]+|\.[0-9]+)([eE][0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[\(\)\[\]\{\}]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\.\.|\.\.\.|=&gt;|&lt;=|&gt;=|&lt;&gt;|[@!?,;=&lt;&gt;\+\-\*\/&amp;]">
+        <token type="Operator"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/powershell.xml 🔗

@@ -0,0 +1,230 @@
+<lexer>
+  <config>
+    <name>PowerShell</name>
+    <alias>powershell</alias>
+    <alias>posh</alias>
+    <alias>ps1</alias>
+    <alias>psm1</alias>
+    <alias>psd1</alias>
+    <alias>pwsh</alias>
+    <filename>*.ps1</filename>
+    <filename>*.psm1</filename>
+    <filename>*.psd1</filename>
+    <mime_type>text/x-powershell</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="child"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(\s*)(#)(requires)(\s+)">
+        <bygroups>
+          <token type="TextWhitespace"/>
+          <token type="Comment"/>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+        <push state="requires"/>
+      </rule>
+      <rule pattern="^(\s*#[#\s]*)(\.(?:component|description|example|externalhelp|forwardhelpcategory|forwardhelptargetname|functionality|inputs|link|notes|outputs|parameter|remotehelprunspace|role|synopsis))([^\n]*$)">
+        <bygroups>
+          <token type="Comment"/>
+          <token type="LiteralStringDoc"/>
+          <token type="Comment"/>
+        </bygroups>
+      </rule>
+      <rule pattern="#[^\n]*?$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(&amp;lt;|&lt;)#">
+        <token type="CommentMultiline"/>
+        <push state="multline"/>
+      </rule>
+      <rule pattern="(?i)([A-Z]:)">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="@&#34;\n">
+        <token type="LiteralStringHeredoc"/>
+        <push state="heredoc-double"/>
+      </rule>
+      <rule pattern="@&#39;\n.*?\n&#39;@">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+      <rule pattern="@(?=\(|{)|\$(?=\()">
+        <token type="NameVariableMagic"/>
+      </rule>
+      <rule pattern="`[\&#39;&#34;$@-]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="&#39;([^&#39;]|&#39;&#39;)*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="(?&lt;!\S)(function|filter|workflow)(\s*)(global:|script:|private:|env:)?(\w\S*\b)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="TextWhitespace"/>
+          <token type="NameVariableMagic"/>
+          <token type="NameBuiltin"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?&lt;!\S)(class|configuration)(\s+)(\w\S*)(\s*)(:*)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="TextWhitespace"/>
+          <token type="NameBuiltin"/>
+          <token type="NameBuiltin"/>
+          <token type="NameBuiltin"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\$false|\$null|\$true(?=\b)">
+        <token type="NameVariableMagic"/>
+      </rule>
+      <rule pattern="(\$|@@|@)((global|script|private|env):)?\w+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(parameter|validatenotnullorempty|validatescript|validaterange|validateset|validaterange|validatepattern|validatelength|validatecount|validatenotnullorempty|validatescript|cmdletbinding|alias)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[a-z]\w*-[a-z]\w*\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(mandatory|parametersetname|position|helpmessage|valuefrompipeline|valuefrompipelinebypropertyname|valuefromremainingarguments|dontshow)\b">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="(confirmimpact|defaultparametersetname|helpuri|supportspaging|supportsshouldprocess|positionalbinding)\b">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="(while|until|trap|switch|return|ref|process|param|parameter|in|if|global:|foreach|for|finally|filter|end|elseif|else|dynamicparam|do|default|continue|break|begin|\?|%|#script|#private|#local|#global|try|catch|throw)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="-(and|as|band|bnot|bor|bxor|casesensitive|ccontains|ceq|cge|cgt|cle|clike|clt|cmatch|cne|cnotcontains|cnotlike|cnotmatch|contains|creplace|eq|exact|f|file|ge|gt|icontains|ieq|ige|igt|ile|ilike|ilt|imatch|ine|inotcontains|inotlike|inotmatch|ireplace|is|isnot|le|like|lt|match|ne|not|notcontains|notlike|notmatch|or|regex|replace|wildcard)\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(ac|asnp|cat|cd|cfs|chdir|clc|clear|clhy|cli|clp|cls|clv|cnsn|compare|copy|cp|cpi|cpp|curl|cvpa|dbp|del|diff|dir|dnsn|ebp|echo|epal|epcsv|epsn|erase|etsn|exsn|fc|fhx|fl|foreach|ft|fw|gal|gbp|gc|gci|gcm|gcs|gdr|ghy|gi|gjb|gl|gm|gmo|gp|gps|gpv|group|gsn|gsnp|gsv|gu|gv|gwmi|h|history|icm|iex|ihy|ii|ipal|ipcsv|ipmo|ipsn|irm|ise|iwmi|iwr|kill|lp|ls|man|md|measure|mi|mount|move|mp|mv|nal|ndr|ni|nmo|npssc|nsn|nv|ogv|oh|popd|ps|pushd|pwd|r|rbp|rcjb|rcsn|rd|rdr|ren|ri|rjb|rm|rmdir|rmo|rni|rnp|rp|rsn|rsnp|rujb|rv|rvpa|rwmi|sajb|sal|saps|sasv|sbp|sc|select|set|shcm|si|sl|sleep|sls|sort|sp|spjb|spps|spsv|start|sujb|sv|swmi|tee|trcm|type|wget|where|wjb|write)\s">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(\[)([a-z_\[][\w. `,\[\]]*)(\])">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameConstant"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?&lt;!\[)(?&lt;=\S[^\*|\n]\.)\w+(?=\s+|\(|\{|\.)">
+        <token type="NameProperty"/>
+      </rule>
+      <rule pattern="(?&lt;!\w)([-+]?(?:[0-9]+)?\.?[0-9]+(?:(?:e|E)[0-9]+)?(?:F|f|D|d|M|m)?)((?i:[kmgtp]b)?)\b">
+        <bygroups>
+          <token type="LiteralNumberFloat"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="-[a-z_]\w*:*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\w+">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[.,;@{}\[\]$()=+*/\\&amp;%!~?^\x60|&lt;&gt;-]|::">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="requires">
+      <rule pattern="\s*\n|\s*$">
+        <token type="TextWhitespace"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="-(?i:modules|pssnapin|runasadministrator|ahellid|version|assembly|psedition)">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="-\S*\b">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\s+(\S+)">
+        <token type="NameAttribute"/>
+      </rule>
+    </state>
+    <state name="child">
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="multline">
+      <rule pattern="[^#&amp;.]+">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="#(&gt;|&amp;gt;)">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(\s*\.)(component|description|example|externalhelp|forwardhelpcategory|forwardhelptargetname|functionality|inputs|link|notes|outputs|parameter|remotehelprunspace|role|synopsis)(\s*$)">
+        <bygroups>
+          <token type="CommentMultiline"/>
+          <token type="LiteralStringDoc"/>
+          <token type="CommentMultiline"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[#&amp;.]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="`[0abfnrtv&#39;\&#34;$`]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^$`&#34;]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\$\(">
+        <token type="Punctuation"/>
+        <push state="child"/>
+      </rule>
+      <rule pattern="((\$)((global|script|private|env):)?\w+)|((\$){((global|script|private|env):)?\w+})">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="&#34;&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="[`$]">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="heredoc-double">
+      <rule pattern="\n&#34;@">
+        <token type="LiteralStringHeredoc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\$\(">
+        <token type="Punctuation"/>
+        <push state="child"/>
+      </rule>
+      <rule pattern="((\$)((global|script|private|env):)?\w+)|((\$){((global|script|private|env):)?\w+})">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[^@\n]+&#34;]">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+      <rule pattern=".">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/prolog.xml 🔗

@@ -0,0 +1,115 @@
+<lexer>
+  <config>
+    <name>Prolog</name>
+    <alias>prolog</alias>
+    <filename>*.ecl</filename>
+    <filename>*.prolog</filename>
+    <filename>*.pro</filename>
+    <filename>*.pl</filename>
+    <mime_type>text/x-prolog</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="nested-comment"/>
+      </rule>
+      <rule pattern="%.*">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="0\&#39;.">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="0b[01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0o[0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d\d?\&#39;[a-zA-Z0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\d*\.\d+)([eE][+-]?[0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[\[\](){}|.,;!]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern=":-|--&gt;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&#34;(?:\\x[0-9a-fA-F]+\\|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|\\[0-7]+\\|\\[&#34;\nabcefnrstv]|[^\\&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(?:&#39;&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringAtom"/>
+      </rule>
+      <rule pattern="is\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(&lt;|&gt;|=&lt;|&gt;=|==|=:=|=|/|//|\*|\+|-)(?=\s|[a-zA-Z0-9\[])">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(mod|div|not)\b">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="_">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="([a-z]+)(:)">
+        <bygroups>
+          <token type="NameNamespace"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([a-zÀ-῿぀-퟿-￯][\w$À-῿぀-퟿-￯]*)(\s*)(:-|--&gt;)">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([a-zÀ-῿぀-퟿-￯][\w$À-῿぀-퟿-￯]*)(\s*)(\()">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zÀ-῿぀-퟿-￯][\w$À-῿぀-퟿-￯]*">
+        <token type="LiteralStringAtom"/>
+      </rule>
+      <rule pattern="[#&amp;*+\-./:&lt;=&gt;?@\\^~¡-¿‐-〿]+">
+        <token type="LiteralStringAtom"/>
+      </rule>
+      <rule pattern="[A-Z_]\w*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\s+|[ -‏￰-�￯]">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="nested-comment">
+      <rule pattern="\*/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="[^*/]+">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="[*/]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/promela.xml 🔗

@@ -0,0 +1,119 @@
+
+<lexer>
+  <config>
+    <name>Promela</name>
+    <alias>promela</alias>
+    <filename>*.pml</filename>
+    <filename>*.prom</filename>
+    <filename>*.prm</filename>
+    <filename>*.promela</filename>
+    <filename>*.pr</filename>
+    <filename>*.pm</filename>
+    <mime_type>text/x-promela</mime_type>
+  </config>
+  <rules>
+    <state name="statements">
+      <rule pattern="(\[\]|&lt;&gt;|/\\|\\/)|(U|W|V)\b"><token type="Operator"/></rule>
+      <rule pattern="@"><token type="Punctuation"/></rule>
+      <rule pattern="(\.)([a-zA-Z_]\w*)"><bygroups><token type="Operator"/><token type="NameAttribute"/></bygroups></rule>
+      <rule><include state="keywords"/></rule>
+      <rule><include state="types"/></rule>
+      <rule pattern="([LuU]|u8)?(&quot;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralString"/></bygroups><push state="string"/></rule>
+      <rule pattern="([LuU]|u8)?(&#x27;)(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\&#x27;\n])(&#x27;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringChar"/><token type="LiteralStringChar"/><token type="LiteralStringChar"/></bygroups></rule>
+      <rule pattern="0[xX]([0-9a-fA-F](\&#x27;?[0-9a-fA-F])*\.[0-9a-fA-F](\&#x27;?[0-9a-fA-F])*|\.[0-9a-fA-F](\&#x27;?[0-9a-fA-F])*|[0-9a-fA-F](\&#x27;?[0-9a-fA-F])*)[pP][+-]?[0-9a-fA-F](\&#x27;?[0-9a-fA-F])*[lL]?"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="(-)?(\d(\&#x27;?\d)*\.\d(\&#x27;?\d)*|\.\d(\&#x27;?\d)*|\d(\&#x27;?\d)*)[eE][+-]?\d(\&#x27;?\d)*[fFlL]?"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="(-)?((\d(\&#x27;?\d)*\.(\d(\&#x27;?\d)*)?|\.\d(\&#x27;?\d)*)[fFlL]?)|(\d(\&#x27;?\d)*[fFlL])"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="(-)?0[xX][0-9a-fA-F](\&#x27;?[0-9a-fA-F])*(([uU][lL]{0,2})|[lL]{1,2}[uU]?)?"><token type="LiteralNumberHex"/></rule>
+      <rule pattern="(-)?0[bB][01](\&#x27;?[01])*(([uU][lL]{0,2})|[lL]{1,2}[uU]?)?"><token type="LiteralNumberBin"/></rule>
+      <rule pattern="(-)?0(\&#x27;?[0-7])+(([uU][lL]{0,2})|[lL]{1,2}[uU]?)?"><token type="LiteralNumberOct"/></rule>
+      <rule pattern="(-)?\d(\&#x27;?\d)*(([uU][lL]{0,2})|[lL]{1,2}[uU]?)?"><token type="LiteralNumberInteger"/></rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]"><token type="Operator"/></rule>
+      <rule pattern="[()\[\],.]"><token type="Punctuation"/></rule>
+      <rule pattern="(true|false|NULL)\b"><token type="NameBuiltin"/></rule>
+      <rule pattern="(?!\d)(?:[\w$]|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8})+"><token type="Name"/></rule>
+    </state>
+    <state name="types">
+      <rule pattern="(bit|bool|byte|pid|short|int|unsigned)\b"><token type="KeywordType"/></rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(atomic|break|d_step|do|od|for|in|goto|if|fi|unless)\b"><token type="Keyword"/></rule>
+      <rule pattern="(assert|get_priority|printf|printm|set_priority)\b"><token type="NameFunction"/></rule>
+      <rule pattern="(c_code|c_decl|c_expr|c_state|c_track)\b"><token type="Keyword"/></rule>
+      <rule pattern="(_|_last|_nr_pr|_pid|_priority|else|np_|STDIN)\b"><token type="NameBuiltin"/></rule>
+      <rule pattern="(empty|enabled|eval|full|len|nempty|nfull|pc_value)\b"><token type="NameFunction"/></rule>
+      <rule pattern="run\b"><token type="OperatorWord"/></rule>
+      <rule pattern="(active|chan|D_proctype|hidden|init|local|mtype|never|notrace|proctype|show|trace|typedef|xr|xs)\b"><token type="KeywordDeclaration"/></rule>
+      <rule pattern="(priority|provided)\b"><token type="Keyword"/></rule>
+      <rule pattern="(inline|ltl|select)\b"><token type="KeywordDeclaration"/></rule>
+      <rule pattern="skip\b"><token type="Keyword"/></rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="^#if\s+0"><token type="CommentPreproc"/><push state="if0"/></rule>
+      <rule pattern="^#"><token type="CommentPreproc"/><push state="macro"/></rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#if\s+0)"><bygroups><usingself state="root"/><token type="CommentPreproc"/></bygroups><push state="if0"/></rule>
+      <rule pattern="^(\s*(?:/[*].*?[*]/\s*)?)(#)"><bygroups><usingself state="root"/><token type="CommentPreproc"/></bygroups><push state="macro"/></rule>
+      <rule pattern="(^[ \t]*)(?!(?:public|private|protected|default)\b)((?!\d)(?:[\w$]|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8})+)(\s*)(:)(?!:)"><bygroups><token type="TextWhitespace"/><token type="NameLabel"/><token type="TextWhitespace"/><token type="Punctuation"/></bygroups></rule>
+      <rule pattern="\n"><token type="TextWhitespace"/></rule>
+      <rule pattern="[^\S\n]+"><token type="TextWhitespace"/></rule>
+      <rule pattern="\\\n"><token type="Text"/></rule>
+      <rule pattern="//(?:.|(?&lt;=\\)\n)*\n"><token type="CommentSingle"/></rule>
+      <rule pattern="/(?:\\\n)?[*](?:[^*]|[*](?!(?:\\\n)?/))*[*](?:\\\n)?/"><token type="CommentMultiline"/></rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*"><token type="CommentMultiline"/></rule>
+    </state>
+    <state name="root">
+      <rule><include state="whitespace"/></rule>
+      <rule><include state="keywords"/></rule>
+      <rule pattern="((?!\d)(?:[\w$]|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|::)+(?:[&amp;*\s])+)(\s*(?:(?:(?://(?:.|(?&lt;=\\)\n)*\n)|(?:/(?:\\\n)?[*](?:[^*]|[*](?!(?:\\\n)?/))*[*](?:\\\n)?/))\s*)*)((?!\d)(?:[\w$]|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|::)+)(\s*(?:(?:(?://(?:.|(?&lt;=\\)\n)*\n)|(?:/(?:\\\n)?[*](?:[^*]|[*](?!(?:\\\n)?/))*[*](?:\\\n)?/))\s*)*)(\([^;&quot;\&#x27;)]*?\))(\s*(?:(?:(?://(?:.|(?&lt;=\\)\n)*\n)|(?:/(?:\\\n)?[*](?:[^*]|[*](?!(?:\\\n)?/))*[*](?:\\\n)?/))\s*)*)([^;{/&quot;\&#x27;]*)(\{)"><bygroups><usingself state="root"/><usingself state="whitespace"/><token type="NameFunction"/><usingself state="whitespace"/><usingself state="root"/><usingself state="whitespace"/><usingself state="root"/><token type="Punctuation"/></bygroups><push state="function"/></rule>
+      <rule pattern="((?!\d)(?:[\w$]|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|::)+(?:[&amp;*\s])+)(\s*(?:(?:(?://(?:.|(?&lt;=\\)\n)*\n)|(?:/(?:\\\n)?[*](?:[^*]|[*](?!(?:\\\n)?/))*[*](?:\\\n)?/))\s*)*)((?!\d)(?:[\w$]|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|::)+)(\s*(?:(?:(?://(?:.|(?&lt;=\\)\n)*\n)|(?:/(?:\\\n)?[*](?:[^*]|[*](?!(?:\\\n)?/))*[*](?:\\\n)?/))\s*)*)(\([^;&quot;\&#x27;)]*?\))(\s*(?:(?:(?://(?:.|(?&lt;=\\)\n)*\n)|(?:/(?:\\\n)?[*](?:[^*]|[*](?!(?:\\\n)?/))*[*](?:\\\n)?/))\s*)*)([^;/&quot;\&#x27;]*)(;)"><bygroups><usingself state="root"/><usingself state="whitespace"/><token type="NameFunction"/><usingself state="whitespace"/><usingself state="root"/><usingself state="whitespace"/><usingself state="root"/><token type="Punctuation"/></bygroups></rule>
+      <rule><include state="types"/></rule>
+      <rule><push state="statement"/></rule>
+    </state>
+    <state name="statement">
+      <rule><include state="whitespace"/></rule>
+      <rule><include state="statements"/></rule>
+      <rule pattern="\}"><token type="Punctuation"/></rule>
+      <rule pattern="[{;]"><token type="Punctuation"/><pop depth="1"/></rule>
+    </state>
+    <state name="function">
+      <rule><include state="whitespace"/></rule>
+      <rule><include state="statements"/></rule>
+      <rule pattern=";"><token type="Punctuation"/></rule>
+      <rule pattern="\{"><token type="Punctuation"/><push/></rule>
+      <rule pattern="\}"><token type="Punctuation"/><pop depth="1"/></rule>
+    </state>
+    <state name="string">
+      <rule pattern="&quot;"><token type="LiteralString"/><pop depth="1"/></rule>
+      <rule pattern="\\([\\abfnrtv&quot;\&#x27;]|x[a-fA-F0-9]{2,4}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8}|[0-7]{1,3})"><token type="LiteralStringEscape"/></rule>
+      <rule pattern="[^\\&quot;\n]+"><token type="LiteralString"/></rule>
+      <rule pattern="\\\n"><token type="LiteralString"/></rule>
+      <rule pattern="\\"><token type="LiteralString"/></rule>
+    </state>
+    <state name="macro">
+      <rule pattern="(\s*(?:/[*].*?[*]/\s*)?)(include)(\s*(?:/[*].*?[*]/\s*)?)(&quot;[^&quot;]+&quot;)([^\n]*)"><bygroups><usingself state="root"/><token type="CommentPreproc"/><usingself state="root"/><token type="CommentPreprocFile"/><token type="CommentSingle"/></bygroups></rule>
+      <rule pattern="(\s*(?:/[*].*?[*]/\s*)?)(include)(\s*(?:/[*].*?[*]/\s*)?)(&lt;[^&gt;]+&gt;)([^\n]*)"><bygroups><usingself state="root"/><token type="CommentPreproc"/><usingself state="root"/><token type="CommentPreprocFile"/><token type="CommentSingle"/></bygroups></rule>
+      <rule pattern="[^/\n]+"><token type="CommentPreproc"/></rule>
+      <rule pattern="/[*](.|\n)*?[*]/"><token type="CommentMultiline"/></rule>
+      <rule pattern="//.*?\n"><token type="CommentSingle"/><pop depth="1"/></rule>
+      <rule pattern="/"><token type="CommentPreproc"/></rule>
+      <rule pattern="(?&lt;=\\)\n"><token type="CommentPreproc"/></rule>
+      <rule pattern="\n"><token type="CommentPreproc"/><pop depth="1"/></rule>
+    </state>
+    <state name="if0">
+      <rule pattern="^\s*#if.*?(?&lt;!\\)\n"><token type="CommentPreproc"/><push/></rule>
+      <rule pattern="^\s*#el(?:se|if).*\n"><token type="CommentPreproc"/><pop depth="1"/></rule>
+      <rule pattern="^\s*#endif.*?(?&lt;!\\)\n"><token type="CommentPreproc"/><pop depth="1"/></rule>
+      <rule pattern=".*?\n"><token type="Comment"/></rule>
+    </state>
+    <state name="classname">
+      <rule pattern="(?!\d)(?:[\w$]|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8})+"><token type="NameClass"/><pop depth="1"/></rule>
+      <rule pattern="\s*(?=&gt;)"><token type="Text"/><pop depth="1"/></rule>
+      <rule><pop depth="1"/></rule>
+    </state>
+    <state name="case-value">
+      <rule pattern="(?&lt;!:)(:)(?!:)"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule pattern="(?!\d)(?:[\w$]|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8})+"><token type="NameConstant"/></rule>
+      <rule><include state="whitespace"/></rule>
+      <rule><include state="statements"/></rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/promql.xml 🔗

@@ -0,0 +1,123 @@
+<lexer>
+  <config>
+    <name>PromQL</name>
+    <alias>promql</alias>
+    <filename>*.promql</filename>
+  </config>
+  <rules>
+    <state name="range">
+      <rule pattern="\]">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[1-9][0-9]*[smhdwy]">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="function">
+      <rule pattern="\)">
+        <token type="Operator"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Operator"/>
+        <push/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(group_right|group_left|ignoring|without|offset|bool|on|by)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(count_values|quantile|bottomk|stdvar|stddev|count|group|topk|sum|min|max|avg)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(histogram_quantile|quantile_over_time|absent_over_time|stdvar_over_time|stddev_over_time|count_over_time|predict_linear|label_replace|max_over_time|avg_over_time|sum_over_time|days_in_month|min_over_time|day_of_month|holt_winters|day_of_week|label_join|sort_desc|clamp_max|timestamp|clamp_min|increase|changes|resets|vector|absent|idelta|minute|scalar|log10|delta|month|floor|deriv|round|irate|rate|year|sort|log2|sqrt|ceil|time|hour|abs|exp|ln)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="[1-9][0-9]*[smhdwy]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="-?[0-9]+\.[0-9]+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="-?[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="#.*?$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(\+|\-|\*|\/|\%|\^)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="==|!=|&gt;=|&lt;=|&lt;|&gt;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="and|or|unless">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="[_a-zA-Z][a-zA-Z0-9_]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="([&#34;\&#39;])(.*?)([&#34;\&#39;])">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\(">
+        <token type="Operator"/>
+        <push state="function"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push state="labels"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="Punctuation"/>
+        <push state="range"/>
+      </rule>
+    </state>
+    <state name="labels">
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="([_a-zA-Z][a-zA-Z0-9_]*?)(\s*?)(=~|!=|=|!~)(\s*?)(&#34;|&#39;)(.*?)(&#34;|&#39;)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="TextWhitespace"/>
+          <token type="Operator"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/properties.xml 🔗

@@ -0,0 +1,45 @@
+<lexer>
+  <config>
+    <name>properties</name>
+    <alias>java-properties</alias>
+    <filename>*.properties</filename>
+    <mime_type>text/x-java-properties</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^([ \t\f]*)([#!].*)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="CommentSingle"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^([ \t\f]*)(\S+?)([ \t\f]*)([=:])([ \t\f]*)(.*(?:(?&lt;=\\)\n.*)*)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^([ \t\f]*)(\S+)([ \t\f]+)(.*(?:(?&lt;=\\)\n.*)*)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^([ \t\f]*)(\w+)$">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameAttribute"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/protocol_buffer.xml 🔗

@@ -0,0 +1,118 @@
+<lexer>
+  <config>
+    <name>Protocol Buffer</name>
+    <alias>protobuf</alias>
+    <alias>proto</alias>
+    <filename>*.proto</filename>
+  </config>
+  <rules>
+    <state name="package">
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="message">
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="type">
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[ \t]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[,;{}\[\]()&lt;&gt;]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="/(\\\n)?/(\n|(.|\n)*?[^\\]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?\*(.|\n)*?\*(\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\b(extensions|required|repeated|optional|returns|default|option|packed|import|ctype|oneof|max|rpc|to)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(sfixed32|sfixed64|fixed32|fixed64|sint32|sint64|double|string|uint32|uint64|int32|float|int64|bytes|bool)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(true|false)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(package)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="package"/>
+      </rule>
+      <rule pattern="(message|extend)(\s+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="message"/>
+      </rule>
+      <rule pattern="(enum|group|service)(\s+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="type"/>
+      </rule>
+      <rule pattern="\&#34;.*?\&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\&#39;.*?\&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[LlUu]*">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+[fF])[fF]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(\-?(inf|nan))\b">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+[LlUu]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[0-7]+[LlUu]*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="\d+[LlUu]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[+\-=]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="([a-zA-Z_][\w.]*)([ \t]*)(=)">
+        <bygroups>
+          <token type="Name"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z_][\w.]*">
+        <token type="Name"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/prql.xml 🔗

@@ -0,0 +1,161 @@
+<lexer>
+  <config>
+    <name>PRQL</name>
+    <alias>prql</alias>
+    <filename>*.prql</filename>
+    <mime_type>application/prql</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="#!.*"><token type="LiteralStringDoc"/></rule>
+      <rule pattern="#.*"><token type="CommentSingle"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="^(\s*)(module)(\s*)"><bygroups><token type="TextWhitespace"/><token type="KeywordNamespace"/><token type="TextWhitespace"/></bygroups><push state="imports"/></rule>
+      <rule pattern="(bool|int|int8|int16|int32|int64|int128|float|text|set)\b"><token type="KeywordType"/></rule>
+      <rule pattern="^prql "><token type="KeywordReserved"/></rule>
+      <rule pattern="let"><token type="KeywordDeclaration"/></rule>
+      <rule><include state="keywords"/></rule>
+      <rule><include state="expr"/></rule>
+      <rule pattern="^[A-Za-z_][a-zA-Z0-9_]*"><token type="Keyword"/></rule>
+    </state>
+    <state name="expr">
+      <rule pattern="(f)(&quot;&quot;&quot;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringDouble"/></bygroups><combined state="fstringescape" state="tdqf"/></rule>
+      <rule pattern="(f)(&#x27;&#x27;&#x27;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringSingle"/></bygroups><combined state="fstringescape" state="tsqf"/></rule>
+      <rule pattern="(f)(&quot;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringDouble"/></bygroups><combined state="fstringescape" state="dqf"/></rule>
+      <rule pattern="(f)(&#x27;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringSingle"/></bygroups><combined state="fstringescape" state="sqf"/></rule>
+      <rule pattern="(s)(&quot;&quot;&quot;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringDouble"/></bygroups><combined state="stringescape" state="tdqf"/></rule>
+      <rule pattern="(s)(&#x27;&#x27;&#x27;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringSingle"/></bygroups><combined state="stringescape" state="tsqf"/></rule>
+      <rule pattern="(s)(&quot;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringDouble"/></bygroups><combined state="stringescape" state="dqf"/></rule>
+      <rule pattern="(s)(&#x27;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringSingle"/></bygroups><combined state="stringescape" state="sqf"/></rule>
+      <rule pattern="(?i)(r)(&quot;&quot;&quot;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringDouble"/></bygroups><push state="tdqs"/></rule>
+      <rule pattern="(?i)(r)(&#x27;&#x27;&#x27;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringSingle"/></bygroups><push state="tsqs"/></rule>
+      <rule pattern="(?i)(r)(&quot;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringDouble"/></bygroups><push state="dqs"/></rule>
+      <rule pattern="(?i)(r)(&#x27;)"><bygroups><token type="LiteralStringAffix"/><token type="LiteralStringSingle"/></bygroups><push state="sqs"/></rule>
+      <rule pattern="&quot;&quot;&quot;"><token type="LiteralStringDouble"/><combined state="stringescape" state="tdqs"/></rule>
+      <rule pattern="&#x27;&#x27;&#x27;"><token type="LiteralStringSingle"/><combined state="stringescape" state="tsqs"/></rule>
+      <rule pattern="&quot;"><token type="LiteralStringDouble"/><combined state="stringescape" state="dqs"/></rule>
+      <rule pattern="&#x27;"><token type="LiteralStringSingle"/><combined state="stringescape" state="sqs"/></rule>
+      <rule pattern="@\d{4}-\d{2}-\d{2}T\d{2}(:\d{2})?(:\d{2})?(\.\d{1,6})?(Z|[+-]\d{1,2}(:\d{1,2})?)?"><token type="LiteralDate"/></rule>
+      <rule pattern="@\d{4}-\d{2}-\d{2}"><token type="LiteralDate"/></rule>
+      <rule pattern="@\d{2}(:\d{2})?(:\d{2})?(\.\d{1,6})?(Z|[+-]\d{1,2}(:\d{1,2})?)?"><token type="LiteralDate"/></rule>
+      <rule pattern="[^\S\n]+"><token type="Text"/></rule>
+      <rule><include state="numbers"/></rule>
+      <rule pattern="-&gt;|=&gt;|==|!=|&gt;=|&lt;=|~=|&amp;&amp;|\|\||\?\?|\/\/"><token type="Operator"/></rule>
+      <rule pattern="[-~+/*%=&lt;&gt;&amp;^|.@]"><token type="Operator"/></rule>
+      <rule pattern="[]{}:(),;[]"><token type="Punctuation"/></rule>
+      <rule><include state="functions"/></rule>
+      <rule pattern="[A-Za-z_][a-zA-Z0-9_]*"><token type="NameVariable"/></rule>
+    </state>
+    <state name="numbers">
+      <rule pattern="(\d(?:_?\d)*\.(?:\d(?:_?\d)*)?|(?:\d(?:_?\d)*)?\.\d(?:_?\d)*)([eE][+-]?\d(?:_?\d)*)?"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="\d(?:_?\d)*[eE][+-]?\d(?:_?\d)*j?"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="0[oO](?:_?[0-7])+"><token type="LiteralNumberOct"/></rule>
+      <rule pattern="0[bB](?:_?[01])+"><token type="LiteralNumberBin"/></rule>
+      <rule pattern="0[xX](?:_?[a-fA-F0-9])+"><token type="LiteralNumberHex"/></rule>
+      <rule pattern="\d(?:_?\d)*"><token type="LiteralNumberInteger"/></rule>
+    </state>
+    <state name="fstringescape">
+      <rule><include state="stringescape"/></rule>
+    </state>
+    <state name="bytesescape">
+      <rule pattern="\\([\\bfnrt&quot;\&#x27;]|\n|x[a-fA-F0-9]{2}|[0-7]{1,3})"><token type="LiteralStringEscape"/></rule>
+    </state>
+    <state name="stringescape">
+      <rule pattern="\\(N\{.*?\}|u\{[a-fA-F0-9]{1,6}\})"><token type="LiteralStringEscape"/></rule>
+      <rule><include state="bytesescape"/></rule>
+    </state>
+    <state name="fstrings-single">
+      <rule pattern="\}"><token type="LiteralStringInterpol"/></rule>
+      <rule pattern="\{"><token type="LiteralStringInterpol"/><push state="expr-inside-fstring"/></rule>
+      <rule pattern="[^\\\&#x27;&quot;{}\n]+"><token type="LiteralStringSingle"/></rule>
+      <rule pattern="[\&#x27;&quot;\\]"><token type="LiteralStringSingle"/></rule>
+    </state>
+    <state name="fstrings-double">
+      <rule pattern="\}"><token type="LiteralStringInterpol"/></rule>
+      <rule pattern="\{"><token type="LiteralStringInterpol"/><push state="expr-inside-fstring"/></rule>
+      <rule pattern="[^\\\&#x27;&quot;{}\n]+"><token type="LiteralStringDouble"/></rule>
+      <rule pattern="[\&#x27;&quot;\\]"><token type="LiteralStringDouble"/></rule>
+    </state>
+    <state name="strings-single">
+      <rule pattern="\{((\w+)((\.\w+)|(\[[^\]]+\]))*)?(\:(.?[&lt;&gt;=\^])?[-+ ]?#?0?(\d+)?,?(\.\d+)?[E-GXb-gnosx%]?)?\}"><token type="LiteralStringInterpol"/></rule>
+      <rule pattern="[^\\\&#x27;&quot;%{\n]+"><token type="LiteralStringSingle"/></rule>
+      <rule pattern="[\&#x27;&quot;\\]"><token type="LiteralStringSingle"/></rule>
+      <rule pattern="%|(\{{1,2})"><token type="LiteralStringSingle"/></rule>
+    </state>
+    <state name="strings-double">
+      <rule pattern="\{((\w+)((\.\w+)|(\[[^\]]+\]))*)?(\:(.?[&lt;&gt;=\^])?[-+ ]?#?0?(\d+)?,?(\.\d+)?[E-GXb-gnosx%]?)?\}"><token type="LiteralStringInterpol"/></rule>
+      <rule pattern="[^\\\&#x27;&quot;%{\n]+"><token type="LiteralStringDouble"/></rule>
+      <rule pattern="[\&#x27;&quot;\\]"><token type="LiteralStringDouble"/></rule>
+      <rule pattern="%|(\{{1,2})"><token type="LiteralStringDouble"/></rule>
+    </state>
+    <state name="dqf">
+      <rule pattern="&quot;"><token type="LiteralStringDouble"/><pop depth="1"/></rule>
+      <rule pattern="\\\\|\\&quot;|\\\n"><token type="LiteralStringEscape"/></rule>
+      <rule><include state="fstrings-double"/></rule>
+    </state>
+    <state name="sqf">
+      <rule pattern="&#x27;"><token type="LiteralStringSingle"/><pop depth="1"/></rule>
+      <rule pattern="\\\\|\\&#x27;|\\\n"><token type="LiteralStringEscape"/></rule>
+      <rule><include state="fstrings-single"/></rule>
+    </state>
+    <state name="dqs">
+      <rule pattern="&quot;"><token type="LiteralStringDouble"/><pop depth="1"/></rule>
+      <rule pattern="\\\\|\\&quot;|\\\n"><token type="LiteralStringEscape"/></rule>
+      <rule><include state="strings-double"/></rule>
+    </state>
+    <state name="sqs">
+      <rule pattern="&#x27;"><token type="LiteralStringSingle"/><pop depth="1"/></rule>
+      <rule pattern="\\\\|\\&#x27;|\\\n"><token type="LiteralStringEscape"/></rule>
+      <rule><include state="strings-single"/></rule>
+    </state>
+    <state name="tdqf">
+      <rule pattern="&quot;&quot;&quot;"><token type="LiteralStringDouble"/><pop depth="1"/></rule>
+      <rule><include state="fstrings-double"/></rule>
+      <rule pattern="\n"><token type="LiteralStringDouble"/></rule>
+    </state>
+    <state name="tsqf">
+      <rule pattern="&#x27;&#x27;&#x27;"><token type="LiteralStringSingle"/><pop depth="1"/></rule>
+      <rule><include state="fstrings-single"/></rule>
+      <rule pattern="\n"><token type="LiteralStringSingle"/></rule>
+    </state>
+    <state name="tdqs">
+      <rule pattern="&quot;&quot;&quot;"><token type="LiteralStringDouble"/><pop depth="1"/></rule>
+      <rule><include state="strings-double"/></rule>
+      <rule pattern="\n"><token type="LiteralStringDouble"/></rule>
+    </state>
+    <state name="tsqs">
+      <rule pattern="&#x27;&#x27;&#x27;"><token type="LiteralStringSingle"/><pop depth="1"/></rule>
+      <rule><include state="strings-single"/></rule>
+      <rule pattern="\n"><token type="LiteralStringSingle"/></rule>
+    </state>
+    <state name="expr-inside-fstring">
+      <rule pattern="[{([]"><token type="Punctuation"/><push state="expr-inside-fstring-inner"/></rule>
+      <rule pattern="(=\s*)?\}"><token type="LiteralStringInterpol"/><pop depth="1"/></rule>
+      <rule pattern="(=\s*)?:"><token type="LiteralStringInterpol"/><pop depth="1"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule><include state="expr"/></rule>
+    </state>
+    <state name="expr-inside-fstring-inner">
+      <rule pattern="[{([]"><token type="Punctuation"/><push state="expr-inside-fstring-inner"/></rule>
+      <rule pattern="[])}]"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule><include state="expr"/></rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(into|case|type|module|internal)\b"><token type="Keyword"/></rule>
+      <rule pattern="(true|false|null)\b"><token type="KeywordConstant"/></rule>
+    </state>
+    <state name="functions">
+      <rule pattern="(min|max|sum|average|stddev|every|any|concat_array|count|lag|lead|first|last|rank|rank_dense|row_number|round|as|in|tuple_every|tuple_map|tuple_zip|_eq|_is_null|from_text|lower|upper|read_parquet|read_csv)\b"><token type="NameFunction"/></rule>
+    </state>
+    <state name="comment">
+      <rule pattern="-(?!\})"><token type="CommentMultiline"/></rule>
+      <rule pattern="\{-"><token type="CommentMultiline"/><push state="comment"/></rule>
+      <rule pattern="[^-}]"><token type="CommentMultiline"/></rule>
+      <rule pattern="-\}"><token type="CommentMultiline"/><pop depth="1"/></rule>
+    </state>
+    <state name="imports">
+      <rule pattern="\w+(\.\w+)*"><token type="NameClass"/><pop depth="1"/></rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/psl.xml 🔗

@@ -0,0 +1,213 @@
+<lexer>
+  <config>
+    <name>PSL</name>
+    <alias>psl</alias>
+    <filename>*.psl</filename>
+    <filename>*.BATCH</filename>
+    <filename>*.TRIG</filename>
+    <filename>*.PROC</filename>
+    <mime_type>text/x-psl</mime_type>
+  </config>
+  <rules>
+  <!-- NameFunction|TypeName -->
+    <state name="root">
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*](.|\n)*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\+|-|\*|\/|\b%\b|&lt;|&gt;|=|'|\band\b|\bor\b|_|:|!">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[{}(,)\[\]]">
+        <token type="Punctuation"/>
+        <push state="root"/>
+      </rule>
+      <rule pattern="#">
+        <token type="KeywordPseudo"/>
+        <push state="directive"/>
+      </rule>
+      <rule pattern="\.?\d+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="&quot;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="\b(do|set|if|else|for|while|quit|catch|return|ret)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(true|false)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="\btype\b">
+        <token type="KeywordDeclaration"/>
+        <push state="typename"/>
+      </rule>
+      <rule pattern="\b(public|req|private|void)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="\b(Boolean|String|Number|Date)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(\${0,2}[_a-zA-Z]\w*)?(\^[_a-zA-Z]\w*)">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([_a-zA-Z]\w*)(\.[_a-zA-Z]\w*)(\()">
+        <bygroups>
+          <token type="Name"/>
+          <token type="NameFunction"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\${0,2}[_a-zA-Z]\w*)(\.[_a-zA-Z]\w*)">
+        <bygroups>
+          <token type="Name"/>
+          <token type="NameProperty"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\.?(%|\${0,2})[_a-zA-Z]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&quot;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\([\\abfnrtv&quot;\&#x27;]|x[a-fA-F0-9]{2,4}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&quot;\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="typename">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\b(public|req|private|void)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="([_a-zA-Z]\w*)?(\s+)([_a-zA-Z]\w*)">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="Name"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="directive">
+      <rule pattern="ACCEPT">
+        <token type="KeywordPseudo"/>
+        <push state="accept-directive"/>
+      </rule>
+      <rule pattern="CLASSDEF">
+        <token type="KeywordPseudo"/>
+        <push state="classdef-directive"/>
+      </rule>
+      <rule pattern="IF|ELSEIF">
+        <token type="KeywordPseudo"/>
+        <push state="if-directive"/>
+      </rule>
+      <rule pattern="PACKAGE">
+        <token type="KeywordPseudo"/>
+        <push state="package-directive"/>
+      </rule>
+      <rule pattern="PROPERTYDEF">
+        <token type="KeywordPseudo"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="INFO|WARN">
+        <token type="KeywordPseudo"/>
+        <push state="warn-directive"/>
+      </rule>
+      <rule pattern="OPTION">
+        <token type="KeywordPseudo"/>
+        <push state="option-directive"/>
+      </rule>
+      <rule pattern="BYPASS|ELSE|END|ENDBYPASS|ENDIF|OPTIMIZE">
+        <token type="KeywordPseudo"/>
+        <push state="other-directive"/>
+      </rule>
+    </state>
+    <state name="accept-directive">
+      <rule pattern=".+$">
+        <token type="CommentSingle"/>
+      </rule>
+    </state>
+    <state name="other-directive">
+      <rule pattern=".+$">
+        <token type="CommentSingle"/>
+      </rule>
+    </state>
+    <state name="classdef-directive">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="delimiter|extends">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="public">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="=">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[\w\d]+">
+        <token type="NameClass"/>
+      </rule>
+    </state>
+    <state name="if-directive">
+      <rule pattern=".+$">
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="option-directive">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="ON|OFF">
+        <token type="KeywordConstant"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[\w\d]+">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="package-directive">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\w+">
+        <token type="Name"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/puppet.xml 🔗

@@ -0,0 +1,100 @@
+<lexer>
+  <config>
+    <name>Puppet</name>
+    <alias>puppet</alias>
+    <filename>*.pp</filename>
+  </config>
+  <rules>
+    <state name="strings">
+      <rule pattern="&#34;([^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;(\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule>
+        <include state="names"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+      <rule>
+        <include state="operators"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+      <rule pattern="[]{}:(),;[]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="comments">
+      <rule pattern="\s*#.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*](.|\n)*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="operators">
+      <rule pattern="(=&gt;|\?|&lt;|&gt;|=|\+|-|/|\*|~|!|\|)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(in|and|or|not)\b">
+        <token type="OperatorWord"/>
+      </rule>
+    </state>
+    <state name="names">
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="(\$\S+)(\[)(\S+)(\])">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Punctuation"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\$\S+">
+        <token type="NameVariable"/>
+      </rule>
+    </state>
+    <state name="numbers">
+      <rule pattern="(\d+\.\d*|\d*\.\d+)([eE][+-]?[0-9]+)?j?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+[eE][+-]?[0-9]+j?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[0-7]+j?">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[xX][a-fA-F0-9]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d+L">
+        <token type="LiteralNumberIntegerLong"/>
+      </rule>
+      <rule pattern="\d+j?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="keywords">

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/python.xml 🔗

@@ -0,0 +1,593 @@
+<lexer>
+  <config>
+    <name>Python</name>
+    <alias>python</alias>
+    <alias>py</alias>
+    <alias>sage</alias>
+    <alias>python3</alias>
+    <alias>py3</alias>
+    <filename>*.py</filename>
+    <filename>*.pyi</filename>
+    <filename>*.pyw</filename>
+    <filename>*.jy</filename>
+    <filename>*.sage</filename>
+    <filename>*.sc</filename>
+    <filename>SConstruct</filename>
+    <filename>SConscript</filename>
+    <filename>*.bzl</filename>
+    <filename>BUCK</filename>
+    <filename>BUILD</filename>
+    <filename>BUILD.bazel</filename>
+    <filename>WORKSPACE</filename>
+    <filename>WORKSPACE.bzlmod</filename>
+    <filename>WORKSPACE.bazel</filename>
+    <filename>MODULE.bazel</filename>
+    <filename>REPO.bazel</filename>
+    <filename>*.tac</filename>
+    <mime_type>text/x-python</mime_type>
+    <mime_type>application/x-python</mime_type>
+    <mime_type>text/x-python3</mime_type>
+    <mime_type>application/x-python3</mime_type>
+  </config>
+  <rules>
+    <state name="numbers">
+      <rule pattern="(\d(?:_?\d)*\.(?:\d(?:_?\d)*)?|(?:\d(?:_?\d)*)?\.\d(?:_?\d)*)([eE][+-]?\d(?:_?\d)*)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d(?:_?\d)*[eE][+-]?\d(?:_?\d)*j?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[oO](?:_?[0-7])+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[bB](?:_?[01])+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0[xX](?:_?[a-fA-F0-9])+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d(?:_?\d)*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="expr">
+      <rule pattern="(?i)(rf|fr)(&#34;&#34;&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <combined state="rfstringescape" state="tdqf"/>
+      </rule>
+      <rule pattern="(?i)(rf|fr)(&#39;&#39;&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <combined state="rfstringescape" state="tsqf"/>
+      </rule>
+      <rule pattern="(?i)(rf|fr)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <combined state="rfstringescape" state="dqf"/>
+      </rule>
+      <rule pattern="(?i)(rf|fr)(&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <combined state="rfstringescape" state="sqf"/>
+      </rule>
+      <rule pattern="([fF])(&#34;&#34;&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <combined state="fstringescape" state="tdqf"/>
+      </rule>
+      <rule pattern="([fF])(&#39;&#39;&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <combined state="fstringescape" state="tsqf"/>
+      </rule>
+      <rule pattern="([fF])(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <combined state="fstringescape" state="dqf"/>
+      </rule>
+      <rule pattern="([fF])(&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <combined state="fstringescape" state="sqf"/>
+      </rule>
+      <rule pattern="(?i)(rb|br|r)(&#34;&#34;&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <push state="tdqs"/>
+      </rule>
+      <rule pattern="(?i)(rb|br|r)(&#39;&#39;&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <push state="tsqs"/>
+      </rule>
+      <rule pattern="(?i)(rb|br|r)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <push state="dqs"/>
+      </rule>
+      <rule pattern="(?i)(rb|br|r)(&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <push state="sqs"/>
+      </rule>
+      <rule pattern="([uUbB]?)(&#34;&#34;&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <combined state="stringescape" state="tdqs"/>
+      </rule>
+      <rule pattern="([uUbB]?)(&#39;&#39;&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <combined state="stringescape" state="tsqs"/>
+      </rule>
+      <rule pattern="([uUbB]?)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <combined state="stringescape" state="dqs"/>
+      </rule>
+      <rule pattern="([uUbB]?)(&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <combined state="stringescape" state="sqs"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+      <rule pattern="!=|==|&lt;&lt;|&gt;&gt;|:=|[-~+/*%=&lt;&gt;&amp;^|.]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[]{}:(),;[]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(in|is|and|or|not)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule>
+        <include state="expr-keywords"/>
+      </rule>
+      <rule>
+        <include state="builtins"/>
+      </rule>
+      <rule>
+        <include state="magicfuncs"/>
+      </rule>
+      <rule>
+        <include state="magicvars"/>
+      </rule>
+      <rule>
+        <include state="name"/>
+      </rule>
+    </state>
+    <state name="fstrings-double">
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="expr-inside-fstring"/>
+      </rule>
+      <rule pattern="[^\\\&#39;&#34;{}\n]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="[\&#39;&#34;\\]">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(yield from|nonlocal|continue|finally|except|lambda|assert|global|return|raise|yield|while|break|await|async|pass|else|elif|with|try|for|del|as|if|match|case)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(False|True|None)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+    </state>
+    <state name="dqs">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\\|\\&#34;|\\\n">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule>
+        <include state="strings-double"/>
+      </rule>
+    </state>
+    <state name="fromimport">
+      <rule pattern="(\s+)(import)\b">
+        <bygroups>
+          <token type="Text"/>
+          <token type="KeywordNamespace"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\.">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="None\b">
+        <token type="NameBuiltinPseudo"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[_\p{L}][_\p{L}\p{N}]*">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="builtins">
+      <rule pattern="(?&lt;!\.)(staticmethod|classmethod|memoryview|__import__|issubclass|isinstance|frozenset|bytearray|enumerate|reversed|property|compile|complex|delattr|hasattr|setattr|globals|getattr|divmod|filter|locals|format|object|sorted|slice|print|bytes|range|input|tuple|round|super|float|eval|list|dict|repr|type|vars|hash|next|bool|open|iter|oct|pow|min|zip|max|map|bin|len|set|any|dir|all|abs|str|sum|chr|int|hex|ord|id)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(self|Ellipsis|NotImplemented|cls)\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/python_2.xml 🔗

@@ -0,0 +1,356 @@
+<lexer>
+  <config>
+    <name>Python 2</name>
+    <alias>python2</alias>
+    <alias>py2</alias>
+    <mime_type>text/x-python2</mime_type>
+    <mime_type>application/x-python2</mime_type>
+  </config>
+  <rules>
+    <state name="tdqs">
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="strings-double"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="name">
+      <rule pattern="@[\w.]+">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="magicfuncs">
+      <rule pattern="(__instancecheck__|__subclasscheck__|__getattribute__|__rfloordiv__|__ifloordiv__|__setslice__|__getslice__|__contains__|__reversed__|__floordiv__|__rtruediv__|__itruediv__|__delslice__|__rlshift__|__rrshift__|__delitem__|__rdivmod__|__nonzero__|__missing__|__delattr__|__setattr__|__irshift__|__complex__|__setitem__|__getitem__|__truediv__|__unicode__|__ilshift__|__getattr__|__delete__|__coerce__|__invert__|__lshift__|__divmod__|__rshift__|__enter__|__index__|__float__|__iadd__|__rsub__|__init__|__imul__|__rpow__|__repr__|__rmul__|__isub__|__iter__|__rmod__|__ixor__|__call__|__imod__|__long__|__hash__|__rxor__|__idiv__|__iand__|__rdiv__|__ipow__|__rcmp__|__rand__|__exit__|__radd__|__str__|__cmp__|__pos__|__pow__|__oct__|__new__|__neg__|__mul__|__mod__|__set__|__xor__|__sub__|__len__|__and__|__get__|__rop__|__add__|__ior__|__div__|__iop__|__int__|__abs__|__hex__|__ror__|__del__|__eq__|__or__|__ne__|__lt__|__le__|__ge__|__gt__|__op__)\b">
+        <token type="NameFunctionMagic"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(yield from|continue|finally|lambda|assert|global|except|return|print|yield|while|break|raise|elif|pass|exec|else|with|try|for|del|as|if)\b">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="tsqs">
+      <rule pattern="&#39;&#39;&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="strings-single"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+    <state name="stringescape">
+      <rule pattern="\\([\\abfnrtv&#34;\&#39;]|\n|N\{.*?\}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{8}|x[a-fA-F0-9]{2}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+    </state>
+    <state name="numbers">
+      <rule pattern="(\d+\.\d*|\d*\.\d+)([eE][+-]?[0-9]+)?j?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+[eE][+-]?[0-9]+j?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[0-7]+j?">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[bB][01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0[xX][a-fA-F0-9]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="\d+L">
+        <token type="LiteralNumberIntegerLong"/>
+      </rule>
+      <rule pattern="\d+j?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="import">
+      <rule pattern="(?:[ \t]|\\\n)+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="as\b">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[a-zA-Z_][\w.]*">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="magicvars">
+      <rule pattern="(__metaclass__|__defaults__|__globals__|__closure__|__weakref__|__module__|__slots__|__class__|__bases__|__file__|__func__|__dict__|__name__|__self__|__code__|__mro__|__doc__)\b">
+        <token type="NameVariableMagic"/>
+      </rule>
+    </state>
+    <state name="fromimport">
+      <rule pattern="(?:[ \t]|\\\n)+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="import\b">
+        <token type="KeywordNamespace"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="None\b">
+        <token type="NameBuiltinPseudo"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[a-zA-Z_.][\w.]*">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="strings-single">
+      <rule pattern="%(\(\w+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?[hlL]?[E-GXc-giorsux%]">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="[^\\\&#39;&#34;%\n]+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="[\&#39;&#34;\\]">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="%">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+    <state name="funcname">
+      <rule>
+        <include state="magicfuncs"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="classname">
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="backtick">
+      <rule pattern="`.*?`">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+    </state>
+    <state name="strings-double">
+      <rule pattern="%(\(\w+\))?[-#0 +]*([0-9]+|[*])?(\.([0-9]+|[*]))?[hlL]?[E-GXc-giorsux%]">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="[^\\\&#39;&#34;%\n]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="[\&#39;&#34;\\]">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="%">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="dqs">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\\|\\&#34;|\\\n">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule>
+        <include state="strings-double"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="^(\s*)([rRuUbB]{,2})(&#34;&#34;&#34;(?:.|\n)*?&#34;&#34;&#34;)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDoc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)([rRuUbB]{,2})(&#39;&#39;&#39;(?:.|\n)*?&#39;&#39;&#39;)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDoc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\A#!.+$">
+        <token type="CommentHashbang"/>
+      </rule>
+      <rule pattern="#.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="[]{}:(),;[]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(in|is|and|or|not)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="!=|==|&lt;&lt;|&gt;&gt;|[-~+/*%=&lt;&gt;&amp;^|.]">
+        <token type="Operator"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule pattern="(def)((?:\s|\\\s)+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="funcname"/>
+      </rule>
+      <rule pattern="(class)((?:\s|\\\s)+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="classname"/>
+      </rule>
+      <rule pattern="(from)((?:\s|\\\s)+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="fromimport"/>
+      </rule>
+      <rule pattern="(import)((?:\s|\\\s)+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="import"/>
+      </rule>
+      <rule>
+        <include state="builtins"/>
+      </rule>
+      <rule>
+        <include state="magicfuncs"/>
+      </rule>
+      <rule>
+        <include state="magicvars"/>
+      </rule>
+      <rule>
+        <include state="backtick"/>
+      </rule>
+      <rule pattern="([rR]|[uUbB][rR]|[rR][uUbB])(&#34;&#34;&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <push state="tdqs"/>
+      </rule>
+      <rule pattern="([rR]|[uUbB][rR]|[rR][uUbB])(&#39;&#39;&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <push state="tsqs"/>
+      </rule>
+      <rule pattern="([rR]|[uUbB][rR]|[rR][uUbB])(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <push state="dqs"/>
+      </rule>
+      <rule pattern="([rR]|[uUbB][rR]|[rR][uUbB])(&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <push state="sqs"/>
+      </rule>
+      <rule pattern="([uUbB]?)(&#34;&#34;&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <combined state="stringescape" state="tdqs"/>
+      </rule>
+      <rule pattern="([uUbB]?)(&#39;&#39;&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <combined state="stringescape" state="tsqs"/>
+      </rule>
+      <rule pattern="([uUbB]?)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringDouble"/>
+        </bygroups>
+        <combined state="stringescape" state="dqs"/>
+      </rule>
+      <rule pattern="([uUbB]?)(&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringSingle"/>
+        </bygroups>
+        <combined state="stringescape" state="sqs"/>
+      </rule>
+      <rule>
+        <include state="name"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+    </state>
+    <state name="sqs">
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\\|\\&#39;|\\\n">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule>
+        <include state="strings-single"/>
+      </rule>
+    </state>
+    <state name="builtins">
+      <rule pattern="(?&lt;!\.)(staticmethod|classmethod|__import__|isinstance|basestring|issubclass|frozenset|raw_input|bytearray|enumerate|property|callable|reversed|execfile|hasattr|setattr|compile|complex|delattr|unicode|globals|getattr|unichr|reduce|xrange|buffer|intern|filter|locals|divmod|coerce|sorted|reload|object|slice|round|float|super|input|bytes|apply|tuple|range|iter|dict|long|type|hash|vars|next|file|exit|open|repr|eval|bool|list|bin|pow|zip|ord|oct|min|set|any|max|map|all|len|sum|int|dir|hex|chr|abs|cmp|str|id)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(self|None|Ellipsis|NotImplemented|False|True|cls)\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(PendingDeprecationWarning|UnicodeTranslateError|NotImplementedError|UnicodeDecodeError|DeprecationWarning|UnicodeEncodeError|FloatingPointError|ZeroDivisionError|UnboundLocalError|KeyboardInterrupt|EnvironmentError|IndentationError|OverflowWarning|ArithmeticError|ReferenceError|AttributeError|AssertionError|RuntimeWarning|UnicodeWarning|GeneratorExit|SyntaxWarning|StandardError|BaseException|OverflowError|FutureWarning|ImportWarning|StopIteration|UnicodeError|WindowsError|RuntimeError|ImportError|UserWarning|LookupError|SyntaxError|SystemError|MemoryError|SystemExit|ValueError|IndexError|NameError|Exception|TypeError|EOFError|KeyError|VMSError|TabError|IOError|Warning|OSError)\b">
+        <token type="NameException"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/qbasic.xml 🔗

@@ -0,0 +1,173 @@
+<lexer>
+  <config>
+    <name>QBasic</name>
+    <alias>qbasic</alias>
+    <alias>basic</alias>
+    <filename>*.BAS</filename>
+    <filename>*.bas</filename>
+    <mime_type>text/basic</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\n+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="^(\s*)(\d*)(\s*)(REM .*)$">
+        <bygroups>
+          <token type="TextWhitespace"/>
+          <token type="NameLabel"/>
+          <token type="TextWhitespace"/>
+          <token type="CommentSingle"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)(\d+)(\s*)">
+        <bygroups>
+          <token type="TextWhitespace"/>
+          <token type="NameLabel"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?=[\s]*)(\w+)(?=[\s]*=)">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="(?=[^&#34;]*)\&#39;.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="&#34;[^\n&#34;]*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="(END)(\s+)(FUNCTION|IF|SELECT|SUB)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="TextWhitespace"/>
+          <token type="KeywordReserved"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(DECLARE)(\s+)([A-Z]+)(\s+)(\S+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="TextWhitespace"/>
+          <token type="NameVariable"/>
+          <token type="TextWhitespace"/>
+          <token type="Name"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(DIM)(\s+)(SHARED)(\s+)([^\s(]+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="TextWhitespace"/>
+          <token type="NameVariable"/>
+          <token type="TextWhitespace"/>
+          <token type="NameVariableGlobal"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(DIM)(\s+)([^\s(]+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="TextWhitespace"/>
+          <token type="NameVariableGlobal"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)([a-zA-Z_]+)(\s*)(\=)">
+        <bygroups>
+          <token type="TextWhitespace"/>
+          <token type="NameVariableGlobal"/>
+          <token type="TextWhitespace"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(GOTO|GOSUB)(\s+)(\w+\:?)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="TextWhitespace"/>
+          <token type="NameLabel"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(SUB)(\s+)(\w+\:?)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="TextWhitespace"/>
+          <token type="NameLabel"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="declarations"/>
+      </rule>
+      <rule>
+        <include state="functions"/>
+      </rule>
+      <rule>
+        <include state="metacommands"/>
+      </rule>
+      <rule>
+        <include state="operators"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*[$@#&amp;!]">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*\:">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="\-?\d*\.\d+[@|#]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\-?\d+[@|#]">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\-?\d+#?">
+        <token type="LiteralNumberIntegerLong"/>
+      </rule>
+      <rule pattern="\-?\d+#?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="!=|==|:=|\.=|&lt;&lt;|&gt;&gt;|[-~+/\\*%=&lt;&gt;&amp;^|?:!.]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[\[\]{}(),;]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[\w]+">
+        <token type="NameVariableGlobal"/>
+      </rule>
+    </state>
+    <state name="declarations">
+      <rule pattern="\b(DATA|LET)(?=\(|\b)">
+        <token type="KeywordDeclaration"/>
+      </rule>
+    </state>
+    <state name="functions">
+      <rule pattern="\b(ABS|ASC|ATN|CDBL|CHR\$|CINT|CLNG|COMMAND\$|COS|CSNG|CSRLIN|CVD|CVDMBF|CVI|CVL|CVS|CVSMBF|DATE\$|ENVIRON\$|EOF|ERDEV|ERDEV\$|ERL|ERR|EXP|FILEATTR|FIX|FRE|FREEFILE|HEX\$|INKEY\$|INP|INPUT\$|INSTR|INT|IOCTL\$|LBOUND|LCASE\$|LEFT\$|LEN|LOC|LOF|LOG|LPOS|LTRIM\$|MID\$|MKD\$|MKDMBF\$|MKI\$|MKL\$|MKS\$|MKSMBF\$|OCT\$|PEEK|PEN|PLAY|PMAP|POINT|POS|RIGHT\$|RND|RTRIM\$|SADD|SCREEN|SEEK|SETMEM|SGN|SIN|SPACE\$|SPC|SQR|STICK|STR\$|STRIG|STRING\$|TAB|TAN|TIME\$|TIMER|UBOUND|UCASE\$|VAL|VARPTR|VARPTR\$|VARSEG)(?=\(|\b)">
+        <token type="KeywordReserved"/>
+      </rule>
+    </state>
+    <state name="metacommands">
+      <rule pattern="\b(\$DYNAMIC|\$INCLUDE|\$STATIC)(?=\(|\b)">
+        <token type="KeywordConstant"/>
+      </rule>
+    </state>
+    <state name="operators">
+      <rule pattern="\b(AND|EQV|IMP|NOT|OR|XOR)(?=\(|\b)">
+        <token type="OperatorWord"/>
+      </rule>
+    </state>
+    <state name="statements">
+      <rule pattern="\b(BEEP|BLOAD|BSAVE|CALL|CALL\ ABSOLUTE|CALL\ INTERRUPT|CALLS|CHAIN|CHDIR|CIRCLE|CLEAR|CLOSE|CLS|COLOR|COM|COMMON|CONST|DATA|DATE\$|DECLARE|DEF\ FN|DEF\ SEG|DEFDBL|DEFINT|DEFLNG|DEFSNG|DEFSTR|DEF|DIM|DO|LOOP|DRAW|END|ENVIRON|ERASE|ERROR|EXIT|FIELD|FILES|FOR|NEXT|FUNCTION|GET|GOSUB|GOTO|IF|THEN|INPUT|INPUT\ \#|IOCTL|KEY|KEY|KILL|LET|LINE|LINE\ INPUT|LINE\ INPUT\ \#|LOCATE|LOCK|UNLOCK|LPRINT|LSET|MID\$|MKDIR|NAME|ON\ COM|ON\ ERROR|ON\ KEY|ON\ PEN|ON\ PLAY|ON\ STRIG|ON\ TIMER|ON\ UEVENT|ON|OPEN|OPEN\ COM|OPTION\ BASE|OUT|PAINT|PALETTE|PCOPY|PEN|PLAY|POKE|PRESET|PRINT|PRINT\ \#|PRINT\ USING|PSET|PUT|PUT|RANDOMIZE|READ|REDIM|REM|RESET|RESTORE|RESUME|RETURN|RMDIR|RSET|RUN|SCREEN|SEEK|SELECT\ CASE|SHARED|SHELL|SLEEP|SOUND|STATIC|STOP|STRIG|SUB|SWAP|SYSTEM|TIME\$|TIMER|TROFF|TRON|TYPE|UEVENT|UNLOCK|VIEW|WAIT|WHILE|WEND|WIDTH|WINDOW|WRITE)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="\b(ACCESS|ALIAS|ANY|APPEND|AS|BASE|BINARY|BYVAL|CASE|CDECL|DOUBLE|ELSE|ELSEIF|ENDIF|INTEGER|IS|LIST|LOCAL|LONG|LOOP|MOD|NEXT|OFF|ON|OUTPUT|RANDOM|SIGNAL|SINGLE|STEP|STRING|THEN|TO|UNTIL|USING|WEND)\b">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/qml.xml 🔗

@@ -0,0 +1,113 @@
+<lexer>
+  <config>
+    <name>QML</name>
+    <alias>qml</alias>
+    <alias>qbs</alias>
+    <filename>*.qml</filename>
+    <filename>*.qbs</filename>
+    <mime_type>application/x-qml</mime_type>
+    <mime_type>application/x-qt.qbs+qml</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^(?=\s|/|&lt;!--)">
+        <token type="Text"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="\+\+|--|~|&amp;&amp;|\?|:|\|\||\\(?=\n)|(&lt;&lt;|&gt;&gt;&gt;?|==?|!=?|[-&lt;&gt;+*%&amp;|^/])=?">
+        <token type="Operator"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[{(\[;,]">
+        <token type="Punctuation"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[})\].]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\bid\s*:\s*[A-Za-z][\w.]*">
+        <token type="KeywordDeclaration"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="\b[A-Za-z][\w.]*\s*:">
+        <token type="Keyword"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(for|in|while|do|break|return|continue|switch|case|default|if|else|throw|try|catch|finally|new|delete|typeof|instanceof|void|this)\b">
+        <token type="Keyword"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(var|let|with|function)\b">
+        <token type="KeywordDeclaration"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(abstract|boolean|byte|char|class|const|debugger|double|enum|export|extends|final|float|goto|implements|import|int|interface|long|native|package|private|protected|public|short|static|super|synchronized|throws|transient|volatile)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(true|false|null|NaN|Infinity|undefined)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(Array|Boolean|Date|Error|Function|Math|netscape|Number|Object|Packages|RegExp|String|sun|decodeURI|decodeURIComponent|encodeURI|encodeURIComponent|Error|eval|isFinite|isNaN|parseFloat|parseInt|document|this|window)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[$a-zA-Z_]\w*">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+    <state name="commentsandwhitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&lt;!--">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="slashstartsregex">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/([gim]+\b|\B)">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?=/)">
+        <token type="Text"/>
+        <push state="#pop" state="badregex"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="badregex">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/r.xml 🔗

@@ -0,0 +1,128 @@
+<lexer>
+  <config>
+    <name>R</name>
+    <alias>splus</alias>
+    <alias>s</alias>
+    <alias>r</alias>
+    <filename>*.S</filename>
+    <filename>*.R</filename>
+    <filename>*.r</filename>
+    <filename>.Rhistory</filename>
+    <filename>.Rprofile</filename>
+    <filename>.Renviron</filename>
+    <mime_type>text/S-plus</mime_type>
+    <mime_type>text/S</mime_type>
+    <mime_type>text/x-r-source</mime_type>
+    <mime_type>text/x-r</mime_type>
+    <mime_type>text/x-R</mime_type>
+    <mime_type>text/x-r-history</mime_type>
+    <mime_type>text/x-r-profile</mime_type>
+    <priority>0.1</priority> <!-- higher priority than Rebol -->
+  </config>
+  <rules>
+    <state name="numbers">
+      <rule pattern="0[xX][a-fA-F0-9]+([pP][0-9]+)?[Li]?">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[+-]?([0-9]+(\.[0-9]+)?|\.[0-9]+|\.)([eE][+-]?[0-9]+)?[Li]?">
+        <token type="LiteralNumber"/>
+      </rule>
+    </state>
+    <state name="operators">
+      <rule pattern="&lt;&lt;?-|-&gt;&gt;?|-|==|&lt;=|&gt;=|&lt;|&gt;|&amp;&amp;?|!=|\|\|?|\?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\*|\+|\^|/|!|%[^%]*%|=|~|\$|@|:{1,3}">
+        <token type="Operator"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule pattern="((?:`[^`\\]*(?:\\.[^`\\]*)*`)|(?:(?:[a-zA-z]|[_.][^0-9])[\w_.]*))\s*(?=\()">
+        <token type="NameFunction"/>
+      </rule>
+      <rule>
+        <include state="statements"/>
+      </rule>
+      <rule pattern="\{|\}">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="valid_name">
+      <rule pattern="(?:`[^`\\]*(?:\\.[^`\\]*)*`)|(?:(?:[a-zA-z]|[_.][^0-9])[\w_.]*)">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(if|else|for|while|repeat|in|next|break|return|switch|function)(?![\w.])">
+        <token type="KeywordReserved"/>
+      </rule>
+    </state>
+    <state name="builtin_symbols">
+      <rule pattern="(NULL|NA(_(integer|real|complex|character)_)?|letters|LETTERS|Inf|TRUE|FALSE|NaN|pi|\.\.(\.|[0-9]+))(?![\w.])">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(T|F)\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+    </state>
+    <state name="string_squote">
+      <rule pattern="([^\&#39;\\]|\\.)*\&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="comments">
+      <rule pattern="#.*$">
+        <token type="CommentSingle"/>
+      </rule>
+    </state>
+    <state name="punctuation">
+      <rule pattern="\[{1,2}|\]{1,2}|\(|\)|;|,">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="statements">
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\&#39;">
+        <token type="LiteralString"/>
+        <push state="string_squote"/>
+      </rule>
+      <rule pattern="\&#34;">
+        <token type="LiteralString"/>
+        <push state="string_dquote"/>
+      </rule>
+      <rule>
+        <include state="builtin_symbols"/>
+      </rule>
+      <rule>
+        <include state="valid_name"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+      <rule>
+        <include state="punctuation"/>
+      </rule>
+      <rule>
+        <include state="operators"/>
+      </rule>
+    </state>
+    <state name="string_dquote">
+      <rule pattern="([^&#34;\\]|\\.)*&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/racket.xml

@@ -0,0 +1,260 @@
+<lexer>
+  <config>
+    <name>Racket</name>
+    <alias>racket</alias>
+    <alias>rkt</alias>
+    <filename>*.rkt</filename>
+    <filename>*.rktd</filename>
+    <filename>*.rktl</filename>
+    <mime_type>text/x-racket</mime_type>
+    <mime_type>application/x-racket</mime_type>
+  </config>
+  <rules>
+    <state name="datum*">
+      <rule pattern="`|,@?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(?:\|[^|]*\||\\[\w\W]|[^|\\()[\]{}&#34;,\&#39;`;\s]+)+">
+        <token type="LiteralStringSymbol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[|\\]">
+        <token type="Error"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="quoted-list">
+      <rule>
+        <include state="list"/>
+      </rule>
+      <rule pattern="(?!\Z)">
+        <token type="Text"/>
+        <push state="quoted-datum"/>
+      </rule>
+    </state>
+    <state name="quasiquoted-list">
+      <rule>
+        <include state="list"/>
+      </rule>
+      <rule pattern="(?!\Z)">
+        <token type="Text"/>
+        <push state="quasiquoted-datum"/>
+      </rule>
+    </state>
+    <state name="quoted-datum">
+      <rule>
+        <include state="datum"/>
+      </rule>
+      <rule pattern="[([{]">
+        <token type="Punctuation"/>
+        <push state="#pop" state="quoted-list"/>
+      </rule>
+      <rule>
+        <include state="datum*"/>
+      </rule>
+    </state>
+    <state name="block-comment">
+      <rule pattern="#\|">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="\|#">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^#|]+|.">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="datum">
+      <rule pattern="(?s)#;|#![ /]([^\\\n]|\\.)*">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern=";[^\n\r…]*">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="#\|">
+        <token type="CommentMultiline"/>
+        <push state="block-comment"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(?i)(?:#e)?(?:#d)?(?:#e)?[-+]?\d+(?=[()[\]{}&#34;,\&#39;`;\s])">
+        <token type="LiteralNumberInteger"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?i)(?:#e)?(?:#d)?(?:#e)?[-+]?(\d+(\.\d*)?|\.\d+)([deflst][-+]?\d+)?(?=[()[\]{}&#34;,\&#39;`;\s])">
+        <token type="LiteralNumberFloat"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?i)(?:#e)?(?:#d)?(?:#e)?[-+]?((?:(?:\d+(?:/\d+|\.\d*)?|\.\d+)(?:[defls][-+]?\d+)?)([-+](?:(?:\d+(?:/\d+|\.\d*)?|\.\d+)(?:[defls][-+]?\d+)?)?i)?|[-+](?:(?:\d+(?:/\d+|\.\d*)?|\.\d+)(?:[defls][-+]?\d+)?)?i)(?=[()[\]{}&#34;,\&#39;`;\s])">
+        <token type="LiteralNumber"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?i)(#d)?((?:[-+]?(?:(?:(?:\d+(?:/\d+|\.\d*)?|\.\d+)|(?:\d+#+(?:\.#*|/\d+#*)?|\.\d+#+|\d+(?:\.\d*#+|/\d+#+)))(?:[defls][-+]?\d+)?)|[-+](?:(?:inf|nan)\.[0f]))([-+](?:(?:(?:(?:\d+(?:/\d+|\.\d*)?|\.\d+)|(?:\d+#+(?:\.#*|/\d+#*)?|\.\d+#+|\d+(?:\.\d*#+|/\d+#+)))(?:[defls][-+]?\d+)?)|(?:(?:inf|nan)\.[0f]))?i)?|[-+](?:(?:(?:(?:\d+(?:/\d+|\.\d*)?|\.\d+)|(?:\d+#+(?:\.#*|/\d+#*)?|\.\d+#+|\d+(?:\.\d*#+|/\d+#+)))(?:[defls][-+]?\d+)?)|(?:(?:inf|nan)\.[0f]))?i|(?:[-+]?(?:(?:(?:\d+(?:/\d+|\.\d*)?|\.\d+)|(?:\d+#+(?:\.#*|/\d+#*)?|\.\d+#+|\d+(?:\.\d*#+|/\d+#+)))(?:[defls][-+]?\d+)?)|[-+](?:(?:inf|nan)\.[0f]))@(?:[-+]?(?:(?:(?:\d+(?:/\d+|\.\d*)?|\.\d+)|(?:\d+#+(?:\.#*|/\d+#*)?|\.\d+#+|\d+(?:\.\d*#+|/\d+#+)))(?:[defls][-+]?\d+)?)|[-+](?:(?:inf|nan)\.[0f])))(?=[()[\]{}&#34;,\&#39;`;\s])">
+        <token type="LiteralNumberFloat"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?i)(([-+]?(?:(?:\d+(?:/\d+|\.\d*)?|\.\d+)|(?:\d+#+(?:\.#*|/\d+#*)?|\.\d+#+|\d+(?:\.\d*#+|/\d+#+)))t[-+]?\d+)|[-+](inf|nan)\.t)(?=[()[\]{}&#34;,\&#39;`;\s])">
+        <token type="LiteralNumberFloat"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?i)(#[ei])?#b(?:\|[^|]*\||\\[\w\W]|[^|\\()[\]{}&#34;,\&#39;`;\s]+)+">
+        <token type="LiteralNumberBin"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?i)(#[ei])?#o(?:\|[^|]*\||\\[\w\W]|[^|\\()[\]{}&#34;,\&#39;`;\s]+)+">
+        <token type="LiteralNumberOct"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?i)(#[ei])?#x(?:\|[^|]*\||\\[\w\W]|[^|\\()[\]{}&#34;,\&#39;`;\s]+)+">
+        <token type="LiteralNumberHex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?i)(#d)?#i(?:\|[^|]*\||\\[\w\W]|[^|\\()[\]{}&#34;,\&#39;`;\s]+)+">
+        <token type="LiteralNumberFloat"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="#?&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="#pop" state="string"/>
+      </rule>
+      <rule pattern="#&lt;&lt;(.+)\n(^(?!\1$).*$\n)*^\1$">
+        <token type="LiteralStringHeredoc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="#\\(u[\da-fA-F]{1,4}|U[\da-fA-F]{1,8})">
+        <token type="LiteralStringChar"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?is)#\\([0-7]{3}|[a-z]+|.)">
+        <token type="LiteralStringChar"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?s)#[pr]x#?&#34;(\\?.)*?&#34;">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="#(true|false|[tTfF])">
+        <token type="NameConstant"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="#:(?:\|[^|]*\||\\[\w\W]|[^|\\()[\]{}&#34;,\&#39;`;\s]+)+">
+        <token type="KeywordDeclaration"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(#lang |#!)(\S+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="#reader">
+        <token type="KeywordNamespace"/>
+        <push state="quoted-datum"/>
+      </rule>
+      <rule pattern="(?i)\.(?=[()[\]{}&#34;,\&#39;`;\s])|#c[is]|#[&#39;`]|#,@?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&#39;|#[s&amp;]|#hash(eqv?)?|#\d*(?=[([{])">
+        <token type="Operator"/>
+        <push state="#pop" state="quoted-datum"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?s)\\([0-7]{1,3}|x[\da-fA-F]{1,2}|u[\da-fA-F]{1,4}|U[\da-fA-F]{1,8}|.)">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[)\]}]">
+        <token type="Error"/>
+      </rule>
+      <rule pattern="(?!\Z)">
+        <token type="Text"/>
+        <push state="unquoted-datum"/>
+      </rule>
+    </state>
+    <state name="list">
+      <rule pattern="[)\]}]">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="unquoted-datum">
+      <rule>
+        <include state="datum"/>
+      </rule>
+      <rule pattern="quote(?=[()[\]{}&#34;,\&#39;`;\s])">
+        <token type="Keyword"/>
+        <push state="#pop" state="quoted-datum"/>
+      </rule>
+      <rule pattern="`">
+        <token type="Operator"/>
+        <push state="#pop" state="quasiquoted-datum"/>
+      </rule>
+      <rule pattern="quasiquote(?=[()[\]{}&#34;,\&#39;`;\s])">
+        <token type="Keyword"/>
+        <push state="#pop" state="quasiquoted-datum"/>
+      </rule>
+      <rule pattern="[([{]">
+        <token type="Punctuation"/>
+        <push state="#pop" state="unquoted-list"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ragel.xml

@@ -0,0 +1,149 @@
+<lexer>
+  <config>
+    <name>Ragel</name>
+    <alias>ragel</alias>
+  </config>
+  <rules>
+    <state name="host">
+      <rule pattern="([^{}\&#39;&#34;/#]+|[^\\]\\[{}]|&#34;(\\\\|\\&#34;|[^&#34;])*&#34;|&#39;(\\\\|\\&#39;|[^&#39;])*&#39;|//.*$\n?|/\*(.|\n)*?\*/|\#.*$\n?|/(?!\*)(\\\\|\\/|[^/])*/|/)+">
+        <token type="Other"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+    </state>
+    <state name="numbers">
+      <rule pattern="0x[0-9A-Fa-f]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[+-]?[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="literals">
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\[(\\\\|\\\]|[^\]])*\]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="/(?!\*)(\\\\|\\/|[^/])*/">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(access|action|alphtype)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(getkey|write|machine|include)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(any|ascii|extend|alpha|digit|alnum|lower|upper)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(xdigit|cntrl|graph|print|punct|space|zlen|empty)\b">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="identifiers">
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameVariable"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="literals"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+      <rule>
+        <include state="identifiers"/>
+      </rule>
+      <rule>
+        <include state="operators"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <push state="host"/>
+      </rule>
+      <rule pattern="=">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="comments">
+      <rule pattern="\#.*$">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="operators">
+      <rule pattern=",">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\||&amp;|--?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\.|&lt;:|:&gt;&gt;?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="-&gt;">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(&gt;|\$|%|&lt;|@|&lt;&gt;)(/|eof\b)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(&gt;|\$|%|&lt;|@|&lt;&gt;)(!|err\b)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(&gt;|\$|%|&lt;|@|&lt;&gt;)(\^|lerr\b)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(&gt;|\$|%|&lt;|@|&lt;&gt;)(~|to\b)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(&gt;|\$|%|&lt;|@|&lt;&gt;)(\*|from\b)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&gt;|@|\$|%">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\*|\?|\+|\{[0-9]*,[0-9]*\}">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="!|\^">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\(|\)">
+        <token type="Operator"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/react.xml

@@ -0,0 +1,236 @@
+<lexer>
+  <config>
+    <name>react</name>
+    <alias>jsx</alias>
+    <alias>react</alias>
+    <filename>*.jsx</filename>
+    <filename>*.react</filename>
+    <mime_type>text/jsx</mime_type>
+    <mime_type>text/typescript-jsx</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="commentsandwhitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&lt;!--">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="slashstartsregex">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/([gimuy]+\b|\B)">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?=/)">
+        <token type="Text"/>
+        <push state="#pop" state="badregex"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="tag">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="([\w]+\s*)(=)(\s*)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="attr"/>
+      </rule>
+      <rule pattern="[{}]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[\w\.]+">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="(/?)(\s*)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="expression">
+      <rule pattern="{">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="attr">
+      <rule pattern="{">
+        <token type="Punctuation"/>
+        <push state="expression"/>
+      </rule>
+      <rule pattern="&#34;.*?&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="&#39;.*?&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="interp-inside">
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="badregex">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="jsx">
+      <rule pattern="(&lt;)(/?)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Punctuation"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;)([\w\.]+)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameTag"/>
+        </bygroups>
+        <push state="tag"/>
+      </rule>
+      <rule pattern="(&lt;)(/)([\w\.]+)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Punctuation"/>
+          <token type="NameTag"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="jsx"/>
+      </rule>
+      <rule pattern="\A#! ?/.*?\n">
+        <token type="CommentHashbang"/>
+      </rule>
+      <rule pattern="^(?=\s|/|&lt;!--)">
+        <token type="Text"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="(\.\d+|[0-9]+\.[0-9]*)([eE][-+]?[0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[bB][01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0[oO][0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\.\.\.|=&gt;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\+\+|--|~|&amp;&amp;|\?|:|\|\||\\(?=\n)|(&lt;&lt;|&gt;&gt;&gt;?|==?|!=?|[-&lt;&gt;+*%&amp;|^/])=?">
+        <token type="Operator"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[{(\[;,]">
+        <token type="Punctuation"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[})\].]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(for|in|while|do|break|return|continue|switch|case|default|if|else|throw|try|catch|finally|new|delete|typeof|instanceof|void|yield|this|of)\b">
+        <token type="Keyword"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(var|let|with|function)\b">
+        <token type="KeywordDeclaration"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(abstract|async|await|boolean|byte|char|class|const|debugger|double|enum|export|extends|final|float|goto|implements|import|int|interface|long|native|package|private|protected|public|short|static|super|synchronized|throws|transient|volatile)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(true|false|null|NaN|Infinity|undefined)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(Array|Boolean|Date|Error|Function|Math|netscape|Number|Object|Packages|RegExp|String|Promise|Proxy|sun|decodeURI|decodeURIComponent|encodeURI|encodeURIComponent|Error|eval|isFinite|isNaN|isSafeInteger|parseFloat|parseInt|document|this|window)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?:[$_\p{L}\p{N}]|\\u[a-fA-F0-9]{4})(?:(?:[$\p{L}\p{N}]|\\u[a-fA-F0-9]{4}))*">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <push state="interp"/>
+      </rule>
+    </state>
+    <state name="interp">
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\\">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="\\`">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="\$\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interp-inside"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="[^`\\$]+">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/reasonml.xml

@@ -0,0 +1,147 @@
+<lexer>
+  <config>
+    <name>ReasonML</name>
+    <alias>reason</alias>
+    <alias>reasonml</alias>
+    <filename>*.re</filename>
+    <filename>*.rei</filename>
+    <mime_type>text/x-reasonml</mime_type>
+  </config>
+  <rules>
+    <state name="escape-sequence">
+      <rule pattern="\\[\\&#34;\&#39;ntbr]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\[0-9]{3}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\x[0-9a-fA-F]{2}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="false|true|\(\)|\[\]">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="\b([A-Z][\w\&#39;]*)(?=\s*\.)">
+        <token type="NameNamespace"/>
+        <push state="dotted"/>
+      </rule>
+      <rule pattern="\b([A-Z][\w\&#39;]*)">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="\/\*(?![\/])">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="\b(as|assert|begin|class|constraint|do|done|downto|else|end|exception|external|false|for|fun|esfun|function|functor|if|in|include|inherit|initializer|lazy|let|switch|module|pub|mutable|new|nonrec|object|of|open|pri|rec|sig|struct|then|to|true|try|type|val|virtual|when|while|with)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(~|\}|\|]|\||\|\||\{&lt;|\{|`|_|]|\[\||\[&gt;|\[&lt;|\[|\?\?|\?|&gt;\}|&gt;]|&gt;|=|&lt;-|&lt;|;;|;|:&gt;|:=|::|:|\.\.\.|\.\.|\.|=&gt;|-\.|-|,|\+|\*|\)|\(|&amp;&amp;|&amp;|#|!=)">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="([=&lt;&gt;@^|&amp;+\*/$%-]|[!?~])?[!$%&amp;*+\./:&lt;=&gt;?@^|~-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\b(and|asr|land|lor|lsl|lsr|lxor|mod|or)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="\b(unit|int|float|bool|string|char|list|array)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="[^\W\d][\w&#39;]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="-?\d[\d_]*(.[\d_]*)?([eE][+\-]?\d[\d_]*)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[xX][\da-fA-F][\da-fA-F_]*">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[oO][0-7][0-7_]*">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[bB][01][01_]*">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="\d[\d_]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#39;(?:(\\[\\\&#34;&#39;ntbr ])|(\\[0-9]{3})|(\\x[0-9a-fA-F]{2}))&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="&#39;.&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="[~?][a-z][\w\&#39;]*:">
+        <token type="NameVariable"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^\/*]+">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\/\*">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="\*\/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[\*]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule>
+        <include state="escape-sequence"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="dotted">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\.">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[A-Z][\w\&#39;]*(?=\s*\.)">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="[A-Z][\w\&#39;]*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[a-z_][\w\&#39;]*">
+        <token type="Name"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/reg.xml

@@ -0,0 +1,68 @@
+<lexer>
+  <config>
+    <name>reg</name>
+    <alias>registry</alias>
+    <filename>*.reg</filename>
+    <mime_type>text/x-windows-registry</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="Windows Registry Editor.*">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[;#].*">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(\[)(-?)(HKEY_[A-Z_]+)(.*?\])$">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Operator"/>
+          <token type="NameBuiltin"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&#34;(?:\\&#34;|\\\\|[^&#34;])+&#34;)([ \t]*)(=)([ \t]*)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="value"/>
+      </rule>
+      <rule pattern="(.*?)([ \t]*)(=)([ \t]*)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="value"/>
+      </rule>
+    </state>
+    <state name="value">
+      <rule pattern="-">
+        <token type="Operator"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(dword|hex(?:\([0-9a-fA-F]\))?)(:)([0-9a-fA-F,]+)">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Punctuation"/>
+          <token type="LiteralNumber"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".+">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/rego.xml

@@ -0,0 +1,94 @@
+<lexer>
+  <config>
+    <name>Rego</name>
+    <alias>rego</alias>
+    <filename>*.rego</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(package|import|as|not|with|default|else|some|in|if|contains)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <!-- importing keywords should then show up as keywords -->
+      <rule pattern="(import)( future.keywords.)(\w+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+          <token type="KeywordDeclaration"/>
+        </bygroups>
+      </rule>
+      <rule pattern="#[^\r\n]*">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(FIXME|TODO|XXX)\b( .*)$">
+        <bygroups>
+          <token type="Error"/>
+          <token type="CommentSpecial"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="\d+i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+\.\d*([Ee][-+]\d+)?i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\.\d+([Ee][-+]\d+)?i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+[Ee][-+]\d+i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+(\.\d+[eE][+\-]?\d+|\.\d*|[eE][+\-]?\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\.\d+([eE][+\-]?\d+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(0|[1-9][0-9]*)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;.*?&#34;&#34;&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\$/((?!/\$).)*/\$">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="/(\\\\|\\&#34;|[^/])*/">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="^(\w+)">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[a-z_-][\w-]*(?=\()">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="[\r\n\s]+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="(package|import)(\s+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[=&lt;&gt;!+-/*&amp;|]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern=":=">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[[\]{}():;]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[$a-zA-Z_]\w*">
+        <token type="NameOther"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/rexx.xml

@@ -0,0 +1,127 @@
+<lexer>
+  <config>
+    <name>Rexx</name>
+    <alias>rexx</alias>
+    <alias>arexx</alias>
+    <filename>*.rexx</filename>
+    <filename>*.rex</filename>
+    <filename>*.rx</filename>
+    <filename>*.arexx</filename>
+    <mime_type>text/x-rexx</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="keyword">
+      <rule pattern="(address|arg|by|call|do|drop|else|end|exit|for|forever|if|interpret|iterate|leave|nop|numeric|off|on|options|parse|pull|push|queue|return|say|select|signal|to|then|trace|until|while)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+    </state>
+    <state name="operator">
+      <rule pattern="(-|//|/|\(|\)|\*\*|\*|\\&lt;&lt;|\\&lt;|\\==|\\=|\\&gt;&gt;|\\&gt;|\\|\|\||\||&amp;&amp;|&amp;|%|\+|&lt;&lt;=|&lt;&lt;|&lt;=|&lt;&gt;|&lt;|==|=|&gt;&lt;|&gt;=|&gt;&gt;=|&gt;&gt;|&gt;|¬&lt;&lt;|¬&lt;|¬==|¬=|¬&gt;&gt;|¬&gt;|¬|\.|,)">
+        <token type="Operator"/>
+      </rule>
+    </state>
+    <state name="string_double">
+      <rule pattern="[^&#34;\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string_single">
+      <rule pattern="[^\&#39;\n]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\&#39;\&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^*]+">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\*">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string_double"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <push state="string_single"/>
+      </rule>
+      <rule pattern="[0-9]+(\.[0-9]+)?(e[+-]?[0-9])?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="([a-z_]\w*)(\s*)(:)(\s*)(procedure)\b">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="TextWhitespace"/>
+          <token type="Operator"/>
+          <token type="TextWhitespace"/>
+          <token type="KeywordDeclaration"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([a-z_]\w*)(\s*)(:)">
+        <bygroups>
+          <token type="NameLabel"/>
+          <token type="TextWhitespace"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="function"/>
+      </rule>
+      <rule>
+        <include state="keyword"/>
+      </rule>
+      <rule>
+        <include state="operator"/>
+      </rule>
+      <rule pattern="[a-z_]\w*">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="function">
+      <rule pattern="(sourceline|wordlength|errortext|translate|wordindex|condition|datatype|subword|lineout|lastpos|delword|address|charout|wordpos|compare|overlay|reverse|symbol|stream|charin|center|delstr|verify|digits|abbrev|bitxor|format|random|insert|bitand|queued|length|linein|substr|copies|xrange|space|words|lines|bitor|trunc|strip|right|value|chars|trace|sign|form|fuzz|word|left|time|date|c2d|d2c|d2x|c2x|pos|b2x|arg|abs|min|x2b|x2c|x2d|max)(\s*)(\()">
+        <bygroups>
+          <token type="NameBuiltin"/>
+          <token type="TextWhitespace"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/rpm_spec.xml

@@ -0,0 +1,58 @@
+
+<lexer>
+  <config>
+    <name>RPMSpec</name>
+    <alias>spec</alias>
+    <filename>*.spec</filename>
+    <mime_type>text/x-rpm-spec</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="#.*$"><token type="Comment"/></rule>
+      <rule><include state="basic"/></rule>
+    </state>
+    <state name="description">
+      <rule pattern="^(%(?:package|prep|build|install|clean|check|pre[a-z]*|post[a-z]*|trigger[a-z]*|files))(.*)$"><bygroups><token type="NameDecorator"/><token type="Text"/></bygroups><pop depth="1"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="."><token type="Text"/></rule>
+    </state>
+    <state name="changelog">
+      <rule pattern="\*.*$"><token type="GenericSubheading"/></rule>
+      <rule pattern="^(%(?:package|prep|build|install|clean|check|pre[a-z]*|post[a-z]*|trigger[a-z]*|files))(.*)$"><bygroups><token type="NameDecorator"/><token type="Text"/></bygroups><pop depth="1"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="."><token type="Text"/></rule>
+    </state>
+    <state name="string">
+      <rule pattern="&quot;"><token type="LiteralStringDouble"/><pop depth="1"/></rule>
+      <rule pattern="\\([\\abfnrtv&quot;\&#x27;]|x[a-fA-F0-9]{2,4}|[0-7]{1,3})"><token type="LiteralStringEscape"/></rule>
+      <rule><include state="interpol"/></rule>
+      <rule pattern="."><token type="LiteralStringDouble"/></rule>
+    </state>
+    <state name="basic">
+      <rule><include state="macro"/></rule>
+      <rule pattern="(?i)^(Name|Version|Release|Epoch|Summary|Group|License|Packager|Vendor|Icon|URL|Distribution|Prefix|Patch[0-9]*|Source[0-9]*|Requires\(?[a-z]*\)?|[a-z]+Req|Obsoletes|Suggests|Provides|Conflicts|Build[a-z]+|[a-z]+Arch|Auto[a-z]+)(:)(.*)$"><bygroups><token type="GenericHeading"/><token type="Punctuation"/><usingself state="root"/></bygroups></rule>
+      <rule pattern="^%description"><token type="NameDecorator"/><push state="description"/></rule>
+      <rule pattern="^%changelog"><token type="NameDecorator"/><push state="changelog"/></rule>
+      <rule pattern="^(%(?:package|prep|build|install|clean|check|pre[a-z]*|post[a-z]*|trigger[a-z]*|files))(.*)$"><bygroups><token type="NameDecorator"/><token type="Text"/></bygroups></rule>
+      <rule pattern="%(attr|defattr|dir|doc(?:dir)?|setup|config(?:ure)?|make(?:install)|ghost|patch[0-9]+|find_lang|exclude|verify)"><token type="Keyword"/></rule>
+      <rule><include state="interpol"/></rule>
+      <rule pattern="&#x27;.*?&#x27;"><token type="LiteralStringSingle"/></rule>
+      <rule pattern="&quot;"><token type="LiteralStringDouble"/><push state="string"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="."><token type="Text"/></rule>
+    </state>
+    <state name="macro">
+      <rule pattern="%define.*$"><token type="CommentPreproc"/></rule>
+      <rule pattern="%\{\!\?.*%define.*\}"><token type="CommentPreproc"/></rule>
+      <rule pattern="(%(?:if(?:n?arch)?|else(?:if)?|endif))(.*)$"><bygroups><token type="CommentPreproc"/><token type="Text"/></bygroups></rule>
+    </state>
+    <state name="interpol">
+      <rule pattern="%\{?__[a-z_]+\}?"><token type="NameFunction"/></rule>
+      <rule pattern="%\{?_([a-z_]+dir|[a-z_]+path|prefix)\}?"><token type="KeywordPseudo"/></rule>
+      <rule pattern="%\{\?\w+\}"><token type="NameVariable"/></rule>
+      <rule pattern="\$\{?RPM_[A-Z0-9_]+\}?"><token type="NameVariableGlobal"/></rule>
+      <rule pattern="%\{[a-zA-Z]\w+\}"><token type="KeywordConstant"/></rule>
+    </state>
+  </rules>
+</lexer>
+

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ruby.xml

@@ -0,0 +1,724 @@
+<lexer>
+  <config>
+    <name>Ruby</name>
+    <alias>rb</alias>
+    <alias>ruby</alias>
+    <alias>duby</alias>
+    <filename>*.rb</filename>
+    <filename>*.rbw</filename>
+    <filename>Rakefile</filename>
+    <filename>*.rake</filename>
+    <filename>*.gemspec</filename>
+    <filename>*.rbx</filename>
+    <filename>*.duby</filename>
+    <filename>Gemfile</filename>
+    <filename>Vagrantfile</filename>
+    <mime_type>text/x-ruby</mime_type>
+    <mime_type>application/x-ruby</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="simple-sym">
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[^\\&#34;#]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="[\\#]">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringSymbol"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="interpolated-regex">
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="[\\#]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[^\\#]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="strings">
+      <rule pattern="\:@{0,2}[a-zA-Z_]\w*[!?]?">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="\:@{0,2}(===|\[\]=|&lt;=&gt;|\*\*|==|&gt;=|\+@|&lt;&gt;|&gt;&gt;|&lt;&lt;|-@|\[\]|~|`|\^|\||&amp;|&lt;|%|/|&gt;|\+|-|\*)">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern=":&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern=":&#34;">
+        <token type="LiteralStringSymbol"/>
+        <push state="simple-sym"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*)(:)(?!:)">
+        <bygroups>
+          <token type="LiteralStringSymbol"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="simple-string"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)`">
+        <token type="LiteralStringBacktick"/>
+        <push state="simple-backtick"/>
+      </rule>
+      <rule pattern="%[QWx]?\{">
+        <token type="LiteralStringOther"/>
+        <push state="cb-intp-string"/>
+      </rule>
+      <rule pattern="%[qsw]\{">
+        <token type="LiteralStringOther"/>
+        <push state="cb-string"/>
+      </rule>
+      <rule pattern="%r\{">
+        <token type="LiteralStringRegex"/>
+        <push state="cb-regex"/>
+      </rule>
+      <rule pattern="%[QWx]?\[">
+        <token type="LiteralStringOther"/>
+        <push state="sb-intp-string"/>
+      </rule>
+      <rule pattern="%[qsw]\[">
+        <token type="LiteralStringOther"/>
+        <push state="sb-string"/>
+      </rule>
+      <rule pattern="%r\[">
+        <token type="LiteralStringRegex"/>
+        <push state="sb-regex"/>
+      </rule>
+      <rule pattern="%[QWx]?\(">
+        <token type="LiteralStringOther"/>
+        <push state="pa-intp-string"/>
+      </rule>
+      <rule pattern="%[qsw]\(">
+        <token type="LiteralStringOther"/>
+        <push state="pa-string"/>
+      </rule>
+      <rule pattern="%r\(">
+        <token type="LiteralStringRegex"/>
+        <push state="pa-regex"/>
+      </rule>
+      <rule pattern="%[QWx]?&lt;">
+        <token type="LiteralStringOther"/>
+        <push state="ab-intp-string"/>
+      </rule>
+      <rule pattern="%[qsw]&lt;">
+        <token type="LiteralStringOther"/>
+        <push state="ab-string"/>
+      </rule>
+      <rule pattern="%r&lt;">
+        <token type="LiteralStringRegex"/>
+        <push state="ab-regex"/>
+      </rule>
+      <rule pattern="(%r([\W_]))((?:\\\2|(?!\2).)*)(\2[mixounse]*)">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="%[qsw]([\W_])((?:\\\1|(?!\1).)*)\1">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="(%[QWx]([\W_]))((?:\\\2|(?!\2).)*)(\2)">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(?&lt;=[-+/*%=&lt;&gt;&amp;!^|~,(])(\s*)(%([\t ])(?:(?:\\\3|(?!\3).)*)\3)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringOther"/>
+          <token type="None"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)(%([\t ])(?:(?:\\\3|(?!\3).)*)\3)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringOther"/>
+          <token type="None"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(%([^a-zA-Z0-9\s]))((?:\\\2|(?!\2).)*)(\2)">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="simple-backtick">
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[^\\`#]+">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="[\\#]">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="ab-regex">
+      <rule pattern="\\[\\&lt;&gt;]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="&lt;">
+        <token type="LiteralStringRegex"/>
+        <push/>
+      </rule>
+      <rule pattern="&gt;[mixounse]*">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="[\\#&lt;&gt;]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[^\\#&lt;&gt;]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="cb-regex">
+      <rule pattern="\\[\\{}]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="LiteralStringRegex"/>
+        <push/>
+      </rule>
+      <rule pattern="\}[mixounse]*">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="[\\#{}]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[^\\#{}]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="end-part">
+      <rule pattern=".+">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string-intp">
+      <rule pattern="#\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="in-intp"/>
+      </rule>
+      <rule pattern="#@@?[a-zA-Z_]\w*">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="#\$[a-zA-Z_]\w*">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+    </state>
+    <state name="interpolated-string">
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="[\\#]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="classname">
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="defexpr"/>
+      </rule>
+      <rule pattern="&lt;&lt;">
+        <token type="Operator"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[A-Z_]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="cb-intp-string">
+      <rule pattern="\\[\\{}]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[\\#{}]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#{}]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\A#!.+?$">
+        <token type="CommentHashbang"/>
+      </rule>
+      <rule pattern="#.*?$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="=begin\s.*?\n=end.*?$">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="(defined\?|return|ensure|rescue|unless|undef|until|break|begin|elsif|super|alias|while|retry|BEGIN|raise|yield|redo|next|case|when|then|else|end|for|END|do|if|in)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(module)(\s+)([a-zA-Z_]\w*(?:::[a-zA-Z_]\w*)*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(def)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="funcname"/>
+      </rule>
+      <rule pattern="def(?=[*%&amp;^`~+-/\[&lt;&gt;=])">
+        <token type="Keyword"/>
+        <push state="funcname"/>
+      </rule>
+      <rule pattern="(class)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="classname"/>
+      </rule>
+      <rule pattern="(module_function|attr_accessor|attr_reader|attr_writer|initialize|protected|include|private|extend|public|raise|false|catch|throw|attr|loop|true|new|nil)\b">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="(not|and|or)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="(protected_method_defined|private_method_defined|public_method_defined|method_defined|const_defined|block_given|instance_of|respond_to|iterator|autoload|kind_of|tainted|include|frozen|equal|is_a|nil|eql)\?">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(chomp|chop|exit|gsub|sub)!">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(protected_instance_methods|private_instance_methods|public_instance_methods|instance_variable_set|instance_variable_get|private_class_method|public_class_method|instance_variables|protected_methods|singleton_methods|included_modules|instance_methods|global_variables|private_methods|local_variables|instance_method|class_variables|public_methods|const_defined\?|set_trace_func|method_missing|const_missing|instance_eval|module_eval|untrace_var|class_eval|trace_var|const_get|readlines|ancestors|constants|const_set|object_id|readline|autoload|__send__|untaint|methods|display|Integer|sprintf|inspect|require|syscall|at_exit|binding|extend|printf|lambda|__id__|String|callcc|method|select|format|system|freeze|caller|raise|Float|print|throw|taint|clone|srand|Array|abort|split|catch|chomp|sleep|open|puts|putc|fork|fail|trap|exit|scan|getc|self|send|eval|gets|exec|gsub|proc|load|loop|chop|warn|hash|test|name|to_a|rand|to_s|sub|dup|id|p)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="__(FILE|LINE)__\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="(?&lt;!\w)(&lt;&lt;-?)([&#34;`\&#39;]?)([a-zA-Z_]\w*)(\2)(.*?\n)">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(&lt;&lt;-?)(&#34;|\&#39;)()(\2)(.*?\n)">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="__END__">
+        <token type="CommentPreproc"/>
+        <push state="end-part"/>
+      </rule>
+      <rule pattern="(?:^|(?&lt;=[=&lt;&gt;~!:])|(?&lt;=(?:\s|;)when\s)|(?&lt;=(?:\s|;)or\s)|(?&lt;=(?:\s|;)and\s)|(?&lt;=\.index\s)|(?&lt;=\.scan\s)|(?&lt;=\.sub\s)|(?&lt;=\.sub!\s)|(?&lt;=\.gsub\s)|(?&lt;=\.gsub!\s)|(?&lt;=\.match\s)|(?&lt;=(?:\s|;)if\s)|(?&lt;=(?:\s|;)elsif\s)|(?&lt;=^when\s)|(?&lt;=^index\s)|(?&lt;=^scan\s)|(?&lt;=^sub\s)|(?&lt;=^gsub\s)|(?&lt;=^sub!\s)|(?&lt;=^gsub!\s)|(?&lt;=^match\s)|(?&lt;=^if\s)|(?&lt;=^elsif\s))(\s*)(/)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringRegex"/>
+        </bygroups>
+        <push state="multiline-regex"/>
+      </rule>
+      <rule pattern="(?&lt;=\(|,|\[)/">
+        <token type="LiteralStringRegex"/>
+        <push state="multiline-regex"/>
+      </rule>
+      <rule pattern="(\s+)(/)(?![\s=])">
+        <bygroups>
+          <token type="Text"/>
+          <token type="LiteralStringRegex"/>
+        </bygroups>
+        <push state="multiline-regex"/>
+      </rule>
+      <rule pattern="(0_?[0-7]+(?:_[0-7]+)*)(\s*)([/?])?">
+        <bygroups>
+          <token type="LiteralNumberOct"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(0x[0-9A-Fa-f]+(?:_[0-9A-Fa-f]+)*)(\s*)([/?])?">
+        <bygroups>
+          <token type="LiteralNumberHex"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(0b[01]+(?:_[01]+)*)(\s*)([/?])?">
+        <bygroups>
+          <token type="LiteralNumberBin"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([\d]+(?:[_e]\d+)*)(\s*)([/?])?">
+        <bygroups>
+          <token type="LiteralNumberInteger"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="@@[a-zA-Z_]\w*">
+        <token type="NameVariableClass"/>
+      </rule>
+      <rule pattern="@[a-zA-Z_]\w*">
+        <token type="NameVariableInstance"/>
+      </rule>
+      <rule pattern="\$\w+">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="\$[!@&amp;`\&#39;+~=/\\,;.&lt;&gt;_*$?:&#34;^-]">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="\$-[0adFiIlpvw]">
+        <token type="NameVariableGlobal"/>
+      </rule>
+      <rule pattern="::">
+        <token type="Operator"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+      <rule pattern="\?(\\[MC]-)*(\\([\\abefnrstv#&#34;\&#39;]|x[a-fA-F0-9]{1,2}|[0-7]{1,3})|\S)(?!\w)">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="[A-Z]\w+">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="(\.|::)(===|\[\]=|&lt;=&gt;|\*\*|==|&gt;=|\+@|&lt;&gt;|&gt;&gt;|&lt;&lt;|-@|\[\]|~|`|\^|\||&amp;|&lt;|%|/|&gt;|\+|-|\*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameOperator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\.|::)([a-zA-Z_]\w*[!?]?|[*%&amp;^`~+\-/\[&lt;&gt;=])">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Name"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*[!?]?">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="(\[|\]|\*\*|&lt;&lt;?|&gt;&gt;?|&gt;=|&lt;=|&lt;=&gt;|=~|={3}|!~|&amp;&amp;?|\|\||\.{1,3})">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[-+/*%=&lt;&gt;&amp;!^|~]=?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[(){};,/?:\\]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="defexpr">
+      <rule pattern="(\))(\.|::)?">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Operator"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Operator"/>
+        <push/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="in-intp">
+      <rule pattern="\{">
+        <token type="LiteralStringInterpol"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="multiline-regex">
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="\\\\">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="\\/">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[\\#]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[^\\/#]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="/[mixounse]*">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="cb-string">
+      <rule pattern="\\[\\{}]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[\\#{}]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#{}]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="funcname">
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="defexpr"/>
+      </rule>
+      <rule pattern="(?:([a-zA-Z_]\w*)(\.))?([a-zA-Z_]\w*[!?]?|\*\*?|[-+]@?|[/%&amp;|^`~]|\[\]=?|&lt;&lt;|&gt;&gt;|&lt;=?&gt;|&gt;=?|===?)">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Operator"/>
+          <token type="NameFunction"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="sb-intp-string">
+      <rule pattern="\\[\\\[\]]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="\]">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[\\#\[\]]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#\[\]]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="pa-string">
+      <rule pattern="\\[\\()]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="\)">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[\\#()]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#()]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="string-intp-escaped">
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="\\([\\abefnrstv#&#34;\&#39;]|x[a-fA-F0-9]{1,2}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+    </state>
+    <state name="simple-string">
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[^\\&#34;#]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="[\\#]">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="pa-intp-string">
+      <rule pattern="\\[\\()]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="\)">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[\\#()]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#()]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="ab-string">
+      <rule pattern="\\[\\&lt;&gt;]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="&lt;">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="&gt;">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[\\#&lt;&gt;]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#&lt;&gt;]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="sb-regex">
+      <rule pattern="\\[\\\[\]]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="LiteralStringRegex"/>
+        <push/>
+      </rule>
+      <rule pattern="\][mixounse]*">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="[\\#\[\]]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[^\\#\[\]]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="pa-regex">
+      <rule pattern="\\[\\()]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="LiteralStringRegex"/>
+        <push/>
+      </rule>
+      <rule pattern="\)[mixounse]*">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp"/>
+      </rule>
+      <rule pattern="[\\#()]">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="[^\\#()]+">
+        <token type="LiteralStringRegex"/>
+      </rule>
+    </state>
+    <state name="sb-string">
+      <rule pattern="\\[\\\[\]]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="\]">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[\\#\[\]]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#\[\]]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+    <state name="ab-intp-string">
+      <rule pattern="\\[\\&lt;&gt;]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="&lt;">
+        <token type="LiteralStringOther"/>
+        <push/>
+      </rule>
+      <rule pattern="&gt;">
+        <token type="LiteralStringOther"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="string-intp-escaped"/>
+      </rule>
+      <rule pattern="[\\#&lt;&gt;]">
+        <token type="LiteralStringOther"/>
+      </rule>
+      <rule pattern="[^\\#&lt;&gt;]+">
+        <token type="LiteralStringOther"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/rust.xml

@@ -0,0 +1,375 @@
+<lexer>
+  <config>
+    <name>Rust</name>
+    <alias>rust</alias>
+    <alias>rs</alias>
+    <filename>*.rs</filename>
+    <filename>*.rs.in</filename>
+    <mime_type>text/rust</mime_type>
+    <mime_type>text/x-rust</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="modname">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="lifetime">
+      <rule pattern="[a-zA-Z_]+\w*">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="number_lit">
+      <rule pattern="[ui](8|16|32|64|size)">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="f(32|64)">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="attribute_common">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="CommentPreproc"/>
+        <push state="attribute["/>
+      </rule>
+    </state>
+    <state name="bytestring">
+      <rule pattern="\\x[89a-fA-F][0-9a-fA-F]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule>
+        <include state="string"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^*/]+">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[*/]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="doccomment">
+      <rule pattern="[^*/]+">
+        <token type="LiteralStringDoc"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="LiteralStringDoc"/>
+        <push/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="LiteralStringDoc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[*/]">
+        <token type="LiteralStringDoc"/>
+      </rule>
+    </state>
+    <state name="funcname">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="formatted_string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\[&#39;&#34;\\nrt]|\\(?=\n)|\\x[0-7][0-9a-fA-F]|\\0|\\u\{[0-9a-fA-F]{1,6}\}|\{\{|\}\}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\{[^}]*\}">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="[^\\&#34;\{\}]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\[&#39;&#34;\\nrt]|\\(?=\n)|\\x[0-7][0-9a-fA-F]|\\0|\\u\{[0-9a-fA-F]{1,6}\}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="#![^[\r\n].*$">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule>
+        <push state="base"/>
+      </rule>
+    </state>
+    <state name="attribute[">
+      <rule>
+        <include state="attribute_common"/>
+      </rule>
+      <rule pattern="\]">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^&#34;\]\[]+">
+        <token type="CommentPreproc"/>
+      </rule>
+    </state>
+    <state name="base">
+      <rule pattern="\n">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="//!.*?\n">
+        <token type="LiteralStringDoc"/>
+      </rule>
+      <rule pattern="///(\n|[^/].*?\n)">
+        <token type="LiteralStringDoc"/>
+      </rule>
+      <rule pattern="//(.*?)\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*\*(\n|[^/*])">
+        <token type="LiteralStringDoc"/>
+        <push state="doccomment"/>
+      </rule>
+      <rule pattern="/\*!">
+        <token type="LiteralStringDoc"/>
+        <push state="doccomment"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="\$([a-zA-Z_]\w*|\(,?|\),?|,?)">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(unsafe|static|extern|return|const|crate|where|while|await|trait|super|async|match|impl|else|move|loop|pub|ref|mut|for|dyn|use|box|in|if|as)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(abstract|override|unsized|virtual|become|typeof|final|macro|yield|priv|try|do)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(true|false)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="self\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="mod\b">
+        <token type="Keyword"/>
+        <push state="modname"/>
+      </rule>
+      <rule pattern="let\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="fn\b">
+        <token type="Keyword"/>
+        <push state="funcname"/>
+      </rule>
+      <rule pattern="(struct|enum|type|union)\b">
+        <token type="Keyword"/>
+        <push state="typename"/>
+      </rule>
+      <rule pattern="(default)(\s+)(type|fn)\b">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(isize|usize|bool|char|u128|i128|i64|i32|i16|str|u64|u32|f32|f64|u16|i8|u8)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="[sS]elf\b">
+        <token type="NameBuiltinPseudo"/>
+      </rule>
+      <rule pattern="(DoubleEndedIterator|ExactSizeIterator|IntoIterator|PartialOrd|PartialEq|ToString|Iterator|ToOwned|Default|Result|String|FnOnce|Extend|Option|FnMut|Unpin|Sized|AsRef|AsMut|Clone|None|From|Into|Sync|drop|Send|Drop|Copy|Some|Ord|Err|Box|Vec|Eq|Ok|Fn)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="::\b">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(?::|-&gt;)">
+        <token type="Text"/>
+        <push state="typename"/>
+      </rule>
+      <rule pattern="(break|continue)(\b\s*)(\&#39;[A-Za-z_]\w*)?">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="NameLabel"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&#39;(\\[&#39;&#34;\\nrt]|\\x[0-7][0-9a-fA-F]|\\0|\\u\{[0-9a-fA-F]{1,6}\}|.)&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="(b)(&#39;(?:\\[&#39;&#34;\\nrt]|\\x[0-9a-fA-F]{2}|\\0|.)&#39;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralStringChar"/>
+        </bygroups>
+      </rule>
+      <rule pattern="0b[01_]+">
+        <token type="LiteralNumberBin"/>
+        <push state="number_lit"/>
+      </rule>
+      <rule pattern="0o[0-7_]+">
+        <token type="LiteralNumberOct"/>
+        <push state="number_lit"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F_]+">
+        <token type="LiteralNumberHex"/>
+        <push state="number_lit"/>
+      </rule>
+      <rule pattern="[0-9][0-9_]*(\.[0-9_]+[eE][+\-]?[0-9_]+|\.[0-9_]*(?!\.)|[eE][+\-]?[0-9_]+)">
+        <token type="LiteralNumberFloat"/>
+        <push state="number_lit"/>
+      </rule>
+      <rule pattern="[0-9][0-9_]*">
+        <token type="LiteralNumberInteger"/>
+        <push state="number_lit"/>
+      </rule>
+      <rule pattern="(b)(&#34;)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralString"/>
+        </bygroups>
+        <push state="bytestring"/>
+      </rule>
+      <rule pattern="(?s)(b?r)(#*)(&#34;.*?&#34;\2)">
+        <bygroups>
+          <token type="LiteralStringAffix"/>
+          <token type="LiteralString"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&#39;(static|_)">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="NameAttribute"/>
+        <push state="lifetime"/>
+      </rule>
+      <rule pattern="\.\.=?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[{}()\[\],.;]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[+\-*/%&amp;|&lt;&gt;^!~@=:?]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\b(r#)?_?([A-Z][A-Z0-9_]*){2,}\b">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="((?:e?print(?:ln)?|format(?:_args)?|panic|todo|un(?:reachable|implemented))!)(\s*)(\()(\s*)(&#34;)">
+        <bygroups>
+          <token type="NameFunctionMagic"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+          <token type="TextWhitespace"/>
+          <token type="LiteralString"/>
+        </bygroups>
+        <push state="formatted_string"/>
+      </rule>
+      <rule pattern="([a-zA-Z_]\w*!)(\s*)(\(|\[|\{)">
+        <bygroups>
+          <token type="NameFunctionMagic"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(r#)?[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="r#[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="#!?\[">
+        <token type="CommentPreproc"/>
+        <push state="attribute["/>
+      </rule>
+      <rule pattern="#">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="typename">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&amp;">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="&#39;(static|_)">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="NameAttribute"/>
+        <push state="lifetime"/>
+      </rule>
+      <rule pattern="(DoubleEndedIterator|ExactSizeIterator|IntoIterator|PartialOrd|PartialEq|ToString|Iterator|ToOwned|Default|Result|String|FnOnce|Extend|Option|FnMut|Unpin|Sized|AsRef|AsMut|Clone|None|From|Into|Sync|drop|Send|Drop|Copy|Some|Ord|Err|Box|Vec|Eq|Ok|Fn)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(isize|usize|bool|char|u128|i128|i64|i32|i16|str|u64|u32|f32|f64|u16|i8|u8)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sas.xml

@@ -0,0 +1,191 @@
+<lexer>
+  <config>
+    <name>SAS</name>
+    <alias>sas</alias>
+    <filename>*.SAS</filename>
+    <filename>*.sas</filename>
+    <mime_type>text/x-sas</mime_type>
+    <mime_type>text/sas</mime_type>
+    <mime_type>application/x-sas</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="validvar">
+      <rule pattern="[a-z_]\w{0,31}\.?">
+        <token type="NameVariable"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="cards-datalines">
+      <rule pattern="^\s*(datalines|cards)\s*;\s*$">
+        <token type="Keyword"/>
+        <push state="data"/>
+      </rule>
+    </state>
+    <state name="proc-data">
+      <rule pattern="(^|;)\s*(proc \w+|data|run|quit)[\s;]">
+        <token type="KeywordReserved"/>
+      </rule>
+    </state>
+    <state name="string_dquote">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\\|\\&#34;|\\\n">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&amp;">
+        <token type="NameVariable"/>
+        <push state="validvar"/>
+      </rule>
+      <rule pattern="[^$&amp;&#34;\\]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[$&#34;\\]">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="general">
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule>
+        <include state="vars-strings"/>
+      </rule>
+      <rule>
+        <include state="special"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+    </state>
+    <state name="vars-strings">
+      <rule pattern="&amp;[a-z_]\w{0,31}\.?">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="%[a-z_]\w{0,31}">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="\&#39;">
+        <token type="LiteralString"/>
+        <push state="string_squote"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string_dquote"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule>
+        <include state="proc-data"/>
+      </rule>
+      <rule>
+        <include state="cards-datalines"/>
+      </rule>
+      <rule>
+        <include state="logs"/>
+      </rule>
+      <rule>
+        <include state="general"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="data">
+      <rule pattern="(.|\n)*^\s*;\s*$">
+        <token type="Other"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="logs">
+      <rule pattern="\n?^\s*%?put ">
+        <token type="Keyword"/>
+        <push state="log-messages"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="\b(datalines4|datalines|delimiter|startsas|redirect|lostcard|continue|informat|filename|footnote|catname|options|libname|systask|display|waitsas|missing|replace|delete|window|endsas|update|format|attrib|length|infile|select|return|retain|rename|remove|output|cards4|modify|leave|title|merge|delim|input|cards|abort|where|label|array|error|call|page|stop|keep|file|drop|link|skip|list|goto|put|out|set|by|dm|in|x)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(references|distinct|describe|validate|restrict|cascade|msgtype|message|primary|foreign|delete|update|create|unique|having|modify|insert|select|group|check|table|alter|order|reset|index|where|into|from|view|null|like|drop|add|not|key|and|set|on|in|or|as)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(while|until|then|else|end|if|do)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="%(sysevalf|nrbquote|qsysfunc|qlowcase|compstor|nrquote|display|qupcase|datatyp|qcmpres|unquote|syscall|sysfunc|sysrput|sysprod|syslput|sysexec|lowcase|qsubstr|sysget|length|keydef|global|superq|substr|verify|bquote|cmpres|upcase|window|label|qleft|while|qtrim|quote|nrstr|until|sysrc|input|macro|local|qscan|index|else|scan|mend|eval|trim|then|goto|left|put|let|end|str|do|to|if)\b">
+        <token type="NameBuiltin"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sass.xml

@@ -0,0 +1,362 @@
+<lexer>
+  <config>
+    <name>Sass</name>
+    <alias>sass</alias>
+    <filename>*.sass</filename>
+    <mime_type>text/x-sass</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="import">
+      <rule pattern="[ \t]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\S+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+        <push state="root"/>
+      </rule>
+    </state>
+    <state name="string-single">
+      <rule pattern="(\\.|#(?=[^\n{])|[^\n&#39;#])+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="#\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpolation"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string-double">
+      <rule pattern="(\\.|#(?=[^\n{])|[^\n&#34;#])+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="#\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpolation"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="pseudo-class">
+      <rule pattern="[\w-]+">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="#\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpolation"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="for">
+      <rule pattern="(from|to|through)">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule>
+        <include state="value"/>
+      </rule>
+    </state>
+    <state name="selector">
+      <rule pattern="[ \t]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\:">
+        <token type="NameDecorator"/>
+        <push state="pseudo-class"/>
+      </rule>
+      <rule pattern="\.">
+        <token type="NameClass"/>
+        <push state="class"/>
+      </rule>
+      <rule pattern="\#">
+        <token type="NameNamespace"/>
+        <push state="id"/>
+      </rule>
+      <rule pattern="[\w-]+">
+        <token type="NameTag"/>
+      </rule>
+      <rule pattern="#\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpolation"/>
+      </rule>
+      <rule pattern="&amp;">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="[~^*!&amp;\[\]()&lt;&gt;|+=@:;,./?-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string-double"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <push state="string-single"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+        <push state="root"/>
+      </rule>
+    </state>
+    <state name="value">
+      <rule pattern="[ \t]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[!$][\w-]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="url\(">
+        <token type="LiteralStringOther"/>
+        <push state="string-url"/>
+      </rule>
+      <rule pattern="[a-z_-][\w-]*(?=\()">
+        <token type="NameFunction"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/scala.xml

@@ -0,0 +1,274 @@
+<lexer>
+  <config>
+    <name>Scala</name>
+    <alias>scala</alias>
+    <filename>*.scala</filename>
+    <mime_type>text/x-scala</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="import">
+      <rule pattern="([\\$_\p{L}](?:[\\$_\p{L}]|[0-9])*(?:(?&lt;=_)[-~\^\*!%&amp;\\&lt;&gt;\|+=:/?@�-�����-����϶҂؆-؈؎-؏۩۽-۾߶৺୰௳-௸௺౿ೱ-ೲ൹༁-༃༓-༗༚-༟༴༶༸྾-࿅࿇-࿏႞-႟፠᎐-᎙᥀᧠-᧿᭡-᭪᭴-᭼⁄⁒⁺-⁼₊-₌℀-℁℃-℆℈-℉℔№-℘℞-℣℥℧℩℮℺-℻⅀-⅄⅊-⅍⅏←-⌨⌫-⑊⒜-ⓩ─-❧➔-⟄⟇-⟥⟰-⦂⦙-⧗⧜-⧻⧾-⭔⳥-⳪⺀-⿻〄〒-〓〠〶-〷〾-〿㆐-㆑㆖-㆟㇀-㇣㈀-㈞㈪-㉐㉠-㉿㊊-㊰㋀-㏿䷀-䷿꒐-꓆꠨-꠫﬩﷽﹢﹤-﹦+<->|~¬¦│-○-�]+)?|\.)+">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="interpstringcommon">
+      <rule pattern="[^&#34;$\\]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\$\$">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\$[\\$_\p{L}](?:[\\$_\p{L}]|\d)*">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="\$\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpbrace"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="interptriplestring">
+      <rule pattern="&#34;&#34;&#34;(?!&#34;)">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule>
+        <include state="interpstringcommon"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="(class|trait|object)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="class"/>
+      </rule>
+      <rule pattern="[^\S\n]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="@[\\$_\p{L}](?:[\\$_\p{L}]|[0-9])*(?:(?&lt;=_)[-~\^\*!%&amp;\\&lt;&gt;\|+=:/?@�-�����-����϶҂؆-؈؎-؏۩۽-۾߶৺୰௳-௸௺౿ೱ-ೲ൹༁-༃༓-༗༚-༟༴༶༸྾-࿅࿇-࿏႞-႟፠᎐-᎙᥀᧠-᧿᭡-᭪᭴-᭼⁄⁒⁺-⁼₊-₌℀-℁℃-℆℈-℉℔№-℘℞-℣℥℧℩℮℺-℻⅀-⅄⅊-⅍⅏←-⌨⌫-⑊⒜-ⓩ─-❧➔-⟄⟇-⟥⟰-⦂⦙-⧗⧜-⧻⧾-⭔⳥-⳪⺀-⿻〄〒-〓〠〶-〷〾-〿㆐-㆑㆖-㆟㇀-㇣㈀-㈞㈪-㉐㉠-㉿㊊-㊰㋀-㏿䷀-䷿꒐-꓆꠨-꠫﬩﷽﹢﹤-﹦+<->|~¬¦│-○-�]+)?">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="(abstract|ca(?:se|tch)|d(?:ef|o)|e(?:lse|xtends)|f(?:inal(?:ly)?|or(?:Some)?)|i(?:f|mplicit)|lazy|match|new|override|pr(?:ivate|otected)|re(?:quires|turn)|s(?:ealed|uper)|t(?:h(?:is|row)|ry)|va[lr]|w(?:hile|ith)|yield)\b|(&lt;[%:-]|=&gt;|&gt;:|[#=@_⇒←])(\b|(?=\s)|$)">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern=":(?![-~\^\*!%&amp;\\&lt;&gt;\|+=:/?@�-�����-����϶҂؆-؈؎-؏۩۽-۾߶৺୰௳-௸௺౿ೱ-ೲ൹༁-༃༓-༗༚-༟༴༶༸྾-࿅࿇-࿏႞-႟፠᎐-᎙᥀᧠-᧿᭡-᭪᭴-᭼⁄⁒⁺-⁼₊-₌℀-℁℃-℆℈-℉℔№-℘℞-℣℥℧℩℮℺-℻⅀-⅄⅊-⅍⅏←-⌨⌫-⑊⒜-ⓩ─-❧➔-⟄⟇-⟥⟰-⦂⦙-⧗⧜-⧻⧾-⭔⳥-⳪⺀-⿻〄〒-〓〠〶-〷〾-〿㆐-㆑㆖-㆟㇀-㇣㈀-㈞㈪-㉐㉠-㉿㊊-㊰㋀-㏿䷀-䷿꒐-꓆꠨-꠫﬩﷽﹢﹤-﹦+<->|~¬¦│-○-�]+%s)">
+        <token type="Keyword"/>
+        <push state="type"/>
+      </rule>
+      <rule pattern="[\\$_\p{Lu}][\\$_\p{L}](?:[\\$_\p{L}]|[0-9])*(?:(?&lt;=_)[-~\^\*!%&amp;\\&lt;&gt;\|+=:/?@�-�����-����϶҂؆-؈؎-؏۩۽-۾߶৺୰௳-௸௺౿ೱ-ೲ൹༁-༃༓-༗༚-༟༴༶༸྾-࿅࿇-࿏႞-႟፠᎐-᎙᥀᧠-᧿᭡-᭪᭴-᭼⁄⁒⁺-⁼₊-₌℀-℁℃-℆℈-℉℔№-℘℞-℣℥℧℩℮℺-℻⅀-⅄⅊-⅍⅏←-⌨⌫-⑊⒜-ⓩ─-❧➔-⟄⟇-⟥⟰-⦂⦙-⧗⧜-⧻⧾-⭔⳥-⳪⺀-⿻〄〒-〓〠〶-〷〾-〿㆐-㆑㆖-㆟㇀-㇣㈀-㈞㈪-㉐㉠-㉿㊊-㊰㋀-㏿䷀-䷿꒐-꓆꠨-꠫﬩﷽﹢﹤-﹦+<->|~¬¦│-○-�]+)?\b">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(import|package)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="import"/>
+      </rule>
+      <rule pattern="(type)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="type"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;.*?&#34;&#34;&#34;(?!&#34;)">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;\\.&#39;|&#39;[^\\]&#39;|&#39;\\u[0-9a-fA-F]{4}&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="&#39;[\\$_\p{L}](?:[\\$_\p{L}]|[0-9])*(?:(?&lt;=_)[-~\^\*!%&amp;\\&lt;&gt;\|+=:/?@�-�����-����϶҂؆-؈؎-؏۩۽-۾߶৺୰௳-௸௺౿ೱ-ೲ൹༁-༃༓-༗༚-༟༴༶༸྾-࿅࿇-࿏႞-႟፠᎐-᎙᥀᧠-᧿᭡-᭪᭴-᭼⁄⁒⁺-⁼₊-₌℀-℁℃-℆℈-℉℔№-℘℞-℣℥℧℩℮℺-℻⅀-⅄⅊-⅍⅏←-⌨⌫-⑊⒜-ⓩ─-❧➔-⟄⟇-⟥⟰-⦂⦙-⧗⧜-⧻⧾-⭔⳥-⳪⺀-⿻〄〒-〓〠〶-〷〾-〿㆐-㆑㆖-㆟㇀-㇣㈀-㈞㈪-㉐㉠-㉿㊊-㊰㋀-㏿䷀-䷿꒐-꓆꠨-꠫﬩﷽﹢﹤-﹦+<->|~¬¦│-○-�]+)?">
+        <token type="TextSymbol"/>
+      </rule>
+      <rule pattern="[fs]&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <push state="interptriplestring"/>
+      </rule>
+      <rule pattern="[fs]&#34;">
+        <token type="LiteralString"/>
+        <push state="interpstring"/>
+      </rule>
+      <rule pattern="raw&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[\\$_\p{L}](?:[\\$_\p{L}]|[0-9])*(?:(?&lt;=_)[-~\^\*!%&amp;\\&lt;&gt;\|+=:/?@�-�����-����϶҂؆-؈؎-؏۩۽-۾߶৺୰௳-௸௺౿ೱ-ೲ൹༁-༃༓-༗༚-༟༴༶༸྾-࿅࿇-࿏႞-႟፠᎐-᎙᥀᧠-᧿᭡-᭪᭴-᭼⁄⁒⁺-⁼₊-₌℀-℁℃-℆℈-℉℔№-℘℞-℣℥℧℩℮℺-℻⅀-⅄⅊-⅍⅏←-⌨⌫-⑊⒜-ⓩ─-❧➔-⟄⟇-⟥⟰-⦂⦙-⧗⧜-⧻⧾-⭔⳥-⳪⺀-⿻〄〒-〓〠〶-〷〾-〿㆐-㆑㆖-㆟㇀-㇣㈀-㈞㈪-㉐㉠-㉿㊊-㊰㋀-㏿䷀-䷿꒐-꓆꠨-꠫﬩﷽﹢﹤-﹦+<->|~¬¦│-○-�]+)?">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="`[^`]+`">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="Operator"/>
+        <push state="typeparam"/>
+      </rule>
+      <rule pattern="[(){};,.#]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[-~\^\*!%&amp;\\&lt;&gt;\|+=:/?@�-�����-����϶҂؆-؈؎-؏۩۽-۾߶৺୰௳-௸௺౿ೱ-ೲ൹༁-༃༓-༗༚-༟༴༶༸྾-࿅࿇-࿏႞-႟፠᎐-᎙᥀᧠-᧿᭡-᭪᭴-᭼⁄⁒⁺-⁼₊-₌℀-℁℃-℆℈-℉℔№-℘℞-℣℥℧℩℮℺-℻⅀-⅄⅊-⅍⅏←-⌨⌫-⑊⒜-ⓩ─-❧➔-⟄⟇-⟥⟰-⦂⦙-⧗⧜-⧻⧾-⭔⳥-⳪⺀-⿻〄〒-〓〠〶-〷〾-〿㆐-㆑㆖-㆟㇀-㇣㈀-㈞㈪-㉐㉠-㉿㊊-㊰㋀-㏿䷀-䷿꒐-꓆꠨-꠫﬩﷽﹢﹤-﹦+<->|~¬¦│-○-�]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="([0-9][0-9]*\.[0-9]*|\.[0-9]+)([eE][+-]?[0-9]+)?[fFdD]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+L?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="type">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&lt;[%:]|&gt;:|[#_]|forSome|type">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="([,);}]|=&gt;|=|⇒)(\s*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Text"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[({]">
+        <token type="Operator"/>
+        <push/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/scheme.xml

@@ -0,0 +1,106 @@
+<lexer>
+  <config>
+    <name>Scheme</name>
+    <alias>scheme</alias>
+    <alias>scm</alias>
+    <filename>*.scm</filename>
+    <filename>*.ss</filename>
+    <mime_type>text/x-scheme</mime_type>
+    <mime_type>application/x-scheme</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern=";.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="#\|">
+        <token type="CommentMultiline"/>
+        <push state="multiline-comment"/>
+      </rule>
+      <rule pattern="#;\s*\(">
+        <token type="Comment"/>
+        <push state="commented-form"/>
+      </rule>
+      <rule pattern="#!r6rs">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="-?\d+\.\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="-?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;[\w!$%&amp;*+,/:&lt;=&gt;?@^~|-]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="#\\(alarm|backspace|delete|esc|linefeed|newline|page|return|space|tab|vtab|x[0-9a-zA-Z]{1,5}|.)">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="(#t|#f)">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="(&#39;|#|`|,@|,|\.)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(lambda |define |if |else |cond |and |or |case |let |let\* |letrec |begin |do |delay |set\! |\=\&gt; |quote |quasiquote |unquote |unquote\-splicing |define\-syntax |let\-syntax |letrec\-syntax |syntax\-rules )">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(?&lt;=&#39;\()[\w!$%&amp;*+,/:&lt;=&gt;?@^~|-]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(?&lt;=#\()[\w!$%&amp;*+,/:&lt;=&gt;?@^~|-]+">
+        <token type="NameVariable"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/scilab.xml

@@ -0,0 +1,98 @@
+<lexer>
+  <config>
+    <name>Scilab</name>
+    <alias>scilab</alias>
+    <filename>*.sci</filename>
+    <filename>*.sce</filename>
+    <filename>*.tst</filename>
+    <mime_type>text/scilab</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="//.*?$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="^\s*function">
+        <token type="Keyword"/>
+        <push state="deffunc"/>
+      </rule>
+      <rule pattern="(unwind_protect_cleanup|end_unwind_protect|unwind_protect|end_try_catch|endproperties|endclassdef|endfunction|persistent|properties|endmethods|otherwise|endevents|endswitch|__FILE__|continue|classdef|__LINE__|endwhile|function|methods|elseif|return|static|events|global|endfor|switch|until|endif|while|catch|break|case|else|set|end|try|for|get|do|if)\b">
+        <token type="Keyword"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/scss.xml

@@ -0,0 +1,373 @@
+<lexer>
+  <config>
+    <name>SCSS</name>
+    <alias>scss</alias>
+    <filename>*.scss</filename>
+    <mime_type>text/x-scss</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <dot_all>true</dot_all>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="string-double">
+      <rule pattern="(\\.|#(?=[^\n{])|[^\n&#34;#])+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="#\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpolation"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="id">
+      <rule pattern="[\w-]+">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="#\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interpolation"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="value">
+      <rule pattern="[ \t]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="!(important|default|global)">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="[!$][\w-]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="url\(">
+        <token type="LiteralStringOther"/>
+        <push state="string-url"/>
+      </rule>
+      <rule pattern="[a-z_-][\w-]*(?=\()">
+        <token type="NameFunction"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sed.xml

@@ -0,0 +1,28 @@
+<lexer>
+  <config>
+    <name>Sed</name>
+    <alias>sed</alias>
+    <alias>gsed</alias>
+    <alias>ssed</alias>
+    <filename>*.sed</filename>
+    <filename>*.[gs]sed</filename>
+    <mime_type>text/x-sed</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="#.*$"><token type="CommentSingle"/></rule>
+      <rule pattern="[0-9]+"><token type="LiteralNumberInteger"/></rule>
+      <rule pattern="\$"><token type="Operator"/></rule>
+      <rule pattern="[{};,!]"><token type="Punctuation"/></rule>
+      <rule pattern="[dDFgGhHlnNpPqQxz=]"><token type="Keyword"/></rule>
+      <rule pattern="([berRtTvwW:])([^;\n]*)"><bygroups><token type="Keyword"/><token type="LiteralStringSingle"/></bygroups></rule>
+      <rule pattern="([aci])((?:.*?\\\n)*(?:.*?[^\\]$))"><bygroups><token type="Keyword"/><token type="LiteralStringDouble"/></bygroups></rule>
+      <rule pattern="([qQ])([0-9]*)"><bygroups><token type="Keyword"/><token type="LiteralNumberInteger"/></bygroups></rule>
+      <rule pattern="(/)((?:(?:\\[^\n]|[^\\])*?\\\n)*?(?:\\.|[^\\])*?)(/)"><bygroups><token type="Punctuation"/><token type="LiteralStringRegex"/><token type="Punctuation"/></bygroups></rule>
+      <rule pattern="(\\(.))((?:(?:\\[^\n]|[^\\])*?\\\n)*?(?:\\.|[^\\])*?)(\2)"><bygroups><token type="Punctuation"/><token type="None"/><token type="LiteralStringRegex"/><token type="Punctuation"/></bygroups></rule>
+      <rule pattern="(y)(.)((?:(?:\\[^\n]|[^\\])*?\\\n)*?(?:\\.|[^\\])*?)(\2)((?:(?:\\[^\n]|[^\\])*?\\\n)*?(?:\\.|[^\\])*?)(\2)"><bygroups><token type="Keyword"/><token type="Punctuation"/><token type="LiteralStringSingle"/><token type="Punctuation"/><token type="LiteralStringSingle"/><token type="Punctuation"/></bygroups></rule>
+      <rule pattern="(s)(.)((?:(?:\\[^\n]|[^\\])*?\\\n)*?(?:\\.|[^\\])*?)(\2)((?:(?:\\[^\n]|[^\\])*?\\\n)*?(?:\\.|[^\\])*?)(\2)((?:[gpeIiMm]|[0-9])*)"><bygroups><token type="Keyword"/><token type="Punctuation"/><token type="LiteralStringRegex"/><token type="Punctuation"/><token type="LiteralStringSingle"/><token type="Punctuation"/><token type="Keyword"/></bygroups></rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sieve.xml

@@ -0,0 +1,61 @@
+<lexer>
+  <config>
+    <name>Sieve</name>
+    <alias>sieve</alias>
+    <filename>*.siv</filename>
+    <filename>*.sieve</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[();,{}\[\]]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(?i)require">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern="(?i)(:)(addresses|all|contains|content|create|copy|comparator|count|days|detail|domain|fcc|flags|from|handle|importance|is|localpart|length|lowerfirst|lower|matches|message|mime|options|over|percent|quotewildcard|raw|regex|specialuse|subject|text|under|upperfirst|upper|value)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?i)(address|addflag|allof|anyof|body|discard|elsif|else|envelope|ereject|exists|false|fileinto|if|hasflag|header|keep|notify_method_capability|notify|not|redirect|reject|removeflag|setflag|size|spamtest|stop|string|true|vacation|virustest)">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?i)set">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="([0-9.]+)([kmgKMG])?">
+        <bygroups>
+          <token type="LiteralNumber"/>
+          <token type="LiteralNumber"/>
+        </bygroups>
+      </rule>
+      <rule pattern="#.*$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="&#34;[^&#34;]*?&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="text:">
+        <token type="NameTag"/>
+        <push state="text"/>
+      </rule>
+    </state>
+    <state name="text">
+      <rule pattern="[^.].*?\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="^\.">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/smali.xml

@@ -0,0 +1,73 @@
+<!--
+  Generated from https://github.com/pygments/pygments/blob/15f222adefd2bf7835bfd74a12d720028ae68d29/pygments/lexers/dalvik.py.
+-->
+<lexer>
+  <config>
+    <name>Smali</name>
+    <alias>smali</alias>
+    <filename>*.smali</filename>
+    <mime_type>text/smali</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule><include state="comment"/></rule>
+      <rule><include state="label"/></rule>
+      <rule><include state="field"/></rule>
+      <rule><include state="method"/></rule>
+      <rule><include state="class"/></rule>
+      <rule><include state="directive"/></rule>
+      <rule><include state="access-modifier"/></rule>
+      <rule><include state="instruction"/></rule>
+      <rule><include state="literal"/></rule>
+      <rule><include state="punctuation"/></rule>
+      <rule><include state="type"/></rule>
+      <rule><include state="whitespace"/></rule>
+    </state>
+    <state name="directive">
+      <rule pattern="^([ \t]*)(\.(?:class|super|implements|field|subannotation|annotation|enum|method|registers|locals|array-data|packed-switch|sparse-switch|catchall|catch|line|parameter|local|prologue|epilogue|source))"><bygroups><token type="TextWhitespace"/><token type="Keyword"/></bygroups></rule>
+      <rule pattern="^([ \t]*)(\.end)( )(field|subannotation|annotation|method|array-data|packed-switch|sparse-switch|parameter|local)"><bygroups><token type="TextWhitespace"/><token type="Keyword"/><token type="TextWhitespace"/><token type="Keyword"/></bygroups></rule>
+      <rule pattern="^([ \t]*)(\.restart)( )(local)"><bygroups><token type="TextWhitespace"/><token type="Keyword"/><token type="TextWhitespace"/><token type="Keyword"/></bygroups></rule>
+    </state>
+    <state name="access-modifier">
+      <rule pattern="(public|private|protected|static|final|synchronized|bridge|varargs|native|abstract|strictfp|synthetic|constructor|declared-synchronized|interface|enum|annotation|volatile|transient)"><token type="Keyword"/></rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\n"><token type="TextWhitespace"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+    </state>
+    <state name="instruction">
+      <rule pattern="\b[vp]\d+\b"><token type="NameBuiltin"/></rule>
+      <rule pattern="(\b[a-z][A-Za-z0-9/-]+)(\s+)"><bygroups><token type="Text"/><token type="TextWhitespace"/></bygroups></rule>
+    </state>
+    <state name="literal">
+      <rule pattern="&quot;.*&quot;"><token type="LiteralString"/></rule>
+      <rule pattern="0x[0-9A-Fa-f]+t?"><token type="LiteralNumberHex"/></rule>
+      <rule pattern="[0-9]*\.[0-9]+([eE][0-9]+)?[fd]?"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="[0-9]+L?"><token type="LiteralNumberInteger"/></rule>
+    </state>
+    <state name="field">
+      <rule pattern="(\$?\b)([\w$]*)(:)"><bygroups><token type="Punctuation"/><token type="NameVariable"/><token type="Punctuation"/></bygroups></rule>
+    </state>
+    <state name="method">
+      <rule pattern="&lt;(?:cl)?init&gt;"><token type="NameFunction"/></rule>
+      <rule pattern="(\$?\b)([\w$]*)(\()"><bygroups><token type="Punctuation"/><token type="NameFunction"/><token type="Punctuation"/></bygroups></rule>
+    </state>
+    <state name="label">
+      <rule pattern=":\w+"><token type="NameLabel"/></rule>
+    </state>
+    <state name="class">
+      <rule pattern="(L)((?:[\w$]+/)*)([\w$]+)(;)"><bygroups><token type="KeywordType"/><token type="Text"/><token type="NameClass"/><token type="Text"/></bygroups></rule>
+    </state>
+    <state name="punctuation">
+      <rule pattern="-&gt;"><token type="Punctuation"/></rule>
+      <rule pattern="[{},():=.-]"><token type="Punctuation"/></rule>
+    </state>
+    <state name="type">
+      <rule pattern="[ZBSCIJFDV\[]+"><token type="KeywordType"/></rule>
+    </state>
+    <state name="comment">
+      <rule pattern="#.*?\n"><token type="Comment"/></rule>
+    </state>
+  </rules>
+</lexer>
+

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/smalltalk.xml

@@ -0,0 +1,294 @@
+<lexer>
+  <config>
+    <name>Smalltalk</name>
+    <alias>smalltalk</alias>
+    <alias>squeak</alias>
+    <alias>st</alias>
+    <filename>*.st</filename>
+    <mime_type>text/x-smalltalk</mime_type>
+  </config>
+  <rules>
+    <state name="inner_parenth">
+      <rule pattern="\)">
+        <token type="LiteralStringSymbol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="_parenth_helper"/>
+      </rule>
+    </state>
+    <state name="objects">
+      <rule pattern="\[">
+        <token type="Text"/>
+        <push state="blockvariables"/>
+      </rule>
+      <rule pattern="\]">
+        <token type="Text"/>
+        <push state="afterobject"/>
+      </rule>
+      <rule pattern="\b(self|super|true|false|nil|thisContext)\b">
+        <token type="NameBuiltinPseudo"/>
+        <push state="afterobject"/>
+      </rule>
+      <rule pattern="\b[A-Z]\w*(?!:)\b">
+        <token type="NameClass"/>
+        <push state="afterobject"/>
+      </rule>
+      <rule pattern="\b[a-z]\w*(?!:)\b">
+        <token type="NameVariable"/>
+        <push state="afterobject"/>
+      </rule>
+      <rule pattern="#(&#34;(&#34;&#34;|[^&#34;])*&#34;|[-+*/\\~&lt;&gt;=|&amp;!?,@%]+|[\w:]+)">
+        <token type="LiteralStringSymbol"/>
+        <push state="afterobject"/>
+      </rule>
+      <rule>
+        <include state="literals"/>
+      </rule>
+    </state>
+    <state name="afterobject">
+      <rule pattern="! !$">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="whitespaces"/>
+      </rule>
+      <rule pattern="\b(ifTrue:|ifFalse:|whileTrue:|whileFalse:|timesRepeat:)">
+        <token type="NameBuiltin"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\b(new\b(?!:))">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern=":=|_">
+        <token type="Operator"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\b[a-zA-Z]+\w*:">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\b[a-zA-Z]+\w*">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="\w+:?|[-+*/\\~&lt;&gt;=|&amp;!?,@%]+">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\.">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[\])}]">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[\[({]">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="literals">
+      <rule pattern="&#39;(&#39;&#39;|[^&#39;])*&#39;">
+        <token type="LiteralString"/>
+        <push state="afterobject"/>
+      </rule>
+      <rule pattern="\$.">
+        <token type="LiteralStringChar"/>
+        <push state="afterobject"/>
+      </rule>
+      <rule pattern="#\(">
+        <token type="LiteralStringSymbol"/>
+        <push state="parenth"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Text"/>
+        <push state="afterobject"/>
+      </rule>
+      <rule pattern="(\d+r)?-?\d+(\.\d+)?(e-?\d+)?">
+        <token type="LiteralNumber"/>
+        <push state="afterobject"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="(&lt;)(\w+:)(.*?)(&gt;)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="squeak fileout"/>
+      </rule>
+      <rule>
+        <include state="whitespaces"/>
+      </rule>
+      <rule>
+        <include state="method definition"/>
+      </rule>
+      <rule pattern="(\|)([\w\s]*)(\|)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameVariable"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="objects"/>
+      </rule>
+      <rule pattern="\^|:=|_">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[\]({}.;!]">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="_parenth_helper">
+      <rule>
+        <include state="whitespaces"/>
+      </rule>
+      <rule pattern="(\d+r)?-?\d+(\.\d+)?(e-?\d+)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[-+*/\\~&lt;&gt;=|&amp;#!?,@%\w:]+">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+      <rule pattern="&#39;(&#39;&#39;|[^&#39;])*&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\$.">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="#*\(">
+        <token type="LiteralStringSymbol"/>
+        <push state="inner_parenth"/>
+      </rule>
+    </state>
+    <state name="parenth">
+      <rule pattern="\)">
+        <token type="LiteralStringSymbol"/>
+        <push state="root" state="afterobject"/>
+      </rule>
+      <rule>
+        <include state="_parenth_helper"/>
+      </rule>
+    </state>
+    <state name="whitespaces">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&#34;(&#34;&#34;|[^&#34;])*&#34;">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="squeak fileout">
+      <rule pattern="^&#34;(&#34;&#34;|[^&#34;])*&#34;!">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="^&#39;(&#39;&#39;|[^&#39;])*&#39;!">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="^(!)(\w+)( commentStamp: )(.*?)( prior: .*?!\n)(.*?)(!)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="NameClass"/>
+          <token type="Keyword"/>
+          <token type="LiteralString"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(!)(\w+(?: class)?)( methodsFor: )(&#39;(?:&#39;&#39;|[^&#39;])*&#39;)(.*?!)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="NameClass"/>
+          <token type="Keyword"/>
+          <token type="LiteralString"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\w+)( subclass: )(#\w+)(\s+instanceVariableNames: )(.*?)(\s+classVariableNames: )(.*?)(\s+poolDictionaries: )(.*?)(\s+category: )(.*?)(!)">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Keyword"/>
+          <token type="LiteralStringSymbol"/>
+          <token type="Keyword"/>
+          <token type="LiteralString"/>
+          <token type="Keyword"/>
+          <token type="LiteralString"/>
+          <token type="Keyword"/>
+          <token type="LiteralString"/>
+          <token type="Keyword"/>
+          <token type="LiteralString"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\w+(?: class)?)(\s+instanceVariableNames: )(.*?)(!)">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Keyword"/>
+          <token type="LiteralString"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(!\n)(\].*)(! !)$">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="! !$">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="method definition">
+      <rule pattern="([a-zA-Z]+\w*:)(\s*)(\w+)">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="NameVariable"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\b[a-zA-Z]+\w*\b)(\s*)$">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^([-+*/\\~&lt;&gt;=|&amp;!?,@%]+)(\s*)(\w+)(\s*)$">
+        <bygroups>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="NameVariable"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="blockvariables">
+      <rule>
+        <include state="whitespaces"/>
+      </rule>
+      <rule pattern="(:)(\s*)(\w+)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="NameVariable"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\|">
+        <token type="Operator"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>
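
The `<push state="…"/>` and `<pop depth="…"/>` elements in these rules drive a stack of lexer states (the Smalltalk lexer above leans on this heavily, e.g. `objects` pushing `afterobject`). The mechanics can be sketched as a toy model; this is an illustration only, not chroma's actual implementation:

```go
package main

import "fmt"

// stateStack is a toy model of the state stack that <push>/<pop depth=".."/>
// manipulate: the lexer always matches rules from the state on top.
type stateStack []string

func (s *stateStack) push(state string) { *s = append(*s, state) }

func (s *stateStack) pop(depth int) {
	if depth > len(*s) {
		depth = len(*s) // never pop past the bottom
	}
	*s = (*s)[:len(*s)-depth]
}

func (s stateStack) top() string { return s[len(s)-1] }

func main() {
	stack := stateStack{"root"}
	stack.push("objects")     // e.g. a rule entered the objects state
	stack.push("afterobject") // a literal matched and pushed afterobject
	fmt.Println(stack.top())  // afterobject
	stack.pop(1)              // <pop depth="1"/>
	fmt.Println(stack.top())  // objects
}
```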

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/smarty.xml

@@ -0,0 +1,79 @@
+<lexer>
+  <config>
+    <name>Smarty</name>
+    <alias>smarty</alias>
+    <filename>*.tpl</filename>
+    <mime_type>application/x-smarty</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="[^{]+">
+        <token type="Other"/>
+      </rule>
+      <rule pattern="(\{)(\*.*?\*)(\})">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Comment"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\{php\})(.*?)(\{/php\})">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <using lexer="PHP"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\{)(/?[a-zA-Z_]\w*)(\s*)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="smarty"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="CommentPreproc"/>
+        <push state="smarty"/>
+      </rule>
+    </state>
+    <state name="smarty">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="CommentPreproc"/>
+        <push/>
+      </rule>
+      <rule pattern="\}">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="#[a-zA-Z_]\w*#">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\$[a-zA-Z_]\w*(\.\w+)*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*()+=|\[\]:;,.&lt;&gt;/?@-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|0[xX][0-9a-fA-F]+[Ll]?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameAttribute"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/snbt.xml

@@ -0,0 +1,58 @@
+
+<lexer>
+  <config>
+    <name>SNBT</name>
+    <alias>snbt</alias>
+    <filename>*.snbt</filename>
+    <mime_type>text/snbt</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\{"><token type="Punctuation"/><push state="compound"/></rule>
+      <rule pattern="[^\{]+"><token type="Text"/></rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+    </state>
+    <state name="operators">
+      <rule pattern="[,:;]"><token type="Punctuation"/></rule>
+    </state>
+    <state name="literals">
+      <rule pattern="(true|false)"><token type="KeywordConstant"/></rule>
+      <rule pattern="-?\d+[eE]-?\d+"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="-?\d*\.\d+[fFdD]?"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="-?\d+[bBsSlLfFdD]?"><token type="LiteralNumberInteger"/></rule>
+      <rule pattern="&quot;"><token type="LiteralStringDouble"/><push state="literals.string_double"/></rule>
+      <rule pattern="&#x27;"><token type="LiteralStringSingle"/><push state="literals.string_single"/></rule>
+    </state>
+    <state name="literals.string_double">
+      <rule pattern="\\."><token type="LiteralStringEscape"/></rule>
+      <rule pattern="[^\\&quot;\n]+"><token type="LiteralStringDouble"/></rule>
+      <rule pattern="&quot;"><token type="LiteralStringDouble"/><pop depth="1"/></rule>
+    </state>
+    <state name="literals.string_single">
+      <rule pattern="\\."><token type="LiteralStringEscape"/></rule>
+      <rule pattern="[^\\&#x27;\n]+"><token type="LiteralStringSingle"/></rule>
+      <rule pattern="&#x27;"><token type="LiteralStringSingle"/><pop depth="1"/></rule>
+    </state>
+    <state name="compound">
+      <rule pattern="[A-Z_a-z]+"><token type="NameAttribute"/></rule>
+      <rule><include state="operators"/></rule>
+      <rule><include state="whitespace"/></rule>
+      <rule><include state="literals"/></rule>
+      <rule pattern="\{"><token type="Punctuation"/><push/></rule>
+      <rule pattern="\["><token type="Punctuation"/><push state="list"/></rule>
+      <rule pattern="\}"><token type="Punctuation"/><pop depth="1"/></rule>
+    </state>
+    <state name="list">
+      <rule pattern="[A-Z_a-z]+"><token type="NameAttribute"/></rule>
+      <rule><include state="literals"/></rule>
+      <rule><include state="operators"/></rule>
+      <rule><include state="whitespace"/></rule>
+      <rule pattern="\["><token type="Punctuation"/><push/></rule>
+      <rule pattern="\{"><token type="Punctuation"/><push state="compound"/></rule>
+      <rule pattern="\]"><token type="Punctuation"/><pop depth="1"/></rule>
+    </state>
+  </rules>
+</lexer>
+

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/snobol.xml

@@ -0,0 +1,95 @@
+<lexer>
+  <config>
+    <name>Snobol</name>
+    <alias>snobol</alias>
+    <filename>*.snobol</filename>
+    <mime_type>text/x-snobol</mime_type>
+  </config>
+  <rules>
+    <state name="heredoc">
+      <rule pattern=".*\n">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\*.*\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="[+.] ">
+        <token type="Punctuation"/>
+        <push state="statement"/>
+      </rule>
+      <rule pattern="-.*\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="END\s*\n">
+        <token type="NameLabel"/>
+        <push state="heredoc"/>
+      </rule>
+      <rule pattern="[A-Za-z$][\w$]*">
+        <token type="NameLabel"/>
+        <push state="statement"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+        <push state="statement"/>
+      </rule>
+    </state>
+    <state name="statement">
+      <rule pattern="\s*\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(?&lt;=[^\w.])(LT|LE|EQ|NE|GE|GT|INTEGER|IDENT|DIFFER|LGT|SIZE|REPLACE|TRIM|DUPL|REMDR|DATE|TIME|EVAL|APPLY|OPSYN|LOAD|UNLOAD|LEN|SPAN|BREAK|ANY|NOTANY|TAB|RTAB|REM|POS|RPOS|FAIL|FENCE|ABORT|ARB|ARBNO|BAL|SUCCEED|INPUT|OUTPUT|TERMINAL)(?=[^\w.])">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[A-Za-z][\w.]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\*\*|[?$.!%*/#+\-@|&amp;\\=]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&#34;[^&#34;]*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;[^&#39;]*&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[0-9]+(?=[^.EeDd])">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[0-9]+(\.[0-9]*)?([EDed][-+]?[0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+        <push state="goto"/>
+      </rule>
+      <rule pattern="[()&lt;&gt;,;]">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="goto">
+      <rule pattern="\s*\n">
+        <token type="Text"/>
+        <pop depth="2"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="F|S">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(\()([A-Za-z][\w.]*)(\))">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameLabel"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/solidity.xml

@@ -0,0 +1,279 @@
+<lexer>
+  <config>
+    <name>Solidity</name>
+    <alias>sol</alias>
+    <alias>solidity</alias>
+    <filename>*.sol</filename>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="strings">
+      <rule pattern="hex&#39;[0-9a-fA-F]+&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="hex&#34;[0-9a-fA-F]+&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <combined state="string-parse-common" state="string-parse-double"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <combined state="string-parse-common" state="string-parse-single"/>
+      </rule>
+    </state>
+    <state name="numbers">
+      <rule pattern="0[xX][0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule>
+        <include state="keywords-types"/>
+      </rule>
+      <rule>
+        <include state="keywords-other"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="\+\+|--|\*\*|\?|:|~|&amp;&amp;|\|\||=&gt;|==?|!=?|(&lt;&lt;|&gt;&gt;&gt;?|[-&lt;&gt;+*%&amp;|^/])=?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[{(\[;,]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[})\].]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(abi|block|msg|tx)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?!abi\.)(decode|encode|encodePacked|encodeWithSelector|encodeWithSignature)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?!block\.)(chainid|coinbase|difficulty|gaslimit|number|timestamp)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?!msg\.)(data|gas|sender|value)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?!tx\.)(gasprice|origin)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(type)(\()([a-zA-Z_]\w*)(\))">
+        <bygroups>
+          <token type="NameBuiltin"/>
+          <token type="Punctuation"/>
+          <token type="NameClass"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?!type\([a-zA-Z_]\w*\)\.)(creationCode|interfaceId|max|min|name|runtimeCode)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(now|this|super|gasleft)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(selfdestruct|suicide)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?!0x[0-9a-fA-F]+\.)(balance|code|codehash|send|transfer)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(assert|revert|require)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(call|callcode|delegatecall)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="selector\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(addmod|blockhash|ecrecover|keccak256|mulmod|ripemd160|sha256|sha3)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="keywords-types">
+      <rule pattern="(address|ufixed|string|bytes|fixed|byte|bool|uint|int)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(int160|int248|int240|int232|int224|int216|int208|int200|int192|int184|int176|int168|int104|int112|int120|int128|int136|int144|int152|int256|int96|int88|int80|int72|int64|int56|int48|int40|int32|int24|int16|int8)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(uint160|uint248|uint240|uint232|uint224|uint216|uint208|uint200|uint192|uint184|uint176|uint168|uint104|uint112|uint120|uint128|uint136|uint144|uint152|uint256|uint96|uint88|uint80|uint72|uint64|uint56|uint48|uint40|uint32|uint24|uint16|uint8)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(bytes23|bytes31|bytes30|bytes29|bytes28|bytes27|bytes26|bytes25|bytes24|bytes10|bytes11|bytes12|bytes13|bytes14|bytes15|bytes16|bytes17|bytes18|bytes19|bytes20|bytes21|bytes22|bytes32|bytes9|bytes8|bytes7|bytes6|bytes5|bytes4|bytes3|bytes2|bytes1)\b">
+        <token type="KeywordType"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sourcepawn.xml

@@ -0,0 +1,59 @@
+<lexer>
+  <config>
+    <name>SourcePawn</name>
+    <alias>sp</alias>
+    <filename>*.sp</filename>
+    <filename>*.inc</filename>
+    <mime_type>text/x-sourcepawn</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^#if\s+0"><token type="CommentPreproc"/><push state="if0"/></rule>
+      <rule pattern="^#"><token type="CommentPreproc"/><push state="macro"/></rule>
+      <rule pattern="^\s*(?:/[*].*?[*]/\s*)*#if\s+0"><token type="CommentPreproc"/><push state="if0"/></rule>
+      <rule pattern="^\s*(?:/[*].*?[*]/\s*)*#"><token type="CommentPreproc"/><push state="macro"/></rule>
+      <rule pattern="\n"><token type="Text"/></rule>
+      <rule pattern="\s+"><token type="Text"/></rule>
+      <rule pattern="\\\n"><token type="Text"/></rule>
+      <rule pattern="/(\\\n)?/(\n|(.|\n)*?[^\\]\n)"><token type="CommentSingle"/></rule>
+      <rule pattern="/(\\\n)?\*(.|\n)*?\*(\\\n)?/"><token type="CommentMultiline"/></rule>
+      <rule pattern="[{}]"><token type="Punctuation"/></rule>
+      <rule pattern="L?&quot;"><token type="LiteralString"/><push state="string"/></rule>
+      <rule pattern="L?&#x27;(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\&#x27;\n])&#x27;"><token type="LiteralStringChar"/></rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[LlUu]*"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+[fF])[fF]?"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="0x[0-9a-fA-F]+[LlUu]*"><token type="LiteralNumberHex"/></rule>
+      <rule pattern="0[0-7]+[LlUu]*"><token type="LiteralNumberOct"/></rule>
+      <rule pattern="\d+[LlUu]*"><token type="LiteralNumberInteger"/></rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]"><token type="Operator"/></rule>
+      <rule pattern="[()\[\],.;]"><token type="Punctuation"/></rule>
+      <rule pattern="(case|const|continue|native|default|else|enum|for|if|new|operator|public|return|sizeof|static|decl|struct|switch)\b"><token type="Keyword"/></rule>
+      <rule pattern="(bool|float|void|int|char)\b"><token type="KeywordType"/></rule>
+      <rule pattern="(true|false)\b"><token type="KeywordConstant"/></rule>
+      <rule pattern="[a-zA-Z_]\w*"><token type="Name"/></rule>
+      <rule pattern="((?:[\w*\s])+?(?:\s|[*]))([a-zA-Z_]\w*)(\s*\([^;]*?\))([^;{]*)(\{)"><bygroups><usingself state="root"/><token type="NameFunction"/><usingself state="root"/><usingself state="root"/><token type="Punctuation"/></bygroups><push state="function"/></rule>
+      <rule pattern="((?:[\w*\s])+?(?:\s|[*]))([a-zA-Z_]\w*)(\s*\([^;]*?\))([^;]*)(;)"><bygroups><usingself state="root"/><token type="NameFunction"/><usingself state="root"/><usingself state="root"/><token type="Punctuation"/></bygroups></rule>
+    </state>
+    <state name="string">
+      <rule pattern="&quot;"><token type="LiteralString"/><pop depth="1"/></rule>
+      <rule pattern="\\([\\abfnrtv&quot;\&#x27;]|x[a-fA-F0-9]{2,4}|[0-7]{1,3})"><token type="LiteralStringEscape"/></rule>
+      <rule pattern="[^\\&quot;\n]+"><token type="LiteralString"/></rule>
+      <rule pattern="\\\n"><token type="LiteralString"/></rule>
+      <rule pattern="\\"><token type="LiteralString"/></rule>
+    </state>
+    <state name="macro">
+      <rule pattern="(include)(\s*(?:/[*].*?[*]/\s*)?)([^\n]+)"><bygroups><token type="CommentPreproc"/><token type="Text"/><token type="CommentPreprocFile"/></bygroups></rule>
+      <rule pattern="[^/\n]+"><token type="CommentPreproc"/></rule>
+      <rule pattern="/\*(.|\n)*?\*/"><token type="CommentMultiline"/></rule>
+      <rule pattern="//.*?\n"><token type="CommentSingle"/><pop depth="1"/></rule>
+      <rule pattern="/"><token type="CommentPreproc"/></rule>
+      <rule pattern="(?&lt;=\\)\n"><token type="CommentPreproc"/></rule>
+      <rule pattern="\n"><token type="CommentPreproc"/><pop depth="1"/></rule>
+    </state>
+    <state name="if0">
+      <rule pattern="^\s*#if.*?(?&lt;!\\)\n"><token type="CommentPreproc"/><push/></rule>
+      <rule pattern="^\s*#endif.*?(?&lt;!\\)\n"><token type="CommentPreproc"/><pop depth="1"/></rule>
+      <rule pattern=".*?\n"><token type="Comment"/></rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sparql.xml

@@ -0,0 +1,160 @@
+<lexer>
+  <config>
+    <name>SPARQL</name>
+    <alias>sparql</alias>
+    <filename>*.rq</filename>
+    <filename>*.sparql</filename>
+    <mime_type>application/sparql-query</mime_type>
+  </config>
+  <rules>
+    <state name="string-escape">
+      <rule pattern="u[0-9A-Fa-f]{4}">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="U[0-9A-Fa-f]{8}">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".">
+        <token type="LiteralStringEscape"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="end-of-string">
+      <rule pattern="(@)([a-zA-Z]+(?:-[a-zA-Z0-9]+)*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameFunction"/>
+        </bygroups>
+        <pop depth="2"/>
+      </rule>
+      <rule pattern="\^\^">
+        <token type="Operator"/>
+        <pop depth="2"/>
+      </rule>
+      <rule>
+        <pop depth="2"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="((?i)select|construct|describe|ask|where|filter|group\s+by|minus|distinct|reduced|from\s+named|from|order\s+by|desc|asc|limit|offset|bindings|load|clear|drop|create|add|move|copy|insert\s+data|delete\s+data|delete\s+where|delete|insert|using\s+named|using|graph|default|named|all|optional|service|silent|bind|union|not\s+in|in|as|having|to|prefix|base)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(a)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(&lt;(?:[^&lt;&gt;&#34;{}|^`\\\x00-\x20])*&gt;)">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="(_:[_\p{L}\p{N}](?:[-_.\p{L}\p{N}]*[-_\p{L}\p{N}])?)">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="[?$][_\p{L}\p{N}]+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="([\p{L}][-_.\p{L}\p{N}]*)?(\:)((?:[_:\p{L}\p{N}]|(?:%[0-9A-Fa-f][0-9A-Fa-f])|(?:\\[ _~.\-!$&amp;&#34;()*+,;=/?#@%]))(?:(?:[-_:.\p{L}\p{N}]|(?:%[0-9A-Fa-f][0-9A-Fa-f])|(?:\\[ _~.\-!$&amp;&#34;()*+,;=/?#@%]))*(?:[-_:\p{L}\p{N}]|(?:%[0-9A-Fa-f][0-9A-Fa-f])|(?:\\[ _~.\-!$&amp;&#34;()*+,;=/?#@%])))?)?">
+        <bygroups>
+          <token type="NameNamespace"/>
+          <token type="Punctuation"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="((?i)str|lang|langmatches|datatype|bound|iri|uri|bnode|rand|abs|ceil|floor|round|concat|strlen|ucase|lcase|encode_for_uri|contains|strstarts|strends|strbefore|strafter|year|month|day|hours|minutes|seconds|timezone|tz|now|md5|sha1|sha256|sha384|sha512|coalesce|if|strlang|strdt|sameterm|isiri|isuri|isblank|isliteral|isnumeric|regex|substr|replace|exists|not\s+exists|count|sum|min|max|avg|sample|group_concat|separator)\b">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="(true|false)">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="[+\-]?(\d+\.\d*[eE][+-]?\d+|\.?\d+[eE][+-]?\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[+\-]?(\d+\.\d*|\.\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[+\-]?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="(\|\||&amp;&amp;|=|\*|\-|\+|/|!=|&lt;=|&gt;=|!|&lt;|&gt;)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[(){}.;,:^\[\]]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="#[^\n]*">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <push state="triple-double-quoted-string"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="single-double-quoted-string"/>
+      </rule>
+      <rule pattern="&#39;&#39;&#39;">
+        <token type="LiteralString"/>
+        <push state="triple-single-quoted-string"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <push state="single-single-quoted-string"/>
+      </rule>
+    </state>
+    <state name="triple-double-quoted-string">
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <push state="end-of-string"/>
+      </rule>
+      <rule pattern="[^\\]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+        <push state="string-escape"/>
+      </rule>
+    </state>
+    <state name="single-double-quoted-string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="end-of-string"/>
+      </rule>
+      <rule pattern="[^&#34;\\\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+        <push state="string-escape"/>
+      </rule>
+    </state>
+    <state name="triple-single-quoted-string">
+      <rule pattern="&#39;&#39;&#39;">
+        <token type="LiteralString"/>
+        <push state="end-of-string"/>
+      </rule>
+      <rule pattern="[^\\]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralStringEscape"/>
+        <push state="string-escape"/>
+      </rule>
+    </state>
+    <state name="single-single-quoted-string">
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <push state="end-of-string"/>
+      </rule>
+      <rule pattern="[^&#39;\\\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+        <push state="string-escape"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/sql.xml

@@ -0,0 +1,90 @@
+<lexer>
+  <config>
+    <name>SQL</name>
+    <alias>sql</alias>
+    <filename>*.sql</filename>
+    <mime_type>text/x-sql</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="--.*\n?">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="multiline-comments"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="double-string"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/squidconf.xml

@@ -0,0 +1,63 @@
+<lexer>
+  <config>
+    <name>SquidConf</name>
+    <alias>squidconf</alias>
+    <alias>squid.conf</alias>
+    <alias>squid</alias>
+    <filename>squid.conf</filename>
+    <mime_type>text/x-squidconf</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="#">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/standard_ml.xml

@@ -0,0 +1,548 @@
+<lexer>
+  <config>
+    <name>Standard ML</name>
+    <alias>sml</alias>
+    <filename>*.sml</filename>
+    <filename>*.sig</filename>
+    <filename>*.fun</filename>
+    <mime_type>text/x-standardml</mime_type>
+    <mime_type>application/x-standardml</mime_type>
+  </config>
+  <rules>
+    <state name="delimiters">
+      <rule pattern="\(|\[|\{">
+        <token type="Punctuation"/>
+        <push state="main"/>
+      </rule>
+      <rule pattern="\)|\]|\}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\b(let|if|local)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="main" state="main"/>
+      </rule>
+      <rule pattern="\b(struct|sig|while)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="main"/>
+      </rule>
+      <rule pattern="\b(do|else|end|in|then)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <push state="main"/>
+      </rule>
+    </state>
+    <state name="breakout">
+      <rule pattern="(?=\b(where|do|handle|if|sig|op|while|case|as|else|signature|andalso|struct|infixr|functor|in|structure|then|local|rec|end|fun|of|orelse|val|include|fn|with|exception|let|and|infix|sharing|datatype|type|abstype|withtype|eqtype|nonfix|raise|open)\b(?!\&#39;))">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="tyvarseq">
+      <rule pattern="\s">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\(\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="\&#39;[\w\&#39;]*">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="[a-zA-Z][\w&#39;]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\)">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="char">
+      <rule pattern="[^&#34;\\]">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="\\[\\&#34;abtnvfr]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\\^[\x40-\x5e]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\[0-9]{3}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\u[0-9a-fA-F]{4}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\\s+\\">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringChar"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="datbind">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="\b(and)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="#pop" state="dname"/>
+      </rule>
+      <rule pattern="\b(withtype)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="#pop" state="tname"/>
+      </rule>
+      <rule pattern="\b(of)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(\|)(\s*)([a-zA-Z][\w&#39;]*)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\|)(\s+)([!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="breakout"/>
+      </rule>
+      <rule>
+        <include state="core"/>
+      </rule>
+      <rule pattern="\S+">
+        <token type="Error"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="[^&#34;\\]">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\\[\\&#34;abtnvfr]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\\^[\x40-\x5e]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\[0-9]{3}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\u[0-9a-fA-F]{4}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\\\s+\\">
+        <token type="LiteralStringInterpol"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="tname">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="breakout"/>
+      </rule>
+      <rule pattern="\&#39;[\w\&#39;]*">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="tyvarseq"/>
+      </rule>
+      <rule pattern="=(?![!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)">
+        <token type="Punctuation"/>
+        <push state="#pop" state="typbind"/>
+      </rule>
+      <rule pattern="([a-zA-Z][\w&#39;]*)">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="([!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\S+">
+        <token type="Error"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="dname">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="breakout"/>
+      </rule>
+      <rule pattern="\&#39;[\w\&#39;]*">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="tyvarseq"/>
+      </rule>
+      <rule pattern="(=)(\s*)(datatype)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="KeywordReserved"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="=(?![!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)">
+        <token type="Punctuation"/>
+        <push state="#pop" state="datbind" state="datcon"/>
+      </rule>
+      <rule pattern="([a-zA-Z][\w&#39;]*)">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="([!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\S+">
+        <token type="Error"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="typbind">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="\b(and)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="#pop" state="tname"/>
+      </rule>
+      <rule>
+        <include state="breakout"/>
+      </rule>
+      <rule>
+        <include state="core"/>
+      </rule>
+      <rule pattern="\S+">
+        <token type="Error"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="ename">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="(exception|and)\b(\s+)([a-zA-Z][\w&#39;]*)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(exception|and)\b(\s*)([!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(of)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule>
+        <include state="breakout"/>
+      </rule>
+      <rule>
+        <include state="core"/>
+      </rule>
+      <rule pattern="\S+">
+        <token type="Error"/>
+      </rule>
+    </state>
+    <state name="vname">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="\&#39;[\w\&#39;]*">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="tyvarseq"/>
+      </rule>
+      <rule pattern="([a-zA-Z][\w&#39;]*)(\s*)(=(?![!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+))">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)(\s*)(=(?![!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+))">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([a-zA-Z][\w&#39;]*)">
+        <token type="NameVariable"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)">
+        <token type="NameVariable"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="sname">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="breakout"/>
+      </rule>
+      <rule pattern="([a-zA-Z][\w&#39;]*)">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="main-fun">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="\s">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\(\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="\b(fun|and)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="fname"/>
+      </rule>
+      <rule pattern="\b(val)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="#pop" state="main" state="vname"/>
+      </rule>
+      <rule pattern="\|">
+        <token type="Punctuation"/>
+        <push state="fname"/>
+      </rule>
+      <rule pattern="\b(case|handle)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="#pop" state="main"/>
+      </rule>
+      <rule>
+        <include state="delimiters"/>
+      </rule>
+      <rule>
+        <include state="core"/>
+      </rule>
+      <rule pattern="\S+">
+        <token type="Error"/>
+      </rule>
+    </state>
+    <state name="datcon">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="([a-zA-Z][\w&#39;]*)">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\S+">
+        <token type="Error"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="dotted">
+      <rule pattern="([a-zA-Z][\w&#39;]*)(\.)">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="([a-zA-Z][\w&#39;]*)">
+        <token type="Name"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)">
+        <token type="Name"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Error"/>
+      </rule>
+      <rule pattern="\S+">
+        <token type="Error"/>
+      </rule>
+    </state>
+    <state name="main">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="\b(val|and)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="vname"/>
+      </rule>
+      <rule pattern="\b(fun)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="#pop" state="main-fun" state="fname"/>
+      </rule>
+      <rule>
+        <include state="delimiters"/>
+      </rule>
+      <rule>
+        <include state="core"/>
+      </rule>
+      <rule pattern="\S+">
+        <token type="Error"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^(*)]">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\(\*">
+        <token type="CommentMultiline"/>
+        <push/>
+      </rule>
+      <rule pattern="\*\)">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[(*)]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\(\*">
+        <token type="CommentMultiline"/>
+        <push state="comment"/>
+      </rule>
+    </state>
+    <state name="core">
+      <rule pattern="(_|\}|\{|\)|;|,|\[|\(|\]|\.\.\.)">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="#&#34;">
+        <token type="LiteralStringChar"/>
+        <push state="char"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="~?0x[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0wx[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0w\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="~?\d+\.\d+[eE]~?\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="~?\d+\.\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="~?\d+[eE]~?\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="~?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="#\s*[1-9][0-9]*">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="#\s*([a-zA-Z][\w&#39;]*)">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="#\s+([!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="\b(datatype|abstype)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="dname"/>
+      </rule>
+      <rule pattern="(?=\b(exception)\b(?!\&#39;))">
+        <token type="Text"/>
+        <push state="ename"/>
+      </rule>
+      <rule pattern="\b(functor|include|open|signature|structure)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="sname"/>
+      </rule>
+      <rule pattern="\b(type|eqtype)\b(?!\&#39;)">
+        <token type="KeywordReserved"/>
+        <push state="tname"/>
+      </rule>
+      <rule pattern="\&#39;[\w\&#39;]*">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="([a-zA-Z][\w&#39;]*)(\.)">
+        <token type="NameNamespace"/>
+        <push state="dotted"/>
+      </rule>
+      <rule pattern="\b(abstype|and|andalso|as|case|datatype|do|else|end|exception|fn|fun|handle|if|in|infix|infixr|let|local|nonfix|of|op|open|orelse|raise|rec|then|type|val|with|withtype|while|eqtype|functor|include|sharing|sig|signature|struct|structure|where)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="([a-zA-Z][\w&#39;]*)">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\b(:|\|,=|=&gt;|-&gt;|#|:&gt;)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="([!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="fname">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="\&#39;[\w\&#39;]*">
+        <token type="NameDecorator"/>
+      </rule>
+      <rule pattern="\(">
+        <token type="Punctuation"/>
+        <push state="tyvarseq"/>
+      </rule>
+      <rule pattern="([a-zA-Z][\w&#39;]*)">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="([!%&amp;$#+\-/:&lt;=&gt;?@\\~`^|*]+)">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/stas.xml

@@ -0,0 +1,85 @@
+<lexer>
+  <config>
+    <name>stas</name>
+    <filename>*.stas</filename>
+  </config>
+  <rules>
+    <state name="string-double-quoted">
+      <rule pattern="\\.">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string-single-quoted">
+      <rule pattern="\\.">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#39;]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="string-char-literal">
+      <rule pattern="\\.">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\`]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralStringChar"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="(\n|\s)+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(?&lt;!\S)(fn|argc|argv|swap|dup|over|over2|rot|rot4|drop|w8|w16|w32|w64|r8|r16|r32|r64|syscall0|syscall1|syscall2|syscall3|syscall4|syscall5|syscall6|_breakpoint|assert|const|auto|reserve|pop|include|addr|if|else|elif|while|break|continue|ret)(?!\S)">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(?&lt;!\S)(\+|\-|\*|\/|\%|\%\%|\+\+|\-\-|&gt;&gt;|&lt;&lt;)(?!\S)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(?&lt;!\S)(\=|\!\=|&gt;|&lt;|&gt;\=|&lt;\=|&gt;s|&lt;s|&gt;\=s|&lt;\=s)(?!\S)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(?&lt;!\S)(\&amp;|\||\^|\~|\!|-\>)(?!\S)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(?&lt;!\S)\-?(\d+)(?!\S)">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="(?&lt;!\S);.*(\S|\n)">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <push state="string-single-quoted"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string-double-quoted"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralStringChar"/>
+        <push state="string-char-literal"/>
+      </rule>
+      <rule pattern="(?&lt;!\S)[{}](?!\S)">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(?&lt;!\S)[^\s]+(?!\S)">
+        <token type="Name"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/stylus.xml

@@ -0,0 +1,132 @@
+<lexer>
+  <config>
+    <name>Stylus</name>
+    <alias>stylus</alias>
+    <filename>*.styl</filename>
+    <mime_type>text/x-styl</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="values">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(\#[a-f0-9]{3,6})">
+        <token type="LiteralNumberHex"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/swift.xml

@@ -0,0 +1,207 @@
+<lexer>
+  <config>
+    <name>Swift</name>
+    <alias>swift</alias>
+    <filename>*.swift</filename>
+    <mime_type>text/x-swift</mime_type>
+  </config>
+  <rules>
+    <state name="comment">
+      <rule pattern=":param: [a-zA-Z_]\w*|:returns?:|(FIXME|MARK|TODO):">
+        <token type="CommentSpecial"/>
+      </rule>
+    </state>
+    <state name="preproc">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule pattern="[A-Za-z]\w*">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="comment-single">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="comment"/>
+      </rule>
+      <rule pattern="[^\n]">
+        <token type="CommentSingle"/>
+      </rule>
+    </state>
+    <state name="module">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameClass"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="\\\(">
+        <token type="LiteralStringInterpol"/>
+        <push state="string-intp"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\[&#39;&#34;\\nrt]|\\x[0-9a-fA-F]{2}|\\[0-7]{1,3}|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="string-intp">
+      <rule pattern="\(">
+        <token type="LiteralStringInterpol"/>
+        <push/>
+      </rule>
+      <rule pattern="\)">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//">
+        <token type="CommentSingle"/>
+        <push state="comment-single"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="comment-multi"/>
+      </rule>
+      <rule pattern="#(if|elseif|else|endif|available)\b">
+        <token type="CommentPreproc"/>
+        <push state="preproc"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/systemd.xml

@@ -0,0 +1,63 @@
+<lexer>
+  <config>
+    <name>SYSTEMD</name>
+    <alias>systemd</alias>
+    <filename>*.automount</filename>
+    <filename>*.device</filename>
+    <filename>*.dnssd</filename>
+    <filename>*.link</filename>
+    <filename>*.mount</filename>
+    <filename>*.netdev</filename>
+    <filename>*.network</filename>
+    <filename>*.path</filename>
+    <filename>*.scope</filename>
+    <filename>*.service</filename>
+    <filename>*.slice</filename>
+    <filename>*.socket</filename>
+    <filename>*.swap</filename>
+    <filename>*.target</filename>
+    <filename>*.timer</filename>
+    <mime_type>text/plain</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[;#].*">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\[.*?\]$">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(.*?)(=)(.*)(\\\n)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Operator"/>
+          <token type="LiteralString"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="continuation"/>
+      </rule>
+      <rule pattern="(.*?)(=)(.*)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Operator"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="continuation">
+      <rule pattern="(.*?)(\\\n)">
+        <bygroups>
+          <token type="LiteralString"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(.*)">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>
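Each of these vendored files declares a `<config>` header (name, aliases, filename globs, MIME types) followed by a `<rules>` state machine. As a rough sketch of the file format only, using the Go standard library rather than chroma's actual loader, the `<config>` block can be decoded like this (the `lexerFile` type and `parseLexerConfig` helper are illustrative, not chroma's API):

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// lexerFile mirrors the <config> header of the embedded lexer XML
// files in this diff. Hypothetical types for illustration; chroma's
// real loader uses its own internal representation.
type lexerFile struct {
	XMLName xml.Name `xml:"lexer"`
	Config  struct {
		Name      string   `xml:"name"`
		Aliases   []string `xml:"alias"`
		Filenames []string `xml:"filename"`
		MimeTypes []string `xml:"mime_type"`
	} `xml:"config"`
}

// sampleLexerXML is a trimmed copy of the tcl.xml header below.
const sampleLexerXML = `<lexer>
  <config>
    <name>Tcl</name>
    <alias>tcl</alias>
    <filename>*.tcl</filename>
    <filename>*.rvt</filename>
    <mime_type>text/x-tcl</mime_type>
  </config>
</lexer>`

// parseLexerConfig decodes a lexer XML file's config header.
func parseLexerConfig(src string) (lexerFile, error) {
	var lx lexerFile
	err := xml.Unmarshal([]byte(src), &lx)
	return lx, err
}

func main() {
	lx, err := parseLexerConfig(sampleLexerXML)
	if err != nil {
		panic(err)
	}
	fmt.Println(lx.Config.Name, lx.Config.Aliases, lx.Config.Filenames)
}
```

Note how repeated `<filename>` and `<alias>` elements collect into slices automatically under `encoding/xml`.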

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/systemverilog.xml

@@ -0,0 +1,181 @@
+<lexer>
+  <config>
+    <name>systemverilog</name>
+    <alias>systemverilog</alias>
+    <alias>sv</alias>
+    <filename>*.sv</filename>
+    <filename>*.svh</filename>
+    <mime_type>text/x-systemverilog</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="macro">
+      <rule pattern="[^/\n]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="/[*](.|\n)*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="/">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(?&lt;=\\)\n">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="import">
+      <rule pattern="[\w:]+\*?">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="^\s*`define">
+        <token type="CommentPreproc"/>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="^(\s*)(package)(\s+)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)(import)(\s+)(&#34;DPI(?:-C)?&#34;)(\s+)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+          <token type="LiteralString"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)(import)(\s+)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="import"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="/(\\\n)?/(\n|(.|\n)*?[^\\]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*](.|\n)*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="[{}#@]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="L?&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="L?&#39;(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\&#39;\n])&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[lL]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+[fF])[fF]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="([0-9]+)|(\&#39;h)[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="([0-9]+)|(\&#39;b)[01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="([0-9]+)|(\&#39;d)[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="([0-9]+)|(\&#39;o)[0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="\&#39;[01xz]">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+[Ll]?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="Error"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[()\[\],.;\&#39;]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="`[a-zA-Z_]\w*">
+        <token type="NameConstant"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tablegen.xml

@@ -0,0 +1,69 @@
+<lexer>
+  <config>
+    <name>TableGen</name>
+    <alias>tablegen</alias>
+    <filename>*.td</filename>
+    <mime_type>text/x-tablegen</mime_type>
+  </config>
+  <rules>
+    <state name="whitespace">
+      <rule pattern="(\n|\s)+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="keyword">
+      <rule pattern="(multiclass|foreach|string|class|field|defm|bits|code|list|def|int|let|dag|bit|in)\b">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="macro"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="c?&#34;[^&#34;]*?&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule>
+        <include state="keyword"/>
+      </rule>
+      <rule pattern="\$[_a-zA-Z][_\w]*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\d*[_a-zA-Z][_\w]*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\[\{[\w\W]*?\}\]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[+-]?\d+|0x[\da-fA-F]+|0b[01]+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[=&lt;&gt;{}\[\]()*.,!:;]">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="macro">
+      <rule pattern="(#include\s+)(&#34;[^&#34;]*&#34;)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^\s*#(ifdef|ifndef)\s+[_\w][_\w\d]*">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="^\s*#define\s+[_\w][_\w\d]*">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="^\s*#endif">
+        <token type="CommentPreproc"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tal.xml

@@ -0,0 +1,43 @@
+
+<lexer>
+  <config>
+    <name>Tal</name>
+    <alias>tal</alias>
+    <alias>uxntal</alias>
+    <filename>*.tal</filename>
+    <mime_type>text/x-uxntal</mime_type>
+  </config>
+  <rules>
+    <state name="comment">
+      <rule pattern="(?&lt;!\S)\((?!\S)"><token type="CommentMultiline"/><push/></rule>
+      <rule pattern="(?&lt;!\S)\)(?!\S)"><token type="CommentMultiline"/><pop depth="1"/></rule>
+      <rule pattern="[^()]+"><token type="CommentMultiline"/></rule>
+      <rule pattern="[()]+"><token type="CommentMultiline"/></rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="(?&lt;!\S)\((?!\S)"><token type="CommentMultiline"/><push state="comment"/></rule>
+      <rule pattern="(?&lt;!\S)(BRK|LIT|INC|POP|DUP|NIP|SWP|OVR|ROT|EQU|NEQ|GTH|LTH|JMP|JCN|JSR|STH|LDZ|STZ|LDR|STR|LDA|STA|DEI|DEO|ADD|SUB|MUL|DIV|AND|ORA|EOR|SFT)2?k?r?(?!\S)"><token type="KeywordReserved"/></rule>
+      <rule pattern="[][{}](?!\S)"><token type="Punctuation"/></rule>
+      <rule pattern="#([0-9a-f]{2}){1,2}(?!\S)"><token type="LiteralNumberHex"/></rule>
+      <rule pattern="&quot;\S+"><token type="LiteralString"/></rule>
+      <rule pattern="([0-9a-f]{2}){1,2}(?!\S)"><token type="Literal"/></rule>
+      <rule pattern="[|$][0-9a-f]{1,4}(?!\S)"><token type="KeywordDeclaration"/></rule>
+      <rule pattern="%\S+"><token type="NameDecorator"/></rule>
+      <rule pattern="@\S+"><token type="NameFunction"/></rule>
+      <rule pattern="&amp;\S+"><token type="NameLabel"/></rule>
+      <rule pattern="/\S+"><token type="NameTag"/></rule>
+      <rule pattern="\.\S+"><token type="NameVariableMagic"/></rule>
+      <rule pattern=",\S+"><token type="NameVariableInstance"/></rule>
+      <rule pattern=";\S+"><token type="NameVariableGlobal"/></rule>
+      <rule pattern="-\S+"><token type="Literal"/></rule>
+      <rule pattern="_\S+"><token type="Literal"/></rule>
+      <rule pattern="=\S+"><token type="Literal"/></rule>
+      <rule pattern="!\S+"><token type="NameFunction"/></rule>
+      <rule pattern="\?\S+"><token type="NameFunction"/></rule>
+      <rule pattern="~\S+"><token type="KeywordNamespace"/></rule>
+      <rule pattern="\S+"><token type="NameFunction"/></rule>
+    </state>
+  </rules>
+</lexer>
+

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tasm.xml

@@ -0,0 +1,135 @@
+<lexer>
+  <config>
+    <name>TASM</name>
+    <alias>tasm</alias>
+    <filename>*.asm</filename>
+    <filename>*.ASM</filename>
+    <filename>*.tasm</filename>
+    <mime_type>text/x-tasm</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="preproc">
+      <rule pattern="[^;\n]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern=";.*?\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="[\n\r]">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\[\n\r]">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[ \t]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=";.*">
+        <token type="CommentSingle"/>
+      </rule>
+    </state>
+    <state name="punctuation">
+      <rule pattern="[,():\[\]]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[&amp;|^&lt;&gt;+*=/%~-]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[$]+">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="seg|wrt|strict">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="byte|[dq]?word">
+        <token type="KeywordType"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="^\s*%">
+        <token type="CommentPreproc"/>
+        <push state="preproc"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="[@a-z$._?][\w$.?#@~]*:">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="BITS|USE16|USE32|SECTION|SEGMENT|ABSOLUTE|EXTERN|GLOBAL|ORG|ALIGN|STRUC|ENDSTRUC|ENDS|COMMON|CPU|GROUP|UPPERCASE|INCLUDE|EXPORT|LIBRARY|MODULE|PROC|ENDP|USES|ARG|DATASEG|UDATASEG|END|IDEAL|P386|MODEL|ASSUME|CODESEG|SIZE">
+        <token type="Keyword"/>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="([@a-z$._?][\w$.?#@~]*)(\s+)(db|dd|dw|T[A-Z][a-z]+)">
+        <bygroups>
+          <token type="NameConstant"/>
+          <token type="KeywordDeclaration"/>
+          <token type="KeywordDeclaration"/>
+        </bygroups>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="(?:res|d)[bwdqt]|times">
+        <token type="KeywordDeclaration"/>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="[@a-z$._?][\w$.?#@~]*">
+        <token type="NameFunction"/>
+        <push state="instruction-args"/>
+      </rule>
+      <rule pattern="[\r\n]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="instruction-args">
+      <rule pattern="&#34;(\\&#34;|[^&#34;\n])*&#34;|&#39;(\\&#39;|[^&#39;\n])*&#39;|`(\\`|[^`\n])*`">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(?:0x[0-9a-f]+|$0[0-9a-f]*|[0-9]+[0-9a-f]*h)">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-7]+q">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="[01]+b">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="[0-9]+\.e?[0-9]+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule>
+        <include state="punctuation"/>
+      </rule>
+      <rule pattern="r[0-9][0-5]?[bwd]|[a-d][lh]|[er]?[a-d]x|[er]?[sb]p|[er]?[sd]i|[c-gs]s|st[0-7]|mm[0-7]|cr[0-4]|dr[0-367]|tr[3-7]">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[@a-z$._?][\w$.?#@~]*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="(\\\s*)(;.*)([\r\n])">
+        <bygroups>
+          <token type="Text"/>
+          <token type="CommentSingle"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[\r\n]+">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tcl.xml

@@ -0,0 +1,272 @@
+<lexer>
+  <config>
+    <name>Tcl</name>
+    <alias>tcl</alias>
+    <filename>*.tcl</filename>
+    <filename>*.rvt</filename>
+    <mime_type>text/x-tcl</mime_type>
+    <mime_type>text/x-script.tcl</mime_type>
+    <mime_type>application/x-tcl</mime_type>
+  </config>
+  <rules>
+    <state name="command-in-bracket">
+      <rule pattern="\b(namespace|continue|variable|uplevel|foreach|return|update|elseif|global|rename|switch|upvar|error|vwait|catch|break|unset|array|apply|trace|after|while|then|else|expr|eval|proc|for|set|if)\b">
+        <token type="Keyword"/>
+        <push state="params-in-bracket"/>
+      </rule>
+      <rule pattern="\b(platform::shell|pkg::create|pkg_mkIndex|fconfigure|re_syntax|fileevent|platform|fblocked|lreverse|mathfunc|encoding|registry|lreplace|history|bgerror|llength|lsearch|linsert|lassign|lappend|refchan|unknown|package|lrepeat|msgcat|mathop|format|interp|lrange|string|source|lindex|socket|concat|regsub|regexp|loadTk|memory|binary|append|unload|subst|split|lsort|clock|close|flush|fcopy|chan|glob|time|gets|http|dict|file|puts|tell|join|read|exit|exec|open|list|scan|seek|incr|info|lset|load|dde|pwd|pid|eof|tm|cd)\b">
+        <token type="NameBuiltin"/>
+        <push state="params-in-bracket"/>
+      </rule>
+      <rule pattern="([\w.-]+)">
+        <token type="NameVariable"/>
+        <push state="params-in-bracket"/>
+      </rule>
+      <rule pattern="#">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>
+    </state>
+    <state name="command-in-paren">
+      <rule pattern="\b(namespace|continue|variable|uplevel|foreach|return|update|elseif|global|rename|switch|upvar|error|vwait|catch|break|unset|array|apply|trace|after|while|then|else|expr|eval|proc|for|set|if)\b">
+        <token type="Keyword"/>
+        <push state="params-in-paren"/>
+      </rule>
+      <rule pattern="\b(platform::shell|pkg::create|pkg_mkIndex|fconfigure|re_syntax|fileevent|platform|fblocked|lreverse|mathfunc|encoding|registry|lreplace|history|bgerror|llength|lsearch|linsert|lassign|lappend|refchan|unknown|package|lrepeat|msgcat|mathop|format|interp|lrange|string|source|lindex|socket|concat|regsub|regexp|loadTk|memory|binary|append|unload|subst|split|lsort|clock|close|flush|fcopy|chan|glob|time|gets|http|dict|file|puts|tell|join|read|exit|exec|open|list|scan|seek|incr|info|lset|load|dde|pwd|pid|eof|tm|cd)\b">
+        <token type="NameBuiltin"/>
+        <push state="params-in-paren"/>
+      </rule>
+      <rule pattern="([\w.-]+)">
+        <token type="NameVariable"/>
+        <push state="params-in-paren"/>
+      </rule>
+      <rule pattern="#">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>
+    </state>
+    <state name="command-in-brace">
+      <rule pattern="\b(namespace|continue|variable|uplevel|foreach|return|update|elseif|global|rename|switch|upvar|error|vwait|catch|break|unset|array|apply|trace|after|while|then|else|expr|eval|proc|for|set|if)\b">
+        <token type="Keyword"/>
+        <push state="params-in-brace"/>
+      </rule>
+      <rule pattern="\b(platform::shell|pkg::create|pkg_mkIndex|fconfigure|re_syntax|fileevent|platform|fblocked|lreverse|mathfunc|encoding|registry|lreplace|history|bgerror|llength|lsearch|linsert|lassign|lappend|refchan|unknown|package|lrepeat|msgcat|mathop|format|interp|lrange|string|source|lindex|socket|concat|regsub|regexp|loadTk|memory|binary|append|unload|subst|split|lsort|clock|close|flush|fcopy|chan|glob|time|gets|http|dict|file|puts|tell|join|read|exit|exec|open|list|scan|seek|incr|info|lset|load|dde|pwd|pid|eof|tm|cd)\b">
+        <token type="NameBuiltin"/>
+        <push state="params-in-brace"/>
+      </rule>
+      <rule pattern="([\w.-]+)">
+        <token type="NameVariable"/>
+        <push state="params-in-brace"/>
+      </rule>
+      <rule pattern="#">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>
+    </state>
+    <state name="basic">
+      <rule pattern="\(">
+        <token type="Keyword"/>
+        <push state="paren"/>
+      </rule>
+      <rule pattern="\[">
+        <token type="Keyword"/>
+        <push state="bracket"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Keyword"/>
+        <push state="brace"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(eq|ne|in|ni)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="!=|==|&lt;&lt;|&gt;&gt;|&lt;=|&gt;=|&amp;&amp;|\|\||\*\*|[-+~!*/%&lt;&gt;&amp;^|?:]">
+        <token type="Operator"/>
+      </rule>
+    </state>
+    <state name="params-in-bracket">
+      <rule pattern="\]">
+        <token type="Keyword"/>
+        <push state="#pop" state="#pop"/>
+      </rule>
+      <rule>
+        <include state="params"/>
+      </rule>
+    </state>
+    <state name="data">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="0x[a-fA-F0-9]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="0[0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="\d+\.\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\$([\w.:-]+)">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="([\w.:-]+)">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="command">
+      <rule pattern="\b(namespace|continue|variable|uplevel|foreach|return|update|elseif|global|rename|switch|upvar|error|vwait|catch|break|unset|array|apply|trace|after|while|then|else|expr|eval|proc|for|set|if)\b">
+        <token type="Keyword"/>
+        <push state="params"/>
+      </rule>
+      <rule pattern="\b(platform::shell|pkg::create|pkg_mkIndex|fconfigure|re_syntax|fileevent|platform|fblocked|lreverse|mathfunc|encoding|registry|lreplace|history|bgerror|llength|lsearch|linsert|lassign|lappend|refchan|unknown|package|lrepeat|msgcat|mathop|format|interp|lrange|string|source|lindex|socket|concat|regsub|regexp|loadTk|memory|binary|append|unload|subst|split|lsort|clock|close|flush|fcopy|chan|glob|time|gets|http|dict|file|puts|tell|join|read|exit|exec|open|list|scan|seek|incr|info|lset|load|dde|pwd|pid|eof|tm|cd)\b">
+        <token type="NameBuiltin"/>
+        <push state="params"/>
+      </rule>
+      <rule pattern="([\w.-]+)">
+        <token type="NameVariable"/>
+        <push state="params"/>
+      </rule>
+      <rule pattern="#">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>
+    </state>
+    <state name="params-in-brace">
+      <rule pattern="\}">
+        <token type="Keyword"/>
+        <push state="#pop" state="#pop"/>
+      </rule>
+      <rule>
+        <include state="params"/>
+      </rule>
+    </state>
+    <state name="string-square">
+      <rule pattern="\[">
+        <token type="LiteralStringDouble"/>
+        <push state="string-square"/>
+      </rule>
+      <rule pattern="(?s)(\\\\|\\[0-7]+|\\.|\\\n|[^\]\\])">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\]">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="bracket">
+      <rule pattern="\]">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="command-in-bracket"/>
+      </rule>
+      <rule>
+        <include state="basic"/>
+      </rule>
+      <rule>
+        <include state="data"/>
+      </rule>
+    </state>
+    <state name="params-in-paren">
+      <rule pattern="\)">
+        <token type="Keyword"/>
+        <push state="#pop" state="#pop"/>
+      </rule>
+      <rule>
+        <include state="params"/>
+      </rule>
+    </state>
+    <state name="paren">
+      <rule pattern="\)">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="command-in-paren"/>
+      </rule>
+      <rule>
+        <include state="basic"/>
+      </rule>
+      <rule>
+        <include state="data"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern=".*[^\\]\n">
+        <token type="Comment"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=".*\\\n">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="command"/>
+      </rule>
+      <rule>
+        <include state="basic"/>
+      </rule>
+      <rule>
+        <include state="data"/>
+      </rule>
+      <rule pattern="\}">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="brace">
+      <rule pattern="\}">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="command-in-brace"/>
+      </rule>
+      <rule>
+        <include state="basic"/>
+      </rule>
+      <rule>
+        <include state="data"/>
+      </rule>
+    </state>
+    <state name="params">
+      <rule pattern=";">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(else|elseif|then)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule>
+        <include state="basic"/>
+      </rule>
+      <rule>
+        <include state="data"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="\[">
+        <token type="LiteralStringDouble"/>
+        <push state="string-square"/>
+      </rule>
+      <rule pattern="(?s)(\\\\|\\[0-7]+|\\.|[^&#34;\\])">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>
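The `<filename>` globs in each config header are what let a registry pick a lexer for a given file. A minimal matching sketch, again standard library only (the `matchLexer` helper is hypothetical, not chroma's API):

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

// matchLexer reports whether a file's base name matches any of the
// <filename> globs declared in a lexer config. Illustrative only.
func matchLexer(globs []string, filename string) bool {
	base := filepath.Base(filename)
	for _, g := range globs {
		// path.Match handles the simple "*.ext" patterns used here.
		if ok, _ := path.Match(g, base); ok {
			return true
		}
	}
	return false
}

func main() {
	tclGlobs := []string{"*.tcl", "*.rvt"} // from the Tcl config above
	fmt.Println(matchLexer(tclGlobs, "scripts/build.tcl"))
	fmt.Println(matchLexer(tclGlobs, "main.go"))
}
```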

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tcsh.xml

@@ -0,0 +1,121 @@
+<lexer>
+  <config>
+    <name>Tcsh</name>
+    <alias>tcsh</alias>
+    <alias>csh</alias>
+    <filename>*.tcsh</filename>
+    <filename>*.csh</filename>
+    <mime_type>application/x-csh</mime_type>
+  </config>
+  <rules>
+    <state name="basic">
+      <rule pattern="\b(if|endif|else|while|then|foreach|case|default|continue|goto|breaksw|end|switch|endsw)\s*\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(alias|alloc|bg|bindkey|break|builtins|bye|caller|cd|chdir|complete|dirs|echo|echotc|eval|exec|exit|fg|filetest|getxvers|glob|getspath|hashstat|history|hup|inlib|jobs|kill|limit|log|login|logout|ls-F|migrate|newgrp|nice|nohup|notify|onintr|popd|printenv|pushd|rehash|repeat|rootnode|popd|pushd|set|shift|sched|setenv|setpath|settc|setty|setxvers|shift|source|stop|suspend|source|suspend|telltc|time|umask|unalias|uncomplete|unhash|universe|unlimit|unset|unsetenv|ver|wait|warp|watchlog|where|which)\s*\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="#.*">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\\[\w\W]">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="(\b\w+)(\s*)(=)">
+        <bygroups>
+          <token type="NameVariable"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[\[\]{}()=]+">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&lt;&lt;\s*(\&#39;?)\\?(\w+)[\w\W]+?\2">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="data">
+      <rule pattern="(?s)&#34;(\\\\|\\[0-7]+|\\.|[^&#34;\\])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="(?s)&#39;(\\\\|\\[0-7]+|\\.|[^&#39;\\])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[^=\s\[\]{}()$&#34;\&#39;`\\;#]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\d+(?= |\Z)">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\$#?(\w+|.)">
+        <token type="NameVariable"/>
+      </rule>
+    </state>
+    <state name="curly">
+      <rule pattern="\}">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=":-">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\w+">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[^}:&#34;\&#39;`$]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="paren">
+      <rule pattern="\)">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="backticks">
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="basic"/>
+      </rule>
+      <rule pattern="\$\(">
+        <token type="Keyword"/>
+        <push state="paren"/>
+      </rule>
+      <rule pattern="\$\{#?">
+        <token type="Keyword"/>
+        <push state="curly"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <push state="backticks"/>
+      </rule>
+      <rule>
+        <include state="data"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/termcap.xml

@@ -0,0 +1,75 @@
+<lexer>
+  <config>
+    <name>Termcap</name>
+    <alias>termcap</alias>
+    <filename>termcap</filename>
+    <filename>termcap.src</filename>
+  </config>
+  <rules>
+    <state name="defs">
+      <rule pattern="\\\n[ \t]*">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\n[ \t]*">
+        <token type="Text"/>
+        <pop depth="2"/>
+      </rule>
+      <rule pattern="(#)([0-9]+)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="LiteralNumber"/>
+        </bygroups>
+      </rule>
+      <rule pattern="=">
+        <token type="Operator"/>
+        <push state="data"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[^\s:=#]+">
+        <token type="NameClass"/>
+      </rule>
+    </state>
+    <state name="data">
+      <rule pattern="\\072">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^:\\]+">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Literal"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="^#.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="^[^\s#:|]+">
+        <token type="NameTag"/>
+        <push state="names"/>
+      </rule>
+    </state>
+    <state name="names">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern=":">
+        <token type="Punctuation"/>
+        <push state="defs"/>
+      </rule>
+      <rule pattern="\|">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[^:|]+">
+        <token type="NameAttribute"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/terminfo.xml

@@ -0,0 +1,84 @@
+<lexer>
+  <config>
+    <name>Terminfo</name>
+    <alias>terminfo</alias>
+    <filename>terminfo</filename>
+    <filename>terminfo.src</filename>
+  </config>
+  <rules>
+    <state name="names">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(,)([ \t]*)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="defs"/>
+      </rule>
+      <rule pattern="\|">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[^,|]+">
+        <token type="NameAttribute"/>
+      </rule>
+    </state>
+    <state name="defs">
+      <rule pattern="\n[ \t]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="2"/>
+      </rule>
+      <rule pattern="(#)([0-9]+)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="LiteralNumber"/>
+        </bygroups>
+      </rule>
+      <rule pattern="=">
+        <token type="Operator"/>
+        <push state="data"/>
+      </rule>
+      <rule pattern="(,)([ \t]*)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^\s,=#]+">
+        <token type="NameClass"/>
+      </rule>
+    </state>
+    <state name="data">
+      <rule pattern="\\[,\\]">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="(,)([ \t]*)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\\,]+">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Literal"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="^#.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="^[^\s#,|]+">
+        <token type="NameTag"/>
+        <push state="names"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/terraform.xml

@@ -0,0 +1,140 @@
+<lexer>
+  <config>
+    <name>Terraform</name>
+    <alias>terraform</alias>
+    <alias>tf</alias>
+    <filename>*.tf</filename>
+    <mime_type>application/x-tf</mime_type>
+    <mime_type>application/x-terraform</mime_type>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\\">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\\\\&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="\$\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interp-inside"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="[^&#34;\\\\$]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="interp-inside">
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[\[\](),.{}]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="-?[0-9]+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="=&gt;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(false|true)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="/(?s)\*(((?!\*/).)*)\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\s*(#|//).*\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="([a-zA-Z]\w*)(\s*)(=(?!&gt;))">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Text"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^\s*(provisioner|variable|resource|provider|module|output|data)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(for|in)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(module|count|data|each|var)">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(parseint|signum|floor|ceil|log|max|min|abs|pow)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(trimsuffix|formatlist|trimprefix|trimspace|regexall|replace|indent|strrev|format|substr|chomp|split|title|regex|lower|upper|trim|join)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[^.](setintersection|coalescelist|setsubtract|setproduct|matchkeys|chunklist|transpose|contains|distinct|coalesce|setunion|reverse|flatten|element|compact|lookup|length|concat|values|zipmap|range|merge|slice|index|list|sort|keys|map)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[^.](base64decode|base64encode|base64gzip|jsondecode|jsonencode|yamldecode|yamlencode|csvdecode|urlencode)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(templatefile|filebase64|fileexists|pathexpand|basename|abspath|fileset|dirname|file)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(formatdate|timestamp|timeadd)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(filebase64sha256|filebase64sha512|base64sha512|base64sha256|filesha256|rsadecrypt|filesha512|filesha1|filemd5|uuidv5|bcrypt|sha256|sha512|sha1|uuid|md5)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(cidrnetmask|cidrsubnet|cidrhost)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(tostring|tonumber|tobool|tolist|tomap|toset|can|try)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="=(?!&gt;)|\+|-|\*|\/|:|!|%|&gt;|&lt;(?!&lt;)|&gt;=|&lt;=|==|!=|&amp;&amp;|\||\?">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\n|\s+|\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[a-zA-Z]\w*">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="(?s)(&lt;&lt;-?)(\w+)(\n\s*(?:(?!\2).)*\s*\n\s*)(\2)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Operator"/>
+          <token type="LiteralString"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="declaration">
+      <rule pattern="(\s*)(&#34;(?:\\\\|\\&#34;|[^&#34;])*&#34;)(\s*)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="NameVariable"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\{">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tex.xml

@@ -0,0 +1,113 @@
+<lexer>
+  <config>
+    <name>TeX</name>
+    <alias>tex</alias>
+    <alias>latex</alias>
+    <filename>*.tex</filename>
+    <filename>*.aux</filename>
+    <filename>*.toc</filename>
+    <mime_type>text/x-tex</mime_type>
+    <mime_type>text/x-latex</mime_type>
+  </config>
+  <rules>
+    <state name="displaymath">
+      <rule pattern="\\\]">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\$\$">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule>
+        <include state="math"/>
+      </rule>
+    </state>
+    <state name="command">
+      <rule pattern="\[.*?\]">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="\*">
+        <token type="Keyword"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="general">
+      <rule pattern="%.*?\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="[{}]">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[&amp;_^]">
+        <token type="NameBuiltin"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\\\[">
+        <token type="LiteralStringBacktick"/>
+        <push state="displaymath"/>
+      </rule>
+      <rule pattern="\\\(">
+        <token type="LiteralString"/>
+        <push state="inlinemath"/>
+      </rule>
+      <rule pattern="\$\$">
+        <token type="LiteralStringBacktick"/>
+        <push state="displaymath"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralString"/>
+        <push state="inlinemath"/>
+      </rule>
+      <rule pattern="\\([a-zA-Z]+|.)">
+        <token type="Keyword"/>
+        <push state="command"/>
+      </rule>
+      <rule pattern="\\$">
+        <token type="Keyword"/>
+      </rule>
+      <rule>
+        <include state="general"/>
+      </rule>
+      <rule pattern="[^\\$%&amp;_^{}]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="math">
+      <rule pattern="\\([a-zA-Z]+|.)">
+        <token type="NameVariable"/>
+      </rule>
+      <rule>
+        <include state="general"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="[-=!+*/()\[\]]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[^=!+*/()\[\]\\$%&amp;_^{}0-9-]+">
+        <token type="NameBuiltin"/>
+      </rule>
+    </state>
+    <state name="inlinemath">
+      <rule pattern="\\\)">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="math"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/thrift.xml

@@ -0,0 +1,154 @@
+<lexer>
+  <config>
+    <name>Thrift</name>
+    <alias>thrift</alias>
+    <filename>*.thrift</filename>
+    <mime_type>application/x-thrift</mime_type>
+  </config>
+  <rules>
+    <state name="class">
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(async|oneway|extends|throws|required|optional)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(true|false)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(const|typedef)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(smalltalk_category|smalltalk_prefix|delphi_namespace|csharp_namespace|ruby_namespace|xsd_namespace|cpp_namespace|php_namespace|xsd_nillable|xsd_optional|java_package|cocoa_prefix|perl_package|cpp_include|py_module|xsd_attrs|cpp_type|xsd_all|include)\b">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern="(double|binary|string|slist|senum|bool|void|byte|list|i64|map|set|i32|i16)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\b(__NAMESPACE__|synchronized|__FUNCTION__|__METHOD__|endforeach|implements|enddeclare|instanceof|transient|endswitch|protected|interface|__CLASS__|continue|__FILE__|abstract|function|endwhile|unsigned|register|volatile|__LINE__|declare|foreach|default|__DIR__|private|finally|dynamic|virtual|lambda|elseif|inline|switch|unless|endfor|delete|import|return|module|ensure|native|rescue|assert|sizeof|static|global|except|public|float|BEGIN|super|endif|yield|elsif|throw|clone|class|catch|until|break|retry|begin|raise|alias|while|print|undef|exec|with|when|case|redo|args|elif|this|then|self|goto|else|pass|next|var|for|xor|END|not|try|del|and|def|new|use|nil|end|if|do|is|or|in|as)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+    </state>
+    <state name="numbers">
+      <rule pattern="[+-]?(\d+\.\d+([eE][+-]?\d+)?|\.?\d+[eE][+-]?\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[+-]?0x[0-9A-Fa-f]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[+-]?[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="comments"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <combined state="stringescape" state="dqs"/>
+      </rule>
+      <rule pattern="\&#39;">
+        <token type="LiteralStringSingle"/>
+        <combined state="stringescape" state="sqs"/>
+      </rule>
+      <rule pattern="(namespace)(\s+)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+        <push state="namespace"/>
+      </rule>
+      <rule pattern="(enum|union|struct|service|exception)(\s+)">
+        <bygroups>
+          <token type="KeywordDeclaration"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+        <push state="class"/>
+      </rule>
+      <rule pattern="((?:(?:[^\W\d]|\$)[\w.\[\]$&lt;&gt;]*\s+)+?)((?:[^\W\d]|\$)[\w$]*)(\s*)(\()">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="NameFunction"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+      <rule pattern="[&amp;=]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[:;,{}()&lt;&gt;\[\]]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[a-zA-Z_](\.\w|\w)*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="dqs">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\\&#34;\n]+">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="namespace">
+      <rule pattern="[a-z*](\.\w|\w)*">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\n">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+    </state>
+    <state name="comments">
+      <rule pattern="#.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="/\*[\w\W]*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="stringescape">
+      <rule pattern="\\([\\nrt&#34;\&#39;])">
+        <token type="LiteralStringEscape"/>
+      </rule>
+    </state>
+    <state name="sqs">
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\\\&#39;\n]+">
+        <token type="LiteralStringSingle"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/toml.xml

@@ -0,0 +1,44 @@
+<lexer>
+  <config>
+    <name>TOML</name>
+    <alias>toml</alias>
+    <filename>*.toml</filename>
+    <filename>Pipfile</filename>
+    <filename>poetry.lock</filename>
+    <mime_type>text/x-toml</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="#.*">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(false|true)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="\d\d\d\d-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d\+)?(Z|[+-]\d{2}:\d{2})">
+        <token type="LiteralDate"/>
+      </rule>
+      <rule pattern="[+-]?[0-9](_?\d)*\.\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[+-]?[0-9](_?\d)*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="[.,=\[\]{}]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[A-Za-z0-9_-]+">
+        <token type="NameOther"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/tradingview.xml

@@ -0,0 +1,81 @@
+<lexer>
+  <config>
+    <name>TradingView</name>
+    <alias>tradingview</alias>
+    <alias>tv</alias>
+    <filename>*.tv</filename>
+    <mime_type>text/x-tradingview</mime_type>
+    <dot_all>true</dot_all>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="[^\S\n]+|\n|[()]">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(//.*?)(\n)">
+        <bygroups>
+          <token type="CommentSingle"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="&gt;=|&lt;=|==|!=|&gt;|&lt;|\?|-|\+|\*|\/|%|\[|\]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[:,.]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="=">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;\n])*[&#34;\n]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;\\.&#39;|&#39;[^\\]&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[0-9](\.[0-9]*)?([eE][+-][0-9]+)?">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="#[a-fA-F0-9]{8}|#[a-fA-F0-9]{6}|#[a-fA-F0-9]{3}">
+        <token type="LiteralStringOther"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/transact-sql.xml

@@ -0,0 +1,137 @@
+<lexer>
+  <config>
+    <name>Transact-SQL</name>
+    <alias>tsql</alias>
+    <alias>t-sql</alias>
+    <mime_type>text/x-tsql</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="--(?m).*?$\n?">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="multiline-comments"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralStringSingle"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralStringName"/>
+        <push state="quoted-ident"/>
+      </rule>
+      <rule pattern="(\*=|!=|!&gt;|\^=|&lt;=|&lt;&gt;|\|=|&amp;=|&gt;=|%=|\+=|/=|-=|!&lt;|::|/|-|%|\+|&amp;|&gt;|\||=|\^|&lt;|~|\*)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(intersect|between|except|exists|union|some|like|all|any|not|and|or|in)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="(uniqueidentifier|datetimeoffset|smalldatetime|hierarchyid|sql_variant|smallmoney|varbinary|datetime2|timestamp|datetime|smallint|nvarchar|decimal|tinyint|varchar|numeric|binary|bigint|cursor|image|nchar|money|float|table|ntext|text|time|real|date|char|int|bit|xml)\b">
+        <token type="NameClass"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/turing.xml

@@ -0,0 +1,82 @@
+<lexer>
+  <config>
+    <name>Turing</name>
+    <alias>turing</alias>
+    <filename>*.turing</filename>
+    <filename>*.tu</filename>
+    <mime_type>text/x-turing</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="%(.*?)\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*](.|\n)*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="(var|fcn|function|proc|procedure|process|class|end|record|type|begin|case|loop|for|const|union|monitor|module|handler)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(all|asm|assert|bind|bits|body|break|by|cheat|checked|close|condition|decreasing|def|deferred|else|elsif|exit|export|external|flexible|fork|forward|free|get|if|implement|import|include|inherit|init|invariant|label|new|objectclass|of|opaque|open|packed|pause|pervasive|post|pre|priority|put|quit|read|register|result|seek|self|set|signal|skip|tag|tell|then|timeout|to|unchecked|unqualified|wait|when|write)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(true|false)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(addressint|boolean|pointer|string|array|real4|real8|nat1|int8|int4|int2|nat2|nat4|nat8|int1|real|char|enum|nat|int)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\d+i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+\.\d*([Ee][-+]\d+)?i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\.\d+([Ee][-+]\d+)?i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+[Ee][-+]\d+i">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+(\.\d+[eE][+\-]?\d+|\.\d*|[eE][+\-]?\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\.\d+([eE][+\-]?\d+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="(0|[1-9][0-9]*)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="(div|mod|rem|\*\*|=|&lt;|&gt;|&gt;=|&lt;=|not=|not|and|or|xor|=&gt;|in|shl|shr|-&gt;|~|~=|~in|&amp;|:=|\.\.|[\^+\-*/&amp;#])">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&#39;(\\[&#39;&#34;\\abfnrtv]|\\x[0-9a-fA-F]{2}|\\[0-7]{1,3}|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|[^\\])&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[()\[\]{}.,:]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[^\W\d]\w*">
+        <token type="NameOther"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/turtle.xml

@@ -0,0 +1,170 @@
+<lexer>
+  <config>
+    <name>Turtle</name>
+    <alias>turtle</alias>
+    <filename>*.ttl</filename>
+    <mime_type>text/turtle</mime_type>
+    <mime_type>application/x-turtle</mime_type>
+    <case_insensitive>true</case_insensitive>
+    <not_multiline>true</not_multiline>
+  </config>
+  <rules>
+    <state name="triple-double-quoted-string">
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <push state="end-of-string"/>
+      </rule>
+      <rule pattern="[^\\]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+        <push state="string-escape"/>
+      </rule>
+    </state>
+    <state name="single-double-quoted-string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="end-of-string"/>
+      </rule>
+      <rule pattern="[^&#34;\\\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+        <push state="string-escape"/>
+      </rule>
+    </state>
+    <state name="triple-single-quoted-string">
+      <rule pattern="&#39;&#39;&#39;">
+        <token type="LiteralString"/>
+        <push state="end-of-string"/>
+      </rule>
+      <rule pattern="[^\\]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+        <push state="string-escape"/>
+      </rule>
+    </state>
+    <state name="single-single-quoted-string">
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <push state="end-of-string"/>
+      </rule>
+      <rule pattern="[^&#39;\\\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+        <push state="string-escape"/>
+      </rule>
+    </state>
+    <state name="string-escape">
+      <rule pattern=".">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="end-of-string">
+      <rule pattern="(@)([a-z]+(:?-[a-z0-9]+)*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="GenericEmph"/>
+          <token type="GenericEmph"/>
+        </bygroups>
+        <pop depth="2"/>
+      </rule>
+      <rule pattern="(\^\^)(&lt;[^&lt;&gt;&#34;{}|^`\\\x00-\x20]*&gt;)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="GenericEmph"/>
+        </bygroups>
+        <pop depth="2"/>
+      </rule>
+      <rule pattern="(\^\^)((?:[a-z][\w-]*)?\:)([a-z][\w-]*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="GenericEmph"/>
+          <token type="GenericEmph"/>
+        </bygroups>
+        <pop depth="2"/>
+      </rule>
+      <rule>
+        <pop depth="2"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="(@base|BASE)(\s+)(&lt;[^&lt;&gt;&#34;{}|^`\\\x00-\x20]*&gt;)(\s*)(\.?)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="NameVariable"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(@prefix|PREFIX)(\s+)((?:[a-z][\w-]*)?\:)(\s+)(&lt;[^&lt;&gt;&#34;{}|^`\\\x00-\x20]*&gt;)(\s*)(\.?)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="NameNamespace"/>
+          <token type="TextWhitespace"/>
+          <token type="NameVariable"/>
+          <token type="TextWhitespace"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?&lt;=\s)a(?=\s)">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(&lt;[^&lt;&gt;&#34;{}|^`\\\x00-\x20]*&gt;)">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="((?:[a-z][\w-]*)?\:)([a-z][\w-]*)">
+        <bygroups>
+          <token type="NameNamespace"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="#[^\n]+">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="\b(true|false)\b">
+        <token type="Literal"/>
+      </rule>
+      <rule pattern="[+\-]?\d*\.\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[+\-]?\d*(:?\.\d+)?E[+\-]?\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[+\-]?\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[\[\](){}.;,:^]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&#34;&#34;&#34;">
+        <token type="LiteralString"/>
+        <push state="triple-double-quoted-string"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="single-double-quoted-string"/>
+      </rule>
+      <rule pattern="&#39;&#39;&#39;">
+        <token type="LiteralString"/>
+        <push state="triple-single-quoted-string"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="LiteralString"/>
+        <push state="single-single-quoted-string"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/twig.xml

@@ -0,0 +1,155 @@
+<lexer>
+  <config>
+    <name>Twig</name>
+    <alias>twig</alias>
+    <filename>*.twig</filename>
+    <mime_type>application/x-twig</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="var">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(-?)(\}\})">
+        <bygroups>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="varnames"/>
+      </rule>
+    </state>
+    <state name="tag">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(-?)(%\})">
+        <bygroups>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="varnames"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="[^{]+">
+        <token type="Other"/>
+      </rule>
+      <rule pattern="\{\{">
+        <token type="CommentPreproc"/>
+        <push state="var"/>
+      </rule>
+      <rule pattern="\{\#.*?\#\}">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(\{%)(-?\s*)(raw)(\s*-?)(%\})(.*?)(\{%)(-?\s*)(endraw)(\s*-?)(%\})">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+          <token type="Other"/>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\{%)(-?\s*)(verbatim)(\s*-?)(%\})(.*?)(\{%)(-?\s*)(endverbatim)(\s*-?)(%\})">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+          <token type="Other"/>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="CommentPreproc"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\{%)(-?\s*)(filter)(\s+)((?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w-]|[^\x00-\x7f])*)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+        <push state="tag"/>
+      </rule>
+      <rule pattern="(\{%)(-?\s*)([a-zA-Z_]\w*)">
+        <bygroups>
+          <token type="CommentPreproc"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+        <push state="tag"/>
+      </rule>
+      <rule pattern="\{">
+        <token type="Other"/>
+      </rule>
+    </state>
+    <state name="varnames">
+      <rule pattern="(\|)(\s*)((?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w-]|[^\x00-\x7f])*)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(is)(\s+)(not)?(\s*)((?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w-]|[^\x00-\x7f])*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameFunction"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?i)(true|false|none|null)\b">
+        <token type="KeywordPseudo"/>
+      </rule>
+      <rule pattern="(in|not|and|b-and|or|b-or|b-xor|isif|elseif|else|importconstant|defined|divisibleby|empty|even|iterable|odd|sameasmatches|starts\s+with|ends\s+with)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(loop|block|parent)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w-]|[^\x00-\x7f])*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\.(?:[\\_a-z]|[^\x00-\x7f])(?:[\\\w-]|[^\x00-\x7f])*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="\.[0-9]+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern=":?&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern=":?&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="([{}()\[\]+\-*/,:~%]|\.\.|\?|:|\*\*|\/\/|!=|[&gt;&lt;=]=?)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[0-9](\.[0-9]*)?(eE[+-][0-9])?[flFLdD]?|0[xX][0-9a-fA-F]+[Ll]?">
+        <token type="LiteralNumber"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/typescript.xml

@@ -0,0 +1,295 @@
+<lexer>
+  <config>
+    <name>TypeScript</name>
+    <alias>ts</alias>
+    <alias>tsx</alias>
+    <alias>typescript</alias>
+    <filename>*.ts</filename>
+    <filename>*.tsx</filename>
+    <filename>*.mts</filename>
+    <filename>*.cts</filename>
+    <mime_type>text/x-typescript</mime_type>
+    <dot_all>true</dot_all>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="expression">
+      <rule pattern="{">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="jsx">
+      <rule pattern="(&lt;)(/?)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Punctuation"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;)([\w\.]+)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameTag"/>
+        </bygroups>
+        <push state="tag"/>
+      </rule>
+      <rule pattern="(&lt;)(/)([\w\.]*)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Punctuation"/>
+          <token type="NameTag"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="tag">
+      <rule>
+        <include state="jsx"/>
+      </rule>
+      <rule pattern=",">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <push state="interp"/>
+      </rule>
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="([\w-]+\s*)(=)(\s*)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="attr"/>
+      </rule>
+      <rule pattern="[{}]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[\w\.]+">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="(/?)(\s*)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^-]+">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="--&gt;">
+        <token type="Comment"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="-">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="commentsandwhitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&lt;!--">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="badregex">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="interp">
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\\">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="\\`">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="\$\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interp-inside"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="[^`\\$]+">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+    </state>
+    <state name="attr">
+      <rule pattern="{">
+        <token type="Punctuation"/>
+        <push state="expression"/>
+      </rule>
+      <rule pattern="&#34;.*?&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="&#39;.*?&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="interp-inside">
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="slashstartsregex">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/([gim]+\b|\B)">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?=/)">
+        <token type="Text"/>
+        <push state="badregex"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="jsx"/>
+      </rule>
+      <rule pattern="^(?=\s|/|&lt;!--)">
+        <token type="Text"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="\+\+|--|~|&amp;&amp;|\?|:|\|\||\\(?=\n)|(&lt;&lt;|&gt;&gt;&gt;?|==?|!=?|[-&lt;&gt;+*%&amp;|^/])=?">
+        <token type="Operator"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[{(\[;,]">
+        <token type="Punctuation"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[})\].]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(for|in|of|while|do|break|return|yield|continue|switch|case|default|if|else|throw|try|catch|finally|new|delete|typeof|instanceof|keyof|asserts|is|infer|await|void|this)\b">
+        <token type="Keyword"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(var|let|with|function)\b">
+        <token type="KeywordDeclaration"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(abstract|async|boolean|class|const|debugger|enum|export|extends|from|get|global|goto|implements|import|interface|namespace|package|private|protected|public|readonly|require|set|static|super|type)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(true|false|null|NaN|Infinity|undefined)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(Array|Boolean|Date|Error|Function|Math|Number|Object|Packages|RegExp|String|decodeURI|decodeURIComponent|encodeURI|encodeURIComponent|eval|isFinite|isNaN|parseFloat|parseInt|document|this|window)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="\b(module)(\s+)(&quot;[\w\./@]+&quot;)(\s+)">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+          <token type="NameOther"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="\b(string|bool|number|any|never|object|symbol|unique|unknown|bigint)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="\b(constructor|declare|interface|as)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(super)(\s*)(\([\w,?.$\s]+\s*\))">
+        <bygroups>
+          <token type="KeywordReserved"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="([a-zA-Z_?.$][\w?.$]*)\(\) \{">
+        <token type="NameOther"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="([\w?.$][\w?.$]*)(\s*:\s*)([\w?.$][\w?.$]*)">
+        <bygroups>
+          <token type="NameOther"/>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[$a-zA-Z_]\w*">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="[0-9][0-9]*\.[0-9]+([eE][0-9]+)?[fd]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <push state="interp"/>
+      </rule>
+      <rule pattern="@\w+">
+        <token type="KeywordDeclaration"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/typoscript.xml

@@ -0,0 +1,178 @@
+<lexer>
+  <config>
+    <name>TypoScript</name>
+    <alias>typoscript</alias>
+    <filename>*.ts</filename>
+    <mime_type>text/x-typoscript</mime_type>
+    <dot_all>true</dot_all>
+    <priority>0.1</priority>
+  </config>
+  <rules>
+    <state name="whitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="html">
+      <rule pattern="&lt;\S[^\n&gt;]*&gt;">
+        <using lexer="TypoScriptHTMLData"/>
+      </rule>
+      <rule pattern="&amp;[^;\n]*;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(_CSS_DEFAULT_STYLE)(\s*)(\()(?s)(.*(?=\n\)))">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="LiteralStringSymbol"/>
+          <using lexer="TypoScriptCSSData"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="operator">
+      <rule pattern="[&lt;&gt;,:=.*%+|]">
+        <token type="Operator"/>
+      </rule>
+    </state>
+    <state name="structure">
+      <rule pattern="[{}()\[\]\\]">
+        <token type="LiteralStringSymbol"/>
+      </rule>
+    </state>
+    <state name="constant">
+      <rule pattern="(\{)(\$)((?:[\w\-]+\.)*)([\w\-]+)(\})">
+        <bygroups>
+          <token type="LiteralStringSymbol"/>
+          <token type="Operator"/>
+          <token type="NameConstant"/>
+          <token type="NameConstant"/>
+          <token type="LiteralStringSymbol"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\{)([\w\-]+)(\s*:\s*)([\w\-]+)(\})">
+        <bygroups>
+          <token type="LiteralStringSymbol"/>
+          <token type="NameConstant"/>
+          <token type="Operator"/>
+          <token type="NameConstant"/>
+          <token type="LiteralStringSymbol"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(#[a-fA-F0-9]{6}\b|#[a-fA-F0-9]{3}\b)">
+        <token type="LiteralStringChar"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="(?&lt;!(#|\&#39;|&#34;))(?:#(?!(?:[a-fA-F0-9]{6}|[a-fA-F0-9]{3}))[^\n#]+|//[^\n]*)">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="/\*(?:(?!\*/).)*\*/">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(\s*#\s*\n)">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="comment"/>
+      </rule>
+      <rule>
+        <include state="constant"/>
+      </rule>
+      <rule>
+        <include state="html"/>
+      </rule>
+      <rule>
+        <include state="label"/>
+      </rule>
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule>
+        <include state="punctuation"/>
+      </rule>
+      <rule>
+        <include state="operator"/>
+      </rule>
+      <rule>
+        <include state="structure"/>
+      </rule>
+      <rule>
+        <include state="literal"/>
+      </rule>
+      <rule>
+        <include state="other"/>
+      </rule>
+    </state>
+    <state name="literal">
+      <rule pattern="0x[0-9A-Fa-f]+t?">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="(###\w+###)">
+        <token type="NameConstant"/>
+      </rule>
+    </state>
+    <state name="label">
+      <rule pattern="(EXT|FILE|LLL):[^}\n&#34;]*">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(?![^\w\-])([\w\-]+(?:/[\w\-]+)+/?)(\S*\n)">
+        <bygroups>
+          <token type="LiteralString"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="punctuation">
+      <rule pattern="[,.]">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+    <state name="other">
+      <rule pattern="[\w&#34;\-!/&amp;;]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(\[)(?i)(browser|compatVersion|dayofmonth|dayofweek|dayofyear|device|ELSE|END|GLOBAL|globalString|globalVar|hostname|hour|IP|language|loginUser|loginuser|minute|month|page|PIDinRootline|PIDupinRootline|system|treeLevel|useragent|userFunc|usergroup|version)([^\]]*)(\])">
+        <bygroups>
+          <token type="LiteralStringSymbol"/>
+          <token type="NameConstant"/>
+          <token type="Text"/>
+          <token type="LiteralStringSymbol"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?=[\w\-])(HTMLparser|HTMLparser_tags|addParams|cache|encapsLines|filelink|if|imageLinkWrap|imgResource|makelinks|numRows|numberFormat|parseFunc|replacement|round|select|split|stdWrap|strPad|tableStyle|tags|textStyle|typolink)(?![\w\-])">
+        <token type="NameFunction"/>
+      </rule>
+      <rule pattern="(?:(=?\s*&lt;?\s+|^\s*))(cObj|field|config|content|constants|FEData|file|frameset|includeLibs|lib|page|plugin|register|resources|sitemap|sitetitle|styles|temp|tt_[^:.\s]*|types|xmlnews|INCLUDE_TYPOSCRIPT|_CSS_DEFAULT_STYLE|_DEFAULT_PI_VARS|_LOCAL_LANG)(?![\w\-])">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="NameBuiltin"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(?=[\w\-])(CASE|CLEARGIF|COA|COA_INT|COBJ_ARRAY|COLUMNS|CONTENT|CTABLE|EDITPANEL|FILE|FILES|FLUIDTEMPLATE|FORM|HMENU|HRULER|HTML|IMAGE|IMGTEXT|IMG_RESOURCE|LOAD_REGISTER|MEDIA|MULTIMEDIA|OTABLE|PAGE|QTOBJECT|RECORDS|RESTORE_REGISTER|SEARCHRESULT|SVG|SWFOBJECT|TEMPLATE|TEXT|USER|USER_INT)(?![\w\-])">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(?=[\w\-])(ACTIFSUBRO|ACTIFSUB|ACTRO|ACT|CURIFSUBRO|CURIFSUB|CURRO|CUR|IFSUBRO|IFSUB|NO|SPC|USERDEF1RO|USERDEF1|USERDEF2RO|USERDEF2|USRRO|USR)">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(?=[\w\-])(GMENU_FOLDOUT|GMENU_LAYERS|GMENU|IMGMENUITEM|IMGMENU|JSMENUITEM|JSMENU|TMENUITEM|TMENU_LAYERS|TMENU)">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(?=[\w\-])(PHP_SCRIPT(_EXT|_INT)?)">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(?=[\w\-])(userFunc)(?![\w\-])">
+        <token type="NameFunction"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/typoscriptcssdata.xml

@@ -0,0 +1,52 @@
+<lexer>
+  <config>
+    <name>TypoScriptCssData</name>
+    <alias>typoscriptcssdata</alias>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(.*)(###\w+###)(.*)">
+        <bygroups>
+          <token type="LiteralString"/>
+          <token type="NameConstant"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\{)(\$)((?:[\w\-]+\.)*)([\w\-]+)(\})">
+        <bygroups>
+          <token type="LiteralStringSymbol"/>
+          <token type="Operator"/>
+          <token type="NameConstant"/>
+          <token type="NameConstant"/>
+          <token type="LiteralStringSymbol"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(.*)(\{)([\w\-]+)(\s*:\s*)([\w\-]+)(\})(.*)">
+        <bygroups>
+          <token type="LiteralString"/>
+          <token type="LiteralStringSymbol"/>
+          <token type="NameConstant"/>
+          <token type="Operator"/>
+          <token type="NameConstant"/>
+          <token type="LiteralStringSymbol"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="/\*(?:(?!\*/).)*\*/">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(?&lt;!(#|\&#39;|&#34;))(?:#(?!(?:[a-fA-F0-9]{6}|[a-fA-F0-9]{3}))[^\n#]+|//[^\n]*)">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="[&lt;&gt;,:=.*%+|]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[\w&#34;\-!/&amp;;(){}]+">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/typoscripthtmldata.xml

@@ -0,0 +1,52 @@
+<lexer>
+  <config>
+    <name>TypoScriptHtmlData</name>
+    <alias>typoscripthtmldata</alias>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(INCLUDE_TYPOSCRIPT)">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(EXT|FILE|LLL):[^}\n&#34;]*">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(.*)(###\w+###)(.*)">
+        <bygroups>
+          <token type="LiteralString"/>
+          <token type="NameConstant"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(\{)(\$)((?:[\w\-]+\.)*)([\w\-]+)(\})">
+        <bygroups>
+          <token type="LiteralStringSymbol"/>
+          <token type="Operator"/>
+          <token type="NameConstant"/>
+          <token type="NameConstant"/>
+          <token type="LiteralStringSymbol"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(.*)(\{)([\w\-]+)(\s*:\s*)([\w\-]+)(\})(.*)">
+        <bygroups>
+          <token type="LiteralString"/>
+          <token type="LiteralStringSymbol"/>
+          <token type="NameConstant"/>
+          <token type="Operator"/>
+          <token type="NameConstant"/>
+          <token type="LiteralStringSymbol"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[&lt;&gt;,:=.*%+|]">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[\w&#34;\-!/&amp;;(){}#]+">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/typst.xml

@@ -0,0 +1,108 @@
+
+<lexer>
+  <config>
+    <name>Typst</name>
+    <alias>typst</alias>
+    <filename>*.typ</filename>
+    <mime_type>text/x-typst</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule><include state="markup"/></rule>
+    </state>
+    <state name="into_code">
+      <rule pattern="(\#let|\#set|\#show)\b"><token type="KeywordDeclaration"/><push state="inline_code"/></rule>
+      <rule pattern="(\#import|\#include)\b"><token type="KeywordNamespace"/><push state="inline_code"/></rule>
+      <rule pattern="(\#if|\#for|\#while|\#export)\b"><token type="KeywordReserved"/><push state="inline_code"/></rule>
+      <rule pattern="#\{"><token type="Punctuation"/><push state="code"/></rule>
+      <rule pattern="#\("><token type="Punctuation"/><push state="code"/></rule>
+      <rule pattern="(#[a-zA-Z_][a-zA-Z0-9_-]*)(\[)"><bygroups><token type="NameFunction"/><token type="Punctuation"/></bygroups><push state="markup"/></rule>
+      <rule pattern="(#[a-zA-Z_][a-zA-Z0-9_-]*)(\()"><bygroups><token type="NameFunction"/><token type="Punctuation"/></bygroups><push state="code"/></rule>
+      <rule pattern="(\#true|\#false|\#none|\#auto)\b"><token type="KeywordConstant"/></rule>
+      <rule pattern="#[a-zA-Z_][a-zA-Z0-9_]*"><token type="NameVariable"/></rule>
+      <rule pattern="#0x[0-9a-fA-F]+"><token type="LiteralNumberHex"/></rule>
+      <rule pattern="#0b[01]+"><token type="LiteralNumberBin"/></rule>
+      <rule pattern="#0o[0-7]+"><token type="LiteralNumberOct"/></rule>
+      <rule pattern="#[0-9]+[\.e][0-9]+"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="#[0-9]+"><token type="LiteralNumberInteger"/></rule>
+    </state>
+    <state name="markup">
+      <rule><include state="comment"/></rule>
+      <rule pattern="^\s*=+.*$"><token type="GenericHeading"/></rule>
+      <rule pattern="[*][^*]*[*]"><token type="GenericStrong"/></rule>
+      <rule pattern="_[^_]*_"><token type="GenericEmph"/></rule>
+      <rule pattern="\$"><token type="Punctuation"/><push state="math"/></rule>
+      <rule pattern="`[^`]*`"><token type="LiteralStringBacktick"/></rule>
+      <rule pattern="^(\s*)(-)(\s+)"><bygroups><token type="TextWhitespace"/><token type="Punctuation"/><token type="TextWhitespace"/></bygroups></rule>
+      <rule pattern="^(\s*)(\+)(\s+)"><bygroups><token type="TextWhitespace"/><token type="Punctuation"/><token type="TextWhitespace"/></bygroups></rule>
+      <rule pattern="^(\s*)([0-9]+\.)"><bygroups><token type="TextWhitespace"/><token type="Punctuation"/></bygroups></rule>
+      <rule pattern="^(\s*)(/)(\s+)([^:]+)(:)"><bygroups><token type="TextWhitespace"/><token type="Punctuation"/><token type="TextWhitespace"/><token type="NameVariable"/><token type="Punctuation"/></bygroups></rule>
+      <rule pattern="&lt;[a-zA-Z_][a-zA-Z0-9_-]*&gt;"><token type="NameLabel"/></rule>
+      <rule pattern="@[a-zA-Z_][a-zA-Z0-9_-]*"><token type="NameLabel"/></rule>
+      <rule pattern="\\#"><token type="Text"/></rule>
+      <rule><include state="into_code"/></rule>
+      <rule pattern="```(?:.|\n)*?```"><token type="LiteralStringBacktick"/></rule>
+      <rule pattern="https?://[0-9a-zA-Z~/%#&amp;=\&#x27;,;.+?]*"><token type="GenericEmph"/></rule>
+      <rule pattern="(\-\-\-|\\|\~|\-\-|\.\.\.)\B"><token type="Punctuation"/></rule>
+      <rule pattern="\\\["><token type="Punctuation"/></rule>
+      <rule pattern="\\\]"><token type="Punctuation"/></rule>
+      <rule pattern="\["><token type="Punctuation"/><push/></rule>
+      <rule pattern="\]"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule pattern="[ \t]+\n?|\n"><token type="TextWhitespace"/></rule>
+      <rule pattern="((?![*_$`&lt;@\\#\] ]|https?://).)+"><token type="Text"/></rule>
+    </state>
+    <state name="math">
+      <rule><include state="comment"/></rule>
+      <rule pattern="(\\_|\\\^|\\\&amp;)"><token type="Text"/></rule>
+      <rule pattern="(_|\^|\&amp;|;)"><token type="Punctuation"/></rule>
+      <rule pattern="(\+|/|=|\[\||\|\]|\|\||\*|:=|::=|\.\.\.|&#x27;|\-|=:|!=|&gt;&gt;|&gt;=|&gt;&gt;&gt;|&lt;&lt;|&lt;=|&lt;&lt;&lt;|\-&gt;|\|\-&gt;|=&gt;|\|=&gt;|==&gt;|\-\-&gt;|\~\~&gt;|\~&gt;|&gt;\-&gt;|\-&gt;&gt;|&lt;\-|&lt;==|&lt;\-\-|&lt;\~\~|&lt;\~|&lt;\-&lt;|&lt;&lt;\-|&lt;\-&gt;|&lt;=&gt;|&lt;==&gt;|&lt;\-\-&gt;|&gt;|&lt;|\~|:|\|)"><token type="Operator"/></rule>
+      <rule pattern="\\"><token type="Punctuation"/></rule>
+      <rule pattern="\\\$"><token type="Punctuation"/></rule>
+      <rule pattern="\$"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule><include state="into_code"/></rule>
+      <rule pattern="([a-zA-Z][a-zA-Z0-9-]*)(\s*)(\()"><bygroups><token type="NameFunction"/><token type="TextWhitespace"/><token type="Punctuation"/></bygroups></rule>
+      <rule pattern="([a-zA-Z][a-zA-Z0-9-]*)(:)"><bygroups><token type="NameVariable"/><token type="Punctuation"/></bygroups></rule>
+      <rule pattern="([a-zA-Z][a-zA-Z0-9-]*)"><token type="NameVariable"/></rule>
+      <rule pattern="[0-9]+(\.[0-9]+)?"><token type="LiteralNumber"/></rule>
+      <rule pattern="\.{1,3}|\(|\)|,|\{|\}"><token type="Punctuation"/></rule>
+      <rule pattern="&quot;[^&quot;]*&quot;"><token type="LiteralStringDouble"/></rule>
+      <rule pattern="[ \t\n]+"><token type="TextWhitespace"/></rule>
+    </state>
+    <state name="comment">
+      <rule pattern="//.*$"><token type="CommentSingle"/></rule>
+      <rule pattern="/[*](.|\n)*?[*]/"><token type="CommentMultiline"/></rule>
+    </state>
+    <state name="code">
+      <rule><include state="comment"/></rule>
+      <rule pattern="\["><token type="Punctuation"/><push state="markup"/></rule>
+      <rule pattern="\(|\{"><token type="Punctuation"/><push state="code"/></rule>
+      <rule pattern="\)|\}"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule pattern="&quot;[^&quot;]*&quot;"><token type="LiteralStringDouble"/></rule>
+      <rule pattern=",|\.{1,2}"><token type="Punctuation"/></rule>
+      <rule pattern="="><token type="Operator"/></rule>
+      <rule pattern="(and|or|not)\b"><token type="OperatorWord"/></rule>
+      <rule pattern="=&gt;|&lt;=|==|!=|&gt;|&lt;|-=|\+=|\*=|/=|\+|-|\\|\*"><token type="Operator"/></rule>
+      <rule pattern="([a-zA-Z_][a-zA-Z0-9_-]*)(:)"><bygroups><token type="NameVariable"/><token type="Punctuation"/></bygroups></rule>
+      <rule pattern="([a-zA-Z_][a-zA-Z0-9_-]*)(\()"><bygroups><token type="NameFunction"/><token type="Punctuation"/></bygroups><push state="code"/></rule>
+      <rule pattern="(as|break|export|continue|else|for|if|in|return|while)\b"><token type="KeywordReserved"/></rule>
+      <rule pattern="(import|include)\b"><token type="KeywordNamespace"/></rule>
+      <rule pattern="(auto|none|true|false)\b"><token type="KeywordConstant"/></rule>
+      <rule pattern="([0-9.]+)(mm|pt|cm|in|em|fr|%)"><bygroups><token type="LiteralNumber"/><token type="KeywordReserved"/></bygroups></rule>
+      <rule pattern="0x[0-9a-fA-F]+"><token type="LiteralNumberHex"/></rule>
+      <rule pattern="0b[01]+"><token type="LiteralNumberBin"/></rule>
+      <rule pattern="0o[0-7]+"><token type="LiteralNumberOct"/></rule>
+      <rule pattern="[0-9]+[\.e][0-9]+"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="[0-9]+"><token type="LiteralNumberInteger"/></rule>
+      <rule pattern="(let|set|show)\b"><token type="KeywordDeclaration"/></rule>
+      <rule pattern="([a-zA-Z_][a-zA-Z0-9_-]*)"><token type="NameVariable"/></rule>
+      <rule pattern="[ \t\n]+"><token type="TextWhitespace"/></rule>
+      <rule pattern=":"><token type="Punctuation"/></rule>
+    </state>
+    <state name="inline_code">
+      <rule pattern=";\b"><token type="Punctuation"/><pop depth="1"/></rule>
+      <rule pattern="\n"><token type="TextWhitespace"/><pop depth="1"/></rule>
+      <rule><include state="code"/></rule>
+    </state>
+  </rules>
+</lexer>
+

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/ucode.xml

@@ -0,0 +1,147 @@
+<lexer>
+  <config>
+    <name>ucode</name>
+    <filename>*.uc</filename>
+    <mime_type>application/x.ucode</mime_type>
+    <mime_type>text/x.ucode</mime_type>
+    <dot_all>true</dot_all>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="interp">
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\\">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="\\`">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="\\[^`\\]">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="\$\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interp-inside"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="[^`\\$]+">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+    </state>
+    <state name="interp-inside">
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="commentsandwhitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&lt;!--">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+    <state name="slashstartsregex">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/([gimuy]+\b|\B)">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?=/)">
+        <token type="Text"/>
+        <push state="#pop" state="badregex"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="badregex">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\A#! ?/.*?\n">
+        <token type="CommentHashbang"/>
+      </rule>
+      <rule pattern="^(?=\s|/|&lt;!--)">
+        <token type="Text"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="\d+(\.\d*|[eE][+\-]?\d+)">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[bB][01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0[oO][0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9][0-9_]*">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\.\.\.|=&gt;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\+\+|--|~|&amp;&amp;|\?|:|\|\||\\(?=\n)|(&lt;&lt;|&gt;&gt;&gt;?|==?|!=?|[-&lt;&gt;+*%&amp;|^/])=?">
+        <token type="Operator"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[{(\[;,]">
+        <token type="Punctuation"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[})\].]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(import|export|from|as|for|in|while|break|return|continue|switch|case|default|if|else|try|catch|delete|this)\b">
+        <token type="Keyword"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(const|let|function)\b">
+        <token type="KeywordDeclaration"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(true|false|null|NaN|Infinity)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(?:[$_\p{L}\p{N}]|\\u[a-fA-F0-9]{4})(?:(?:[$\p{L}\p{N}]|\\u[a-fA-F0-9]{4}))*">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <push state="interp"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/v.xml 🔗

@@ -0,0 +1,355 @@
+<lexer>
+  <config>
+    <name>V</name>
+    <alias>v</alias>
+    <alias>vlang</alias>
+    <filename>*.v</filename>
+    <filename>*.vv</filename>
+    <filename>v.mod</filename>
+    <mime_type>text/x-v</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(?&lt;=module\s+\w[^\n]*\s+)(//[^\n]+\n)+(?=\n)">
+        <token type="LiteralStringDoc"/>
+      </rule>
+      <rule pattern="(// *)(\w+)([^\n]+\n)(?=(?://[^\n]*\n)* *(?:pub +)?(?:fn|struct|union|type|interface|enum|const) +\2\b)">
+        <bygroups>
+          <token type="LiteralStringDoc"/>
+          <token type="GenericEmph"/>
+          <token type="LiteralStringDoc"/>
+        </bygroups>
+        <push state="string-doc"/>
+      </rule>
+      <rule pattern="//[^\n]*\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*(?:(?:/\*(?:.|\n)*?\*/)*|.|\n)*\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="\b(import|module)\b">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern="\b(fn|struct|union|map|chan|type|interface|enum|const|mut|shared|pub|__global)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="\?">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(?&lt;=\)\s*)!">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="[ \t]*#include[^\n]+">
+        <using lexer="c"/>
+      </rule>
+      <rule pattern="[ \t]*#\w[^\n]*">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(sql)(\s+)(\w+)(\s+)({)([^}]*?)(})">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Name"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <using lexer="sql"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\$(?=\w)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(?&lt;=\$)(?:embed_file|pkgconfig|tmpl|env|compile_error|compile_warn)">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(asm)(\s+)(\w+)(\s*)({)([^}]*?)(})">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="KeywordType"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+          <using lexer="nasm"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b_(?:un)?likely_(?=\()">
+        <token type="NameFunctionMagic"/>
+      </rule>
+      <rule pattern="(?&lt;=\$if.+?(?:&amp;&amp;|\|\|)?)((no_segfault_handler|no_bounds_checking|little_endian|freestanding|no_backtrace|big_endian|cplusplus|dragonfly|prealloc|android|windows|no_main|solaris|darwin|clang|tinyc|glibc|mingw|haiku|macos|amd64|arm64|debug|linux|prod|msvc|test|hpux|mach|x32|x64|gcc|qnx|gnu|ios|mac|js))+">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="@(VMOD_FILE|VEXEROOT|VMODROOT|METHOD|STRUCT|COLUMN|VHASH|FILE|LINE|VEXE|MOD|FN)\b">
+        <token type="NameVariableMagic"/>
+      </rule>
+      <rule pattern="\b(?&lt;!@)(__offsetof|isreftype|continue|volatile|typeof|static|unsafe|return|assert|sizeof|atomic|select|match|break|defer|rlock|lock|else|goto|for|in|is|as|or|if|go)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(?&lt;!@)(none|true|false|si_s_code|si_g32_code|si_g64_code)\b">
+        <token type="KeywordConstant"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/v_shell.xml 🔗

@@ -0,0 +1,365 @@
+<lexer>
+  <config>
+    <name>V shell</name>
+    <alias>vsh</alias>
+    <alias>vshell</alias>
+    <filename>*.vsh</filename>
+    <mime_type>text/x-vsh</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="attribute">
+      <rule pattern="\]">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="&#39;">
+        <token type="Punctuation"/>
+        <push state="string-single"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="Punctuation"/>
+        <push state="string-double"/>
+      </rule>
+      <rule pattern="[;:]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(?&lt;=\[)if\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(?&lt;=: *)\w+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="[^\W\d]\w*">
+        <token type="NameAttribute"/>
+      </rule>
+    </state>
+    <state name="string-double">
+      <rule pattern="&#34;">
+        <token type="LiteralStringDouble"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="char-escape"/>
+      </rule>
+      <rule pattern="(\$)((?!\\){)">
+        <bygroups>
+          <token type="Operator"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <push state="string-curly-interpolation"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="Operator"/>
+        <push state="string-interpolation"/>
+      </rule>
+      <rule pattern="[^&#34;]+?">
+        <token type="LiteralStringDouble"/>
+      </rule>
+    </state>
+    <state name="char">
+      <rule pattern="`">
+        <token type="LiteralStringChar"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="char-escape"/>
+      </rule>
+      <rule pattern="[^\\]">
+        <token type="LiteralStringChar"/>
+      </rule>
+    </state>
+    <state name="string-doc">
+      <rule pattern="(// *)(#+ [^\n]+)(\n)">
+        <bygroups>
+          <token type="LiteralStringDoc"/>
+          <token type="GenericHeading"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="// *([=_*~-])\1{2,}\n">
+        <token type="LiteralStringDelimiter"/>
+      </rule>
+      <rule pattern="//[^\n]*\n">
+        <token type="LiteralStringDoc"/>
+      </rule>
+      <rule>
+        <mutators>
+          <pop depth="1"/>
+        </mutators>
+      </rule>
+    </state>
+    <state name="string-interpolation">
+      <rule pattern="(\.)?(@)?(?:([^\W\d]\w*)(\()([^)]*)(\))|([^\W\d]\w*))">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Operator"/>
+          <token type="NameFunction"/>
+          <token type="Punctuation"/>
+          <usingself state="root"/>
+          <token type="Punctuation"/>
+          <token type="NameVariable"/>
+        </bygroups>
+      </rule>
+      <rule>
+        <mutators>
+          <pop depth="1"/>
+        </mutators>
+      </rule>
+    </state>
+    <state name="string-curly-interpolation">
+      <rule pattern="}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="strings"/>
+      </rule>
+      <rule pattern="(:)( *?)([ 0&#39;#+-])?(?:(\.)?([0-9]+))?([fFgeEGxXobsd])?">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Punctuation"/>
+          <token type="LiteralNumber"/>
+          <token type="LiteralStringAffix"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^}&#34;&#39;:]+">
+        <usingself state="root"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="^#![^\n]*\n">
+        <token type="CommentHashbang"/>
+      </rule>
+      <rule pattern="\b(path_delimiter|path_separator|wd_at_startup|max_path_len|sys_write|sys_close|sys_mkdir|sys_creat|sys_open|s_iflnk|s_irusr|s_ifdir|s_ixoth|s_isuid|s_isgid|s_isvtx|s_iwoth|s_iwusr|s_ixusr|s_irgrp|s_iwgrp|s_ixgrp|s_iroth|s_ifmt|args)\b">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="\b(ExecutableNotFoundError|FileNotOpenedError|SizeOfTypeIs0Error|ProcessState|SeekMode|Command|Process|Signal|Result|Uname|File)\b">
+        <token type="NameBuiltin"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/vala.xml 🔗

@@ -0,0 +1,72 @@
+
+<lexer>
+  <config>
+    <name>Vala</name>
+    <alias>vala</alias>
+    <alias>vapi</alias>
+    <filename>*.vala</filename>
+    <filename>*.vapi</filename>
+    <mime_type>text/x-vala</mime_type>
+  </config>
+  <rules>
+    <state name="whitespace">
+      <rule pattern="^\s*#if\s+0"><token type="CommentPreproc"/><push state="if0"/></rule>
+      <rule pattern="\n"><token type="TextWhitespace"/></rule>
+      <rule pattern="\s+"><token type="TextWhitespace"/></rule>
+      <rule pattern="\\\n"><token type="Text"/></rule>
+      <rule pattern="//(\n|(.|\n)*?[^\\]\n)"><token type="CommentSingle"/></rule>
+      <rule pattern="/(\\\n)?[*](.|\n)*?[*](\\\n)?/"><token type="CommentMultiline"/></rule>
+    </state>
+    <state name="statements">
+      <rule pattern="[L@]?&quot;"><token type="LiteralString"/><push state="string"/></rule>
+      <rule pattern="L?&#x27;(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\&#x27;\n])&#x27;"><token type="LiteralStringChar"/></rule>
+      <rule pattern="(?s)&quot;&quot;&quot;.*?&quot;&quot;&quot;"><token type="LiteralString"/></rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[lL]?"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+[fF])[fF]?"><token type="LiteralNumberFloat"/></rule>
+      <rule pattern="0x[0-9a-fA-F]+[Ll]?"><token type="LiteralNumberHex"/></rule>
+      <rule pattern="0[0-7]+[Ll]?"><token type="LiteralNumberOct"/></rule>
+      <rule pattern="\d+[Ll]?"><token type="LiteralNumberInteger"/></rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]"><token type="Operator"/></rule>
+      <rule pattern="(\[)(Compact|Immutable|(?:Boolean|Simple)Type)(\])"><bygroups><token type="Punctuation"/><token type="NameDecorator"/><token type="Punctuation"/></bygroups></rule>
+      <rule pattern="(\[)(CCode|(?:Integer|Floating)Type)"><bygroups><token type="Punctuation"/><token type="NameDecorator"/></bygroups></rule>
+      <rule pattern="[()\[\],.]"><token type="Punctuation"/></rule>
+      <rule pattern="(as|base|break|case|catch|construct|continue|default|delete|do|else|enum|finally|for|foreach|get|if|in|is|lock|new|out|params|return|set|sizeof|switch|this|throw|try|typeof|while|yield)\b"><token type="Keyword"/></rule>
+      <rule pattern="(abstract|const|delegate|dynamic|ensures|extern|inline|internal|override|owned|private|protected|public|ref|requires|signal|static|throws|unowned|var|virtual|volatile|weak|yields)\b"><token type="KeywordDeclaration"/></rule>
+      <rule pattern="(namespace|using)(\s+)"><bygroups><token type="KeywordNamespace"/><token type="TextWhitespace"/></bygroups><push state="namespace"/></rule>
+      <rule pattern="(class|errordomain|interface|struct)(\s+)"><bygroups><token type="KeywordDeclaration"/><token type="TextWhitespace"/></bygroups><push state="class"/></rule>
+      <rule pattern="(\.)([a-zA-Z_]\w*)"><bygroups><token type="Operator"/><token type="NameAttribute"/></bygroups></rule>
+      <rule pattern="(void|bool|char|double|float|int|int8|int16|int32|int64|long|short|size_t|ssize_t|string|time_t|uchar|uint|uint8|uint16|uint32|uint64|ulong|unichar|ushort)\b"><token type="KeywordType"/></rule>
+      <rule pattern="(true|false|null)\b"><token type="NameBuiltin"/></rule>
+      <rule pattern="[a-zA-Z_]\w*"><token type="Name"/></rule>
+    </state>
+    <state name="root">
+      <rule><include state="whitespace"/></rule>
+      <rule><push state="statement"/></rule>
+    </state>
+    <state name="statement">
+      <rule><include state="whitespace"/></rule>
+      <rule><include state="statements"/></rule>
+      <rule pattern="[{}]"><token type="Punctuation"/></rule>
+      <rule pattern=";"><token type="Punctuation"/><pop depth="1"/></rule>
+    </state>
+    <state name="string">
+      <rule pattern="&quot;"><token type="LiteralString"/><pop depth="1"/></rule>
+      <rule pattern="\\([\\abfnrtv&quot;\&#x27;]|x[a-fA-F0-9]{2,4}|[0-7]{1,3})"><token type="LiteralStringEscape"/></rule>
+      <rule pattern="[^\\&quot;\n]+"><token type="LiteralString"/></rule>
+      <rule pattern="\\\n"><token type="LiteralString"/></rule>
+      <rule pattern="\\"><token type="LiteralString"/></rule>
+    </state>
+    <state name="if0">
+      <rule pattern="^\s*#if.*?(?&lt;!\\)\n"><token type="CommentPreproc"/><push/></rule>
+      <rule pattern="^\s*#el(?:se|if).*\n"><token type="CommentPreproc"/><pop depth="1"/></rule>
+      <rule pattern="^\s*#endif.*?(?&lt;!\\)\n"><token type="CommentPreproc"/><pop depth="1"/></rule>
+      <rule pattern=".*?\n"><token type="Comment"/></rule>
+    </state>
+    <state name="class">
+      <rule pattern="[a-zA-Z_]\w*"><token type="NameClass"/><pop depth="1"/></rule>
+    </state>
+    <state name="namespace">
+      <rule pattern="[a-zA-Z_][\w.]*"><token type="NameNamespace"/><pop depth="1"/></rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/vb_net.xml 🔗

@@ -0,0 +1,162 @@
+<lexer>
+  <config>
+    <name>VB.net</name>
+    <alias>vb.net</alias>
+    <alias>vbnet</alias>
+    <filename>*.vb</filename>
+    <filename>*.bas</filename>
+    <mime_type>text/x-vbnet</mime_type>
+    <mime_type>text/x-vba</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="dim">
+      <rule pattern="[_\w][\w]*">
+        <token type="NameVariable"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="funcname">
+      <rule pattern="[_\w][\w]*">
+        <token type="NameFunction"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="classname">
+      <rule pattern="[_\w][\w]*">
+        <token type="NameClass"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="namespace">
+      <rule pattern="[_\w][\w]*">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="\.">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="end">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(Function|Sub|Property|Class|Structure|Enum|Module|Namespace)\b">
+        <token type="Keyword"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="^\s*&lt;.*?&gt;">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="rem\b.*?\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="&#39;.*?\n">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="#If\s.*?\sThen|#ElseIf\s.*?\sThen|#Else|#End\s+If|#Const|#ExternalSource.*?\n|#End\s+ExternalSource|#Region.*?\n|#End\s+Region|#ExternalChecksum">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="[(){}!#,.:]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="Option\s+(Strict|Explicit|Compare)\s+(On|Off|Binary|Text)">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(NotOverridable|NotInheritable|RemoveHandler|MustOverride|Overridable|MustInherit|Implements|RaiseEvent|AddHandler|ParamArray|WithEvents|DirectCast|Overrides|Overloads|Protected|WriteOnly|Interface|Narrowing|Inherits|Widening|SyncLock|ReadOnly|Operator|Continue|Delegate|Optional|MyClass|Declare|CUShort|Handles|Default|Shadows|TryCast|Finally|Private|Nothing|Partial|CSByte|Select|Option|Return|Friend|Resume|ElseIf|MyBase|Shared|Single|Public|CShort|Static|Global|Catch|CType|Error|CUInt|Using|While|GoSub|False|CDate|Throw|Event|CChar|CULng|CBool|Erase|ByVal|ByRef|Alias|EndIf|CByte|ReDim|Stop|Call|Wend|Next|CLng|Loop|True|CDec|With|Then|GoTo|CObj|CSng|Exit|CStr|Else|Each|Case|CInt|Step|When|CDbl|Set|For|Let|Lib|Try|New|Not|Get|On|To|Do|If|Of|Me)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)End\b">
+        <token type="Keyword"/>
+        <push state="end"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(Dim|Const)\b">
+        <token type="Keyword"/>
+        <push state="dim"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(Function|Sub|Property)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="funcname"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(Class|Structure|Enum)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="classname"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(Module|Namespace|Imports)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="namespace"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(Boolean|Byte|Char|Date|Decimal|Double|Integer|Long|Object|SByte|Short|Single|String|Variant|UInteger|ULong|UShort)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(?&lt;!\.)(AddressOf|And|AndAlso|As|GetType|In|Is|IsNot|Like|Mod|Or|OrElse|TypeOf|Xor)\b">
+        <token type="OperatorWord"/>
+      </rule>
+      <rule pattern="&amp;=|[*]=|/=|\\=|\^=|\+=|-=|&lt;&lt;=|&gt;&gt;=|&lt;&lt;|&gt;&gt;|:=|&lt;=|&gt;=|&lt;&gt;|[-&amp;*/\\^+=&lt;&gt;\[\]]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="_\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[_\w][\w]*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="#.*?#">
+        <token type="LiteralDate"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\d*\.\d+)(F[+-]?[0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="\d+([SILDFR]|US|UI|UL)?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&amp;H[0-9a-f]+([SILDFR]|US|UI|UL)?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&amp;O[0-7]+([SILDFR]|US|UI|UL)?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;C?">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^&#34;]+">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/verilog.xml 🔗

@@ -0,0 +1,158 @@
+<lexer>
+  <config>
+    <name>verilog</name>
+    <alias>verilog</alias>
+    <alias>v</alias>
+    <filename>*.v</filename>
+    <mime_type>text/x-verilog</mime_type>
+    <ensure_nl>true</ensure_nl>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^\s*`define">
+        <token type="CommentPreproc"/>
+        <push state="macro"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="/(\\\n)?/(\n|(.|\n)*?[^\\]\n)">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*](.|\n)*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="[{}#@]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="L?&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="L?&#39;(\\.|\\[0-7]{1,3}|\\x[a-fA-F0-9]{1,2}|[^\\\&#39;\n])&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)[eE][+-]?\d+[lL]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+[fF])[fF]?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="([0-9]+)|(\&#39;h)[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="([0-9]+)|(\&#39;b)[01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="([0-9]+)|(\&#39;d)[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="([0-9]+)|(\&#39;o)[0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="\&#39;[01xz]">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\d+[Ll]?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="Error"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[()\[\],.;\&#39;]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="`[a-zA-Z_]\w*">
+        <token type="NameConstant"/>
+      </rule>
+      <rule pattern="^(\s*)(package)(\s+)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^(\s*)(import)(\s+)">
+        <bygroups>
+          <token type="Text"/>
+          <token type="KeywordNamespace"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="import"/>
+      </rule>
+      <rule pattern="(endprimitive|always_latch|macromodule|always_comb|endgenerate|endfunction|endpackage|endspecify|localparam|parameter|primitive|always_ff|automatic|specparam|endmodule|rtranif1|scalared|continue|deassign|endtable|defparam|function|strength|generate|pulldown|vectored|rtranif0|unsigned|specify|endcase|negedge|strong0|disable|default|endtask|posedge|strong1|typedef|tranif1|integer|forever|release|initial|tranif0|highz0|genvar|highz1|pullup|notif0|bufif1|bufif0|repeat|medium|return|struct|assign|signed|module|packed|string|output|notif1|always|final|casex|while|table|const|large|break|begin|input|pull0|pull1|inout|weak1|rcmos|weak0|casez|force|small|rnmos|rpmos|rtran|event|type|void|enum|wait|fork|join|else|edge|pmos|nand|cmos|nmos|task|xnor|case|tran|buf|ref|end|var|and|xor|for|nor|not|do|if|or)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="`(autoexpand_vectornets|nounconnected_drive|noexpand_vectornets|noremove_gatenames|unconnected_drive|noremove_netnames|expand_vectornets|remove_gatenames|default_nettype|remove_netnames|endcelldefine|noaccelerate|endprotected|accelerate|celldefine|endprotect|protected|timescale|resetall|protect|include|ifndef|ifdef|endif|elsif|undef|else)\b">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\$(shortrealtobits|bitstoshortreal|printtimescale|showvariables|countdrivers|reset_value|reset_count|getpattern|showscopes|realtobits|bitstoreal|monitoroff|timeformat|sreadmemh|monitoron|sreadmemb|fmonitor|showvars|fdisplay|realtime|readmemb|readmemh|monitor|history|fstrobe|display|restart|incsave|strobe|fwrite|finish|random|fclose|stime|nokey|fopen|floor|nolog|scale|scope|input|reset|write|rtoi|bits|list|stop|itor|time|save|key|log)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(woshortreal|shortint|realtime|longint|integer|supply0|supply1|triand|trireg|uwire|logic|trior|byte|wand|tri0|tri1|time|real|wire|reg|bit|int|tri)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*:(?!:)">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="\$?[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="string">
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\([\\abfnrtv&#34;\&#39;]|x[a-fA-F0-9]{2,4}|[0-7]{1,3})">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\">
+        <token type="LiteralString"/>
+      </rule>
+    </state>
+    <state name="macro">
+      <rule pattern="[^/\n]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="/[*](.|\n)*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="/">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="(?&lt;=\\)\n">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="CommentPreproc"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="import">
+      <rule pattern="[\w:]+\*?">
+        <token type="NameNamespace"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/vhdl.xml 🔗

@@ -0,0 +1,171 @@
+<lexer>
+  <config>
+    <name>VHDL</name>
+    <alias>vhdl</alias>
+    <filename>*.vhdl</filename>
+    <filename>*.vhd</filename>
+    <mime_type>text/x-vhdl</mime_type>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="--.*?$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="&#39;(U|X|0|1|Z|W|L|H|-)&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="[~!%^&amp;*+=|?:&lt;&gt;/-]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&#39;[a-z_]\w*">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="[()\[\],.;\&#39;]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="&#34;[^\n\\&#34;]*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(library)(\s+)([a-z_]\w*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(use)(\s+)(entity)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(use)(\s+)([a-z_][\w.]*\.)(all)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(use)(\s+)([a-z_][\w.]*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(std|ieee)(\.[a-z_]\w*)">
+        <bygroups>
+          <token type="NameNamespace"/>
+          <token type="NameNamespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(ieee|work|std)\b">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="(entity|component)(\s+)([a-z_]\w*)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(architecture|configuration)(\s+)([a-z_]\w*)(\s+)(of)(\s+)([a-z_]\w*)(\s+)(is)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="NameClass"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([a-z_]\w*)(:)(\s+)(process|for)">
+        <bygroups>
+          <token type="NameClass"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="Keyword"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(end)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="endblock"/>
+      </rule>
+      <rule>
+        <include state="types"/>
+      </rule>
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule>
+        <include state="numbers"/>
+      </rule>
+      <rule pattern="[a-z_]\w*">
+        <token type="Name"/>
+      </rule>
+    </state>
+    <state name="endblock">
+      <rule>
+        <include state="keywords"/>
+      </rule>
+      <rule pattern="[a-z_]\w*">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(\s+)">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=";">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="types">
+      <rule pattern="(std_ulogic_vector|file_open_status|std_logic_vector|severity_level|file_open_kind|delay_length|std_ulogic|bit_vector|character|std_logic|positive|unsigned|boolean|natural|integer|signed|string|time|bit)\b">
+        <token type="KeywordType"/>
+      </rule>
+    </state>
+    <state name="keywords">
+      <rule pattern="(configuration|architecture|disconnect|attribute|transport|postponed|procedure|component|function|variable|severity|constant|generate|register|inertial|package|library|guarded|linkage|generic|subtype|process|literal|record|entity|others|shared|signal|downto|access|assert|return|reject|buffer|impure|select|elsif|inout|until|label|range|group|units|begin|array|alias|after|block|while|null|next|file|when|wait|open|nand|exit|then|case|port|type|loop|else|pure|with|xnor|body|not|rem|bus|rol|ror|xor|abs|end|and|sla|sll|sra|srl|all|out|nor|mod|map|for|new|use|or|on|of|in|if|is|to)\b">
+        <token type="Keyword"/>
+      </rule>
+    </state>
+    <state name="numbers">
+      <rule pattern="\d{1,2}#[0-9a-f_]+#?">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="(\d+\.\d*|\.\d+|\d+)E[+-]?\d+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="X&#34;[0-9a-f_]+&#34;">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="O&#34;[0-7_]+&#34;">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="B&#34;[01_]+&#34;">
+        <token type="LiteralNumberBin"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/vhs.xml 🔗

@@ -0,0 +1,48 @@
+<lexer>
+  <config>
+    <name>VHS</name>
+    <alias>vhs</alias>
+    <alias>tape</alias>
+    <alias>cassette</alias>
+    <filename>*.tape</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="(Output)(\s+)(.*)(\s+)">
+        <bygroups>
+          <token type="Keyword"/>
+          <token type="TextWhitespace"/>
+          <token type="LiteralString"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="\b(Set|Type|Left|Right|Up|Down|Backspace|Enter|Tab|Space|Ctrl|Sleep|Hide|Show|Escape)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(FontFamily|FontSize|Framerate|Height|Width|Theme|Padding|TypingSpeed|PlaybackSpeed|LineHeight|Framerate|LetterSpacing)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="#.*(\S|$)">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="(?s)&#34;.*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="(?s)&#39;.*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="(@|\+)">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\d+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="(ms|s)">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/viml.xml 🔗

@@ -0,0 +1,85 @@
+<lexer>
+  <config>
+    <name>VimL</name>
+    <alias>vim</alias>
+    <filename>*.vim</filename>
+    <filename>.vimrc</filename>
+    <filename>.exrc</filename>
+    <filename>.gvimrc</filename>
+    <filename>_vimrc</filename>
+    <filename>_exrc</filename>
+    <filename>_gvimrc</filename>
+    <filename>vimrc</filename>
+    <filename>gvimrc</filename>
+    <mime_type>text/x-vim</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="^([ \t:]*)(py(?:t(?:h(?:o(?:n)?)?)?)?)([ \t]*)(&lt;&lt;)([ \t]*)(.*)((?:\n|.)*)(\6)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+          <token type="Text"/>
+          <using lexer="Python"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^([ \t:]*)(py(?:t(?:h(?:o(?:n)?)?)?)?)([ \t])(.*)">
+        <bygroups>
+          <usingself state="root"/>
+          <token type="Keyword"/>
+          <token type="Text"/>
+          <using lexer="Python"/>
+        </bygroups>
+      </rule>
+      <rule pattern="^\s*&#34;.*">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="[ \t]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="/(\\\\|\\/|[^\n/])*/">
+        <token type="LiteralStringRegex"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^\n&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(&#39;&#39;|[^\n&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="(?&lt;=\s)&#34;[^\-:.%#=*].*">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="-?\d+">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="#[0-9a-f]{6}">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="^:">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[()&lt;&gt;+=!|,~-]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\b(let|if|else|endif|elseif|fun|function|endfunction|set|map|autocmd|filetype|hi(ghlight)?|execute|syntax|colorscheme)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="\b(NONE|bold|italic|underline|dark|light)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="\b\w+\b">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/vue.xml 🔗

@@ -0,0 +1,307 @@
+<lexer>
+  <config>
+    <name>vue</name>
+    <alias>vue</alias>
+    <alias>vuejs</alias>
+    <filename>*.vue</filename>
+    <mime_type>text/x-vue</mime_type>
+    <mime_type>application/x-vue</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="interp-inside">
+      <rule pattern="\}">
+        <token type="LiteralStringInterpol"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="attr">
+      <rule pattern="{">
+        <token type="Punctuation"/>
+        <push state="expression"/>
+      </rule>
+      <rule pattern="&#34;.*?&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="&#39;.*?&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="interp">
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="\\\\">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="\\`">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="\$\{">
+        <token type="LiteralStringInterpol"/>
+        <push state="interp-inside"/>
+      </rule>
+      <rule pattern="\$">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+      <rule pattern="[^`\\$]+">
+        <token type="LiteralStringBacktick"/>
+      </rule>
+    </state>
+    <state name="tag">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="(-)([\w]+)">
+        <token type="NameTag"/>
+      </rule>
+      <rule pattern="(@[\w]+)(=&#34;[\S]+&#34;)(&gt;)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(@[\w]+)(=&#34;[\S]+&#34;)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(@[\S]+)(=&#34;[\S]+&#34;)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(:[\S]+)(=)(&#34;[\S]+&#34;)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="Operator"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(:)">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="(v-b-[\S]+)">
+        <token type="NameTag"/>
+      </rule>
+      <rule pattern="(v-[\w]+)(=&#34;.+)([:][\w]+)(=&#34;[\w]+&#34;)(&gt;)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="LiteralString"/>
+          <token type="NameTag"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(v-[\w]+)(=)(&#34;[\S ]+&#34;)(&gt;|\s)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="Operator"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(v-[\w]+)(&gt;)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(v-[\w]+)(=&#34;.+&#34;)(&gt;)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="LiteralString"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;)([\w]+)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameTag"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(&lt;)(/)([\w]+)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Punctuation"/>
+          <token type="NameTag"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([\w]+\s*)(=)(\s*)">
+        <bygroups>
+          <token type="NameAttribute"/>
+          <token type="Operator"/>
+          <token type="Text"/>
+        </bygroups>
+        <push state="attr"/>
+      </rule>
+      <rule pattern="[{}]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="[\w\.]+">
+        <token type="NameAttribute"/>
+      </rule>
+      <rule pattern="(/?)(\s*)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+          <token type="Punctuation"/>
+        </bygroups>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="slashstartsregex">
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="/(\\.|[^[/\\\n]|\[(\\.|[^\]\\\n])*])+/([gimuy]+\b|\B)">
+        <token type="LiteralStringRegex"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="(?=/)">
+        <token type="Text"/>
+        <push state="#pop" state="badregex"/>
+      </rule>
+      <rule>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule>
+        <include state="vue"/>
+      </rule>
+      <rule pattern="\A#! ?/.*?\n">
+        <token type="CommentHashbang"/>
+      </rule>
+      <rule pattern="^(?=\s|/|&lt;!--)">
+        <token type="Text"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule>
+        <include state="commentsandwhitespace"/>
+      </rule>
+      <rule pattern="(\.\d+|[0-9]+\.[0-9]*)([eE][-+]?[0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0[bB][01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0[oO][0-7]+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="\.\.\.|=&gt;">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="\+\+|--|~|&amp;&amp;|\?|:|\|\||\\(?=\n)|(&lt;&lt;|&gt;&gt;&gt;?|==?|!=?|[-&lt;&gt;+*%&amp;|^/])=?">
+        <token type="Operator"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[{(\[;,]">
+        <token type="Punctuation"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="[})\].]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(for|in|while|do|break|return|continue|switch|case|default|if|else|throw|try|catch|finally|new|delete|typeof|instanceof|void|yield|this|of)\b">
+        <token type="Keyword"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(var|let|with|function)\b">
+        <token type="KeywordDeclaration"/>
+        <push state="slashstartsregex"/>
+      </rule>
+      <rule pattern="(abstract|boolean|byte|char|class|const|debugger|double|enum|export|extends|final|float|goto|implements|import|int|interface|long|native|package|private|protected|public|short|static|super|synchronized|throws|transient|volatile)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(true|false|null|NaN|Infinity|undefined)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(Array|Boolean|Date|Error|Function|Math|netscape|Number|Object|Packages|RegExp|String|Promise|Proxy|sun|decodeURI|decodeURIComponent|encodeURI|encodeURIComponent|Error|eval|isFinite|isNaN|isSafeInteger|parseFloat|parseInt|document|this|window)\b">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(?:[$_\p{L}\p{N}]|\\u[a-fA-F0-9]{4})(?:(?:[$\p{L}\p{N}]|\\u[a-fA-F0-9]{4}))*">
+        <token type="NameOther"/>
+      </rule>
+      <rule pattern="&#34;(\\\\|\\&#34;|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(\\\\|\\&#39;|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="`">
+        <token type="LiteralStringBacktick"/>
+        <push state="interp"/>
+      </rule>
+    </state>
+    <state name="badregex">
+      <rule pattern="\n">
+        <token type="Text"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="vue">
+      <rule pattern="(&lt;)([\w-]+)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="NameTag"/>
+        </bygroups>
+        <push state="tag"/>
+      </rule>
+      <rule pattern="(&lt;)(/)([\w-]+)(&gt;)">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="Punctuation"/>
+          <token type="NameTag"/>
+          <token type="Punctuation"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="expression">
+      <rule pattern="{">
+        <token type="Punctuation"/>
+        <push/>
+      </rule>
+      <rule pattern="}">
+        <token type="Punctuation"/>
+        <pop depth="1"/>
+      </rule>
+      <rule>
+        <include state="root"/>
+      </rule>
+    </state>
+    <state name="commentsandwhitespace">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&lt;!--">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*.*?\*/">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/wdte.xml 🔗

@@ -0,0 +1,43 @@
+<lexer>
+  <config>
+    <name>WDTE</name>
+    <filename>*.wdte</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="#(.*?)\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="-?[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="-?[0-9]*\.[0-9]+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="&#34;[^&#34;]*&#34;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#39;[^&#39;]*&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="(default|switch|memo)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="{|}|;|-&gt;|=&gt;|\(|\)|\[|\]|\.">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[^{};()[\].\s]+">
+        <token type="NameVariable"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/webgpu_shading_language.xml 🔗

@@ -0,0 +1,142 @@
+<lexer>
+  <config>
+    <name>WebGPU Shading Language</name>
+    <alias>wgsl</alias>
+    <filename>*.wgsl</filename>
+    <mime_type>text/wgsl</mime_type>
+  </config>
+  <rules>
+    <state name="blankspace">
+      <rule pattern="[\u0020\u0009\u000a\u000b\u000c\u000d\u0085\u200e\u200f\u2028\u2029]+">
+        <token type="TextWhitespace"/>
+      </rule>
+    </state>
+    <state name="comments">
+      <rule pattern="//[^\u000a\u000b\u000c\u000d\u0085\u2028\u2029]*\u000d\u000a">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="//[^\u000a\u000b\u000c\u000d\u0085\u2028\u2029]*[\u000a\u000b\u000c\u000d\u0085\u2028\u2029]">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="block_comment"/>
+      </rule>
+    </state>
+    <state name="attribute">
+      <rule>
+        <include state="blankspace"/>
+      </rule>
+      <rule>
+        <include state="comments"/>
+      </rule>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/webvtt.xml 🔗

@@ -0,0 +1,283 @@
+<lexer>
+  <config>
+    <name>WebVTT</name>
+    <alias>vtt</alias>
+    <filename>*.vtt</filename>
+    <mime_type>text/vtt</mime_type>
+  </config>
+  <!--
+    The WebVTT spec defines a WebVTT line terminator as either CRLF, CR, or LF
+    (https://www.w3.org/TR/webvtt1/#webvtt-line-terminator). However, with this
+    definition it is unclear whether CRLF counts as one line terminator (CRLF)
+    or as two (CR followed by LF).
+
+    To work around this ambiguity, only CRLF and LF are treated as line terminators.
+    To our knowledge, only classic Mac OS used CR on its own as a line terminator,
+    so the lexer should still work for most files.
+  -->
+  <rules>
+    <!-- https://www.w3.org/TR/webvtt1/#webvtt-file-body -->
+    <state name="root">
+      <rule pattern="(\AWEBVTT)((?:[ \t][^\r\n]*)?(?:\r?\n){2,})">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Text" />
+        </bygroups>
+      </rule>
+      <rule pattern="(^REGION)([ \t]*$)">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Text" />
+        </bygroups>
+        <push state="region-settings-list" />
+      </rule>
+      <rule
+        pattern="(^STYLE)([ \t]*$)((?:(?!&#45;&#45;&gt;)[\s\S])*?)((?:\r?\n){2})">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Text" />
+          <using lexer="CSS" />
+          <token type="Text" />
+        </bygroups>
+      </rule>
+      <rule>
+        <include state="comment" />
+      </rule>
+      <rule
+        pattern="(?=((?![^\r\n]*&#45;&#45;&gt;)[^\r\n]*\r?\n)?(\d{2}:)?(?:[0-5][0-9]):(?:[0-5][0-9])\.\d{3}[ \t]+&#45;&#45;&gt;[ \t]+(\d{2}:)?(?:[0-5][0-9]):(?:[0-5][0-9])\.\d{3})"
+      >
+        <push state="cues" />
+      </rule>
+    </state>
+
+    <!-- https://www.w3.org/TR/webvtt1/#webvtt-region-settings-list -->
+    <state name="region-settings-list">
+      <rule pattern="(?: |\t|\r?\n(?!\r?\n))+">
+        <token type="Text" />
+      </rule>
+      <rule pattern="(?:\r?\n){2}">
+        <token type="Text" />
+        <pop depth="1" />
+      </rule>
+      <rule pattern="(id)(:)(?!&#45;&#45;&gt;)(\S+)">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Punctuation" />
+          <token type="Literal" />
+        </bygroups>
+      </rule>
+      <rule pattern="(width)(:)((?:[1-9]?\d|100)(?:\.\d+)?)(%)">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Punctuation" />
+          <token type="Literal" />
+          <token type="KeywordType" />
+        </bygroups>
+      </rule>
+      <rule pattern="(lines)(:)(\d+)">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Punctuation" />
+          <token type="Literal" />
+        </bygroups>
+      </rule>
+      <rule
+        pattern="(regionanchor|viewportanchor)(:)((?:[1-9]?\d|100)(?:\.\d+)?)(%)(,)((?:[1-9]?\d|100)(?:\.\d+)?)(%)">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Punctuation" />
+          <token type="Literal" />
+          <token type="KeywordType" />
+          <token type="Punctuation" />
+          <token type="Literal" />
+          <token type="KeywordType" />
+        </bygroups>
+      </rule>
+      <rule pattern="(scroll)(:)(up)">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Punctuation" />
+          <token type="KeywordConstant" />
+        </bygroups>
+      </rule>
+    </state>
+
+    <!-- https://www.w3.org/TR/webvtt1/#webvtt-comment-block -->
+    <state name="comment">
+      <rule
+        pattern="^NOTE( |\t|\r?\n)((?!&#45;&#45;&gt;)[\s\S])*?(?:(\r?\n){2}|\Z)">
+        <token type="Comment" />
+      </rule>
+    </state>
+
+    <!-- 
+      "Zero or more WebVTT cue blocks and WebVTT comment blocks separated from each other by one or more
+      WebVTT line terminators." (https://www.w3.org/TR/webvtt1/#file-structure)
+    -->
+    <state name="cues">
+      <rule
+        pattern="(?:((?!&#45;&#45;&gt;)[^\r\n]+)?(\r?\n))?((?:\d{2}:)?(?:[0-5][0-9]):(?:[0-5][0-9])\.\d{3})([ \t]+)(&#45;&#45;&gt;)([ \t]+)((?:\d{2}:)?(?:[0-5][0-9]):(?:[0-5][0-9])\.\d{3})([ \t]*)">
+        <bygroups>
+          <token type="Name" />
+          <token type="Text" />
+          <token type="LiteralDate" />
+          <token type="Text" />
+          <token type="Operator" />
+          <token type="Text" />
+          <token type="LiteralDate" />
+          <token type="Text" />
+        </bygroups>
+        <push state="cue-settings-list" />
+      </rule>
+      <rule>
+        <include state="comment" />
+      </rule>
+    </state>
+
+    <!-- https://www.w3.org/TR/webvtt1/#webvtt-cue-settings-list -->
+    <state name="cue-settings-list">
+      <rule pattern="[ \t]+">
+        <token type="Text" />
+      </rule>
+      <rule pattern="(vertical)(:)?(rl|lr)?">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Punctuation" />
+          <token type="KeywordConstant" />
+        </bygroups>
+      </rule>
+      <rule
+        pattern="(line)(:)?(?:(?:((?:[1-9]?\d|100)(?:\.\d+)?)(%)|(-?\d+))(?:(,)(start|center|end))?)?">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Punctuation" />
+          <token type="Literal" />
+          <token type="KeywordType" />
+          <token type="Literal" />
+          <token type="Punctuation" />
+          <token type="KeywordConstant" />
+        </bygroups>
+      </rule>
+      <rule
+        pattern="(position)(:)?(?:(?:((?:[1-9]?\d|100)(?:\.\d+)?)(%)|(-?\d+))(?:(,)(line-left|center|line-right))?)?">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Punctuation" />
+          <token type="Literal" />
+          <token type="KeywordType" />
+          <token type="Literal" />
+          <token type="Punctuation" />
+          <token type="KeywordConstant" />
+        </bygroups>
+      </rule>
+      <rule pattern="(size)(:)?(?:((?:[1-9]?\d|100)(?:\.\d+)?)(%))?">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Punctuation" />
+          <token type="Literal" />
+          <token type="KeywordType" />
+        </bygroups>
+      </rule>
+      <rule pattern="(align)(:)?(start|center|end|left|right)?">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Punctuation" />
+          <token type="KeywordConstant" />
+        </bygroups>
+      </rule>
+      <rule pattern="(region)(:)?((?![^\r\n]*&#45;&#45;&gt;(?=[ \t]+?))[^ \t\r\n]+)?">
+        <bygroups>
+          <token type="Keyword" />
+          <token type="Punctuation" />
+          <token type="Literal" />
+        </bygroups>
+      </rule>
+      <rule
+        pattern="(?=\r?\n)">
+        <push state="cue-payload" />
+      </rule>
+    </state>
+
+    <!-- https://www.w3.org/TR/webvtt1/#cue-payload -->
+    <state name="cue-payload">
+      <rule pattern="(\r?\n){2,}">
+        <token type="Text" />
+        <pop depth="2" />
+      </rule>
+      <rule pattern="[^&lt;&amp;]+?">
+        <token type="Text" />
+      </rule>
+      <rule pattern="&amp;(#\d+|#x[0-9A-Fa-f]+|[a-zA-Z0-9]+);">
+        <token type="Text" />
+      </rule>
+      <rule pattern="(?=&lt;)">
+        <token type="Text" />
+        <push state="cue-span-tag" />
+      </rule>
+    </state>
+    <state name="cue-span-tag">
+      <rule
+        pattern="&lt;(?=c|i|b|u|ruby|rt|v|lang|(?:\d{2}:)?(?:[0-5][0-9]):(?:[0-5][0-9])\.\d{3})">
+        <token type="Punctuation" />
+        <push state="cue-span-start-tag-name" />
+      </rule>
+      <rule pattern="(&lt;/)(c|i|b|u|ruby|rt|v|lang)">
+        <bygroups>
+          <token type="Punctuation" />
+          <token type="NameTag" />
+        </bygroups>
+      </rule>
+      <rule pattern="&gt;">
+        <token type="Punctuation" />
+        <pop depth="1" />
+      </rule>
+    </state>
+    <state name="cue-span-start-tag-name">
+      <rule pattern="(c|i|b|u|ruby|rt)|((?:\d{2}:)?(?:[0-5][0-9]):(?:[0-5][0-9])\.\d{3})">
+        <bygroups>
+          <token type="NameTag" />
+          <token type="LiteralDate" />
+        </bygroups>
+        <push state="cue-span-classes-without-annotations" />
+      </rule>
+      <rule pattern="v|lang">
+        <token type="NameTag" />
+        <push state="cue-span-classes-with-annotations" />
+      </rule>
+    </state>
+    <state name="cue-span-classes-without-annotations">
+      <rule>
+        <include state="cue-span-classes" />
+      </rule>
+      <rule pattern="(?=&gt;)">
+        <pop depth="2" />
+      </rule>
+    </state>
+    <state name="cue-span-classes-with-annotations">
+      <rule>
+        <include state="cue-span-classes" />
+      </rule>
+      <rule pattern="(?=[ \t])">
+        <push state="cue-span-start-tag-annotations" />
+      </rule>
+    </state>
+    <state name="cue-span-classes">
+      <rule pattern="(\.)([^ \t\n\r&amp;&lt;&gt;\.]+)">
+        <bygroups>
+          <token type="Punctuation" />
+          <token type="NameTag" />
+        </bygroups>
+      </rule>
+    </state>
+    <state name="cue-span-start-tag-annotations">
+      <rule
+        pattern="[ \t](?:[^\n\r&amp;&gt;]|&amp;(?:#\d+|#x[0-9A-Fa-f]+|[a-zA-Z0-9]+);)+">
+        <token type="Text" />
+      </rule>
+      <rule pattern="(?=&gt;)">
+        <token type="Text" />
+        <pop depth="3" />
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/whiley.xml 🔗

@@ -0,0 +1,57 @@
+<lexer>
+  <config>
+    <name>Whiley</name>
+    <alias>whiley</alias>
+    <filename>*.whiley</filename>
+    <mime_type>text/x-whiley</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\\\n">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="/[*](.|\n)*?[*]/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(function|import|from|method|property|type|with|variant)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="(assert|assume|all|break|case|continue|debug|default|do|else|ensures|export|fail|final|for|if|in|is|native|no|new|private|protected|public|return|requires|skip|some|switch|unsafe|where|while)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(true|false|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(bool|byte|int|void)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="0b(?:_?[01])+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0[xX][0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="(0|[1-9][0-9]*)">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="[+%=&gt;&lt;|^!?/\-*&amp;~:]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[{}()\[\],.;\|]">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/xml.xml 🔗

@@ -0,0 +1,95 @@
+<lexer>
+  <config>
+    <name>XML</name>
+    <alias>xml</alias>
+    <filename>*.xml</filename>
+    <filename>*.xsl</filename>
+    <filename>*.rss</filename>
+    <filename>*.xslt</filename>
+    <filename>*.xsd</filename>
+    <filename>*.wsdl</filename>
+    <filename>*.wsf</filename>
+    <filename>*.svg</filename>
+    <filename>*.csproj</filename>
+    <filename>*.vcxproj</filename>
+    <filename>*.fsproj</filename>
+    <mime_type>text/xml</mime_type>
+    <mime_type>application/xml</mime_type>
+    <mime_type>image/svg+xml</mime_type>
+    <mime_type>application/rss+xml</mime_type>
+    <mime_type>application/atom+xml</mime_type>
+    <dot_all>true</dot_all>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="[^&lt;&amp;]+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&amp;\S*?;">
+        <token type="NameEntity"/>
+      </rule>
+      <rule pattern="\&lt;\!\[CDATA\[.*?\]\]\&gt;">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="&lt;!--">
+        <token type="Comment"/>
+        <push state="comment"/>
+      </rule>
+      <rule pattern="&lt;\?.*?\?&gt;">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="&lt;![^&gt;]*&gt;">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="&lt;\s*[\w:.-]+">
+        <token type="NameTag"/>
+        <push state="tag"/>
+      </rule>
+      <rule pattern="&lt;\s*/\s*[\w:.-]+\s*&gt;">
+        <token type="NameTag"/>
+      </rule>
+    </state>
+    <state name="comment">
+      <rule pattern="[^-]+">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="--&gt;">
+        <token type="Comment"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="-">
+        <token type="Comment"/>
+      </rule>
+    </state>
+    <state name="tag">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="[\w.:-]+\s*=">
+        <token type="NameAttribute"/>
+        <push state="attr"/>
+      </rule>
+      <rule pattern="/?\s*&gt;">
+        <token type="NameTag"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="attr">
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="&#34;.*?&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="&#39;.*?&#39;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[^\s&gt;]+">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/xorg.xml 🔗

@@ -0,0 +1,35 @@
+<lexer>
+  <config>
+    <name>Xorg</name>
+    <alias>xorg.conf</alias>
+    <filename>xorg.conf</filename>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="#.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="((|Sub)Section)(\s+)(&#34;\w+&#34;)">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="LiteralStringEscape"/>
+          <token type="TextWhitespace"/>
+          <token type="LiteralStringEscape"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(End(|Sub)Section)">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern="(\w+)(\s+)([^\n#]+)">
+        <bygroups>
+          <token type="NameKeyword"/>
+          <token type="TextWhitespace"/>
+          <token type="LiteralString"/>
+        </bygroups>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/yaml.xml 🔗

@@ -0,0 +1,122 @@
+<lexer>
+  <config>
+    <name>YAML</name>
+    <alias>yaml</alias>
+    <filename>*.yaml</filename>
+    <filename>*.yml</filename>
+    <mime_type>text/x-yaml</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule>
+        <include state="whitespace"/>
+      </rule>
+      <rule pattern="^---">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="^\.\.\.">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="[\n?]?\s*- ">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="#.*$">
+        <token type="Comment"/>
+      </rule>
+      <rule pattern="!![^\s]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="&amp;[^\s]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\*[^\s]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="^%include\s+[^\n\r]+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule>
+        <include state="key"/>
+      </rule>
+      <rule>
+        <include state="value"/>
+      </rule>
+      <rule pattern="[?:,\[\]]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern=".">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="value">
+      <rule pattern="([&gt;|](?:[+-])?)(\n(^ {1,})(?:(?:.*\n*(?:^\3 *).*)+|.*))">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="LiteralStringDoc"/>
+          <token type="Ignore"/>
+        </bygroups>
+      </rule>
+      <rule pattern="(false|False|FALSE|true|True|TRUE|null|Off|off|yes|Yes|YES|OFF|On|ON|no|No|on|NO|n|N|Y|y)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="&#34;(?:\\.|[^&#34;])*&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(?:\\.|[^&#39;])*&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="\d\d\d\d-\d\d-\d\d([T ]\d\d:\d\d:\d\d(\.\d+)?(Z|\s+[-+]\d+)?)?">
+        <token type="LiteralDate"/>
+      </rule>
+      <rule pattern="\b[+\-]?(0x[\da-f]+|0o[0-7]+|(\d+\.?\d*|\.?\d+)(e[\+\-]?\d+)?|\.inf|\.nan)\b">
+        <token type="LiteralNumber"/>
+      </rule>
+      <rule pattern="([^\{\}\[\]\?,\:\!\-\*&amp;\@].*)( )+(#.*)">
+        <bygroups>
+          <token type="Literal"/>
+          <token type="TextWhitespace"/>
+          <token type="Comment"/>
+        </bygroups>
+      </rule>
+      <rule pattern="[^\{\}\[\]\?,\:\!\-\*&amp;\@].*">
+        <token type="Literal"/>
+      </rule>
+    </state>
+    <state name="key">
+      <rule pattern="&#34;[^&#34;\n].*&#34;: ">
+        <token type="NameTag"/>
+      </rule>
+      <rule pattern="(-)( )([^&#34;\n{]*)(:)( )">
+        <bygroups>
+          <token type="Punctuation"/>
+          <token type="TextWhitespace"/>
+          <token type="NameTag"/>
+          <token type="Punctuation"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([^&#34;\n{]*)(:)( )">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="Punctuation"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([^&#34;\n{]*)(:)(\n)">
+        <bygroups>
+          <token type="NameTag"/>
+          <token type="Punctuation"/>
+          <token type="TextWhitespace"/>
+        </bygroups>
+      </rule>
+    </state>
+    <state name="whitespace">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="\n+">
+        <token type="TextWhitespace"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/yang.xml 🔗

@@ -0,0 +1,99 @@
+<lexer>
+  <config>
+    <name>YANG</name>
+    <alias>yang</alias>
+    <filename>*.yang</filename>
+    <mime_type>application/yang</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="[\{\}\;]+">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="(?&lt;![\-\w])(and|or|not|\+|\.)(?![\-\w])">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="&#34;(?:\\&#34;|[^&#34;])*?&#34;">
+        <token type="LiteralStringDouble"/>
+      </rule>
+      <rule pattern="&#39;(?:\\&#39;|[^&#39;])*?&#39;">
+        <token type="LiteralStringSingle"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="comments"/>
+      </rule>
+      <rule pattern="//.*?$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(?:^|(?&lt;=[\s{};]))([\w.-]+)(:)([\w.-]+)(?=[\s{};])">
+        <bygroups>
+          <token type="KeywordNamespace"/>
+          <token type="Punctuation"/>
+          <token type="Text"/>
+        </bygroups>
+      </rule>
+      <rule pattern="([0-9]{4}\-[0-9]{2}\-[0-9]{2})(?=[\s\{\}\;])">
+        <token type="LiteralDate"/>
+      </rule>
+      <rule pattern="([0-9]+\.[0-9]+)(?=[\s\{\}\;])">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="([0-9]+)(?=[\s\{\}\;])">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="(submodule|module)(?=[^\w\-\:])">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(yang-version|belongs-to|namespace|prefix)(?=[^\w\-\:])">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(organization|description|reference|revision|contact)(?=[^\w\-\:])">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(revision-date|include|import)(?=[^\w\-\:])">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(notification|if-feature|deviation|extension|identity|argument|grouping|typedef|feature|augment|output|action|input|rpc)(?=[^\w\-\:])">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(leaf-list|container|presence|anydata|deviate|choice|config|anyxml|refine|leaf|must|list|case|uses|when)(?=[^\w\-\:])">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(require-instance|fraction-digits|error-app-tag|error-message|min-elements|max-elements|yin-element|ordered-by|position|modifier|default|pattern|length|status|units|value|range|type|path|enum|base|bit)(?=[^\w\-\:])">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(mandatory|unique|key)(?=[^\w\-\:])">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(not-supported|invert-match|deprecated|unbounded|obsolete|current|replace|delete|false|true|user|min|max|add)(?=[^\w\-\:])">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="(instance-identifier|identityref|enumeration|decimal64|boolean|leafref|uint64|uint32|string|binary|uint16|int32|int64|int16|empty|uint8|union|int8|bits)(?=[^\w\-\:])">
+        <token type="NameClass"/>
+      </rule>
+      <rule pattern="[^;{}\s\&#39;\&#34;]+">
+        <token type="Text"/>
+      </rule>
+    </state>
+    <state name="comments">
+      <rule pattern="[^*/]">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/\*">
+        <token type="CommentMultiline"/>
+        <push state="comments"/>
+      </rule>
+      <rule pattern="\*/">
+        <token type="CommentMultiline"/>
+        <pop depth="1"/>
+      </rule>
+      <rule pattern="[*/]">
+        <token type="CommentMultiline"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/z80_assembly.xml 🔗

@@ -0,0 +1,74 @@
+<lexer>
+  <config>
+    <name>Z80 Assembly</name>
+    <alias>z80</alias>
+    <filename>*.z80</filename>
+    <filename>*.asm</filename>
+    <case_insensitive>true</case_insensitive>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="[^&#34;\\]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\.">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern=";.*?$">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="^[.\w]+:">
+        <token type="NameLabel"/>
+      </rule>
+      <rule pattern="((0x)|\$)[0-9a-fA-F]+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="[0-9][0-9a-fA-F]+h">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="((0b)|%)[01]+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="-?[0-9]+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="&#39;\\?.&#39;">
+        <token type="LiteralStringChar"/>
+      </rule>
+      <rule pattern="[,=()\\]">
+        <token type="Punctuation"/>
+      </rule>
+      <rule pattern="^\s*#\w+">
+        <token type="CommentPreproc"/>
+      </rule>
+      <rule pattern="\.(db|dw|end|org|byte|word|fill|block|addinstr|echo|error|list|nolist|equ|show|option|seek)">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="(ex|exx|ld|ldd|lddr|ldi|ldir|pop|push|adc|add|cp|cpd|cpdr|cpi|cpir|cpl|daa|dec|inc|neg|sbc|sub|and|bit|ccf|or|res|scf|set|xor|rl|rla|rlc|rlca|rld|rr|rra|rrc|rrca|rrd|sla|sra|srl|call|djnz|jp|jr|ret|rst|nop|reti|retn|di|ei|halt|im|in|ind|indr|ini|inir|out|outd|otdr|outi|otir)">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(z|nz|c|nc|po|pe|p|m)">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="[+-/*~\^&amp;|]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="\w+">
+        <token type="Text"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="Text"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/zed.xml 🔗

@@ -0,0 +1,51 @@
+<lexer>
+  <config>
+    <name>Zed</name>
+    <alias>zed</alias>
+    <filename>*.zed</filename>
+    <mime_type>text/zed</mime_type>
+  </config>
+  <rules>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*?[*](\\\n)?/">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="/(\\\n)?[*][\w\W]*">
+        <token type="CommentMultiline"/>
+      </rule>
+      <rule pattern="(definition)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(relation)\b">
+        <token type="KeywordNamespace"/>
+      </rule>
+      <rule pattern="(permission)\b">
+        <token type="KeywordDeclaration"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*/">
+        <token type="NameNamespace"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="#[a-zA-Z_]\w*">
+        <token type="NameVariable"/>
+      </rule>
+      <rule pattern="[+%=&gt;&lt;|^!?/\-*&amp;~:]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[{}()\[\],.;]">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/embedded/zig.xml 🔗

@@ -0,0 +1,112 @@
+<lexer>
+  <config>
+    <name>Zig</name>
+    <alias>zig</alias>
+    <filename>*.zig</filename>
+    <mime_type>text/zig</mime_type>
+  </config>
+  <rules>
+    <state name="string">
+      <rule pattern="\\(x[a-fA-F0-9]{2}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{6}|[nr\\t\&#39;&#34;])">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="[^\\&#34;\n]+">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="&#34;">
+        <token type="LiteralString"/>
+        <pop depth="1"/>
+      </rule>
+    </state>
+    <state name="root">
+      <rule pattern="\n">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="\s+">
+        <token type="TextWhitespace"/>
+      </rule>
+      <rule pattern="//.*?\n">
+        <token type="CommentSingle"/>
+      </rule>
+      <rule pattern="(unreachable|continue|errdefer|suspend|return|resume|cancel|break|catch|async|await|defer|asm|try)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(threadlocal|linksection|allowzero|stdcallcc|volatile|comptime|noalias|nakedcc|inline|export|packed|extern|align|const|pub|var)\b">
+        <token type="KeywordReserved"/>
+      </rule>
+      <rule pattern="(struct|union|error|enum)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(while|for)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(comptime_float|comptime_int|c_longdouble|c_ulonglong|c_longlong|c_voidi8|noreturn|c_ushort|anyerror|promise|c_short|c_ulong|c_uint|c_long|isize|c_int|usize|void|f128|i128|type|bool|u128|u16|f64|f32|u64|i16|f16|i32|u32|i64|u8|i0|u0)\b">
+        <token type="KeywordType"/>
+      </rule>
+      <rule pattern="(undefined|false|true|null)\b">
+        <token type="KeywordConstant"/>
+      </rule>
+      <rule pattern="(switch|orelse|else|and|if|or)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="(usingnamespace|test|fn)\b">
+        <token type="Keyword"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+\.[0-9a-fA-F]+([pP][\-+]?[0-9a-fA-F]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0x[0-9a-fA-F]+\.?[pP][\-+]?[0-9a-fA-F]+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9]+\.[0-9]+([eE][-+]?[0-9]+)?">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="[0-9]+\.?[eE][-+]?[0-9]+">
+        <token type="LiteralNumberFloat"/>
+      </rule>
+      <rule pattern="0b(?:_?[01])+">
+        <token type="LiteralNumberBin"/>
+      </rule>
+      <rule pattern="0o(?:_?[0-7])+">
+        <token type="LiteralNumberOct"/>
+      </rule>
+      <rule pattern="0x(?:_?[0-9a-fA-F])+">
+        <token type="LiteralNumberHex"/>
+      </rule>
+      <rule pattern="(?:_?[0-9])+">
+        <token type="LiteralNumberInteger"/>
+      </rule>
+      <rule pattern="@[a-zA-Z_]\w*">
+        <token type="NameBuiltin"/>
+      </rule>
+      <rule pattern="[a-zA-Z_]\w*">
+        <token type="Name"/>
+      </rule>
+      <rule pattern="\&#39;\\\&#39;\&#39;">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\&#39;\\(|x[a-fA-F0-9]{2}|u[a-fA-F0-9]{4}|U[a-fA-F0-9]{6}|[nr\\t\&#39;&#34;])\&#39;">
+        <token type="LiteralStringEscape"/>
+      </rule>
+      <rule pattern="\&#39;[^\\\&#39;]\&#39;">
+        <token type="LiteralString"/>
+      </rule>
+      <rule pattern="\\\\[^\n]*">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+      <rule pattern="c\\\\[^\n]*">
+        <token type="LiteralStringHeredoc"/>
+      </rule>
+      <rule pattern="c?&#34;">
+        <token type="LiteralString"/>
+        <push state="string"/>
+      </rule>
+      <rule pattern="[+%=&gt;&lt;|^!?/\-*&amp;~:]">
+        <token type="Operator"/>
+      </rule>
+      <rule pattern="[{}()\[\],.;]">
+        <token type="Punctuation"/>
+      </rule>
+    </state>
+  </rules>
+</lexer>

vendor/github.com/alecthomas/chroma/v2/lexers/genshi.go 🔗

@@ -0,0 +1,118 @@
+package lexers
+
+import (
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+// Genshi Text lexer.
+var GenshiText = Register(MustNewLexer(
+	&Config{
+		Name:      "Genshi Text",
+		Aliases:   []string{"genshitext"},
+		Filenames: []string{},
+		MimeTypes: []string{"application/x-genshi-text", "text/x-genshi"},
+	},
+	genshiTextRules,
+))
+
+func genshiTextRules() Rules {
+	return Rules{
+		"root": {
+			{`[^#$\s]+`, Other, nil},
+			{`^(\s*)(##.*)$`, ByGroups(Text, Comment), nil},
+			{`^(\s*)(#)`, ByGroups(Text, CommentPreproc), Push("directive")},
+			Include("variable"),
+			{`[#$\s]`, Other, nil},
+		},
+		"directive": {
+			{`\n`, Text, Pop(1)},
+			{`(?:def|for|if)\s+.*`, Using("Python"), Pop(1)},
+			{`(choose|when|with)([^\S\n]+)(.*)`, ByGroups(Keyword, Text, Using("Python")), Pop(1)},
+			{`(choose|otherwise)\b`, Keyword, Pop(1)},
+			{`(end\w*)([^\S\n]*)(.*)`, ByGroups(Keyword, Text, Comment), Pop(1)},
+		},
+		"variable": {
+			{`(?<!\$)(\$\{)(.+?)(\})`, ByGroups(CommentPreproc, Using("Python"), CommentPreproc), nil},
+			{`(?<!\$)(\$)([a-zA-Z_][\w.]*)`, NameVariable, nil},
+		},
+	}
+}
+
+// Html+Genshi lexer.
+var GenshiHTMLTemplate = Register(MustNewLexer(
+	&Config{
+		Name:         "Genshi HTML",
+		Aliases:      []string{"html+genshi", "html+kid"},
+		Filenames:    []string{},
+		MimeTypes:    []string{"text/html+genshi"},
+		NotMultiline: true,
+		DotAll:       true,
+	},
+	genshiMarkupRules,
+))
+
+// Genshi lexer.
+var Genshi = Register(MustNewLexer(
+	&Config{
+		Name:         "Genshi",
+		Aliases:      []string{"genshi", "kid", "xml+genshi", "xml+kid"},
+		Filenames:    []string{"*.kid"},
+		MimeTypes:    []string{"application/x-genshi", "application/x-kid"},
+		NotMultiline: true,
+		DotAll:       true,
+	},
+	genshiMarkupRules,
+))
+
+func genshiMarkupRules() Rules {
+	return Rules{
+		"root": {
+			{`[^<$]+`, Other, nil},
+			{`(<\?python)(.*?)(\?>)`, ByGroups(CommentPreproc, Using("Python"), CommentPreproc), nil},
+			{`<\s*(script|style)\s*.*?>.*?<\s*/\1\s*>`, Other, nil},
+			{`<\s*py:[a-zA-Z0-9]+`, NameTag, Push("pytag")},
+			{`<\s*[a-zA-Z0-9:.]+`, NameTag, Push("tag")},
+			Include("variable"),
+			{`[<$]`, Other, nil},
+		},
+		"pytag": {
+			{`\s+`, Text, nil},
+			{`[\w:-]+\s*=`, NameAttribute, Push("pyattr")},
+			{`/?\s*>`, NameTag, Pop(1)},
+		},
+		"pyattr": {
+			{`(")(.*?)(")`, ByGroups(LiteralString, Using("Python"), LiteralString), Pop(1)},
+			{`(')(.*?)(')`, ByGroups(LiteralString, Using("Python"), LiteralString), Pop(1)},
+			{`[^\s>]+`, LiteralString, Pop(1)},
+		},
+		"tag": {
+			{`\s+`, Text, nil},
+			{`py:[\w-]+\s*=`, NameAttribute, Push("pyattr")},
+			{`[\w:-]+\s*=`, NameAttribute, Push("attr")},
+			{`/?\s*>`, NameTag, Pop(1)},
+		},
+		"attr": {
+			{`"`, LiteralString, Push("attr-dstring")},
+			{`'`, LiteralString, Push("attr-sstring")},
+			{`[^\s>]*`, LiteralString, Pop(1)},
+		},
+		"attr-dstring": {
+			{`"`, LiteralString, Pop(1)},
+			Include("strings"),
+			{`'`, LiteralString, nil},
+		},
+		"attr-sstring": {
+			{`'`, LiteralString, Pop(1)},
+			Include("strings"),
+			{`'`, LiteralString, nil},
+		},
+		"strings": {
+			{`[^"'$]+`, LiteralString, nil},
+			Include("variable"),
+		},
+		"variable": {
+			{`(?<!\$)(\$\{)(.+?)(\})`, ByGroups(CommentPreproc, Using("Python"), CommentPreproc), nil},
+			{`(?<!\$)(\$)([a-zA-Z_][\w\.]*)`, NameVariable, nil},
+		},
+	}
+}

vendor/github.com/alecthomas/chroma/v2/lexers/go.go 🔗

@@ -0,0 +1,81 @@
+package lexers
+
+import (
+	"strings"
+
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+// Go lexer.
+var Go = Register(MustNewLexer(
+	&Config{
+		Name:      "Go",
+		Aliases:   []string{"go", "golang"},
+		Filenames: []string{"*.go"},
+		MimeTypes: []string{"text/x-gosrc"},
+	},
+	goRules,
+).SetAnalyser(func(text string) float32 {
+	if strings.Contains(text, "fmt.") && strings.Contains(text, "package ") {
+		return 0.5
+	}
+	if strings.Contains(text, "package ") {
+		return 0.1
+	}
+	return 0.0
+}))
+
+func goRules() Rules {
+	return Rules{
+		"root": {
+			{`\n`, Text, nil},
+			{`\s+`, Text, nil},
+			{`\\\n`, Text, nil},
+			{`//[^\n\r]*`, CommentSingle, nil},
+			{`/(\\\n)?[*](.|\n)*?[*](\\\n)?/`, CommentMultiline, nil},
+			{`(import|package)\b`, KeywordNamespace, nil},
+			{`(var|func|struct|map|chan|type|interface|const)\b`, KeywordDeclaration, nil},
+			{Words(``, `\b`, `break`, `default`, `select`, `case`, `defer`, `go`, `else`, `goto`, `switch`, `fallthrough`, `if`, `range`, `continue`, `for`, `return`), Keyword, nil},
+			{`(true|false|iota|nil)\b`, KeywordConstant, nil},
+			{Words(``, `\b(\()`, `uint`, `uint8`, `uint16`, `uint32`, `uint64`, `int`, `int8`, `int16`, `int32`, `int64`, `float`, `float32`, `float64`, `complex64`, `complex128`, `byte`, `rune`, `string`, `bool`, `error`, `uintptr`, `print`, `println`, `panic`, `recover`, `close`, `complex`, `real`, `imag`, `len`, `cap`, `append`, `copy`, `delete`, `new`, `make`, `clear`, `min`, `max`), ByGroups(NameBuiltin, Punctuation), nil},
+			{Words(``, `\b`, `uint`, `uint8`, `uint16`, `uint32`, `uint64`, `int`, `int8`, `int16`, `int32`, `int64`, `float`, `float32`, `float64`, `complex64`, `complex128`, `byte`, `rune`, `string`, `bool`, `error`, `uintptr`, `any`), KeywordType, nil},
+			{`\d+i`, LiteralNumber, nil},
+			{`\d+\.\d*([Ee][-+]\d+)?i`, LiteralNumber, nil},
+			{`\.\d+([Ee][-+]\d+)?i`, LiteralNumber, nil},
+			{`\d+[Ee][-+]\d+i`, LiteralNumber, nil},
+			{`\d+(\.\d+[eE][+\-]?\d+|\.\d*|[eE][+\-]?\d+)`, LiteralNumberFloat, nil},
+			{`\.\d+([eE][+\-]?\d+)?`, LiteralNumberFloat, nil},
+			{`0[0-7]+`, LiteralNumberOct, nil},
+			{`0[xX][0-9a-fA-F_]+`, LiteralNumberHex, nil},
+			{`0b[01_]+`, LiteralNumberBin, nil},
+			{`(0|[1-9][0-9_]*)`, LiteralNumberInteger, nil},
+			{`'(\\['"\\abfnrtv]|\\x[0-9a-fA-F]{2}|\\[0-7]{1,3}|\\u[0-9a-fA-F]{4}|\\U[0-9a-fA-F]{8}|[^\\])'`, LiteralStringChar, nil},
+			{"(`)([^`]*)(`)", ByGroups(LiteralString, UsingLexer(TypeRemappingLexer(GoTextTemplate, TypeMapping{{Other, LiteralString, nil}})), LiteralString), nil},
+			{`"(\\\\|\\"|[^"])*"`, LiteralString, nil},
+			{`(<<=|>>=|<<|>>|<=|>=|&\^=|&\^|\+=|-=|\*=|/=|%=|&=|\|=|&&|\|\||<-|\+\+|--|==|!=|:=|\.\.\.|[+\-*/%&])`, Operator, nil},
+			{`([a-zA-Z_]\w*)(\s*)(\()`, ByGroups(NameFunction, UsingSelf("root"), Punctuation), nil},
+			{`[|^<>=!()\[\]{}.,;:~]`, Punctuation, nil},
+			{`[^\W\d]\w*`, NameOther, nil},
+		},
+	}
+}
+
+var GoHTMLTemplate = Register(DelegatingLexer(HTML, MustNewXMLLexer(
+	embedded,
+	"embedded/go_template.xml",
+).SetConfig(
+	&Config{
+		Name:    "Go HTML Template",
+		Aliases: []string{"go-html-template"},
+	},
+)))
+
+var GoTextTemplate = Register(MustNewXMLLexer(
+	embedded,
+	"embedded/go_template.xml",
+).SetConfig(
+	&Config{
+		Name:    "Go Text Template",
+		Aliases: []string{"go-text-template"},
+	},
+))

vendor/github.com/alecthomas/chroma/v2/lexers/haxe.go 🔗

@@ -0,0 +1,647 @@
+package lexers
+
+import (
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+// Haxe lexer.
+var Haxe = Register(MustNewLexer(
+	&Config{
+		Name:      "Haxe",
+		Aliases:   []string{"hx", "haxe", "hxsl"},
+		Filenames: []string{"*.hx", "*.hxsl"},
+		MimeTypes: []string{"text/haxe", "text/x-haxe", "text/x-hx"},
+		DotAll:    true,
+	},
+	haxeRules,
+))
+
+func haxeRules() Rules {
+	return Rules{
+		"root": {
+			Include("spaces"),
+			Include("meta"),
+			{`(?:package)\b`, KeywordNamespace, Push("semicolon", "package")},
+			{`(?:import)\b`, KeywordNamespace, Push("semicolon", "import")},
+			{`(?:using)\b`, KeywordNamespace, Push("semicolon", "using")},
+			{`(?:extern|private)\b`, KeywordDeclaration, nil},
+			{`(?:abstract)\b`, KeywordDeclaration, Push("abstract")},
+			{`(?:class|interface)\b`, KeywordDeclaration, Push("class")},
+			{`(?:enum)\b`, KeywordDeclaration, Push("enum")},
+			{`(?:typedef)\b`, KeywordDeclaration, Push("typedef")},
+			{`(?=.)`, Text, Push("expr-statement")},
+		},
+		"spaces": {
+			{`\s+`, Text, nil},
+			{`//[^\n\r]*`, CommentSingle, nil},
+			{`/\*.*?\*/`, CommentMultiline, nil},
+			{`(#)(if|elseif|else|end|error)\b`, CommentPreproc, MutatorFunc(haxePreProcMutator)},
+		},
+		"string-single-interpol": {
+			{`\$\{`, LiteralStringInterpol, Push("string-interpol-close", "expr")},
+			{`\$\$`, LiteralStringEscape, nil},
+			{`\$(?=(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+))`, LiteralStringInterpol, Push("ident")},
+			Include("string-single"),
+		},
+		"string-single": {
+			{`'`, LiteralStringSingle, Pop(1)},
+			{`\\.`, LiteralStringEscape, nil},
+			{`.`, LiteralStringSingle, nil},
+		},
+		"string-double": {
+			{`"`, LiteralStringDouble, Pop(1)},
+			{`\\.`, LiteralStringEscape, nil},
+			{`.`, LiteralStringDouble, nil},
+		},
+		"string-interpol-close": {
+			{`\$(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, LiteralStringInterpol, nil},
+			{`\}`, LiteralStringInterpol, Pop(1)},
+		},
+		"package": {
+			Include("spaces"),
+			{`(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, NameNamespace, nil},
+			{`\.`, Punctuation, Push("import-ident")},
+			Default(Pop(1)),
+		},
+		"import": {
+			Include("spaces"),
+			{`(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, NameNamespace, nil},
+			{`\*`, Keyword, nil},
+			{`\.`, Punctuation, Push("import-ident")},
+			{`in`, KeywordNamespace, Push("ident")},
+			Default(Pop(1)),
+		},
+		"import-ident": {
+			Include("spaces"),
+			{`\*`, Keyword, Pop(1)},
+			{`(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, NameNamespace, Pop(1)},
+		},
+		"using": {
+			Include("spaces"),
+			{`(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, NameNamespace, nil},
+			{`\.`, Punctuation, Push("import-ident")},
+			Default(Pop(1)),
+		},
+		"preproc-error": {
+			{`\s+`, CommentPreproc, nil},
+			{`'`, LiteralStringSingle, Push("#pop", "string-single")},
+			{`"`, LiteralStringDouble, Push("#pop", "string-double")},
+			Default(Pop(1)),
+		},
+		"preproc-expr": {
+			{`\s+`, CommentPreproc, nil},
+			{`\!`, CommentPreproc, nil},
+			{`\(`, CommentPreproc, Push("#pop", "preproc-parenthesis")},
+			{`(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, CommentPreproc, Pop(1)},
+			{`\.[0-9]+`, LiteralNumberFloat, nil},
+			{`[0-9]+[eE][+\-]?[0-9]+`, LiteralNumberFloat, nil},
+			{`[0-9]+\.[0-9]*[eE][+\-]?[0-9]+`, LiteralNumberFloat, nil},
+			{`[0-9]+\.[0-9]+`, LiteralNumberFloat, nil},
+			{`[0-9]+\.(?!(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)|\.\.)`, LiteralNumberFloat, nil},
+			{`0x[0-9a-fA-F]+`, LiteralNumberHex, nil},
+			{`[0-9]+`, LiteralNumberInteger, nil},
+			{`'`, LiteralStringSingle, Push("#pop", "string-single")},
+			{`"`, LiteralStringDouble, Push("#pop", "string-double")},
+		},
+		"preproc-parenthesis": {
+			{`\s+`, CommentPreproc, nil},
+			{`\)`, CommentPreproc, Pop(1)},
+			Default(Push("preproc-expr-in-parenthesis")),
+		},
+		"preproc-expr-chain": {
+			{`\s+`, CommentPreproc, nil},
+			{`(?:%=|&=|\|=|\^=|\+=|\-=|\*=|/=|<<=|>\s*>\s*=|>\s*>\s*>\s*=|==|!=|<=|>\s*=|&&|\|\||<<|>>>|>\s*>|\.\.\.|<|>|%|&|\||\^|\+|\*|/|\-|=>|=)`, CommentPreproc, Push("#pop", "preproc-expr-in-parenthesis")},
+			Default(Pop(1)),
+		},
+		"preproc-expr-in-parenthesis": {
+			{`\s+`, CommentPreproc, nil},
+			{`\!`, CommentPreproc, nil},
+			{`\(`, CommentPreproc, Push("#pop", "preproc-expr-chain", "preproc-parenthesis")},
+			{`(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, CommentPreproc, Push("#pop", "preproc-expr-chain")},
+			{`\.[0-9]+`, LiteralNumberFloat, Push("#pop", "preproc-expr-chain")},
+			{`[0-9]+[eE][+\-]?[0-9]+`, LiteralNumberFloat, Push("#pop", "preproc-expr-chain")},
+			{`[0-9]+\.[0-9]*[eE][+\-]?[0-9]+`, LiteralNumberFloat, Push("#pop", "preproc-expr-chain")},
+			{`[0-9]+\.[0-9]+`, LiteralNumberFloat, Push("#pop", "preproc-expr-chain")},
+			{`[0-9]+\.(?!(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)|\.\.)`, LiteralNumberFloat, Push("#pop", "preproc-expr-chain")},
+			{`0x[0-9a-fA-F]+`, LiteralNumberHex, Push("#pop", "preproc-expr-chain")},
+			{`[0-9]+`, LiteralNumberInteger, Push("#pop", "preproc-expr-chain")},
+			{`'`, LiteralStringSingle, Push("#pop", "preproc-expr-chain", "string-single")},
+			{`"`, LiteralStringDouble, Push("#pop", "preproc-expr-chain", "string-double")},
+		},
+		"abstract": {
+			Include("spaces"),
+			Default(Pop(1), Push("abstract-body"), Push("abstract-relation"), Push("abstract-opaque"), Push("type-param-constraint"), Push("type-name")),
+		},
+		"abstract-body": {
+			Include("spaces"),
+			{`\{`, Punctuation, Push("#pop", "class-body")},
+		},
+		"abstract-opaque": {
+			Include("spaces"),
+			{`\(`, Punctuation, Push("#pop", "parenthesis-close", "type")},
+			Default(Pop(1)),
+		},
+		"abstract-relation": {
+			Include("spaces"),
+			{`(?:to|from)`, KeywordDeclaration, Push("type")},
+			{`,`, Punctuation, nil},
+			Default(Pop(1)),
+		},
+		"meta": {
+			Include("spaces"),
+			{`@`, NameDecorator, Push("meta-body", "meta-ident", "meta-colon")},
+		},
+		"meta-colon": {
+			Include("spaces"),
+			{`:`, NameDecorator, Pop(1)},
+			Default(Pop(1)),
+		},
+		"meta-ident": {
+			Include("spaces"),
+			{`(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, NameDecorator, Pop(1)},
+		},
+		"meta-body": {
+			Include("spaces"),
+			{`\(`, NameDecorator, Push("#pop", "meta-call")},
+			Default(Pop(1)),
+		},
+		"meta-call": {
+			Include("spaces"),
+			{`\)`, NameDecorator, Pop(1)},
+			Default(Pop(1), Push("meta-call-sep"), Push("expr")),
+		},
+		"meta-call-sep": {
+			Include("spaces"),
+			{`\)`, NameDecorator, Pop(1)},
+			{`,`, Punctuation, Push("#pop", "meta-call")},
+		},
+		"typedef": {
+			Include("spaces"),
+			Default(Pop(1), Push("typedef-body"), Push("type-param-constraint"), Push("type-name")),
+		},
+		"typedef-body": {
+			Include("spaces"),
+			{`=`, Operator, Push("#pop", "optional-semicolon", "type")},
+		},
+		"enum": {
+			Include("spaces"),
+			Default(Pop(1), Push("enum-body"), Push("bracket-open"), Push("type-param-constraint"), Push("type-name")),
+		},
+		"enum-body": {
+			Include("spaces"),
+			Include("meta"),
+			{`\}`, Punctuation, Pop(1)},
+			{`(?!(?:function|class|static|var|if|else|while|do|for|break|return|continue|extends|implements|import|switch|case|default|public|private|try|untyped|catch|new|this|throw|extern|enum|in|interface|cast|override|dynamic|typedef|package|inline|using|null|true|false|abstract)\b)(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, Name, Push("enum-member", "type-param-constraint")},
+		},
+		"enum-member": {
+			Include("spaces"),
+			{`\(`, Punctuation, Push("#pop", "semicolon", "flag", "function-param")},
+			Default(Pop(1), Push("semicolon"), Push("flag")),
+		},
+		"class": {
+			Include("spaces"),
+			Default(Pop(1), Push("class-body"), Push("bracket-open"), Push("extends"), Push("type-param-constraint"), Push("type-name")),
+		},
+		"extends": {
+			Include("spaces"),
+			{`(?:extends|implements)\b`, KeywordDeclaration, Push("type")},
+			{`,`, Punctuation, nil},
+			Default(Pop(1)),
+		},
+		"bracket-open": {
+			Include("spaces"),
+			{`\{`, Punctuation, Pop(1)},
+		},
+		"bracket-close": {
+			Include("spaces"),
+			{`\}`, Punctuation, Pop(1)},
+		},
+		"class-body": {
+			Include("spaces"),
+			Include("meta"),
+			{`\}`, Punctuation, Pop(1)},
+			{`(?:static|public|private|override|dynamic|inline|macro)\b`, KeywordDeclaration, nil},
+			Default(Push("class-member")),
+		},
+		"class-member": {
+			Include("spaces"),
+			{`(var)\b`, KeywordDeclaration, Push("#pop", "optional-semicolon", "var")},
+			{`(function)\b`, KeywordDeclaration, Push("#pop", "optional-semicolon", "class-method")},
+		},
+		"function-local": {
+			Include("spaces"),
+			{`(?!(?:function|class|static|var|if|else|while|do|for|break|return|continue|extends|implements|import|switch|case|default|public|private|try|untyped|catch|new|this|throw|extern|enum|in|interface|cast|override|dynamic|typedef|package|inline|using|null|true|false|abstract)\b)(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, NameFunction, Push("#pop", "optional-expr", "flag", "function-param", "parenthesis-open", "type-param-constraint")},
+			Default(Pop(1), Push("optional-expr"), Push("flag"), Push("function-param"), Push("parenthesis-open"), Push("type-param-constraint")),
+		},
+		"optional-expr": {
+			Include("spaces"),
+			Include("expr"),
+			Default(Pop(1)),
+		},
+		"class-method": {
+			Include("spaces"),
+			{`(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, NameFunction, Push("#pop", "optional-expr", "flag", "function-param", "parenthesis-open", "type-param-constraint")},
+		},
+		"function-param": {
+			Include("spaces"),
+			{`\)`, Punctuation, Pop(1)},
+			{`\?`, Punctuation, nil},
+			{`(?!(?:function|class|static|var|if|else|while|do|for|break|return|continue|extends|implements|import|switch|case|default|public|private|try|untyped|catch|new|this|throw|extern|enum|in|interface|cast|override|dynamic|typedef|package|inline|using|null|true|false|abstract)\b)(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, Name, Push("#pop", "function-param-sep", "assign", "flag")},
+		},
+		"function-param-sep": {
+			Include("spaces"),
+			{`\)`, Punctuation, Pop(1)},
+			{`,`, Punctuation, Push("#pop", "function-param")},
+		},
+		"prop-get-set": {
+			Include("spaces"),
+			{`\(`, Punctuation, Push("#pop", "parenthesis-close", "prop-get-set-opt", "comma", "prop-get-set-opt")},
+			Default(Pop(1)),
+		},
+		"prop-get-set-opt": {
+			Include("spaces"),
+			{`(?:default|null|never|dynamic|get|set)\b`, Keyword, Pop(1)},
+			{`(?!(?:function|class|static|var|if|else|while|do|for|break|return|continue|extends|implements|import|switch|case|default|public|private|try|untyped|catch|new|this|throw|extern|enum|in|interface|cast|override|dynamic|typedef|package|inline|using|null|true|false|abstract)\b)(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, Text, Pop(1)},
+		},
+		"expr-statement": {
+			Include("spaces"),
+			Default(Pop(1), Push("optional-semicolon"), Push("expr")),
+		},
+		"expr": {
+			Include("spaces"),
+			{`@`, NameDecorator, Push("#pop", "optional-expr", "meta-body", "meta-ident", "meta-colon")},
+			{`(?:\+\+|\-\-|~(?!/)|!|\-)`, Operator, nil},
+			{`\(`, Punctuation, Push("#pop", "expr-chain", "parenthesis")},
+			{`(?:static|public|private|override|dynamic|inline)\b`, KeywordDeclaration, nil},
+			{`(?:function)\b`, KeywordDeclaration, Push("#pop", "expr-chain", "function-local")},
+			{`\{`, Punctuation, Push("#pop", "expr-chain", "bracket")},
+			{`(?:true|false|null)\b`, KeywordConstant, Push("#pop", "expr-chain")},
+			{`(?:this)\b`, Keyword, Push("#pop", "expr-chain")},
+			{`(?:cast)\b`, Keyword, Push("#pop", "expr-chain", "cast")},
+			{`(?:try)\b`, Keyword, Push("#pop", "catch", "expr")},
+			{`(?:var)\b`, KeywordDeclaration, Push("#pop", "var")},
+			{`(?:new)\b`, Keyword, Push("#pop", "expr-chain", "new")},
+			{`(?:switch)\b`, Keyword, Push("#pop", "switch")},
+			{`(?:if)\b`, Keyword, Push("#pop", "if")},
+			{`(?:do)\b`, Keyword, Push("#pop", "do")},
+			{`(?:while)\b`, Keyword, Push("#pop", "while")},
+			{`(?:for)\b`, Keyword, Push("#pop", "for")},
+			{`(?:untyped|throw)\b`, Keyword, nil},
+			{`(?:return)\b`, Keyword, Push("#pop", "optional-expr")},
+			{`(?:macro)\b`, Keyword, Push("#pop", "macro")},
+			{`(?:continue|break)\b`, Keyword, Pop(1)},
+			{`(?:\$\s*[a-z]\b|\$(?!(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)))`, Name, Push("#pop", "dollar")},
+			{`(?!(?:function|class|static|var|if|else|while|do|for|break|return|continue|extends|implements|import|switch|case|default|public|private|try|untyped|catch|new|this|throw|extern|enum|in|interface|cast|override|dynamic|typedef|package|inline|using|null|true|false|abstract)\b)(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, Name, Push("#pop", "expr-chain")},
+			{`\.[0-9]+`, LiteralNumberFloat, Push("#pop", "expr-chain")},
+			{`[0-9]+[eE][+\-]?[0-9]+`, LiteralNumberFloat, Push("#pop", "expr-chain")},
+			{`[0-9]+\.[0-9]*[eE][+\-]?[0-9]+`, LiteralNumberFloat, Push("#pop", "expr-chain")},
+			{`[0-9]+\.[0-9]+`, LiteralNumberFloat, Push("#pop", "expr-chain")},
+			{`[0-9]+\.(?!(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)|\.\.)`, LiteralNumberFloat, Push("#pop", "expr-chain")},
+			{`0x[0-9a-fA-F]+`, LiteralNumberHex, Push("#pop", "expr-chain")},
+			{`[0-9]+`, LiteralNumberInteger, Push("#pop", "expr-chain")},
+			{`'`, LiteralStringSingle, Push("#pop", "expr-chain", "string-single-interpol")},
+			{`"`, LiteralStringDouble, Push("#pop", "expr-chain", "string-double")},
+			{`~/(\\\\|\\/|[^/\n])*/[gimsu]*`, LiteralStringRegex, Push("#pop", "expr-chain")},
+			{`\[`, Punctuation, Push("#pop", "expr-chain", "array-decl")},
+		},
+		"expr-chain": {
+			Include("spaces"),
+			{`(?:\+\+|\-\-)`, Operator, nil},
+			{`(?:%=|&=|\|=|\^=|\+=|\-=|\*=|/=|<<=|>\s*>\s*=|>\s*>\s*>\s*=|==|!=|<=|>\s*=|&&|\|\||<<|>>>|>\s*>|\.\.\.|<|>|%|&|\||\^|\+|\*|/|\-|=>|=)`, Operator, Push("#pop", "expr")},
+			{`(?:in)\b`, Keyword, Push("#pop", "expr")},
+			{`\?`, Operator, Push("#pop", "expr", "ternary", "expr")},
+			{`(\.)((?!(?:function|class|static|var|if|else|while|do|for|break|return|continue|extends|implements|import|switch|case|default|public|private|try|untyped|catch|new|this|throw|extern|enum|in|interface|cast|override|dynamic|typedef|package|inline|using|null|true|false|abstract)\b)(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+))`, ByGroups(Punctuation, Name), nil},
+			{`\[`, Punctuation, Push("array-access")},
+			{`\(`, Punctuation, Push("call")},
+			Default(Pop(1)),
+		},
+		"macro": {
+			Include("spaces"),
+			Include("meta"),
+			{`:`, Punctuation, Push("#pop", "type")},
+			{`(?:extern|private)\b`, KeywordDeclaration, nil},
+			{`(?:abstract)\b`, KeywordDeclaration, Push("#pop", "optional-semicolon", "abstract")},
+			{`(?:class|interface)\b`, KeywordDeclaration, Push("#pop", "optional-semicolon", "macro-class")},
+			{`(?:enum)\b`, KeywordDeclaration, Push("#pop", "optional-semicolon", "enum")},
+			{`(?:typedef)\b`, KeywordDeclaration, Push("#pop", "optional-semicolon", "typedef")},
+			Default(Pop(1), Push("expr")),
+		},
+		"macro-class": {
+			{`\{`, Punctuation, Push("#pop", "class-body")},
+			Include("class"),
+		},
+		"cast": {
+			Include("spaces"),
+			{`\(`, Punctuation, Push("#pop", "parenthesis-close", "cast-type", "expr")},
+			Default(Pop(1), Push("expr")),
+		},
+		"cast-type": {
+			Include("spaces"),
+			{`,`, Punctuation, Push("#pop", "type")},
+			Default(Pop(1)),
+		},
+		"catch": {
+			Include("spaces"),
+			{`(?:catch)\b`, Keyword, Push("expr", "function-param", "parenthesis-open")},
+			Default(Pop(1)),
+		},
+		"do": {
+			Include("spaces"),
+			Default(Pop(1), Push("do-while"), Push("expr")),
+		},
+		"do-while": {
+			Include("spaces"),
+			{`(?:while)\b`, Keyword, Push("#pop", "parenthesis", "parenthesis-open")},
+		},
+		"while": {
+			Include("spaces"),
+			{`\(`, Punctuation, Push("#pop", "expr", "parenthesis")},
+		},
+		"for": {
+			Include("spaces"),
+			{`\(`, Punctuation, Push("#pop", "expr", "parenthesis")},
+		},
+		"if": {
+			Include("spaces"),
+			{`\(`, Punctuation, Push("#pop", "else", "optional-semicolon", "expr", "parenthesis")},
+		},
+		"else": {
+			Include("spaces"),
+			{`(?:else)\b`, Keyword, Push("#pop", "expr")},
+			Default(Pop(1)),
+		},
+		"switch": {
+			Include("spaces"),
+			Default(Pop(1), Push("switch-body"), Push("bracket-open"), Push("expr")),
+		},
+		"switch-body": {
+			Include("spaces"),
+			{`(?:case|default)\b`, Keyword, Push("case-block", "case")},
+			{`\}`, Punctuation, Pop(1)},
+		},
+		"case": {
+			Include("spaces"),
+			{`:`, Punctuation, Pop(1)},
+			Default(Pop(1), Push("case-sep"), Push("case-guard"), Push("expr")),
+		},
+		"case-sep": {
+			Include("spaces"),
+			{`:`, Punctuation, Pop(1)},
+			{`,`, Punctuation, Push("#pop", "case")},
+		},
+		"case-guard": {
+			Include("spaces"),
+			{`(?:if)\b`, Keyword, Push("#pop", "parenthesis", "parenthesis-open")},
+			Default(Pop(1)),
+		},
+		"case-block": {
+			Include("spaces"),
+			{`(?!(?:case|default)\b|\})`, Keyword, Push("expr-statement")},
+			Default(Pop(1)),
+		},
+		"new": {
+			Include("spaces"),
+			Default(Pop(1), Push("call"), Push("parenthesis-open"), Push("type")),
+		},
+		"array-decl": {
+			Include("spaces"),
+			{`\]`, Punctuation, Pop(1)},
+			Default(Pop(1), Push("array-decl-sep"), Push("expr")),
+		},
+		"array-decl-sep": {
+			Include("spaces"),
+			{`\]`, Punctuation, Pop(1)},
+			{`,`, Punctuation, Push("#pop", "array-decl")},
+		},
+		"array-access": {
+			Include("spaces"),
+			Default(Pop(1), Push("array-access-close"), Push("expr")),
+		},
+		"array-access-close": {
+			Include("spaces"),
+			{`\]`, Punctuation, Pop(1)},
+		},
+		"comma": {
+			Include("spaces"),
+			{`,`, Punctuation, Pop(1)},
+		},
+		"colon": {
+			Include("spaces"),
+			{`:`, Punctuation, Pop(1)},
+		},
+		"semicolon": {
+			Include("spaces"),
+			{`;`, Punctuation, Pop(1)},
+		},
+		"optional-semicolon": {
+			Include("spaces"),
+			{`;`, Punctuation, Pop(1)},
+			Default(Pop(1)),
+		},
+		"ident": {
+			Include("spaces"),
+			{`(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, Name, Pop(1)},
+		},
+		"dollar": {
+			Include("spaces"),
+			{`\{`, Punctuation, Push("#pop", "expr-chain", "bracket-close", "expr")},
+			Default(Pop(1), Push("expr-chain")),
+		},
+		"type-name": {
+			Include("spaces"),
+			{`_*[A-Z]\w*`, Name, Pop(1)},
+		},
+		"type-full-name": {
+			Include("spaces"),
+			{`\.`, Punctuation, Push("ident")},
+			Default(Pop(1)),
+		},
+		"type": {
+			Include("spaces"),
+			{`\?`, Punctuation, nil},
+			{`(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, Name, Push("#pop", "type-check", "type-full-name")},
+			{`\{`, Punctuation, Push("#pop", "type-check", "type-struct")},
+			{`\(`, Punctuation, Push("#pop", "type-check", "type-parenthesis")},
+		},
+		"type-parenthesis": {
+			Include("spaces"),
+			Default(Pop(1), Push("parenthesis-close"), Push("type")),
+		},
+		"type-check": {
+			Include("spaces"),
+			{`->`, Punctuation, Push("#pop", "type")},
+			{`<(?!=)`, Punctuation, Push("type-param")},
+			Default(Pop(1)),
+		},
+		"type-struct": {
+			Include("spaces"),
+			{`\}`, Punctuation, Pop(1)},
+			{`\?`, Punctuation, nil},
+			{`>`, Punctuation, Push("comma", "type")},
+			{`(?!(?:function|class|static|var|if|else|while|do|for|break|return|continue|extends|implements|import|switch|case|default|public|private|try|untyped|catch|new|this|throw|extern|enum|in|interface|cast|override|dynamic|typedef|package|inline|using|null|true|false|abstract)\b)(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, Name, Push("#pop", "type-struct-sep", "type", "colon")},
+			Include("class-body"),
+		},
+		"type-struct-sep": {
+			Include("spaces"),
+			{`\}`, Punctuation, Pop(1)},
+			{`,`, Punctuation, Push("#pop", "type-struct")},
+		},
+		"type-param-type": {
+			{`\.[0-9]+`, LiteralNumberFloat, Pop(1)},
+			{`[0-9]+[eE][+\-]?[0-9]+`, LiteralNumberFloat, Pop(1)},
+			{`[0-9]+\.[0-9]*[eE][+\-]?[0-9]+`, LiteralNumberFloat, Pop(1)},
+			{`[0-9]+\.[0-9]+`, LiteralNumberFloat, Pop(1)},
+			{`[0-9]+\.(?!(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)|\.\.)`, LiteralNumberFloat, Pop(1)},
+			{`0x[0-9a-fA-F]+`, LiteralNumberHex, Pop(1)},
+			{`[0-9]+`, LiteralNumberInteger, Pop(1)},
+			{`'`, LiteralStringSingle, Push("#pop", "string-single")},
+			{`"`, LiteralStringDouble, Push("#pop", "string-double")},
+			{`~/(\\\\|\\/|[^/\n])*/[gim]*`, LiteralStringRegex, Pop(1)},
+			{`\[`, Operator, Push("#pop", "array-decl")},
+			Include("type"),
+		},
+		"type-param": {
+			Include("spaces"),
+			Default(Pop(1), Push("type-param-sep"), Push("type-param-type")),
+		},
+		"type-param-sep": {
+			Include("spaces"),
+			{`>`, Punctuation, Pop(1)},
+			{`,`, Punctuation, Push("#pop", "type-param")},
+		},
+		"type-param-constraint": {
+			Include("spaces"),
+			{`<(?!=)`, Punctuation, Push("#pop", "type-param-constraint-sep", "type-param-constraint-flag", "type-name")},
+			Default(Pop(1)),
+		},
+		"type-param-constraint-sep": {
+			Include("spaces"),
+			{`>`, Punctuation, Pop(1)},
+			{`,`, Punctuation, Push("#pop", "type-param-constraint-sep", "type-param-constraint-flag", "type-name")},
+		},
+		"type-param-constraint-flag": {
+			Include("spaces"),
+			{`:`, Punctuation, Push("#pop", "type-param-constraint-flag-type")},
+			Default(Pop(1)),
+		},
+		"type-param-constraint-flag-type": {
+			Include("spaces"),
+			{`\(`, Punctuation, Push("#pop", "type-param-constraint-flag-type-sep", "type")},
+			Default(Pop(1), Push("type")),
+		},
+		"type-param-constraint-flag-type-sep": {
+			Include("spaces"),
+			{`\)`, Punctuation, Pop(1)},
+			{`,`, Punctuation, Push("type")},
+		},
+		"parenthesis": {
+			Include("spaces"),
+			Default(Pop(1), Push("parenthesis-close"), Push("flag"), Push("expr")),
+		},
+		"parenthesis-open": {
+			Include("spaces"),
+			{`\(`, Punctuation, Pop(1)},
+		},
+		"parenthesis-close": {
+			Include("spaces"),
+			{`\)`, Punctuation, Pop(1)},
+		},
+		"var": {
+			Include("spaces"),
+			{`(?!(?:function|class|static|var|if|else|while|do|for|break|return|continue|extends|implements|import|switch|case|default|public|private|try|untyped|catch|new|this|throw|extern|enum|in|interface|cast|override|dynamic|typedef|package|inline|using|null|true|false|abstract)\b)(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, Text, Push("#pop", "var-sep", "assign", "flag", "prop-get-set")},
+		},
+		"var-sep": {
+			Include("spaces"),
+			{`,`, Punctuation, Push("#pop", "var")},
+			Default(Pop(1)),
+		},
+		"assign": {
+			Include("spaces"),
+			{`=`, Operator, Push("#pop", "expr")},
+			Default(Pop(1)),
+		},
+		"flag": {
+			Include("spaces"),
+			{`:`, Punctuation, Push("#pop", "type")},
+			Default(Pop(1)),
+		},
+		"ternary": {
+			Include("spaces"),
+			{`:`, Operator, Pop(1)},
+		},
+		"call": {
+			Include("spaces"),
+			{`\)`, Punctuation, Pop(1)},
+			Default(Pop(1), Push("call-sep"), Push("expr")),
+		},
+		"call-sep": {
+			Include("spaces"),
+			{`\)`, Punctuation, Pop(1)},
+			{`,`, Punctuation, Push("#pop", "call")},
+		},
+		"bracket": {
+			Include("spaces"),
+			{`(?!(?:\$\s*[a-z]\b|\$(?!(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+))))(?!(?:function|class|static|var|if|else|while|do|for|break|return|continue|extends|implements|import|switch|case|default|public|private|try|untyped|catch|new|this|throw|extern|enum|in|interface|cast|override|dynamic|typedef|package|inline|using|null|true|false|abstract)\b)(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, Name, Push("#pop", "bracket-check")},
+			{`'`, LiteralStringSingle, Push("#pop", "bracket-check", "string-single")},
+			{`"`, LiteralStringDouble, Push("#pop", "bracket-check", "string-double")},
+			Default(Pop(1), Push("block")),
+		},
+		"bracket-check": {
+			Include("spaces"),
+			{`:`, Punctuation, Push("#pop", "object-sep", "expr")},
+			Default(Pop(1), Push("block"), Push("optional-semicolon"), Push("expr-chain")),
+		},
+		"block": {
+			Include("spaces"),
+			{`\}`, Punctuation, Pop(1)},
+			Default(Push("expr-statement")),
+		},
+		"object": {
+			Include("spaces"),
+			{`\}`, Punctuation, Pop(1)},
+			Default(Pop(1), Push("object-sep"), Push("expr"), Push("colon"), Push("ident-or-string")),
+		},
+		"ident-or-string": {
+			Include("spaces"),
+			{`(?!(?:function|class|static|var|if|else|while|do|for|break|return|continue|extends|implements|import|switch|case|default|public|private|try|untyped|catch|new|this|throw|extern|enum|in|interface|cast|override|dynamic|typedef|package|inline|using|null|true|false|abstract)\b)(?:_*[a-z]\w*|_+[0-9]\w*|_*[A-Z]\w*|_+|\$\w+)`, Name, Pop(1)},
+			{`'`, LiteralStringSingle, Push("#pop", "string-single")},
+			{`"`, LiteralStringDouble, Push("#pop", "string-double")},
+		},
+		"object-sep": {
+			Include("spaces"),
+			{`\}`, Punctuation, Pop(1)},
+			{`,`, Punctuation, Push("#pop", "object")},
+		},
+	}
+}
+
+func haxePreProcMutator(state *LexerState) error {
+	stack, ok := state.Get("haxe-pre-proc").([][]string)
+	if !ok {
+		stack = [][]string{}
+	}
+
+	proc := state.Groups[2]
+	switch proc {
+	case "if":
+		stack = append(stack, state.Stack)
+	case "else", "elseif":
+		if len(stack) > 0 {
+			state.Stack = stack[len(stack)-1]
+		}
+	case "end":
+		if len(stack) > 0 {
+			stack = stack[:len(stack)-1]
+		}
+	}
+
+	if proc == "if" || proc == "elseif" {
+		state.Stack = append(state.Stack, "preproc-expr")
+	}
+
+	if proc == "error" {
+		state.Stack = append(state.Stack, "preproc-error")
+	}
+	state.Set("haxe-pre-proc", stack)
+	return nil
+}
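
The `haxePreProcMutator` above drives Haxe's `#if`/`#elseif`/`#else`/`#end` conditional compilation by saving and restoring the lexer's state stack. A minimal, stdlib-only sketch of that bookkeeping, with the state-stack representation simplified to `[]string` and the function name chosen here for illustration:

```go
package main

import "fmt"

// preProc mirrors the mutator's bookkeeping: "#if" saves the current state
// stack, "#else"/"#elseif" restore the saved copy, and "#end" discards it.
// "#if" and "#elseif" additionally push a state to lex the condition itself.
func preProc(saved [][]string, current []string, directive string) ([][]string, []string) {
	switch directive {
	case "if":
		saved = append(saved, current)
	case "else", "elseif":
		if len(saved) > 0 {
			current = saved[len(saved)-1]
		}
	case "end":
		if len(saved) > 0 {
			saved = saved[:len(saved)-1]
		}
	}
	if directive == "if" || directive == "elseif" {
		current = append(current, "preproc-expr")
	}
	return saved, current
}

func main() {
	saved, cur := preProc(nil, []string{"root"}, "if")
	fmt.Println(len(saved), cur) // one saved snapshot; "preproc-expr" pushed
	saved, cur = preProc(saved, cur, "end")
	fmt.Println(len(saved), cur) // snapshot discarded
}
```

The save-on-`#if` / restore-on-`#else` pattern lets the lexer resume in a consistent state even when an excluded branch contains unbalanced constructs.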

vendor/github.com/alecthomas/chroma/v2/lexers/http.go 🔗

@@ -0,0 +1,131 @@
+package lexers
+
+import (
+	"strings"
+
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+// HTTP lexer.
+var HTTP = Register(httpBodyContentTypeLexer(MustNewLexer(
+	&Config{
+		Name:         "HTTP",
+		Aliases:      []string{"http"},
+		Filenames:    []string{},
+		MimeTypes:    []string{},
+		NotMultiline: true,
+		DotAll:       true,
+	},
+	httpRules,
+)))
+
+func httpRules() Rules {
+	return Rules{
+		"root": {
+			{`(GET|POST|PUT|DELETE|HEAD|OPTIONS|TRACE|PATCH|CONNECT)( +)([^ ]+)( +)(HTTP)(/)([123](?:\.[01])?)(\r?\n|\Z)`, ByGroups(NameFunction, Text, NameNamespace, Text, KeywordReserved, Operator, LiteralNumber, Text), Push("headers")},
+			{`(HTTP)(/)([123](?:\.[01])?)( +)(\d{3})( *)([^\r\n]*)(\r?\n|\Z)`, ByGroups(KeywordReserved, Operator, LiteralNumber, Text, LiteralNumber, Text, NameException, Text), Push("headers")},
+		},
+		"headers": {
+			{`([^\s:]+)( *)(:)( *)([^\r\n]+)(\r?\n|\Z)`, EmitterFunc(httpHeaderBlock), nil},
+			{`([\t ]+)([^\r\n]+)(\r?\n|\Z)`, EmitterFunc(httpContinuousHeaderBlock), nil},
+			{`\r?\n`, Text, Push("content")},
+		},
+		"content": {
+			{`.+`, EmitterFunc(httpContentBlock), nil},
+		},
+	}
+}
+
+func httpContentBlock(groups []string, state *LexerState) Iterator {
+	tokens := []Token{
+		{Generic, groups[0]},
+	}
+	return Literator(tokens...)
+}
+
+func httpHeaderBlock(groups []string, state *LexerState) Iterator {
+	tokens := []Token{
+		{Name, groups[1]},
+		{Text, groups[2]},
+		{Operator, groups[3]},
+		{Text, groups[4]},
+		{Literal, groups[5]},
+		{Text, groups[6]},
+	}
+	return Literator(tokens...)
+}
+
+func httpContinuousHeaderBlock(groups []string, state *LexerState) Iterator {
+	tokens := []Token{
+		{Text, groups[1]},
+		{Literal, groups[2]},
+		{Text, groups[3]},
+	}
+	return Literator(tokens...)
+}
+
+func httpBodyContentTypeLexer(lexer Lexer) Lexer { return &httpBodyContentTyper{lexer} }
+
+type httpBodyContentTyper struct{ Lexer }
+
+func (d *httpBodyContentTyper) Tokenise(options *TokeniseOptions, text string) (Iterator, error) { // nolint: gocognit
+	var contentType string
+	var isContentType bool
+	var subIterator Iterator
+
+	it, err := d.Lexer.Tokenise(options, text)
+	if err != nil {
+		return nil, err
+	}
+
+	return func() Token {
+		token := it()
+
+		if token == EOF {
+			if subIterator != nil {
+				return subIterator()
+			}
+			return EOF
+		}
+
+		switch {
+		case token.Type == Name && strings.ToLower(token.Value) == "content-type":
+			{
+				isContentType = true
+			}
+		case token.Type == Literal && isContentType:
+			{
+				isContentType = false
+				contentType = strings.TrimSpace(token.Value)
+				pos := strings.Index(contentType, ";")
+				if pos > 0 {
+					contentType = strings.TrimSpace(contentType[:pos])
+				}
+			}
+		case token.Type == Generic && contentType != "":
+			{
+				lexer := MatchMimeType(contentType)
+
+				// application/calendar+xml can be treated as application/xml
+				// if there's not a better match.
+				if lexer == nil && strings.Contains(contentType, "+") {
+					slashPos := strings.Index(contentType, "/")
+					plusPos := strings.LastIndex(contentType, "+")
+					contentType = contentType[:slashPos+1] + contentType[plusPos+1:]
+					lexer = MatchMimeType(contentType)
+				}
+
+				if lexer == nil {
+					token.Type = Text
+				} else {
+					subIterator, err = lexer.Tokenise(nil, token.Value)
+					if err != nil {
+						panic(err)
+					}
+					return EOF
+				}
+			}
+		}
+		return token
+	}, nil
+}
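
The `Generic` branch of `httpBodyContentTyper.Tokenise` strips MIME parameters and, when no lexer matches a structured-syntax type such as `application/calendar+xml`, retries with the suffix promoted to the subtype (`application/xml`). That string manipulation can be sketched in isolation, stdlib-only, with the helper name `simplifyMimeType` chosen here for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// simplifyMimeType applies the same two steps as the lexer's fallback:
// drop parameters after ";", then rewrite "type/sub+suffix" to "type/suffix".
func simplifyMimeType(contentType string) string {
	// Strip parameters such as "; charset=utf-8".
	if pos := strings.Index(contentType, ";"); pos > 0 {
		contentType = strings.TrimSpace(contentType[:pos])
	}
	// Promote the "+suffix" to the subtype, e.g. application/calendar+xml
	// becomes application/xml.
	if strings.Contains(contentType, "+") {
		slashPos := strings.Index(contentType, "/")
		plusPos := strings.LastIndex(contentType, "+")
		return contentType[:slashPos+1] + contentType[plusPos+1:]
	}
	return contentType
}

func main() {
	fmt.Println(simplifyMimeType("application/calendar+xml; charset=utf-8")) // application/xml
	fmt.Println(simplifyMimeType("text/html"))                               // text/html
}
```

In the real lexer this rewrite only happens after `MatchMimeType` fails on the exact type, so a registered `+xml` lexer would still win over the generic XML one.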

vendor/github.com/alecthomas/chroma/v2/lexers/lexers.go 🔗

@@ -0,0 +1,79 @@
+package lexers
+
+import (
+	"embed"
+	"io/fs"
+
+	"github.com/alecthomas/chroma/v2"
+)
+
+//go:embed embedded
+var embedded embed.FS
+
+// GlobalLexerRegistry is the global LexerRegistry of Lexers.
+var GlobalLexerRegistry = func() *chroma.LexerRegistry {
+	reg := chroma.NewLexerRegistry()
+	// index(reg)
+	paths, err := fs.Glob(embedded, "embedded/*.xml")
+	if err != nil {
+		panic(err)
+	}
+	for _, path := range paths {
+		reg.Register(chroma.MustNewXMLLexer(embedded, path))
+	}
+	return reg
+}()
+
+// Names of all lexers, optionally including aliases.
+func Names(withAliases bool) []string {
+	return GlobalLexerRegistry.Names(withAliases)
+}
+
+// Get a Lexer by name, alias or file extension.
+//
+// Note that if there isn't an exact match on name or alias, this will
+// call Match(), so it is not efficient.
+func Get(name string) chroma.Lexer {
+	return GlobalLexerRegistry.Get(name)
+}
+
+// MatchMimeType attempts to find a lexer for the given MIME type.
+func MatchMimeType(mimeType string) chroma.Lexer {
+	return GlobalLexerRegistry.MatchMimeType(mimeType)
+}
+
+// Match returns the first lexer matching filename.
+//
+// Note that this iterates over all file patterns in all lexers, so it's not
+// particularly efficient.
+func Match(filename string) chroma.Lexer {
+	return GlobalLexerRegistry.Match(filename)
+}
+
+// Register a Lexer with the global registry.
+func Register(lexer chroma.Lexer) chroma.Lexer {
+	return GlobalLexerRegistry.Register(lexer)
+}
+
+// Analyse text content and return the "best" lexer.
+func Analyse(text string) chroma.Lexer {
+	return GlobalLexerRegistry.Analyse(text)
+}
+
+// PlaintextRules is used for the fallback lexer as well as the explicit
+// plaintext lexer.
+func PlaintextRules() chroma.Rules {
+	return chroma.Rules{
+		"root": []chroma.Rule{
+			{`.+`, chroma.Text, nil},
+			{`\n`, chroma.Text, nil},
+		},
+	}
+}
+
+// Fallback lexer if no other is found.
+var Fallback chroma.Lexer = chroma.MustNewLexer(&chroma.Config{
+	Name:      "fallback",
+	Filenames: []string{"*"},
+	Priority:  -1,
+}, PlaintextRules)
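
The registry API above resolves lexers by name or alias (`Get`) and by filename glob (`Match`), both via linear scans. A stdlib-only sketch of that lookup logic, with the `lexerInfo`/`registry` types invented here as stand-ins for chroma's `Lexer` and `LexerRegistry`:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// lexerInfo is a hypothetical, pared-down stand-in for a lexer's Config.
type lexerInfo struct {
	Name      string
	Aliases   []string
	Filenames []string
}

type registry struct{ lexers []lexerInfo }

func (r *registry) register(l lexerInfo) { r.lexers = append(r.lexers, l) }

// get looks up by name (case-insensitively) or alias; chroma's Get falls
// back to Match after this, which is why it is documented as inefficient.
func (r *registry) get(name string) *lexerInfo {
	lower := strings.ToLower(name)
	for i, l := range r.lexers {
		if strings.ToLower(l.Name) == lower {
			return &r.lexers[i]
		}
		for _, a := range l.Aliases {
			if a == lower {
				return &r.lexers[i]
			}
		}
	}
	return nil
}

// match scans every filename pattern of every lexer, mirroring the linear
// scan that makes lexers.Match "not particularly efficient".
func (r *registry) match(filename string) *lexerInfo {
	for i, l := range r.lexers {
		for _, pat := range l.Filenames {
			if ok, _ := filepath.Match(pat, filename); ok {
				return &r.lexers[i]
			}
		}
	}
	return nil
}

func main() {
	r := &registry{}
	r.register(lexerInfo{Name: "Go", Aliases: []string{"golang"}, Filenames: []string{"*.go"}})
	fmt.Println(r.get("golang").Name) // Go
	fmt.Println(r.match("main.go").Name)
}
```

The real registry additionally supports priorities (note the `Priority: -1` on the `Fallback` lexer, which is how it loses to every other `*` match).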

vendor/github.com/alecthomas/chroma/v2/lexers/markdown.go 🔗

@@ -0,0 +1,46 @@
+package lexers
+
+import (
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+// Markdown lexer.
+var Markdown = Register(DelegatingLexer(HTML, MustNewLexer(
+	&Config{
+		Name:      "markdown",
+		Aliases:   []string{"md", "mkd"},
+		Filenames: []string{"*.md", "*.mkd", "*.markdown"},
+		MimeTypes: []string{"text/x-markdown"},
+	},
+	markdownRules,
+)))
+
+func markdownRules() Rules {
+	return Rules{
+		"root": {
+			{`^(#[^#].+\n)`, ByGroups(GenericHeading), nil},
+			{`^(#{2,6}.+\n)`, ByGroups(GenericSubheading), nil},
+			{`^(\s*)([*-] )(\[[ xX]\])( .+\n)`, ByGroups(Text, Keyword, Keyword, UsingSelf("inline")), nil},
+			{`^(\s*)([*-])(\s)(.+\n)`, ByGroups(Text, Keyword, Text, UsingSelf("inline")), nil},
+			{`^(\s*)([0-9]+\.)( .+\n)`, ByGroups(Text, Keyword, UsingSelf("inline")), nil},
+			{`^(\s*>\s)(.+\n)`, ByGroups(Keyword, GenericEmph), nil},
+			{"^(```\\n)([\\w\\W]*?)(^```$)", ByGroups(String, Text, String), nil},
+			{
+				"^(```)(\\w+)(\\n)([\\w\\W]*?)(^```$)",
+				UsingByGroup(2, 4, String, String, String, Text, String),
+				nil,
+			},
+			Include("inline"),
+		},
+		"inline": {
+			{`\\.`, Text, nil},
+			{`(\s)(\*|_)((?:(?!\2).)*)(\2)((?=\W|\n))`, ByGroups(Text, GenericEmph, GenericEmph, GenericEmph, Text), nil},
+			{`(\s)((\*\*|__).*?)\3((?=\W|\n))`, ByGroups(Text, GenericStrong, GenericStrong, Text), nil},
+			{`(\s)(~~[^~]+~~)((?=\W|\n))`, ByGroups(Text, GenericDeleted, Text), nil},
+			{"`[^`]+`", LiteralStringBacktick, nil},
+			{`[@#][\w/:]+`, NameEntity, nil},
+			{`(!?\[)([^]]+)(\])(\()([^)]+)(\))`, ByGroups(Text, NameTag, Text, Text, NameAttribute, Text), nil},
+			{`.|\n`, Other, nil},
+		},
+	}
+}

vendor/github.com/alecthomas/chroma/v2/lexers/mysql.go 🔗

@@ -0,0 +1,33 @@
+package lexers
+
+import (
+	"regexp"
+)
+
+var (
+	mysqlAnalyserNameBetweenBacktickRe = regexp.MustCompile("`[a-zA-Z_]\\w*`")
+	mysqlAnalyserNameBetweenBracketRe  = regexp.MustCompile(`\[[a-zA-Z_]\w*\]`)
+)
+
+func init() { // nolint: gochecknoinits
+	Get("mysql").
+		SetAnalyser(func(text string) float32 {
+			nameBetweenBacktickCount := len(mysqlAnalyserNameBetweenBacktickRe.FindAllString(text, -1))
+			nameBetweenBracketCount := len(mysqlAnalyserNameBetweenBracketRe.FindAllString(text, -1))
+
+			var result float32
+
+			// Same logic as above in the TSQL analysis.
+			dialectNameCount := nameBetweenBacktickCount + nameBetweenBracketCount
+			if dialectNameCount >= 1 && nameBetweenBacktickCount >= (2*nameBetweenBracketCount) {
+				// Found at least twice as many `name` as [name].
+				result += 0.5
+			} else if nameBetweenBacktickCount > nameBetweenBracketCount {
+				result += 0.2
+			} else if nameBetweenBacktickCount > 0 {
+				result += 0.1
+			}
+
+			return result
+		})
+}

vendor/github.com/alecthomas/chroma/v2/lexers/php.go 🔗

@@ -0,0 +1,37 @@
+package lexers
+
+import (
+	"strings"
+
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+// phtml lexer is PHP in HTML.
+var _ = Register(DelegatingLexer(HTML, MustNewLexer(
+	&Config{
+		Name:            "PHTML",
+		Aliases:         []string{"phtml"},
+		Filenames:       []string{"*.phtml", "*.php", "*.php[345]", "*.inc"},
+		MimeTypes:       []string{"application/x-php", "application/x-httpd-php", "application/x-httpd-php3", "application/x-httpd-php4", "application/x-httpd-php5", "text/x-php"},
+		DotAll:          true,
+		CaseInsensitive: true,
+		EnsureNL:        true,
+		Priority:        2,
+	},
+	func() Rules {
+		return Get("PHP").(*RegexLexer).MustRules().
+			Rename("root", "php").
+			Merge(Rules{
+				"root": {
+					{`<\?(php)?`, CommentPreproc, Push("php")},
+					{`[^<]+`, Other, nil},
+					{`<`, Other, nil},
+				},
+			})
+	},
+).SetAnalyser(func(text string) float32 {
+	if strings.Contains(text, "<?php") {
+		return 0.5
+	}
+	return 0.0
+})))

vendor/github.com/alecthomas/chroma/v2/lexers/raku.go 🔗

@@ -0,0 +1,1721 @@
+package lexers
+
+import (
+	"regexp"
+	"strings"
+	"unicode/utf8"
+
+	"github.com/dlclark/regexp2"
+
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+// Raku lexer.
+var Raku Lexer = Register(MustNewLexer(
+	&Config{
+		Name:    "Raku",
+		Aliases: []string{"perl6", "pl6", "raku"},
+		Filenames: []string{
+			"*.pl", "*.pm", "*.nqp", "*.p6", "*.6pl", "*.p6l", "*.pl6", "*.6pm",
+			"*.p6m", "*.pm6", "*.t", "*.raku", "*.rakumod", "*.rakutest", "*.rakudoc",
+		},
+		MimeTypes: []string{
+			"text/x-perl6", "application/x-perl6",
+			"text/x-raku", "application/x-raku",
+		},
+		DotAll: true,
+	},
+	rakuRules,
+))
+
+func rakuRules() Rules {
+	type RakuToken int
+
+	const (
+		rakuQuote RakuToken = iota
+		rakuNameAttribute
+		rakuPod
+		rakuPodFormatter
+		rakuPodDeclaration
+		rakuMultilineComment
+		rakuMatchRegex
+		rakuSubstitutionRegex
+	)
+
+	const (
+		colonPairOpeningBrackets = `(?:<<|<|«|\(|\[|\{)`
+		colonPairClosingBrackets = `(?:>>|>|»|\)|\]|\})`
+		colonPairPattern         = `(?<!:)(?<colon>:)(?<key>\w[\w'-]*)(?<opening_delimiters>` + colonPairOpeningBrackets + `)`
+		colonPairLookahead       = `(?=(:['\w-]+` +
+			colonPairOpeningBrackets + `.+?` + colonPairClosingBrackets + `)?`
+		namePattern           = `(?:(?!` + colonPairPattern + `)(?:::|[\w':-]))+`
+		variablePattern       = `[$@%&]+[.^:?=!~]?` + namePattern
+		globalVariablePattern = `[$@%&]+\*` + namePattern
+	)
+
+	keywords := []string{
+		`BEGIN`, `CATCH`, `CHECK`, `CLOSE`, `CONTROL`, `DOC`, `END`, `ENTER`, `FIRST`, `INIT`,
+		`KEEP`, `LAST`, `LEAVE`, `NEXT`, `POST`, `PRE`, `QUIT`, `UNDO`, `anon`, `augment`, `but`,
+		`class`, `constant`, `default`, `does`, `else`, `elsif`, `enum`, `for`, `gather`, `given`,
+		`grammar`, `has`, `if`, `import`, `is`, `of`, `let`, `loop`, `made`, `make`, `method`,
+		`module`, `multi`, `my`, `need`, `orwith`, `our`, `proceed`, `proto`, `repeat`, `require`,
+		`where`, `return`, `return-rw`, `returns`, `->`, `-->`, `role`, `state`, `sub`, `no`,
+		`submethod`, `subset`, `succeed`, `supersede`, `try`, `unit`, `unless`, `until`,
+		`use`, `when`, `while`, `with`, `without`, `export`, `native`, `repr`, `required`, `rw`,
+		`symbol`, `default`, `cached`, `DEPRECATED`, `dynamic`, `hidden-from-backtrace`, `nodal`,
+		`pure`, `raw`, `start`, `react`, `supply`, `whenever`, `also`, `rule`, `token`, `regex`,
+		`dynamic-scope`, `built`, `temp`,
+	}
+
+	keywordsPattern := Words(`(?<!['\w:-])`, `(?!['\w:-])`, keywords...)
+
+	wordOperators := []string{
+		`X`, `Z`, `R`, `after`, `and`, `andthen`, `before`, `cmp`, `div`, `eq`, `eqv`, `extra`, `ge`,
+		`gt`, `le`, `leg`, `lt`, `mod`, `ne`, `or`, `orelse`, `x`, `xor`, `xx`, `gcd`, `lcm`,
+		`but`, `min`, `max`, `^fff`, `fff^`, `fff`, `^ff`, `ff^`, `ff`, `so`, `not`, `unicmp`,
+		`TR`, `o`, `(&)`, `(.)`, `(|)`, `(+)`, `(-)`, `(^)`, `coll`, `(elem)`, `(==)`,
+		`(cont)`, `(<)`, `(<=)`, `(>)`, `(>=)`, `minmax`, `notandthen`, `S`,
+	}
+
+	wordOperatorsPattern := Words(`(?<=^|\b|\s)`, `(?=$|\b|\s)`, wordOperators...)
+
+	operators := []string{
+		`++`, `--`, `-`, `**`, `!`, `+`, `~`, `?`, `+^`, `~^`, `?^`, `^`, `*`, `/`, `%`, `%%`, `+&`,
+		`+<`, `+>`, `~&`, `~<`, `~>`, `?&`, `+|`, `+^`, `~|`, `~^`, `?`, `?|`, `?^`, `&`, `^`,
+		`<=>`, `^…^`, `^…`, `…^`, `…`, `...`, `...^`, `^...`, `^...^`, `..`, `..^`, `^..`, `^..^`,
+		`::=`, `:=`, `!=`, `==`, `<=`, `<`, `>=`, `>`, `~~`, `===`, `&&`, `||`, `|`, `^^`, `//`,
+		`??`, `!!`, `^fff^`, `^ff^`, `<==`, `==>`, `<<==`, `==>>`, `=>`, `=`, `<<`, `«`, `>>`, `»`,
+		`,`, `>>.`, `».`, `.&`, `.=`, `.^`, `.?`, `.+`, `.*`, `.`, `∘`, `∩`, `⊍`, `∪`, `⊎`, `∖`,
+		`⊖`, `≠`, `≤`, `≥`, `=:=`, `=~=`, `≅`, `∈`, `∉`, `≡`, `≢`, `∋`, `∌`, `⊂`, `⊄`, `⊆`, `⊈`,
+		`⊃`, `⊅`, `⊇`, `⊉`, `:`, `!!!`, `???`, `¯`, `×`, `÷`, `−`, `⁺`, `⁻`,
+	}
+
+	operatorsPattern := Words(``, ``, operators...)
+
+	builtinTypes := []string{
+		`False`, `True`, `Order`, `More`, `Less`, `Same`, `Any`, `Array`, `Associative`, `AST`,
+		`atomicint`, `Attribute`, `Backtrace`, `Backtrace::Frame`, `Bag`, `Baggy`, `BagHash`,
+		`Blob`, `Block`, `Bool`, `Buf`, `Callable`, `CallFrame`, `Cancellation`, `Capture`,
+		`CArray`, `Channel`, `Code`, `compiler`, `Complex`, `ComplexStr`, `CompUnit`,
+		`CompUnit::PrecompilationRepository`, `CompUnit::Repository`, `Empty`,
+		`CompUnit::Repository::FileSystem`, `CompUnit::Repository::Installation`, `Cool`,
+		`CurrentThreadScheduler`, `CX::Warn`, `CX::Take`, `CX::Succeed`, `CX::Return`, `CX::Redo`,
+		`CX::Proceed`, `CX::Next`, `CX::Last`, `CX::Emit`, `CX::Done`, `Cursor`, `Date`, `Dateish`,
+		`DateTime`, `Distribution`, `Distribution::Hash`, `Distribution::Locally`,
+		`Distribution::Path`, `Distribution::Resource`, `Distro`, `Duration`, `Encoding`,
+		`Encoding::GlobalLexerRegistry`, `Endian`, `Enumeration`, `Exception`, `Failure`, `FatRat`, `Grammar`,
+		`Hash`, `HyperWhatever`, `Instant`, `Int`, `int`, `int16`, `int32`, `int64`, `int8`, `str`,
+		`IntStr`, `IO`, `IO::ArgFiles`, `IO::CatHandle`, `IO::Handle`, `IO::Notification`,
+		`IO::Notification::Change`, `IO::Path`, `IO::Path::Cygwin`, `IO::Path::Parts`,
+		`IO::Path::QNX`, `IO::Path::Unix`, `IO::Path::Win32`, `IO::Pipe`, `IO::Socket`,
+		`IO::Socket::Async`, `IO::Socket::Async::ListenSocket`, `IO::Socket::INET`, `IO::Spec`,
+		`IO::Spec::Cygwin`, `IO::Spec::QNX`, `IO::Spec::Unix`, `IO::Spec::Win32`, `IO::Special`,
+		`Iterable`, `Iterator`, `Junction`, `Kernel`, `Label`, `List`, `Lock`, `Lock::Async`,
+		`Lock::ConditionVariable`, `long`, `longlong`, `Macro`, `Map`, `Match`,
+		`Metamodel::AttributeContainer`, `Metamodel::C3MRO`, `Metamodel::ClassHOW`,
+		`Metamodel::ConcreteRoleHOW`, `Metamodel::CurriedRoleHOW`, `Metamodel::DefiniteHOW`,
+		`Metamodel::Documenting`, `Metamodel::EnumHOW`, `Metamodel::Finalization`,
+		`Metamodel::MethodContainer`, `Metamodel::Mixins`, `Metamodel::MROBasedMethodDispatch`,
+		`Metamodel::MultipleInheritance`, `Metamodel::Naming`, `Metamodel::Primitives`,
+		`Metamodel::PrivateMethodContainer`, `Metamodel::RoleContainer`, `Metamodel::RolePunning`,
+		`Metamodel::Stashing`, `Metamodel::Trusting`, `Metamodel::Versioning`, `Method`, `Mix`,
+		`MixHash`, `Mixy`, `Mu`, `NFC`, `NFD`, `NFKC`, `NFKD`, `Nil`, `Num`, `num32`, `num64`,
+		`Numeric`, `NumStr`, `ObjAt`, `Order`, `Pair`, `Parameter`, `Perl`, `Pod::Block`,
+		`Pod::Block::Code`, `Pod::Block::Comment`, `Pod::Block::Declarator`, `Pod::Block::Named`,
+		`Pod::Block::Para`, `Pod::Block::Table`, `Pod::Heading`, `Pod::Item`, `Pointer`,
+		`Positional`, `PositionalBindFailover`, `Proc`, `Proc::Async`, `Promise`, `Proxy`,
+		`PseudoStash`, `QuantHash`, `RaceSeq`, `Raku`, `Range`, `Rat`, `Rational`, `RatStr`,
+		`Real`, `Regex`, `Routine`, `Routine::WrapHandle`, `Scalar`, `Scheduler`, `Semaphore`,
+		`Seq`, `Sequence`, `Set`, `SetHash`, `Setty`, `Signature`, `size_t`, `Slip`, `Stash`,
+		`Str`, `StrDistance`, `Stringy`, `Sub`, `Submethod`, `Supplier`, `Supplier::Preserving`,
+		`Supply`, `Systemic`, `Tap`, `Telemetry`, `Telemetry::Instrument::Thread`,
+		`Telemetry::Instrument::ThreadPool`, `Telemetry::Instrument::Usage`, `Telemetry::Period`,
+		`Telemetry::Sampler`, `Thread`, `Test`, `ThreadPoolScheduler`, `UInt`, `uint16`, `uint32`,
+		`uint64`, `uint8`, `Uni`, `utf8`, `ValueObjAt`, `Variable`, `Version`, `VM`, `Whatever`,
+		`WhateverCode`, `WrapHandle`, `NativeCall`,
+		// Pragmas
+		`precompilation`, `experimental`, `worries`, `MONKEY-TYPING`, `MONKEY-SEE-NO-EVAL`,
+		`MONKEY-GUTS`, `fatal`, `lib`, `isms`, `newline`, `nqp`, `soft`,
+		`strict`, `trace`, `variables`,
+	}
+
+	builtinTypesPattern := Words(`(?<!['\w:-])`, `(?::[_UD])?(?!['\w:-])`, builtinTypes...)
+
+	builtinRoutines := []string{
+		`ACCEPTS`, `abs`, `abs2rel`, `absolute`, `accept`, `accepts_type`, `accessed`, `acos`,
+		`acosec`, `acosech`, `acosh`, `acotan`, `acotanh`, `acquire`, `act`, `action`, `actions`,
+		`add`, `add_attribute`, `add_enum_value`, `add_fallback`, `add_method`, `add_parent`,
+		`add_private_method`, `add_role`, `add_stash`, `add_trustee`, `addendum`, `adverb`, `after`,
+		`all`, `allocate`, `allof`, `allowed`, `alternative-names`, `annotations`, `antipair`,
+		`antipairs`, `any`, `anyof`, `api`, `app_lifetime`, `append`, `arch`, `archetypes`,
+		`archname`, `args`, `ARGS-TO-CAPTURE`, `arity`, `Array`, `asec`, `asech`, `asin`, `asinh`,
+		`ASSIGN-KEY`, `ASSIGN-POS`, `assuming`, `ast`, `at`, `atan`, `atan2`, `atanh`, `AT-KEY`,
+		`atomic-assign`, `atomic-dec-fetch`, `atomic-fetch`, `atomic-fetch-add`, `atomic-fetch-dec`,
+		`atomic-fetch-inc`, `atomic-fetch-sub`, `atomic-inc-fetch`, `AT-POS`, `attributes`, `auth`,
+		`await`, `backend`, `backtrace`, `Bag`, `bag`, `Baggy`, `BagHash`, `bail-out`, `base`,
+		`basename`, `base-repeating`, `base_type`, `batch`, `BIND-KEY`, `BIND-POS`, `bind-stderr`,
+		`bind-stdin`, `bind-stdout`, `bind-udp`, `bits`, `bless`, `block`, `Bool`, `bool-only`,
+		`bounds`, `break`, `Bridge`, `broken`, `BUILD`, `TWEAK`, `build-date`, `bytes`, `cache`,
+		`callframe`, `calling-package`, `CALL-ME`, `callsame`, `callwith`, `can`, `cancel`,
+		`candidates`, `cando`, `can-ok`, `canonpath`, `caps`, `caption`, `Capture`, `capture`,
+		`cas`, `catdir`, `categorize`, `categorize-list`, `catfile`, `catpath`, `cause`, `ceiling`,
+		`cglobal`, `changed`, `Channel`, `channel`, `chars`, `chdir`, `child`, `child-name`,
+		`child-typename`, `chmod`, `chomp`, `chop`, `chr`, `chrs`, `chunks`, `cis`, `classify`,
+		`classify-list`, `cleanup`, `clone`, `close`, `closed`, `close-stdin`, `cmp-ok`, `code`,
+		`codename`, `codes`, `coerce_type`, `coll`, `collate`, `column`, `comb`, `combinations`,
+		`command`, `comment`, `compiler`, `Complex`, `compose`, `composalizer`, `compose_type`,
+		`compose_values`, `composer`, `compute_mro`, `condition`, `config`, `configure_destroy`,
+		`configure_type_checking`, `conj`, `connect`, `constraints`, `construct`, `contains`,
+		`content`, `contents`, `copy`, `cos`, `cosec`, `cosech`, `cosh`, `cotan`, `cotanh`, `count`,
+		`count-only`, `cpu-cores`, `cpu-usage`, `CREATE`, `create_type`, `cross`, `cue`, `curdir`,
+		`curupdir`, `d`, `Date`, `DateTime`, `day`, `daycount`, `day-of-month`, `day-of-week`,
+		`day-of-year`, `days-in-month`, `dd-mm-yyyy`, `declaration`, `decode`, `decoder`, `deepmap`,
+		`default`, `defined`, `DEFINITE`, `definite`, `delayed`, `delete`, `delete-by-compiler`,
+		`DELETE-KEY`, `DELETE-POS`, `denominator`, `desc`, `DESTROY`, `destroyers`, `devnull`,
+		`diag`, `did-you-mean`, `die`, `dies-ok`, `dir`, `dirname`, `distribution`, `dir-sep`,
+		`DISTROnames`, `do`, `does`, `does-ok`, `done`, `done-testing`, `duckmap`, `dynamic`, `e`,
+		`eager`, `earlier`, `elems`, `emit`, `enclosing`, `encode`, `encoder`, `encoding`, `end`,
+		`endian`, `ends-with`, `enum_from_value`, `enum_value_list`, `enum_values`, `enums`, `EOF`,
+		`eof`, `EVAL`, `eval-dies-ok`, `EVALFILE`, `eval-lives-ok`, `event`, `exception`,
+		`excludes-max`, `excludes-min`, `EXISTS-KEY`, `EXISTS-POS`, `exit`, `exitcode`, `exp`,
+		`expected`, `explicitly-manage`, `expmod`, `export_callback`, `extension`, `f`, `fail`,
+		`FALLBACK`, `fails-like`, `fc`, `feature`, `file`, `filename`, `files`, `find`,
+		`find_method`, `find_method_qualified`, `finish`, `first`, `flat`, `first-date-in-month`,
+		`flatmap`, `flip`, `floor`, `flunk`, `flush`, `flush_cache`, `fmt`, `format`, `formatter`,
+		`free-memory`, `freeze`, `from`, `from-list`, `from-loop`, `from-posix`, `from-slurpy`,
+		`full`, `full-barrier`, `GENERATE-USAGE`, `generate_mixin`, `get`, `get_value`, `getc`,
+		`gist`, `got`, `grab`, `grabpairs`, `grep`, `handle`, `handled`, `handles`, `hardware`,
+		`has_accessor`, `Hash`, `hash`, `head`, `headers`, `hh-mm-ss`, `hidden`, `hides`, `hostname`,
+		`hour`, `how`, `hyper`, `id`, `illegal`, `im`, `in`, `in-timezone`, `indent`, `index`,
+		`indices`, `indir`, `infinite`, `infix`, `postcircumfix`, `circumfix`, `install`,
+		`install_method_cache`, `Instant`, `instead`, `Int`, `int-bounds`, `interval`, `in-timezone`,
+		`invalid-str`, `invert`, `invocant`, `IO`, `IO::Notification.watch-path`, `is_trusted`,
+		`is_type`, `isa`, `is-absolute`, `isa-ok`, `is-approx`, `is-deeply`, `is-hidden`,
+		`is-initial-thread`, `is-int`, `is-lazy`, `is-leap-year`, `isNaN`, `isnt`, `is-prime`,
+		`is-relative`, `is-routine`, `is-setting`, `is-win`, `item`, `iterator`, `join`, `keep`,
+		`kept`, `KERNELnames`, `key`, `keyof`, `keys`, `kill`, `kv`, `kxxv`, `l`, `lang`, `last`,
+		`lastcall`, `later`, `lazy`, `lc`, `leading`, `level`, `like`, `line`, `lines`, `link`,
+		`List`, `list`, `listen`, `live`, `lives-ok`, `load`, `load-repo-id`, `load-unit`, `loaded`,
+		`loads`, `local`, `lock`, `log`, `log10`, `lookup`, `lsb`, `made`, `MAIN`, `make`, `Map`,
+		`map`, `match`, `max`, `maxpairs`, `merge`, `message`, `method`, `meta`, `method_table`,
+		`methods`, `migrate`, `min`, `minmax`, `minpairs`, `minute`, `misplaced`, `Mix`, `mix`,
+		`MixHash`, `mixin`, `mixin_attribute`, `Mixy`, `mkdir`, `mode`, `modified`, `month`, `move`,
+		`mro`, `msb`, `multi`, `multiness`, `name`, `named`, `named_names`, `narrow`,
+		`nativecast`, `native-descriptor`, `nativesizeof`, `need`, `new`, `new_type`,
+		`new-from-daycount`, `new-from-pairs`, `next`, `nextcallee`, `next-handle`, `nextsame`,
+		`nextwith`, `next-interesting-index`, `NFC`, `NFD`, `NFKC`, `NFKD`, `nice`, `nl-in`,
+		`nl-out`, `nodemap`, `nok`, `normalize`, `none`, `norm`, `not`, `note`, `now`, `nude`,
+		`Num`, `numerator`, `Numeric`, `of`, `offset`, `offset-in-hours`, `offset-in-minutes`,
+		`ok`, `old`, `on-close`, `one`, `on-switch`, `open`, `opened`, `operation`, `optional`,
+		`ord`, `ords`, `orig`, `os-error`, `osname`, `out-buffer`, `pack`, `package`, `package-kind`,
+		`package-name`, `packages`, `Pair`, `pair`, `pairs`, `pairup`, `parameter`, `params`,
+		`parent`, `parent-name`, `parents`, `parse`, `parse-base`, `parsefile`, `parse-names`,
+		`parts`, `pass`, `path`, `path-sep`, `payload`, `peer-host`, `peer-port`, `periods`, `perl`,
+		`permutations`, `phaser`, `pick`, `pickpairs`, `pid`, `placeholder`, `plan`, `plus`,
+		`polar`, `poll`, `polymod`, `pop`, `pos`, `positional`, `posix`, `postfix`, `postmatch`,
+		`precomp-ext`, `precomp-target`, `precompiled`, `pred`, `prefix`, `prematch`, `prepend`,
+		`primary`, `print`, `printf`, `print-nl`, `print-to`, `private`, `private_method_names`,
+		`private_method_table`, `proc`, `produce`, `Promise`, `promise`, `prompt`, `protect`,
+		`protect-or-queue-on-recursion`, `publish_method_cache`, `pull-one`, `push`, `push-all`,
+		`push-at-least`, `push-exactly`, `push-until-lazy`, `put`, `qualifier-type`, `quaternary`,
+		`quit`, `r`, `race`, `radix`, `raku`, `rand`, `Range`, `range`, `Rat`, `raw`, `re`, `read`,
+		`read-bits`, `read-int128`, `read-int16`, `read-int32`, `read-int64`, `read-int8`,
+		`read-num32`, `read-num64`, `read-ubits`, `read-uint128`, `read-uint16`, `read-uint32`,
+		`read-uint64`, `read-uint8`, `readchars`, `readonly`, `ready`, `Real`, `reallocate`,
+		`reals`, `reason`, `rebless`, `receive`, `recv`, `redispatcher`, `redo`, `reduce`,
+		`rel2abs`, `relative`, `release`, `remove`, `rename`, `repeated`, `replacement`,
+		`replace-with`, `repo`, `repo-id`, `report`, `required`, `reserved`, `resolve`, `restore`,
+		`result`, `resume`, `rethrow`, `return`, `return-rw`, `returns`, `reverse`, `right`,
+		`rindex`, `rmdir`, `role`, `roles_to_compose`, `rolish`, `roll`, `rootdir`, `roots`,
+		`rotate`, `rotor`, `round`, `roundrobin`, `routine-type`, `run`, `RUN-MAIN`, `rw`, `rwx`,
+		`samecase`, `samemark`, `samewith`, `say`, `schedule-on`, `scheduler`, `scope`, `sec`,
+		`sech`, `second`, `secondary`, `seek`, `self`, `send`, `Seq`, `Set`, `set`, `serial`,
+		`set_hidden`, `set_name`, `set_package`, `set_rw`, `set_value`, `set_api`, `set_auth`,
+		`set_composalizer`, `set_export_callback`, `set_is_mixin`, `set_mixin_attribute`,
+		`set_package`, `set_ver`, `set_why`, `SetHash`, `Setty`, `set-instruments`,
+		`setup_finalization`, `setup_mixin_cache`, `shape`, `share`, `shell`, `short-id`,
+		`short-name`, `shortname`, `shift`, `sibling`, `sigil`, `sign`, `signal`, `signals`,
+		`signature`, `sin`, `sinh`, `sink`, `sink-all`, `skip`, `skip-at-least`,
+		`skip-at-least-pull-one`, `skip-one`, `skip-rest`, `sleep`, `sleep-timer`, `sleep-until`,
+		`Slip`, `slip`, `slurp`, `slurp-rest`, `slurpy`, `snap`, `snapper`, `so`, `socket-host`,
+		`socket-port`, `sort`, `source`, `source-package`, `spawn`, `SPEC`, `splice`, `split`,
+		`splitdir`, `splitpath`, `sprintf`, `spurt`, `sqrt`, `squish`, `srand`, `stable`, `start`,
+		`started`, `starts-with`, `status`, `stderr`, `stdout`, `STORE`, `store-file`,
+		`store-repo-id`, `store-unit`, `Str`, `Stringy`, `sub_signature`, `subbuf`, `subbuf-rw`,
+		`subname`, `subparse`, `subst`, `subst-mutate`, `substr`, `substr-eq`, `substr-rw`,
+		`subtest`, `succ`, `sum`, `suffix`, `summary`, `Supply`, `symlink`, `T`, `t`, `tail`,
+		`take`, `take-rw`, `tan`, `tanh`, `tap`, `target`, `target-name`, `tc`, `tclc`, `tell`,
+		`term`, `tertiary`, `then`, `throttle`, `throw`, `throws-like`, `time`, `timezone`,
+		`tmpdir`, `to`, `today`, `todo`, `toggle`, `to-posix`, `total`, `total-memory`, `trailing`,
+		`trans`, `tree`, `trim`, `trim-leading`, `trim-trailing`, `truncate`, `truncated-to`,
+		`trusts`, `try_acquire`, `trying`, `twigil`, `type`, `type_captures`, `type_check`,
+		`typename`, `uc`, `udp`, `uncaught_handler`, `undefine`, `unimatch`, `unicmp`, `uniname`,
+		`uninames`, `uninstall`, `uniparse`, `uniprop`, `uniprops`, `unique`, `unival`, `univals`,
+		`unlike`, `unlink`, `unlock`, `unpack`, `unpolar`, `unset`, `unshift`, `unwrap`, `updir`,
+		`USAGE`, `usage-name`, `use-ok`, `utc`, `val`, `value`, `values`, `VAR`, `variable`, `ver`,
+		`verbose-config`, `Version`, `version`, `VMnames`, `volume`, `vow`, `w`, `wait`, `warn`,
+		`watch`, `watch-path`, `week`, `weekday-of-month`, `week-number`, `week-year`, `WHAT`,
+		`what`, `when`, `WHERE`, `WHEREFORE`, `WHICH`, `WHO`, `whole-second`, `WHY`, `why`,
+		`with-lock-hidden-from-recursion-check`, `wordcase`, `words`, `workaround`, `wrap`,
+		`write`, `write-bits`, `write-int128`, `write-int16`, `write-int32`, `write-int64`,
+		`write-int8`, `write-num32`, `write-num64`, `write-ubits`, `write-uint128`, `write-uint16`,
+		`write-uint32`, `write-uint64`, `write-uint8`, `write-to`, `x`, `yada`, `year`, `yield`,
+		`yyyy-mm-dd`, `z`, `zip`, `zip-latest`, `HOW`, `s`, `DEPRECATED`, `trait_mod`,
+	}
+
+	builtinRoutinesPattern := Words(`(?<!['\w:-])`, `(?!['\w-])`, builtinRoutines...)
+
+	// A map of opening and closing brackets
+	brackets := map[rune]rune{
+		'\u0028': '\u0029', '\u003c': '\u003e', '\u005b': '\u005d',
+		'\u007b': '\u007d', '\u00ab': '\u00bb', '\u0f3a': '\u0f3b',
+		'\u0f3c': '\u0f3d', '\u169b': '\u169c', '\u2018': '\u2019',
+		'\u201a': '\u2019', '\u201b': '\u2019', '\u201c': '\u201d',
+		'\u201e': '\u201d', '\u201f': '\u201d', '\u2039': '\u203a',
+		'\u2045': '\u2046', '\u207d': '\u207e', '\u208d': '\u208e',
+		'\u2208': '\u220b', '\u2209': '\u220c', '\u220a': '\u220d',
+		'\u2215': '\u29f5', '\u223c': '\u223d', '\u2243': '\u22cd',
+		'\u2252': '\u2253', '\u2254': '\u2255', '\u2264': '\u2265',
+		'\u2266': '\u2267', '\u2268': '\u2269', '\u226a': '\u226b',
+		'\u226e': '\u226f', '\u2270': '\u2271', '\u2272': '\u2273',
+		'\u2274': '\u2275', '\u2276': '\u2277', '\u2278': '\u2279',
+		'\u227a': '\u227b', '\u227c': '\u227d', '\u227e': '\u227f',
+		'\u2280': '\u2281', '\u2282': '\u2283', '\u2284': '\u2285',
+		'\u2286': '\u2287', '\u2288': '\u2289', '\u228a': '\u228b',
+		'\u228f': '\u2290', '\u2291': '\u2292', '\u2298': '\u29b8',
+		'\u22a2': '\u22a3', '\u22a6': '\u2ade', '\u22a8': '\u2ae4',
+		'\u22a9': '\u2ae3', '\u22ab': '\u2ae5', '\u22b0': '\u22b1',
+		'\u22b2': '\u22b3', '\u22b4': '\u22b5', '\u22b6': '\u22b7',
+		'\u22c9': '\u22ca', '\u22cb': '\u22cc', '\u22d0': '\u22d1',
+		'\u22d6': '\u22d7', '\u22d8': '\u22d9', '\u22da': '\u22db',
+		'\u22dc': '\u22dd', '\u22de': '\u22df', '\u22e0': '\u22e1',
+		'\u22e2': '\u22e3', '\u22e4': '\u22e5', '\u22e6': '\u22e7',
+		'\u22e8': '\u22e9', '\u22ea': '\u22eb', '\u22ec': '\u22ed',
+		'\u22f0': '\u22f1', '\u22f2': '\u22fa', '\u22f3': '\u22fb',
+		'\u22f4': '\u22fc', '\u22f6': '\u22fd', '\u22f7': '\u22fe',
+		'\u2308': '\u2309', '\u230a': '\u230b', '\u2329': '\u232a',
+		'\u23b4': '\u23b5', '\u2768': '\u2769', '\u276a': '\u276b',
+		'\u276c': '\u276d', '\u276e': '\u276f', '\u2770': '\u2771',
+		'\u2772': '\u2773', '\u2774': '\u2775', '\u27c3': '\u27c4',
+		'\u27c5': '\u27c6', '\u27d5': '\u27d6', '\u27dd': '\u27de',
+		'\u27e2': '\u27e3', '\u27e4': '\u27e5', '\u27e6': '\u27e7',
+		'\u27e8': '\u27e9', '\u27ea': '\u27eb', '\u2983': '\u2984',
+		'\u2985': '\u2986', '\u2987': '\u2988', '\u2989': '\u298a',
+		'\u298b': '\u298c', '\u298d': '\u298e', '\u298f': '\u2990',
+		'\u2991': '\u2992', '\u2993': '\u2994', '\u2995': '\u2996',
+		'\u2997': '\u2998', '\u29c0': '\u29c1', '\u29c4': '\u29c5',
+		'\u29cf': '\u29d0', '\u29d1': '\u29d2', '\u29d4': '\u29d5',
+		'\u29d8': '\u29d9', '\u29da': '\u29db', '\u29f8': '\u29f9',
+		'\u29fc': '\u29fd', '\u2a2b': '\u2a2c', '\u2a2d': '\u2a2e',
+		'\u2a34': '\u2a35', '\u2a3c': '\u2a3d', '\u2a64': '\u2a65',
+		'\u2a79': '\u2a7a', '\u2a7d': '\u2a7e', '\u2a7f': '\u2a80',
+		'\u2a81': '\u2a82', '\u2a83': '\u2a84', '\u2a8b': '\u2a8c',
+		'\u2a91': '\u2a92', '\u2a93': '\u2a94', '\u2a95': '\u2a96',
+		'\u2a97': '\u2a98', '\u2a99': '\u2a9a', '\u2a9b': '\u2a9c',
+		'\u2aa1': '\u2aa2', '\u2aa6': '\u2aa7', '\u2aa8': '\u2aa9',
+		'\u2aaa': '\u2aab', '\u2aac': '\u2aad', '\u2aaf': '\u2ab0',
+		'\u2ab3': '\u2ab4', '\u2abb': '\u2abc', '\u2abd': '\u2abe',
+		'\u2abf': '\u2ac0', '\u2ac1': '\u2ac2', '\u2ac3': '\u2ac4',
+		'\u2ac5': '\u2ac6', '\u2acd': '\u2ace', '\u2acf': '\u2ad0',
+		'\u2ad1': '\u2ad2', '\u2ad3': '\u2ad4', '\u2ad5': '\u2ad6',
+		'\u2aec': '\u2aed', '\u2af7': '\u2af8', '\u2af9': '\u2afa',
+		'\u2e02': '\u2e03', '\u2e04': '\u2e05', '\u2e09': '\u2e0a',
+		'\u2e0c': '\u2e0d', '\u2e1c': '\u2e1d', '\u2e20': '\u2e21',
+		'\u3008': '\u3009', '\u300a': '\u300b', '\u300c': '\u300d',
+		'\u300e': '\u300f', '\u3010': '\u3011', '\u3014': '\u3015',
+		'\u3016': '\u3017', '\u3018': '\u3019', '\u301a': '\u301b',
+		'\u301d': '\u301e', '\ufd3e': '\ufd3f', '\ufe17': '\ufe18',
+		'\ufe35': '\ufe36', '\ufe37': '\ufe38', '\ufe39': '\ufe3a',
+		'\ufe3b': '\ufe3c', '\ufe3d': '\ufe3e', '\ufe3f': '\ufe40',
+		'\ufe41': '\ufe42', '\ufe43': '\ufe44', '\ufe47': '\ufe48',
+		'\ufe59': '\ufe5a', '\ufe5b': '\ufe5c', '\ufe5d': '\ufe5e',
+		'\uff08': '\uff09', '\uff1c': '\uff1e', '\uff3b': '\uff3d',
+		'\uff5b': '\uff5d', '\uff5f': '\uff60', '\uff62': '\uff63',
+	}
+
+	bracketsPattern := `[` + regexp.QuoteMeta(joinRuneMap(brackets)) + `]`
+
+	// Finds opening brackets and their closing counterparts (including pod and heredoc)
+	// and modifies state groups and position accordingly
+	findBrackets := func(tokenClass RakuToken) MutatorFunc {
+		return func(state *LexerState) error {
+			var openingChars []rune
+			var adverbs []rune
+			switch tokenClass {
+			case rakuPod:
+				openingChars = []rune(strings.Join(state.Groups[1:5], ``))
+			default:
+				adverbs = []rune(state.NamedGroups[`adverbs`])
+				openingChars = []rune(state.NamedGroups[`opening_delimiters`])
+			}
+
+			openingChar := openingChars[0]
+
+			nChars := len(openingChars)
+
+			var closingChar rune
+			var closingCharExists bool
+			var closingChars []rune
+
+			switch tokenClass {
+			case rakuPod:
+				closingCharExists = true
+			default:
+				closingChar, closingCharExists = brackets[openingChar]
+			}
+
+			switch tokenClass {
+			case rakuPodFormatter:
+				formatter := StringOther
+
+				switch state.NamedGroups[`keyword`] {
+				case "B":
+					formatter = GenericStrong
+				case "I":
+					formatter = GenericEmph
+				case "U":
+					formatter = GenericUnderline
+				}
+
+				formatterRule := ruleReplacingConfig{
+					pattern:      `.+?`,
+					tokenType:    formatter,
+					mutator:      nil,
+					stateName:    `pod-formatter`,
+					rulePosition: bottomRule,
+				}
+
+				err := replaceRule(formatterRule)(state)
+				if err != nil {
+					panic(err)
+				}
+
+				err = replaceRule(ruleReplacingConfig{
+					delimiter:              []rune{closingChar},
+					tokenType:              Punctuation,
+					stateName:              `pod-formatter`,
+					pushState:              true,
+					numberOfDelimiterChars: nChars,
+					appendMutator:          popRule(formatterRule),
+				})(state)
+				if err != nil {
+					panic(err)
+				}
+
+				return nil
+			case rakuMatchRegex:
+				var delimiter []rune
+				if closingCharExists {
+					delimiter = []rune{closingChar}
+				} else {
+					delimiter = openingChars
+				}
+
+				err := replaceRule(ruleReplacingConfig{
+					delimiter: delimiter,
+					tokenType: Punctuation,
+					stateName: `regex`,
+					popState:  true,
+					pushState: true,
+				})(state)
+				if err != nil {
+					panic(err)
+				}
+
+				return nil
+			case rakuSubstitutionRegex:
+				delimiter := regexp2.Escape(string(openingChars))
+
+				err := replaceRule(ruleReplacingConfig{
+					pattern:      `(` + delimiter + `)` + `((?:\\\\|\\/|.)*?)` + `(` + delimiter + `)`,
+					tokenType:    ByGroups(Punctuation, UsingSelf(`qq`), Punctuation),
+					rulePosition: topRule,
+					stateName:    `regex`,
+					popState:     true,
+					pushState:    true,
+				})(state)
+				if err != nil {
+					panic(err)
+				}
+
+				return nil
+			}
+
+			text := state.Text
+
+			var endPos int
+
+			var nonMirroredOpeningCharPosition int
+
+			if !closingCharExists {
+				// it's not a mirrored character, which means we
+				// just need to look for the next occurrence
+				closingChars = openingChars
+				nonMirroredOpeningCharPosition = indexAt(text, closingChars, state.Pos)
+				endPos = nonMirroredOpeningCharPosition
+			} else {
+				var podRegex *regexp2.Regexp
+				if tokenClass == rakuPod {
+					podRegex = regexp2.MustCompile(
+						state.NamedGroups[`ws`]+`=end`+`\s+`+regexp2.Escape(state.NamedGroups[`name`]),
+						0,
+					)
+				} else {
+					closingChars = []rune(strings.Repeat(string(closingChar), nChars))
+				}
+
+			// we need to look for the corresponding closing character,
+			// keeping nesting in mind
+				nestingLevel := 1
+
+				searchPos := state.Pos - nChars
+
+				var nextClosePos int
+
+				for nestingLevel > 0 {
+					if tokenClass == rakuPod {
+						match, err := podRegex.FindRunesMatchStartingAt(text, searchPos+nChars)
+						if err == nil {
+							closingChars = match.Runes()
+							nextClosePos = match.Index
+						} else {
+							nextClosePos = -1
+						}
+					} else {
+						nextClosePos = indexAt(text, closingChars, searchPos+nChars)
+					}
+
+					nextOpenPos := indexAt(text, openingChars, searchPos+nChars)
+
+					switch {
+					case nextClosePos == -1:
+						nextClosePos = len(text)
+						nestingLevel = 0
+					case nextOpenPos != -1 && nextOpenPos < nextClosePos:
+						nestingLevel++
+						nChars = len(openingChars)
+						searchPos = nextOpenPos
+					default: // nextClosePos < nextOpenPos
+						nestingLevel--
+						nChars = len(closingChars)
+						searchPos = nextClosePos
+					}
+				}
+
+				endPos = nextClosePos
+			}
+
+			if endPos < 0 {
+				// if we didn't find a closer, just highlight the
+				// rest of the text in this class
+				endPos = len(text)
+			}
+
+			adverbre := regexp.MustCompile(`:to\b|:heredoc\b`)
+			var heredocTerminator []rune
+			var endHeredocPos int
+			if adverbre.MatchString(string(adverbs)) {
+				if endPos != len(text) {
+					heredocTerminator = text[state.Pos:endPos]
+					nChars = len(heredocTerminator)
+				} else {
+					endPos = state.Pos + 1
+					heredocTerminator = []rune{}
+					nChars = 0
+				}
+
+				if nChars > 0 {
+					endHeredocPos = indexAt(text[endPos:], heredocTerminator, 0)
+					if endHeredocPos > -1 {
+						endPos += endHeredocPos
+					} else {
+						endPos = len(text)
+					}
+				}
+			}
+
+			textBetweenBrackets := string(text[state.Pos:endPos])
+			switch tokenClass {
+			case rakuPod, rakuPodDeclaration, rakuNameAttribute:
+				state.NamedGroups[`value`] = textBetweenBrackets
+				state.NamedGroups[`closing_delimiters`] = string(closingChars)
+			case rakuQuote:
+				if len(heredocTerminator) > 0 {
+					// Length of heredoc terminator + closing chars + `;`
+					heredocFirstPunctuationLen := nChars + len(openingChars) + 1
+
+					state.NamedGroups[`opening_delimiters`] = string(openingChars) +
+						string(text[state.Pos:state.Pos+heredocFirstPunctuationLen])
+
+					state.NamedGroups[`value`] =
+						string(text[state.Pos+heredocFirstPunctuationLen : endPos])
+
+					if endHeredocPos > -1 {
+						state.NamedGroups[`closing_delimiters`] = string(heredocTerminator)
+					}
+				} else {
+					state.NamedGroups[`value`] = textBetweenBrackets
+					if nChars > 0 {
+						state.NamedGroups[`closing_delimiters`] = string(closingChars)
+					}
+				}
+			default:
+				state.Groups = []string{state.Groups[0] + string(text[state.Pos:endPos+nChars])}
+			}
+
+			state.Pos = endPos + nChars
+
+			return nil
+		}
+	}
+
+	// Raku rules
+	// Empty capture groups are placeholders and will be replaced by mutators
+	// DO NOT REMOVE THEM!
+	return Rules{
+		"root": {
+			// Placeholder, will be overwritten by mutators, DO NOT REMOVE!
+			{`\A\z`, nil, nil},
+			Include("common"),
+			{`{`, Punctuation, Push(`root`)},
+			{`\(`, Punctuation, Push(`root`)},
+			{`[)}]`, Punctuation, Pop(1)},
+			{`;`, Punctuation, nil},
+			{`\[|\]`, Operator, nil},
+			{`.+?`, Text, nil},
+		},
+		"common": {
+			{`^#![^\n]*$`, CommentHashbang, nil},
+			Include("pod"),
+			// Multi-line embedded comment
+			{
+				"#`(?<opening_delimiters>(?<delimiter>" + bracketsPattern + `)\k<delimiter>*)`,
+				CommentMultiline,
+				findBrackets(rakuMultilineComment),
+			},
+			{`#[^\n]*$`, CommentSingle, nil},
+			// /regex/
+			{
+				`(?<=(?:^|\(|=|:|~~|\[|{|,|=>)\s*)(/)(?!\]|\))((?:\\\\|\\/|.)*?)((?<!(?<!\\)\\)/(?!'|"))`,
+				ByGroups(Punctuation, UsingSelf("regex"), Punctuation),
+				nil,
+			},
+			Include("variable"),
+			// ::?VARIABLE
+			{`::\?\w+(?::[_UD])?`, NameVariableGlobal, nil},
+			// Version
+			{
+				`\b(v)(\d+)((?:\.(?:\*|[\d\w]+))*)(\+)?`,
+				ByGroups(Keyword, NumberInteger, NameEntity, Operator),
+				nil,
+			},
+			Include("number"),
+			// Hyperoperator | »*«
+			{`(>>)(\S+?)(<<)`, ByGroups(Operator, UsingSelf("root"), Operator), nil},
+			{`(»)(\S+?)(«)`, ByGroups(Operator, UsingSelf("root"), Operator), nil},
+			// Hyperoperator | «*«
+			{`(<<)(\S+?)(<<)`, ByGroups(Operator, UsingSelf("root"), Operator), nil},
+			{`(«)(\S+?)(«)`, ByGroups(Operator, UsingSelf("root"), Operator), nil},
+			// Hyperoperator | »*»
+			{`(>>)(\S+?)(>>)`, ByGroups(Operator, UsingSelf("root"), Operator), nil},
+			{`(»)(\S+?)(»)`, ByGroups(Operator, UsingSelf("root"), Operator), nil},
+			// <<quoted words>>
+			{`(?<!(?:\d+|\.(?:Int|Numeric)|[$@%]\*?[\w':-]+\s+|[\])}]\s+)\s*)(<<)(?!(?:(?!>>)[^\n])+?[},;] *\n)(?!(?:(?!>>).)+?>>\S+?>>)`, Punctuation, Push("<<")},
+			// «quoted words»
+			{`(?<!(?:\d+|\.(?:Int|Numeric)|[$@%]\*?[\w':-]+\s+|[\])}]\s+)\s*)(«)(?![^»]+?[},;] *\n)(?![^»]+?»\S+?»)`, Punctuation, Push("«")},
+			// [<]
+			{`(?<=\[\\?)<(?=\])`, Operator, nil},
+			// < and > operators | something < onething > something
+			{
+				`(?<=[$@%&]?\w[\w':-]* +)(<=?)( *[^ ]+? *)(>=?)(?= *[$@%&]?\w[\w':-]*)`,
+				ByGroups(Operator, UsingSelf("root"), Operator),
+				nil,
+			},
+			// <quoted words>
+			{
+				`(?<!(?:\d+|\.(?:Int|Numeric)|[$@%]\*?[\w':-]+\s+|[\])}]\s+)\s*)(<)((?:(?![,;)}] *(?:#[^\n]+)?\n)[^<>])+?)(>)(?!\s*(?:\d+|\.(?:Int|Numeric)|[$@%]\*?\w[\w':-]*[^(]|\s+\[))`,
+				ByGroups(Punctuation, String, Punctuation),
+				nil,
+			},
+			{`C?X::['\w:-]+`, NameException, nil},
+			Include("metaoperator"),
+			// Pair | key => value
+			{
+				`(\w[\w'-]*)(\s*)(=>)`,
+				ByGroups(String, Text, Operator),
+				nil,
+			},
+			Include("colon-pair"),
+			// Token
+			{
+				`(?<=(?:^|\s)(?:regex|token|rule)(\s+))` + namePattern + colonPairLookahead + `\s*[({])`,
+				NameFunction,
+				Push("token", "name-adverb"),
+			},
+			// Substitution
+			{`(?<=^|\b|\s)(?<!\.)(ss|S|s|TR|tr)\b(\s*)`, ByGroups(Keyword, Text), Push("substitution")},
+			{keywordsPattern, Keyword, nil},
+			{builtinTypesPattern, KeywordType, nil},
+			{builtinRoutinesPattern, NameBuiltin, nil},
+			// Class name
+			{
+				`(?<=(?:^|\s)(?:class|grammar|role|does|but|is|subset|of)\s+)` + namePattern,
+				NameClass,
+				Push("name-adverb"),
+			},
+			// Routine
+			{
+				`(?<=(?:^|\s)(?:sub|method|multi sub|multi)\s+)!?` + namePattern + colonPairLookahead + `\s*[({])`,
+				NameFunction,
+				Push("name-adverb"),
+			},
+			// Constant
+			{`(?<=\bconstant\s+)` + namePattern, NameConstant, Push("name-adverb")},
+			// Namespace
+			{`(?<=\b(?:use|module|package)\s+)` + namePattern, NameNamespace, Push("name-adverb")},
+			Include("operator"),
+			Include("single-quote"),
+			{`(?<!(?<!\\)\\)"`, Punctuation, Push("double-quotes")},
+			// m,rx regex
+			{`(?<=^|\b|\s)(ms|m|rx)\b(\s*)`, ByGroups(Keyword, Text), Push("rx")},
+			// Quote constructs
+			{
+				`(?<=^|\b|\s)(?<keyword>(?:qq|q|Q))(?<adverbs>(?::?(?:heredoc|to|qq|ww|q|w|s|a|h|f|c|b|to|v|x))*)(?<ws>\s*)(?<opening_delimiters>(?<delimiter>[^0-9a-zA-Z:\s])\k<delimiter>*)`,
+				EmitterFunc(quote),
+				findBrackets(rakuQuote),
+			},
+			// Function
+			{
+				`\b` + namePattern + colonPairLookahead + `\()`,
+				NameFunction,
+				Push("name-adverb"),
+			},
+			// Method
+			{
+				`(?<!\.\.[?^*+]?)(?<=(?:\.[?^*+&]?)|self!)` + namePattern + colonPairLookahead + `\b)`,
+				NameFunction,
+				Push("name-adverb"),
+			},
+			// Indirect invocant
+			{namePattern + `(?=\s+\W?['\w:-]+:\W)`, NameFunction, Push("name-adverb")},
+			{`(?<=\W)(?:∅|i|e|𝑒|tau|τ|pi|π|Inf|∞)(?=\W)`, NameConstant, nil},
+			{`(「)([^」]*)(」)`, ByGroups(Punctuation, String, Punctuation), nil},
+			{`(?<=^ *)\b` + namePattern + `(?=:\s*(?:for|while|loop))`, NameLabel, nil},
+			// Sigilless variable
+			{
+				`(?<=\b(?:my|our|constant|let|temp)\s+)\\` + namePattern,
+				NameVariable,
+				Push("name-adverb"),
+			},
+			{namePattern, Name, Push("name-adverb")},
+		},
+		"rx": {
+			Include("colon-pair-attribute"),
+			{
+				`(?<opening_delimiters>(?<delimiter>[^\w:\s])\k<delimiter>*)`,
+				ByGroupNames(
+					map[string]Emitter{
+						`opening_delimiters`: Punctuation,
+						`delimiter`:          nil,
+					},
+				),
+				findBrackets(rakuMatchRegex),
+			},
+		},
+		"substitution": {
+			Include("colon-pair-attribute"),
+			// Substitution | s{regex} = value
+			{
+				`(?<opening_delimiters>(?<delimiter>` + bracketsPattern + `)\k<delimiter>*)`,
+				ByGroupNames(map[string]Emitter{
+					`opening_delimiters`: Punctuation,
+					`delimiter`:          nil,
+				}),
+				findBrackets(rakuMatchRegex),
+			},
+			// Substitution | s/regex/string/
+			{
+				`(?<opening_delimiters>[^\w:\s])`,
+				Punctuation,
+				findBrackets(rakuSubstitutionRegex),
+			},
+		},
+		"number": {
+			{`0_?[0-7]+(_[0-7]+)*`, LiteralNumberOct, nil},
+			{`0x[0-9A-Fa-f]+(_[0-9A-Fa-f]+)*`, LiteralNumberHex, nil},
+			{`0b[01]+(_[01]+)*`, LiteralNumberBin, nil},
+			{
+				`(?i)(\d*(_\d*)*\.\d+(_\d*)*|\d+(_\d*)*\.\d+(_\d*)*)(e[+-]?\d+)?`,
+				LiteralNumberFloat,
+				nil,
+			},
+			{`(?i)\d+(_\d*)*e[+-]?\d+(_\d*)*`, LiteralNumberFloat, nil},
+			{`(?<=\d+)i`, NameConstant, nil},
+			{`\d+(_\d+)*`, LiteralNumberInteger, nil},
+		},
+		"name-adverb": {
+			Include("colon-pair-attribute-keyvalue"),
+			Default(Pop(1)),
+		},
+		"colon-pair": {
+			// :key(value)
+			{colonPairPattern, colonPair(String), findBrackets(rakuNameAttribute)},
+			// :123abc
+			{
+				`(:)(\d+)(\w[\w'-]*)`,
+				ByGroups(Punctuation, UsingSelf("number"), String),
+				nil,
+			},
+			// :key
+			{`(:)(!?)(\w[\w'-]*)`, ByGroups(Punctuation, Operator, String), nil},
+			{`\s+`, Text, nil},
+		},
+		"colon-pair-attribute": {
+			// :key(value)
+			{colonPairPattern, colonPair(NameAttribute), findBrackets(rakuNameAttribute)},
+			// :123abc
+			{
+				`(:)(\d+)(\w[\w'-]*)`,
+				ByGroups(Punctuation, UsingSelf("number"), NameAttribute),
+				nil,
+			},
+			// :key
+			{`(:)(!?)(\w[\w'-]*)`, ByGroups(Punctuation, Operator, NameAttribute), nil},
+			{`\s+`, Text, nil},
+		},
+		"colon-pair-attribute-keyvalue": {
+			// :key(value)
+			{colonPairPattern, colonPair(NameAttribute), findBrackets(rakuNameAttribute)},
+		},
+		"escape-qq": {
+			{
+				`(?<!(?<!\\)\\)(\\qq)(\[)(.+?)(\])`,
+				ByGroups(StringEscape, Punctuation, UsingSelf("qq"), Punctuation),
+				nil,
+			},
+		},
+		`escape-char`: {
+			{`(?<!(?<!\\)\\)(\\[abfrnrt])`, StringEscape, nil},
+		},
+		`escape-single-quote`: {
+			{`(?<!(?<!\\)\\)(\\)(['\\])`, ByGroups(StringEscape, StringSingle), nil},
+		},
+		"escape-c-name": {
+			{
+				`(?<!(?<!\\)\\)(\\[c|C])(\[)(.+?)(\])`,
+				ByGroups(StringEscape, Punctuation, String, Punctuation),
+				nil,
+			},
+		},
+		"escape-hexadecimal": {
+			{
+				`(?<!(?<!\\)\\)(\\[x|X])(\[)([0-9a-fA-F]+)(\])`,
+				ByGroups(StringEscape, Punctuation, NumberHex, Punctuation),
+				nil,
+			},
+			{`(\\[x|X])([0-9a-fA-F]+)`, ByGroups(StringEscape, NumberHex), nil},
+		},
+		"regex": {
+			// Placeholder, will be overwritten by mutators, DO NOT REMOVE!
+			{`\A\z`, nil, nil},
+			Include("regex-escape-class"),
+			Include(`regex-character-escape`),
+			// $(code)
+			{
+				`([$@])((?<!(?<!\\)\\)\()`,
+				ByGroups(Keyword, Punctuation),
+				replaceRule(ruleReplacingConfig{
+					delimiter: []rune(`)`),
+					tokenType: Punctuation,
+					stateName: `root`,
+					pushState: true,
+				}),
+			},
+			// Exclude $/ from variables, because we can't get out of the end of the slash regex: $/;
+			{`\$(?=/)`, NameEntity, nil},
+			// Exclude $ from variables
+			{`\$(?=\z|\s|[^<(\w*!.])`, NameEntity, nil},
+			Include("variable"),
+			Include("escape-c-name"),
+			Include("escape-hexadecimal"),
+			Include("number"),
+			Include("single-quote"),
+			// :my variable code ...
+			{
+				`(?<!(?<!\\)\\)(:)(my|our|state|constant|temp|let)`,
+				ByGroups(Operator, KeywordDeclaration),
+				replaceRule(ruleReplacingConfig{
+					delimiter: []rune(`;`),
+					tokenType: Punctuation,
+					stateName: `root`,
+					pushState: true,
+				}),
+			},
+			// <{code}>
+			{
+				`(?<!(?<!\\)\\)(<)([?!.]*)((?<!(?<!\\)\\){)`,
+				ByGroups(Punctuation, Operator, Punctuation),
+				replaceRule(ruleReplacingConfig{
+					delimiter: []rune(`}>`),
+					tokenType: Punctuation,
+					stateName: `root`,
+					pushState: true,
+				}),
+			},
+			// {code}
+			Include(`closure`),
+			// Properties
+			{`(:)(\w+)`, ByGroups(Punctuation, NameAttribute), nil},
+			// Operator
+			{`\|\||\||&&|&|\.\.|\*\*|%%|%|:|!|<<|«|>>|»|\+|\*\*|\*|\?|=|~|<~~>`, Operator, nil},
+			// Anchors
+			{`\^\^|\^|\$\$|\$`, NameEntity, nil},
+			{`\.`, NameEntity, nil},
+			{`#[^\n]*\n`, CommentSingle, nil},
+			// Lookaround
+			{
+				`(?<!(?<!\\)\\)(<)(\s*)([?!.]+)(\s*)(after|before)`,
+				ByGroups(Punctuation, Text, Operator, Text, OperatorWord),
+				replaceRule(ruleReplacingConfig{
+					delimiter: []rune(`>`),
+					tokenType: Punctuation,
+					stateName: `regex`,
+					pushState: true,
+				}),
+			},
+			{
+				`(?<!(?<!\\)\\)(<)([|!?.]*)(wb|ww|ws|w)(>)`,
+				ByGroups(Punctuation, Operator, OperatorWord, Punctuation),
+				nil,
+			},
+			// <$variable>
+			{
+				`(?<!(?<!\\)\\)(<)([?!.]*)([$@]\w[\w:-]*)(>)`,
+				ByGroups(Punctuation, Operator, NameVariable, Punctuation),
+				nil,
+			},
+			// Capture markers
+			{`(?<!(?<!\\)\\)<\(|\)>`, Operator, nil},
+			{
+				`(?<!(?<!\\)\\)(<)(\w[\w:-]*)(=\.?)`,
+				ByGroups(Punctuation, NameVariable, Operator),
+				Push(`regex-variable`),
+			},
+			{
+				`(?<!(?<!\\)\\)(<)([|!?.&]*)(\w(?:(?!:\s)[\w':-])*)`,
+				ByGroups(Punctuation, Operator, NameFunction),
+				Push(`regex-function`),
+			},
+			{`(?<!(?<!\\)\\)<`, Punctuation, Push("regex-property")},
+			{`(?<!(?<!\\)\\)"`, Punctuation, Push("double-quotes")},
+			{`(?<!(?<!\\)\\)(?:\]|\))`, Punctuation, Pop(1)},
+			{`(?<!(?<!\\)\\)(?:\[|\()`, Punctuation, Push("regex")},
+			{`.+?`, StringRegex, nil},
+		},
+		"regex-class-builtin": {
+			{
+				`\b(?:alnum|alpha|blank|cntrl|digit|graph|lower|print|punct|space|upper|xdigit|same|ident)\b`,
+				NameBuiltin,
+				nil,
+			},
+		},
+		"regex-function": {
+			// <function>
+			{`(?<!(?<!\\)\\)>`, Punctuation, Pop(1)},
+			// <function(parameter)>
+			{
+				`\(`,
+				Punctuation,
+				replaceRule(ruleReplacingConfig{
+					delimiter: []rune(`)>`),
+					tokenType: Punctuation,
+					stateName: `root`,
+					popState:  true,
+					pushState: true,
+				}),
+			},
+			// <function value>
+			{
+				`\s+`,
+				StringRegex,
+				replaceRule(ruleReplacingConfig{
+					delimiter: []rune(`>`),
+					tokenType: Punctuation,
+					stateName: `regex`,
+					popState:  true,
+					pushState: true,
+				}),
+			},
+			// <function: value>
+			{
+				`:`,
+				Punctuation,
+				replaceRule(ruleReplacingConfig{
+					delimiter: []rune(`>`),
+					tokenType: Punctuation,
+					stateName: `root`,
+					popState:  true,
+					pushState: true,
+				}),
+			},
+		},
+		"regex-variable": {
+			Include(`regex-starting-operators`),
+			// <var=function(
+			{
+				`(&)?(\w(?:(?!:\s)[\w':-])*)(?=\()`,
+				ByGroups(Operator, NameFunction),
+				Mutators(Pop(1), Push(`regex-function`)),
+			},
+			// <var=function>
+			{`(&)?(\w[\w':-]*)(>)`, ByGroups(Operator, NameFunction, Punctuation), Pop(1)},
+			// <var=
+			Default(Pop(1), Push(`regex-property`)),
+		},
+		"regex-property": {
+			{`(?<!(?<!\\)\\)>`, Punctuation, Pop(1)},
+			Include("regex-class-builtin"),
+			Include("variable"),
+			Include(`regex-starting-operators`),
+			Include("colon-pair-attribute"),
+			{`(?<!(?<!\\)\\)\[`, Punctuation, Push("regex-character-class")},
+			{`\+|\-`, Operator, nil},
+			{`@[\w':-]+`, NameVariable, nil},
+			{`.+?`, StringRegex, nil},
+		},
+		`regex-starting-operators`: {
+			{`(?<=<)[|!?.]+`, Operator, nil},
+		},
+		"regex-escape-class": {
+			{`(?i)\\n|\\t|\\h|\\v|\\s|\\d|\\w`, StringEscape, nil},
+		},
+		`regex-character-escape`: {
+			{`(?<!(?<!\\)\\)(\\)(.)`, ByGroups(StringEscape, StringRegex), nil},
+		},
+		"regex-character-class": {
+			{`(?<!(?<!\\)\\)\]`, Punctuation, Pop(1)},
+			Include("regex-escape-class"),
+			Include("escape-c-name"),
+			Include("escape-hexadecimal"),
+			Include(`regex-character-escape`),
+			Include("number"),
+			{`\.\.`, Operator, nil},
+			{`.+?`, StringRegex, nil},
+		},
+		"metaoperator": {
+			// Z[=>]
+			{
+				`\b([RZX]+)\b(\[)([^\s\]]+?)(\])`,
+				ByGroups(OperatorWord, Punctuation, UsingSelf("root"), Punctuation),
+				nil,
+			},
+			// Z=>
+			{`\b([RZX]+)\b([^\s\]]+)`, ByGroups(OperatorWord, UsingSelf("operator")), nil},
+		},
+		"operator": {
+			// Word Operator
+			{wordOperatorsPattern, OperatorWord, nil},
+			// Operator
+			{operatorsPattern, Operator, nil},
+		},
+		"pod": {
+			// Single-line pod declaration
+			{`(#[|=])\s`, Keyword, Push("pod-single")},
+			// Multi-line pod declaration
+			{
+				"(?<keyword>#[|=])(?<opening_delimiters>(?<delimiter>" + bracketsPattern + `)\k<delimiter>*)(?<value>)(?<closing_delimiters>)`,
+				ByGroupNames(
+					map[string]Emitter{
+						`keyword`:            Keyword,
+						`opening_delimiters`: Punctuation,
+						`delimiter`:          nil,
+						`value`:              UsingSelf("pod-declaration"),
+						`closing_delimiters`: Punctuation,
+					}),
+				findBrackets(rakuPodDeclaration),
+			},
+			Include("pod-blocks"),
+		},
+		"pod-blocks": {
+			// =begin code
+			{
+				`(?<=^ *)(?<ws> *)(?<keyword>=begin)(?<ws2> +)(?<name>code)(?<config>[^\n]*)(?<value>.*?)(?<ws3>^\k<ws>)(?<end_keyword>=end)(?<ws4> +)\k<name>`,
+				EmitterFunc(podCode),
+				nil,
+			},
+			// =begin
+			{
+				`(?<=^ *)(?<ws> *)(?<keyword>=begin)(?<ws2> +)(?!code)(?<name>\w[\w'-]*)(?<config>[^\n]*)(?<value>)(?<closing_delimiters>)`,
+				ByGroupNames(
+					map[string]Emitter{
+						`ws`:                 Comment,
+						`keyword`:            Keyword,
+						`ws2`:                StringDoc,
+						`name`:               Keyword,
+						`config`:             EmitterFunc(podConfig),
+						`value`:              UsingSelf("pod-begin"),
+						`closing_delimiters`: Keyword,
+					}),
+				findBrackets(rakuPod),
+			},
+			// =for ...
+			{
+				`(?<=^ *)(?<ws> *)(?<keyword>=(?:for|defn))(?<ws2> +)(?<name>\w[\w'-]*)(?<config>[^\n]*\n)`,
+				ByGroups(Comment, Keyword, StringDoc, Keyword, EmitterFunc(podConfig)),
+				Push("pod-paragraph"),
+			},
+			// =config
+			{
+				`(?<=^ *)(?<ws> *)(?<keyword>=config)(?<ws2> +)(?<name>\w[\w'-]*)(?<config>[^\n]*\n)`,
+				ByGroups(Comment, Keyword, StringDoc, Keyword, EmitterFunc(podConfig)),
+				nil,
+			},
+			// =alias
+			{
+				`(?<=^ *)(?<ws> *)(?<keyword>=alias)(?<ws2> +)(?<name>\w[\w'-]*)(?<value>[^\n]*\n)`,
+				ByGroups(Comment, Keyword, StringDoc, Keyword, StringDoc),
+				nil,
+			},
+			// =encoding
+			{
+				`(?<=^ *)(?<ws> *)(?<keyword>=encoding)(?<ws2> +)(?<name>[^\n]+)`,
+				ByGroups(Comment, Keyword, StringDoc, Name),
+				nil,
+			},
+			// =para ...
+			{
+				`(?<=^ *)(?<ws> *)(?<keyword>=(?:para|table|pod))(?<config>(?<!\n\s*)[^\n]*\n)`,
+				ByGroups(Comment, Keyword, EmitterFunc(podConfig)),
+				Push("pod-paragraph"),
+			},
+			// =head1 ...
+			{
+				`(?<=^ *)(?<ws> *)(?<keyword>=head\d+)(?<ws2> *)(?<config>#?)`,
+				ByGroups(Comment, Keyword, GenericHeading, Keyword),
+				Push("pod-heading"),
+			},
+			// =item ...
+			{
+				`(?<=^ *)(?<ws> *)(?<keyword>=(?:item\d*|comment|data|[A-Z]+))(?<ws2> *)(?<config>#?)`,
+				ByGroups(Comment, Keyword, StringDoc, Keyword),
+				Push("pod-paragraph"),
+			},
+			{
+				`(?<=^ *)(?<ws> *)(?<keyword>=finish)(?<config>[^\n]*)`,
+				ByGroups(Comment, Keyword, EmitterFunc(podConfig)),
+				Push("pod-finish"),
+			},
+			// ={custom} ...
+			{
+				`(?<=^ *)(?<ws> *)(?<name>=\w[\w'-]*)(?<ws2> *)(?<config>#?)`,
+				ByGroups(Comment, Name, StringDoc, Keyword),
+				Push("pod-paragraph"),
+			},
+			// = podconfig
+			{
+				`(?<=^ *)(?<keyword> *=)(?<ws> *)(?<config>(?::\w[\w'-]*(?:` + colonPairOpeningBrackets + `.+?` +
+					colonPairClosingBrackets + `) *)*\n)`,
+				ByGroups(Keyword, StringDoc, EmitterFunc(podConfig)),
+				nil,
+			},
+		},
+		"pod-begin": {
+			Include("pod-blocks"),
+			Include("pre-pod-formatter"),
+			{`.+?`, StringDoc, nil},
+		},
+		"pod-declaration": {
+			Include("pre-pod-formatter"),
+			{`.+?`, StringDoc, nil},
+		},
+		"pod-paragraph": {
+			{`\n *\n|\n(?=^ *=)`, StringDoc, Pop(1)},
+			Include("pre-pod-formatter"),
+			{`.+?`, StringDoc, nil},
+		},
+		"pod-single": {
+			{`\n`, StringDoc, Pop(1)},
+			Include("pre-pod-formatter"),
+			{`.+?`, StringDoc, nil},
+		},
+		"pod-heading": {
+			{`\n *\n|\n(?=^ *=)`, GenericHeading, Pop(1)},
+			Include("pre-pod-formatter"),
+			{`.+?`, GenericHeading, nil},
+		},
+		"pod-finish": {
+			{`\z`, nil, Pop(1)},
+			Include("pre-pod-formatter"),
+			{`.+?`, StringDoc, nil},
+		},
+		"pre-pod-formatter": {
+			// C<code>, B<bold>, ...
+			{
+				`(?<keyword>[CBIUDTKRPAELZVMSXN])(?<opening_delimiters><+|«)`,
+				ByGroups(Keyword, Punctuation),
+				findBrackets(rakuPodFormatter),
+			},
+		},
+		"pod-formatter": {
+			// Placeholder rule, will be replaced by mutators. DO NOT REMOVE!
+			{`>`, Punctuation, Pop(1)},
+			Include("pre-pod-formatter"),
+			// Placeholder rule, will be replaced by mutators. DO NOT REMOVE!
+			{`.+?`, StringOther, nil},
+		},
+		"variable": {
+			{variablePattern, NameVariable, Push("name-adverb")},
+			{globalVariablePattern, NameVariableGlobal, Push("name-adverb")},
+			{`[$@]<[^>]+>`, NameVariable, nil},
+			{`\$[/!¢]`, NameVariable, nil},
+			{`[$@%]`, NameVariable, nil},
+		},
+		"single-quote": {
+			{`(?<!(?<!\\)\\)'`, Punctuation, Push("single-quote-inner")},
+		},
+		"single-quote-inner": {
+			{`(?<!(?<!(?<!\\)\\)\\)'`, Punctuation, Pop(1)},
+			Include("escape-single-quote"),
+			Include("escape-qq"),
+			{`(?:\\\\|\\[^\\]|[^'\\])+?`, StringSingle, nil},
+		},
+		"double-quotes": {
+			{`(?<!(?<!\\)\\)"`, Punctuation, Pop(1)},
+			Include("qq"),
+		},
+		"<<": {
+			{`>>(?!\s*(?:\d+|\.(?:Int|Numeric)|[$@%]\*?[\w':-]+|\s+\[))`, Punctuation, Pop(1)},
+			Include("ww"),
+		},
+		"«": {
+			{`»(?!\s*(?:\d+|\.(?:Int|Numeric)|[$@%]\*?[\w':-]+|\s+\[))`, Punctuation, Pop(1)},
+			Include("ww"),
+		},
+		"ww": {
+			Include("single-quote"),
+			Include("qq"),
+		},
+		"qq": {
+			Include("qq-variable"),
+			Include("closure"),
+			Include(`escape-char`),
+			Include("escape-hexadecimal"),
+			Include("escape-c-name"),
+			Include("escape-qq"),
+			{`.+?`, StringDouble, nil},
+		},
+		"qq-variable": {
+			{
+				`(?<!(?<!\\)\\)(?:` + variablePattern + `|` + globalVariablePattern + `)` + colonPairLookahead + `)`,
+				NameVariable,
+				Push("qq-variable-extras", "name-adverb"),
+			},
+		},
+		"qq-variable-extras": {
+			// Method
+			{
+				`(?<operator>\.)(?<method_name>` + namePattern + `)` + colonPairLookahead + `\()`,
+				ByGroupNames(map[string]Emitter{
+					`operator`:    Operator,
+					`method_name`: NameFunction,
+				}),
+				Push(`name-adverb`),
+			},
+			// Function/Signature
+			{
+				`\(`, Punctuation, replaceRule(
+					ruleReplacingConfig{
+						delimiter: []rune(`)`),
+						tokenType: Punctuation,
+						stateName: `root`,
+						pushState: true,
+					}),
+			},
+			Default(Pop(1)),
+		},
+		"Q": {
+			Include("escape-qq"),
+			{`.+?`, String, nil},
+		},
+		"Q-closure": {
+			Include("escape-qq"),
+			Include("closure"),
+			{`.+?`, String, nil},
+		},
+		"Q-variable": {
+			Include("escape-qq"),
+			Include("qq-variable"),
+			{`.+?`, String, nil},
+		},
+		"closure": {
+			{`(?<!(?<!\\)\\){`, Punctuation, replaceRule(
+				ruleReplacingConfig{
+					delimiter: []rune(`}`),
+					tokenType: Punctuation,
+					stateName: `root`,
+					pushState: true,
+				}),
+			},
+		},
+		"token": {
+			// Token signature
+			{`\(`, Punctuation, replaceRule(
+				ruleReplacingConfig{
+					delimiter: []rune(`)`),
+					tokenType: Punctuation,
+					stateName: `root`,
+					pushState: true,
+				}),
+			},
+			{`{`, Punctuation, replaceRule(
+				ruleReplacingConfig{
+					delimiter: []rune(`}`),
+					tokenType: Punctuation,
+					stateName: `regex`,
+					popState:  true,
+					pushState: true,
+				}),
+			},
+			{`\s*`, Text, nil},
+			Default(Pop(1)),
+		},
+	}
+}
+
+// Joins the keys of a rune map into a single string
+func joinRuneMap(m map[rune]rune) string {
+	runes := make([]rune, 0, len(m))
+	for k := range m {
+		runes = append(runes, k)
+	}
+
+	return string(runes)
+}
+
+// Finds the index of substr in str, starting the search at position pos
+func indexAt(str []rune, substr []rune, pos int) int {
+	strFromPos := str[pos:]
+	text := string(strFromPos)
+
+	idx := strings.Index(text, string(substr))
+	if idx > -1 {
+		idx = utf8.RuneCountInString(text[:idx])
+
+		// Search again if the substr is escaped with backslash
+		if (idx > 1 && strFromPos[idx-1] == '\\' && strFromPos[idx-2] != '\\') ||
+			(idx == 1 && strFromPos[idx-1] == '\\') {
+			idx = indexAt(str[pos:], substr, idx+1)
+
+			idx = utf8.RuneCountInString(text[:idx])
+
+			if idx < 0 {
+				return idx
+			}
+		}
+		idx += pos
+	}
+
+	return idx
+}
+
+// Reports whether a slice of strings contains the given string
+func contains(s []string, e string) bool {
+	for _, value := range s {
+		if value == e {
+			return true
+		}
+	}
+	return false
+}
+
+type rulePosition int
+
+const (
+	topRule    rulePosition = 0
+	bottomRule              = -1
+)
+
+type ruleMakingConfig struct {
+	delimiter              []rune
+	pattern                string
+	tokenType              Emitter
+	mutator                Mutator
+	numberOfDelimiterChars int
+}
+
+type ruleReplacingConfig struct {
+	delimiter              []rune
+	pattern                string
+	tokenType              Emitter
+	numberOfDelimiterChars int
+	mutator                Mutator
+	appendMutator          Mutator
+	rulePosition           rulePosition
+	stateName              string
+	pop                    bool
+	popState               bool
+	pushState              bool
+}
+
+// Pops a rule from the state stack and replaces the current rule with the previous one
+func popRule(rule ruleReplacingConfig) MutatorFunc {
+	return func(state *LexerState) error {
+		stackName := genStackName(rule.stateName, rule.rulePosition)
+
+		stack, ok := state.Get(stackName).([]ruleReplacingConfig)
+
+		if ok && len(stack) > 0 {
+			// Pop from stack
+			stack = stack[:len(stack)-1]
+			lastRule := stack[len(stack)-1]
+			lastRule.pushState = false
+			lastRule.popState = false
+			lastRule.pop = true
+			state.Set(stackName, stack)
+
+			// Call replaceRule to use the last rule
+			err := replaceRule(lastRule)(state)
+			if err != nil {
+				panic(err)
+			}
+		}
+
+		return nil
+	}
+}
+
+// Replaces a state's rule based on the rule config and position
+func replaceRule(rule ruleReplacingConfig) MutatorFunc {
+	return func(state *LexerState) error {
+		stateName := rule.stateName
+		stackName := genStackName(rule.stateName, rule.rulePosition)
+
+		stack, ok := state.Get(stackName).([]ruleReplacingConfig)
+		if !ok {
+			stack = []ruleReplacingConfig{}
+		}
+
+		// If state-stack is empty fill it with the placeholder rule
+		if len(stack) == 0 {
+			stack = []ruleReplacingConfig{
+				{
+					// Placeholder, will be overwritten by mutators, DO NOT REMOVE!
+					pattern:      `\A\z`,
+					tokenType:    nil,
+					mutator:      nil,
+					stateName:    stateName,
+					rulePosition: rule.rulePosition,
+				},
+			}
+			state.Set(stackName, stack)
+		}
+
+		var mutator Mutator
+		mutators := []Mutator{}
+
+		switch {
+		case rule.rulePosition == topRule && rule.mutator == nil:
+			// Default mutator for top rule
+			mutators = []Mutator{Pop(1), popRule(rule)}
+		case rule.rulePosition == topRule && rule.mutator != nil:
+			// Default mutator for top rule, when rule.mutator is set
+			mutators = []Mutator{rule.mutator, popRule(rule)}
+		case rule.mutator != nil:
+			mutators = []Mutator{rule.mutator}
+		}
+
+		if rule.appendMutator != nil {
+			mutators = append(mutators, rule.appendMutator)
+		}
+
+		if len(mutators) > 0 {
+			mutator = Mutators(mutators...)
+		} else {
+			mutator = nil
+		}
+
+		ruleConfig := ruleMakingConfig{
+			pattern:                rule.pattern,
+			delimiter:              rule.delimiter,
+			numberOfDelimiterChars: rule.numberOfDelimiterChars,
+			tokenType:              rule.tokenType,
+			mutator:                mutator,
+		}
+
+		cRule := makeRule(ruleConfig)
+
+		switch rule.rulePosition {
+		case topRule:
+			state.Rules[stateName][0] = cRule
+		case bottomRule:
+			state.Rules[stateName][len(state.Rules[stateName])-1] = cRule
+		}
+
+		// Pop the state name from the stack if asked; the state must be popped before pushing
+		if rule.popState {
+			err := Pop(1).Mutate(state)
+			if err != nil {
+				panic(err)
+			}
+		}
+
+		// Push state name to stack if asked
+		if rule.pushState {
+			err := Push(stateName).Mutate(state)
+			if err != nil {
+				panic(err)
+			}
+		}
+
+		if !rule.pop {
+			state.Set(stackName, append(stack, rule))
+		}
+
+		return nil
+	}
+}
+
+// Generates the rule-replacing stack name from the state name and rule position
+func genStackName(stateName string, rulePosition rulePosition) (stackName string) {
+	switch rulePosition {
+	case topRule:
+		stackName = stateName + `-top-stack`
+	case bottomRule:
+		stackName = stateName + `-bottom-stack`
+	}
+	return
+}
+
+// Makes a compiled rule and returns it
+func makeRule(config ruleMakingConfig) *CompiledRule {
+	var rePattern string
+
+	if len(config.delimiter) > 0 {
+		delimiter := string(config.delimiter)
+
+		if config.numberOfDelimiterChars > 1 {
+			delimiter = strings.Repeat(delimiter, config.numberOfDelimiterChars)
+		}
+
+		rePattern = `(?<!(?<!\\)\\)` + regexp2.Escape(delimiter)
+	} else {
+		rePattern = config.pattern
+	}
+
+	regex := regexp2.MustCompile(rePattern, regexp2.None)
+
+	cRule := &CompiledRule{
+		Rule:   Rule{rePattern, config.tokenType, config.mutator},
+		Regexp: regex,
+	}
+
+	return cRule
+}
+
+// Emitter for colon pairs, changes token state based on key and brackets
+func colonPair(tokenClass TokenType) Emitter {
+	return EmitterFunc(func(groups []string, state *LexerState) Iterator {
+		iterators := []Iterator{}
+		tokens := []Token{
+			{Punctuation, state.NamedGroups[`colon`]},
+			{Punctuation, state.NamedGroups[`opening_delimiters`]},
+			{Punctuation, state.NamedGroups[`closing_delimiters`]},
+		}
+
+		// Append colon
+		iterators = append(iterators, Literator(tokens[0]))
+
+		if tokenClass == NameAttribute {
+			iterators = append(iterators, Literator(Token{NameAttribute, state.NamedGroups[`key`]}))
+		} else {
+			var keyTokenState string
+			keyre := regexp.MustCompile(`^\d+$`)
+			if keyre.MatchString(state.NamedGroups[`key`]) {
+				keyTokenState = "common"
+			} else {
+				keyTokenState = "Q"
+			}
+
+			// Use token state to Tokenise key
+			if keyTokenState != "" {
+				iterator, err := state.Lexer.Tokenise(
+					&TokeniseOptions{
+						State:  keyTokenState,
+						Nested: true,
+					}, state.NamedGroups[`key`])
+
+				if err != nil {
+					panic(err)
+				} else {
+					// Append key
+					iterators = append(iterators, iterator)
+				}
+			}
+		}
+
+		// Append punctuation
+		iterators = append(iterators, Literator(tokens[1]))
+
+		var valueTokenState string
+
+		switch state.NamedGroups[`opening_delimiters`] {
+		case "(", "{", "[":
+			valueTokenState = "root"
+		case "<<", "«":
+			valueTokenState = "ww"
+		case "<":
+			valueTokenState = "Q"
+		}
+
+		// Use token state to Tokenise value
+		if valueTokenState != "" {
+			iterator, err := state.Lexer.Tokenise(
+				&TokeniseOptions{
+					State:  valueTokenState,
+					Nested: true,
+				}, state.NamedGroups[`value`])
+
+			if err != nil {
+				panic(err)
+			} else {
+				// Append value
+				iterators = append(iterators, iterator)
+			}
+		}
+		// Append last punctuation
+		iterators = append(iterators, Literator(tokens[2]))
+
+		return Concaterator(iterators...)
+	})
+}
+
+// Emitter for quoting constructs, changes token state based on quote name and adverbs
+func quote(groups []string, state *LexerState) Iterator {
+	keyword := state.NamedGroups[`keyword`]
+	adverbsStr := state.NamedGroups[`adverbs`]
+	iterators := []Iterator{}
+	tokens := []Token{
+		{Keyword, keyword},
+		{StringAffix, adverbsStr},
+		{Text, state.NamedGroups[`ws`]},
+		{Punctuation, state.NamedGroups[`opening_delimiters`]},
+		{Punctuation, state.NamedGroups[`closing_delimiters`]},
+	}
+
+	// Append all tokens before dealing with the main string
+	iterators = append(iterators, Literator(tokens[:4]...))
+
+	var tokenStates []string
+
+	// Set tokenStates based on adverbs
+	adverbs := strings.Split(adverbsStr, ":")
+	for _, adverb := range adverbs {
+		switch adverb {
+		case "c", "closure":
+			tokenStates = append(tokenStates, "Q-closure")
+		case "qq":
+			tokenStates = append(tokenStates, "qq")
+		case "ww":
+			tokenStates = append(tokenStates, "ww")
+		case "s", "scalar", "a", "array", "h", "hash", "f", "function":
+			tokenStates = append(tokenStates, "Q-variable")
+		}
+	}
+
+	var tokenState string
+
+	switch {
+	case keyword == "qq" || contains(tokenStates, "qq"):
+		tokenState = "qq"
+	case adverbsStr == "ww" || contains(tokenStates, "ww"):
+		tokenState = "ww"
+	case contains(tokenStates, "Q-closure") && contains(tokenStates, "Q-variable"):
+		tokenState = "qq"
+	case contains(tokenStates, "Q-closure"):
+		tokenState = "Q-closure"
+	case contains(tokenStates, "Q-variable"):
+		tokenState = "Q-variable"
+	default:
+		tokenState = "Q"
+	}
+
+	iterator, err := state.Lexer.Tokenise(
+		&TokeniseOptions{
+			State:  tokenState,
+			Nested: true,
+		}, state.NamedGroups[`value`])
+
+	if err != nil {
+		panic(err)
+	} else {
+		iterators = append(iterators, iterator)
+	}
+
+	// Append the last punctuation
+	iterators = append(iterators, Literator(tokens[4]))
+
+	return Concaterator(iterators...)
+}
+
+// Emitter for pod config, tokenises the properties with "colon-pair-attribute" state
+func podConfig(groups []string, state *LexerState) Iterator {
+	// Tokenise pod config
+	iterator, err := state.Lexer.Tokenise(
+		&TokeniseOptions{
+			State:  "colon-pair-attribute",
+			Nested: true,
+		}, groups[0])
+
+	if err != nil {
+		panic(err)
+	} else {
+		return iterator
+	}
+}
+
+// Emitter for pod code, tokenises the code based on the lang specified
+func podCode(groups []string, state *LexerState) Iterator {
+	iterators := []Iterator{}
+	tokens := []Token{
+		{Comment, state.NamedGroups[`ws`]},
+		{Keyword, state.NamedGroups[`keyword`]},
+		{Keyword, state.NamedGroups[`ws2`]},
+		{Keyword, state.NamedGroups[`name`]},
+		{StringDoc, state.NamedGroups[`value`]},
+		{Comment, state.NamedGroups[`ws3`]},
+		{Keyword, state.NamedGroups[`end_keyword`]},
+		{Keyword, state.NamedGroups[`ws4`]},
+		{Keyword, state.NamedGroups[`name`]},
+	}
+
+	// Append all tokens before dealing with the pod config
+	iterators = append(iterators, Literator(tokens[:4]...))
+
+	// Tokenise pod config
+	iterators = append(iterators, podConfig([]string{state.NamedGroups[`config`]}, state))
+
+	langMatch := regexp.MustCompile(`:lang\W+(\w+)`).FindStringSubmatch(state.NamedGroups[`config`])
+	var lang string
+	if len(langMatch) > 1 {
+		lang = langMatch[1]
+	}
+
+	// Tokenise code based on lang property
+	sublexer := Get(lang)
+	if sublexer != nil {
+		iterator, err := sublexer.Tokenise(nil, state.NamedGroups[`value`])
+
+		if err != nil {
+			panic(err)
+		} else {
+			iterators = append(iterators, iterator)
+		}
+	} else {
+		iterators = append(iterators, Literator(tokens[4]))
+	}
+
+	// Append the rest of the tokens
+	iterators = append(iterators, Literator(tokens[5:]...))
+
+	return Concaterator(iterators...)
+}

vendor/github.com/alecthomas/chroma/v2/lexers/rst.go

@@ -0,0 +1,89 @@
+package lexers
+
+import (
+	"strings"
+
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+// Restructuredtext lexer.
+var Restructuredtext = Register(MustNewLexer(
+	&Config{
+		Name:      "reStructuredText",
+		Aliases:   []string{"rst", "rest", "restructuredtext"},
+		Filenames: []string{"*.rst", "*.rest"},
+		MimeTypes: []string{"text/x-rst", "text/prs.fallenstein.rst"},
+	},
+	restructuredtextRules,
+))
+
+func restructuredtextRules() Rules {
+	return Rules{
+		"root": {
+			{"^(=+|-+|`+|:+|\\.+|\\'+|\"+|~+|\\^+|_+|\\*+|\\++|#+)([ \\t]*\\n)(.+)(\\n)(\\1)(\\n)", ByGroups(GenericHeading, Text, GenericHeading, Text, GenericHeading, Text), nil},
+			{"^(\\S.*)(\\n)(={3,}|-{3,}|`{3,}|:{3,}|\\.{3,}|\\'{3,}|\"{3,}|~{3,}|\\^{3,}|_{3,}|\\*{3,}|\\+{3,}|#{3,})(\\n)", ByGroups(GenericHeading, Text, GenericHeading, Text), nil},
+			{`^(\s*)([-*+])( .+\n(?:\1  .+\n)*)`, ByGroups(Text, LiteralNumber, UsingSelf("inline")), nil},
+			{`^(\s*)([0-9#ivxlcmIVXLCM]+\.)( .+\n(?:\1  .+\n)*)`, ByGroups(Text, LiteralNumber, UsingSelf("inline")), nil},
+			{`^(\s*)(\(?[0-9#ivxlcmIVXLCM]+\))( .+\n(?:\1  .+\n)*)`, ByGroups(Text, LiteralNumber, UsingSelf("inline")), nil},
+			{`^(\s*)([A-Z]+\.)( .+\n(?:\1  .+\n)+)`, ByGroups(Text, LiteralNumber, UsingSelf("inline")), nil},
+			{`^(\s*)(\(?[A-Za-z]+\))( .+\n(?:\1  .+\n)+)`, ByGroups(Text, LiteralNumber, UsingSelf("inline")), nil},
+			{`^(\s*)(\|)( .+\n(?:\|  .+\n)*)`, ByGroups(Text, Operator, UsingSelf("inline")), nil},
+			{`^( *\.\.)(\s*)((?:source)?code(?:-block)?)(::)([ \t]*)([^\n]+)(\n[ \t]*\n)([ \t]+)(.*)(\n)((?:(?:\8.*|)\n)+)`, EmitterFunc(rstCodeBlock), nil},
+			{`^( *\.\.)(\s*)([\w:-]+?)(::)(?:([ \t]*)(.*))`, ByGroups(Punctuation, Text, OperatorWord, Punctuation, Text, UsingSelf("inline")), nil},
+			{`^( *\.\.)(\s*)(_(?:[^:\\]|\\.)+:)(.*?)$`, ByGroups(Punctuation, Text, NameTag, UsingSelf("inline")), nil},
+			{`^( *\.\.)(\s*)(\[.+\])(.*?)$`, ByGroups(Punctuation, Text, NameTag, UsingSelf("inline")), nil},
+			{`^( *\.\.)(\s*)(\|.+\|)(\s*)([\w:-]+?)(::)(?:([ \t]*)(.*))`, ByGroups(Punctuation, Text, NameTag, Text, OperatorWord, Punctuation, Text, UsingSelf("inline")), nil},
+			{`^ *\.\..*(\n( +.*\n|\n)+)?`, CommentPreproc, nil},
+			{`^( *)(:[a-zA-Z-]+:)(\s*)$`, ByGroups(Text, NameClass, Text), nil},
+			{`^( *)(:.*?:)([ \t]+)(.*?)$`, ByGroups(Text, NameClass, Text, NameFunction), nil},
+			{`^(\S.*(?<!::)\n)((?:(?: +.*)\n)+)`, ByGroups(UsingSelf("inline"), UsingSelf("inline")), nil},
+			{`(::)(\n[ \t]*\n)([ \t]+)(.*)(\n)((?:(?:\3.*|)\n)+)`, ByGroups(LiteralStringEscape, Text, LiteralString, LiteralString, Text, LiteralString), nil},
+			Include("inline"),
+		},
+		"inline": {
+			{`\\.`, Text, nil},
+			{"``", LiteralString, Push("literal")},
+			{"(`.+?)(<.+?>)(`__?)", ByGroups(LiteralString, LiteralStringInterpol, LiteralString), nil},
+			{"`.+?`__?", LiteralString, nil},
+			{"(`.+?`)(:[a-zA-Z0-9:-]+?:)?", ByGroups(NameVariable, NameAttribute), nil},
+			{"(:[a-zA-Z0-9:-]+?:)(`.+?`)", ByGroups(NameAttribute, NameVariable), nil},
+			{`\*\*.+?\*\*`, GenericStrong, nil},
+			{`\*.+?\*`, GenericEmph, nil},
+			{`\[.*?\]_`, LiteralString, nil},
+			{`<.+?>`, NameTag, nil},
+			{"[^\\\\\\n\\[*`:]+", Text, nil},
+			{`.`, Text, nil},
+		},
+		"literal": {
+			{"[^`]+", LiteralString, nil},
+			{"``((?=$)|(?=[-/:.,; \\n\\x00\\\u2010\\\u2011\\\u2012\\\u2013\\\u2014\\\u00a0\\'\\\"\\)\\]\\}\\>\\\u2019\\\u201d\\\u00bb\\!\\?]))", LiteralString, Pop(1)},
+			{"`", LiteralString, nil},
+		},
+	}
+}
+
+func rstCodeBlock(groups []string, state *LexerState) Iterator {
+	iterators := []Iterator{}
+	tokens := []Token{
+		{Punctuation, groups[1]},
+		{Text, groups[2]},
+		{OperatorWord, groups[3]},
+		{Punctuation, groups[4]},
+		{Text, groups[5]},
+		{Keyword, groups[6]},
+		{Text, groups[7]},
+	}
+	code := strings.Join(groups[8:], "")
+	lexer := Get(groups[6])
+	if lexer == nil {
+		tokens = append(tokens, Token{String, code})
+		iterators = append(iterators, Literator(tokens...))
+	} else {
+		sub, err := lexer.Tokenise(nil, code)
+		if err != nil {
+			panic(err)
+		}
+		iterators = append(iterators, Literator(tokens...), sub)
+	}
+	return Concaterator(iterators...)
+}

vendor/github.com/alecthomas/chroma/v2/lexers/svelte.go

@@ -0,0 +1,70 @@
+package lexers
+
+import (
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+// Svelte lexer.
+var Svelte = Register(DelegatingLexer(HTML, MustNewLexer(
+	&Config{
+		Name:      "Svelte",
+		Aliases:   []string{"svelte"},
+		Filenames: []string{"*.svelte"},
+		MimeTypes: []string{"application/x-svelte"},
+		DotAll:    true,
+	},
+	svelteRules,
+)))
+
+func svelteRules() Rules {
+	return Rules{
+		"root": {
+			// Let HTML handle the comments, including comments containing script and style tags
+			{`<!--`, Other, Push("comment")},
+			{
+				// Highlight script and style tags based on lang attribute
+				// and allow attributes besides lang
+				`(<\s*(?:script|style).*?lang\s*=\s*['"])` +
+					`(.+?)(['"].*?>)` +
+					`(.+?)` +
+					`(<\s*/\s*(?:script|style)\s*>)`,
+				UsingByGroup(2, 4, Other, Other, Other, Other, Other),
+				nil,
+			},
+			{
+				// Make sure `{` is not inside script or style tags
+				`(?<!<\s*(?:script|style)(?:(?!(?:script|style)\s*>).)*?)` +
+					`{` +
+					`(?!(?:(?!<\s*(?:script|style)).)*?(?:script|style)\s*>)`,
+				Punctuation,
+				Push("templates"),
+			},
+			// on:submit|preventDefault
+			{`(?<=\s+on:\w+(?:\|\w+)*)\|(?=\w+)`, Operator, nil},
+			{`.+?`, Other, nil},
+		},
+		"comment": {
+			{`-->`, Other, Pop(1)},
+			{`.+?`, Other, nil},
+		},
+		"templates": {
+			{`}`, Punctuation, Pop(1)},
+			// Let TypeScript handle strings and the curly braces inside them
+			{`(?<!(?<!\\)\\)(['"` + "`])" + `.*?(?<!(?<!\\)\\)\1`, Using("TypeScript"), nil},
+			// If there is another opening curly brace push to templates again
+			{"{", Punctuation, Push("templates")},
+			{`@(debug|html)\b`, Keyword, nil},
+			{
+				`(#await)(\s+)(\w+)(\s+)(then|catch)(\s+)(\w+)`,
+				ByGroups(Keyword, Text, Using("TypeScript"), Text,
+					Keyword, Text, Using("TypeScript"),
+				),
+				nil,
+			},
+			{`(#|/)(await|each|if|key)\b`, Keyword, nil},
+			{`(:else)(\s+)(if)?\b`, ByGroups(Keyword, Text, Keyword), nil},
+			{`:(catch|then)\b`, Keyword, nil},
+			{`[^{}]+`, Using("TypeScript"), nil},
+		},
+	}
+}

vendor/github.com/alecthomas/chroma/v2/lexers/typoscript.go

@@ -0,0 +1,85 @@
+package lexers
+
+import (
+	. "github.com/alecthomas/chroma/v2" // nolint
+)
+
+// TypoScript lexer.
+var Typoscript = Register(MustNewLexer(
+	&Config{
+		Name:      "TypoScript",
+		Aliases:   []string{"typoscript"},
+		Filenames: []string{"*.ts"},
+		MimeTypes: []string{"text/x-typoscript"},
+		DotAll:    true,
+		Priority:  0.1,
+	},
+	typoscriptRules,
+))
+
+func typoscriptRules() Rules {
+	return Rules{
+		"root": {
+			Include("comment"),
+			Include("constant"),
+			Include("html"),
+			Include("label"),
+			Include("whitespace"),
+			Include("keywords"),
+			Include("punctuation"),
+			Include("operator"),
+			Include("structure"),
+			Include("literal"),
+			Include("other"),
+		},
+		"keywords": {
+			{`(\[)(?i)(browser|compatVersion|dayofmonth|dayofweek|dayofyear|device|ELSE|END|GLOBAL|globalString|globalVar|hostname|hour|IP|language|loginUser|loginuser|minute|month|page|PIDinRootline|PIDupinRootline|system|treeLevel|useragent|userFunc|usergroup|version)([^\]]*)(\])`, ByGroups(LiteralStringSymbol, NameConstant, Text, LiteralStringSymbol), nil},
+			{`(?=[\w\-])(HTMLparser|HTMLparser_tags|addParams|cache|encapsLines|filelink|if|imageLinkWrap|imgResource|makelinks|numRows|numberFormat|parseFunc|replacement|round|select|split|stdWrap|strPad|tableStyle|tags|textStyle|typolink)(?![\w\-])`, NameFunction, nil},
+			{`(?:(=?\s*<?\s+|^\s*))(cObj|field|config|content|constants|FEData|file|frameset|includeLibs|lib|page|plugin|register|resources|sitemap|sitetitle|styles|temp|tt_[^:.\s]*|types|xmlnews|INCLUDE_TYPOSCRIPT|_CSS_DEFAULT_STYLE|_DEFAULT_PI_VARS|_LOCAL_LANG)(?![\w\-])`, ByGroups(Operator, NameBuiltin), nil},
+			{`(?=[\w\-])(CASE|CLEARGIF|COA|COA_INT|COBJ_ARRAY|COLUMNS|CONTENT|CTABLE|EDITPANEL|FILE|FILES|FLUIDTEMPLATE|FORM|HMENU|HRULER|HTML|IMAGE|IMGTEXT|IMG_RESOURCE|LOAD_REGISTER|MEDIA|MULTIMEDIA|OTABLE|PAGE|QTOBJECT|RECORDS|RESTORE_REGISTER|SEARCHRESULT|SVG|SWFOBJECT|TEMPLATE|TEXT|USER|USER_INT)(?![\w\-])`, NameClass, nil},
+			{`(?=[\w\-])(ACTIFSUBRO|ACTIFSUB|ACTRO|ACT|CURIFSUBRO|CURIFSUB|CURRO|CUR|IFSUBRO|IFSUB|NO|SPC|USERDEF1RO|USERDEF1|USERDEF2RO|USERDEF2|USRRO|USR)`, NameClass, nil},
+			{`(?=[\w\-])(GMENU_FOLDOUT|GMENU_LAYERS|GMENU|IMGMENUITEM|IMGMENU|JSMENUITEM|JSMENU|TMENUITEM|TMENU_LAYERS|TMENU)`, NameClass, nil},
+			{`(?=[\w\-])(PHP_SCRIPT(_EXT|_INT)?)`, NameClass, nil},
+			{`(?=[\w\-])(userFunc)(?![\w\-])`, NameFunction, nil},
+		},
+		"whitespace": {
+			{`\s+`, Text, nil},
+		},
+		"html": {
+			{`<\S[^\n>]*>`, Using("TypoScriptHTMLData"), nil},
+			{`&[^;\n]*;`, LiteralString, nil},
+			{`(_CSS_DEFAULT_STYLE)(\s*)(\()(?s)(.*(?=\n\)))`, ByGroups(NameClass, Text, LiteralStringSymbol, Using("TypoScriptCSSData")), nil},
+		},
+		"literal": {
+			{`0x[0-9A-Fa-f]+t?`, LiteralNumberHex, nil},
+			{`[0-9]+`, LiteralNumberInteger, nil},
+			{`(###\w+###)`, NameConstant, nil},
+		},
+		"label": {
+			{`(EXT|FILE|LLL):[^}\n"]*`, LiteralString, nil},
+			{`(?![^\w\-])([\w\-]+(?:/[\w\-]+)+/?)(\S*\n)`, ByGroups(LiteralString, LiteralString), nil},
+		},
+		"punctuation": {
+			{`[,.]`, Punctuation, nil},
+		},
+		"operator": {
+			{`[<>,:=.*%+|]`, Operator, nil},
+		},
+		"structure": {
+			{`[{}()\[\]\\]`, LiteralStringSymbol, nil},
+		},
+		"constant": {
+			{`(\{)(\$)((?:[\w\-]+\.)*)([\w\-]+)(\})`, ByGroups(LiteralStringSymbol, Operator, NameConstant, NameConstant, LiteralStringSymbol), nil},
+			{`(\{)([\w\-]+)(\s*:\s*)([\w\-]+)(\})`, ByGroups(LiteralStringSymbol, NameConstant, Operator, NameConstant, LiteralStringSymbol), nil},
+			{`(#[a-fA-F0-9]{6}\b|#[a-fA-F0-9]{3}\b)`, LiteralStringChar, nil},
+		},
+		"comment": {
+			{`(?<!(#|\'|"))(?:#(?!(?:[a-fA-F0-9]{6}|[a-fA-F0-9]{3}))[^\n#]+|//[^\n]*)`, Comment, nil},
+			{`/\*(?:(?!\*/).)*\*/`, Comment, nil},
+			{`(\s*#\s*\n)`, Comment, nil},
+		},
+		"other": {
+			{`[\w"\-!/&;]+`, Text, nil},
+		},
+	}
+}

vendor/github.com/alecthomas/chroma/v2/lexers/zed.go

@@ -0,0 +1,24 @@
+package lexers
+
+import (
+	"strings"
+)
+
+// Zed lexer.
+func init() { // nolint: gochecknoinits
+	Get("Zed").SetAnalyser(func(text string) float32 {
+		if strings.Contains(text, "definition ") && strings.Contains(text, "relation ") && strings.Contains(text, "permission ") {
+			return 0.9
+		}
+		if strings.Contains(text, "definition ") {
+			return 0.5
+		}
+		if strings.Contains(text, "relation ") {
+			return 0.5
+		}
+		if strings.Contains(text, "permission ") {
+			return 0.25
+		}
+		return 0.0
+	})
+}

vendor/github.com/alecthomas/chroma/v2/mutators.go

@@ -0,0 +1,201 @@
+package chroma
+
+import (
+	"encoding/xml"
+	"fmt"
+	"strings"
+)
+
+// A Mutator modifies the behaviour of the lexer.
+type Mutator interface {
+	// Mutate the lexer state machine as it is processing.
+	Mutate(state *LexerState) error
+}
+
+// SerialisableMutator is a Mutator that can be serialised and deserialised.
+type SerialisableMutator interface {
+	Mutator
+	MutatorKind() string
+}
+
+// A LexerMutator is an additional interface that a Mutator can implement
+// to modify the lexer when it is compiled.
+type LexerMutator interface {
+	// MutateLexer can be implemented to mutate the lexer itself.
+	//
+	// Rules are the lexer rules, state is the state key for the rule the mutator is associated with.
+	MutateLexer(rules CompiledRules, state string, rule int) error
+}
+
+// A MutatorFunc is a Mutator that mutates the lexer state machine as it is processing.
+type MutatorFunc func(state *LexerState) error
+
+func (m MutatorFunc) Mutate(state *LexerState) error { return m(state) } // nolint
+
+type multiMutator struct {
+	Mutators []Mutator `xml:"mutator"`
+}
+
+func (m *multiMutator) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
+	for {
+		token, err := d.Token()
+		if err != nil {
+			return err
+		}
+		switch token := token.(type) {
+		case xml.StartElement:
+			mutator, err := unmarshalMutator(d, token)
+			if err != nil {
+				return err
+			}
+			m.Mutators = append(m.Mutators, mutator)
+
+		case xml.EndElement:
+			return nil
+		}
+	}
+}
+
+func (m *multiMutator) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
+	name := xml.Name{Local: "mutators"}
+	if err := e.EncodeToken(xml.StartElement{Name: name}); err != nil {
+		return err
+	}
+	for _, m := range m.Mutators {
+		if err := marshalMutator(e, m); err != nil {
+			return err
+		}
+	}
+	return e.EncodeToken(xml.EndElement{Name: name})
+}
+
+func (m *multiMutator) MutatorKind() string { return "mutators" }
+
+func (m *multiMutator) Mutate(state *LexerState) error {
+	for _, modifier := range m.Mutators {
+		if err := modifier.Mutate(state); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+// Mutators applies a set of Mutators in order.
+func Mutators(modifiers ...Mutator) Mutator {
+	return &multiMutator{modifiers}
+}
+
+type includeMutator struct {
+	State string `xml:"state,attr"`
+}
+
+// Include the given state.
+func Include(state string) Rule {
+	return Rule{Mutator: &includeMutator{state}}
+}
+
+func (i *includeMutator) MutatorKind() string { return "include" }
+
+func (i *includeMutator) Mutate(s *LexerState) error {
+	return fmt.Errorf("should never reach here Include(%q)", i.State)
+}
+
+func (i *includeMutator) MutateLexer(rules CompiledRules, state string, rule int) error {
+	includedRules, ok := rules[i.State]
+	if !ok {
+		return fmt.Errorf("invalid include state %q", i.State)
+	}
+	rules[state] = append(rules[state][:rule], append(includedRules, rules[state][rule+1:]...)...)
+	return nil
+}
+
+type combinedMutator struct {
+	States []string `xml:"state,attr"`
+}
+
+func (c *combinedMutator) MutatorKind() string { return "combined" }
+
+// Combined creates a new anonymous state from the given states, and pushes that state.
+func Combined(states ...string) Mutator {
+	return &combinedMutator{states}
+}
+
+func (c *combinedMutator) Mutate(s *LexerState) error {
+	return fmt.Errorf("should never reach here Combined(%v)", c.States)
+}
+
+func (c *combinedMutator) MutateLexer(rules CompiledRules, state string, rule int) error {
+	name := "__combined_" + strings.Join(c.States, "__")
+	if _, ok := rules[name]; !ok {
+		combined := []*CompiledRule{}
+		for _, state := range c.States {
+			rules, ok := rules[state]
+			if !ok {
+				return fmt.Errorf("invalid combine state %q", state)
+			}
+			combined = append(combined, rules...)
+		}
+		rules[name] = combined
+	}
+	rules[state][rule].Mutator = Push(name)
+	return nil
+}
+
+type pushMutator struct {
+	States []string `xml:"state,attr"`
+}
+
+func (p *pushMutator) MutatorKind() string { return "push" }
+
+func (p *pushMutator) Mutate(s *LexerState) error {
+	if len(p.States) == 0 {
+		s.Stack = append(s.Stack, s.State)
+	} else {
+		for _, state := range p.States {
+			if state == "#pop" {
+				s.Stack = s.Stack[:len(s.Stack)-1]
+			} else {
+				s.Stack = append(s.Stack, state)
+			}
+		}
+	}
+	return nil
+}
+
+// Push states onto the stack.
+func Push(states ...string) Mutator {
+	return &pushMutator{states}
+}
+
+type popMutator struct {
+	Depth int `xml:"depth,attr"`
+}
+
+func (p *popMutator) MutatorKind() string { return "pop" }
+
+func (p *popMutator) Mutate(state *LexerState) error {
+	if len(state.Stack) == 0 {
+		return fmt.Errorf("nothing to pop")
+	}
+	state.Stack = state.Stack[:len(state.Stack)-p.Depth]
+	return nil
+}
+
+// Pop removes n states from the stack when the rule matches.
+func Pop(n int) Mutator {
+	return &popMutator{n}
+}
+
+// Default returns a Rule that applies a set of Mutators.
+func Default(mutators ...Mutator) Rule {
+	return Rule{Mutator: Mutators(mutators...)}
+}
+
+// Stringify returns the raw string for a set of tokens.
+func Stringify(tokens ...Token) string {
+	out := []string{}
+	for _, t := range tokens {
+		out = append(out, t.Value)
+	}
+	return strings.Join(out, "")
+}

vendor/github.com/alecthomas/chroma/v2/pygments-lexers.txt

@@ -0,0 +1,322 @@
+Generated with:
+
+  g 'class.*RegexLexer' | pawk --strict -F: '"pygments.lexers.%s.%s" % (f[0].split(".")[0], f[2].split()[1].split("(")[0])' > lexers.txt
+
+kotlin:
+  invalid unicode escape sequences
+  FIXED: Have to disable wide Unicode characters in unistring.py
+
+pygments.lexers.ambient.AmbientTalkLexer
+pygments.lexers.ampl.AmplLexer
+pygments.lexers.actionscript.ActionScriptLexer
+pygments.lexers.actionscript.ActionScript3Lexer
+pygments.lexers.actionscript.MxmlLexer
+pygments.lexers.algebra.GAPLexer
+pygments.lexers.algebra.MathematicaLexer
+pygments.lexers.algebra.MuPADLexer
+pygments.lexers.algebra.BCLexer
+pygments.lexers.apl.APLLexer
+pygments.lexers.bibtex.BibTeXLexer
+pygments.lexers.bibtex.BSTLexer
+pygments.lexers.basic.BlitzMaxLexer
+pygments.lexers.basic.BlitzBasicLexer
+pygments.lexers.basic.MonkeyLexer
+pygments.lexers.basic.CbmBasicV2Lexer
+pygments.lexers.basic.QBasicLexer
+pygments.lexers.automation.AutohotkeyLexer
+pygments.lexers.automation.AutoItLexer
+pygments.lexers.archetype.AtomsLexer
+pygments.lexers.c_like.ClayLexer
+pygments.lexers.c_like.ValaLexer
+pygments.lexers.asm.GasLexer
+pygments.lexers.asm.ObjdumpLexer
+pygments.lexers.asm.HsailLexer
+pygments.lexers.asm.LlvmLexer
+pygments.lexers.asm.NasmLexer
+pygments.lexers.asm.TasmLexer
+pygments.lexers.asm.Ca65Lexer
+pygments.lexers.business.CobolLexer
+pygments.lexers.business.ABAPLexer
+pygments.lexers.business.OpenEdgeLexer
+pygments.lexers.business.GoodDataCLLexer
+pygments.lexers.business.MaqlLexer
+pygments.lexers.capnproto.CapnProtoLexer
+pygments.lexers.chapel.ChapelLexer
+pygments.lexers.clean.CleanLexer
+pygments.lexers.c_cpp.CFamilyLexer
+pygments.lexers.console.VCTreeStatusLexer
+pygments.lexers.console.PyPyLogLexer
+pygments.lexers.csound.CsoundLexer
+pygments.lexers.csound.CsoundDocumentLexer
+pygments.lexers.csound.CsoundDocumentLexer
+pygments.lexers.crystal.CrystalLexer
+pygments.lexers.dalvik.SmaliLexer
+pygments.lexers.css.CssLexer
+pygments.lexers.css.SassLexer
+pygments.lexers.css.ScssLexer
+pygments.lexers.configs.IniLexer
+pygments.lexers.configs.RegeditLexer
+pygments.lexers.configs.PropertiesLexer
+pygments.lexers.configs.KconfigLexer
+pygments.lexers.configs.Cfengine3Lexer
+pygments.lexers.configs.ApacheConfLexer
+pygments.lexers.configs.SquidConfLexer
+pygments.lexers.configs.NginxConfLexer
+pygments.lexers.configs.LighttpdConfLexer
+pygments.lexers.configs.DockerLexer
+pygments.lexers.configs.TerraformLexer
+pygments.lexers.configs.TermcapLexer
+pygments.lexers.configs.TerminfoLexer
+pygments.lexers.configs.PkgConfigLexer
+pygments.lexers.configs.PacmanConfLexer
+pygments.lexers.data.YamlLexer
+pygments.lexers.data.JsonLexer
+pygments.lexers.diff.DiffLexer
+pygments.lexers.diff.DarcsPatchLexer
+pygments.lexers.diff.WDiffLexer
+pygments.lexers.dotnet.CSharpLexer
+pygments.lexers.dotnet.NemerleLexer
+pygments.lexers.dotnet.BooLexer
+pygments.lexers.dotnet.VbNetLexer
+pygments.lexers.dotnet.GenericAspxLexer
+pygments.lexers.dotnet.FSharpLexer
+pygments.lexers.dylan.DylanLexer
+pygments.lexers.dylan.DylanLidLexer
+pygments.lexers.ecl.ECLLexer
+pygments.lexers.eiffel.EiffelLexer
+pygments.lexers.dsls.ProtoBufLexer
+pygments.lexers.dsls.ThriftLexer
+pygments.lexers.dsls.BroLexer
+pygments.lexers.dsls.PuppetLexer
+pygments.lexers.dsls.RslLexer
+pygments.lexers.dsls.MscgenLexer
+pygments.lexers.dsls.VGLLexer
+pygments.lexers.dsls.AlloyLexer
+pygments.lexers.dsls.PanLexer
+pygments.lexers.dsls.CrmshLexer
+pygments.lexers.dsls.FlatlineLexer
+pygments.lexers.dsls.SnowballLexer
+pygments.lexers.elm.ElmLexer
+pygments.lexers.erlang.ErlangLexer
+pygments.lexers.erlang.ElixirLexer
+pygments.lexers.ezhil.EzhilLexer
+pygments.lexers.esoteric.BrainfuckLexer
+pygments.lexers.esoteric.BefungeLexer
+pygments.lexers.esoteric.CAmkESLexer
+pygments.lexers.esoteric.CapDLLexer
+pygments.lexers.esoteric.RedcodeLexer
+pygments.lexers.esoteric.AheuiLexer
+pygments.lexers.factor.FactorLexer
+pygments.lexers.fantom.FantomLexer
+pygments.lexers.felix.FelixLexer
+pygments.lexers.forth.ForthLexer
+pygments.lexers.fortran.FortranLexer
+pygments.lexers.fortran.FortranFixedLexer
+pygments.lexers.go.GoLexer
+pygments.lexers.foxpro.FoxProLexer
+pygments.lexers.graph.CypherLexer
+pygments.lexers.grammar_notation.BnfLexer
+pygments.lexers.grammar_notation.AbnfLexer
+pygments.lexers.grammar_notation.JsgfLexer
+pygments.lexers.graphics.GLShaderLexer
+pygments.lexers.graphics.PostScriptLexer
+pygments.lexers.graphics.AsymptoteLexer
+pygments.lexers.graphics.GnuplotLexer
+pygments.lexers.graphics.PovrayLexer
+pygments.lexers.hexdump.HexdumpLexer
+pygments.lexers.haskell.HaskellLexer
+pygments.lexers.haskell.IdrisLexer
+pygments.lexers.haskell.AgdaLexer
+pygments.lexers.haskell.CryptolLexer
+pygments.lexers.haskell.KokaLexer
+pygments.lexers.haxe.HaxeLexer
+pygments.lexers.haxe.HxmlLexer
+pygments.lexers.hdl.VerilogLexer
+pygments.lexers.hdl.SystemVerilogLexer
+pygments.lexers.hdl.VhdlLexer
+pygments.lexers.idl.IDLLexer
+pygments.lexers.inferno.LimboLexer
+pygments.lexers.igor.IgorLexer
+pygments.lexers.html.HtmlLexer
+pygments.lexers.html.DtdLexer
+pygments.lexers.html.XmlLexer
+pygments.lexers.html.HamlLexer
+pygments.lexers.html.ScamlLexer
+pygments.lexers.html.PugLexer
+pygments.lexers.installers.NSISLexer
+pygments.lexers.installers.RPMSpecLexer
+pygments.lexers.installers.SourcesListLexer
+pygments.lexers.installers.DebianControlLexer
+pygments.lexers.iolang.IoLexer
+pygments.lexers.julia.JuliaLexer
+pygments.lexers.int_fiction.Inform6Lexer
+pygments.lexers.int_fiction.Inform7Lexer
+pygments.lexers.int_fiction.Tads3Lexer
+pygments.lexers.make.BaseMakefileLexer
+pygments.lexers.make.CMakeLexer
+pygments.lexers.javascript.JavascriptLexer
+pygments.lexers.javascript.KalLexer
+pygments.lexers.javascript.LiveScriptLexer
+pygments.lexers.javascript.DartLexer
+pygments.lexers.javascript.TypeScriptLexer
+pygments.lexers.javascript.LassoLexer
+pygments.lexers.javascript.ObjectiveJLexer
+pygments.lexers.javascript.CoffeeScriptLexer
+pygments.lexers.javascript.MaskLexer
+pygments.lexers.javascript.EarlGreyLexer
+pygments.lexers.javascript.JuttleLexer
+pygments.lexers.jvm.JavaLexer
+pygments.lexers.jvm.ScalaLexer
+pygments.lexers.jvm.GosuLexer
+pygments.lexers.jvm.GroovyLexer
+pygments.lexers.jvm.IokeLexer
+pygments.lexers.jvm.ClojureLexer
+pygments.lexers.jvm.TeaLangLexer
+pygments.lexers.jvm.CeylonLexer
+pygments.lexers.jvm.KotlinLexer
+pygments.lexers.jvm.XtendLexer
+pygments.lexers.jvm.PigLexer
+pygments.lexers.jvm.GoloLexer
+pygments.lexers.jvm.JasminLexer
+pygments.lexers.markup.BBCodeLexer
+pygments.lexers.markup.MoinWikiLexer
+pygments.lexers.markup.RstLexer
+pygments.lexers.markup.TexLexer
+pygments.lexers.markup.GroffLexer
+pygments.lexers.markup.MozPreprocHashLexer
+pygments.lexers.markup.MarkdownLexer
+pygments.lexers.ml.SMLLexer
+pygments.lexers.ml.OcamlLexer
+pygments.lexers.ml.OpaLexer
+pygments.lexers.modeling.ModelicaLexer
+pygments.lexers.modeling.BugsLexer
+pygments.lexers.modeling.JagsLexer
+pygments.lexers.modeling.StanLexer
+pygments.lexers.matlab.MatlabLexer
+pygments.lexers.matlab.OctaveLexer
+pygments.lexers.matlab.ScilabLexer
+pygments.lexers.monte.MonteLexer
+pygments.lexers.lisp.SchemeLexer
+pygments.lexers.lisp.CommonLispLexer
+pygments.lexers.lisp.HyLexer
+pygments.lexers.lisp.RacketLexer
+pygments.lexers.lisp.NewLispLexer
+pygments.lexers.lisp.EmacsLispLexer
+pygments.lexers.lisp.ShenLexer
+pygments.lexers.lisp.XtlangLexer
+pygments.lexers.modula2.Modula2Lexer
+pygments.lexers.ncl.NCLLexer
+pygments.lexers.nim.NimLexer
+pygments.lexers.nit.NitLexer
+pygments.lexers.nix.NixLexer
+pygments.lexers.oberon.ComponentPascalLexer
+pygments.lexers.ooc.OocLexer
+pygments.lexers.objective.SwiftLexer
+pygments.lexers.parasail.ParaSailLexer
+pygments.lexers.pawn.SourcePawnLexer
+pygments.lexers.pawn.PawnLexer
+pygments.lexers.pascal.AdaLexer
+pygments.lexers.parsers.RagelLexer
+pygments.lexers.parsers.RagelEmbeddedLexer
+pygments.lexers.parsers.AntlrLexer
+pygments.lexers.parsers.TreetopBaseLexer
+pygments.lexers.parsers.EbnfLexer
+pygments.lexers.php.ZephirLexer
+pygments.lexers.php.PhpLexer
+pygments.lexers.perl.PerlLexer
+pygments.lexers.perl.Perl6Lexer
+pygments.lexers.praat.PraatLexer
+pygments.lexers.prolog.PrologLexer
+pygments.lexers.prolog.LogtalkLexer
+pygments.lexers.qvt.QVToLexer
+pygments.lexers.rdf.SparqlLexer
+pygments.lexers.rdf.TurtleLexer
+pygments.lexers.python.PythonLexer
+pygments.lexers.python.Python3Lexer
+pygments.lexers.python.PythonTracebackLexer
+pygments.lexers.python.Python3TracebackLexer
+pygments.lexers.python.CythonLexer
+pygments.lexers.python.DgLexer
+pygments.lexers.rebol.RebolLexer
+pygments.lexers.rebol.RedLexer
+pygments.lexers.resource.ResourceLexer
+pygments.lexers.rnc.RNCCompactLexer
+pygments.lexers.roboconf.RoboconfGraphLexer
+pygments.lexers.roboconf.RoboconfInstancesLexer
+pygments.lexers.rust.RustLexer
+pygments.lexers.ruby.RubyLexer
+pygments.lexers.ruby.FancyLexer
+pygments.lexers.sas.SASLexer
+pygments.lexers.smalltalk.SmalltalkLexer
+pygments.lexers.smalltalk.NewspeakLexer
+pygments.lexers.smv.NuSMVLexer
+pygments.lexers.shell.BashLexer
+pygments.lexers.shell.BatchLexer
+pygments.lexers.shell.TcshLexer
+pygments.lexers.shell.PowerShellLexer
+pygments.lexers.shell.FishShellLexer
+pygments.lexers.snobol.SnobolLexer
+pygments.lexers.scripting.LuaLexer
+pygments.lexers.scripting.ChaiscriptLexer
+pygments.lexers.scripting.LSLLexer
+pygments.lexers.scripting.AppleScriptLexer
+pygments.lexers.scripting.RexxLexer
+pygments.lexers.scripting.MOOCodeLexer
+pygments.lexers.scripting.HybrisLexer
+pygments.lexers.scripting.EasytrieveLexer
+pygments.lexers.scripting.JclLexer
+pygments.lexers.supercollider.SuperColliderLexer
+pygments.lexers.stata.StataLexer
+pygments.lexers.tcl.TclLexer
+pygments.lexers.sql.PostgresLexer
+pygments.lexers.sql.PlPgsqlLexer
+pygments.lexers.sql.PsqlRegexLexer
+pygments.lexers.sql.SqlLexer
+pygments.lexers.sql.TransactSqlLexer
+pygments.lexers.sql.MySqlLexer
+pygments.lexers.sql.RqlLexer
+pygments.lexers.testing.GherkinLexer
+pygments.lexers.testing.TAPLexer
+pygments.lexers.textedit.AwkLexer
+pygments.lexers.textedit.VimLexer
+pygments.lexers.textfmts.IrcLogsLexer
+pygments.lexers.textfmts.GettextLexer
+pygments.lexers.textfmts.HttpLexer
+pygments.lexers.textfmts.TodotxtLexer
+pygments.lexers.trafficscript.RtsLexer
+pygments.lexers.theorem.CoqLexer
+pygments.lexers.theorem.IsabelleLexer
+pygments.lexers.theorem.LeanLexer
+pygments.lexers.templates.SmartyLexer
+pygments.lexers.templates.VelocityLexer
+pygments.lexers.templates.DjangoLexer
+pygments.lexers.templates.MyghtyLexer
+pygments.lexers.templates.MasonLexer
+pygments.lexers.templates.MakoLexer
+pygments.lexers.templates.CheetahLexer
+pygments.lexers.templates.GenshiTextLexer
+pygments.lexers.templates.GenshiMarkupLexer
+pygments.lexers.templates.JspRootLexer
+pygments.lexers.templates.EvoqueLexer
+pygments.lexers.templates.ColdfusionLexer
+pygments.lexers.templates.ColdfusionMarkupLexer
+pygments.lexers.templates.TeaTemplateRootLexer
+pygments.lexers.templates.HandlebarsLexer
+pygments.lexers.templates.LiquidLexer
+pygments.lexers.templates.TwigLexer
+pygments.lexers.templates.Angular2Lexer
+pygments.lexers.urbi.UrbiscriptLexer
+pygments.lexers.typoscript.TypoScriptCssDataLexer
+pygments.lexers.typoscript.TypoScriptHtmlDataLexer
+pygments.lexers.typoscript.TypoScriptLexer
+pygments.lexers.varnish.VCLLexer
+pygments.lexers.verification.BoogieLexer
+pygments.lexers.verification.SilverLexer
+pygments.lexers.x10.X10Lexer
+pygments.lexers.whiley.WhileyLexer
+pygments.lexers.xorg.XorgLexer
+pygments.lexers.webmisc.DuelLexer
+pygments.lexers.webmisc.XQueryLexer
+pygments.lexers.webmisc.QmlLexer
+pygments.lexers.webmisc.CirruLexer
+pygments.lexers.webmisc.SlimLexer

vendor/github.com/alecthomas/chroma/v2/quick/quick.go

@@ -0,0 +1,44 @@
+// Package quick provides simple, no-configuration source code highlighting.
+package quick
+
+import (
+	"io"
+
+	"github.com/alecthomas/chroma/v2"
+	"github.com/alecthomas/chroma/v2/formatters"
+	"github.com/alecthomas/chroma/v2/lexers"
+	"github.com/alecthomas/chroma/v2/styles"
+)
+
+// Highlight some text.
+//
+// Lexer, formatter and style may be empty, in which case a best-effort is made.
+func Highlight(w io.Writer, source, lexer, formatter, style string) error {
+	// Determine lexer.
+	l := lexers.Get(lexer)
+	if l == nil {
+		l = lexers.Analyse(source)
+	}
+	if l == nil {
+		l = lexers.Fallback
+	}
+	l = chroma.Coalesce(l)
+
+	// Determine formatter.
+	f := formatters.Get(formatter)
+	if f == nil {
+		f = formatters.Fallback
+	}
+
+	// Determine style.
+	s := styles.Get(style)
+	if s == nil {
+		s = styles.Fallback
+	}
+
+	it, err := l.Tokenise(nil, source)
+	if err != nil {
+		return err
+	}
+	return f.Format(w, s, it)
+}

vendor/github.com/alecthomas/chroma/v2/regexp.go

@@ -0,0 +1,489 @@
+package chroma
+
+import (
+	"fmt"
+	"os"
+	"path/filepath"
+	"regexp"
+	"sort"
+	"strings"
+	"sync"
+	"time"
+	"unicode/utf8"
+
+	"github.com/dlclark/regexp2"
+)
+
+// A Rule is the fundamental matching unit of the Regex lexer state machine.
+type Rule struct {
+	Pattern string
+	Type    Emitter
+	Mutator Mutator
+}
+
+// Words creates a regex that matches any of the given literal words.
+func Words(prefix, suffix string, words ...string) string {
+	sort.Slice(words, func(i, j int) bool {
+		return len(words[j]) < len(words[i])
+	})
+	for i, word := range words {
+		words[i] = regexp.QuoteMeta(word)
+	}
+	return prefix + `(` + strings.Join(words, `|`) + `)` + suffix
+}
+
+// Tokenise text using lexer, returning tokens as a slice.
+func Tokenise(lexer Lexer, options *TokeniseOptions, text string) ([]Token, error) {
+	var out []Token
+	it, err := lexer.Tokenise(options, text)
+	if err != nil {
+		return nil, err
+	}
+	for t := it(); t != EOF; t = it() {
+		out = append(out, t)
+	}
+	return out, nil
+}
+
+// Rules maps from state to a sequence of Rules.
+type Rules map[string][]Rule
+
+// Rename clones the rules, then renames oldRule to newRule.
+func (r Rules) Rename(oldRule, newRule string) Rules {
+	r = r.Clone()
+	r[newRule] = r[oldRule]
+	delete(r, oldRule)
+	return r
+}
+
+// Clone returns a clone of the Rules.
+func (r Rules) Clone() Rules {
+	out := map[string][]Rule{}
+	for key, rules := range r {
+		out[key] = make([]Rule, len(rules))
+		copy(out[key], rules)
+	}
+	return out
+}
+
+// Merge creates a clone of "r" then merges "rules" into the clone.
+func (r Rules) Merge(rules Rules) Rules {
+	out := r.Clone()
+	for k, v := range rules.Clone() {
+		out[k] = v
+	}
+	return out
+}
+
+// MustNewLexer creates a new Lexer with deferred rules generation or panics.
+func MustNewLexer(config *Config, rules func() Rules) *RegexLexer {
+	lexer, err := NewLexer(config, rules)
+	if err != nil {
+		panic(err)
+	}
+	return lexer
+}
+
+// NewLexer creates a new regex-based Lexer.
+//
+// "rules" is a state machine transition map. Each key is a state. Values are sets of rules
+// that match input, optionally modify lexer state, and output tokens.
+func NewLexer(config *Config, rulesFunc func() Rules) (*RegexLexer, error) {
+	if config == nil {
+		config = &Config{}
+	}
+	for _, glob := range append(config.Filenames, config.AliasFilenames...) {
+		_, err := filepath.Match(glob, "")
+		if err != nil {
+			return nil, fmt.Errorf("%s: %q is not a valid glob: %w", config.Name, glob, err)
+		}
+	}
+	r := &RegexLexer{
+		config:         config,
+		fetchRulesFunc: func() (Rules, error) { return rulesFunc(), nil },
+	}
+	// One-off code to generate XML lexers in the Chroma source tree.
+	// var nameCleanRe = regexp.MustCompile(`[^-+A-Za-z0-9_]`)
+	// name := strings.ToLower(nameCleanRe.ReplaceAllString(config.Name, "_"))
+	// data, err := Marshal(r)
+	// if err != nil {
+	// 	if errors.Is(err, ErrNotSerialisable) {
+	// 		fmt.Fprintf(os.Stderr, "warning: %q: %s\n", name, err)
+	// 		return r, nil
+	// 	}
+	// 	return nil, err
+	// }
+	// _, file, _, ok := runtime.Caller(2)
+	// if !ok {
+	// 	panic("??")
+	// }
+	// fmt.Println(file)
+	// if strings.Contains(file, "/lexers/") {
+	// 	dir := filepath.Join(filepath.Dir(file), "embedded")
+	// 	err = os.MkdirAll(dir, 0700)
+	// 	if err != nil {
+	// 		return nil, err
+	// 	}
+	// 	filename := filepath.Join(dir, name) + ".xml"
+	// 	fmt.Println(filename)
+	// 	err = ioutil.WriteFile(filename, data, 0600)
+	// 	if err != nil {
+	// 		return nil, err
+	// 	}
+	// }
+	return r, nil
+}
+
+// Trace enables debug tracing.
+func (r *RegexLexer) Trace(trace bool) *RegexLexer {
+	r.trace = trace
+	return r
+}
+
+// A CompiledRule is a Rule with a pre-compiled regex.
+//
+// Note that regular expressions are lazily compiled on first use of the lexer.
+type CompiledRule struct {
+	Rule
+	Regexp *regexp2.Regexp
+	flags  string
+}
+
+// CompiledRules is a map of state name to the sequence of compiled rules in that state.
+type CompiledRules map[string][]*CompiledRule
+
+// LexerState contains the state for a single lex.
+type LexerState struct {
+	Lexer    *RegexLexer
+	Registry *LexerRegistry
+	Text     []rune
+	Pos      int
+	Rules    CompiledRules
+	Stack    []string
+	State    string
+	Rule     int
+	// Group matches.
+	Groups []string
+	// Named Group matches.
+	NamedGroups map[string]string
+	// Custom context for mutators.
+	MutatorContext map[interface{}]interface{}
+	iteratorStack  []Iterator
+	options        *TokeniseOptions
+	newlineAdded   bool
+}
+
+// Set mutator context.
+func (l *LexerState) Set(key interface{}, value interface{}) {
+	l.MutatorContext[key] = value
+}
+
+// Get mutator context.
+func (l *LexerState) Get(key interface{}) interface{} {
+	return l.MutatorContext[key]
+}
+
+// Iterator returns the next Token from the lexer.
+func (l *LexerState) Iterator() Token { // nolint: gocognit
+	end := len(l.Text)
+	if l.newlineAdded {
+		end--
+	}
+	for l.Pos < end && len(l.Stack) > 0 {
+		// Exhaust the iterator stack, if any.
+		for len(l.iteratorStack) > 0 {
+			n := len(l.iteratorStack) - 1
+			t := l.iteratorStack[n]()
+			if t.Type == Ignore {
+				continue
+			}
+			if t == EOF {
+				l.iteratorStack = l.iteratorStack[:n]
+				continue
+			}
+			return t
+		}
+
+		l.State = l.Stack[len(l.Stack)-1]
+		if l.Lexer.trace {
+			fmt.Fprintf(os.Stderr, "%s: pos=%d, text=%q\n", l.State, l.Pos, string(l.Text[l.Pos:]))
+		}
+		selectedRule, ok := l.Rules[l.State]
+		if !ok {
+			panic("unknown state " + l.State)
+		}
+		ruleIndex, rule, groups, namedGroups := matchRules(l.Text, l.Pos, selectedRule)
+		// No match.
+		if groups == nil {
+			// From Pygments :\
+			//
+			// If the RegexLexer encounters a newline that is flagged as an error token, the stack is
+			// emptied and the lexer continues scanning in the 'root' state. This can help producing
+			// error-tolerant highlighting for erroneous input, e.g. when a single-line string is not
+			// closed.
+			if l.Text[l.Pos] == '\n' && l.State != l.options.State {
+				l.Stack = []string{l.options.State}
+				continue
+			}
+			l.Pos++
+			return Token{Error, string(l.Text[l.Pos-1 : l.Pos])}
+		}
+		l.Rule = ruleIndex
+		l.Groups = groups
+		l.NamedGroups = namedGroups
+		l.Pos += utf8.RuneCountInString(groups[0])
+		if rule.Mutator != nil {
+			if err := rule.Mutator.Mutate(l); err != nil {
+				panic(err)
+			}
+		}
+		if rule.Type != nil {
+			l.iteratorStack = append(l.iteratorStack, rule.Type.Emit(l.Groups, l))
+		}
+	}
+	// Exhaust the IteratorStack, if any.
+	// Duplicate code, but eh.
+	for len(l.iteratorStack) > 0 {
+		n := len(l.iteratorStack) - 1
+		t := l.iteratorStack[n]()
+		if t.Type == Ignore {
+			continue
+		}
+		if t == EOF {
+			l.iteratorStack = l.iteratorStack[:n]
+			continue
+		}
+		return t
+	}
+
+	// If we get to here and we still have text, return it as an error.
+	if l.Pos != len(l.Text) && len(l.Stack) == 0 {
+		value := string(l.Text[l.Pos:])
+		l.Pos = len(l.Text)
+		return Token{Type: Error, Value: value}
+	}
+	return EOF
+}
+
+// RegexLexer is the default lexer implementation used in Chroma.
+type RegexLexer struct {
+	registry *LexerRegistry // The LexerRegistry this Lexer is associated with, if any.
+	config   *Config
+	analyser func(text string) float32
+	trace    bool
+
+	mu             sync.Mutex
+	compiled       bool
+	rawRules       Rules
+	rules          map[string][]*CompiledRule
+	fetchRulesFunc func() (Rules, error)
+	compileOnce    sync.Once
+}
+
+func (r *RegexLexer) String() string {
+	return r.config.Name
+}
+
+// Rules in the Lexer.
+func (r *RegexLexer) Rules() (Rules, error) {
+	if err := r.needRules(); err != nil {
+		return nil, err
+	}
+	return r.rawRules, nil
+}
+
+// SetRegistry the lexer will use to lookup other lexers if necessary.
+func (r *RegexLexer) SetRegistry(registry *LexerRegistry) Lexer {
+	r.registry = registry
+	return r
+}
+
+// SetAnalyser sets the analyser function used to perform content inspection.
+func (r *RegexLexer) SetAnalyser(analyser func(text string) float32) Lexer {
+	r.analyser = analyser
+	return r
+}
+
+// AnalyseText scores how likely a fragment of text is to match this lexer, between 0.0 and 1.0.
+func (r *RegexLexer) AnalyseText(text string) float32 {
+	if r.analyser != nil {
+		return r.analyser(text)
+	}
+	return 0
+}
+
+// SetConfig replaces the Config for this Lexer.
+func (r *RegexLexer) SetConfig(config *Config) *RegexLexer {
+	r.config = config
+	return r
+}
+
+// Config returns the Config for this Lexer.
+func (r *RegexLexer) Config() *Config {
+	return r.config
+}
+
+// Regex compilation is deferred until the lexer is used. This is to avoid significant init() time costs.
+func (r *RegexLexer) maybeCompile() (err error) {
+	r.mu.Lock()
+	defer r.mu.Unlock()
+	if r.compiled {
+		return nil
+	}
+	for state, rules := range r.rules {
+		for i, rule := range rules {
+			if rule.Regexp == nil {
+				pattern := "(?:" + rule.Pattern + ")"
+				if rule.flags != "" {
+					pattern = "(?" + rule.flags + ")" + pattern
+				}
+				pattern = `\G` + pattern
+				rule.Regexp, err = regexp2.Compile(pattern, 0)
+				if err != nil {
+					return fmt.Errorf("failed to compile rule %s.%d: %s", state, i, err)
+				}
+				rule.Regexp.MatchTimeout = time.Millisecond * 250
+			}
+		}
+	}
+restart:
+	seen := map[LexerMutator]bool{}
+	for state := range r.rules {
+		for i := 0; i < len(r.rules[state]); i++ {
+			rule := r.rules[state][i]
+			if compile, ok := rule.Mutator.(LexerMutator); ok {
+				if seen[compile] {
+					return fmt.Errorf("saw mutator %T twice; this should not happen", compile)
+				}
+				seen[compile] = true
+				if err := compile.MutateLexer(r.rules, state, i); err != nil {
+					return err
+				}
+				// Process the rules again in case the mutator added/removed rules.
+				//
+				// This sounds bad, but shouldn't be significant in practice.
+				goto restart
+			}
+		}
+	}
+	r.compiled = true
+	return nil
+}
+
+func (r *RegexLexer) fetchRules() error {
+	rules, err := r.fetchRulesFunc()
+	if err != nil {
+		return fmt.Errorf("%s: failed to compile rules: %w", r.config.Name, err)
+	}
+	if _, ok := rules["root"]; !ok {
+		return fmt.Errorf("no \"root\" state")
+	}
+	compiledRules := map[string][]*CompiledRule{}
+	for state, rules := range rules {
+		compiledRules[state] = nil
+		for _, rule := range rules {
+			flags := ""
+			if !r.config.NotMultiline {
+				flags += "m"
+			}
+			if r.config.CaseInsensitive {
+				flags += "i"
+			}
+			if r.config.DotAll {
+				flags += "s"
+			}
+			compiledRules[state] = append(compiledRules[state], &CompiledRule{Rule: rule, flags: flags})
+		}
+	}
+
+	r.rawRules = rules
+	r.rules = compiledRules
+	return nil
+}
+
+func (r *RegexLexer) needRules() error {
+	var err error
+	if r.fetchRulesFunc != nil {
+		r.compileOnce.Do(func() {
+			err = r.fetchRules()
+		})
+	}
+	if err := r.maybeCompile(); err != nil {
+		return err
+	}
+	return err
+}
+
+// Tokenise text using lexer, returning an iterator.
+func (r *RegexLexer) Tokenise(options *TokeniseOptions, text string) (Iterator, error) {
+	err := r.needRules()
+	if err != nil {
+		return nil, err
+	}
+	if options == nil {
+		options = defaultOptions
+	}
+	if options.EnsureLF {
+		text = ensureLF(text)
+	}
+	newlineAdded := false
+	if !options.Nested && r.config.EnsureNL && !strings.HasSuffix(text, "\n") {
+		text += "\n"
+		newlineAdded = true
+	}
+	state := &LexerState{
+		Registry:       r.registry,
+		newlineAdded:   newlineAdded,
+		options:        options,
+		Lexer:          r,
+		Text:           []rune(text),
+		Stack:          []string{options.State},
+		Rules:          r.rules,
+		MutatorContext: map[interface{}]interface{}{},
+	}
+	return state.Iterator, nil
+}
+
+// MustRules is like Rules() but will panic on error.
+func (r *RegexLexer) MustRules() Rules {
+	rules, err := r.Rules()
+	if err != nil {
+		panic(err)
+	}
+	return rules
+}
+
+func matchRules(text []rune, pos int, rules []*CompiledRule) (int, *CompiledRule, []string, map[string]string) {
+	for i, rule := range rules {
+		match, err := rule.Regexp.FindRunesMatchStartingAt(text, pos)
+		if match != nil && err == nil && match.Index == pos {
+			groups := []string{}
+			namedGroups := make(map[string]string)
+			for _, g := range match.Groups() {
+				namedGroups[g.Name] = g.String()
+				groups = append(groups, g.String())
+			}
+			return i, rule, groups, namedGroups
+		}
+	}
+	return 0, &CompiledRule{}, nil, nil
+}
+
+// ensureLF replaces \r and \r\n with \n.
+// It does the same job as chained strings.ReplaceAll calls, but more efficiently.
+func ensureLF(text string) string {
+	buf := make([]byte, len(text))
+	var j int
+	for i := 0; i < len(text); i++ {
+		c := text[i]
+		if c == '\r' {
+			if i < len(text)-1 && text[i+1] == '\n' {
+				continue
+			}
+			c = '\n'
+		}
+		buf[j] = c
+		j++
+	}
+	return string(buf[:j])
+}

vendor/github.com/alecthomas/chroma/v2/registry.go 🔗

@@ -0,0 +1,210 @@
+package chroma
+
+import (
+	"path/filepath"
+	"sort"
+	"strings"
+)
+
+var (
+	ignoredSuffixes = [...]string{
+		// Editor backups
+		"~", ".bak", ".old", ".orig",
+		// Debian and derivatives apt/dpkg/ucf backups
+		".dpkg-dist", ".dpkg-old", ".ucf-dist", ".ucf-new", ".ucf-old",
+		// Red Hat and derivatives rpm backups
+		".rpmnew", ".rpmorig", ".rpmsave",
+		// Build system input/template files
+		".in",
+	}
+)
+
+// LexerRegistry is a registry of Lexers.
+type LexerRegistry struct {
+	Lexers  Lexers
+	byName  map[string]Lexer
+	byAlias map[string]Lexer
+}
+
+// NewLexerRegistry creates a new LexerRegistry of Lexers.
+func NewLexerRegistry() *LexerRegistry {
+	return &LexerRegistry{
+		byName:  map[string]Lexer{},
+		byAlias: map[string]Lexer{},
+	}
+}
+
+// Names of all lexers, optionally including aliases.
+func (l *LexerRegistry) Names(withAliases bool) []string {
+	out := []string{}
+	for _, lexer := range l.Lexers {
+		config := lexer.Config()
+		out = append(out, config.Name)
+		if withAliases {
+			out = append(out, config.Aliases...)
+		}
+	}
+	sort.Strings(out)
+	return out
+}
+
+// Get a Lexer by name, alias or file extension.
+func (l *LexerRegistry) Get(name string) Lexer {
+	if lexer := l.byName[name]; lexer != nil {
+		return lexer
+	}
+	if lexer := l.byAlias[name]; lexer != nil {
+		return lexer
+	}
+	if lexer := l.byName[strings.ToLower(name)]; lexer != nil {
+		return lexer
+	}
+	if lexer := l.byAlias[strings.ToLower(name)]; lexer != nil {
+		return lexer
+	}
+
+	candidates := PrioritisedLexers{}
+	// Try file extension.
+	if lexer := l.Match("filename." + name); lexer != nil {
+		candidates = append(candidates, lexer)
+	}
+	// Try exact filename.
+	if lexer := l.Match(name); lexer != nil {
+		candidates = append(candidates, lexer)
+	}
+	if len(candidates) == 0 {
+		return nil
+	}
+	sort.Sort(candidates)
+	return candidates[0]
+}
+
+// MatchMimeType attempts to find a lexer for the given MIME type.
+func (l *LexerRegistry) MatchMimeType(mimeType string) Lexer {
+	matched := PrioritisedLexers{}
+	for _, l := range l.Lexers {
+		for _, lmt := range l.Config().MimeTypes {
+			if mimeType == lmt {
+				matched = append(matched, l)
+			}
+		}
+	}
+	if len(matched) != 0 {
+		sort.Sort(matched)
+		return matched[0]
+	}
+	return nil
+}
+
+// Match returns the first lexer matching filename.
+//
+// Note that this iterates over all file patterns in all lexers, so is not fast.
+func (l *LexerRegistry) Match(filename string) Lexer {
+	filename = filepath.Base(filename)
+	matched := PrioritisedLexers{}
+	// First, try primary filename matches.
+	for _, lexer := range l.Lexers {
+		config := lexer.Config()
+		for _, glob := range config.Filenames {
+			ok, err := filepath.Match(glob, filename)
+			if err != nil { // nolint
+				panic(err)
+			} else if ok {
+				matched = append(matched, lexer)
+			} else {
+				for _, suf := range &ignoredSuffixes {
+					ok, err := filepath.Match(glob+suf, filename)
+					if err != nil {
+						panic(err)
+					} else if ok {
+						matched = append(matched, lexer)
+						break
+					}
+				}
+			}
+		}
+	}
+	if len(matched) > 0 {
+		sort.Sort(matched)
+		return matched[0]
+	}
+	matched = nil
+	// Next, try filename aliases.
+	for _, lexer := range l.Lexers {
+		config := lexer.Config()
+		for _, glob := range config.AliasFilenames {
+			ok, err := filepath.Match(glob, filename)
+			if err != nil { // nolint
+				panic(err)
+			} else if ok {
+				matched = append(matched, lexer)
+			} else {
+				for _, suf := range &ignoredSuffixes {
+					ok, err := filepath.Match(glob+suf, filename)
+					if err != nil {
+						panic(err)
+					} else if ok {
+						matched = append(matched, lexer)
+						break
+					}
+				}
+			}
+		}
+	}
+	if len(matched) > 0 {
+		sort.Sort(matched)
+		return matched[0]
+	}
+	return nil
+}
+
+// Analyse text content and return the "best" lexer.
+func (l *LexerRegistry) Analyse(text string) Lexer {
+	var picked Lexer
+	highest := float32(0.0)
+	for _, lexer := range l.Lexers {
+		if analyser, ok := lexer.(Analyser); ok {
+			weight := analyser.AnalyseText(text)
+			if weight > highest {
+				picked = lexer
+				highest = weight
+			}
+		}
+	}
+	return picked
+}
+
+// Register a Lexer with the LexerRegistry. If the lexer is already registered
+// it will be replaced.
+func (l *LexerRegistry) Register(lexer Lexer) Lexer {
+	lexer.SetRegistry(l)
+	config := lexer.Config()
+
+	l.byName[config.Name] = lexer
+	l.byName[strings.ToLower(config.Name)] = lexer
+
+	for _, alias := range config.Aliases {
+		l.byAlias[alias] = lexer
+		l.byAlias[strings.ToLower(alias)] = lexer
+	}
+
+	l.Lexers = add(l.Lexers, lexer)
+
+	return lexer
+}
+
+// add adds a lexer to a slice of lexers if it doesn't already exist, or if found will replace it.
+func add(lexers Lexers, lexer Lexer) Lexers {
+	for i, val := range lexers {
+		if val == nil {
+			continue
+		}
+
+		if val.Config().Name == lexer.Config().Name {
+			lexers[i] = lexer
+			return lexers
+		}
+	}
+
+	return append(lexers, lexer)
+}

vendor/github.com/alecthomas/chroma/v2/remap.go 🔗

@@ -0,0 +1,94 @@
+package chroma
+
+type remappingLexer struct {
+	lexer  Lexer
+	mapper func(Token) []Token
+}
+
+// RemappingLexer remaps a token to a set of, potentially empty, tokens.
+func RemappingLexer(lexer Lexer, mapper func(Token) []Token) Lexer {
+	return &remappingLexer{lexer, mapper}
+}
+
+func (r *remappingLexer) AnalyseText(text string) float32 {
+	return r.lexer.AnalyseText(text)
+}
+
+func (r *remappingLexer) SetAnalyser(analyser func(text string) float32) Lexer {
+	r.lexer.SetAnalyser(analyser)
+	return r
+}
+
+func (r *remappingLexer) SetRegistry(registry *LexerRegistry) Lexer {
+	r.lexer.SetRegistry(registry)
+	return r
+}
+
+func (r *remappingLexer) Config() *Config {
+	return r.lexer.Config()
+}
+
+func (r *remappingLexer) Tokenise(options *TokeniseOptions, text string) (Iterator, error) {
+	it, err := r.lexer.Tokenise(options, text)
+	if err != nil {
+		return nil, err
+	}
+	var buffer []Token
+	return func() Token {
+		for {
+			if len(buffer) > 0 {
+				t := buffer[0]
+				buffer = buffer[1:]
+				return t
+			}
+			t := it()
+			if t == EOF {
+				return t
+			}
+			buffer = r.mapper(t)
+		}
+	}, nil
+}
+
+// TypeMapping defines type maps for the TypeRemappingLexer.
+type TypeMapping []struct {
+	From, To TokenType
+	Words    []string
+}
+
+// TypeRemappingLexer remaps types of tokens coming from a parent Lexer.
+//
+// eg. Map "defvaralias" tokens of type NameVariable to NameFunction:
+//
+//	mapping := TypeMapping{
+//		{NameVariable, NameFunction, []string{"defvaralias"}},
+//	}
+//	lexer = TypeRemappingLexer(lexer, mapping)
+func TypeRemappingLexer(lexer Lexer, mapping TypeMapping) Lexer {
+	// Lookup table for fast remapping.
+	lut := map[TokenType]map[string]TokenType{}
+	for _, rt := range mapping {
+		km, ok := lut[rt.From]
+		if !ok {
+			km = map[string]TokenType{}
+			lut[rt.From] = km
+		}
+		if len(rt.Words) == 0 {
+			km[""] = rt.To
+		} else {
+			for _, k := range rt.Words {
+				km[k] = rt.To
+			}
+		}
+	}
+	return RemappingLexer(lexer, func(t Token) []Token {
+		if k, ok := lut[t.Type]; ok {
+			if tt, ok := k[t.Value]; ok {
+				t.Type = tt
+			} else if tt, ok := k[""]; ok {
+				t.Type = tt
+			}
+		}
+		return []Token{t}
+	})
+}

vendor/github.com/alecthomas/chroma/v2/renovate.json5 🔗

@@ -0,0 +1,18 @@
+{
+	$schema: "https://docs.renovatebot.com/renovate-schema.json",
+	extends: [
+		"config:recommended",
+		":semanticCommits",
+		":semanticCommitTypeAll(chore)",
+		":semanticCommitScope(deps)",
+		"group:allNonMajor",
+		"schedule:earlyMondays", // Run once a week.
+	],
+	packageRules: [
+		{
+			matchPackageNames: ["golangci-lint"],
+			matchManagers: ["hermit"],
+			enabled: false,
+		},
+	],
+}

vendor/github.com/alecthomas/chroma/v2/serialise.go 🔗

@@ -0,0 +1,479 @@
+package chroma
+
+import (
+	"compress/gzip"
+	"encoding/xml"
+	"errors"
+	"fmt"
+	"io"
+	"io/fs"
+	"math"
+	"path/filepath"
+	"reflect"
+	"regexp"
+	"strings"
+
+	"github.com/dlclark/regexp2"
+)
+
+// Serialisation of Chroma rules to XML. The format is:
+//
+//	<rules>
+//	  <state name="$STATE">
+//	    <rule [pattern="$PATTERN"]>
+//	      [<$EMITTER ...>]
+//	      [<$MUTATOR ...>]
+//	    </rule>
+//	  </state>
+//	</rules>
+//
+// eg. Include("String") would become:
+//
+//	<rule>
+//	  <include state="String" />
+//	</rule>
+//
+//	[null, null, {"kind": "include", "state": "String"}]
+//
+// eg. Rule{`\d+`, Text, nil} would become:
+//
+//	<rule pattern="\\d+">
+//	  <token type="Text"/>
+//	</rule>
+//
+// eg. Rule{`"`, String, Push("String")}
+//
+//	<rule pattern="\"">
+//	  <token type="String" />
+//	  <push state="String" />
+//	</rule>
+//
+// eg. Rule{`(\w+)(\n)`, ByGroups(Keyword, Whitespace), nil},
+//
+//	<rule pattern="(\\w+)(\\n)">
+//	  <bygroups token="Keyword" token="Whitespace" />
+//	</rule>
+var (
+	// ErrNotSerialisable is returned if a lexer contains Rules that cannot be serialised.
+	ErrNotSerialisable = fmt.Errorf("not serialisable")
+	emitterTemplates   = func() map[string]SerialisableEmitter {
+		out := map[string]SerialisableEmitter{}
+		for _, emitter := range []SerialisableEmitter{
+			&byGroupsEmitter{},
+			&usingSelfEmitter{},
+			TokenType(0),
+			&usingEmitter{},
+			&usingByGroup{},
+		} {
+			out[emitter.EmitterKind()] = emitter
+		}
+		return out
+	}()
+	mutatorTemplates = func() map[string]SerialisableMutator {
+		out := map[string]SerialisableMutator{}
+		for _, mutator := range []SerialisableMutator{
+			&includeMutator{},
+			&combinedMutator{},
+			&multiMutator{},
+			&pushMutator{},
+			&popMutator{},
+		} {
+			out[mutator.MutatorKind()] = mutator
+		}
+		return out
+	}()
+)
+
+// fastUnmarshalConfig unmarshals only the Config from a serialised lexer.
+func fastUnmarshalConfig(from fs.FS, path string) (*Config, error) {
+	r, err := from.Open(path)
+	if err != nil {
+		return nil, err
+	}
+	defer r.Close()
+	dec := xml.NewDecoder(r)
+	for {
+		token, err := dec.Token()
+		if err != nil {
+			if errors.Is(err, io.EOF) {
+				return nil, fmt.Errorf("could not find <config> element")
+			}
+			return nil, err
+		}
+		switch se := token.(type) {
+		case xml.StartElement:
+			if se.Name.Local != "config" {
+				break
+			}
+
+			var config Config
+			err = dec.DecodeElement(&config, &se)
+			if err != nil {
+				return nil, fmt.Errorf("%s: %w", path, err)
+			}
+			return &config, nil
+		}
+	}
+}
+
+// MustNewXMLLexer constructs a new RegexLexer from an XML file or panics.
+func MustNewXMLLexer(from fs.FS, path string) *RegexLexer {
+	lex, err := NewXMLLexer(from, path)
+	if err != nil {
+		panic(err)
+	}
+	return lex
+}
+
+// NewXMLLexer creates a new RegexLexer from a serialised RegexLexer.
+func NewXMLLexer(from fs.FS, path string) (*RegexLexer, error) {
+	config, err := fastUnmarshalConfig(from, path)
+	if err != nil {
+		return nil, err
+	}
+
+	for _, glob := range append(config.Filenames, config.AliasFilenames...) {
+		_, err := filepath.Match(glob, "")
+		if err != nil {
+			return nil, fmt.Errorf("%s: %q is not a valid glob: %w", config.Name, glob, err)
+		}
+	}
+
+	var analyserFn func(string) float32
+
+	if config.Analyse != nil {
+		type regexAnalyse struct {
+			re    *regexp2.Regexp
+			score float32
+		}
+
+		regexAnalysers := make([]regexAnalyse, 0, len(config.Analyse.Regexes))
+
+		for _, ra := range config.Analyse.Regexes {
+			re, err := regexp2.Compile(ra.Pattern, regexp2.None)
+			if err != nil {
+				return nil, fmt.Errorf("%s: %q is not a valid analyser regex: %w", config.Name, ra.Pattern, err)
+			}
+
+			regexAnalysers = append(regexAnalysers, regexAnalyse{re, ra.Score})
+		}
+
+		analyserFn = func(text string) float32 {
+			var score float32
+
+			for _, ra := range regexAnalysers {
+				ok, err := ra.re.MatchString(text)
+				if err != nil {
+					return 0
+				}
+
+				if ok && config.Analyse.First {
+					return float32(math.Min(float64(ra.score), 1.0))
+				}
+
+				if ok {
+					score += ra.score
+				}
+			}
+
+			return float32(math.Min(float64(score), 1.0))
+		}
+	}
+
+	return &RegexLexer{
+		config:   config,
+		analyser: analyserFn,
+		fetchRulesFunc: func() (Rules, error) {
+			var lexer struct {
+				Config
+				Rules Rules `xml:"rules"`
+			}
+			// Try to open .xml, falling back to .xml.gz.
+			fr, err := from.Open(path)
+			if err != nil {
+				if errors.Is(err, fs.ErrNotExist) {
+					path += ".gz"
+					fr, err = from.Open(path)
+					if err != nil {
+						return nil, err
+					}
+				} else {
+					return nil, err
+				}
+			}
+			defer fr.Close()
+			var r io.Reader = fr
+			if strings.HasSuffix(path, ".gz") {
+				r, err = gzip.NewReader(r)
+				if err != nil {
+					return nil, fmt.Errorf("%s: %w", path, err)
+				}
+			}
+			err = xml.NewDecoder(r).Decode(&lexer)
+			if err != nil {
+				return nil, fmt.Errorf("%s: %w", path, err)
+			}
+			return lexer.Rules, nil
+		},
+	}, nil
+}
+
+// Marshal a RegexLexer to XML.
+func Marshal(l *RegexLexer) ([]byte, error) {
+	type lexer struct {
+		Config Config `xml:"config"`
+		Rules  Rules  `xml:"rules"`
+	}
+
+	rules, err := l.Rules()
+	if err != nil {
+		return nil, err
+	}
+	root := &lexer{
+		Config: *l.Config(),
+		Rules:  rules,
+	}
+	data, err := xml.MarshalIndent(root, "", "  ")
+	if err != nil {
+		return nil, err
+	}
+	re := regexp.MustCompile(`></[a-zA-Z]+>`)
+	data = re.ReplaceAll(data, []byte(`/>`))
+	return data, nil
+}
+
+// Unmarshal a RegexLexer from XML.
+func Unmarshal(data []byte) (*RegexLexer, error) {
+	type lexer struct {
+		Config Config `xml:"config"`
+		Rules  Rules  `xml:"rules"`
+	}
+	root := &lexer{}
+	err := xml.Unmarshal(data, root)
+	if err != nil {
+		return nil, fmt.Errorf("invalid Lexer XML: %w", err)
+	}
+	lex, err := NewLexer(&root.Config, func() Rules { return root.Rules })
+	if err != nil {
+		return nil, err
+	}
+	return lex, nil
+}
+
+func marshalMutator(e *xml.Encoder, mutator Mutator) error {
+	if mutator == nil {
+		return nil
+	}
+	smutator, ok := mutator.(SerialisableMutator)
+	if !ok {
+		return fmt.Errorf("unsupported mutator: %w", ErrNotSerialisable)
+	}
+	return e.EncodeElement(mutator, xml.StartElement{Name: xml.Name{Local: smutator.MutatorKind()}})
+}
+
+func unmarshalMutator(d *xml.Decoder, start xml.StartElement) (Mutator, error) {
+	kind := start.Name.Local
+	mutator, ok := mutatorTemplates[kind]
+	if !ok {
+		return nil, fmt.Errorf("unknown mutator %q: %w", kind, ErrNotSerialisable)
+	}
+	value, target := newFromTemplate(mutator)
+	if err := d.DecodeElement(target, &start); err != nil {
+		return nil, err
+	}
+	return value().(SerialisableMutator), nil
+}
+
+func marshalEmitter(e *xml.Encoder, emitter Emitter) error {
+	if emitter == nil {
+		return nil
+	}
+	semitter, ok := emitter.(SerialisableEmitter)
+	if !ok {
+		return fmt.Errorf("unsupported emitter %T: %w", emitter, ErrNotSerialisable)
+	}
+	return e.EncodeElement(emitter, xml.StartElement{
+		Name: xml.Name{Local: semitter.EmitterKind()},
+	})
+}
+
+func unmarshalEmitter(d *xml.Decoder, start xml.StartElement) (Emitter, error) {
+	kind := start.Name.Local
+	mutator, ok := emitterTemplates[kind]
+	if !ok {
+		return nil, fmt.Errorf("unknown emitter %q: %w", kind, ErrNotSerialisable)
+	}
+	value, target := newFromTemplate(mutator)
+	if err := d.DecodeElement(target, &start); err != nil {
+		return nil, err
+	}
+	return value().(SerialisableEmitter), nil
+}
+
+func (r Rule) MarshalXML(e *xml.Encoder, _ xml.StartElement) error {
+	start := xml.StartElement{
+		Name: xml.Name{Local: "rule"},
+	}
+	if r.Pattern != "" {
+		start.Attr = append(start.Attr, xml.Attr{
+			Name:  xml.Name{Local: "pattern"},
+			Value: r.Pattern,
+		})
+	}
+	if err := e.EncodeToken(start); err != nil {
+		return err
+	}
+	if err := marshalEmitter(e, r.Type); err != nil {
+		return err
+	}
+	if err := marshalMutator(e, r.Mutator); err != nil {
+		return err
+	}
+	return e.EncodeToken(xml.EndElement{Name: start.Name})
+}
+
+func (r *Rule) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
+	for _, attr := range start.Attr {
+		if attr.Name.Local == "pattern" {
+			r.Pattern = attr.Value
+			break
+		}
+	}
+	for {
+		token, err := d.Token()
+		if err != nil {
+			return err
+		}
+		switch token := token.(type) {
+		case xml.StartElement:
+			mutator, err := unmarshalMutator(d, token)
+			if err != nil && !errors.Is(err, ErrNotSerialisable) {
+				return err
+			} else if err == nil {
+				if r.Mutator != nil {
+					return fmt.Errorf("duplicate mutator")
+				}
+				r.Mutator = mutator
+				continue
+			}
+			emitter, err := unmarshalEmitter(d, token)
+			if err != nil && !errors.Is(err, ErrNotSerialisable) { // nolint: gocritic
+				return err
+			} else if err == nil {
+				if r.Type != nil {
+					return fmt.Errorf("duplicate emitter")
+				}
+				r.Type = emitter
+				continue
+			} else {
+				return err
+			}
+
+		case xml.EndElement:
+			return nil
+		}
+	}
+}
+
+type xmlRuleState struct {
+	Name  string `xml:"name,attr"`
+	Rules []Rule `xml:"rule"`
+}
+
+type xmlRules struct {
+	States []xmlRuleState `xml:"state"`
+}
+
+func (r Rules) MarshalXML(e *xml.Encoder, _ xml.StartElement) error {
+	xr := xmlRules{}
+	for state, rules := range r {
+		xr.States = append(xr.States, xmlRuleState{
+			Name:  state,
+			Rules: rules,
+		})
+	}
+	return e.EncodeElement(xr, xml.StartElement{Name: xml.Name{Local: "rules"}})
+}
+
+func (r *Rules) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
+	xr := xmlRules{}
+	if err := d.DecodeElement(&xr, &start); err != nil {
+		return err
+	}
+	if *r == nil {
+		*r = Rules{}
+	}
+	for _, state := range xr.States {
+		(*r)[state.Name] = state.Rules
+	}
+	return nil
+}
+
+type xmlTokenType struct {
+	Type string `xml:"type,attr"`
+}
+
+func (t *TokenType) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
+	el := xmlTokenType{}
+	if err := d.DecodeElement(&el, &start); err != nil {
+		return err
+	}
+	tt, err := TokenTypeString(el.Type)
+	if err != nil {
+		return err
+	}
+	*t = tt
+	return nil
+}
+
+func (t TokenType) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
+	start.Attr = append(start.Attr, xml.Attr{Name: xml.Name{Local: "type"}, Value: t.String()})
+	if err := e.EncodeToken(start); err != nil {
+		return err
+	}
+	return e.EncodeToken(xml.EndElement{Name: start.Name})
+}
+
+// These hijinks are a bit unfortunate, but without them we can't deserialise into TokenType.
+func newFromTemplate(template interface{}) (value func() interface{}, target interface{}) {
+	t := reflect.TypeOf(template)
+	if t.Kind() == reflect.Ptr {
+		v := reflect.New(t.Elem())
+		return v.Interface, v.Interface()
+	}
+	v := reflect.New(t)
+	return func() interface{} { return v.Elem().Interface() }, v.Interface()
+}
+
+func (b *Emitters) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
+	for {
+		token, err := d.Token()
+		if err != nil {
+			return err
+		}
+		switch token := token.(type) {
+		case xml.StartElement:
+			emitter, err := unmarshalEmitter(d, token)
+			if err != nil {
+				return err
+			}
+			*b = append(*b, emitter)
+
+		case xml.EndElement:
+			return nil
+		}
+	}
+}
+
+func (b Emitters) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
+	if err := e.EncodeToken(start); err != nil {
+		return err
+	}
+	for _, m := range b {
+		if err := marshalEmitter(e, m); err != nil {
+			return err
+		}
+	}
+	return e.EncodeToken(xml.EndElement{Name: start.Name})
+}

vendor/github.com/alecthomas/chroma/v2/style.go 🔗

@@ -0,0 +1,481 @@
+package chroma
+
+import (
+	"encoding/xml"
+	"fmt"
+	"io"
+	"sort"
+	"strings"
+)
+
+// Trilean value for StyleEntry value inheritance.
+type Trilean uint8
+
+// Trilean states.
+const (
+	Pass Trilean = iota
+	Yes
+	No
+)
+
+func (t Trilean) String() string {
+	switch t {
+	case Yes:
+		return "Yes"
+	case No:
+		return "No"
+	default:
+		return "Pass"
+	}
+}
+
+// Prefix returns s prefixed with "no" if the Trilean is No, s unchanged if it
+// is Yes, and the empty string if it is Pass.
+	if t == Yes {
+		return s
+	} else if t == No {
+		return "no" + s
+	}
+	return ""
+}
+
+// A StyleEntry in the Style map.
+type StyleEntry struct {
+	// Hex colours.
+	Colour     Colour
+	Background Colour
+	Border     Colour
+
+	Bold      Trilean
+	Italic    Trilean
+	Underline Trilean
+	NoInherit bool
+}
+
+func (s StyleEntry) MarshalText() ([]byte, error) {
+	return []byte(s.String()), nil
+}
+
+func (s StyleEntry) String() string {
+	out := []string{}
+	if s.Bold != Pass {
+		out = append(out, s.Bold.Prefix("bold"))
+	}
+	if s.Italic != Pass {
+		out = append(out, s.Italic.Prefix("italic"))
+	}
+	if s.Underline != Pass {
+		out = append(out, s.Underline.Prefix("underline"))
+	}
+	if s.NoInherit {
+		out = append(out, "noinherit")
+	}
+	if s.Colour.IsSet() {
+		out = append(out, s.Colour.String())
+	}
+	if s.Background.IsSet() {
+		out = append(out, "bg:"+s.Background.String())
+	}
+	if s.Border.IsSet() {
+		out = append(out, "border:"+s.Border.String())
+	}
+	return strings.Join(out, " ")
+}
+
+// Sub subtracts e from s where elements match.
+func (s StyleEntry) Sub(e StyleEntry) StyleEntry {
+	out := StyleEntry{}
+	if e.Colour != s.Colour {
+		out.Colour = s.Colour
+	}
+	if e.Background != s.Background {
+		out.Background = s.Background
+	}
+	if e.Bold != s.Bold {
+		out.Bold = s.Bold
+	}
+	if e.Italic != s.Italic {
+		out.Italic = s.Italic
+	}
+	if e.Underline != s.Underline {
+		out.Underline = s.Underline
+	}
+	if e.Border != s.Border {
+		out.Border = s.Border
+	}
+	return out
+}
+
+// Inherit styles from ancestors.
+//
+// Ancestors should be provided from oldest to newest.
+func (s StyleEntry) Inherit(ancestors ...StyleEntry) StyleEntry {
+	out := s
+	for i := len(ancestors) - 1; i >= 0; i-- {
+		if out.NoInherit {
+			return out
+		}
+		ancestor := ancestors[i]
+		if !out.Colour.IsSet() {
+			out.Colour = ancestor.Colour
+		}
+		if !out.Background.IsSet() {
+			out.Background = ancestor.Background
+		}
+		if !out.Border.IsSet() {
+			out.Border = ancestor.Border
+		}
+		if out.Bold == Pass {
+			out.Bold = ancestor.Bold
+		}
+		if out.Italic == Pass {
+			out.Italic = ancestor.Italic
+		}
+		if out.Underline == Pass {
+			out.Underline = ancestor.Underline
+		}
+	}
+	return out
+}
+
+func (s StyleEntry) IsZero() bool {
+	return s.Colour == 0 && s.Background == 0 && s.Border == 0 && s.Bold == Pass && s.Italic == Pass &&
+		s.Underline == Pass && !s.NoInherit
+}
+
+// A StyleBuilder is a mutable structure for building styles.
+//
+// Once built, a Style is immutable.
+type StyleBuilder struct {
+	entries map[TokenType]string
+	name    string
+	parent  *Style
+}
+
+func NewStyleBuilder(name string) *StyleBuilder {
+	return &StyleBuilder{name: name, entries: map[TokenType]string{}}
+}
+
+func (s *StyleBuilder) AddAll(entries StyleEntries) *StyleBuilder {
+	for ttype, entry := range entries {
+		s.entries[ttype] = entry
+	}
+	return s
+}
+
+func (s *StyleBuilder) Get(ttype TokenType) StyleEntry {
+	// This is less than ideal, but it's the price for not having to check errors on each Add().
+	entry, _ := ParseStyleEntry(s.entries[ttype])
+	if s.parent != nil {
+		entry = entry.Inherit(s.parent.Get(ttype))
+	}
+	return entry
+}
+
+// Add an entry to the Style map.
+//
+// See http://pygments.org/docs/styles/#style-rules for details.
+func (s *StyleBuilder) Add(ttype TokenType, entry string) *StyleBuilder { // nolint: gocyclo
+	s.entries[ttype] = entry
+	return s
+}
+
+func (s *StyleBuilder) AddEntry(ttype TokenType, entry StyleEntry) *StyleBuilder {
+	s.entries[ttype] = entry.String()
+	return s
+}
+
+// Transform passes each style entry currently defined in the builder to the supplied
+// function and saves the returned value. This can be used to adjust a style's colours;
+// see Colour's ClampBrightness function, for example.
+func (s *StyleBuilder) Transform(transform func(StyleEntry) StyleEntry) *StyleBuilder {
+	types := make(map[TokenType]struct{})
+	for tt := range s.entries {
+		types[tt] = struct{}{}
+	}
+	if s.parent != nil {
+		for _, tt := range s.parent.Types() {
+			types[tt] = struct{}{}
+		}
+	}
+	for tt := range types {
+		s.AddEntry(tt, transform(s.Get(tt)))
+	}
+	return s
+}
+
+func (s *StyleBuilder) Build() (*Style, error) {
+	style := &Style{
+		Name:    s.name,
+		entries: map[TokenType]StyleEntry{},
+		parent:  s.parent,
+	}
+	for ttype, descriptor := range s.entries {
+		entry, err := ParseStyleEntry(descriptor)
+		if err != nil {
+			return nil, fmt.Errorf("invalid entry for %s: %s", ttype, err)
+		}
+		style.entries[ttype] = entry
+	}
+	return style, nil
+}
+
+// StyleEntries mapping TokenType to colour definition.
+type StyleEntries map[TokenType]string
+
+// NewXMLStyle parses an XML style definition.
+func NewXMLStyle(r io.Reader) (*Style, error) {
+	dec := xml.NewDecoder(r)
+	style := &Style{}
+	return style, dec.Decode(style)
+}
+
+// MustNewXMLStyle is like NewXMLStyle but panics on error.
+func MustNewXMLStyle(r io.Reader) *Style {
+	style, err := NewXMLStyle(r)
+	if err != nil {
+		panic(err)
+	}
+	return style
+}
+
+// NewStyle creates a new style definition.
+func NewStyle(name string, entries StyleEntries) (*Style, error) {
+	return NewStyleBuilder(name).AddAll(entries).Build()
+}
+
+// MustNewStyle creates a new style or panics.
+func MustNewStyle(name string, entries StyleEntries) *Style {
+	style, err := NewStyle(name, entries)
+	if err != nil {
+		panic(err)
+	}
+	return style
+}
+
+// A Style definition.
+//
+// See http://pygments.org/docs/styles/ for details. Semantics are intended to be identical.
+type Style struct {
+	Name    string
+	entries map[TokenType]StyleEntry
+	parent  *Style
+}
+
+func (s *Style) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
+	if s.parent != nil {
+		return fmt.Errorf("cannot marshal style with parent")
+	}
+	start.Name = xml.Name{Local: "style"}
+	start.Attr = []xml.Attr{{Name: xml.Name{Local: "name"}, Value: s.Name}}
+	if err := e.EncodeToken(start); err != nil {
+		return err
+	}
+	sorted := make([]TokenType, 0, len(s.entries))
+	for ttype := range s.entries {
+		sorted = append(sorted, ttype)
+	}
+	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
+	for _, ttype := range sorted {
+		entry := s.entries[ttype]
+		el := xml.StartElement{Name: xml.Name{Local: "entry"}}
+		el.Attr = []xml.Attr{
+			{Name: xml.Name{Local: "type"}, Value: ttype.String()},
+			{Name: xml.Name{Local: "style"}, Value: entry.String()},
+		}
+		if err := e.EncodeToken(el); err != nil {
+			return err
+		}
+		if err := e.EncodeToken(xml.EndElement{Name: el.Name}); err != nil {
+			return err
+		}
+	}
+	return e.EncodeToken(xml.EndElement{Name: start.Name})
+}
+
+func (s *Style) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
+	for _, attr := range start.Attr {
+		if attr.Name.Local == "name" {
+			s.Name = attr.Value
+		} else {
+			return fmt.Errorf("unexpected attribute %s", attr.Name.Local)
+		}
+	}
+	if s.Name == "" {
+		return fmt.Errorf("missing style name attribute")
+	}
+	s.entries = map[TokenType]StyleEntry{}
+	for {
+		tok, err := d.Token()
+		if err != nil {
+			return err
+		}
+		switch el := tok.(type) {
+		case xml.StartElement:
+			if el.Name.Local != "entry" {
+				return fmt.Errorf("unexpected element %s", el.Name.Local)
+			}
+			var ttype TokenType
+			var entry StyleEntry
+			for _, attr := range el.Attr {
+				switch attr.Name.Local {
+				case "type":
+					ttype, err = TokenTypeString(attr.Value)
+					if err != nil {
+						return err
+					}
+
+				case "style":
+					entry, err = ParseStyleEntry(attr.Value)
+					if err != nil {
+						return err
+					}
+
+				default:
+					return fmt.Errorf("unexpected attribute %s", attr.Name.Local)
+				}
+			}
+			s.entries[ttype] = entry
+
+		case xml.EndElement:
+			if el.Name.Local == start.Name.Local {
+				return nil
+			}
+		}
+	}
+}
+
+// Types that are styled.
+func (s *Style) Types() []TokenType {
+	dedupe := map[TokenType]bool{}
+	for tt := range s.entries {
+		dedupe[tt] = true
+	}
+	if s.parent != nil {
+		for _, tt := range s.parent.Types() {
+			dedupe[tt] = true
+		}
+	}
+	out := make([]TokenType, 0, len(dedupe))
+	for tt := range dedupe {
+		out = append(out, tt)
+	}
+	return out
+}
+
+// Builder creates a mutable builder from this Style.
+//
+// The builder can then be safely modified. This is a cheap operation.
+func (s *Style) Builder() *StyleBuilder {
+	return &StyleBuilder{
+		name:    s.Name,
+		entries: map[TokenType]string{},
+		parent:  s,
+	}
+}
+
+// Has checks if an exact style entry match exists for a token type.
+//
+// This is distinct from Get() which will merge parent tokens.
+func (s *Style) Has(ttype TokenType) bool {
+	return !s.get(ttype).IsZero() || s.synthesisable(ttype)
+}
+
+// Get a style entry. Will try sub-category or category if an exact match is not found, and
+// finally return the Background.
+func (s *Style) Get(ttype TokenType) StyleEntry {
+	return s.get(ttype).Inherit(
+		s.get(Background),
+		s.get(Text),
+		s.get(ttype.Category()),
+		s.get(ttype.SubCategory()))
+}
+
+func (s *Style) get(ttype TokenType) StyleEntry {
+	out := s.entries[ttype]
+	if out.IsZero() && s.parent != nil {
+		return s.parent.get(ttype)
+	}
+	if out.IsZero() && s.synthesisable(ttype) {
+		out = s.synthesise(ttype)
+	}
+	return out
+}
+
+func (s *Style) synthesise(ttype TokenType) StyleEntry {
+	bg := s.get(Background)
+	text := StyleEntry{Colour: bg.Colour}
+	text.Colour = text.Colour.BrightenOrDarken(0.5)
+
+	switch ttype {
+	// If we don't have a line highlight colour, make one that is 10% brighter/darker than the background.
+	case LineHighlight:
+		return StyleEntry{Background: bg.Background.BrightenOrDarken(0.1)}
+
+	// If we don't have line numbers, use the text colour but 20% brighter/darker
+	case LineNumbers, LineNumbersTable:
+		return text
+
+	default:
+		return StyleEntry{}
+	}
+}
+
+func (s *Style) synthesisable(ttype TokenType) bool {
+	return ttype == LineHighlight || ttype == LineNumbers || ttype == LineNumbersTable
+}
+
+// MustParseStyleEntry parses a Pygments style entry or panics.
+func MustParseStyleEntry(entry string) StyleEntry {
+	out, err := ParseStyleEntry(entry)
+	if err != nil {
+		panic(err)
+	}
+	return out
+}
+
+// ParseStyleEntry parses a Pygments style entry.
+func ParseStyleEntry(entry string) (StyleEntry, error) { // nolint: gocyclo
+	out := StyleEntry{}
+	parts := strings.Fields(entry)
+	for _, part := range parts {
+		switch {
+		case part == "italic":
+			out.Italic = Yes
+		case part == "noitalic":
+			out.Italic = No
+		case part == "bold":
+			out.Bold = Yes
+		case part == "nobold":
+			out.Bold = No
+		case part == "underline":
+			out.Underline = Yes
+		case part == "nounderline":
+			out.Underline = No
+		case part == "inherit":
+			out.NoInherit = false
+		case part == "noinherit":
+			out.NoInherit = true
+		case part == "bg:":
+			out.Background = 0
+		case strings.HasPrefix(part, "bg:#"):
+			out.Background = ParseColour(part[3:])
+			if !out.Background.IsSet() {
+				return StyleEntry{}, fmt.Errorf("invalid background colour %q", part)
+			}
+		case strings.HasPrefix(part, "border:#"):
+			out.Border = ParseColour(part[7:])
+			if !out.Border.IsSet() {
+				return StyleEntry{}, fmt.Errorf("invalid border colour %q", part)
+			}
+		case strings.HasPrefix(part, "#"):
+			out.Colour = ParseColour(part)
+			if !out.Colour.IsSet() {
+				return StyleEntry{}, fmt.Errorf("invalid colour %q", part)
+			}
+		default:
+			return StyleEntry{}, fmt.Errorf("unknown style element %q", part)
+		}
+	}
+	return out, nil
+}

vendor/github.com/alecthomas/chroma/v2/styles/abap.xml

@@ -0,0 +1,11 @@
+<style name="abap">
+  <entry type="Error" style="#ff0000"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="#0000ff"/>
+  <entry type="Name" style="#000000"/>
+  <entry type="LiteralString" style="#55aa22"/>
+  <entry type="LiteralNumber" style="#33aaff"/>
+  <entry type="OperatorWord" style="#0000ff"/>
+  <entry type="Comment" style="italic #888888"/>
+  <entry type="CommentSpecial" style="#888888"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/algol.xml

@@ -0,0 +1,18 @@
+<style name="algol">
+  <entry type="Error" style="border:#ff0000"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="bold underline"/>
+  <entry type="KeywordDeclaration" style="italic"/>
+  <entry type="NameBuiltin" style="bold italic"/>
+  <entry type="NameBuiltinPseudo" style="bold italic"/>
+  <entry type="NameClass" style="bold italic #666666"/>
+  <entry type="NameConstant" style="bold italic #666666"/>
+  <entry type="NameFunction" style="bold italic #666666"/>
+  <entry type="NameNamespace" style="bold italic #666666"/>
+  <entry type="NameVariable" style="bold italic #666666"/>
+  <entry type="LiteralString" style="italic #666666"/>
+  <entry type="OperatorWord" style="bold"/>
+  <entry type="Comment" style="italic #888888"/>
+  <entry type="CommentSpecial" style="bold noitalic #888888"/>
+  <entry type="CommentPreproc" style="bold noitalic #888888"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/algol_nu.xml

@@ -0,0 +1,18 @@
+<style name="algol_nu">
+  <entry type="Error" style="border:#ff0000"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="bold"/>
+  <entry type="KeywordDeclaration" style="italic"/>
+  <entry type="NameBuiltin" style="bold italic"/>
+  <entry type="NameBuiltinPseudo" style="bold italic"/>
+  <entry type="NameClass" style="bold italic #666666"/>
+  <entry type="NameConstant" style="bold italic #666666"/>
+  <entry type="NameFunction" style="bold italic #666666"/>
+  <entry type="NameNamespace" style="bold italic #666666"/>
+  <entry type="NameVariable" style="bold italic #666666"/>
+  <entry type="LiteralString" style="italic #666666"/>
+  <entry type="OperatorWord" style="bold"/>
+  <entry type="Comment" style="italic #888888"/>
+  <entry type="CommentSpecial" style="bold noitalic #888888"/>
+  <entry type="CommentPreproc" style="bold noitalic #888888"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/api.go

@@ -0,0 +1,65 @@
+package styles
+
+import (
+	"embed"
+	"io/fs"
+	"sort"
+
+	"github.com/alecthomas/chroma/v2"
+)
+
+//go:embed *.xml
+var embedded embed.FS
+
+// Registry of Styles.
+var Registry = func() map[string]*chroma.Style {
+	registry := map[string]*chroma.Style{}
+	// Register all embedded styles.
+	files, err := fs.ReadDir(embedded, ".")
+	if err != nil {
+		panic(err)
+	}
+	for _, file := range files {
+		if file.IsDir() {
+			continue
+		}
+		r, err := embedded.Open(file.Name())
+		if err != nil {
+			panic(err)
+		}
+		style, err := chroma.NewXMLStyle(r)
+		if err != nil {
+			panic(err)
+		}
+		registry[style.Name] = style
+		_ = r.Close()
+	}
+	return registry
+}()
+
+// Fallback style. Reassign to change the default fallback style.
+var Fallback = Registry["swapoff"]
+
+// Register a chroma.Style.
+func Register(style *chroma.Style) *chroma.Style {
+	Registry[style.Name] = style
+	return style
+}
+
+// Names of all available styles.
+func Names() []string {
+	out := []string{}
+	for name := range Registry {
+		out = append(out, name)
+	}
+	sort.Strings(out)
+	return out
+}
+
+// Get named style, or Fallback.
+func Get(name string) *chroma.Style {
+	if style, ok := Registry[name]; ok {
+		return style
+	}
+	return Fallback
+}

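The `api.go` file above builds its `Registry` by reading every embedded `*.xml` style at package init, then exposes `Get` with a `Fallback` for unknown names. That lookup-with-fallback pattern can be sketched standalone (plain map and illustrative names, not the chroma API, so the example stays free of the embedded files):

```go
package main

import "fmt"

// style is a stand-in for chroma.Style; only the name matters here.
type style struct{ Name string }

// registry mirrors the package-level Registry map populated from the
// embedded XML files in api.go.
var registry = map[string]*style{
	"abap":  {Name: "abap"},
	"algol": {Name: "algol"},
}

// fallback mirrors the package-level Fallback style.
var fallback = registry["abap"]

// get returns the named style if registered, otherwise the fallback,
// matching the semantics of styles.Get.
func get(name string) *style {
	if s, ok := registry[name]; ok {
		return s
	}
	return fallback
}

func main() {
	fmt.Println(get("algol").Name)   // found: algol
	fmt.Println(get("missing").Name) // not found: falls back to abap
}
```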
vendor/github.com/alecthomas/chroma/v2/styles/arduino.xml

@@ -0,0 +1,18 @@
+<style name="arduino">
+  <entry type="Error" style="#a61717"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="#728e00"/>
+  <entry type="KeywordConstant" style="#00979d"/>
+  <entry type="KeywordPseudo" style="#00979d"/>
+  <entry type="KeywordReserved" style="#00979d"/>
+  <entry type="KeywordType" style="#00979d"/>
+  <entry type="Name" style="#434f54"/>
+  <entry type="NameBuiltin" style="#728e00"/>
+  <entry type="NameFunction" style="#d35400"/>
+  <entry type="NameOther" style="#728e00"/>
+  <entry type="LiteralString" style="#7f8c8d"/>
+  <entry type="LiteralNumber" style="#8a7b52"/>
+  <entry type="Operator" style="#728e00"/>
+  <entry type="Comment" style="#95a5a6"/>
+  <entry type="CommentPreproc" style="#728e00"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/autumn.xml

@@ -0,0 +1,36 @@
+<style name="autumn">
+  <entry type="Error" style="#ff0000 bg:#ffaaaa"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="#0000aa"/>
+  <entry type="KeywordType" style="#00aaaa"/>
+  <entry type="NameAttribute" style="#1e90ff"/>
+  <entry type="NameBuiltin" style="#00aaaa"/>
+  <entry type="NameClass" style="underline #00aa00"/>
+  <entry type="NameConstant" style="#aa0000"/>
+  <entry type="NameDecorator" style="#888888"/>
+  <entry type="NameEntity" style="bold #880000"/>
+  <entry type="NameFunction" style="#00aa00"/>
+  <entry type="NameNamespace" style="underline #00aaaa"/>
+  <entry type="NameTag" style="bold #1e90ff"/>
+  <entry type="NameVariable" style="#aa0000"/>
+  <entry type="LiteralString" style="#aa5500"/>
+  <entry type="LiteralStringRegex" style="#009999"/>
+  <entry type="LiteralStringSymbol" style="#0000aa"/>
+  <entry type="LiteralNumber" style="#009999"/>
+  <entry type="OperatorWord" style="#0000aa"/>
+  <entry type="Comment" style="italic #aaaaaa"/>
+  <entry type="CommentSpecial" style="italic #0000aa"/>
+  <entry type="CommentPreproc" style="noitalic #4c8317"/>
+  <entry type="GenericDeleted" style="#aa0000"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#aa0000"/>
+  <entry type="GenericHeading" style="bold #000080"/>
+  <entry type="GenericInserted" style="#00aa00"/>
+  <entry type="GenericOutput" style="#888888"/>
+  <entry type="GenericPrompt" style="#555555"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #800080"/>
+  <entry type="GenericTraceback" style="#aa0000"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#bbbbbb"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/average.xml

@@ -0,0 +1,74 @@
+<style name="average">
+  <entry type="Other" style="#757575"/>
+  <entry type="Error" style="#ec0000"/>
+  <entry type="Background" style="bg:#000000"/>
+  <entry type="Keyword" style="#ec0000"/>
+  <entry type="KeywordConstant" style="#ec0000"/>
+  <entry type="KeywordDeclaration" style="#ec0000"/>
+  <entry type="KeywordNamespace" style="#ec0000"/>
+  <entry type="KeywordPseudo" style="#ec0000"/>
+  <entry type="KeywordReserved" style="#ec0000"/>
+  <entry type="KeywordType" style="#5f5fff"/>
+  <entry type="Name" style="#757575"/>
+  <entry type="NameAttribute" style="#5f5fff"/>
+  <entry type="NameBuiltin" style="#ec0000"/>
+  <entry type="NameBuiltinPseudo" style="#757575"/>
+  <entry type="NameClass" style="#5f5fff"/>
+  <entry type="NameConstant" style="#008900"/>
+  <entry type="NameDecorator" style="#008900"/>
+  <entry type="NameEntity" style="#757575"/>
+  <entry type="NameException" style="#757575"/>
+  <entry type="NameFunction" style="#5f5fff"/>
+  <entry type="NameLabel" style="#ec0000"/>
+  <entry type="NameNamespace" style="#757575"/>
+  <entry type="NameOther" style="#757575"/>
+  <entry type="NameTag" style="#ec0000"/>
+  <entry type="NameVariable" style="#ec0000"/>
+  <entry type="NameVariableClass" style="#ec0000"/>
+  <entry type="NameVariableGlobal" style="#ec0000"/>
+  <entry type="NameVariableInstance" style="#ec0000"/>
+  <entry type="Literal" style="#757575"/>
+  <entry type="LiteralDate" style="#757575"/>
+  <entry type="LiteralString" style="#008900"/>
+  <entry type="LiteralStringBacktick" style="#008900"/>
+  <entry type="LiteralStringChar" style="#008900"/>
+  <entry type="LiteralStringDoc" style="#008900"/>
+  <entry type="LiteralStringDouble" style="#008900"/>
+  <entry type="LiteralStringEscape" style="#008900"/>
+  <entry type="LiteralStringHeredoc" style="#008900"/>
+  <entry type="LiteralStringInterpol" style="#008900"/>
+  <entry type="LiteralStringOther" style="#008900"/>
+  <entry type="LiteralStringRegex" style="#008900"/>
+  <entry type="LiteralStringSingle" style="#008900"/>
+  <entry type="LiteralStringSymbol" style="#008900"/>
+  <entry type="LiteralNumber" style="#008900"/>
+  <entry type="LiteralNumberBin" style="#008900"/>
+  <entry type="LiteralNumberFloat" style="#008900"/>
+  <entry type="LiteralNumberHex" style="#008900"/>
+  <entry type="LiteralNumberInteger" style="#008900"/>
+  <entry type="LiteralNumberIntegerLong" style="#008900"/>
+  <entry type="LiteralNumberOct" style="#008900"/>
+  <entry type="Operator" style="#ec0000"/>
+  <entry type="OperatorWord" style="#ec0000"/>
+  <entry type="Punctuation" style="#757575"/>
+  <entry type="Comment" style="#757575"/>
+  <entry type="CommentHashbang" style="#757575"/>
+  <entry type="CommentMultiline" style="#757575"/>
+  <entry type="CommentSingle" style="#757575"/>
+  <entry type="CommentSpecial" style="#757575"/>
+  <entry type="CommentPreproc" style="#757575"/>
+  <entry type="Generic" style="#757575"/>
+  <entry type="GenericDeleted" style="#ec0000"/>
+  <entry type="GenericEmph" style="underline #757575"/>
+  <entry type="GenericError" style="#ec0000"/>
+  <entry type="GenericHeading" style="bold #757575"/>
+  <entry type="GenericInserted" style="bold #757575"/>
+  <entry type="GenericOutput" style="#757575"/>
+  <entry type="GenericPrompt" style="#757575"/>
+  <entry type="GenericStrong" style="italic #757575"/>
+  <entry type="GenericSubheading" style="bold #757575"/>
+  <entry type="GenericTraceback" style="#757575"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="Text" style="#757575"/>
+  <entry type="TextWhitespace" style="#757575"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/base16-snazzy.xml

@@ -0,0 +1,74 @@
+<style name="base16-snazzy">
+  <entry type="Other" style="#e2e4e5"/>
+  <entry type="Error" style="#ff5c57"/>
+  <entry type="Background" style="bg:#282a36"/>
+  <entry type="Keyword" style="#ff6ac1"/>
+  <entry type="KeywordConstant" style="#ff6ac1"/>
+  <entry type="KeywordDeclaration" style="#ff5c57"/>
+  <entry type="KeywordNamespace" style="#ff6ac1"/>
+  <entry type="KeywordPseudo" style="#ff6ac1"/>
+  <entry type="KeywordReserved" style="#ff6ac1"/>
+  <entry type="KeywordType" style="#9aedfe"/>
+  <entry type="Name" style="#e2e4e5"/>
+  <entry type="NameAttribute" style="#57c7ff"/>
+  <entry type="NameBuiltin" style="#ff5c57"/>
+  <entry type="NameBuiltinPseudo" style="#e2e4e5"/>
+  <entry type="NameClass" style="#f3f99d"/>
+  <entry type="NameConstant" style="#ff9f43"/>
+  <entry type="NameDecorator" style="#ff9f43"/>
+  <entry type="NameEntity" style="#e2e4e5"/>
+  <entry type="NameException" style="#e2e4e5"/>
+  <entry type="NameFunction" style="#57c7ff"/>
+  <entry type="NameLabel" style="#ff5c57"/>
+  <entry type="NameNamespace" style="#e2e4e5"/>
+  <entry type="NameOther" style="#e2e4e5"/>
+  <entry type="NameTag" style="#ff6ac1"/>
+  <entry type="NameVariable" style="#ff5c57"/>
+  <entry type="NameVariableClass" style="#ff5c57"/>
+  <entry type="NameVariableGlobal" style="#ff5c57"/>
+  <entry type="NameVariableInstance" style="#ff5c57"/>
+  <entry type="Literal" style="#e2e4e5"/>
+  <entry type="LiteralDate" style="#e2e4e5"/>
+  <entry type="LiteralString" style="#5af78e"/>
+  <entry type="LiteralStringBacktick" style="#5af78e"/>
+  <entry type="LiteralStringChar" style="#5af78e"/>
+  <entry type="LiteralStringDoc" style="#5af78e"/>
+  <entry type="LiteralStringDouble" style="#5af78e"/>
+  <entry type="LiteralStringEscape" style="#5af78e"/>
+  <entry type="LiteralStringHeredoc" style="#5af78e"/>
+  <entry type="LiteralStringInterpol" style="#5af78e"/>
+  <entry type="LiteralStringOther" style="#5af78e"/>
+  <entry type="LiteralStringRegex" style="#5af78e"/>
+  <entry type="LiteralStringSingle" style="#5af78e"/>
+  <entry type="LiteralStringSymbol" style="#5af78e"/>
+  <entry type="LiteralNumber" style="#ff9f43"/>
+  <entry type="LiteralNumberBin" style="#ff9f43"/>
+  <entry type="LiteralNumberFloat" style="#ff9f43"/>
+  <entry type="LiteralNumberHex" style="#ff9f43"/>
+  <entry type="LiteralNumberInteger" style="#ff9f43"/>
+  <entry type="LiteralNumberIntegerLong" style="#ff9f43"/>
+  <entry type="LiteralNumberOct" style="#ff9f43"/>
+  <entry type="Operator" style="#ff6ac1"/>
+  <entry type="OperatorWord" style="#ff6ac1"/>
+  <entry type="Punctuation" style="#e2e4e5"/>
+  <entry type="Comment" style="#78787e"/>
+  <entry type="CommentHashbang" style="#78787e"/>
+  <entry type="CommentMultiline" style="#78787e"/>
+  <entry type="CommentSingle" style="#78787e"/>
+  <entry type="CommentSpecial" style="#78787e"/>
+  <entry type="CommentPreproc" style="#78787e"/>
+  <entry type="Generic" style="#e2e4e5"/>
+  <entry type="GenericDeleted" style="#ff5c57"/>
+  <entry type="GenericEmph" style="underline #e2e4e5"/>
+  <entry type="GenericError" style="#ff5c57"/>
+  <entry type="GenericHeading" style="bold #e2e4e5"/>
+  <entry type="GenericInserted" style="bold #e2e4e5"/>
+  <entry type="GenericOutput" style="#43454f"/>
+  <entry type="GenericPrompt" style="#e2e4e5"/>
+  <entry type="GenericStrong" style="italic #e2e4e5"/>
+  <entry type="GenericSubheading" style="bold #e2e4e5"/>
+  <entry type="GenericTraceback" style="#e2e4e5"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="Text" style="#e2e4e5"/>
+  <entry type="TextWhitespace" style="#e2e4e5"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/borland.xml

@@ -0,0 +1,26 @@
+<style name="borland">
+  <entry type="Error" style="#a61717 bg:#e3d2d2"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="bold #000080"/>
+  <entry type="NameAttribute" style="#ff0000"/>
+  <entry type="NameTag" style="bold #000080"/>
+  <entry type="LiteralString" style="#0000ff"/>
+  <entry type="LiteralStringChar" style="#800080"/>
+  <entry type="LiteralNumber" style="#0000ff"/>
+  <entry type="OperatorWord" style="bold"/>
+  <entry type="Comment" style="italic #008800"/>
+  <entry type="CommentSpecial" style="bold noitalic"/>
+  <entry type="CommentPreproc" style="noitalic #008080"/>
+  <entry type="GenericDeleted" style="#000000 bg:#ffdddd"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#aa0000"/>
+  <entry type="GenericHeading" style="#999999"/>
+  <entry type="GenericInserted" style="#000000 bg:#ddffdd"/>
+  <entry type="GenericOutput" style="#888888"/>
+  <entry type="GenericPrompt" style="#555555"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#aaaaaa"/>
+  <entry type="GenericTraceback" style="#aa0000"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#bbbbbb"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/bw.xml

@@ -0,0 +1,23 @@
+<style name="bw">
+  <entry type="Error" style="border:#ff0000"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="bold"/>
+  <entry type="KeywordPseudo" style="nobold"/>
+  <entry type="KeywordType" style="nobold"/>
+  <entry type="NameClass" style="bold"/>
+  <entry type="NameEntity" style="bold"/>
+  <entry type="NameException" style="bold"/>
+  <entry type="NameNamespace" style="bold"/>
+  <entry type="NameTag" style="bold"/>
+  <entry type="LiteralString" style="italic"/>
+  <entry type="LiteralStringEscape" style="bold"/>
+  <entry type="LiteralStringInterpol" style="bold"/>
+  <entry type="OperatorWord" style="bold"/>
+  <entry type="Comment" style="italic"/>
+  <entry type="CommentPreproc" style="noitalic"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericHeading" style="bold"/>
+  <entry type="GenericPrompt" style="bold"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/catppuccin-frappe.xml

@@ -0,0 +1,83 @@
+<style name="catppuccin-frappe">
+  <entry type="Background" style="bg:#303446 #c6d0f5"/>
+  <entry type="CodeLine" style="#c6d0f5"/>
+  <entry type="Error" style="#e78284"/>
+  <entry type="Other" style="#c6d0f5"/>
+  <entry type="LineTableTD" style=""/>
+  <entry type="LineTable" style=""/>
+  <entry type="LineHighlight" style="bg:#51576d"/>
+  <entry type="LineNumbersTable" style="#838ba7"/>
+  <entry type="LineNumbers" style="#838ba7"/>
+  <entry type="Keyword" style="#ca9ee6"/>
+  <entry type="KeywordReserved" style="#ca9ee6"/>
+  <entry type="KeywordPseudo" style="#ca9ee6"/>
+  <entry type="KeywordConstant" style="#ef9f76"/>
+  <entry type="KeywordDeclaration" style="#e78284"/>
+  <entry type="KeywordNamespace" style="#81c8be"/>
+  <entry type="KeywordType" style="#e78284"/>
+  <entry type="Name" style="#c6d0f5"/>
+  <entry type="NameClass" style="#e5c890"/>
+  <entry type="NameConstant" style="#e5c890"/>
+  <entry type="NameDecorator" style="bold #8caaee"/>
+  <entry type="NameEntity" style="#81c8be"/>
+  <entry type="NameException" style="#ef9f76"/>
+  <entry type="NameFunction" style="#8caaee"/>
+  <entry type="NameFunctionMagic" style="#8caaee"/>
+  <entry type="NameLabel" style="#99d1db"/>
+  <entry type="NameNamespace" style="#ef9f76"/>
+  <entry type="NameProperty" style="#ef9f76"/>
+  <entry type="NameTag" style="#ca9ee6"/>
+  <entry type="NameVariable" style="#f2d5cf"/>
+  <entry type="NameVariableClass" style="#f2d5cf"/>
+  <entry type="NameVariableGlobal" style="#f2d5cf"/>
+  <entry type="NameVariableInstance" style="#f2d5cf"/>
+  <entry type="NameVariableMagic" style="#f2d5cf"/>
+  <entry type="NameAttribute" style="#8caaee"/>
+  <entry type="NameBuiltin" style="#99d1db"/>
+  <entry type="NameBuiltinPseudo" style="#99d1db"/>
+  <entry type="NameOther" style="#c6d0f5"/>
+  <entry type="Literal" style="#c6d0f5"/>
+  <entry type="LiteralDate" style="#c6d0f5"/>
+  <entry type="LiteralString" style="#a6d189"/>
+  <entry type="LiteralStringChar" style="#a6d189"/>
+  <entry type="LiteralStringSingle" style="#a6d189"/>
+  <entry type="LiteralStringDouble" style="#a6d189"/>
+  <entry type="LiteralStringBacktick" style="#a6d189"/>
+  <entry type="LiteralStringOther" style="#a6d189"/>
+  <entry type="LiteralStringSymbol" style="#a6d189"/>
+  <entry type="LiteralStringInterpol" style="#a6d189"/>
+  <entry type="LiteralStringAffix" style="#e78284"/>
+  <entry type="LiteralStringDelimiter" style="#8caaee"/>
+  <entry type="LiteralStringEscape" style="#8caaee"/>
+  <entry type="LiteralStringRegex" style="#81c8be"/>
+  <entry type="LiteralStringDoc" style="#737994"/>
+  <entry type="LiteralStringHeredoc" style="#737994"/>
+  <entry type="LiteralNumber" style="#ef9f76"/>
+  <entry type="LiteralNumberBin" style="#ef9f76"/>
+  <entry type="LiteralNumberHex" style="#ef9f76"/>
+  <entry type="LiteralNumberInteger" style="#ef9f76"/>
+  <entry type="LiteralNumberFloat" style="#ef9f76"/>
+  <entry type="LiteralNumberIntegerLong" style="#ef9f76"/>
+  <entry type="LiteralNumberOct" style="#ef9f76"/>
+  <entry type="Operator" style="bold #99d1db"/>
+  <entry type="OperatorWord" style="bold #99d1db"/>
+  <entry type="Comment" style="italic #737994"/>
+  <entry type="CommentSingle" style="italic #737994"/>
+  <entry type="CommentMultiline" style="italic #737994"/>
+  <entry type="CommentSpecial" style="italic #737994"/>
+  <entry type="CommentHashbang" style="italic #737994"/>
+  <entry type="CommentPreproc" style="italic #737994"/>
+  <entry type="CommentPreprocFile" style="bold #737994"/>
+  <entry type="Generic" style="#c6d0f5"/>
+  <entry type="GenericInserted" style="bg:#414559 #a6d189"/>
+  <entry type="GenericDeleted" style="#e78284 bg:#414559"/>
+  <entry type="GenericEmph" style="italic #c6d0f5"/>
+  <entry type="GenericStrong" style="bold #c6d0f5"/>
+  <entry type="GenericUnderline" style="underline #c6d0f5"/>
+  <entry type="GenericHeading" style="bold #ef9f76"/>
+  <entry type="GenericSubheading" style="bold #ef9f76"/>
+  <entry type="GenericOutput" style="#c6d0f5"/>
+  <entry type="GenericPrompt" style="#c6d0f5"/>
+  <entry type="GenericError" style="#e78284"/>
+  <entry type="GenericTraceback" style="#e78284"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/catppuccin-latte.xml

@@ -0,0 +1,83 @@
+<style name="catppuccin-latte">
+  <entry type="Background" style="bg:#eff1f5 #4c4f69"/>
+  <entry type="CodeLine" style="#4c4f69"/>
+  <entry type="Error" style="#d20f39"/>
+  <entry type="Other" style="#4c4f69"/>
+  <entry type="LineTableTD" style=""/>
+  <entry type="LineTable" style=""/>
+  <entry type="LineHighlight" style="bg:#bcc0cc"/>
+  <entry type="LineNumbersTable" style="#8c8fa1"/>
+  <entry type="LineNumbers" style="#8c8fa1"/>
+  <entry type="Keyword" style="#8839ef"/>
+  <entry type="KeywordReserved" style="#8839ef"/>
+  <entry type="KeywordPseudo" style="#8839ef"/>
+  <entry type="KeywordConstant" style="#fe640b"/>
+  <entry type="KeywordDeclaration" style="#d20f39"/>
+  <entry type="KeywordNamespace" style="#179299"/>
+  <entry type="KeywordType" style="#d20f39"/>
+  <entry type="Name" style="#4c4f69"/>
+  <entry type="NameClass" style="#df8e1d"/>
+  <entry type="NameConstant" style="#df8e1d"/>
+  <entry type="NameDecorator" style="bold #1e66f5"/>
+  <entry type="NameEntity" style="#179299"/>
+  <entry type="NameException" style="#fe640b"/>
+  <entry type="NameFunction" style="#1e66f5"/>
+  <entry type="NameFunctionMagic" style="#1e66f5"/>
+  <entry type="NameLabel" style="#04a5e5"/>
+  <entry type="NameNamespace" style="#fe640b"/>
+  <entry type="NameProperty" style="#fe640b"/>
+  <entry type="NameTag" style="#8839ef"/>
+  <entry type="NameVariable" style="#dc8a78"/>
+  <entry type="NameVariableClass" style="#dc8a78"/>
+  <entry type="NameVariableGlobal" style="#dc8a78"/>
+  <entry type="NameVariableInstance" style="#dc8a78"/>
+  <entry type="NameVariableMagic" style="#dc8a78"/>
+  <entry type="NameAttribute" style="#1e66f5"/>
+  <entry type="NameBuiltin" style="#04a5e5"/>
+  <entry type="NameBuiltinPseudo" style="#04a5e5"/>
+  <entry type="NameOther" style="#4c4f69"/>
+  <entry type="Literal" style="#4c4f69"/>
+  <entry type="LiteralDate" style="#4c4f69"/>
+  <entry type="LiteralString" style="#40a02b"/>
+  <entry type="LiteralStringChar" style="#40a02b"/>
+  <entry type="LiteralStringSingle" style="#40a02b"/>
+  <entry type="LiteralStringDouble" style="#40a02b"/>
+  <entry type="LiteralStringBacktick" style="#40a02b"/>
+  <entry type="LiteralStringOther" style="#40a02b"/>
+  <entry type="LiteralStringSymbol" style="#40a02b"/>
+  <entry type="LiteralStringInterpol" style="#40a02b"/>
+  <entry type="LiteralStringAffix" style="#d20f39"/>
+  <entry type="LiteralStringDelimiter" style="#1e66f5"/>
+  <entry type="LiteralStringEscape" style="#1e66f5"/>
+  <entry type="LiteralStringRegex" style="#179299"/>
+  <entry type="LiteralStringDoc" style="#9ca0b0"/>
+  <entry type="LiteralStringHeredoc" style="#9ca0b0"/>
+  <entry type="LiteralNumber" style="#fe640b"/>
+  <entry type="LiteralNumberBin" style="#fe640b"/>
+  <entry type="LiteralNumberHex" style="#fe640b"/>
+  <entry type="LiteralNumberInteger" style="#fe640b"/>
+  <entry type="LiteralNumberFloat" style="#fe640b"/>
+  <entry type="LiteralNumberIntegerLong" style="#fe640b"/>
+  <entry type="LiteralNumberOct" style="#fe640b"/>
+  <entry type="Operator" style="bold #04a5e5"/>
+  <entry type="OperatorWord" style="bold #04a5e5"/>
+  <entry type="Comment" style="italic #9ca0b0"/>
+  <entry type="CommentSingle" style="italic #9ca0b0"/>
+  <entry type="CommentMultiline" style="italic #9ca0b0"/>
+  <entry type="CommentSpecial" style="italic #9ca0b0"/>
+  <entry type="CommentHashbang" style="italic #9ca0b0"/>
+  <entry type="CommentPreproc" style="italic #9ca0b0"/>
+  <entry type="CommentPreprocFile" style="bold #9ca0b0"/>
+  <entry type="Generic" style="#4c4f69"/>
+  <entry type="GenericInserted" style="bg:#ccd0da #40a02b"/>
+  <entry type="GenericDeleted" style="#d20f39 bg:#ccd0da"/>
+  <entry type="GenericEmph" style="italic #4c4f69"/>
+  <entry type="GenericStrong" style="bold #4c4f69"/>
+  <entry type="GenericUnderline" style="underline #4c4f69"/>
+  <entry type="GenericHeading" style="bold #fe640b"/>
+  <entry type="GenericSubheading" style="bold #fe640b"/>
+  <entry type="GenericOutput" style="#4c4f69"/>
+  <entry type="GenericPrompt" style="#4c4f69"/>
+  <entry type="GenericError" style="#d20f39"/>
+  <entry type="GenericTraceback" style="#d20f39"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/catppuccin-macchiato.xml

@@ -0,0 +1,83 @@
+<style name="catppuccin-macchiato">
+  <entry type="Background" style="bg:#24273a #cad3f5"/>
+  <entry type="CodeLine" style="#cad3f5"/>
+  <entry type="Error" style="#ed8796"/>
+  <entry type="Other" style="#cad3f5"/>
+  <entry type="LineTableTD" style=""/>
+  <entry type="LineTable" style=""/>
+  <entry type="LineHighlight" style="bg:#494d64"/>
+  <entry type="LineNumbersTable" style="#8087a2"/>
+  <entry type="LineNumbers" style="#8087a2"/>
+  <entry type="Keyword" style="#c6a0f6"/>
+  <entry type="KeywordReserved" style="#c6a0f6"/>
+  <entry type="KeywordPseudo" style="#c6a0f6"/>
+  <entry type="KeywordConstant" style="#f5a97f"/>
+  <entry type="KeywordDeclaration" style="#ed8796"/>
+  <entry type="KeywordNamespace" style="#8bd5ca"/>
+  <entry type="KeywordType" style="#ed8796"/>
+  <entry type="Name" style="#cad3f5"/>
+  <entry type="NameClass" style="#eed49f"/>
+  <entry type="NameConstant" style="#eed49f"/>
+  <entry type="NameDecorator" style="bold #8aadf4"/>
+  <entry type="NameEntity" style="#8bd5ca"/>
+  <entry type="NameException" style="#f5a97f"/>
+  <entry type="NameFunction" style="#8aadf4"/>
+  <entry type="NameFunctionMagic" style="#8aadf4"/>
+  <entry type="NameLabel" style="#91d7e3"/>
+  <entry type="NameNamespace" style="#f5a97f"/>
+  <entry type="NameProperty" style="#f5a97f"/>
+  <entry type="NameTag" style="#c6a0f6"/>
+  <entry type="NameVariable" style="#f4dbd6"/>
+  <entry type="NameVariableClass" style="#f4dbd6"/>
+  <entry type="NameVariableGlobal" style="#f4dbd6"/>
+  <entry type="NameVariableInstance" style="#f4dbd6"/>
+  <entry type="NameVariableMagic" style="#f4dbd6"/>
+  <entry type="NameAttribute" style="#8aadf4"/>
+  <entry type="NameBuiltin" style="#91d7e3"/>
+  <entry type="NameBuiltinPseudo" style="#91d7e3"/>
+  <entry type="NameOther" style="#cad3f5"/>
+  <entry type="Literal" style="#cad3f5"/>
+  <entry type="LiteralDate" style="#cad3f5"/>
+  <entry type="LiteralString" style="#a6da95"/>
+  <entry type="LiteralStringChar" style="#a6da95"/>
+  <entry type="LiteralStringSingle" style="#a6da95"/>
+  <entry type="LiteralStringDouble" style="#a6da95"/>
+  <entry type="LiteralStringBacktick" style="#a6da95"/>
+  <entry type="LiteralStringOther" style="#a6da95"/>
+  <entry type="LiteralStringSymbol" style="#a6da95"/>
+  <entry type="LiteralStringInterpol" style="#a6da95"/>
+  <entry type="LiteralStringAffix" style="#ed8796"/>
+  <entry type="LiteralStringDelimiter" style="#8aadf4"/>
+  <entry type="LiteralStringEscape" style="#8aadf4"/>
+  <entry type="LiteralStringRegex" style="#8bd5ca"/>
+  <entry type="LiteralStringDoc" style="#6e738d"/>
+  <entry type="LiteralStringHeredoc" style="#6e738d"/>
+  <entry type="LiteralNumber" style="#f5a97f"/>
+  <entry type="LiteralNumberBin" style="#f5a97f"/>
+  <entry type="LiteralNumberHex" style="#f5a97f"/>
+  <entry type="LiteralNumberInteger" style="#f5a97f"/>
+  <entry type="LiteralNumberFloat" style="#f5a97f"/>
+  <entry type="LiteralNumberIntegerLong" style="#f5a97f"/>
+  <entry type="LiteralNumberOct" style="#f5a97f"/>
+  <entry type="Operator" style="bold #91d7e3"/>
+  <entry type="OperatorWord" style="bold #91d7e3"/>
+  <entry type="Comment" style="italic #6e738d"/>
+  <entry type="CommentSingle" style="italic #6e738d"/>
+  <entry type="CommentMultiline" style="italic #6e738d"/>
+  <entry type="CommentSpecial" style="italic #6e738d"/>
+  <entry type="CommentHashbang" style="italic #6e738d"/>
+  <entry type="CommentPreproc" style="italic #6e738d"/>
+  <entry type="CommentPreprocFile" style="bold #6e738d"/>
+  <entry type="Generic" style="#cad3f5"/>
+  <entry type="GenericInserted" style="bg:#363a4f #a6da95"/>
+  <entry type="GenericDeleted" style="#ed8796 bg:#363a4f"/>
+  <entry type="GenericEmph" style="italic #cad3f5"/>
+  <entry type="GenericStrong" style="bold #cad3f5"/>
+  <entry type="GenericUnderline" style="underline #cad3f5"/>
+  <entry type="GenericHeading" style="bold #f5a97f"/>
+  <entry type="GenericSubheading" style="bold #f5a97f"/>
+  <entry type="GenericOutput" style="#cad3f5"/>
+  <entry type="GenericPrompt" style="#cad3f5"/>
+  <entry type="GenericError" style="#ed8796"/>
+  <entry type="GenericTraceback" style="#ed8796"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/catppuccin-mocha.xml 🔗

@@ -0,0 +1,83 @@
+<style name="catppuccin-mocha">
+  <entry type="Background" style="bg:#1e1e2e #cdd6f4"/>
+  <entry type="CodeLine" style="#cdd6f4"/>
+  <entry type="Error" style="#f38ba8"/>
+  <entry type="Other" style="#cdd6f4"/>
+  <entry type="LineTableTD" style=""/>
+  <entry type="LineTable" style=""/>
+  <entry type="LineHighlight" style="bg:#45475a"/>
+  <entry type="LineNumbersTable" style="#7f849c"/>
+  <entry type="LineNumbers" style="#7f849c"/>
+  <entry type="Keyword" style="#cba6f7"/>
+  <entry type="KeywordReserved" style="#cba6f7"/>
+  <entry type="KeywordPseudo" style="#cba6f7"/>
+  <entry type="KeywordConstant" style="#fab387"/>
+  <entry type="KeywordDeclaration" style="#f38ba8"/>
+  <entry type="KeywordNamespace" style="#94e2d5"/>
+  <entry type="KeywordType" style="#f38ba8"/>
+  <entry type="Name" style="#cdd6f4"/>
+  <entry type="NameClass" style="#f9e2af"/>
+  <entry type="NameConstant" style="#f9e2af"/>
+  <entry type="NameDecorator" style="bold #89b4fa"/>
+  <entry type="NameEntity" style="#94e2d5"/>
+  <entry type="NameException" style="#fab387"/>
+  <entry type="NameFunction" style="#89b4fa"/>
+  <entry type="NameFunctionMagic" style="#89b4fa"/>
+  <entry type="NameLabel" style="#89dceb"/>
+  <entry type="NameNamespace" style="#fab387"/>
+  <entry type="NameProperty" style="#fab387"/>
+  <entry type="NameTag" style="#cba6f7"/>
+  <entry type="NameVariable" style="#f5e0dc"/>
+  <entry type="NameVariableClass" style="#f5e0dc"/>
+  <entry type="NameVariableGlobal" style="#f5e0dc"/>
+  <entry type="NameVariableInstance" style="#f5e0dc"/>
+  <entry type="NameVariableMagic" style="#f5e0dc"/>
+  <entry type="NameAttribute" style="#89b4fa"/>
+  <entry type="NameBuiltin" style="#89dceb"/>
+  <entry type="NameBuiltinPseudo" style="#89dceb"/>
+  <entry type="NameOther" style="#cdd6f4"/>
+  <entry type="Literal" style="#cdd6f4"/>
+  <entry type="LiteralDate" style="#cdd6f4"/>
+  <entry type="LiteralString" style="#a6e3a1"/>
+  <entry type="LiteralStringChar" style="#a6e3a1"/>
+  <entry type="LiteralStringSingle" style="#a6e3a1"/>
+  <entry type="LiteralStringDouble" style="#a6e3a1"/>
+  <entry type="LiteralStringBacktick" style="#a6e3a1"/>
+  <entry type="LiteralStringOther" style="#a6e3a1"/>
+  <entry type="LiteralStringSymbol" style="#a6e3a1"/>
+  <entry type="LiteralStringInterpol" style="#a6e3a1"/>
+  <entry type="LiteralStringAffix" style="#f38ba8"/>
+  <entry type="LiteralStringDelimiter" style="#89b4fa"/>
+  <entry type="LiteralStringEscape" style="#89b4fa"/>
+  <entry type="LiteralStringRegex" style="#94e2d5"/>
+  <entry type="LiteralStringDoc" style="#6c7086"/>
+  <entry type="LiteralStringHeredoc" style="#6c7086"/>
+  <entry type="LiteralNumber" style="#fab387"/>
+  <entry type="LiteralNumberBin" style="#fab387"/>
+  <entry type="LiteralNumberHex" style="#fab387"/>
+  <entry type="LiteralNumberInteger" style="#fab387"/>
+  <entry type="LiteralNumberFloat" style="#fab387"/>
+  <entry type="LiteralNumberIntegerLong" style="#fab387"/>
+  <entry type="LiteralNumberOct" style="#fab387"/>
+  <entry type="Operator" style="bold #89dceb"/>
+  <entry type="OperatorWord" style="bold #89dceb"/>
+  <entry type="Comment" style="italic #6c7086"/>
+  <entry type="CommentSingle" style="italic #6c7086"/>
+  <entry type="CommentMultiline" style="italic #6c7086"/>
+  <entry type="CommentSpecial" style="italic #6c7086"/>
+  <entry type="CommentHashbang" style="italic #6c7086"/>
+  <entry type="CommentPreproc" style="italic #6c7086"/>
+  <entry type="CommentPreprocFile" style="bold #6c7086"/>
+  <entry type="Generic" style="#cdd6f4"/>
+  <entry type="GenericInserted" style="bg:#313244 #a6e3a1"/>
+  <entry type="GenericDeleted" style="#f38ba8 bg:#313244"/>
+  <entry type="GenericEmph" style="italic #cdd6f4"/>
+  <entry type="GenericStrong" style="bold #cdd6f4"/>
+  <entry type="GenericUnderline" style="underline #cdd6f4"/>
+  <entry type="GenericHeading" style="bold #fab387"/>
+  <entry type="GenericSubheading" style="bold #fab387"/>
+  <entry type="GenericOutput" style="#cdd6f4"/>
+  <entry type="GenericPrompt" style="#cdd6f4"/>
+  <entry type="GenericError" style="#f38ba8"/>
+  <entry type="GenericTraceback" style="#f38ba8"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/colorful.xml 🔗

@@ -0,0 +1,52 @@
+<style name="colorful">
+  <entry type="Error" style="#ff0000 bg:#ffaaaa"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="bold #008800"/>
+  <entry type="KeywordPseudo" style="#003388"/>
+  <entry type="KeywordType" style="#333399"/>
+  <entry type="NameAttribute" style="#0000cc"/>
+  <entry type="NameBuiltin" style="#007020"/>
+  <entry type="NameClass" style="bold #bb0066"/>
+  <entry type="NameConstant" style="bold #003366"/>
+  <entry type="NameDecorator" style="bold #555555"/>
+  <entry type="NameEntity" style="bold #880000"/>
+  <entry type="NameException" style="bold #ff0000"/>
+  <entry type="NameFunction" style="bold #0066bb"/>
+  <entry type="NameLabel" style="bold #997700"/>
+  <entry type="NameNamespace" style="bold #0e84b5"/>
+  <entry type="NameTag" style="#007700"/>
+  <entry type="NameVariable" style="#996633"/>
+  <entry type="NameVariableClass" style="#336699"/>
+  <entry type="NameVariableGlobal" style="bold #dd7700"/>
+  <entry type="NameVariableInstance" style="#3333bb"/>
+  <entry type="LiteralString" style="bg:#fff0f0"/>
+  <entry type="LiteralStringChar" style="#0044dd"/>
+  <entry type="LiteralStringDoc" style="#dd4422"/>
+  <entry type="LiteralStringEscape" style="bold #666666"/>
+  <entry type="LiteralStringInterpol" style="bg:#eeeeee"/>
+  <entry type="LiteralStringOther" style="#dd2200"/>
+  <entry type="LiteralStringRegex" style="#000000 bg:#fff0ff"/>
+  <entry type="LiteralStringSymbol" style="#aa6600"/>
+  <entry type="LiteralNumber" style="bold #6600ee"/>
+  <entry type="LiteralNumberFloat" style="bold #6600ee"/>
+  <entry type="LiteralNumberHex" style="bold #005588"/>
+  <entry type="LiteralNumberInteger" style="bold #0000dd"/>
+  <entry type="LiteralNumberOct" style="bold #4400ee"/>
+  <entry type="Operator" style="#333333"/>
+  <entry type="OperatorWord" style="bold #000000"/>
+  <entry type="Comment" style="#888888"/>
+  <entry type="CommentSpecial" style="bold #cc0000"/>
+  <entry type="CommentPreproc" style="#557799"/>
+  <entry type="GenericDeleted" style="#a00000"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#ff0000"/>
+  <entry type="GenericHeading" style="bold #000080"/>
+  <entry type="GenericInserted" style="#00a000"/>
+  <entry type="GenericOutput" style="#888888"/>
+  <entry type="GenericPrompt" style="bold #c65d09"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #800080"/>
+  <entry type="GenericTraceback" style="#0044dd"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#bbbbbb"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/compat.go 🔗

@@ -0,0 +1,66 @@
+package styles
+
+// Present for backwards compatibility.
+//
+// Deprecated: use styles.Get(name) instead.
+var (
+	Abap                = Registry["abap"]
+	Algol               = Registry["algol"]
+	AlgolNu             = Registry["algol_nu"]
+	Arduino             = Registry["arduino"]
+	Autumn              = Registry["autumn"]
+	Average             = Registry["average"]
+	Base16Snazzy        = Registry["base16-snazzy"]
+	Borland             = Registry["borland"]
+	BlackWhite          = Registry["bw"]
+	CatppuccinFrappe    = Registry["catppuccin-frappe"]
+	CatppuccinLatte     = Registry["catppuccin-latte"]
+	CatppuccinMacchiato = Registry["catppuccin-macchiato"]
+	CatppuccinMocha     = Registry["catppuccin-mocha"]
+	Colorful            = Registry["colorful"]
+	DoomOne             = Registry["doom-one"]
+	DoomOne2            = Registry["doom-one2"]
+	Dracula             = Registry["dracula"]
+	Emacs               = Registry["emacs"]
+	Friendly            = Registry["friendly"]
+	Fruity              = Registry["fruity"]
+	GitHubDark          = Registry["github-dark"]
+	GitHub              = Registry["github"]
+	GruvboxLight        = Registry["gruvbox-light"]
+	Gruvbox             = Registry["gruvbox"]
+	HrDark              = Registry["hrdark"]
+	HrHighContrast      = Registry["hr_high_contrast"]
+	Igor                = Registry["igor"]
+	Lovelace            = Registry["lovelace"]
+	Manni               = Registry["manni"]
+	ModusOperandi       = Registry["modus-operandi"]
+	ModusVivendi        = Registry["modus-vivendi"]
+	Monokai             = Registry["monokai"]
+	MonokaiLight        = Registry["monokailight"]
+	Murphy              = Registry["murphy"]
+	Native              = Registry["native"]
+	Nord                = Registry["nord"]
+	OnesEnterprise      = Registry["onesenterprise"]
+	ParaisoDark         = Registry["paraiso-dark"]
+	ParaisoLight        = Registry["paraiso-light"]
+	Pastie              = Registry["pastie"]
+	Perldoc             = Registry["perldoc"]
+	Pygments            = Registry["pygments"]
+	RainbowDash         = Registry["rainbow_dash"]
+	RosePineDawn        = Registry["rose-pine-dawn"]
+	RosePineMoon        = Registry["rose-pine-moon"]
+	RosePine            = Registry["rose-pine"]
+	Rrt                 = Registry["rrt"]
+	SolarizedDark       = Registry["solarized-dark"]
+	SolarizedDark256    = Registry["solarized-dark256"]
+	SolarizedLight      = Registry["solarized-light"]
+	SwapOff             = Registry["swapoff"]
+	Tango               = Registry["tango"]
+	Trac                = Registry["trac"]
+	Vim                 = Registry["vim"]
+	VisualStudio        = Registry["vs"]
+	Vulcan              = Registry["vulcan"]
+	WitchHazel          = Registry["witchhazel"]
+	XcodeDark           = Registry["xcode-dark"]
+	Xcode               = Registry["xcode"]
+)

vendor/github.com/alecthomas/chroma/v2/styles/doom-one.xml 🔗

@@ -0,0 +1,51 @@
+<style name="doom-one">
+  <entry type="Error" style="#b0c4de"/>
+  <entry type="Background" style="#b0c4de bg:#282c34"/>
+  <entry type="Keyword" style="#c678dd"/>
+  <entry type="KeywordConstant" style="bold #b756ff"/>
+  <entry type="KeywordType" style="#ef8383"/>
+  <entry type="Name" style="#c1abea"/>
+  <entry type="NameAttribute" style="#b3d23c"/>
+  <entry type="NameBuiltin" style="#ef8383"/>
+  <entry type="NameClass" style="#76a9f9"/>
+  <entry type="NameConstant" style="bold #b756ff"/>
+  <entry type="NameDecorator" style="#e5c07b"/>
+  <entry type="NameEntity" style="#bda26f"/>
+  <entry type="NameException" style="bold #fd7474"/>
+  <entry type="NameFunction" style="#00b1f7"/>
+  <entry type="NameLabel" style="#f5a40d"/>
+  <entry type="NameNamespace" style="#76a9f9"/>
+  <entry type="NameProperty" style="#cebc3a"/>
+  <entry type="NameTag" style="#e06c75"/>
+  <entry type="NameVariable" style="#dcaeea"/>
+  <entry type="NameVariableGlobal" style="bold #dcaeea"/>
+  <entry type="NameVariableInstance" style="#e06c75"/>
+  <entry type="Literal" style="#98c379"/>
+  <entry type="LiteralString" style="#98c379"/>
+  <entry type="LiteralStringDoc" style="#7e97c3"/>
+  <entry type="LiteralStringDouble" style="#63c381"/>
+  <entry type="LiteralStringEscape" style="bold #d26464"/>
+  <entry type="LiteralStringHeredoc" style="#98c379"/>
+  <entry type="LiteralStringInterpol" style="#98c379"/>
+  <entry type="LiteralStringOther" style="#70b33f"/>
+  <entry type="LiteralStringRegex" style="#56b6c2"/>
+  <entry type="LiteralStringSingle" style="#98c379"/>
+  <entry type="LiteralStringSymbol" style="#56b6c2"/>
+  <entry type="LiteralNumber" style="#d19a66"/>
+  <entry type="Operator" style="#c7bf54"/>
+  <entry type="OperatorWord" style="bold #b756ff"/>
+  <entry type="Punctuation" style="#b0c4de"/>
+  <entry type="Comment" style="italic #8a93a5"/>
+  <entry type="CommentHashbang" style="bold"/>
+  <entry type="Generic" style="#b0c4de"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericHeading" style="bold #a2cbff"/>
+  <entry type="GenericInserted" style="#a6e22e"/>
+  <entry type="GenericOutput" style="#a6e22e"/>
+  <entry type="GenericPrompt" style="#a6e22e"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#a2cbff"/>
+  <entry type="GenericTraceback" style="#a2cbff"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="Text" style="#b0c4de"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/doom-one2.xml 🔗

@@ -0,0 +1,64 @@
+<style name="doom-one2">
+  <entry type="Error" style="#b0c4de"/>
+  <entry type="Background" style="#b0c4de bg:#282c34"/>
+  <entry type="Keyword" style="#76a9f9"/>
+  <entry type="KeywordConstant" style="#e5c07b"/>
+  <entry type="KeywordType" style="#e5c07b"/>
+  <entry type="Name" style="#aa89ea"/>
+  <entry type="NameAttribute" style="#cebc3a"/>
+  <entry type="NameBuiltin" style="#e5c07b"/>
+  <entry type="NameClass" style="#ca72ff"/>
+  <entry type="NameConstant" style="bold"/>
+  <entry type="NameDecorator" style="#e5c07b"/>
+  <entry type="NameEntity" style="#bda26f"/>
+  <entry type="NameException" style="bold #fd7474"/>
+  <entry type="NameFunction" style="#00b1f7"/>
+  <entry type="NameLabel" style="#f5a40d"/>
+  <entry type="NameNamespace" style="#ca72ff"/>
+  <entry type="NameProperty" style="#cebc3a"/>
+  <entry type="NameTag" style="#76a9f9"/>
+  <entry type="NameVariable" style="#dcaeea"/>
+  <entry type="NameVariableClass" style="#dcaeea"/>
+  <entry type="NameVariableGlobal" style="bold #dcaeea"/>
+  <entry type="NameVariableInstance" style="#e06c75"/>
+  <entry type="NameVariableMagic" style="#dcaeea"/>
+  <entry type="Literal" style="#98c379"/>
+  <entry type="LiteralDate" style="#98c379"/>
+  <entry type="LiteralString" style="#98c379"/>
+  <entry type="LiteralStringAffix" style="#98c379"/>
+  <entry type="LiteralStringBacktick" style="#98c379"/>
+  <entry type="LiteralStringDelimiter" style="#98c379"/>
+  <entry type="LiteralStringDoc" style="#7e97c3"/>
+  <entry type="LiteralStringDouble" style="#63c381"/>
+  <entry type="LiteralStringEscape" style="bold #d26464"/>
+  <entry type="LiteralStringHeredoc" style="#98c379"/>
+  <entry type="LiteralStringInterpol" style="#98c379"/>
+  <entry type="LiteralStringOther" style="#70b33f"/>
+  <entry type="LiteralStringRegex" style="#56b6c2"/>
+  <entry type="LiteralStringSingle" style="#98c379"/>
+  <entry type="LiteralStringSymbol" style="#56b6c2"/>
+  <entry type="LiteralNumber" style="#d19a66"/>
+  <entry type="LiteralNumberBin" style="#d19a66"/>
+  <entry type="LiteralNumberFloat" style="#d19a66"/>
+  <entry type="LiteralNumberHex" style="#d19a66"/>
+  <entry type="LiteralNumberInteger" style="#d19a66"/>
+  <entry type="LiteralNumberIntegerLong" style="#d19a66"/>
+  <entry type="LiteralNumberOct" style="#d19a66"/>
+  <entry type="Operator" style="#54b1c7"/>
+  <entry type="OperatorWord" style="bold #b756ff"/>
+  <entry type="Punctuation" style="#abb2bf"/>
+  <entry type="Comment" style="italic #8a93a5"/>
+  <entry type="CommentHashbang" style="bold"/>
+  <entry type="Generic" style="#b0c4de"/>
+  <entry type="GenericDeleted" style="#b0c4de"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericHeading" style="bold #a2cbff"/>
+  <entry type="GenericInserted" style="#a6e22e"/>
+  <entry type="GenericOutput" style="#a6e22e"/>
+  <entry type="GenericPrompt" style="#a6e22e"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#a2cbff"/>
+  <entry type="GenericTraceback" style="#a2cbff"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="Text" style="#b0c4de"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/dracula.xml 🔗

@@ -0,0 +1,74 @@
+<style name="dracula">
+  <entry type="Other" style="#f8f8f2"/>
+  <entry type="Error" style="#f8f8f2"/>
+  <entry type="Background" style="bg:#282a36"/>
+  <entry type="Keyword" style="#ff79c6"/>
+  <entry type="KeywordConstant" style="#ff79c6"/>
+  <entry type="KeywordDeclaration" style="italic #8be9fd"/>
+  <entry type="KeywordNamespace" style="#ff79c6"/>
+  <entry type="KeywordPseudo" style="#ff79c6"/>
+  <entry type="KeywordReserved" style="#ff79c6"/>
+  <entry type="KeywordType" style="#8be9fd"/>
+  <entry type="Name" style="#f8f8f2"/>
+  <entry type="NameAttribute" style="#50fa7b"/>
+  <entry type="NameBuiltin" style="italic #8be9fd"/>
+  <entry type="NameBuiltinPseudo" style="#f8f8f2"/>
+  <entry type="NameClass" style="#50fa7b"/>
+  <entry type="NameConstant" style="#f8f8f2"/>
+  <entry type="NameDecorator" style="#f8f8f2"/>
+  <entry type="NameEntity" style="#f8f8f2"/>
+  <entry type="NameException" style="#f8f8f2"/>
+  <entry type="NameFunction" style="#50fa7b"/>
+  <entry type="NameLabel" style="italic #8be9fd"/>
+  <entry type="NameNamespace" style="#f8f8f2"/>
+  <entry type="NameOther" style="#f8f8f2"/>
+  <entry type="NameTag" style="#ff79c6"/>
+  <entry type="NameVariable" style="italic #8be9fd"/>
+  <entry type="NameVariableClass" style="italic #8be9fd"/>
+  <entry type="NameVariableGlobal" style="italic #8be9fd"/>
+  <entry type="NameVariableInstance" style="italic #8be9fd"/>
+  <entry type="Literal" style="#f8f8f2"/>
+  <entry type="LiteralDate" style="#f8f8f2"/>
+  <entry type="LiteralString" style="#f1fa8c"/>
+  <entry type="LiteralStringBacktick" style="#f1fa8c"/>
+  <entry type="LiteralStringChar" style="#f1fa8c"/>
+  <entry type="LiteralStringDoc" style="#f1fa8c"/>
+  <entry type="LiteralStringDouble" style="#f1fa8c"/>
+  <entry type="LiteralStringEscape" style="#f1fa8c"/>
+  <entry type="LiteralStringHeredoc" style="#f1fa8c"/>
+  <entry type="LiteralStringInterpol" style="#f1fa8c"/>
+  <entry type="LiteralStringOther" style="#f1fa8c"/>
+  <entry type="LiteralStringRegex" style="#f1fa8c"/>
+  <entry type="LiteralStringSingle" style="#f1fa8c"/>
+  <entry type="LiteralStringSymbol" style="#f1fa8c"/>
+  <entry type="LiteralNumber" style="#bd93f9"/>
+  <entry type="LiteralNumberBin" style="#bd93f9"/>
+  <entry type="LiteralNumberFloat" style="#bd93f9"/>
+  <entry type="LiteralNumberHex" style="#bd93f9"/>
+  <entry type="LiteralNumberInteger" style="#bd93f9"/>
+  <entry type="LiteralNumberIntegerLong" style="#bd93f9"/>
+  <entry type="LiteralNumberOct" style="#bd93f9"/>
+  <entry type="Operator" style="#ff79c6"/>
+  <entry type="OperatorWord" style="#ff79c6"/>
+  <entry type="Punctuation" style="#f8f8f2"/>
+  <entry type="Comment" style="#6272a4"/>
+  <entry type="CommentHashbang" style="#6272a4"/>
+  <entry type="CommentMultiline" style="#6272a4"/>
+  <entry type="CommentSingle" style="#6272a4"/>
+  <entry type="CommentSpecial" style="#6272a4"/>
+  <entry type="CommentPreproc" style="#ff79c6"/>
+  <entry type="Generic" style="#f8f8f2"/>
+  <entry type="GenericDeleted" style="#ff5555"/>
+  <entry type="GenericEmph" style="underline #f8f8f2"/>
+  <entry type="GenericError" style="#f8f8f2"/>
+  <entry type="GenericHeading" style="bold #f8f8f2"/>
+  <entry type="GenericInserted" style="bold #50fa7b"/>
+  <entry type="GenericOutput" style="#44475a"/>
+  <entry type="GenericPrompt" style="#f8f8f2"/>
+  <entry type="GenericStrong" style="#f8f8f2"/>
+  <entry type="GenericSubheading" style="bold #f8f8f2"/>
+  <entry type="GenericTraceback" style="#f8f8f2"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="Text" style="#f8f8f2"/>
+  <entry type="TextWhitespace" style="#f8f8f2"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/emacs.xml 🔗

@@ -0,0 +1,44 @@
+<style name="emacs">
+  <entry type="Error" style="border:#ff0000"/>
+  <entry type="Background" style="bg:#f8f8f8"/>
+  <entry type="Keyword" style="bold #aa22ff"/>
+  <entry type="KeywordPseudo" style="nobold"/>
+  <entry type="KeywordType" style="bold #00bb00"/>
+  <entry type="NameAttribute" style="#bb4444"/>
+  <entry type="NameBuiltin" style="#aa22ff"/>
+  <entry type="NameClass" style="#0000ff"/>
+  <entry type="NameConstant" style="#880000"/>
+  <entry type="NameDecorator" style="#aa22ff"/>
+  <entry type="NameEntity" style="bold #999999"/>
+  <entry type="NameException" style="bold #d2413a"/>
+  <entry type="NameFunction" style="#00a000"/>
+  <entry type="NameLabel" style="#a0a000"/>
+  <entry type="NameNamespace" style="bold #0000ff"/>
+  <entry type="NameTag" style="bold #008000"/>
+  <entry type="NameVariable" style="#b8860b"/>
+  <entry type="LiteralString" style="#bb4444"/>
+  <entry type="LiteralStringDoc" style="italic"/>
+  <entry type="LiteralStringEscape" style="bold #bb6622"/>
+  <entry type="LiteralStringInterpol" style="bold #bb6688"/>
+  <entry type="LiteralStringOther" style="#008000"/>
+  <entry type="LiteralStringRegex" style="#bb6688"/>
+  <entry type="LiteralStringSymbol" style="#b8860b"/>
+  <entry type="LiteralNumber" style="#666666"/>
+  <entry type="Operator" style="#666666"/>
+  <entry type="OperatorWord" style="bold #aa22ff"/>
+  <entry type="Comment" style="italic #008800"/>
+  <entry type="CommentSpecial" style="bold noitalic"/>
+  <entry type="CommentPreproc" style="noitalic"/>
+  <entry type="GenericDeleted" style="#a00000"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#ff0000"/>
+  <entry type="GenericHeading" style="bold #000080"/>
+  <entry type="GenericInserted" style="#00a000"/>
+  <entry type="GenericOutput" style="#888888"/>
+  <entry type="GenericPrompt" style="bold #000080"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #800080"/>
+  <entry type="GenericTraceback" style="#0044dd"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#bbbbbb"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/evergarden.xml 🔗

@@ -0,0 +1,33 @@
+<style name="evergarden">
+  <entry type="Background" style="noinherit #D6CBB4 bg:#252B2E"/>
+  <entry type="Keyword" style="noinherit #E67E80"/>
+  <entry type="KeywordType" style="noinherit #DBBC7F"/>
+  <entry type="Name" style="#D6CBB4"/>
+  <entry type="NameAttribute" style="bold #D699B6"/>
+  <entry type="NameBuiltin" style="#D699B6"/>
+  <entry type="NameConstant" style="noinherit #D699B6"/>
+  <entry type="NameEntity" style="noinherit #DBBC7F"/>
+  <entry type="NameException" style="noinherit #E67E80"/>
+  <entry type="NameFunction" style="#B2C98F"/>
+  <entry type="NameLabel" style="noinherit #E67E80"/>
+  <entry type="NameTag" style="noinherit #7a8478"/>
+  <entry type="NameVariable" style="noinherit #D6CBB4"/>
+  <entry type="LiteralString" style="noinherit #B2C98F"/>
+  <entry type="LiteralStringSymbol" style="#E69875"/>
+  <entry type="LiteralNumber" style="noinherit #D699B6"/>
+  <entry type="LiteralNumberFloat" style="noinherit #D699B6"/>
+  <entry type="Operator" style="#7a8478"/>
+  <entry type="Comment" style="italic #859289"/>
+  <entry type="CommentPreproc" style="noinherit #E67E80"/>
+  <entry type="Generic" style="#D6CBB4"/>
+  <entry type="GenericDeleted" style="noinherit #252B2E bg:#E67E80"/>
+  <entry type="GenericEmph" style="#6E8585"/>
+  <entry type="GenericError" style="bold bg:#E67E80"/>
+  <entry type="GenericHeading" style="bold #D699B6"/>
+  <entry type="GenericInserted" style="noinherit #252B2E bg:#B2C98F"/>
+  <entry type="GenericOutput" style="noinherit #6E8585"/>
+  <entry type="GenericPrompt" style="#D6CBB4"/>
+  <entry type="GenericStrong" style="#D6CBB4"/>
+  <entry type="GenericSubheading" style="bold #B2C98F"/>
+  <entry type="GenericTraceback" style="bold bg:#E67E80"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/friendly.xml 🔗

@@ -0,0 +1,44 @@
+<style name="friendly">
+  <entry type="Error" style="border:#ff0000"/>
+  <entry type="Background" style="bg:#f0f0f0"/>
+  <entry type="Keyword" style="bold #007020"/>
+  <entry type="KeywordPseudo" style="nobold"/>
+  <entry type="KeywordType" style="nobold #902000"/>
+  <entry type="NameAttribute" style="#4070a0"/>
+  <entry type="NameBuiltin" style="#007020"/>
+  <entry type="NameClass" style="bold #0e84b5"/>
+  <entry type="NameConstant" style="#60add5"/>
+  <entry type="NameDecorator" style="bold #555555"/>
+  <entry type="NameEntity" style="bold #d55537"/>
+  <entry type="NameException" style="#007020"/>
+  <entry type="NameFunction" style="#06287e"/>
+  <entry type="NameLabel" style="bold #002070"/>
+  <entry type="NameNamespace" style="bold #0e84b5"/>
+  <entry type="NameTag" style="bold #062873"/>
+  <entry type="NameVariable" style="#bb60d5"/>
+  <entry type="LiteralString" style="#4070a0"/>
+  <entry type="LiteralStringDoc" style="italic"/>
+  <entry type="LiteralStringEscape" style="bold #4070a0"/>
+  <entry type="LiteralStringInterpol" style="#70a0d0"/>
+  <entry type="LiteralStringOther" style="#c65d09"/>
+  <entry type="LiteralStringRegex" style="#235388"/>
+  <entry type="LiteralStringSymbol" style="#517918"/>
+  <entry type="LiteralNumber" style="#40a070"/>
+  <entry type="Operator" style="#666666"/>
+  <entry type="OperatorWord" style="bold #007020"/>
+  <entry type="Comment" style="italic #60a0b0"/>
+  <entry type="CommentSpecial" style="noitalic bg:#fff0f0"/>
+  <entry type="CommentPreproc" style="noitalic #007020"/>
+  <entry type="GenericDeleted" style="#a00000"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#ff0000"/>
+  <entry type="GenericHeading" style="bold #000080"/>
+  <entry type="GenericInserted" style="#00a000"/>
+  <entry type="GenericOutput" style="#888888"/>
+  <entry type="GenericPrompt" style="bold #c65d09"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #800080"/>
+  <entry type="GenericTraceback" style="#0044dd"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#bbbbbb"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/fruity.xml 🔗

@@ -0,0 +1,19 @@
+<style name="fruity">
+  <entry type="Background" style="#ffffff bg:#111111"/>
+  <entry type="Keyword" style="bold #fb660a"/>
+  <entry type="KeywordPseudo" style="nobold"/>
+  <entry type="KeywordType" style="bold #cdcaa9"/>
+  <entry type="NameAttribute" style="bold #ff0086"/>
+  <entry type="NameConstant" style="#0086d2"/>
+  <entry type="NameFunction" style="bold #ff0086"/>
+  <entry type="NameTag" style="bold #fb660a"/>
+  <entry type="NameVariable" style="#fb660a"/>
+  <entry type="LiteralString" style="#0086d2"/>
+  <entry type="LiteralNumber" style="bold #0086f7"/>
+  <entry type="Comment" style="italic #008800 bg:#0f140f"/>
+  <entry type="CommentPreproc" style="bold #ff0007"/>
+  <entry type="GenericHeading" style="bold #ffffff"/>
+  <entry type="GenericOutput" style="#444444 bg:#222222"/>
+  <entry type="GenericSubheading" style="bold #ffffff"/>
+  <entry type="TextWhitespace" style="#888888"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/github-dark.xml 🔗

@@ -0,0 +1,45 @@
+<style name="github-dark">
+  <entry type="Error" style="#f85149"/>
+  <entry type="LineHighlight" style="bg:#6e7681"/>
+  <entry type="LineNumbers" style="#6e7681"/>
+  <entry type="Background" style="#e6edf3 bg:#0d1117"/>
+  <entry type="Keyword" style="#ff7b72"/>
+  <entry type="KeywordConstant" style="#79c0ff"/>
+  <entry type="KeywordPseudo" style="#79c0ff"/>
+  <entry type="Name" style="#e6edf3"/>
+  <entry type="NameClass" style="bold #f0883e"/>
+  <entry type="NameConstant" style="bold #79c0ff"/>
+  <entry type="NameDecorator" style="bold #d2a8ff"/>
+  <entry type="NameEntity" style="#ffa657"/>
+  <entry type="NameException" style="bold #f0883e"/>
+  <entry type="NameFunction" style="bold #d2a8ff"/>
+  <entry type="NameLabel" style="bold #79c0ff"/>
+  <entry type="NameNamespace" style="#ff7b72"/>
+  <entry type="NameProperty" style="#79c0ff"/>
+  <entry type="NameTag" style="#7ee787"/>
+  <entry type="NameVariable" style="#79c0ff"/>
+  <entry type="Literal" style="#a5d6ff"/>
+  <entry type="LiteralDate" style="#79c0ff"/>
+  <entry type="LiteralStringAffix" style="#79c0ff"/>
+  <entry type="LiteralStringDelimiter" style="#79c0ff"/>
+  <entry type="LiteralStringEscape" style="#79c0ff"/>
+  <entry type="LiteralStringHeredoc" style="#79c0ff"/>
+  <entry type="LiteralStringRegex" style="#79c0ff"/>
+  <entry type="Operator" style="bold #ff7b72"/>
+  <entry type="Comment" style="italic #8b949e"/>
+  <entry type="CommentSpecial" style="bold italic #8b949e"/>
+  <entry type="CommentPreproc" style="bold #8b949e"/>
+  <entry type="Generic" style="#e6edf3"/>
+  <entry type="GenericDeleted" style="#ffa198 bg:#490202"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#ffa198"/>
+  <entry type="GenericHeading" style="bold #79c0ff"/>
+  <entry type="GenericInserted" style="#56d364 bg:#0f5323"/>
+  <entry type="GenericOutput" style="#8b949e"/>
+  <entry type="GenericPrompt" style="#8b949e"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#79c0ff"/>
+  <entry type="GenericTraceback" style="#ff7b72"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#6e7681"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/github.xml 🔗

@@ -0,0 +1,39 @@
+<style name="github">
+  <entry type="Error" style="#f6f8fa bg:#82071e"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="#cf222e"/>
+  <entry type="KeywordType" style="#cf222e"/>
+  <entry type="NameAttribute" style="#1f2328"/>
+  <entry type="NameBuiltin" style="#6639ba"/>
+  <entry type="NameBuiltinPseudo" style="#6a737d"/>
+  <entry type="NameClass" style="#1f2328"/>
+  <entry type="NameConstant" style="#0550ae"/>
+  <entry type="NameDecorator" style="#0550ae"/>
+  <entry type="NameEntity" style="#6639ba"/>
+  <entry type="NameFunction" style="#6639ba"/>
+  <entry type="NameLabel" style="bold #990000"/>
+  <entry type="NameNamespace" style="#24292e"/>
+  <entry type="NameOther" style="#1f2328"/>
+  <entry type="NameTag" style="#0550ae"/>
+  <entry type="NameVariable" style="#953800"/>
+  <entry type="NameVariableClass" style="#953800"/>
+  <entry type="NameVariableGlobal" style="#953800"/>
+  <entry type="NameVariableInstance" style="#953800"/>
+  <entry type="LiteralString" style="#0a3069"/>
+  <entry type="LiteralStringRegex" style="#0a3069"/>
+  <entry type="LiteralStringSymbol" style="#032f62"/>
+  <entry type="LiteralNumber" style="#0550ae"/>
+  <entry type="Operator" style="#0550ae"/>
+  <entry type="Comment" style="#57606a"/>
+  <entry type="CommentMultiline" style="#57606a"/>
+  <entry type="CommentSingle" style="#57606a"/>
+  <entry type="CommentSpecial" style="#57606a"/>
+  <entry type="CommentPreproc" style="#57606a"/>
+  <entry type="GenericDeleted" style="#82071e bg:#ffebe9"/>
+  <entry type="GenericEmph" style="#1f2328"/>
+  <entry type="GenericInserted" style="#116329 bg:#dafbe1"/>
+  <entry type="GenericOutput" style="#1f2328"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="Punctuation" style="#1f2328"/>
+  <entry type="TextWhitespace" style="#ffffff"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/gruvbox-light.xml 🔗

@@ -0,0 +1,33 @@
+<style name="gruvbox-light">
+  <entry type="Background" style="noinherit #3c3836 bg:#fbf1c7"/>
+  <entry type="Keyword" style="noinherit #af3a03"/>
+  <entry type="KeywordType" style="noinherit #b57614"/>
+  <entry type="Name" style="#3c3836"/>
+  <entry type="NameAttribute" style="bold #79740e"/>
+  <entry type="NameBuiltin" style="#b57614"/>
+  <entry type="NameConstant" style="noinherit #d3869b"/>
+  <entry type="NameEntity" style="noinherit #b57614"/>
+  <entry type="NameException" style="noinherit #fb4934"/>
+  <entry type="NameFunction" style="#b57614"/>
+  <entry type="NameLabel" style="noinherit #9d0006"/>
+  <entry type="NameTag" style="noinherit #9d0006"/>
+  <entry type="NameVariable" style="noinherit #3c3836"/>
+  <entry type="LiteralString" style="noinherit #79740e"/>
+  <entry type="LiteralStringSymbol" style="#076678"/>
+  <entry type="LiteralNumber" style="noinherit #8f3f71"/>
+  <entry type="LiteralNumberFloat" style="noinherit #8f3f71"/>
+  <entry type="Operator" style="#af3a03"/>
+  <entry type="Comment" style="italic #928374"/>
+  <entry type="CommentPreproc" style="noinherit #427b58"/>
+  <entry type="Generic" style="#3c3836"/>
+  <entry type="GenericDeleted" style="noinherit #282828 bg:#9d0006"/>
+  <entry type="GenericEmph" style="underline #076678"/>
+  <entry type="GenericError" style="bold bg:#9d0006"/>
+  <entry type="GenericHeading" style="bold #79740e"/>
+  <entry type="GenericInserted" style="noinherit #282828 bg:#79740e"/>
+  <entry type="GenericOutput" style="noinherit #504945"/>
+  <entry type="GenericPrompt" style="#3c3836"/>
+  <entry type="GenericStrong" style="#3c3836"/>
+  <entry type="GenericSubheading" style="bold #79740e"/>
+  <entry type="GenericTraceback" style="bold bg:#3c3836"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/gruvbox.xml

@@ -0,0 +1,33 @@
+<style name="gruvbox">
+  <entry type="Background" style="noinherit #ebdbb2 bg:#282828"/>
+  <entry type="Keyword" style="noinherit #fe8019"/>
+  <entry type="KeywordType" style="noinherit #fabd2f"/>
+  <entry type="Name" style="#ebdbb2"/>
+  <entry type="NameAttribute" style="bold #b8bb26"/>
+  <entry type="NameBuiltin" style="#fabd2f"/>
+  <entry type="NameConstant" style="noinherit #d3869b"/>
+  <entry type="NameEntity" style="noinherit #fabd2f"/>
+  <entry type="NameException" style="noinherit #fb4934"/>
+  <entry type="NameFunction" style="#fabd2f"/>
+  <entry type="NameLabel" style="noinherit #fb4934"/>
+  <entry type="NameTag" style="noinherit #fb4934"/>
+  <entry type="NameVariable" style="noinherit #ebdbb2"/>
+  <entry type="LiteralString" style="noinherit #b8bb26"/>
+  <entry type="LiteralStringSymbol" style="#83a598"/>
+  <entry type="LiteralNumber" style="noinherit #d3869b"/>
+  <entry type="LiteralNumberFloat" style="noinherit #d3869b"/>
+  <entry type="Operator" style="#fe8019"/>
+  <entry type="Comment" style="italic #928374"/>
+  <entry type="CommentPreproc" style="noinherit #8ec07c"/>
+  <entry type="Generic" style="#ebdbb2"/>
+  <entry type="GenericDeleted" style="noinherit #282828 bg:#fb4934"/>
+  <entry type="GenericEmph" style="underline #83a598"/>
+  <entry type="GenericError" style="bold bg:#fb4934"/>
+  <entry type="GenericHeading" style="bold #b8bb26"/>
+  <entry type="GenericInserted" style="noinherit #282828 bg:#b8bb26"/>
+  <entry type="GenericOutput" style="noinherit #504945"/>
+  <entry type="GenericPrompt" style="#ebdbb2"/>
+  <entry type="GenericStrong" style="#ebdbb2"/>
+  <entry type="GenericSubheading" style="bold #b8bb26"/>
+  <entry type="GenericTraceback" style="bold bg:#fb4934"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/hr_high_contrast.xml

@@ -0,0 +1,12 @@
+<style name="hr_high_contrast">
+  <entry type="Other" style="#d5d500"/>
+  <entry type="Background" style="#000000"/>
+  <entry type="Keyword" style="#467faf"/>
+  <entry type="Name" style="#ffffff"/>
+  <entry type="LiteralString" style="#a87662"/>
+  <entry type="LiteralStringBoolean" style="#467faf"/>
+  <entry type="LiteralNumber" style="#ffffff"/>
+  <entry type="Operator" style="#e4e400"/>
+  <entry type="OperatorWord" style="#467faf"/>
+  <entry type="Comment" style="#5a8349"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/hrdark.xml

@@ -0,0 +1,10 @@
+<style name="hrdark">
+  <entry type="Other" style="#ffffff"/>
+  <entry type="Background" style="#1d2432"/>
+  <entry type="Keyword" style="#ff636f"/>
+  <entry type="Name" style="#58a1dd"/>
+  <entry type="Literal" style="#a6be9d"/>
+  <entry type="Operator" style="#ff636f"/>
+  <entry type="OperatorWord" style="#ff636f"/>
+  <entry type="Comment" style="italic #828b96"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/igor.xml

@@ -0,0 +1,9 @@
+<style name="igor">
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="#0000ff"/>
+  <entry type="NameClass" style="#007575"/>
+  <entry type="NameDecorator" style="#cc00a3"/>
+  <entry type="NameFunction" style="#c34e00"/>
+  <entry type="LiteralString" style="#009c00"/>
+  <entry type="Comment" style="italic #ff0000"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/lovelace.xml

@@ -0,0 +1,53 @@
+<style name="lovelace">
+  <entry type="Error" style="bg:#a848a8"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="#2838b0"/>
+  <entry type="KeywordConstant" style="italic #444444"/>
+  <entry type="KeywordDeclaration" style="italic"/>
+  <entry type="KeywordType" style="italic"/>
+  <entry type="NameAttribute" style="#388038"/>
+  <entry type="NameBuiltin" style="#388038"/>
+  <entry type="NameBuiltinPseudo" style="italic"/>
+  <entry type="NameClass" style="#287088"/>
+  <entry type="NameConstant" style="#b85820"/>
+  <entry type="NameDecorator" style="#287088"/>
+  <entry type="NameEntity" style="#709030"/>
+  <entry type="NameException" style="#908828"/>
+  <entry type="NameFunction" style="#785840"/>
+  <entry type="NameFunctionMagic" style="#b85820"/>
+  <entry type="NameLabel" style="#289870"/>
+  <entry type="NameNamespace" style="#289870"/>
+  <entry type="NameTag" style="#2838b0"/>
+  <entry type="NameVariable" style="#b04040"/>
+  <entry type="NameVariableGlobal" style="#908828"/>
+  <entry type="NameVariableMagic" style="#b85820"/>
+  <entry type="LiteralString" style="#b83838"/>
+  <entry type="LiteralStringAffix" style="#444444"/>
+  <entry type="LiteralStringChar" style="#a848a8"/>
+  <entry type="LiteralStringDelimiter" style="#b85820"/>
+  <entry type="LiteralStringDoc" style="italic #b85820"/>
+  <entry type="LiteralStringEscape" style="#709030"/>
+  <entry type="LiteralStringInterpol" style="underline"/>
+  <entry type="LiteralStringOther" style="#a848a8"/>
+  <entry type="LiteralStringRegex" style="#a848a8"/>
+  <entry type="LiteralNumber" style="#444444"/>
+  <entry type="Operator" style="#666666"/>
+  <entry type="OperatorWord" style="#a848a8"/>
+  <entry type="Punctuation" style="#888888"/>
+  <entry type="Comment" style="italic #888888"/>
+  <entry type="CommentHashbang" style="#287088"/>
+  <entry type="CommentMultiline" style="#888888"/>
+  <entry type="CommentPreproc" style="noitalic #289870"/>
+  <entry type="GenericDeleted" style="#c02828"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#c02828"/>
+  <entry type="GenericHeading" style="#666666"/>
+  <entry type="GenericInserted" style="#388038"/>
+  <entry type="GenericOutput" style="#666666"/>
+  <entry type="GenericPrompt" style="#444444"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#444444"/>
+  <entry type="GenericTraceback" style="#2838b0"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#a89028"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/manni.xml

@@ -0,0 +1,44 @@
+<style name="manni">
+  <entry type="Error" style="#aa0000 bg:#ffaaaa"/>
+  <entry type="Background" style="bg:#f0f3f3"/>
+  <entry type="Keyword" style="bold #006699"/>
+  <entry type="KeywordPseudo" style="nobold"/>
+  <entry type="KeywordType" style="#007788"/>
+  <entry type="NameAttribute" style="#330099"/>
+  <entry type="NameBuiltin" style="#336666"/>
+  <entry type="NameClass" style="bold #00aa88"/>
+  <entry type="NameConstant" style="#336600"/>
+  <entry type="NameDecorator" style="#9999ff"/>
+  <entry type="NameEntity" style="bold #999999"/>
+  <entry type="NameException" style="bold #cc0000"/>
+  <entry type="NameFunction" style="#cc00ff"/>
+  <entry type="NameLabel" style="#9999ff"/>
+  <entry type="NameNamespace" style="bold #00ccff"/>
+  <entry type="NameTag" style="bold #330099"/>
+  <entry type="NameVariable" style="#003333"/>
+  <entry type="LiteralString" style="#cc3300"/>
+  <entry type="LiteralStringDoc" style="italic"/>
+  <entry type="LiteralStringEscape" style="bold #cc3300"/>
+  <entry type="LiteralStringInterpol" style="#aa0000"/>
+  <entry type="LiteralStringOther" style="#cc3300"/>
+  <entry type="LiteralStringRegex" style="#33aaaa"/>
+  <entry type="LiteralStringSymbol" style="#ffcc33"/>
+  <entry type="LiteralNumber" style="#ff6600"/>
+  <entry type="Operator" style="#555555"/>
+  <entry type="OperatorWord" style="bold #000000"/>
+  <entry type="Comment" style="italic #0099ff"/>
+  <entry type="CommentSpecial" style="bold"/>
+  <entry type="CommentPreproc" style="noitalic #009999"/>
+  <entry type="GenericDeleted" style="bg:#ffcccc border:#cc0000"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#ff0000"/>
+  <entry type="GenericHeading" style="bold #003300"/>
+  <entry type="GenericInserted" style="bg:#ccffcc border:#00cc00"/>
+  <entry type="GenericOutput" style="#aaaaaa"/>
+  <entry type="GenericPrompt" style="bold #000099"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #003300"/>
+  <entry type="GenericTraceback" style="#99cc66"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#bbbbbb"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/modus-operandi.xml

@@ -0,0 +1,13 @@
+<style name="modus-operandi">
+  <entry type="Background" style="#000000 bg:#ffffff"/>
+  <entry type="Keyword" style="#5317ac"/>
+  <entry type="KeywordConstant" style="#0000c0"/>
+  <entry type="KeywordType" style="#005a5f"/>
+  <entry type="NameBuiltin" style="#8f0075"/>
+  <entry type="NameFunction" style="#721045"/>
+  <entry type="NameVariable" style="#00538b"/>
+  <entry type="Literal" style="#0000c0"/>
+  <entry type="LiteralString" style="#2544bb"/>
+  <entry type="Operator" style="#00538b"/>
+  <entry type="Comment" style="#505050"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/modus-vivendi.xml

@@ -0,0 +1,13 @@
+<style name="modus-vivendi">
+  <entry type="Background" style="#ffffff bg:#000000"/>
+  <entry type="Keyword" style="#b6a0ff"/>
+  <entry type="KeywordConstant" style="#00bcff"/>
+  <entry type="KeywordType" style="#6ae4b9"/>
+  <entry type="NameBuiltin" style="#f78fe7"/>
+  <entry type="NameFunction" style="#feacd0"/>
+  <entry type="NameVariable" style="#00d3d0"/>
+  <entry type="Literal" style="#00bcff"/>
+  <entry type="LiteralString" style="#79a8ff"/>
+  <entry type="Operator" style="#00d3d0"/>
+  <entry type="Comment" style="#a8a8a8"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/monokai.xml

@@ -0,0 +1,29 @@
+<style name="monokai">
+  <entry type="Error" style="#960050 bg:#1e0010"/>
+  <entry type="Background" style="bg:#272822"/>
+  <entry type="Keyword" style="#66d9ef"/>
+  <entry type="KeywordNamespace" style="#f92672"/>
+  <entry type="Name" style="#f8f8f2"/>
+  <entry type="NameAttribute" style="#a6e22e"/>
+  <entry type="NameClass" style="#a6e22e"/>
+  <entry type="NameConstant" style="#66d9ef"/>
+  <entry type="NameDecorator" style="#a6e22e"/>
+  <entry type="NameException" style="#a6e22e"/>
+  <entry type="NameFunction" style="#a6e22e"/>
+  <entry type="NameOther" style="#a6e22e"/>
+  <entry type="NameTag" style="#f92672"/>
+  <entry type="Literal" style="#ae81ff"/>
+  <entry type="LiteralDate" style="#e6db74"/>
+  <entry type="LiteralString" style="#e6db74"/>
+  <entry type="LiteralStringEscape" style="#ae81ff"/>
+  <entry type="LiteralNumber" style="#ae81ff"/>
+  <entry type="Operator" style="#f92672"/>
+  <entry type="Punctuation" style="#f8f8f2"/>
+  <entry type="Comment" style="#75715e"/>
+  <entry type="GenericDeleted" style="#f92672"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericInserted" style="#a6e22e"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#75715e"/>
+  <entry type="Text" style="#f8f8f2"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/monokailight.xml

@@ -0,0 +1,26 @@
+<style name="monokailight">
+  <entry type="Error" style="#960050 bg:#1e0010"/>
+  <entry type="Background" style="bg:#fafafa"/>
+  <entry type="Keyword" style="#00a8c8"/>
+  <entry type="KeywordNamespace" style="#f92672"/>
+  <entry type="Name" style="#111111"/>
+  <entry type="NameAttribute" style="#75af00"/>
+  <entry type="NameClass" style="#75af00"/>
+  <entry type="NameConstant" style="#00a8c8"/>
+  <entry type="NameDecorator" style="#75af00"/>
+  <entry type="NameException" style="#75af00"/>
+  <entry type="NameFunction" style="#75af00"/>
+  <entry type="NameOther" style="#75af00"/>
+  <entry type="NameTag" style="#f92672"/>
+  <entry type="Literal" style="#ae81ff"/>
+  <entry type="LiteralDate" style="#d88200"/>
+  <entry type="LiteralString" style="#d88200"/>
+  <entry type="LiteralStringEscape" style="#8045ff"/>
+  <entry type="LiteralNumber" style="#ae81ff"/>
+  <entry type="Operator" style="#f92672"/>
+  <entry type="Punctuation" style="#111111"/>
+  <entry type="Comment" style="#75715e"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="Text" style="#272822"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/murphy.xml

@@ -0,0 +1,52 @@
+<style name="murphy">
+  <entry type="Error" style="#ff0000 bg:#ffaaaa"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="bold #228899"/>
+  <entry type="KeywordPseudo" style="#0088ff"/>
+  <entry type="KeywordType" style="#6666ff"/>
+  <entry type="NameAttribute" style="#000077"/>
+  <entry type="NameBuiltin" style="#007722"/>
+  <entry type="NameClass" style="bold #ee99ee"/>
+  <entry type="NameConstant" style="bold #55eedd"/>
+  <entry type="NameDecorator" style="bold #555555"/>
+  <entry type="NameEntity" style="#880000"/>
+  <entry type="NameException" style="bold #ff0000"/>
+  <entry type="NameFunction" style="bold #55eedd"/>
+  <entry type="NameLabel" style="bold #997700"/>
+  <entry type="NameNamespace" style="bold #0e84b5"/>
+  <entry type="NameTag" style="#007700"/>
+  <entry type="NameVariable" style="#003366"/>
+  <entry type="NameVariableClass" style="#ccccff"/>
+  <entry type="NameVariableGlobal" style="#ff8844"/>
+  <entry type="NameVariableInstance" style="#aaaaff"/>
+  <entry type="LiteralString" style="bg:#e0e0ff"/>
+  <entry type="LiteralStringChar" style="#8888ff"/>
+  <entry type="LiteralStringDoc" style="#dd4422"/>
+  <entry type="LiteralStringEscape" style="bold #666666"/>
+  <entry type="LiteralStringInterpol" style="bg:#eeeeee"/>
+  <entry type="LiteralStringOther" style="#ff8888"/>
+  <entry type="LiteralStringRegex" style="#000000 bg:#e0e0ff"/>
+  <entry type="LiteralStringSymbol" style="#ffcc88"/>
+  <entry type="LiteralNumber" style="bold #6600ee"/>
+  <entry type="LiteralNumberFloat" style="bold #6600ee"/>
+  <entry type="LiteralNumberHex" style="bold #005588"/>
+  <entry type="LiteralNumberInteger" style="bold #6666ff"/>
+  <entry type="LiteralNumberOct" style="bold #4400ee"/>
+  <entry type="Operator" style="#333333"/>
+  <entry type="OperatorWord" style="bold #000000"/>
+  <entry type="Comment" style="italic #666666"/>
+  <entry type="CommentSpecial" style="bold #cc0000"/>
+  <entry type="CommentPreproc" style="noitalic #557799"/>
+  <entry type="GenericDeleted" style="#a00000"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#ff0000"/>
+  <entry type="GenericHeading" style="bold #000080"/>
+  <entry type="GenericInserted" style="#00a000"/>
+  <entry type="GenericOutput" style="#888888"/>
+  <entry type="GenericPrompt" style="bold #c65d09"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #800080"/>
+  <entry type="GenericTraceback" style="#0044dd"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#bbbbbb"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/native.xml

@@ -0,0 +1,35 @@
+<style name="native">
+  <entry type="Error" style="#a61717 bg:#e3d2d2"/>
+  <entry type="Background" style="#d0d0d0 bg:#202020"/>
+  <entry type="Keyword" style="bold #6ab825"/>
+  <entry type="KeywordPseudo" style="nobold"/>
+  <entry type="NameAttribute" style="#bbbbbb"/>
+  <entry type="NameBuiltin" style="#24909d"/>
+  <entry type="NameClass" style="underline #447fcf"/>
+  <entry type="NameConstant" style="#40ffff"/>
+  <entry type="NameDecorator" style="#ffa500"/>
+  <entry type="NameException" style="#bbbbbb"/>
+  <entry type="NameFunction" style="#447fcf"/>
+  <entry type="NameNamespace" style="underline #447fcf"/>
+  <entry type="NameTag" style="bold #6ab825"/>
+  <entry type="NameVariable" style="#40ffff"/>
+  <entry type="LiteralString" style="#ed9d13"/>
+  <entry type="LiteralStringOther" style="#ffa500"/>
+  <entry type="LiteralNumber" style="#3677a9"/>
+  <entry type="OperatorWord" style="bold #6ab825"/>
+  <entry type="Comment" style="italic #999999"/>
+  <entry type="CommentSpecial" style="bold noitalic #e50808 bg:#520000"/>
+  <entry type="CommentPreproc" style="bold noitalic #cd2828"/>
+  <entry type="GenericDeleted" style="#d22323"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#d22323"/>
+  <entry type="GenericHeading" style="bold #ffffff"/>
+  <entry type="GenericInserted" style="#589819"/>
+  <entry type="GenericOutput" style="#cccccc"/>
+  <entry type="GenericPrompt" style="#aaaaaa"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="underline #ffffff"/>
+  <entry type="GenericTraceback" style="#d22323"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#666666"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/nord.xml

@@ -0,0 +1,46 @@
+<style name="nord">
+  <entry type="Error" style="#bf616a"/>
+  <entry type="Background" style="#d8dee9 bg:#2e3440"/>
+  <entry type="Keyword" style="bold #81a1c1"/>
+  <entry type="KeywordPseudo" style="nobold #81a1c1"/>
+  <entry type="KeywordType" style="nobold #81a1c1"/>
+  <entry type="Name" style="#d8dee9"/>
+  <entry type="NameAttribute" style="#8fbcbb"/>
+  <entry type="NameBuiltin" style="#81a1c1"/>
+  <entry type="NameClass" style="#8fbcbb"/>
+  <entry type="NameConstant" style="#8fbcbb"/>
+  <entry type="NameDecorator" style="#d08770"/>
+  <entry type="NameEntity" style="#d08770"/>
+  <entry type="NameException" style="#bf616a"/>
+  <entry type="NameFunction" style="#88c0d0"/>
+  <entry type="NameLabel" style="#8fbcbb"/>
+  <entry type="NameNamespace" style="#8fbcbb"/>
+  <entry type="NameOther" style="#d8dee9"/>
+  <entry type="NameTag" style="#81a1c1"/>
+  <entry type="NameVariable" style="#d8dee9"/>
+  <entry type="NameProperty" style="#8fbcbb"/>
+  <entry type="LiteralString" style="#a3be8c"/>
+  <entry type="LiteralStringDoc" style="#616e87"/>
+  <entry type="LiteralStringEscape" style="#ebcb8b"/>
+  <entry type="LiteralStringInterpol" style="#a3be8c"/>
+  <entry type="LiteralStringOther" style="#a3be8c"/>
+  <entry type="LiteralStringRegex" style="#ebcb8b"/>
+  <entry type="LiteralStringSymbol" style="#a3be8c"/>
+  <entry type="LiteralNumber" style="#b48ead"/>
+  <entry type="Operator" style="#81a1c1"/>
+  <entry type="OperatorWord" style="bold #81a1c1"/>
+  <entry type="Punctuation" style="#eceff4"/>
+  <entry type="Comment" style="italic #616e87"/>
+  <entry type="CommentPreproc" style="#5e81ac"/>
+  <entry type="GenericDeleted" style="#bf616a"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#bf616a"/>
+  <entry type="GenericHeading" style="bold #88c0d0"/>
+  <entry type="GenericInserted" style="#a3be8c"/>
+  <entry type="GenericOutput" style="#d8dee9"/>
+  <entry type="GenericPrompt" style="bold #4c566a"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #88c0d0"/>
+  <entry type="GenericTraceback" style="#bf616a"/>
+  <entry type="TextWhitespace" style="#d8dee9"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/nordic.xml

@@ -0,0 +1,46 @@
+<style name="nordic">
+  <entry type="Error" style="#C5727A"/>
+  <entry type="Background" style="#BBC3D4 bg:#242933"/>
+  <entry type="Keyword" style="bold #D08770"/>
+  <entry type="KeywordPseudo" style="nobold #D08770"/>
+  <entry type="KeywordType" style="nobold #D08770"/>
+  <entry type="Name" style="#BBC3D4"/>
+  <entry type="NameAttribute" style="#8FBCBB"/>
+  <entry type="NameBuiltin" style="#5E81AC"/>
+  <entry type="NameClass" style="#8FBCBB"/>
+  <entry type="NameConstant" style="#8FBCBB"/>
+  <entry type="NameDecorator" style="#D08770"/>
+  <entry type="NameEntity" style="#D08770"/>
+  <entry type="NameException" style="#C5727A"/>
+  <entry type="NameFunction" style="#88C0D0"/>
+  <entry type="NameLabel" style="#8FBCBB"/>
+  <entry type="NameNamespace" style="#8FBCBB"/>
+  <entry type="NameOther" style="#BBC3D4"/>
+  <entry type="NameTag" style="#5E81AC"/>
+  <entry type="NameVariable" style="#BBC3D4"/>
+  <entry type="NameProperty" style="#8FBCBB"/>
+  <entry type="LiteralString" style="#A3BE8C"/>
+  <entry type="LiteralStringDoc" style="#4C566A"/>
+  <entry type="LiteralStringEscape" style="#EBCB8B"/>
+  <entry type="LiteralStringInterpol" style="#A3BE8C"/>
+  <entry type="LiteralStringOther" style="#A3BE8C"/>
+  <entry type="LiteralStringRegex" style="#EBCB8B"/>
+  <entry type="LiteralStringSymbol" style="#A3BE8C"/>
+  <entry type="LiteralNumber" style="#B48EAD"/>
+  <entry type="Operator" style="#5E81AC"/>
+  <entry type="OperatorWord" style="bold #5E81AC"/>
+  <entry type="Punctuation" style="#ECEFF4"/>
+  <entry type="Comment" style="italic #4C566A"/>
+  <entry type="CommentPreproc" style="#5E81AC"/>
+  <entry type="GenericDeleted" style="#C5727A"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#C5727A"/>
+  <entry type="GenericHeading" style="bold #88C0D0"/>
+  <entry type="GenericInserted" style="#A3BE8C"/>
+  <entry type="GenericOutput" style="#BBC3D4"/>
+  <entry type="GenericPrompt" style="bold #1E222A"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #88C0D0"/>
+  <entry type="GenericTraceback" style="#C5727A"/>
+  <entry type="TextWhitespace" style="#BBC3D4"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/onedark.xml

@@ -0,0 +1,25 @@
+<style name="onedark">
+  <entry type="Background" style="#ABB2BF bg:#282C34"/>
+  <entry type="Punctuation"        style="#ABB2BF"/>
+  <entry type="Keyword"            style="#C678DD"/>
+  <entry type="KeywordConstant"    style="#E5C07B"/>
+  <entry type="KeywordDeclaration" style="#C678DD"/>
+  <entry type="KeywordNamespace"   style="#C678DD"/>
+  <entry type="KeywordReserved"    style="#C678DD"/>
+  <entry type="KeywordType"        style="#E5C07B"/>
+  <entry type="Name"               style="#E06C75"/>
+  <entry type="NameAttribute"      style="#E06C75"/>
+  <entry type="NameBuiltin"         style="#E5C07B"/>
+  <entry type="NameClass"           style="#E5C07B"/>
+  <entry type="NameFunction"        style="bold #61AFEF"/>
+  <entry type="NameFunctionMagic"   style="bold #56B6C2"/>
+  <entry type="NameOther"           style="#E06C75"/>
+  <entry type="NameTag"             style="#E06C75"/>
+  <entry type="NameDecorator"       style="#61AFEF"/>
+  <entry type="LiteralString"       style="#98C379"/>
+  <entry type="LiteralNumber"       style="#D19A66"/>
+  <entry type="Operator"            style="#56B6C2"/>
+  <entry type="Comment"             style="#7F848E"/>
+  <entry type="GenericDeleted"      style="#E06C75"/>
+  <entry type="GenericInserted"     style="bold #98C379"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/onesenterprise.xml

@@ -0,0 +1,10 @@
+<style name="onesenterprise">
+  <entry type="Keyword" style="#ff0000"/>
+  <entry type="Name" style="#0000ff"/>
+  <entry type="LiteralString" style="#000000"/>
+  <entry type="Operator" style="#ff0000"/>
+  <entry type="Punctuation" style="#ff0000"/>
+  <entry type="Comment" style="#008000"/>
+  <entry type="CommentPreproc" style="#963200"/>
+  <entry type="Text" style="#000000"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/paraiso-dark.xml

@@ -0,0 +1,37 @@
+<style name="paraiso-dark">
+  <entry type="Error" style="#ef6155"/>
+  <entry type="Background" style="bg:#2f1e2e"/>
+  <entry type="Keyword" style="#815ba4"/>
+  <entry type="KeywordNamespace" style="#5bc4bf"/>
+  <entry type="KeywordType" style="#fec418"/>
+  <entry type="Name" style="#e7e9db"/>
+  <entry type="NameAttribute" style="#06b6ef"/>
+  <entry type="NameClass" style="#fec418"/>
+  <entry type="NameConstant" style="#ef6155"/>
+  <entry type="NameDecorator" style="#5bc4bf"/>
+  <entry type="NameException" style="#ef6155"/>
+  <entry type="NameFunction" style="#06b6ef"/>
+  <entry type="NameNamespace" style="#fec418"/>
+  <entry type="NameOther" style="#06b6ef"/>
+  <entry type="NameTag" style="#5bc4bf"/>
+  <entry type="NameVariable" style="#ef6155"/>
+  <entry type="Literal" style="#f99b15"/>
+  <entry type="LiteralDate" style="#48b685"/>
+  <entry type="LiteralString" style="#48b685"/>
+  <entry type="LiteralStringChar" style="#e7e9db"/>
+  <entry type="LiteralStringDoc" style="#776e71"/>
+  <entry type="LiteralStringEscape" style="#f99b15"/>
+  <entry type="LiteralStringInterpol" style="#f99b15"/>
+  <entry type="LiteralNumber" style="#f99b15"/>
+  <entry type="Operator" style="#5bc4bf"/>
+  <entry type="Punctuation" style="#e7e9db"/>
+  <entry type="Comment" style="#776e71"/>
+  <entry type="GenericDeleted" style="#ef6155"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericHeading" style="bold #e7e9db"/>
+  <entry type="GenericInserted" style="#48b685"/>
+  <entry type="GenericPrompt" style="bold #776e71"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #5bc4bf"/>
+  <entry type="Text" style="#e7e9db"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/paraiso-light.xml

@@ -0,0 +1,37 @@
+<style name="paraiso-light">
+  <entry type="Error" style="#ef6155"/>
+  <entry type="Background" style="bg:#e7e9db"/>
+  <entry type="Keyword" style="#815ba4"/>
+  <entry type="KeywordNamespace" style="#5bc4bf"/>
+  <entry type="KeywordType" style="#fec418"/>
+  <entry type="Name" style="#2f1e2e"/>
+  <entry type="NameAttribute" style="#06b6ef"/>
+  <entry type="NameClass" style="#fec418"/>
+  <entry type="NameConstant" style="#ef6155"/>
+  <entry type="NameDecorator" style="#5bc4bf"/>
+  <entry type="NameException" style="#ef6155"/>
+  <entry type="NameFunction" style="#06b6ef"/>
+  <entry type="NameNamespace" style="#fec418"/>
+  <entry type="NameOther" style="#06b6ef"/>
+  <entry type="NameTag" style="#5bc4bf"/>
+  <entry type="NameVariable" style="#ef6155"/>
+  <entry type="Literal" style="#f99b15"/>
+  <entry type="LiteralDate" style="#48b685"/>
+  <entry type="LiteralString" style="#48b685"/>
+  <entry type="LiteralStringChar" style="#2f1e2e"/>
+  <entry type="LiteralStringDoc" style="#8d8687"/>
+  <entry type="LiteralStringEscape" style="#f99b15"/>
+  <entry type="LiteralStringInterpol" style="#f99b15"/>
+  <entry type="LiteralNumber" style="#f99b15"/>
+  <entry type="Operator" style="#5bc4bf"/>
+  <entry type="Punctuation" style="#2f1e2e"/>
+  <entry type="Comment" style="#8d8687"/>
+  <entry type="GenericDeleted" style="#ef6155"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericHeading" style="bold #2f1e2e"/>
+  <entry type="GenericInserted" style="#48b685"/>
+  <entry type="GenericPrompt" style="bold #8d8687"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #5bc4bf"/>
+  <entry type="Text" style="#2f1e2e"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/pastie.xml

@@ -0,0 +1,45 @@
+<style name="pastie">
+  <entry type="Error" style="#a61717 bg:#e3d2d2"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="bold #008800"/>
+  <entry type="KeywordPseudo" style="nobold"/>
+  <entry type="KeywordType" style="#888888"/>
+  <entry type="NameAttribute" style="#336699"/>
+  <entry type="NameBuiltin" style="#003388"/>
+  <entry type="NameClass" style="bold #bb0066"/>
+  <entry type="NameConstant" style="bold #003366"/>
+  <entry type="NameDecorator" style="#555555"/>
+  <entry type="NameException" style="bold #bb0066"/>
+  <entry type="NameFunction" style="bold #0066bb"/>
+  <entry type="NameLabel" style="italic #336699"/>
+  <entry type="NameNamespace" style="bold #bb0066"/>
+  <entry type="NameProperty" style="bold #336699"/>
+  <entry type="NameTag" style="bold #bb0066"/>
+  <entry type="NameVariable" style="#336699"/>
+  <entry type="NameVariableClass" style="#336699"/>
+  <entry type="NameVariableGlobal" style="#dd7700"/>
+  <entry type="NameVariableInstance" style="#3333bb"/>
+  <entry type="LiteralString" style="#dd2200 bg:#fff0f0"/>
+  <entry type="LiteralStringEscape" style="#0044dd"/>
+  <entry type="LiteralStringInterpol" style="#3333bb"/>
+  <entry type="LiteralStringOther" style="#22bb22 bg:#f0fff0"/>
+  <entry type="LiteralStringRegex" style="#008800 bg:#fff0ff"/>
+  <entry type="LiteralStringSymbol" style="#aa6600"/>
+  <entry type="LiteralNumber" style="bold #0000dd"/>
+  <entry type="OperatorWord" style="#008800"/>
+  <entry type="Comment" style="#888888"/>
+  <entry type="CommentSpecial" style="bold #cc0000 bg:#fff0f0"/>
+  <entry type="CommentPreproc" style="bold #cc0000"/>
+  <entry type="GenericDeleted" style="#000000 bg:#ffdddd"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#aa0000"/>
+  <entry type="GenericHeading" style="#333333"/>
+  <entry type="GenericInserted" style="#000000 bg:#ddffdd"/>
+  <entry type="GenericOutput" style="#888888"/>
+  <entry type="GenericPrompt" style="#555555"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#666666"/>
+  <entry type="GenericTraceback" style="#aa0000"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#bbbbbb"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/perldoc.xml

@@ -0,0 +1,37 @@
+<style name="perldoc">
+  <entry type="Error" style="#a61717 bg:#e3d2d2"/>
+  <entry type="Background" style="bg:#eeeedd"/>
+  <entry type="Keyword" style="bold #8b008b"/>
+  <entry type="KeywordType" style="#00688b"/>
+  <entry type="NameAttribute" style="#658b00"/>
+  <entry type="NameBuiltin" style="#658b00"/>
+  <entry type="NameClass" style="bold #008b45"/>
+  <entry type="NameConstant" style="#00688b"/>
+  <entry type="NameDecorator" style="#707a7c"/>
+  <entry type="NameException" style="bold #008b45"/>
+  <entry type="NameFunction" style="#008b45"/>
+  <entry type="NameNamespace" style="underline #008b45"/>
+  <entry type="NameTag" style="bold #8b008b"/>
+  <entry type="NameVariable" style="#00688b"/>
+  <entry type="LiteralString" style="#cd5555"/>
+  <entry type="LiteralStringHeredoc" style="italic #1c7e71"/>
+  <entry type="LiteralStringOther" style="#cb6c20"/>
+  <entry type="LiteralStringRegex" style="#1c7e71"/>
+  <entry type="LiteralNumber" style="#b452cd"/>
+  <entry type="OperatorWord" style="#8b008b"/>
+  <entry type="Comment" style="#228b22"/>
+  <entry type="CommentSpecial" style="bold #8b008b"/>
+  <entry type="CommentPreproc" style="#1e889b"/>
+  <entry type="GenericDeleted" style="#aa0000"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#aa0000"/>
+  <entry type="GenericHeading" style="bold #000080"/>
+  <entry type="GenericInserted" style="#00aa00"/>
+  <entry type="GenericOutput" style="#888888"/>
+  <entry type="GenericPrompt" style="#555555"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #800080"/>
+  <entry type="GenericTraceback" style="#aa0000"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#bbbbbb"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/pygments.xml

@@ -0,0 +1,42 @@
+<style name="pygments">
+  <entry type="Error" style="border:#ff0000"/>
+  <entry type="Keyword" style="bold #008000"/>
+  <entry type="KeywordPseudo" style="nobold"/>
+  <entry type="KeywordType" style="nobold #b00040"/>
+  <entry type="NameAttribute" style="#7d9029"/>
+  <entry type="NameBuiltin" style="#008000"/>
+  <entry type="NameClass" style="bold #0000ff"/>
+  <entry type="NameConstant" style="#880000"/>
+  <entry type="NameDecorator" style="#aa22ff"/>
+  <entry type="NameEntity" style="bold #999999"/>
+  <entry type="NameException" style="bold #d2413a"/>
+  <entry type="NameFunction" style="#0000ff"/>
+  <entry type="NameLabel" style="#a0a000"/>
+  <entry type="NameNamespace" style="bold #0000ff"/>
+  <entry type="NameTag" style="bold #008000"/>
+  <entry type="NameVariable" style="#19177c"/>
+  <entry type="LiteralString" style="#ba2121"/>
+  <entry type="LiteralStringDoc" style="italic"/>
+  <entry type="LiteralStringEscape" style="bold #bb6622"/>
+  <entry type="LiteralStringInterpol" style="bold #bb6688"/>
+  <entry type="LiteralStringOther" style="#008000"/>
+  <entry type="LiteralStringRegex" style="#bb6688"/>
+  <entry type="LiteralStringSymbol" style="#19177c"/>
+  <entry type="LiteralNumber" style="#666666"/>
+  <entry type="Operator" style="#666666"/>
+  <entry type="OperatorWord" style="bold #aa22ff"/>
+  <entry type="Comment" style="italic #408080"/>
+  <entry type="CommentPreproc" style="noitalic #bc7a00"/>
+  <entry type="GenericDeleted" style="#a00000"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#ff0000"/>
+  <entry type="GenericHeading" style="bold #000080"/>
+  <entry type="GenericInserted" style="#00a000"/>
+  <entry type="GenericOutput" style="#888888"/>
+  <entry type="GenericPrompt" style="bold #000080"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #800080"/>
+  <entry type="GenericTraceback" style="#0044dd"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#bbbbbb"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/rainbow_dash.xml

@@ -0,0 +1,40 @@
+<style name="rainbow_dash">
+  <entry type="Error" style="#ffffff bg:#cc0000"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="bold #2c5dcd"/>
+  <entry type="KeywordPseudo" style="nobold"/>
+  <entry type="KeywordType" style="#5918bb"/>
+  <entry type="NameAttribute" style="italic #2c5dcd"/>
+  <entry type="NameBuiltin" style="bold #5918bb"/>
+  <entry type="NameClass" style="underline"/>
+  <entry type="NameConstant" style="#318495"/>
+  <entry type="NameDecorator" style="bold #ff8000"/>
+  <entry type="NameEntity" style="bold #5918bb"/>
+  <entry type="NameException" style="bold #5918bb"/>
+  <entry type="NameFunction" style="bold #ff8000"/>
+  <entry type="NameTag" style="bold #2c5dcd"/>
+  <entry type="LiteralString" style="#00cc66"/>
+  <entry type="LiteralStringDoc" style="italic"/>
+  <entry type="LiteralStringEscape" style="bold #c5060b"/>
+  <entry type="LiteralStringOther" style="#318495"/>
+  <entry type="LiteralStringSymbol" style="bold #c5060b"/>
+  <entry type="LiteralNumber" style="bold #5918bb"/>
+  <entry type="Operator" style="#2c5dcd"/>
+  <entry type="OperatorWord" style="bold"/>
+  <entry type="Comment" style="italic #0080ff"/>
+  <entry type="CommentSpecial" style="bold"/>
+  <entry type="CommentPreproc" style="noitalic"/>
+  <entry type="GenericDeleted" style="bg:#ffcccc border:#c5060b"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#ff0000"/>
+  <entry type="GenericHeading" style="bold #2c5dcd"/>
+  <entry type="GenericInserted" style="bg:#ccffcc border:#00cc00"/>
+  <entry type="GenericOutput" style="#aaaaaa"/>
+  <entry type="GenericPrompt" style="bold #2c5dcd"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #2c5dcd"/>
+  <entry type="GenericTraceback" style="#c5060b"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="Text" style="#4d4d4d"/>
+  <entry type="TextWhitespace" style="#cbcbcb"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/rose-pine-dawn.xml

@@ -0,0 +1,29 @@
+<style name="rose-pine-dawn">
+  <entry type="Error" style="#b4637a"/>
+  <entry type="Background" style="bg:#faf4ed"/>
+  <entry type="Keyword" style="#286983"/>
+  <entry type="KeywordNamespace" style="#907aa9"/>
+  <entry type="Name" style="#d7827e"/>
+  <entry type="NameAttribute" style="#d7827e"/>
+  <entry type="NameClass" style="#56949f"/>
+  <entry type="NameConstant" style="#ea9d34"/>
+  <entry type="NameDecorator" style="#797593"/>
+  <entry type="NameException" style="#286983"/>
+  <entry type="NameFunction" style="#d7827e"/>
+  <entry type="NameOther" style="#575279"/>
+  <entry type="NameTag" style="#d7827e"/>
+  <entry type="Literal" style="#ea9d34"/>
+  <entry type="LiteralDate" style="#ea9d34"/>
+  <entry type="LiteralString" style="#ea9d34"/>
+  <entry type="LiteralStringEscape" style="#286983"/>
+  <entry type="LiteralNumber" style="#ea9d34"/>
+  <entry type="Operator" style="#797593"/>
+  <entry type="Punctuation" style="#797593"/>
+  <entry type="Comment" style="#9893a5"/>
+  <entry type="GenericDeleted" style="#b4637a"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericInserted" style="#56949f"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#907aa9"/>
+  <entry type="Text" style="#575279"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/rose-pine-moon.xml

@@ -0,0 +1,29 @@
+<style name="rose-pine-moon">
+  <entry type="Error" style="#eb6f92"/>
+  <entry type="Background" style="bg:#232136"/>
+  <entry type="Keyword" style="#3e8fb0"/>
+  <entry type="KeywordNamespace" style="#c4a7e7"/>
+  <entry type="Name" style="#ea9a97"/>
+  <entry type="NameAttribute" style="#ea9a97"/>
+  <entry type="NameClass" style="#9ccfd8"/>
+  <entry type="NameConstant" style="#f6c177"/>
+  <entry type="NameDecorator" style="#908caa"/>
+  <entry type="NameException" style="#3e8fb0"/>
+  <entry type="NameFunction" style="#ea9a97"/>
+  <entry type="NameOther" style="#e0def4"/>
+  <entry type="NameTag" style="#ea9a97"/>
+  <entry type="Literal" style="#f6c177"/>
+  <entry type="LiteralDate" style="#f6c177"/>
+  <entry type="LiteralString" style="#f6c177"/>
+  <entry type="LiteralStringEscape" style="#3e8fb0"/>
+  <entry type="LiteralNumber" style="#f6c177"/>
+  <entry type="Operator" style="#908caa"/>
+  <entry type="Punctuation" style="#908caa"/>
+  <entry type="Comment" style="#6e6a86"/>
+  <entry type="GenericDeleted" style="#eb6f92"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericInserted" style="#9ccfd8"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#c4a7e7"/>
+  <entry type="Text" style="#e0def4"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/rose-pine.xml

@@ -0,0 +1,29 @@
+<style name="rose-pine">
+  <entry type="Error" style="#eb6f92"/>
+  <entry type="Background" style="bg:#191724"/>
+  <entry type="Keyword" style="#31748f"/>
+  <entry type="KeywordNamespace" style="#c4a7e7"/>
+  <entry type="Name" style="#ebbcba"/>
+  <entry type="NameAttribute" style="#ebbcba"/>
+  <entry type="NameClass" style="#9ccfd8"/>
+  <entry type="NameConstant" style="#f6c177"/>
+  <entry type="NameDecorator" style="#908caa"/>
+  <entry type="NameException" style="#31748f"/>
+  <entry type="NameFunction" style="#ebbcba"/>
+  <entry type="NameOther" style="#e0def4"/>
+  <entry type="NameTag" style="#ebbcba"/>
+  <entry type="Literal" style="#f6c177"/>
+  <entry type="LiteralDate" style="#f6c177"/>
+  <entry type="LiteralString" style="#f6c177"/>
+  <entry type="LiteralStringEscape" style="#31748f"/>
+  <entry type="LiteralNumber" style="#f6c177"/>
+  <entry type="Operator" style="#908caa"/>
+  <entry type="Punctuation" style="#908caa"/>
+  <entry type="Comment" style="#6e6a86"/>
+  <entry type="GenericDeleted" style="#eb6f92"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericInserted" style="#9ccfd8"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#c4a7e7"/>
+  <entry type="Text" style="#e0def4"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/rrt.xml

@@ -0,0 +1,13 @@
+<style name="rrt">
+  <entry type="Background" style="#f8f8f2 bg:#000000"/>
+  <entry type="Keyword" style="#ff0000"/>
+  <entry type="KeywordType" style="#ee82ee"/>
+  <entry type="NameConstant" style="#7fffd4"/>
+  <entry type="NameFunction" style="#ffff00"/>
+  <entry type="NameVariable" style="#eedd82"/>
+  <entry type="LiteralString" style="#87ceeb"/>
+  <entry type="LiteralStringSymbol" style="#ff6600"/>
+  <entry type="LiteralNumber" style="#ff6600"/>
+  <entry type="Comment" style="#00ff00"/>
+  <entry type="CommentPreproc" style="#e5e5e5"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/solarized-dark.xml

@@ -0,0 +1,39 @@
+<style name="solarized-dark">
+  <entry type="Other" style="#cb4b16"/>
+  <entry type="Background" style="#93a1a1 bg:#002b36"/>
+  <entry type="Keyword" style="#719e07"/>
+  <entry type="KeywordConstant" style="#cb4b16"/>
+  <entry type="KeywordDeclaration" style="#268bd2"/>
+  <entry type="KeywordReserved" style="#268bd2"/>
+  <entry type="KeywordType" style="#dc322f"/>
+  <entry type="NameAttribute" style="#93a1a1"/>
+  <entry type="NameBuiltin" style="#b58900"/>
+  <entry type="NameBuiltinPseudo" style="#268bd2"/>
+  <entry type="NameClass" style="#268bd2"/>
+  <entry type="NameConstant" style="#cb4b16"/>
+  <entry type="NameDecorator" style="#268bd2"/>
+  <entry type="NameEntity" style="#cb4b16"/>
+  <entry type="NameException" style="#cb4b16"/>
+  <entry type="NameFunction" style="#268bd2"/>
+  <entry type="NameTag" style="#268bd2"/>
+  <entry type="NameVariable" style="#268bd2"/>
+  <entry type="LiteralString" style="#2aa198"/>
+  <entry type="LiteralStringBacktick" style="#586e75"/>
+  <entry type="LiteralStringChar" style="#2aa198"/>
+  <entry type="LiteralStringDoc" style="#93a1a1"/>
+  <entry type="LiteralStringEscape" style="#cb4b16"/>
+  <entry type="LiteralStringHeredoc" style="#93a1a1"/>
+  <entry type="LiteralStringRegex" style="#dc322f"/>
+  <entry type="LiteralNumber" style="#2aa198"/>
+  <entry type="Operator" style="#719e07"/>
+  <entry type="Comment" style="#586e75"/>
+  <entry type="CommentSpecial" style="#719e07"/>
+  <entry type="CommentPreproc" style="#719e07"/>
+  <entry type="GenericDeleted" style="#dc322f"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="bold #dc322f"/>
+  <entry type="GenericHeading" style="#cb4b16"/>
+  <entry type="GenericInserted" style="#719e07"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#268bd2"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/solarized-dark256.xml

@@ -0,0 +1,41 @@
+<style name="solarized-dark256">
+  <entry type="Other" style="#d75f00"/>
+  <entry type="Background" style="#8a8a8a bg:#1c1c1c"/>
+  <entry type="Keyword" style="#5f8700"/>
+  <entry type="KeywordConstant" style="#d75f00"/>
+  <entry type="KeywordDeclaration" style="#0087ff"/>
+  <entry type="KeywordNamespace" style="#d75f00"/>
+  <entry type="KeywordReserved" style="#0087ff"/>
+  <entry type="KeywordType" style="#af0000"/>
+  <entry type="NameAttribute" style="#8a8a8a"/>
+  <entry type="NameBuiltin" style="#0087ff"/>
+  <entry type="NameBuiltinPseudo" style="#0087ff"/>
+  <entry type="NameClass" style="#0087ff"/>
+  <entry type="NameConstant" style="#d75f00"/>
+  <entry type="NameDecorator" style="#0087ff"/>
+  <entry type="NameEntity" style="#d75f00"/>
+  <entry type="NameException" style="#af8700"/>
+  <entry type="NameFunction" style="#0087ff"/>
+  <entry type="NameTag" style="#0087ff"/>
+  <entry type="NameVariable" style="#0087ff"/>
+  <entry type="LiteralString" style="#00afaf"/>
+  <entry type="LiteralStringBacktick" style="#4e4e4e"/>
+  <entry type="LiteralStringChar" style="#00afaf"/>
+  <entry type="LiteralStringDoc" style="#00afaf"/>
+  <entry type="LiteralStringEscape" style="#af0000"/>
+  <entry type="LiteralStringHeredoc" style="#00afaf"/>
+  <entry type="LiteralStringRegex" style="#af0000"/>
+  <entry type="LiteralNumber" style="#00afaf"/>
+  <entry type="Operator" style="#8a8a8a"/>
+  <entry type="OperatorWord" style="#5f8700"/>
+  <entry type="Comment" style="#4e4e4e"/>
+  <entry type="CommentSpecial" style="#5f8700"/>
+  <entry type="CommentPreproc" style="#5f8700"/>
+  <entry type="GenericDeleted" style="#af0000"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="bold #af0000"/>
+  <entry type="GenericHeading" style="#d75f00"/>
+  <entry type="GenericInserted" style="#5f8700"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#0087ff"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/solarized-light.xml

@@ -0,0 +1,17 @@
+<style name="solarized-light">
+  <entry type="Background" style="bg:#eee8d5"/>
+  <entry type="Keyword" style="#859900"/>
+  <entry type="KeywordConstant" style="bold"/>
+  <entry type="KeywordNamespace" style="bold #dc322f"/>
+  <entry type="KeywordType" style="bold"/>
+  <entry type="Name" style="#268bd2"/>
+  <entry type="NameBuiltin" style="#cb4b16"/>
+  <entry type="NameClass" style="#cb4b16"/>
+  <entry type="NameTag" style="bold"/>
+  <entry type="Literal" style="#2aa198"/>
+  <entry type="LiteralNumber" style="bold"/>
+  <entry type="OperatorWord" style="#859900"/>
+  <entry type="Comment" style="italic #93a1a1"/>
+  <entry type="Generic" style="#d33682"/>
+  <entry type="Text" style="#586e75"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/swapoff.xml

@@ -0,0 +1,18 @@
+<style name="swapoff">
+  <entry type="Error" style="#ff0000"/>
+  <entry type="Background" style="#e5e5e5 bg:#000000"/>
+  <entry type="Keyword" style="bold #ffffff"/>
+  <entry type="NameAttribute" style="#007f7f"/>
+  <entry type="NameBuiltin" style="bold #ffffff"/>
+  <entry type="NameKeyword" style="bold #ffffff"/>
+  <entry type="NameTag" style="bold"/>
+  <entry type="LiteralDate" style="bold #ffff00"/>
+  <entry type="LiteralString" style="bold #00ffff"/>
+  <entry type="LiteralNumber" style="bold #ffff00"/>
+  <entry type="Comment" style="#007f7f"/>
+  <entry type="CommentPreproc" style="bold #00ff00"/>
+  <entry type="GenericHeading" style="bold"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold"/>
+  <entry type="GenericUnderline" style="underline"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/tango.xml

@@ -0,0 +1,72 @@
+<style name="tango">
+  <entry type="Other" style="#000000"/>
+  <entry type="Error" style="#a40000 border:#ef2929"/>
+  <entry type="Background" style="bg:#f8f8f8"/>
+  <entry type="Keyword" style="bold #204a87"/>
+  <entry type="KeywordConstant" style="bold #204a87"/>
+  <entry type="KeywordDeclaration" style="bold #204a87"/>
+  <entry type="KeywordNamespace" style="bold #204a87"/>
+  <entry type="KeywordPseudo" style="bold #204a87"/>
+  <entry type="KeywordReserved" style="bold #204a87"/>
+  <entry type="KeywordType" style="bold #204a87"/>
+  <entry type="Name" style="#000000"/>
+  <entry type="NameAttribute" style="#c4a000"/>
+  <entry type="NameBuiltin" style="#204a87"/>
+  <entry type="NameBuiltinPseudo" style="#3465a4"/>
+  <entry type="NameClass" style="#000000"/>
+  <entry type="NameConstant" style="#000000"/>
+  <entry type="NameDecorator" style="bold #5c35cc"/>
+  <entry type="NameEntity" style="#ce5c00"/>
+  <entry type="NameException" style="bold #cc0000"/>
+  <entry type="NameFunction" style="#000000"/>
+  <entry type="NameLabel" style="#f57900"/>
+  <entry type="NameNamespace" style="#000000"/>
+  <entry type="NameOther" style="#000000"/>
+  <entry type="NameProperty" style="#000000"/>
+  <entry type="NameTag" style="bold #204a87"/>
+  <entry type="NameVariable" style="#000000"/>
+  <entry type="NameVariableClass" style="#000000"/>
+  <entry type="NameVariableGlobal" style="#000000"/>
+  <entry type="NameVariableInstance" style="#000000"/>
+  <entry type="Literal" style="#000000"/>
+  <entry type="LiteralDate" style="#000000"/>
+  <entry type="LiteralString" style="#4e9a06"/>
+  <entry type="LiteralStringBacktick" style="#4e9a06"/>
+  <entry type="LiteralStringChar" style="#4e9a06"/>
+  <entry type="LiteralStringDoc" style="italic #8f5902"/>
+  <entry type="LiteralStringDouble" style="#4e9a06"/>
+  <entry type="LiteralStringEscape" style="#4e9a06"/>
+  <entry type="LiteralStringHeredoc" style="#4e9a06"/>
+  <entry type="LiteralStringInterpol" style="#4e9a06"/>
+  <entry type="LiteralStringOther" style="#4e9a06"/>
+  <entry type="LiteralStringRegex" style="#4e9a06"/>
+  <entry type="LiteralStringSingle" style="#4e9a06"/>
+  <entry type="LiteralStringSymbol" style="#4e9a06"/>
+  <entry type="LiteralNumber" style="bold #0000cf"/>
+  <entry type="LiteralNumberFloat" style="bold #0000cf"/>
+  <entry type="LiteralNumberHex" style="bold #0000cf"/>
+  <entry type="LiteralNumberInteger" style="bold #0000cf"/>
+  <entry type="LiteralNumberIntegerLong" style="bold #0000cf"/>
+  <entry type="LiteralNumberOct" style="bold #0000cf"/>
+  <entry type="Operator" style="bold #ce5c00"/>
+  <entry type="OperatorWord" style="bold #204a87"/>
+  <entry type="Punctuation" style="bold #000000"/>
+  <entry type="Comment" style="italic #8f5902"/>
+  <entry type="CommentMultiline" style="italic #8f5902"/>
+  <entry type="CommentSingle" style="italic #8f5902"/>
+  <entry type="CommentSpecial" style="italic #8f5902"/>
+  <entry type="CommentPreproc" style="italic #8f5902"/>
+  <entry type="Generic" style="#000000"/>
+  <entry type="GenericDeleted" style="#a40000"/>
+  <entry type="GenericEmph" style="italic #000000"/>
+  <entry type="GenericError" style="#ef2929"/>
+  <entry type="GenericHeading" style="bold #000080"/>
+  <entry type="GenericInserted" style="#00a000"/>
+  <entry type="GenericOutput" style="italic #000000"/>
+  <entry type="GenericPrompt" style="#8f5902"/>
+  <entry type="GenericStrong" style="bold #000000"/>
+  <entry type="GenericSubheading" style="bold #800080"/>
+  <entry type="GenericTraceback" style="bold #a40000"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="underline #f8f8f8"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/tokyonight-day.xml

@@ -0,0 +1,83 @@
+<style name="tokyonight-day">
+  <entry type="Background" style="bg:#e1e2e7 #3760bf"/>
+  <entry type="CodeLine" style="#3760bf"/>
+  <entry type="Error" style="#c64343"/>
+  <entry type="Other" style="#3760bf"/>
+  <entry type="LineTableTD" style=""/>
+  <entry type="LineTable" style=""/>
+  <entry type="LineHighlight" style="bg:#a1a6c5"/>
+  <entry type="LineNumbersTable" style="#6172b0"/>
+  <entry type="LineNumbers" style="#6172b0"/>
+  <entry type="Keyword" style="#9854f1"/>
+  <entry type="KeywordReserved" style="#9854f1"/>
+  <entry type="KeywordPseudo" style="#9854f1"/>
+  <entry type="KeywordConstant" style="#8c6c3e"/>
+  <entry type="KeywordDeclaration" style="#9d7cd8"/>
+  <entry type="KeywordNamespace" style="#007197"/>
+  <entry type="KeywordType" style="#0db9d7"/>
+  <entry type="Name" style="#3760bf"/>
+  <entry type="NameClass" style="#b15c00"/>
+  <entry type="NameConstant" style="#b15c00"/>
+  <entry type="NameDecorator" style="bold #2e7de9"/>
+  <entry type="NameEntity" style="#007197"/>
+  <entry type="NameException" style="#8c6c3e"/>
+  <entry type="NameFunction" style="#2e7de9"/>
+  <entry type="NameFunctionMagic" style="#2e7de9"/>
+  <entry type="NameLabel" style="#587539"/>
+  <entry type="NameNamespace" style="#8c6c3e"/>
+  <entry type="NameProperty" style="#8c6c3e"/>
+  <entry type="NameTag" style="#9854f1"/>
+  <entry type="NameVariable" style="#3760bf"/>
+  <entry type="NameVariableClass" style="#3760bf"/>
+  <entry type="NameVariableGlobal" style="#3760bf"/>
+  <entry type="NameVariableInstance" style="#3760bf"/>
+  <entry type="NameVariableMagic" style="#3760bf"/>
+  <entry type="NameAttribute" style="#2e7de9"/>
+  <entry type="NameBuiltin" style="#587539"/>
+  <entry type="NameBuiltinPseudo" style="#587539"/>
+  <entry type="NameOther" style="#3760bf"/>
+  <entry type="Literal" style="#3760bf"/>
+  <entry type="LiteralDate" style="#3760bf"/>
+  <entry type="LiteralString" style="#587539"/>
+  <entry type="LiteralStringChar" style="#587539"/>
+  <entry type="LiteralStringSingle" style="#587539"/>
+  <entry type="LiteralStringDouble" style="#587539"/>
+  <entry type="LiteralStringBacktick" style="#587539"/>
+  <entry type="LiteralStringOther" style="#587539"/>
+  <entry type="LiteralStringSymbol" style="#587539"/>
+  <entry type="LiteralStringInterpol" style="#587539"/>
+  <entry type="LiteralStringAffix" style="#9d7cd8"/>
+  <entry type="LiteralStringDelimiter" style="#2e7de9"/>
+  <entry type="LiteralStringEscape" style="#2e7de9"/>
+  <entry type="LiteralStringRegex" style="#007197"/>
+  <entry type="LiteralStringDoc" style="#a1a6c5"/>
+  <entry type="LiteralStringHeredoc" style="#a1a6c5"/>
+  <entry type="LiteralNumber" style="#8c6c3e"/>
+  <entry type="LiteralNumberBin" style="#8c6c3e"/>
+  <entry type="LiteralNumberHex" style="#8c6c3e"/>
+  <entry type="LiteralNumberInteger" style="#8c6c3e"/>
+  <entry type="LiteralNumberFloat" style="#8c6c3e"/>
+  <entry type="LiteralNumberIntegerLong" style="#8c6c3e"/>
+  <entry type="LiteralNumberOct" style="#8c6c3e"/>
+  <entry type="Operator" style="bold #587539"/>
+  <entry type="OperatorWord" style="bold #587539"/>
+  <entry type="Comment" style="italic #a1a6c5"/>
+  <entry type="CommentSingle" style="italic #a1a6c5"/>
+  <entry type="CommentMultiline" style="italic #a1a6c5"/>
+  <entry type="CommentSpecial" style="italic #a1a6c5"/>
+  <entry type="CommentHashbang" style="italic #a1a6c5"/>
+  <entry type="CommentPreproc" style="italic #a1a6c5"/>
+  <entry type="CommentPreprocFile" style="bold #a1a6c5"/>
+  <entry type="Generic" style="#3760bf"/>
+  <entry type="GenericInserted" style="bg:#e9e9ed #587539"/>
+  <entry type="GenericDeleted" style="#c64343 bg:#e9e9ed"/>
+  <entry type="GenericEmph" style="italic #3760bf"/>
+  <entry type="GenericStrong" style="bold #3760bf"/>
+  <entry type="GenericUnderline" style="underline #3760bf"/>
+  <entry type="GenericHeading" style="bold #8c6c3e"/>
+  <entry type="GenericSubheading" style="bold #8c6c3e"/>
+  <entry type="GenericOutput" style="#3760bf"/>
+  <entry type="GenericPrompt" style="#3760bf"/>
+  <entry type="GenericError" style="#c64343"/>
+  <entry type="GenericTraceback" style="#c64343"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/tokyonight-moon.xml

@@ -0,0 +1,83 @@
+<style name="tokyonight-moon">
+  <entry type="Background" style="bg:#222436 #c8d3f5"/>
+  <entry type="CodeLine" style="#c8d3f5"/>
+  <entry type="Error" style="#c53b53"/>
+  <entry type="Other" style="#c8d3f5"/>
+  <entry type="LineTableTD" style=""/>
+  <entry type="LineTable" style=""/>
+  <entry type="LineHighlight" style="bg:#444a73"/>
+  <entry type="LineNumbersTable" style="#828bb8"/>
+  <entry type="LineNumbers" style="#828bb8"/>
+  <entry type="Keyword" style="#c099ff"/>
+  <entry type="KeywordReserved" style="#c099ff"/>
+  <entry type="KeywordPseudo" style="#c099ff"/>
+  <entry type="KeywordConstant" style="#ffc777"/>
+  <entry type="KeywordDeclaration" style="#c099ff"/>
+  <entry type="KeywordNamespace" style="#86e1fc"/>
+  <entry type="KeywordType" style="#4fd6be"/>
+  <entry type="Name" style="#c8d3f5"/>
+  <entry type="NameClass" style="#ff966c"/>
+  <entry type="NameConstant" style="#ff966c"/>
+  <entry type="NameDecorator" style="bold #82aaff"/>
+  <entry type="NameEntity" style="#86e1fc"/>
+  <entry type="NameException" style="#ffc777"/>
+  <entry type="NameFunction" style="#82aaff"/>
+  <entry type="NameFunctionMagic" style="#82aaff"/>
+  <entry type="NameLabel" style="#c3e88d"/>
+  <entry type="NameNamespace" style="#ffc777"/>
+  <entry type="NameProperty" style="#ffc777"/>
+  <entry type="NameTag" style="#c099ff"/>
+  <entry type="NameVariable" style="#c8d3f5"/>
+  <entry type="NameVariableClass" style="#c8d3f5"/>
+  <entry type="NameVariableGlobal" style="#c8d3f5"/>
+  <entry type="NameVariableInstance" style="#c8d3f5"/>
+  <entry type="NameVariableMagic" style="#c8d3f5"/>
+  <entry type="NameAttribute" style="#82aaff"/>
+  <entry type="NameBuiltin" style="#c3e88d"/>
+  <entry type="NameBuiltinPseudo" style="#c3e88d"/>
+  <entry type="NameOther" style="#c8d3f5"/>
+  <entry type="Literal" style="#c8d3f5"/>
+  <entry type="LiteralDate" style="#c8d3f5"/>
+  <entry type="LiteralString" style="#c3e88d"/>
+  <entry type="LiteralStringChar" style="#c3e88d"/>
+  <entry type="LiteralStringSingle" style="#c3e88d"/>
+  <entry type="LiteralStringDouble" style="#c3e88d"/>
+  <entry type="LiteralStringBacktick" style="#c3e88d"/>
+  <entry type="LiteralStringOther" style="#c3e88d"/>
+  <entry type="LiteralStringSymbol" style="#c3e88d"/>
+  <entry type="LiteralStringInterpol" style="#c3e88d"/>
+  <entry type="LiteralStringAffix" style="#c099ff"/>
+  <entry type="LiteralStringDelimiter" style="#82aaff"/>
+  <entry type="LiteralStringEscape" style="#82aaff"/>
+  <entry type="LiteralStringRegex" style="#86e1fc"/>
+  <entry type="LiteralStringDoc" style="#444a73"/>
+  <entry type="LiteralStringHeredoc" style="#444a73"/>
+  <entry type="LiteralNumber" style="#ffc777"/>
+  <entry type="LiteralNumberBin" style="#ffc777"/>
+  <entry type="LiteralNumberHex" style="#ffc777"/>
+  <entry type="LiteralNumberInteger" style="#ffc777"/>
+  <entry type="LiteralNumberFloat" style="#ffc777"/>
+  <entry type="LiteralNumberIntegerLong" style="#ffc777"/>
+  <entry type="LiteralNumberOct" style="#ffc777"/>
+  <entry type="Operator" style="bold #c3e88d"/>
+  <entry type="OperatorWord" style="bold #c3e88d"/>
+  <entry type="Comment" style="italic #444a73"/>
+  <entry type="CommentSingle" style="italic #444a73"/>
+  <entry type="CommentMultiline" style="italic #444a73"/>
+  <entry type="CommentSpecial" style="italic #444a73"/>
+  <entry type="CommentHashbang" style="italic #444a73"/>
+  <entry type="CommentPreproc" style="italic #444a73"/>
+  <entry type="CommentPreprocFile" style="bold #444a73"/>
+  <entry type="Generic" style="#c8d3f5"/>
+  <entry type="GenericInserted" style="bg:#1b1d2b #c3e88d"/>
+  <entry type="GenericDeleted" style="#c53b53 bg:#1b1d2b"/>
+  <entry type="GenericEmph" style="italic #c8d3f5"/>
+  <entry type="GenericStrong" style="bold #c8d3f5"/>
+  <entry type="GenericUnderline" style="underline #c8d3f5"/>
+  <entry type="GenericHeading" style="bold #ffc777"/>
+  <entry type="GenericSubheading" style="bold #ffc777"/>
+  <entry type="GenericOutput" style="#c8d3f5"/>
+  <entry type="GenericPrompt" style="#c8d3f5"/>
+  <entry type="GenericError" style="#c53b53"/>
+  <entry type="GenericTraceback" style="#c53b53"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/tokyonight-night.xml

@@ -0,0 +1,83 @@
+<style name="tokyonight-night">
+  <entry type="Background" style="bg:#1a1b26 #c0caf5"/>
+  <entry type="CodeLine" style="#c0caf5"/>
+  <entry type="Error" style="#db4b4b"/>
+  <entry type="Other" style="#c0caf5"/>
+  <entry type="LineTableTD" style=""/>
+  <entry type="LineTable" style=""/>
+  <entry type="LineHighlight" style="bg:#414868"/>
+  <entry type="LineNumbersTable" style="#a9b1d6"/>
+  <entry type="LineNumbers" style="#a9b1d6"/>
+  <entry type="Keyword" style="#bb9af7"/>
+  <entry type="KeywordReserved" style="#bb9af7"/>
+  <entry type="KeywordPseudo" style="#bb9af7"/>
+  <entry type="KeywordConstant" style="#e0af68"/>
+  <entry type="KeywordDeclaration" style="#9d7cd8"/>
+  <entry type="KeywordNamespace" style="#7dcfff"/>
+  <entry type="KeywordType" style="#41a6b5"/>
+  <entry type="Name" style="#c0caf5"/>
+  <entry type="NameClass" style="#ff9e64"/>
+  <entry type="NameConstant" style="#ff9e64"/>
+  <entry type="NameDecorator" style="bold #7aa2f7"/>
+  <entry type="NameEntity" style="#7dcfff"/>
+  <entry type="NameException" style="#e0af68"/>
+  <entry type="NameFunction" style="#7aa2f7"/>
+  <entry type="NameFunctionMagic" style="#7aa2f7"/>
+  <entry type="NameLabel" style="#9ece6a"/>
+  <entry type="NameNamespace" style="#e0af68"/>
+  <entry type="NameProperty" style="#e0af68"/>
+  <entry type="NameTag" style="#bb9af7"/>
+  <entry type="NameVariable" style="#c0caf5"/>
+  <entry type="NameVariableClass" style="#c0caf5"/>
+  <entry type="NameVariableGlobal" style="#c0caf5"/>
+  <entry type="NameVariableInstance" style="#c0caf5"/>
+  <entry type="NameVariableMagic" style="#c0caf5"/>
+  <entry type="NameAttribute" style="#7aa2f7"/>
+  <entry type="NameBuiltin" style="#9ece6a"/>
+  <entry type="NameBuiltinPseudo" style="#9ece6a"/>
+  <entry type="NameOther" style="#c0caf5"/>
+  <entry type="Literal" style="#c0caf5"/>
+  <entry type="LiteralDate" style="#c0caf5"/>
+  <entry type="LiteralString" style="#9ece6a"/>
+  <entry type="LiteralStringChar" style="#9ece6a"/>
+  <entry type="LiteralStringSingle" style="#9ece6a"/>
+  <entry type="LiteralStringDouble" style="#9ece6a"/>
+  <entry type="LiteralStringBacktick" style="#9ece6a"/>
+  <entry type="LiteralStringOther" style="#9ece6a"/>
+  <entry type="LiteralStringSymbol" style="#9ece6a"/>
+  <entry type="LiteralStringInterpol" style="#9ece6a"/>
+  <entry type="LiteralStringAffix" style="#9d7cd8"/>
+  <entry type="LiteralStringDelimiter" style="#7aa2f7"/>
+  <entry type="LiteralStringEscape" style="#7aa2f7"/>
+  <entry type="LiteralStringRegex" style="#7dcfff"/>
+  <entry type="LiteralStringDoc" style="#414868"/>
+  <entry type="LiteralStringHeredoc" style="#414868"/>
+  <entry type="LiteralNumber" style="#e0af68"/>
+  <entry type="LiteralNumberBin" style="#e0af68"/>
+  <entry type="LiteralNumberHex" style="#e0af68"/>
+  <entry type="LiteralNumberInteger" style="#e0af68"/>
+  <entry type="LiteralNumberFloat" style="#e0af68"/>
+  <entry type="LiteralNumberIntegerLong" style="#e0af68"/>
+  <entry type="LiteralNumberOct" style="#e0af68"/>
+  <entry type="Operator" style="bold #9ece6a"/>
+  <entry type="OperatorWord" style="bold #9ece6a"/>
+  <entry type="Comment" style="italic #414868"/>
+  <entry type="CommentSingle" style="italic #414868"/>
+  <entry type="CommentMultiline" style="italic #414868"/>
+  <entry type="CommentSpecial" style="italic #414868"/>
+  <entry type="CommentHashbang" style="italic #414868"/>
+  <entry type="CommentPreproc" style="italic #414868"/>
+  <entry type="CommentPreprocFile" style="bold #414868"/>
+  <entry type="Generic" style="#c0caf5"/>
+  <entry type="GenericInserted" style="bg:#15161e #9ece6a"/>
+  <entry type="GenericDeleted" style="#db4b4b bg:#15161e"/>
+  <entry type="GenericEmph" style="italic #c0caf5"/>
+  <entry type="GenericStrong" style="bold #c0caf5"/>
+  <entry type="GenericUnderline" style="underline #c0caf5"/>
+  <entry type="GenericHeading" style="bold #e0af68"/>
+  <entry type="GenericSubheading" style="bold #e0af68"/>
+  <entry type="GenericOutput" style="#c0caf5"/>
+  <entry type="GenericPrompt" style="#c0caf5"/>
+  <entry type="GenericError" style="#db4b4b"/>
+  <entry type="GenericTraceback" style="#db4b4b"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/tokyonight-storm.xml

@@ -0,0 +1,83 @@
+<style name="tokyonight-storm">
+  <entry type="Background" style="bg:#1a1b26 #c0caf5"/>
+  <entry type="CodeLine" style="#c0caf5"/>
+  <entry type="Error" style="#db4b4b"/>
+  <entry type="Other" style="#c0caf5"/>
+  <entry type="LineTableTD" style=""/>
+  <entry type="LineTable" style=""/>
+  <entry type="LineHighlight" style="bg:#414868"/>
+  <entry type="LineNumbersTable" style="#a9b1d6"/>
+  <entry type="LineNumbers" style="#a9b1d6"/>
+  <entry type="Keyword" style="#bb9af7"/>
+  <entry type="KeywordReserved" style="#bb9af7"/>
+  <entry type="KeywordPseudo" style="#bb9af7"/>
+  <entry type="KeywordConstant" style="#e0af68"/>
+  <entry type="KeywordDeclaration" style="#9d7cd8"/>
+  <entry type="KeywordNamespace" style="#7dcfff"/>
+  <entry type="KeywordType" style="#41a6b5"/>
+  <entry type="Name" style="#c0caf5"/>
+  <entry type="NameClass" style="#ff9e64"/>
+  <entry type="NameConstant" style="#ff9e64"/>
+  <entry type="NameDecorator" style="bold #7aa2f7"/>
+  <entry type="NameEntity" style="#7dcfff"/>
+  <entry type="NameException" style="#e0af68"/>
+  <entry type="NameFunction" style="#7aa2f7"/>
+  <entry type="NameFunctionMagic" style="#7aa2f7"/>
+  <entry type="NameLabel" style="#9ece6a"/>
+  <entry type="NameNamespace" style="#e0af68"/>
+  <entry type="NameProperty" style="#e0af68"/>
+  <entry type="NameTag" style="#bb9af7"/>
+  <entry type="NameVariable" style="#c0caf5"/>
+  <entry type="NameVariableClass" style="#c0caf5"/>
+  <entry type="NameVariableGlobal" style="#c0caf5"/>
+  <entry type="NameVariableInstance" style="#c0caf5"/>
+  <entry type="NameVariableMagic" style="#c0caf5"/>
+  <entry type="NameAttribute" style="#7aa2f7"/>
+  <entry type="NameBuiltin" style="#9ece6a"/>
+  <entry type="NameBuiltinPseudo" style="#9ece6a"/>
+  <entry type="NameOther" style="#c0caf5"/>
+  <entry type="Literal" style="#c0caf5"/>
+  <entry type="LiteralDate" style="#c0caf5"/>
+  <entry type="LiteralString" style="#9ece6a"/>
+  <entry type="LiteralStringChar" style="#9ece6a"/>
+  <entry type="LiteralStringSingle" style="#9ece6a"/>
+  <entry type="LiteralStringDouble" style="#9ece6a"/>
+  <entry type="LiteralStringBacktick" style="#9ece6a"/>
+  <entry type="LiteralStringOther" style="#9ece6a"/>
+  <entry type="LiteralStringSymbol" style="#9ece6a"/>
+  <entry type="LiteralStringInterpol" style="#9ece6a"/>
+  <entry type="LiteralStringAffix" style="#9d7cd8"/>
+  <entry type="LiteralStringDelimiter" style="#7aa2f7"/>
+  <entry type="LiteralStringEscape" style="#7aa2f7"/>
+  <entry type="LiteralStringRegex" style="#7dcfff"/>
+  <entry type="LiteralStringDoc" style="#414868"/>
+  <entry type="LiteralStringHeredoc" style="#414868"/>
+  <entry type="LiteralNumber" style="#e0af68"/>
+  <entry type="LiteralNumberBin" style="#e0af68"/>
+  <entry type="LiteralNumberHex" style="#e0af68"/>
+  <entry type="LiteralNumberInteger" style="#e0af68"/>
+  <entry type="LiteralNumberFloat" style="#e0af68"/>
+  <entry type="LiteralNumberIntegerLong" style="#e0af68"/>
+  <entry type="LiteralNumberOct" style="#e0af68"/>
+  <entry type="Operator" style="bold #9ece6a"/>
+  <entry type="OperatorWord" style="bold #9ece6a"/>
+  <entry type="Comment" style="italic #414868"/>
+  <entry type="CommentSingle" style="italic #414868"/>
+  <entry type="CommentMultiline" style="italic #414868"/>
+  <entry type="CommentSpecial" style="italic #414868"/>
+  <entry type="CommentHashbang" style="italic #414868"/>
+  <entry type="CommentPreproc" style="italic #414868"/>
+  <entry type="CommentPreprocFile" style="bold #414868"/>
+  <entry type="Generic" style="#c0caf5"/>
+  <entry type="GenericInserted" style="bg:#15161e #9ece6a"/>
+  <entry type="GenericDeleted" style="#db4b4b bg:#15161e"/>
+  <entry type="GenericEmph" style="italic #c0caf5"/>
+  <entry type="GenericStrong" style="bold #c0caf5"/>
+  <entry type="GenericUnderline" style="underline #c0caf5"/>
+  <entry type="GenericHeading" style="bold #e0af68"/>
+  <entry type="GenericSubheading" style="bold #e0af68"/>
+  <entry type="GenericOutput" style="#c0caf5"/>
+  <entry type="GenericPrompt" style="#c0caf5"/>
+  <entry type="GenericError" style="#db4b4b"/>
+  <entry type="GenericTraceback" style="#db4b4b"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/trac.xml 🔗

@@ -0,0 +1,35 @@
+<style name="trac">
+  <entry type="Error" style="#a61717 bg:#e3d2d2"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="bold"/>
+  <entry type="KeywordType" style="#445588"/>
+  <entry type="NameAttribute" style="#008080"/>
+  <entry type="NameBuiltin" style="#999999"/>
+  <entry type="NameClass" style="bold #445588"/>
+  <entry type="NameConstant" style="#008080"/>
+  <entry type="NameEntity" style="#800080"/>
+  <entry type="NameException" style="bold #990000"/>
+  <entry type="NameFunction" style="bold #990000"/>
+  <entry type="NameNamespace" style="#555555"/>
+  <entry type="NameTag" style="#000080"/>
+  <entry type="NameVariable" style="#008080"/>
+  <entry type="LiteralString" style="#bb8844"/>
+  <entry type="LiteralStringRegex" style="#808000"/>
+  <entry type="LiteralNumber" style="#009999"/>
+  <entry type="Operator" style="bold"/>
+  <entry type="Comment" style="italic #999988"/>
+  <entry type="CommentSpecial" style="bold #999999"/>
+  <entry type="CommentPreproc" style="bold noitalic #999999"/>
+  <entry type="GenericDeleted" style="#000000 bg:#ffdddd"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#aa0000"/>
+  <entry type="GenericHeading" style="#999999"/>
+  <entry type="GenericInserted" style="#000000 bg:#ddffdd"/>
+  <entry type="GenericOutput" style="#888888"/>
+  <entry type="GenericPrompt" style="#555555"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#aaaaaa"/>
+  <entry type="GenericTraceback" style="#aa0000"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="TextWhitespace" style="#bbbbbb"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/vim.xml 🔗

@@ -0,0 +1,29 @@
+<style name="vim">
+  <entry type="Error" style="border:#ff0000"/>
+  <entry type="Background" style="#cccccc bg:#000000"/>
+  <entry type="Keyword" style="#cdcd00"/>
+  <entry type="KeywordDeclaration" style="#00cd00"/>
+  <entry type="KeywordNamespace" style="#cd00cd"/>
+  <entry type="KeywordType" style="#00cd00"/>
+  <entry type="NameBuiltin" style="#cd00cd"/>
+  <entry type="NameClass" style="#00cdcd"/>
+  <entry type="NameException" style="bold #666699"/>
+  <entry type="NameVariable" style="#00cdcd"/>
+  <entry type="LiteralString" style="#cd0000"/>
+  <entry type="LiteralNumber" style="#cd00cd"/>
+  <entry type="Operator" style="#3399cc"/>
+  <entry type="OperatorWord" style="#cdcd00"/>
+  <entry type="Comment" style="#000080"/>
+  <entry type="CommentSpecial" style="bold #cd0000"/>
+  <entry type="GenericDeleted" style="#cd0000"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericError" style="#ff0000"/>
+  <entry type="GenericHeading" style="bold #000080"/>
+  <entry type="GenericInserted" style="#00cd00"/>
+  <entry type="GenericOutput" style="#888888"/>
+  <entry type="GenericPrompt" style="bold #000080"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold #800080"/>
+  <entry type="GenericTraceback" style="#0044dd"/>
+  <entry type="GenericUnderline" style="underline"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/vs.xml 🔗

@@ -0,0 +1,16 @@
+<style name="vs">
+  <entry type="Error" style="border:#ff0000"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="#0000ff"/>
+  <entry type="KeywordType" style="#2b91af"/>
+  <entry type="NameClass" style="#2b91af"/>
+  <entry type="LiteralString" style="#a31515"/>
+  <entry type="OperatorWord" style="#0000ff"/>
+  <entry type="Comment" style="#008000"/>
+  <entry type="CommentPreproc" style="#0000ff"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericHeading" style="bold"/>
+  <entry type="GenericPrompt" style="bold"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="bold"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/vulcan.xml 🔗

@@ -0,0 +1,74 @@
+<style name="vulcan">
+  <entry type="Other" style="#c9c9c9"/>
+  <entry type="Error" style="#cf5967"/>
+  <entry type="Background" style="bg:#282c34"/>
+  <entry type="Keyword" style="#7fbaf5"/>
+  <entry type="KeywordConstant" style="#cf5967 bg:#43454f"/>
+  <entry type="KeywordDeclaration" style="#7fbaf5"/>
+  <entry type="KeywordNamespace" style="#bc74c4"/>
+  <entry type="KeywordPseudo" style="#bc74c4"/>
+  <entry type="KeywordReserved" style="#7fbaf5"/>
+  <entry type="KeywordType" style="bold #57c7ff"/>
+  <entry type="Name" style="#c9c9c9"/>
+  <entry type="NameAttribute" style="#bc74c4"/>
+  <entry type="NameBuiltin" style="#7fbaf5"/>
+  <entry type="NameBuiltinPseudo" style="#7fbaf5"/>
+  <entry type="NameClass" style="#ecbe7b"/>
+  <entry type="NameConstant" style="#ecbe7b"/>
+  <entry type="NameDecorator" style="#ecbe7b"/>
+  <entry type="NameEntity" style="#c9c9c9"/>
+  <entry type="NameException" style="#cf5967"/>
+  <entry type="NameFunction" style="#57c7ff"/>
+  <entry type="NameLabel" style="#cf5967"/>
+  <entry type="NameNamespace" style="#c9c9c9"/>
+  <entry type="NameOther" style="#c9c9c9"/>
+  <entry type="NameTag" style="#bc74c4"/>
+  <entry type="NameVariable" style="italic #bc74c4"/>
+  <entry type="NameVariableClass" style="bold #57c7ff"/>
+  <entry type="NameVariableGlobal" style="#ecbe7b"/>
+  <entry type="NameVariableInstance" style="#57c7ff"/>
+  <entry type="Literal" style="#c9c9c9"/>
+  <entry type="LiteralDate" style="#57c7ff"/>
+  <entry type="LiteralString" style="#82cc6a"/>
+  <entry type="LiteralStringBacktick" style="#57c7ff"/>
+  <entry type="LiteralStringChar" style="#57c7ff"/>
+  <entry type="LiteralStringDoc" style="#82cc6a"/>
+  <entry type="LiteralStringDouble" style="#82cc6a"/>
+  <entry type="LiteralStringEscape" style="#56b6c2"/>
+  <entry type="LiteralStringHeredoc" style="#56b6c2"/>
+  <entry type="LiteralStringInterpol" style="#82cc6a"/>
+  <entry type="LiteralStringOther" style="#82cc6a"/>
+  <entry type="LiteralStringRegex" style="#57c7ff"/>
+  <entry type="LiteralStringSingle" style="#82cc6a"/>
+  <entry type="LiteralStringSymbol" style="#82cc6a"/>
+  <entry type="LiteralNumber" style="#56b6c2"/>
+  <entry type="LiteralNumberBin" style="#57c7ff"/>
+  <entry type="LiteralNumberFloat" style="#56b6c2"/>
+  <entry type="LiteralNumberHex" style="#57c7ff"/>
+  <entry type="LiteralNumberInteger" style="#56b6c2"/>
+  <entry type="LiteralNumberIntegerLong" style="#56b6c2"/>
+  <entry type="LiteralNumberOct" style="#57c7ff"/>
+  <entry type="Operator" style="#bc74c4"/>
+  <entry type="OperatorWord" style="#bc74c4"/>
+  <entry type="Punctuation" style="#56b6c2"/>
+  <entry type="Comment" style="#3e4460"/>
+  <entry type="CommentHashbang" style="italic #3e4460"/>
+  <entry type="CommentMultiline" style="#3e4460"/>
+  <entry type="CommentSingle" style="#3e4460"/>
+  <entry type="CommentSpecial" style="italic #bc74c4"/>
+  <entry type="CommentPreproc" style="#7fbaf5"/>
+  <entry type="Generic" style="#c9c9c9"/>
+  <entry type="GenericDeleted" style="#cf5967"/>
+  <entry type="GenericEmph" style="underline #c9c9c9"/>
+  <entry type="GenericError" style="bold #cf5967"/>
+  <entry type="GenericHeading" style="bold #ecbe7b"/>
+  <entry type="GenericInserted" style="#ecbe7b"/>
+  <entry type="GenericOutput" style="#43454f"/>
+  <entry type="GenericPrompt" style="#c9c9c9"/>
+  <entry type="GenericStrong" style="bold #cf5967"/>
+  <entry type="GenericSubheading" style="italic #cf5967"/>
+  <entry type="GenericTraceback" style="#c9c9c9"/>
+  <entry type="GenericUnderline" style="underline"/>
+  <entry type="Text" style="#c9c9c9"/>
+  <entry type="TextWhitespace" style="#c9c9c9"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/witchhazel.xml 🔗

@@ -0,0 +1,31 @@
+<style name="witchhazel">
+  <entry type="Error" style="#960050 bg:#1e0010"/>
+  <entry type="Background" style="bg:#433e56"/>
+  <entry type="Keyword" style="#c2ffdf"/>
+  <entry type="KeywordNamespace" style="#ffb8d1"/>
+  <entry type="Name" style="#f8f8f2"/>
+  <entry type="NameAttribute" style="#ceb1ff"/>
+  <entry type="NameBuiltinPseudo" style="#80cbc4"/>
+  <entry type="NameClass" style="#ceb1ff"/>
+  <entry type="NameConstant" style="#c5a3ff"/>
+  <entry type="NameDecorator" style="#ceb1ff"/>
+  <entry type="NameException" style="#ceb1ff"/>
+  <entry type="NameFunction" style="#ceb1ff"/>
+  <entry type="NameProperty" style="#f8f8f2"/>
+  <entry type="NameTag" style="#ffb8d1"/>
+  <entry type="NameVariable" style="#f8f8f2"/>
+  <entry type="Literal" style="#ae81ff"/>
+  <entry type="LiteralDate" style="#e6db74"/>
+  <entry type="LiteralString" style="#1bc5e0"/>
+  <entry type="LiteralNumber" style="#c5a3ff"/>
+  <entry type="Operator" style="#ffb8d1"/>
+  <entry type="Punctuation" style="#f8f8f2"/>
+  <entry type="Comment" style="#b0bec5"/>
+  <entry type="GenericDeleted" style="#f92672"/>
+  <entry type="GenericEmph" style="italic"/>
+  <entry type="GenericInserted" style="#a6e22e"/>
+  <entry type="GenericStrong" style="bold"/>
+  <entry type="GenericSubheading" style="#75715e"/>
+  <entry type="Text" style="#f8f8f2"/>
+  <entry type="TextWhitespace" style="#a8757b"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/xcode-dark.xml 🔗

@@ -0,0 +1,31 @@
+<style name="xcode-dark">
+  <entry type="Error" style="#960050"/>
+  <entry type="Background" style="#ffffff bg:#1f1f24"/>
+  <entry type="Keyword" style="#fc5fa3"/>
+  <entry type="KeywordConstant" style="#fc5fa3"/>
+  <entry type="KeywordDeclaration" style="#fc5fa3"/>
+  <entry type="KeywordReserved" style="#fc5fa3"/>
+  <entry type="Name" style="#ffffff"/>
+  <entry type="NameBuiltin" style="#d0a8ff"/>
+  <entry type="NameBuiltinPseudo" style="#a167e6"/>
+  <entry type="NameClass" style="#5dd8ff"/>
+  <entry type="NameFunction" style="#41a1c0"/>
+  <entry type="NameVariable" style="#41a1c0"/>
+  <entry type="LiteralString" style="#fc6a5d"/>
+  <entry type="LiteralStringEscape" style="#fc6a5d"/>
+  <entry type="LiteralStringInterpol" style="#ffffff"/>
+  <entry type="LiteralNumber" style="#d0bf69"/>
+  <entry type="LiteralNumberBin" style="#d0bf69"/>
+  <entry type="LiteralNumberFloat" style="#d0bf69"/>
+  <entry type="LiteralNumberHex" style="#d0bf69"/>
+  <entry type="LiteralNumberInteger" style="#d0bf69"/>
+  <entry type="LiteralNumberOct" style="#d0bf69"/>
+  <entry type="Operator" style="#ffffff"/>
+  <entry type="Punctuation" style="#ffffff"/>
+  <entry type="Comment" style="#6c7986"/>
+  <entry type="CommentMultiline" style="#6c7986"/>
+  <entry type="CommentSingle" style="#6c7986"/>
+  <entry type="CommentSpecial" style="italic #6c7986"/>
+  <entry type="CommentPreproc" style="#fd8f3f"/>
+  <entry type="Text" style="#ffffff"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/styles/xcode.xml 🔗

@@ -0,0 +1,22 @@
+<style name="xcode">
+  <entry type="Error" style="#000000"/>
+  <entry type="Background" style="bg:#ffffff"/>
+  <entry type="Keyword" style="#a90d91"/>
+  <entry type="Name" style="#000000"/>
+  <entry type="NameAttribute" style="#836c28"/>
+  <entry type="NameBuiltin" style="#a90d91"/>
+  <entry type="NameBuiltinPseudo" style="#5b269a"/>
+  <entry type="NameClass" style="#3f6e75"/>
+  <entry type="NameDecorator" style="#000000"/>
+  <entry type="NameFunction" style="#000000"/>
+  <entry type="NameLabel" style="#000000"/>
+  <entry type="NameTag" style="#000000"/>
+  <entry type="NameVariable" style="#000000"/>
+  <entry type="Literal" style="#1c01ce"/>
+  <entry type="LiteralString" style="#c41a16"/>
+  <entry type="LiteralStringChar" style="#2300ce"/>
+  <entry type="LiteralNumber" style="#1c01ce"/>
+  <entry type="Operator" style="#000000"/>
+  <entry type="Comment" style="#177500"/>
+  <entry type="CommentPreproc" style="#633820"/>
+</style>

vendor/github.com/alecthomas/chroma/v2/table.py 🔗

@@ -0,0 +1,31 @@
+#!/usr/bin/env python3
+import re
+from collections import defaultdict
+from subprocess import check_output
+
+README_FILE = "README.md"
+
+lines = check_output(["chroma", "--list"]).decode("utf-8").splitlines()
+lines = [line.strip() for line in lines if line.startswith("  ") and not line.startswith("   ")]
+lines = sorted(lines, key=lambda l: l.lower())
+
+table = defaultdict(list)
+
+for line in lines:
+    table[line[0].upper()].append(line)
+
+rows = []
+for key, value in table.items():
+    rows.append("{} | {}".format(key, ", ".join(value)))
+tbody = "\n".join(rows)
+
+with open(README_FILE, "r") as f:
+    content = f.read()
+
+with open(README_FILE, "w") as f:
+    marker = re.compile(r"(?P<start>:----: \\| --------\n).*?(?P<end>\n\n)", re.DOTALL)
+    replacement = r"\g<start>%s\g<end>" % tbody
+    updated_content = marker.sub(replacement, content)
+    f.write(updated_content)
+
+print(tbody)

vendor/github.com/alecthomas/chroma/v2/types.go 🔗

@@ -0,0 +1,343 @@
+package chroma
+
+//go:generate enumer -text -type TokenType
+
+// TokenType is the type of token to highlight.
+//
+// It is also an Emitter, emitting a single token of itself.
+type TokenType int
+
+// Set of TokenTypes.
+//
+// Categories of types are grouped in ranges of 1000, while sub-categories are in ranges of 100. For
+// example, the literal category is in the range 3000-3999. The sub-category for literal strings is
+// in the range 3100-3199.
+
+// Meta token types.
+const (
+	// Default background style.
+	Background TokenType = -1 - iota
+	// PreWrapper style.
+	PreWrapper
+	// Line style.
+	Line
+	// Line numbers in output.
+	LineNumbers
+	// Line numbers in output when in table.
+	LineNumbersTable
+	// Line highlight style.
+	LineHighlight
+	// Line numbers table wrapper style.
+	LineTable
+	// Line numbers table TD wrapper style.
+	LineTableTD
+	// Line number links.
+	LineLink
+	// Code line wrapper style.
+	CodeLine
+	// Input that could not be tokenised.
+	Error
+	// Other is used by the Delegate lexer to indicate which tokens should be handled by the delegate.
+	Other
+	// No highlighting.
+	None
+	// Don't emit this token to the output.
+	Ignore
+	// Used as an EOF marker / nil token
+	EOFType TokenType = 0
+)
+
+// Keywords.
+const (
+	Keyword TokenType = 1000 + iota
+	KeywordConstant
+	KeywordDeclaration
+	KeywordNamespace
+	KeywordPseudo
+	KeywordReserved
+	KeywordType
+)
+
+// Names.
+const (
+	Name TokenType = 2000 + iota
+	NameAttribute
+	NameBuiltin
+	NameBuiltinPseudo
+	NameClass
+	NameConstant
+	NameDecorator
+	NameEntity
+	NameException
+	NameFunction
+	NameFunctionMagic
+	NameKeyword
+	NameLabel
+	NameNamespace
+	NameOperator
+	NameOther
+	NamePseudo
+	NameProperty
+	NameTag
+	NameVariable
+	NameVariableAnonymous
+	NameVariableClass
+	NameVariableGlobal
+	NameVariableInstance
+	NameVariableMagic
+)
+
+// Literals.
+const (
+	Literal TokenType = 3000 + iota
+	LiteralDate
+	LiteralOther
+)
+
+// Strings.
+const (
+	LiteralString TokenType = 3100 + iota
+	LiteralStringAffix
+	LiteralStringAtom
+	LiteralStringBacktick
+	LiteralStringBoolean
+	LiteralStringChar
+	LiteralStringDelimiter
+	LiteralStringDoc
+	LiteralStringDouble
+	LiteralStringEscape
+	LiteralStringHeredoc
+	LiteralStringInterpol
+	LiteralStringName
+	LiteralStringOther
+	LiteralStringRegex
+	LiteralStringSingle
+	LiteralStringSymbol
+)
+
+// Numbers.
+const (
+	LiteralNumber TokenType = 3200 + iota
+	LiteralNumberBin
+	LiteralNumberFloat
+	LiteralNumberHex
+	LiteralNumberInteger
+	LiteralNumberIntegerLong
+	LiteralNumberOct
+	LiteralNumberByte
+)
+
+// Operators.
+const (
+	Operator TokenType = 4000 + iota
+	OperatorWord
+)
+
+// Punctuation.
+const (
+	Punctuation TokenType = 5000 + iota
+)
+
+// Comments.
+const (
+	Comment TokenType = 6000 + iota
+	CommentHashbang
+	CommentMultiline
+	CommentSingle
+	CommentSpecial
+)
+
+// Preprocessor "comments".
+const (
+	CommentPreproc TokenType = 6100 + iota
+	CommentPreprocFile
+)
+
+// Generic tokens.
+const (
+	Generic TokenType = 7000 + iota
+	GenericDeleted
+	GenericEmph
+	GenericError
+	GenericHeading
+	GenericInserted
+	GenericOutput
+	GenericPrompt
+	GenericStrong
+	GenericSubheading
+	GenericTraceback
+	GenericUnderline
+)
+
+// Text.
+const (
+	Text TokenType = 8000 + iota
+	TextWhitespace
+	TextSymbol
+	TextPunctuation
+)
+
+// Aliases.
+const (
+	Whitespace = TextWhitespace
+
+	Date = LiteralDate
+
+	String          = LiteralString
+	StringAffix     = LiteralStringAffix
+	StringBacktick  = LiteralStringBacktick
+	StringChar      = LiteralStringChar
+	StringDelimiter = LiteralStringDelimiter
+	StringDoc       = LiteralStringDoc
+	StringDouble    = LiteralStringDouble
+	StringEscape    = LiteralStringEscape
+	StringHeredoc   = LiteralStringHeredoc
+	StringInterpol  = LiteralStringInterpol
+	StringOther     = LiteralStringOther
+	StringRegex     = LiteralStringRegex
+	StringSingle    = LiteralStringSingle
+	StringSymbol    = LiteralStringSymbol
+
+	Number            = LiteralNumber
+	NumberBin         = LiteralNumberBin
+	NumberFloat       = LiteralNumberFloat
+	NumberHex         = LiteralNumberHex
+	NumberInteger     = LiteralNumberInteger
+	NumberIntegerLong = LiteralNumberIntegerLong
+	NumberOct         = LiteralNumberOct
+)
+
+var (
+	StandardTypes = map[TokenType]string{
+		Background:       "bg",
+		PreWrapper:       "chroma",
+		Line:             "line",
+		LineNumbers:      "ln",
+		LineNumbersTable: "lnt",
+		LineHighlight:    "hl",
+		LineTable:        "lntable",
+		LineTableTD:      "lntd",
+		LineLink:         "lnlinks",
+		CodeLine:         "cl",
+		Text:             "",
+		Whitespace:       "w",
+		Error:            "err",
+		Other:            "x",
+		// I have no idea what this is used for...
+		// Escape:     "esc",
+
+		Keyword:            "k",
+		KeywordConstant:    "kc",
+		KeywordDeclaration: "kd",
+		KeywordNamespace:   "kn",
+		KeywordPseudo:      "kp",
+		KeywordReserved:    "kr",
+		KeywordType:        "kt",
+
+		Name:                 "n",
+		NameAttribute:        "na",
+		NameBuiltin:          "nb",
+		NameBuiltinPseudo:    "bp",
+		NameClass:            "nc",
+		NameConstant:         "no",
+		NameDecorator:        "nd",
+		NameEntity:           "ni",
+		NameException:        "ne",
+		NameFunction:         "nf",
+		NameFunctionMagic:    "fm",
+		NameProperty:         "py",
+		NameLabel:            "nl",
+		NameNamespace:        "nn",
+		NameOther:            "nx",
+		NameTag:              "nt",
+		NameVariable:         "nv",
+		NameVariableClass:    "vc",
+		NameVariableGlobal:   "vg",
+		NameVariableInstance: "vi",
+		NameVariableMagic:    "vm",
+
+		Literal:     "l",
+		LiteralDate: "ld",
+
+		String:          "s",
+		StringAffix:     "sa",
+		StringBacktick:  "sb",
+		StringChar:      "sc",
+		StringDelimiter: "dl",
+		StringDoc:       "sd",
+		StringDouble:    "s2",
+		StringEscape:    "se",
+		StringHeredoc:   "sh",
+		StringInterpol:  "si",
+		StringOther:     "sx",
+		StringRegex:     "sr",
+		StringSingle:    "s1",
+		StringSymbol:    "ss",
+
+		Number:            "m",
+		NumberBin:         "mb",
+		NumberFloat:       "mf",
+		NumberHex:         "mh",
+		NumberInteger:     "mi",
+		NumberIntegerLong: "il",
+		NumberOct:         "mo",
+
+		Operator:     "o",
+		OperatorWord: "ow",
+
+		Punctuation: "p",
+
+		Comment:            "c",
+		CommentHashbang:    "ch",
+		CommentMultiline:   "cm",
+		CommentPreproc:     "cp",
+		CommentPreprocFile: "cpf",
+		CommentSingle:      "c1",
+		CommentSpecial:     "cs",
+
+		Generic:           "g",
+		GenericDeleted:    "gd",
+		GenericEmph:       "ge",
+		GenericError:      "gr",
+		GenericHeading:    "gh",
+		GenericInserted:   "gi",
+		GenericOutput:     "go",
+		GenericPrompt:     "gp",
+		GenericStrong:     "gs",
+		GenericSubheading: "gu",
+		GenericTraceback:  "gt",
+		GenericUnderline:  "gl",
+	}
+)
+
+func (t TokenType) Parent() TokenType {
+	if t%100 != 0 {
+		return t / 100 * 100
+	}
+	if t%1000 != 0 {
+		return t / 1000 * 1000
+	}
+	return 0
+}
+
+func (t TokenType) Category() TokenType {
+	return t / 1000 * 1000
+}
+
+func (t TokenType) SubCategory() TokenType {
+	return t / 100 * 100
+}
+
+func (t TokenType) InCategory(other TokenType) bool {
+	return t/1000 == other/1000
+}
+
+func (t TokenType) InSubCategory(other TokenType) bool {
+	return t/100 == other/100
+}
+
+func (t TokenType) Emit(groups []string, _ *LexerState) Iterator {
+	return Literator(Token{Type: t, Value: groups[0]})
+}
+
+func (t TokenType) EmitterKind() string { return "token" }

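The `types.go` doc comment above describes how token categories live in ranges of 1000 and sub-categories in ranges of 100, with `Parent()` walking up that hierarchy via integer division. A minimal standalone sketch of that arithmetic (hypothetical reproduction for illustration, not the vendored chroma API itself; constant values mirror the iota blocks above):

```go
package main

import "fmt"

// TokenType mirrors chroma's range-based encoding: thousands encode
// the category, hundreds encode the sub-category.
type TokenType int

const (
	Literal             TokenType = 3000 // category
	LiteralString       TokenType = 3100 // sub-category of Literal
	LiteralStringDouble TokenType = 3108 // leaf type within LiteralString
)

// Parent returns the sub-category of a leaf type, the category of a
// sub-category, and 0 (the root) for a category, matching the vendored
// implementation.
func Parent(t TokenType) TokenType {
	if t%100 != 0 {
		return t / 100 * 100
	}
	if t%1000 != 0 {
		return t / 1000 * 1000
	}
	return 0
}

func main() {
	fmt.Println(Parent(LiteralStringDouble)) // 3100 (LiteralString)
	fmt.Println(Parent(LiteralString))       // 3000 (Literal)
	fmt.Println(Parent(Literal))             // 0 (root)
}
```

This is why style entries like `LiteralStringDouble` fall back to the `LiteralString` and then `Literal` colors when a theme leaves them unset.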
vendor/github.com/andybalholm/cascadia/LICENSE 🔗

@@ -0,0 +1,24 @@
+Copyright (c) 2011 Andy Balholm. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+   * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+   * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/andybalholm/cascadia/README.md 🔗

@@ -0,0 +1,144 @@
+# cascadia
+
+[![](https://travis-ci.org/andybalholm/cascadia.svg)](https://travis-ci.org/andybalholm/cascadia)
+
+The Cascadia package implements CSS selectors for use with the parse trees produced by the html package.
+
+To test CSS selectors without writing Go code, check out [cascadia](https://github.com/suntong/cascadia) the command line tool, a thin wrapper around this package.
+
+[Refer to godoc here](https://godoc.org/github.com/andybalholm/cascadia).
+
+## Example
+
+The following is an example of how you can use Cascadia.
+
+```go
+package main
+
+import (
+	"fmt"
+	"log"
+	"strings"
+
+	"github.com/andybalholm/cascadia"
+	"golang.org/x/net/html"
+)
+
+var pricingHtml string = `
+<div class="card mb-4 box-shadow">
+	<div class="card-header">
+		<h4 class="my-0 font-weight-normal">Free</h4>
+	</div>
+	<div class="card-body">
+		<h1 class="card-title pricing-card-title">$0/mo</h1>
+		<ul class="list-unstyled mt-3 mb-4">
+			<li>10 users included</li>
+			<li>2 GB of storage</li>
+			<li><a href="https://example.com">See more</a></li>
+		</ul>
+	</div>
+</div>
+
+<div class="card mb-4 box-shadow">
+	<div class="card-header">
+		<h4 class="my-0 font-weight-normal">Pro</h4>
+	</div>
+	<div class="card-body">
+		<h1 class="card-title pricing-card-title">$15/mo</h1>
+		<ul class="list-unstyled mt-3 mb-4">
+			<li>20 users included</li>
+			<li>10 GB of storage</li>
+			<li><a href="https://example.com">See more</a></li>
+		</ul>
+	</div>
+</div>
+
+<div class="card mb-4 box-shadow">
+	<div class="card-header">
+		<h4 class="my-0 font-weight-normal">Enterprise</h4>
+	</div>
+	<div class="card-body">
+		<h1 class="card-title pricing-card-title">$29/mo</h1>
+		<ul class="list-unstyled mt-3 mb-4">
+			<li>30 users included</li>
+			<li>15 GB of storage</li>
+			<li><a>See more</a></li>
+		</ul>
+	</div>
+</div>
+`
+
+func Query(n *html.Node, query string) *html.Node {
+	sel, err := cascadia.Parse(query)
+	if err != nil {
+		return &html.Node{}
+	}
+	return cascadia.Query(n, sel)
+}
+
+func QueryAll(n *html.Node, query string) []*html.Node {
+	sel, err := cascadia.Parse(query)
+	if err != nil {
+		return []*html.Node{}
+	}
+	return cascadia.QueryAll(n, sel)
+}
+
+func AttrOr(n *html.Node, attrName, or string) string {
+	for _, a := range n.Attr {
+		if a.Key == attrName {
+			return a.Val
+		}
+	}
+	return or
+}
+
+func main() {
+	doc, err := html.Parse(strings.NewReader(pricingHtml))
+	if err != nil {
+		log.Fatal(err)
+	}
+	fmt.Printf("List of pricing plans:\n\n")
+	for i, p := range QueryAll(doc, "div.card.mb-4.box-shadow") {
+		planName := Query(p, "h4").FirstChild.Data
+		price := Query(p, ".pricing-card-title").FirstChild.Data
+		usersIncluded := Query(p, "li:first-child").FirstChild.Data
+		storage := Query(p, "li:nth-child(2)").FirstChild.Data
+		detailsUrl := AttrOr(Query(p, "li:last-child a"), "href", "(No link available)")
+		fmt.Printf(
+			"Plan #%d\nName: %s\nPrice: %s\nUsers: %s\nStorage: %s\nDetails: %s\n\n",
+			i+1,
+			planName,
+			price,
+			usersIncluded,
+			storage,
+			detailsUrl,
+		)
+	}
+}
+```
+The output is:
+```
+List of pricing plans:
+
+Plan #1
+Name: Free
+Price: $0/mo
+Users: 10 users included
+Storage: 2 GB of storage
+Details: https://example.com
+
+Plan #2
+Name: Pro
+Price: $15/mo
+Users: 20 users included
+Storage: 10 GB of storage
+Details: https://example.com
+
+Plan #3
+Name: Enterprise
+Price: $29/mo
+Users: 30 users included
+Storage: 15 GB of storage
+Details: (No link available)
+```

vendor/github.com/andybalholm/cascadia/parser.go 🔗

@@ -0,0 +1,889 @@
+// Package cascadia is an implementation of CSS selectors.
+package cascadia
+
+import (
+	"errors"
+	"fmt"
+	"regexp"
+	"strconv"
+	"strings"
+)
+
+// a parser for CSS selectors
+type parser struct {
+	s string // the source text
+	i int    // the current position
+
+	// if `false`, parsing a pseudo-element
+	// returns an error.
+	acceptPseudoElements bool
+}
+
+// parseEscape parses a backslash escape.
+func (p *parser) parseEscape() (result string, err error) {
+	if len(p.s) < p.i+2 || p.s[p.i] != '\\' {
+		return "", errors.New("invalid escape sequence")
+	}
+
+	start := p.i + 1
+	c := p.s[start]
+	switch {
+	case c == '\r' || c == '\n' || c == '\f':
+		return "", errors.New("escaped line ending outside string")
+	case hexDigit(c):
+		// unicode escape (hex)
+		var i int
+		for i = start; i < start+6 && i < len(p.s) && hexDigit(p.s[i]); i++ {
+			// empty
+		}
+		v, _ := strconv.ParseUint(p.s[start:i], 16, 64)
+		if len(p.s) > i {
+			switch p.s[i] {
+			case '\r':
+				i++
+				if len(p.s) > i && p.s[i] == '\n' {
+					i++
+				}
+			case ' ', '\t', '\n', '\f':
+				i++
+			}
+		}
+		p.i = i
+		return string(rune(v)), nil
+	}
+
+	// Return the literal character after the backslash.
+	result = p.s[start : start+1]
+	p.i += 2
+	return result, nil
+}
+
+// toLowerASCII returns s with all ASCII capital letters lowercased.
+func toLowerASCII(s string) string {
+	var b []byte
+	for i := 0; i < len(s); i++ {
+		if c := s[i]; 'A' <= c && c <= 'Z' {
+			if b == nil {
+				b = make([]byte, len(s))
+				copy(b, s)
+			}
+			b[i] = s[i] + ('a' - 'A')
+		}
+	}
+
+	if b == nil {
+		return s
+	}
+
+	return string(b)
+}
+
+func hexDigit(c byte) bool {
+	return '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F'
+}
+
+// nameStart returns whether c can be the first character of an identifier
+// (not counting an initial hyphen, or an escape sequence).
+func nameStart(c byte) bool {
+	return 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || c == '_' || c > 127
+}
+
+// nameChar returns whether c can be a character within an identifier
+// (not counting an escape sequence).
+func nameChar(c byte) bool {
+	return 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || c == '_' || c > 127 ||
+		c == '-' || '0' <= c && c <= '9'
+}
+
+// parseIdentifier parses an identifier.
+func (p *parser) parseIdentifier() (result string, err error) {
+	const prefix = '-'
+	var numPrefix int
+
+	for len(p.s) > p.i && p.s[p.i] == prefix {
+		p.i++
+		numPrefix++
+	}
+
+	if len(p.s) <= p.i {
+		return "", errors.New("expected identifier, found EOF instead")
+	}
+
+	if c := p.s[p.i]; !(nameStart(c) || c == '\\') {
+		return "", fmt.Errorf("expected identifier, found %c instead", c)
+	}
+
+	result, err = p.parseName()
+	if numPrefix > 0 && err == nil {
+		result = strings.Repeat(string(prefix), numPrefix) + result
+	}
+	return
+}
+
+// parseName parses a name (which is like an identifier, but doesn't have
+// extra restrictions on the first character).
+func (p *parser) parseName() (result string, err error) {
+	i := p.i
+loop:
+	for i < len(p.s) {
+		c := p.s[i]
+		switch {
+		case nameChar(c):
+			start := i
+			for i < len(p.s) && nameChar(p.s[i]) {
+				i++
+			}
+			result += p.s[start:i]
+		case c == '\\':
+			p.i = i
+			val, err := p.parseEscape()
+			if err != nil {
+				return "", err
+			}
+			i = p.i
+			result += val
+		default:
+			break loop
+		}
+	}
+
+	if result == "" {
+		return "", errors.New("expected name, found EOF instead")
+	}
+
+	p.i = i
+	return result, nil
+}
+
+// parseString parses a single- or double-quoted string.
+func (p *parser) parseString() (result string, err error) {
+	i := p.i
+	if len(p.s) < i+2 {
+		return "", errors.New("expected string, found EOF instead")
+	}
+
+	quote := p.s[i]
+	i++
+
+loop:
+	for i < len(p.s) {
+		switch p.s[i] {
+		case '\\':
+			if len(p.s) > i+1 {
+				switch c := p.s[i+1]; c {
+				case '\r':
+					if len(p.s) > i+2 && p.s[i+2] == '\n' {
+						i += 3
+						continue loop
+					}
+					fallthrough
+				case '\n', '\f':
+					i += 2
+					continue loop
+				}
+			}
+			p.i = i
+			val, err := p.parseEscape()
+			if err != nil {
+				return "", err
+			}
+			i = p.i
+			result += val
+		case quote:
+			break loop
+		case '\r', '\n', '\f':
+			return "", errors.New("unexpected end of line in string")
+		default:
+			start := i
+			for i < len(p.s) {
+				if c := p.s[i]; c == quote || c == '\\' || c == '\r' || c == '\n' || c == '\f' {
+					break
+				}
+				i++
+			}
+			result += p.s[start:i]
+		}
+	}
+
+	if i >= len(p.s) {
+		return "", errors.New("EOF in string")
+	}
+
+	// Consume the final quote.
+	i++
+
+	p.i = i
+	return result, nil
+}
+
+// parseRegex parses a regular expression; the end is defined by encountering
+// an unmatched closing ')' or ']', which is not consumed.
+func (p *parser) parseRegex() (rx *regexp.Regexp, err error) {
+	i := p.i
+	if len(p.s) < i+2 {
+		return nil, errors.New("expected regular expression, found EOF instead")
+	}
+
+	// number of open parens or brackets;
+	// when it becomes negative, finished parsing regex
+	open := 0
+
+loop:
+	for i < len(p.s) {
+		switch p.s[i] {
+		case '(', '[':
+			open++
+		case ')', ']':
+			open--
+			if open < 0 {
+				break loop
+			}
+		}
+		i++
+	}
+
+	if i >= len(p.s) {
+		return nil, errors.New("EOF in regular expression")
+	}
+	rx, err = regexp.Compile(p.s[p.i:i])
+	p.i = i
+	return rx, err
+}
+
+// skipWhitespace consumes whitespace characters and comments.
+// It returns true if there was actually anything to skip.
+func (p *parser) skipWhitespace() bool {
+	i := p.i
+	for i < len(p.s) {
+		switch p.s[i] {
+		case ' ', '\t', '\r', '\n', '\f':
+			i++
+			continue
+		case '/':
+			if strings.HasPrefix(p.s[i:], "/*") {
+				end := strings.Index(p.s[i+len("/*"):], "*/")
+				if end != -1 {
+					i += end + len("/**/")
+					continue
+				}
+			}
+		}
+		break
+	}
+
+	if i > p.i {
+		p.i = i
+		return true
+	}
+
+	return false
+}
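The index arithmetic in the comment-skipping branch is subtle: `end` is an offset into the text *after* the opening `/*`, so advancing by `end + len("/**/")` steps over the comment body plus both delimiters. A small standalone sketch of just that calculation:

```go
package main

import (
	"fmt"
	"strings"
)

// skipComment mirrors the arithmetic in skipWhitespace: end is relative to
// the text following "/*", so end + len("/**/") lands just past "*/".
// It returns i unchanged if no terminated comment starts at i.
func skipComment(s string, i int) int {
	if strings.HasPrefix(s[i:], "/*") {
		end := strings.Index(s[i+len("/*"):], "*/")
		if end != -1 {
			return i + end + len("/**/")
		}
	}
	return i
}

func main() {
	s := "/* note */div"
	fmt.Println(s[skipComment(s, 0):]) // div
}
```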
+
+// consumeParenthesis consumes an opening parenthesis and any following
+// whitespace. It returns true if there was actually a parenthesis to skip.
+func (p *parser) consumeParenthesis() bool {
+	if p.i < len(p.s) && p.s[p.i] == '(' {
+		p.i++
+		p.skipWhitespace()
+		return true
+	}
+	return false
+}
+
+// consumeClosingParenthesis consumes a closing parenthesis and any preceding
+// whitespace. It returns true if there was actually a parenthesis to skip.
+func (p *parser) consumeClosingParenthesis() bool {
+	i := p.i
+	p.skipWhitespace()
+	if p.i < len(p.s) && p.s[p.i] == ')' {
+		p.i++
+		return true
+	}
+	p.i = i
+	return false
+}
+
+// parseTypeSelector parses a type selector (one that matches by tag name).
+func (p *parser) parseTypeSelector() (result tagSelector, err error) {
+	tag, err := p.parseIdentifier()
+	if err != nil {
+		return
+	}
+	return tagSelector{tag: toLowerASCII(tag)}, nil
+}
+
+// parseIDSelector parses a selector that matches by id attribute.
+func (p *parser) parseIDSelector() (idSelector, error) {
+	if p.i >= len(p.s) {
+		return idSelector{}, fmt.Errorf("expected id selector (#id), found EOF instead")
+	}
+	if p.s[p.i] != '#' {
+		return idSelector{}, fmt.Errorf("expected id selector (#id), found '%c' instead", p.s[p.i])
+	}
+
+	p.i++
+	id, err := p.parseName()
+	if err != nil {
+		return idSelector{}, err
+	}
+
+	return idSelector{id: id}, nil
+}
+
+// parseClassSelector parses a selector that matches by class attribute.
+func (p *parser) parseClassSelector() (classSelector, error) {
+	if p.i >= len(p.s) {
+		return classSelector{}, fmt.Errorf("expected class selector (.class), found EOF instead")
+	}
+	if p.s[p.i] != '.' {
+		return classSelector{}, fmt.Errorf("expected class selector (.class), found '%c' instead", p.s[p.i])
+	}
+
+	p.i++
+	class, err := p.parseIdentifier()
+	if err != nil {
+		return classSelector{}, err
+	}
+
+	return classSelector{class: class}, nil
+}
+
+// parseAttributeSelector parses a selector that matches by attribute value.
+func (p *parser) parseAttributeSelector() (attrSelector, error) {
+	if p.i >= len(p.s) {
+		return attrSelector{}, fmt.Errorf("expected attribute selector ([attribute]), found EOF instead")
+	}
+	if p.s[p.i] != '[' {
+		return attrSelector{}, fmt.Errorf("expected attribute selector ([attribute]), found '%c' instead", p.s[p.i])
+	}
+
+	p.i++
+	p.skipWhitespace()
+	key, err := p.parseIdentifier()
+	if err != nil {
+		return attrSelector{}, err
+	}
+	key = toLowerASCII(key)
+
+	p.skipWhitespace()
+	if p.i >= len(p.s) {
+		return attrSelector{}, errors.New("unexpected EOF in attribute selector")
+	}
+
+	if p.s[p.i] == ']' {
+		p.i++
+		return attrSelector{key: key, operation: ""}, nil
+	}
+
+	if p.i+2 >= len(p.s) {
+		return attrSelector{}, errors.New("unexpected EOF in attribute selector")
+	}
+
+	op := p.s[p.i : p.i+2]
+	if op[0] == '=' {
+		op = "="
+	} else if op[1] != '=' {
+		return attrSelector{}, fmt.Errorf(`expected equality operator, found "%s" instead`, op)
+	}
+	p.i += len(op)
+
+	p.skipWhitespace()
+	if p.i >= len(p.s) {
+		return attrSelector{}, errors.New("unexpected EOF in attribute selector")
+	}
+	var val string
+	var rx *regexp.Regexp
+	if op == "#=" {
+		rx, err = p.parseRegex()
+	} else {
+		switch p.s[p.i] {
+		case '\'', '"':
+			val, err = p.parseString()
+		default:
+			val, err = p.parseIdentifier()
+		}
+	}
+	if err != nil {
+		return attrSelector{}, err
+	}
+
+	p.skipWhitespace()
+	if p.i >= len(p.s) {
+		return attrSelector{}, errors.New("unexpected EOF in attribute selector")
+	}
+
+	// check if the attribute contains an ignore case flag
+	ignoreCase := false
+	if p.s[p.i] == 'i' || p.s[p.i] == 'I' {
+		ignoreCase = true
+		p.i++
+	}
+
+	p.skipWhitespace()
+	if p.i >= len(p.s) {
+		return attrSelector{}, errors.New("unexpected EOF in attribute selector")
+	}
+
+	if p.s[p.i] != ']' {
+		return attrSelector{}, fmt.Errorf("expected ']', found '%c' instead", p.s[p.i])
+	}
+	p.i++
+
+	switch op {
+	case "=", "!=", "~=", "|=", "^=", "$=", "*=", "#=":
+		return attrSelector{key: key, val: val, operation: op, regexp: rx, insensitive: ignoreCase}, nil
+	default:
+		return attrSelector{}, fmt.Errorf("attribute operator %q is not supported", op)
+	}
+}
+
+var (
+	errExpectedParenthesis        = errors.New("expected '(' but didn't find it")
+	errExpectedClosingParenthesis = errors.New("expected ')' but didn't find it")
+	errUnmatchedParenthesis       = errors.New("unmatched '('")
+)
+
+// parsePseudoclassSelector parses a pseudoclass selector like :not(p) or a pseudo-element.
+// For backwards compatibility, both ':' and '::' prefix are allowed for pseudo-elements.
+// https://drafts.csswg.org/selectors-3/#pseudo-elements
+// Returning a nil `Sel` (and a nil `error`) means we found a pseudo-element.
+func (p *parser) parsePseudoclassSelector() (out Sel, pseudoElement string, err error) {
+	if p.i >= len(p.s) {
+		return nil, "", fmt.Errorf("expected pseudoclass selector (:pseudoclass), found EOF instead")
+	}
+	if p.s[p.i] != ':' {
+		return nil, "", fmt.Errorf("expected pseudoclass selector (:pseudoclass), found '%c' instead", p.s[p.i])
+	}
+
+	p.i++
+	var mustBePseudoElement bool
+	if p.i >= len(p.s) {
+		return nil, "", fmt.Errorf("got empty pseudoclass (or pseudoelement)")
+	}
+	if p.s[p.i] == ':' { // we found a pseudo-element
+		mustBePseudoElement = true
+		p.i++
+	}
+
+	name, err := p.parseIdentifier()
+	if err != nil {
+		return
+	}
+	name = toLowerASCII(name)
+	if mustBePseudoElement && (name != "after" && name != "backdrop" && name != "before" &&
+		name != "cue" && name != "first-letter" && name != "first-line" && name != "grammar-error" &&
+		name != "marker" && name != "placeholder" && name != "selection" && name != "spelling-error") {
+		return out, "", fmt.Errorf("unknown pseudoelement :%s", name)
+	}
+
+	switch name {
+	case "not", "has", "haschild":
+		if !p.consumeParenthesis() {
+			return out, "", errExpectedParenthesis
+		}
+		sel, parseErr := p.parseSelectorGroup()
+		if parseErr != nil {
+			return out, "", parseErr
+		}
+		if !p.consumeClosingParenthesis() {
+			return out, "", errExpectedClosingParenthesis
+		}
+
+		out = relativePseudoClassSelector{name: name, match: sel}
+
+	case "contains", "containsown":
+		if !p.consumeParenthesis() {
+			return out, "", errExpectedParenthesis
+		}
+		if p.i == len(p.s) {
+			return out, "", errUnmatchedParenthesis
+		}
+		var val string
+		switch p.s[p.i] {
+		case '\'', '"':
+			val, err = p.parseString()
+		default:
+			val, err = p.parseIdentifier()
+		}
+		if err != nil {
+			return out, "", err
+		}
+		val = strings.ToLower(val)
+		p.skipWhitespace()
+		if p.i >= len(p.s) {
+			return out, "", errors.New("unexpected EOF in pseudo selector")
+		}
+		if !p.consumeClosingParenthesis() {
+			return out, "", errExpectedClosingParenthesis
+		}
+
+		out = containsPseudoClassSelector{own: name == "containsown", value: val}
+
+	case "matches", "matchesown":
+		if !p.consumeParenthesis() {
+			return out, "", errExpectedParenthesis
+		}
+		rx, err := p.parseRegex()
+		if err != nil {
+			return out, "", err
+		}
+		if p.i >= len(p.s) {
+			return out, "", errors.New("unexpected EOF in pseudo selector")
+		}
+		if !p.consumeClosingParenthesis() {
+			return out, "", errExpectedClosingParenthesis
+		}
+
+		out = regexpPseudoClassSelector{own: name == "matchesown", regexp: rx}
+
+	case "nth-child", "nth-last-child", "nth-of-type", "nth-last-of-type":
+		if !p.consumeParenthesis() {
+			return out, "", errExpectedParenthesis
+		}
+		a, b, err := p.parseNth()
+		if err != nil {
+			return out, "", err
+		}
+		if !p.consumeClosingParenthesis() {
+			return out, "", errExpectedClosingParenthesis
+		}
+		last := name == "nth-last-child" || name == "nth-last-of-type"
+		ofType := name == "nth-of-type" || name == "nth-last-of-type"
+		out = nthPseudoClassSelector{a: a, b: b, last: last, ofType: ofType}
+
+	case "first-child":
+		out = nthPseudoClassSelector{a: 0, b: 1, ofType: false, last: false}
+	case "last-child":
+		out = nthPseudoClassSelector{a: 0, b: 1, ofType: false, last: true}
+	case "first-of-type":
+		out = nthPseudoClassSelector{a: 0, b: 1, ofType: true, last: false}
+	case "last-of-type":
+		out = nthPseudoClassSelector{a: 0, b: 1, ofType: true, last: true}
+	case "only-child":
+		out = onlyChildPseudoClassSelector{ofType: false}
+	case "only-of-type":
+		out = onlyChildPseudoClassSelector{ofType: true}
+	case "input":
+		out = inputPseudoClassSelector{}
+	case "empty":
+		out = emptyElementPseudoClassSelector{}
+	case "root":
+		out = rootPseudoClassSelector{}
+	case "link":
+		out = linkPseudoClassSelector{}
+	case "lang":
+		if !p.consumeParenthesis() {
+			return out, "", errExpectedParenthesis
+		}
+		if p.i == len(p.s) {
+			return out, "", errUnmatchedParenthesis
+		}
+		val, err := p.parseIdentifier()
+		if err != nil {
+			return out, "", err
+		}
+		val = strings.ToLower(val)
+		p.skipWhitespace()
+		if p.i >= len(p.s) {
+			return out, "", errors.New("unexpected EOF in pseudo selector")
+		}
+		if !p.consumeClosingParenthesis() {
+			return out, "", errExpectedClosingParenthesis
+		}
+		out = langPseudoClassSelector{lang: val}
+	case "enabled":
+		out = enabledPseudoClassSelector{}
+	case "disabled":
+		out = disabledPseudoClassSelector{}
+	case "checked":
+		out = checkedPseudoClassSelector{}
+	case "visited", "hover", "active", "focus", "target":
+		// Not applicable in a static context: never match.
+		out = neverMatchSelector{value: ":" + name}
+	case "after", "backdrop", "before", "cue", "first-letter", "first-line", "grammar-error", "marker", "placeholder", "selection", "spelling-error":
+		return nil, name, nil
+	default:
+		return out, "", fmt.Errorf("unknown pseudoclass or pseudoelement :%s", name)
+	}
+	return
+}
+
+// parseInteger parses a decimal integer.
+func (p *parser) parseInteger() (int, error) {
+	i := p.i
+	start := i
+	for i < len(p.s) && '0' <= p.s[i] && p.s[i] <= '9' {
+		i++
+	}
+	if i == start {
+		return 0, errors.New("expected integer, but didn't find it")
+	}
+	p.i = i
+
+	val, err := strconv.Atoi(p.s[start:i])
+	if err != nil {
+		return 0, err
+	}
+
+	return val, nil
+}
+
+// parseNth parses the argument for :nth-child (normally of the form an+b).
+func (p *parser) parseNth() (a, b int, err error) {
+	// initial state
+	if p.i >= len(p.s) {
+		goto eof
+	}
+	switch p.s[p.i] {
+	case '-':
+		p.i++
+		goto negativeA
+	case '+':
+		p.i++
+		goto positiveA
+	case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
+		goto positiveA
+	case 'n', 'N':
+		a = 1
+		p.i++
+		goto readN
+	case 'o', 'O', 'e', 'E':
+		id, nameErr := p.parseName()
+		if nameErr != nil {
+			return 0, 0, nameErr
+		}
+		id = toLowerASCII(id)
+		if id == "odd" {
+			return 2, 1, nil
+		}
+		if id == "even" {
+			return 2, 0, nil
+		}
+		return 0, 0, fmt.Errorf("expected 'odd' or 'even', but found '%s' instead", id)
+	default:
+		goto invalid
+	}
+
+positiveA:
+	if p.i >= len(p.s) {
+		goto eof
+	}
+	switch p.s[p.i] {
+	case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
+		a, err = p.parseInteger()
+		if err != nil {
+			return 0, 0, err
+		}
+		goto readA
+	case 'n', 'N':
+		a = 1
+		p.i++
+		goto readN
+	default:
+		goto invalid
+	}
+
+negativeA:
+	if p.i >= len(p.s) {
+		goto eof
+	}
+	switch p.s[p.i] {
+	case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
+		a, err = p.parseInteger()
+		if err != nil {
+			return 0, 0, err
+		}
+		a = -a
+		goto readA
+	case 'n', 'N':
+		a = -1
+		p.i++
+		goto readN
+	default:
+		goto invalid
+	}
+
+readA:
+	if p.i >= len(p.s) {
+		goto eof
+	}
+	switch p.s[p.i] {
+	case 'n', 'N':
+		p.i++
+		goto readN
+	default:
+		// The number we read as a is actually b.
+		return 0, a, nil
+	}
+
+readN:
+	p.skipWhitespace()
+	if p.i >= len(p.s) {
+		goto eof
+	}
+	switch p.s[p.i] {
+	case '+':
+		p.i++
+		p.skipWhitespace()
+		b, err = p.parseInteger()
+		if err != nil {
+			return 0, 0, err
+		}
+		return a, b, nil
+	case '-':
+		p.i++
+		p.skipWhitespace()
+		b, err = p.parseInteger()
+		if err != nil {
+			return 0, 0, err
+		}
+		return a, -b, nil
+	default:
+		return a, 0, nil
+	}
+
+eof:
+	return 0, 0, errors.New("unexpected EOF while attempting to parse expression of form an+b")
+
+invalid:
+	return 0, 0, errors.New("unexpected character while attempting to parse expression of form an+b")
+}
+
+// parseSimpleSelectorSequence parses a selector sequence that applies to
+// a single element.
+func (p *parser) parseSimpleSelectorSequence() (Sel, error) {
+	var selectors []Sel
+
+	if p.i >= len(p.s) {
+		return nil, errors.New("expected selector, found EOF instead")
+	}
+
+	switch p.s[p.i] {
+	case '*':
+		// It's the universal selector. Just skip over it, since it doesn't affect the meaning.
+		p.i++
+		if p.i+2 <= len(p.s) && p.s[p.i:p.i+2] == "|*" { // other version of universal selector
+			p.i += 2
+		}
+	case '#', '.', '[', ':':
+		// There's no type selector. Wait to process the other till the main loop.
+	default:
+		r, err := p.parseTypeSelector()
+		if err != nil {
+			return nil, err
+		}
+		selectors = append(selectors, r)
+	}
+
+	var pseudoElement string
+loop:
+	for p.i < len(p.s) {
+		var (
+			ns               Sel
+			newPseudoElement string
+			err              error
+		)
+		switch p.s[p.i] {
+		case '#':
+			ns, err = p.parseIDSelector()
+		case '.':
+			ns, err = p.parseClassSelector()
+		case '[':
+			ns, err = p.parseAttributeSelector()
+		case ':':
+			ns, newPseudoElement, err = p.parsePseudoclassSelector()
+		default:
+			break loop
+		}
+		if err != nil {
+			return nil, err
+		}
+		// From https://drafts.csswg.org/selectors-3/#pseudo-elements :
+		// "Only one pseudo-element may appear per selector, and if present
+		// it must appear after the sequence of simple selectors that
+		// represents the subjects of the selector."
+		if ns == nil { // we found a pseudo-element
+			if pseudoElement != "" {
+				return nil, fmt.Errorf("only one pseudo-element is accepted per selector, got %s and %s", pseudoElement, newPseudoElement)
+			}
+			if !p.acceptPseudoElements {
+				return nil, fmt.Errorf("pseudo-element %s found, but pseudo-elements support is disabled", newPseudoElement)
+			}
+			pseudoElement = newPseudoElement
+		} else {
+			if pseudoElement != "" {
+				return nil, fmt.Errorf("pseudo-element %s must be at the end of selector", pseudoElement)
+			}
+			selectors = append(selectors, ns)
+		}
+
+	}
+	if len(selectors) == 1 && pseudoElement == "" { // no need to wrap the selectors in compoundSelector
+		return selectors[0], nil
+	}
+	return compoundSelector{selectors: selectors, pseudoElement: pseudoElement}, nil
+}
+
+// parseSelector parses a selector that may include combinators.
+func (p *parser) parseSelector() (Sel, error) {
+	p.skipWhitespace()
+	result, err := p.parseSimpleSelectorSequence()
+	if err != nil {
+		return nil, err
+	}
+
+	for {
+		var (
+			combinator byte
+			c          Sel
+		)
+		if p.skipWhitespace() {
+			combinator = ' '
+		}
+		if p.i >= len(p.s) {
+			return result, nil
+		}
+
+		switch p.s[p.i] {
+		case '+', '>', '~':
+			combinator = p.s[p.i]
+			p.i++
+			p.skipWhitespace()
+		case ',', ')':
+			// These characters can't begin a selector, but they can legally occur after one.
+			return result, nil
+		}
+
+		if combinator == 0 {
+			return result, nil
+		}
+
+		c, err = p.parseSimpleSelectorSequence()
+		if err != nil {
+			return nil, err
+		}
+		result = combinedSelector{first: result, combinator: combinator, second: c}
+	}
+}
+
+// parseSelectorGroup parses a group of selectors, separated by commas.
+func (p *parser) parseSelectorGroup() (SelectorGroup, error) {
+	current, err := p.parseSelector()
+	if err != nil {
+		return nil, err
+	}
+	result := SelectorGroup{current}
+
+	for p.i < len(p.s) {
+		if p.s[p.i] != ',' {
+			break
+		}
+		p.i++
+		c, err := p.parseSelector()
+		if err != nil {
+			return nil, err
+		}
+		result = append(result, c)
+	}
+	return result, nil
+}

vendor/github.com/andybalholm/cascadia/pseudo_classes.go 🔗

@@ -0,0 +1,458 @@
+package cascadia
+
+import (
+	"bytes"
+	"fmt"
+	"regexp"
+	"strings"
+
+	"golang.org/x/net/html"
+	"golang.org/x/net/html/atom"
+)
+
+// This file implements the pseudo classes selectors,
+// which share the implementation of PseudoElement() and Specificity()
+
+type abstractPseudoClass struct{}
+
+func (s abstractPseudoClass) Specificity() Specificity {
+	return Specificity{0, 1, 0}
+}
+
+func (s abstractPseudoClass) PseudoElement() string {
+	return ""
+}
+
+type relativePseudoClassSelector struct {
+	name  string // one of "not", "has", "haschild"
+	match SelectorGroup
+}
+
+func (s relativePseudoClassSelector) Match(n *html.Node) bool {
+	if n.Type != html.ElementNode {
+		return false
+	}
+	switch s.name {
+	case "not":
+		// matches elements that do not match a.
+		return !s.match.Match(n)
+	case "has":
+		// matches elements with any descendant that matches a.
+		return hasDescendantMatch(n, s.match)
+	case "haschild":
+		// matches elements with a child that matches a.
+		return hasChildMatch(n, s.match)
+	default:
+		panic(fmt.Sprintf("unsupported relative pseudo class selector: %s", s.name))
+	}
+}
+
+// hasChildMatch returns whether n has any child that matches a.
+func hasChildMatch(n *html.Node, a Matcher) bool {
+	for c := n.FirstChild; c != nil; c = c.NextSibling {
+		if a.Match(c) {
+			return true
+		}
+	}
+	return false
+}
+
+// hasDescendantMatch performs a depth-first search of n's descendants,
+// testing whether any of them match a. It returns true as soon as a match is
+// found, or false if no match is found.
+func hasDescendantMatch(n *html.Node, a Matcher) bool {
+	for c := n.FirstChild; c != nil; c = c.NextSibling {
+		if a.Match(c) || (c.Type == html.ElementNode && hasDescendantMatch(c, a)) {
+			return true
+		}
+	}
+	return false
+}
+
+// Specificity returns the specificity of the most specific selectors
+// in the pseudo-class arguments.
+// See https://www.w3.org/TR/selectors/#specificity-rules
+func (s relativePseudoClassSelector) Specificity() Specificity {
+	var max Specificity
+	for _, sel := range s.match {
+		newSpe := sel.Specificity()
+		if max.Less(newSpe) {
+			max = newSpe
+		}
+	}
+	return max
+}
+
+func (s relativePseudoClassSelector) PseudoElement() string {
+	return ""
+}
+
+type containsPseudoClassSelector struct {
+	abstractPseudoClass
+	value string
+	own   bool
+}
+
+func (s containsPseudoClassSelector) Match(n *html.Node) bool {
+	var text string
+	if s.own {
+		// matches nodes that directly contain the given text
+		text = strings.ToLower(nodeOwnText(n))
+	} else {
+		// matches nodes that contain the given text.
+		text = strings.ToLower(nodeText(n))
+	}
+	return strings.Contains(text, s.value)
+}
+
+type regexpPseudoClassSelector struct {
+	abstractPseudoClass
+	regexp *regexp.Regexp
+	own    bool
+}
+
+func (s regexpPseudoClassSelector) Match(n *html.Node) bool {
+	var text string
+	if s.own {
+		// matches nodes whose text directly matches the specified regular expression
+		text = nodeOwnText(n)
+	} else {
+		// matches nodes whose text matches the specified regular expression
+		text = nodeText(n)
+	}
+	return s.regexp.MatchString(text)
+}
+
+// writeNodeText writes the text contained in n and its descendants to b.
+func writeNodeText(n *html.Node, b *bytes.Buffer) {
+	switch n.Type {
+	case html.TextNode:
+		b.WriteString(n.Data)
+	case html.ElementNode:
+		for c := n.FirstChild; c != nil; c = c.NextSibling {
+			writeNodeText(c, b)
+		}
+	}
+}
+
+// nodeText returns the text contained in n and its descendants.
+func nodeText(n *html.Node) string {
+	var b bytes.Buffer
+	writeNodeText(n, &b)
+	return b.String()
+}
+
+// nodeOwnText returns the contents of the text nodes that are direct
+// children of n.
+func nodeOwnText(n *html.Node) string {
+	var b bytes.Buffer
+	for c := n.FirstChild; c != nil; c = c.NextSibling {
+		if c.Type == html.TextNode {
+			b.WriteString(c.Data)
+		}
+	}
+	return b.String()
+}
+
+type nthPseudoClassSelector struct {
+	abstractPseudoClass
+	a, b         int
+	last, ofType bool
+}
+
+func (s nthPseudoClassSelector) Match(n *html.Node) bool {
+	if s.a == 0 {
+		if s.last {
+			return simpleNthLastChildMatch(s.b, s.ofType, n)
+		} else {
+			return simpleNthChildMatch(s.b, s.ofType, n)
+		}
+	}
+	return nthChildMatch(s.a, s.b, s.last, s.ofType, n)
+}
+
+// nthChildMatch implements :nth-child(an+b).
+// If last is true, implements :nth-last-child instead.
+// If ofType is true, implements :nth-of-type instead.
+func nthChildMatch(a, b int, last, ofType bool, n *html.Node) bool {
+	if n.Type != html.ElementNode {
+		return false
+	}
+
+	parent := n.Parent
+	if parent == nil {
+		return false
+	}
+
+	i := -1
+	count := 0
+	for c := parent.FirstChild; c != nil; c = c.NextSibling {
+		if (c.Type != html.ElementNode) || (ofType && c.Data != n.Data) {
+			continue
+		}
+		count++
+		if c == n {
+			i = count
+			if !last {
+				break
+			}
+		}
+	}
+
+	if i == -1 {
+		// This shouldn't happen, since n should always be one of its parent's children.
+		return false
+	}
+
+	if last {
+		i = count - i + 1
+	}
+
+	i -= b
+	if a == 0 {
+		return i == 0
+	}
+
+	return i%a == 0 && i/a >= 0
+}
+
+// simpleNthChildMatch implements :nth-child(b).
+// If ofType is true, implements :nth-of-type instead.
+func simpleNthChildMatch(b int, ofType bool, n *html.Node) bool {
+	if n.Type != html.ElementNode {
+		return false
+	}
+
+	parent := n.Parent
+	if parent == nil {
+		return false
+	}
+
+	count := 0
+	for c := parent.FirstChild; c != nil; c = c.NextSibling {
+		if c.Type != html.ElementNode || (ofType && c.Data != n.Data) {
+			continue
+		}
+		count++
+		if c == n {
+			return count == b
+		}
+		if count >= b {
+			return false
+		}
+	}
+	return false
+}
+
+// simpleNthLastChildMatch implements :nth-last-child(b).
+// If ofType is true, implements :nth-last-of-type instead.
+func simpleNthLastChildMatch(b int, ofType bool, n *html.Node) bool {
+	if n.Type != html.ElementNode {
+		return false
+	}
+
+	parent := n.Parent
+	if parent == nil {
+		return false
+	}
+
+	count := 0
+	for c := parent.LastChild; c != nil; c = c.PrevSibling {
+		if c.Type != html.ElementNode || (ofType && c.Data != n.Data) {
+			continue
+		}
+		count++
+		if c == n {
+			return count == b
+		}
+		if count >= b {
+			return false
+		}
+	}
+	return false
+}
+
+type onlyChildPseudoClassSelector struct {
+	abstractPseudoClass
+	ofType bool
+}
+
+// Match implements :only-child.
+// If `ofType` is true, it implements :only-of-type instead.
+func (s onlyChildPseudoClassSelector) Match(n *html.Node) bool {
+	if n.Type != html.ElementNode {
+		return false
+	}
+
+	parent := n.Parent
+	if parent == nil {
+		return false
+	}
+
+	count := 0
+	for c := parent.FirstChild; c != nil; c = c.NextSibling {
+		if (c.Type != html.ElementNode) || (s.ofType && c.Data != n.Data) {
+			continue
+		}
+		count++
+		if count > 1 {
+			return false
+		}
+	}
+
+	return count == 1
+}
+
+type inputPseudoClassSelector struct {
+	abstractPseudoClass
+}
+
+// Matches input, select, textarea and button elements.
+func (s inputPseudoClassSelector) Match(n *html.Node) bool {
+	return n.Type == html.ElementNode && (n.Data == "input" || n.Data == "select" || n.Data == "textarea" || n.Data == "button")
+}
+
+type emptyElementPseudoClassSelector struct {
+	abstractPseudoClass
+}
+
+// Matches empty elements.
+func (s emptyElementPseudoClassSelector) Match(n *html.Node) bool {
+	if n.Type != html.ElementNode {
+		return false
+	}
+
+	for c := n.FirstChild; c != nil; c = c.NextSibling {
+		switch c.Type {
+		case html.ElementNode:
+			return false
+		case html.TextNode:
+			if strings.TrimSpace(nodeText(c)) == "" {
+				continue
+			} else {
+				return false
+			}
+		}
+	}
+
+	return true
+}
+
+type rootPseudoClassSelector struct {
+	abstractPseudoClass
+}
+
+// Match implements :root
+func (s rootPseudoClassSelector) Match(n *html.Node) bool {
+	if n.Type != html.ElementNode {
+		return false
+	}
+	if n.Parent == nil {
+		return false
+	}
+	return n.Parent.Type == html.DocumentNode
+}
+
+func hasAttr(n *html.Node, attr string) bool {
+	return matchAttribute(n, attr, func(string) bool { return true })
+}
+
+type linkPseudoClassSelector struct {
+	abstractPseudoClass
+}
+
+// Match implements :link
+func (s linkPseudoClassSelector) Match(n *html.Node) bool {
+	return (n.DataAtom == atom.A || n.DataAtom == atom.Area || n.DataAtom == atom.Link) && hasAttr(n, "href")
+}
+
+type langPseudoClassSelector struct {
+	abstractPseudoClass
+	lang string
+}
+
+func (s langPseudoClassSelector) Match(n *html.Node) bool {
+	own := matchAttribute(n, "lang", func(val string) bool {
+		return val == s.lang || strings.HasPrefix(val, s.lang+"-")
+	})
+	if n.Parent == nil {
+		return own
+	}
+	return own || s.Match(n.Parent)
+}
+
+type enabledPseudoClassSelector struct {
+	abstractPseudoClass
+}
+
+func (s enabledPseudoClassSelector) Match(n *html.Node) bool {
+	if n.Type != html.ElementNode {
+		return false
+	}
+	switch n.DataAtom {
+	case atom.A, atom.Area, atom.Link:
+		return hasAttr(n, "href")
+	case atom.Optgroup, atom.Menuitem, atom.Fieldset:
+		return !hasAttr(n, "disabled")
+	case atom.Button, atom.Input, atom.Select, atom.Textarea, atom.Option:
+		return !hasAttr(n, "disabled") && !inDisabledFieldset(n)
+	}
+	return false
+}
+
+type disabledPseudoClassSelector struct {
+	abstractPseudoClass
+}
+
+func (s disabledPseudoClassSelector) Match(n *html.Node) bool {
+	if n.Type != html.ElementNode {
+		return false
+	}
+	switch n.DataAtom {
+	case atom.Optgroup, atom.Menuitem, atom.Fieldset:
+		return hasAttr(n, "disabled")
+	case atom.Button, atom.Input, atom.Select, atom.Textarea, atom.Option:
+		return hasAttr(n, "disabled") || inDisabledFieldset(n)
+	}
+	return false
+}
+
+func hasLegendInPreviousSiblings(n *html.Node) bool {
+	for s := n.PrevSibling; s != nil; s = s.PrevSibling {
+		if s.DataAtom == atom.Legend {
+			return true
+		}
+	}
+	return false
+}
+
+func inDisabledFieldset(n *html.Node) bool {
+	if n.Parent == nil {
+		return false
+	}
+	if n.Parent.DataAtom == atom.Fieldset && hasAttr(n.Parent, "disabled") &&
+		(n.DataAtom != atom.Legend || hasLegendInPreviousSiblings(n)) {
+		return true
+	}
+	return inDisabledFieldset(n.Parent)
+}
+
+type checkedPseudoClassSelector struct {
+	abstractPseudoClass
+}
+
+func (s checkedPseudoClassSelector) Match(n *html.Node) bool {
+	if n.Type != html.ElementNode {
+		return false
+	}
+	switch n.DataAtom {
+	case atom.Input, atom.Menuitem:
+		return hasAttr(n, "checked") && matchAttribute(n, "type", func(val string) bool {
+			t := toLowerASCII(val)
+			return t == "checkbox" || t == "radio"
+		})
+	case atom.Option:
+		return hasAttr(n, "selected")
+	}
+	return false
+}

vendor/github.com/andybalholm/cascadia/selector.go 🔗

@@ -0,0 +1,586 @@
+package cascadia
+
+import (
+	"fmt"
+	"regexp"
+	"strings"
+
+	"golang.org/x/net/html"
+)
+
+// Matcher is the interface for basic selector functionality.
+// Match returns whether a selector matches n.
+type Matcher interface {
+	Match(n *html.Node) bool
+}
+
+// Sel is the interface for all the functionality provided by selectors.
+type Sel interface {
+	Matcher
+	Specificity() Specificity
+
+	// Returns a CSS input compiling to this selector.
+	String() string
+
+	// Returns a pseudo-element, or an empty string.
+	PseudoElement() string
+}
+
+// Parse parses a selector. Use `ParseWithPseudoElement`
+// if you need support for pseudo-elements.
+func Parse(sel string) (Sel, error) {
+	p := &parser{s: sel}
+	compiled, err := p.parseSelector()
+	if err != nil {
+		return nil, err
+	}
+
+	if p.i < len(sel) {
+		return nil, fmt.Errorf("parsing %q: %d bytes left over", sel, len(sel)-p.i)
+	}
+
+	return compiled, nil
+}
+
+// ParseWithPseudoElement parses a single selector,
+// with support for pseudo-element.
+func ParseWithPseudoElement(sel string) (Sel, error) {
+	p := &parser{s: sel, acceptPseudoElements: true}
+	compiled, err := p.parseSelector()
+	if err != nil {
+		return nil, err
+	}
+
+	if p.i < len(sel) {
+		return nil, fmt.Errorf("parsing %q: %d bytes left over", sel, len(sel)-p.i)
+	}
+
+	return compiled, nil
+}
+
+// ParseGroup parses a selector, or a group of selectors separated by commas.
+// Use `ParseGroupWithPseudoElements`
+// if you need support for pseudo-elements.
+func ParseGroup(sel string) (SelectorGroup, error) {
+	p := &parser{s: sel}
+	compiled, err := p.parseSelectorGroup()
+	if err != nil {
+		return nil, err
+	}
+
+	if p.i < len(sel) {
+		return nil, fmt.Errorf("parsing %q: %d bytes left over", sel, len(sel)-p.i)
+	}
+
+	return compiled, nil
+}
+
+// ParseGroupWithPseudoElements parses a selector, or a group of selectors separated by commas.
+// It supports pseudo-elements.
+func ParseGroupWithPseudoElements(sel string) (SelectorGroup, error) {
+	p := &parser{s: sel, acceptPseudoElements: true}
+	compiled, err := p.parseSelectorGroup()
+	if err != nil {
+		return nil, err
+	}
+
+	if p.i < len(sel) {
+		return nil, fmt.Errorf("parsing %q: %d bytes left over", sel, len(sel)-p.i)
+	}
+
+	return compiled, nil
+}
+
+// A Selector is a function which tells whether a node matches or not.
+//
+// This type is maintained for compatibility; I recommend using the newer and
+// more idiomatic interfaces Sel and Matcher.
+type Selector func(*html.Node) bool
+
+// Compile parses a selector and returns, if successful, a Selector object
+// that can be used to match against html.Node objects.
+func Compile(sel string) (Selector, error) {
+	compiled, err := ParseGroup(sel)
+	if err != nil {
+		return nil, err
+	}
+
+	return Selector(compiled.Match), nil
+}
+
+// MustCompile is like Compile, but panics instead of returning an error.
+func MustCompile(sel string) Selector {
+	compiled, err := Compile(sel)
+	if err != nil {
+		panic(err)
+	}
+	return compiled
+}
+
+// MatchAll returns a slice of the nodes that match the selector,
+// from n and its children.
+func (s Selector) MatchAll(n *html.Node) []*html.Node {
+	return s.matchAllInto(n, nil)
+}
+
+func (s Selector) matchAllInto(n *html.Node, storage []*html.Node) []*html.Node {
+	if s(n) {
+		storage = append(storage, n)
+	}
+
+	for child := n.FirstChild; child != nil; child = child.NextSibling {
+		storage = s.matchAllInto(child, storage)
+	}
+
+	return storage
+}
+
+func queryInto(n *html.Node, m Matcher, storage []*html.Node) []*html.Node {
+	for child := n.FirstChild; child != nil; child = child.NextSibling {
+		if m.Match(child) {
+			storage = append(storage, child)
+		}
+		storage = queryInto(child, m, storage)
+	}
+
+	return storage
+}
+
+// QueryAll returns a slice of all the nodes that match m, from the descendants
+// of n.
+func QueryAll(n *html.Node, m Matcher) []*html.Node {
+	return queryInto(n, m, nil)
+}
+
+// Match returns true if the node matches the selector.
+func (s Selector) Match(n *html.Node) bool {
+	return s(n)
+}
+
+// MatchFirst returns the first node that matches s, from n and its children.
+func (s Selector) MatchFirst(n *html.Node) *html.Node {
+	if s.Match(n) {
+		return n
+	}
+
+	for c := n.FirstChild; c != nil; c = c.NextSibling {
+		m := s.MatchFirst(c)
+		if m != nil {
+			return m
+		}
+	}
+	return nil
+}
+
+// Query returns the first node that matches m, from the descendants of n.
+// If none matches, it returns nil.
+func Query(n *html.Node, m Matcher) *html.Node {
+	for c := n.FirstChild; c != nil; c = c.NextSibling {
+		if m.Match(c) {
+			return c
+		}
+		if matched := Query(c, m); matched != nil {
+			return matched
+		}
+	}
+
+	return nil
+}
+
+// Filter returns the nodes in nodes that match the selector.
+func (s Selector) Filter(nodes []*html.Node) (result []*html.Node) {
+	for _, n := range nodes {
+		if s(n) {
+			result = append(result, n)
+		}
+	}
+	return result
+}
+
+// Filter returns the nodes that match m.
+func Filter(nodes []*html.Node, m Matcher) (result []*html.Node) {
+	for _, n := range nodes {
+		if m.Match(n) {
+			result = append(result, n)
+		}
+	}
+	return result
+}
+
+type tagSelector struct {
+	tag string
+}
+
+// Matches elements with a given tag name.
+func (t tagSelector) Match(n *html.Node) bool {
+	return n.Type == html.ElementNode && n.Data == t.tag
+}
+
+func (c tagSelector) Specificity() Specificity {
+	return Specificity{0, 0, 1}
+}
+
+func (c tagSelector) PseudoElement() string {
+	return ""
+}
+
+type classSelector struct {
+	class string
+}
+
+// Matches elements by class attribute.
+func (t classSelector) Match(n *html.Node) bool {
+	return matchAttribute(n, "class", func(s string) bool {
+		return matchInclude(t.class, s, false)
+	})
+}
+
+func (c classSelector) Specificity() Specificity {
+	return Specificity{0, 1, 0}
+}
+
+func (c classSelector) PseudoElement() string {
+	return ""
+}
+
+type idSelector struct {
+	id string
+}
+
+// Matches elements by id attribute.
+func (t idSelector) Match(n *html.Node) bool {
+	return matchAttribute(n, "id", func(s string) bool {
+		return s == t.id
+	})
+}
+
+func (c idSelector) Specificity() Specificity {
+	return Specificity{1, 0, 0}
+}
+
+func (c idSelector) PseudoElement() string {
+	return ""
+}
+
+type attrSelector struct {
+	key, val, operation string
+	regexp              *regexp.Regexp
+	insensitive         bool
+}
+
+// Matches elements by attribute value.
+func (t attrSelector) Match(n *html.Node) bool {
+	switch t.operation {
+	case "":
+		return matchAttribute(n, t.key, func(string) bool { return true })
+	case "=":
+		return matchAttribute(n, t.key, func(s string) bool { return matchInsensitiveValue(s, t.val, t.insensitive) })
+	case "!=":
+		return attributeNotEqualMatch(t.key, t.val, n, t.insensitive)
+	case "~=":
+		// matches elements where the attribute named key is a whitespace-separated list that includes val.
+		return matchAttribute(n, t.key, func(s string) bool { return matchInclude(t.val, s, t.insensitive) })
+	case "|=":
+		return attributeDashMatch(t.key, t.val, n, t.insensitive)
+	case "^=":
+		return attributePrefixMatch(t.key, t.val, n, t.insensitive)
+	case "$=":
+		return attributeSuffixMatch(t.key, t.val, n, t.insensitive)
+	case "*=":
+		return attributeSubstringMatch(t.key, t.val, n, t.insensitive)
+	case "#=":
+		return attributeRegexMatch(t.key, t.regexp, n)
+	default:
+		panic(fmt.Sprintf("unsupported operation: %s", t.operation))
+	}
+}
+
+// matchInsensitiveValue matches attribute values, optionally ignoring case.
+// userAttr is the value the selector asks for; realAttr is the attribute
+// value found in the parsed document.
+func matchInsensitiveValue(userAttr string, realAttr string, ignoreCase bool) bool {
+	if ignoreCase {
+		return strings.EqualFold(userAttr, realAttr)
+	}
+	return userAttr == realAttr
+}
+
+// matches elements where the attribute named key satisfies the function f.
+func matchAttribute(n *html.Node, key string, f func(string) bool) bool {
+	if n.Type != html.ElementNode {
+		return false
+	}
+	for _, a := range n.Attr {
+		if a.Key == key && f(a.Val) {
+			return true
+		}
+	}
+	return false
+}
+
+// attributeNotEqualMatch matches elements where
+// the attribute named key does not have the value val.
+func attributeNotEqualMatch(key, val string, n *html.Node, ignoreCase bool) bool {
+	if n.Type != html.ElementNode {
+		return false
+	}
+	for _, a := range n.Attr {
+		if a.Key == key && matchInsensitiveValue(a.Val, val, ignoreCase) {
+			return false
+		}
+	}
+	return true
+}
+
+// returns true if s is a whitespace-separated list that includes val.
+func matchInclude(val string, s string, ignoreCase bool) bool {
+	for s != "" {
+		i := strings.IndexAny(s, " \t\r\n\f")
+		if i == -1 {
+			return matchInsensitiveValue(s, val, ignoreCase)
+		}
+		if matchInsensitiveValue(s[:i], val, ignoreCase) {
+			return true
+		}
+		s = s[i+1:]
+	}
+	return false
+}
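The `~=` logic above scans a whitespace-separated token list by hand so it can stop early and honor the case-insensitive flag. A minimal stdlib-only sketch of the same behavior (the name `includeMatch` is hypothetical, and this version is case-sensitive only):

```go
package main

import (
	"fmt"
	"strings"
)

// includeMatch reports whether list, a whitespace-separated token list,
// contains val — the behavior of the CSS ~= attribute operator.
func includeMatch(val, list string) bool {
	for _, tok := range strings.Fields(list) {
		if tok == val {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(includeMatch("btn", "btn btn-primary active")) // true
	fmt.Println(includeMatch("primary", "btn btn-primary"))    // false
}
```

Note that `strings.Fields` allocates the whole token slice; the vendored loop avoids that by slicing through the string in place.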
+
+// attributeDashMatch matches elements where the attribute named key equals val or starts with val plus a hyphen.
+func attributeDashMatch(key, val string, n *html.Node, ignoreCase bool) bool {
+	return matchAttribute(n, key,
+		func(s string) bool {
+			if matchInsensitiveValue(s, val, ignoreCase) {
+				return true
+			}
+			if len(s) <= len(val) {
+				return false
+			}
+			if matchInsensitiveValue(s[:len(val)], val, ignoreCase) && s[len(val)] == '-' {
+				return true
+			}
+			return false
+		})
+}
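The `|=` operator matched above has a compact equivalent: the attribute must equal val, or start with val followed by a hyphen (as in `lang|=en` matching `en-US`). A hedged stdlib-only sketch, with a hypothetical `dashMatch` helper and no case-insensitive handling:

```go
package main

import (
	"fmt"
	"strings"
)

// dashMatch reports whether attr equals val or begins with val
// immediately followed by a hyphen — the CSS |= operator.
func dashMatch(attr, val string) bool {
	return attr == val || strings.HasPrefix(attr, val+"-")
}

func main() {
	fmt.Println(dashMatch("en-US", "en")) // true
	fmt.Println(dashMatch("en", "en"))    // true
	fmt.Println(dashMatch("ente", "en"))  // false: "en" is only a prefix, not a dash-delimited token
}
```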
+
+// attributePrefixMatch matches elements where
+// the attribute named key starts with val.
+func attributePrefixMatch(key, val string, n *html.Node, ignoreCase bool) bool {
+	return matchAttribute(n, key,
+		func(s string) bool {
+			if strings.TrimSpace(s) == "" {
+				return false
+			}
+			if ignoreCase {
+				return strings.HasPrefix(strings.ToLower(s), strings.ToLower(val))
+			}
+			return strings.HasPrefix(s, val)
+		})
+}
+
+// attributeSuffixMatch matches elements where
+// the attribute named key ends with val.
+func attributeSuffixMatch(key, val string, n *html.Node, ignoreCase bool) bool {
+	return matchAttribute(n, key,
+		func(s string) bool {
+			if strings.TrimSpace(s) == "" {
+				return false
+			}
+			if ignoreCase {
+				return strings.HasSuffix(strings.ToLower(s), strings.ToLower(val))
+			}
+			return strings.HasSuffix(s, val)
+		})
+}
+
+// attributeSubstringMatch matches nodes where
+// the attribute named key contains val.
+func attributeSubstringMatch(key, val string, n *html.Node, ignoreCase bool) bool {
+	return matchAttribute(n, key,
+		func(s string) bool {
+			if strings.TrimSpace(s) == "" {
+				return false
+			}
+			if ignoreCase {
+				return strings.Contains(strings.ToLower(s), strings.ToLower(val))
+			}
+			return strings.Contains(s, val)
+		})
+}
+
+// attributeRegexMatch matches nodes where
+// the attribute named key matches the regular expression rx.
+func attributeRegexMatch(key string, rx *regexp.Regexp, n *html.Node) bool {
+	return matchAttribute(n, key,
+		func(s string) bool {
+			return rx.MatchString(s)
+		})
+}
+
+func (c attrSelector) Specificity() Specificity {
+	return Specificity{0, 1, 0}
+}
+
+func (c attrSelector) PseudoElement() string {
+	return ""
+}
+
+// see pseudo_classes.go for pseudo classes selectors
+
+// In a static context, some selectors can never match anything.
+type neverMatchSelector struct {
+	value string
+}
+
+func (s neverMatchSelector) Match(n *html.Node) bool {
+	return false
+}
+
+func (s neverMatchSelector) Specificity() Specificity {
+	return Specificity{0, 0, 0}
+}
+
+func (c neverMatchSelector) PseudoElement() string {
+	return ""
+}
+
+type compoundSelector struct {
+	selectors     []Sel
+	pseudoElement string
+}
+
+// Matches elements if each sub-selectors matches.
+func (t compoundSelector) Match(n *html.Node) bool {
+	if len(t.selectors) == 0 {
+		return n.Type == html.ElementNode
+	}
+
+	for _, sel := range t.selectors {
+		if !sel.Match(n) {
+			return false
+		}
+	}
+	return true
+}
+
+func (s compoundSelector) Specificity() Specificity {
+	var out Specificity
+	for _, sel := range s.selectors {
+		out = out.Add(sel.Specificity())
+	}
+	if s.pseudoElement != "" {
+		// https://drafts.csswg.org/selectors-3/#specificity
+		out = out.Add(Specificity{0, 0, 1})
+	}
+	return out
+}
+
+func (c compoundSelector) PseudoElement() string {
+	return c.pseudoElement
+}
+
+type combinedSelector struct {
+	first      Sel
+	combinator byte
+	second     Sel
+}
+
+func (t combinedSelector) Match(n *html.Node) bool {
+	if t.first == nil {
+		return false // maybe we should panic
+	}
+	switch t.combinator {
+	case 0:
+		return t.first.Match(n)
+	case ' ':
+		return descendantMatch(t.first, t.second, n)
+	case '>':
+		return childMatch(t.first, t.second, n)
+	case '+':
+		return siblingMatch(t.first, t.second, true, n)
+	case '~':
+		return siblingMatch(t.first, t.second, false, n)
+	default:
+		panic("unknown combinator")
+	}
+}
+
+// matches an element if it matches d and has an ancestor that matches a.
+func descendantMatch(a, d Matcher, n *html.Node) bool {
+	if !d.Match(n) {
+		return false
+	}
+
+	for p := n.Parent; p != nil; p = p.Parent {
+		if a.Match(p) {
+			return true
+		}
+	}
+
+	return false
+}
+
+// matches an element if it matches d and its parent matches a.
+func childMatch(a, d Matcher, n *html.Node) bool {
+	return d.Match(n) && n.Parent != nil && a.Match(n.Parent)
+}
+
+// matches an element if it matches s2 and is preceded by an element that matches s1.
+// If adjacent is true, the sibling must be immediately before the element.
+func siblingMatch(s1, s2 Matcher, adjacent bool, n *html.Node) bool {
+	if !s2.Match(n) {
+		return false
+	}
+
+	if adjacent {
+		for n = n.PrevSibling; n != nil; n = n.PrevSibling {
+			if n.Type == html.TextNode || n.Type == html.CommentNode {
+				continue
+			}
+			return s1.Match(n)
+		}
+		return false
+	}
+
+	// Walk backwards looking for an element that matches s1.
+	for c := n.PrevSibling; c != nil; c = c.PrevSibling {
+		if s1.Match(c) {
+			return true
+		}
+	}
+
+	return false
+}
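The adjacent (`+`) and general (`~`) sibling combinators above differ only in how far back they look. A stdlib-only sketch with a hypothetical `node` struct standing in for `*html.Node` (it keeps just the sibling link and omits the real code's skipping of text and comment nodes):

```go
package main

import "fmt"

// node is a simplified stand-in for *html.Node, keeping only a tag
// name and a link to the previous sibling.
type node struct {
	tag  string
	prev *node
}

// siblingMatch mirrors the '+' (adjacent=true) and '~' (adjacent=false)
// combinators: n must be preceded by a sibling tagged s1, either
// immediately before it or anywhere earlier.
func siblingMatch(s1 string, adjacent bool, n *node) bool {
	if adjacent {
		if n.prev == nil {
			return false
		}
		return n.prev.tag == s1
	}
	for c := n.prev; c != nil; c = c.prev {
		if c.tag == s1 {
			return true
		}
	}
	return false
}

func main() {
	h1 := &node{tag: "h1"}
	p1 := &node{tag: "p", prev: h1}
	p2 := &node{tag: "p", prev: p1}
	fmt.Println(siblingMatch("h1", true, p1))  // h1 + p: true
	fmt.Println(siblingMatch("h1", true, p2))  // not immediately after h1: false
	fmt.Println(siblingMatch("h1", false, p2)) // h1 ~ p: true
}
```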
+
+func (s combinedSelector) Specificity() Specificity {
+	spec := s.first.Specificity()
+	if s.second != nil {
+		spec = spec.Add(s.second.Specificity())
+	}
+	return spec
+}
+
+// On a combinedSelector, a pseudo-element only makes sense on the last
+// selector, although the others still increase specificity.
+func (c combinedSelector) PseudoElement() string {
+	if c.second == nil {
+		return ""
+	}
+	return c.second.PseudoElement()
+}
+
+// A SelectorGroup is a list of selectors, which matches if any of the
+// individual selectors matches.
+type SelectorGroup []Sel
+
+// Match returns true if the node matches one of the single selectors.
+func (s SelectorGroup) Match(n *html.Node) bool {
+	for _, sel := range s {
+		if sel.Match(n) {
+			return true
+		}
+	}
+	return false
+}

vendor/github.com/andybalholm/cascadia/serialize.go 🔗

@@ -0,0 +1,176 @@
+package cascadia
+
+import (
+	"fmt"
+	"strconv"
+	"strings"
+)
+
+// implements the reverse operation Sel -> string
+
+var specialCharReplacer *strings.Replacer
+
+func init() {
+	var pairs []string
+	for _, s := range ",!\"#$%&'()*+ -./:;<=>?@[\\]^`{|}~" {
+		pairs = append(pairs, string(s), "\\"+string(s))
+	}
+	specialCharReplacer = strings.NewReplacer(pairs...)
+}
+
+// escape special CSS characters
+func escape(s string) string { return specialCharReplacer.Replace(s) }
+
+func (c tagSelector) String() string {
+	return c.tag
+}
+
+func (c idSelector) String() string {
+	return "#" + escape(c.id)
+}
+
+func (c classSelector) String() string {
+	return "." + escape(c.class)
+}
+
+func (c attrSelector) String() string {
+	val := c.val
+	if c.operation == "#=" {
+		val = c.regexp.String()
+	} else if c.operation != "" {
+		val = fmt.Sprintf(`"%s"`, val)
+	}
+
+	ignoreCase := ""
+
+	if c.insensitive {
+		ignoreCase = " i"
+	}
+
+	return fmt.Sprintf(`[%s%s%s%s]`, c.key, c.operation, val, ignoreCase)
+}
+
+func (c relativePseudoClassSelector) String() string {
+	return fmt.Sprintf(":%s(%s)", c.name, c.match.String())
+}
+
+func (c containsPseudoClassSelector) String() string {
+	s := "contains"
+	if c.own {
+		s += "Own"
+	}
+	return fmt.Sprintf(`:%s("%s")`, s, c.value)
+}
+
+func (c regexpPseudoClassSelector) String() string {
+	s := "matches"
+	if c.own {
+		s += "Own"
+	}
+	return fmt.Sprintf(":%s(%s)", s, c.regexp.String())
+}
+
+func (c nthPseudoClassSelector) String() string {
+	if c.a == 0 && c.b == 1 { // special cases
+		s := ":first-"
+		if c.last {
+			s = ":last-"
+		}
+		if c.ofType {
+			s += "of-type"
+		} else {
+			s += "child"
+		}
+		return s
+	}
+	var name string
+	switch [2]bool{c.last, c.ofType} {
+	case [2]bool{true, true}:
+		name = "nth-last-of-type"
+	case [2]bool{true, false}:
+		name = "nth-last-child"
+	case [2]bool{false, true}:
+		name = "nth-of-type"
+	case [2]bool{false, false}:
+		name = "nth-child"
+	}
+	s := fmt.Sprintf("+%d", c.b)
+	if c.b < 0 { // avoid +-8 invalid syntax
+		s = strconv.Itoa(c.b)
+	}
+	return fmt.Sprintf(":%s(%dn%s)", name, c.a, s)
+}
+
+func (c onlyChildPseudoClassSelector) String() string {
+	if c.ofType {
+		return ":only-of-type"
+	}
+	return ":only-child"
+}
+
+func (c inputPseudoClassSelector) String() string {
+	return ":input"
+}
+
+func (c emptyElementPseudoClassSelector) String() string {
+	return ":empty"
+}
+
+func (c rootPseudoClassSelector) String() string {
+	return ":root"
+}
+
+func (c linkPseudoClassSelector) String() string {
+	return ":link"
+}
+
+func (c langPseudoClassSelector) String() string {
+	return fmt.Sprintf(":lang(%s)", c.lang)
+}
+
+func (c neverMatchSelector) String() string {
+	return c.value
+}
+
+func (c enabledPseudoClassSelector) String() string {
+	return ":enabled"
+}
+
+func (c disabledPseudoClassSelector) String() string {
+	return ":disabled"
+}
+
+func (c checkedPseudoClassSelector) String() string {
+	return ":checked"
+}
+
+func (c compoundSelector) String() string {
+	if len(c.selectors) == 0 && c.pseudoElement == "" {
+		return "*"
+	}
+	chunks := make([]string, len(c.selectors))
+	for i, sel := range c.selectors {
+		chunks[i] = sel.String()
+	}
+	s := strings.Join(chunks, "")
+	if c.pseudoElement != "" {
+		s += "::" + c.pseudoElement
+	}
+	return s
+}
+
+func (c combinedSelector) String() string {
+	start := c.first.String()
+	if c.second != nil {
+		start += fmt.Sprintf(" %s %s", string(c.combinator), c.second.String())
+	}
+	return start
+}
+
+func (c SelectorGroup) String() string {
+	ck := make([]string, len(c))
+	for i, s := range c {
+		ck[i] = s.String()
+	}
+	return strings.Join(ck, ", ")
+}

vendor/github.com/andybalholm/cascadia/specificity.go 🔗

@@ -0,0 +1,26 @@
+package cascadia
+
+// Specificity is the CSS specificity as defined in
+// https://www.w3.org/TR/selectors/#specificity-rules
+// with the convention Specificity = [A,B,C].
+type Specificity [3]int
+
+// Less returns `true` if s < other (strictly), false otherwise.
+func (s Specificity) Less(other Specificity) bool {
+	for i := range s {
+		if s[i] < other[i] {
+			return true
+		}
+		if s[i] > other[i] {
+			return false
+		}
+	}
+	return false
+}
+
+func (s Specificity) Add(other Specificity) Specificity {
+	for i, sp := range other {
+		s[i] += sp
+	}
+	return s
+}
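`Less` above is a lexicographic comparison: the first differing component of the `[A,B,C]` triple decides, so a single ID (A) outweighs any number of classes (B), which outweigh any number of type selectors (C). A self-contained sketch of the same comparison (the example selectors in the comments are illustrative, not taken from this package):

```go
package main

import "fmt"

// Specificity mirrors cascadia's [A,B,C] convention:
// IDs, then classes/attributes/pseudo-classes, then type selectors.
type Specificity [3]int

// Less compares lexicographically: the first differing component wins.
func (s Specificity) Less(other Specificity) bool {
	for i := range s {
		if s[i] < other[i] {
			return true
		}
		if s[i] > other[i] {
			return false
		}
	}
	return false
}

func main() {
	id := Specificity{1, 0, 0}      // e.g. #nav
	classes := Specificity{0, 2, 1} // e.g. a.btn.active
	fmt.Println(classes.Less(id))   // true: one ID beats any number of classes
	fmt.Println(id.Less(id))        // false: equal specificities are not Less
}
```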

vendor/github.com/anthropics/anthropic-sdk-go/.stats.yml 🔗

@@ -0,0 +1,4 @@
+configured_endpoints: 26
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/anthropic%2Fanthropic-a7b84017aa1126ad99443296dcd81ab2b53f1c346014b92096226cf993f30502.yml
+openapi_spec_hash: 58d4e72c7906bd8a680ab17b99de6215
+config_hash: b08362db009c073fa7b1c154969cb200

vendor/github.com/anthropics/anthropic-sdk-go/CHANGELOG.md 🔗

@@ -0,0 +1,240 @@
+# Changelog
+
+## 1.4.0 (2025-06-04)
+
+Full Changelog: [v1.3.0...v1.4.0](https://github.com/anthropics/anthropic-sdk-go/compare/v1.3.0...v1.4.0)
+
+### Features
+
+* **client:** allow overriding unions ([079149c](https://github.com/anthropics/anthropic-sdk-go/commit/079149c673981891ecd35906cd610f8d4a4b69a9))
+
+
+### Chores
+
+* **internal:** codegen related update ([853ba1f](https://github.com/anthropics/anthropic-sdk-go/commit/853ba1f46d2b6c476ee04d9c061368e708cc9e18))
+
+## 1.3.0 (2025-06-03)
+
+Full Changelog: [v1.2.2...v1.3.0](https://github.com/anthropics/anthropic-sdk-go/compare/v1.2.2...v1.3.0)
+
+### Features
+
+* **client:** add support for new text_editor_20250429 tool ([b33c543](https://github.com/anthropics/anthropic-sdk-go/commit/b33c543f7dc3b74c3322b6f84c189b81f67b6154))
+
+## 1.2.2 (2025-06-02)
+
+Full Changelog: [v1.2.1...v1.2.2](https://github.com/anthropics/anthropic-sdk-go/compare/v1.2.1...v1.2.2)
+
+### Bug Fixes
+
+* **client:** access subunions properly ([f29c162](https://github.com/anthropics/anthropic-sdk-go/commit/f29c1627fe94c6371937659d02f1af7b55583d60))
+* fix error ([bbc002c](https://github.com/anthropics/anthropic-sdk-go/commit/bbc002ccbbf9df681201d9b8ba806c37338c0fd3))
+
+
+### Chores
+
+* make go mod tidy continue on error ([ac184b4](https://github.com/anthropics/anthropic-sdk-go/commit/ac184b4f7afee4015d133a05ce819a8dac35be52))
+
+## 1.2.1 (2025-05-23)
+
+Full Changelog: [v1.2.0...v1.2.1](https://github.com/anthropics/anthropic-sdk-go/compare/v1.2.0...v1.2.1)
+
+### Chores
+
+* **examples:** clean up MCP example ([66f406a](https://github.com/anthropics/anthropic-sdk-go/commit/66f406a04b9756281e7716e9b635c3e3f29397fb))
+* **internal:** fix release workflows ([6a0ff4c](https://github.com/anthropics/anthropic-sdk-go/commit/6a0ff4cad1c1b4ab6435df80fccd945d6ce07be7))
+
+## 1.2.0 (2025-05-22)
+
+Full Changelog: [v1.1.0...v1.2.0](https://github.com/anthropics/anthropic-sdk-go/compare/v1.1.0...v1.2.0)
+
+### Features
+
+* **api:** add claude 4 models, files API, code execution tool, MCP connector and more ([b2e5cbf](https://github.com/anthropics/anthropic-sdk-go/commit/b2e5cbffd9d05228c2c2569974a6fa260c3f46be))
+
+
+### Bug Fixes
+
+* **tests:** fix model testing for anthropic.CalculateNonStreamingTimeout ([9956842](https://github.com/anthropics/anthropic-sdk-go/commit/995684240b77284a4590b1b9ae34a85e525d1e52))
+
+## 1.1.0 (2025-05-22)
+
+Full Changelog: [v1.0.0...v1.1.0](https://github.com/anthropics/anthropic-sdk-go/compare/v1.0.0...v1.1.0)
+
+### Features
+
+* **api:** add claude 4 models, files API, code execution tool, MCP connector and more ([2740935](https://github.com/anthropics/anthropic-sdk-go/commit/2740935f444de2d46103a7c777ea75e7e214872e))
+
+
+### Bug Fixes
+
+* **tests:** fix model testing for anthropic.CalculateNonStreamingTimeout ([f1aa0a1](https://github.com/anthropics/anthropic-sdk-go/commit/f1aa0a1a32d1ca87b87a7d688daab31f2a36071c))
+
+## 1.0.0 (2025-05-21)
+
+Full Changelog: [v0.2.0-beta.4...v1.0.0](https://github.com/anthropics/anthropic-sdk-go/compare/v0.2.0-beta.4...v1.0.0)
+
+### ⚠ BREAKING CHANGES
+
+* **client:** rename variant constructors
+* **client:** remove is present
+
+### Features
+
+* **client:** improve variant constructor names ([227c96b](https://github.com/anthropics/anthropic-sdk-go/commit/227c96bf50e14827e112c31ad0f512354477a409))
+* **client:** rename variant constructors ([078fad6](https://github.com/anthropics/anthropic-sdk-go/commit/078fad6558642a20b5fb3e82186b03c2efc0ab47))
+
+
+### Bug Fixes
+
+* **client:** correctly set stream key for multipart ([f17bfe0](https://github.com/anthropics/anthropic-sdk-go/commit/f17bfe0aac0fb8228d9cad87ccca0deb7449a824))
+* **client:** don't panic on marshal with extra null field ([d67a151](https://github.com/anthropics/anthropic-sdk-go/commit/d67a151a6ef0870918c5eaf84ce996cb5b1860b7))
+* **client:** elide nil citations array ([09cadec](https://github.com/anthropics/anthropic-sdk-go/commit/09cadec3c076d74bda74e67c345a1aee1fdb7ce4))
+* **client:** fix bug with empty tool inputs and citation deltas in Accumulate ([f4ac348](https://github.com/anthropics/anthropic-sdk-go/commit/f4ac348658fb83485d6555c63f90920599c98d99))
+* **client:** increase max stream buffer size ([18a6ccf](https://github.com/anthropics/anthropic-sdk-go/commit/18a6ccf1961922a342467800c737fa000bdd254e))
+* **client:** remove is present ([385d99f](https://github.com/anthropics/anthropic-sdk-go/commit/385d99fa225c755d9af737425ad2ef4d66ad5ba9))
+* **client:** resolve naming collisions in union variants ([2cb6904](https://github.com/anthropics/anthropic-sdk-go/commit/2cb69048a6b583954934bc2926186564b5c74bf6))
+* **client:** use scanner for streaming ([82a2840](https://github.com/anthropics/anthropic-sdk-go/commit/82a2840ce0f8aa8bd63f7697c566f437c06bb132))
+
+
+### Chores
+
+* **examples:** remove fmt ([872e055](https://github.com/anthropics/anthropic-sdk-go/commit/872e0550171942c405786c7eedb23b8270f6e8de))
+* formatting ([1ce0ee8](https://github.com/anthropics/anthropic-sdk-go/commit/1ce0ee863c5df658909d81b138dc1ebedb78844a))
+* improve devcontainer setup ([9021490](https://github.com/anthropics/anthropic-sdk-go/commit/90214901d77ba57901e77d6ea31aafb06c120f2c))
+
+
+### Documentation
+
+* upgrade security note to warning ([#346](https://github.com/anthropics/anthropic-sdk-go/issues/346)) ([83e70de](https://github.com/anthropics/anthropic-sdk-go/commit/83e70decfb5da14a1ecf78402302f7f0600515ea))
+
+## 0.2.0-beta.4 (2025-05-18)
+
+Full Changelog: [v0.2.0-beta.3...v0.2.0-beta.4](https://github.com/anthropics/anthropic-sdk-go/compare/v0.2.0-beta.3...v0.2.0-beta.4)
+
+### ⚠ BREAKING CHANGES
+
+* **client:** clearer array variant names
+* **client:** rename resp package
+* **client:** improve core function names
+* **client:** improve union variant names
+* **client:** improve param subunions & deduplicate types
+
+### Features
+
+* **api:** adds web search capabilities to the Claude API ([9ca314a](https://github.com/anthropics/anthropic-sdk-go/commit/9ca314a74998f24b5f17427698a8fa709b103581))
+* **api:** extract ContentBlockDelta events into their own schemas ([#165](https://github.com/anthropics/anthropic-sdk-go/issues/165)) ([6d75486](https://github.com/anthropics/anthropic-sdk-go/commit/6d75486e9f524f5511f787181106a679e3414498))
+* **api:** manual updates ([d405f97](https://github.com/anthropics/anthropic-sdk-go/commit/d405f97373cd7ae863a7400441d1d79c85f0ddd5))
+* **api:** manual updates ([e1326cd](https://github.com/anthropics/anthropic-sdk-go/commit/e1326cdd756beb871e939af8be8b45fd3d5fdc9a))
+* **api:** manual updates ([a92a382](https://github.com/anthropics/anthropic-sdk-go/commit/a92a382976d595dd32208109b480bf26dbbdc00f))
+* **api:** manual updates ([59bd507](https://github.com/anthropics/anthropic-sdk-go/commit/59bd5071282403373ddca9333fafc9efc90a16d6))
+* **client:** add dynamic streaming buffer to handle large lines ([510e099](https://github.com/anthropics/anthropic-sdk-go/commit/510e099e19fa71411502650eb387f1fee79f5d0d))
+* **client:** add escape hatch to omit required param fields ([#175](https://github.com/anthropics/anthropic-sdk-go/issues/175)) ([6df8184](https://github.com/anthropics/anthropic-sdk-go/commit/6df8184947d6568260fa0bc22a89a27d10eaacd0))
+* **client:** add helper method to generate constant structs ([015e8bc](https://github.com/anthropics/anthropic-sdk-go/commit/015e8bc7f74582fb5a3d69021ad3d61e96d65b36))
+* **client:** add support for endpoint-specific base URLs in python ([44645c9](https://github.com/anthropics/anthropic-sdk-go/commit/44645c9fd0b883db4deeb88bfee6922ec9845ace))
+* **client:** add support for reading base URL from environment variable ([835e632](https://github.com/anthropics/anthropic-sdk-go/commit/835e6326b658cd40590cd8bbed0932ab219e6d2d))
+* **client:** clearer array variant names ([1fdea8f](https://github.com/anthropics/anthropic-sdk-go/commit/1fdea8f9fedc470a917d12607b3b7ebe3f0f6439))
+* **client:** experimental support for unmarshalling into param structs ([94c8fa4](https://github.com/anthropics/anthropic-sdk-go/commit/94c8fa41ecb4792cb7da043bde2c0f5ddafe84b0))
+* **client:** improve param subunions & deduplicate types ([8daacf6](https://github.com/anthropics/anthropic-sdk-go/commit/8daacf6866e8bc706ec29e17046e53d4ed100364))
+* **client:** make response union's AsAny method type safe ([#174](https://github.com/anthropics/anthropic-sdk-go/issues/174)) ([f410ed0](https://github.com/anthropics/anthropic-sdk-go/commit/f410ed025ee57a05b0cec8d72a1cb43d30e564a6))
+* **client:** rename resp package ([8e7d278](https://github.com/anthropics/anthropic-sdk-go/commit/8e7d2788e9be7b954d07de731e7b27ad2e2a9e8e))
+* **client:** support custom http clients ([#177](https://github.com/anthropics/anthropic-sdk-go/issues/177)) ([ff7a793](https://github.com/anthropics/anthropic-sdk-go/commit/ff7a793b43b99dc148b30e408edfdc19e19c28b2))
+* **client:** support more time formats ([af2df86](https://github.com/anthropics/anthropic-sdk-go/commit/af2df86f24acbe6b9cdcc4e055c3ff754303e0ef))
+* **client:** support param struct overrides ([#167](https://github.com/anthropics/anthropic-sdk-go/issues/167)) ([e0d5eb0](https://github.com/anthropics/anthropic-sdk-go/commit/e0d5eb098c6441e99d53c6d997c7bcca460a238b))
+* **client:** support unions in query and forms ([#171](https://github.com/anthropics/anthropic-sdk-go/issues/171)) ([6bf1ce3](https://github.com/anthropics/anthropic-sdk-go/commit/6bf1ce36f0155dba20afd4b63bf96c4527e2baa5))
+
+
+### Bug Fixes
+
+* **client:** clean up reader resources ([2234386](https://github.com/anthropics/anthropic-sdk-go/commit/223438673ade3be3435bebf7063fd34ddf3dfb8e))
+* **client:** correctly update body in WithJSONSet ([f531c77](https://github.com/anthropics/anthropic-sdk-go/commit/f531c77c15859b1f2e61d654f4d9956cdfafa082))
+* **client:** deduplicate stop reason type ([#155](https://github.com/anthropics/anthropic-sdk-go/issues/155)) ([0f985ad](https://github.com/anthropics/anthropic-sdk-go/commit/0f985ad54ef47849d7d478c84d34c7350a4349b5))
+* **client:** fix bug where types occasionally wouldn't generate ([8988713](https://github.com/anthropics/anthropic-sdk-go/commit/8988713904ce73d3c82de635d98da48b98532366))
+* **client:** improve core function names ([0a2777f](https://github.com/anthropics/anthropic-sdk-go/commit/0a2777fd597a5eb74bcf6b1da48a9ff1988059de))
+* **client:** improve union variant names ([92718fd](https://github.com/anthropics/anthropic-sdk-go/commit/92718fd4058fd8535fd888a56f83fc2d3ec505ef))
+* **client:** include path for type names in example code ([5bbe836](https://github.com/anthropics/anthropic-sdk-go/commit/5bbe83639793878aa0ea52e8ff06b1d9ee72ed7c))
+* **client:** resolve issue with optional multipart files ([e2af94c](https://github.com/anthropics/anthropic-sdk-go/commit/e2af94c840a8f9da566c781fc99c57084e490ec1))
+* **client:** return error on bad custom url instead of panic ([#169](https://github.com/anthropics/anthropic-sdk-go/issues/169)) ([b086b55](https://github.com/anthropics/anthropic-sdk-go/commit/b086b55f4886474282d4e2ea9ee3495cbf25ec6b))
+* **client:** support multipart encoding array formats ([#170](https://github.com/anthropics/anthropic-sdk-go/issues/170)) ([611a25a](https://github.com/anthropics/anthropic-sdk-go/commit/611a25a427fc5303bb311fa4a2fec836d55b0933))
+* **client:** time format encoding fix ([d589846](https://github.com/anthropics/anthropic-sdk-go/commit/d589846c1a08ad56d639d60736e2b8e190f7f2b1))
+* **client:** unmarshal responses properly ([8344a1c](https://github.com/anthropics/anthropic-sdk-go/commit/8344a1c58dd497abbed8e9e689efca544256eaa8))
+* **client:** unmarshal stream events into fresh memory ([#168](https://github.com/anthropics/anthropic-sdk-go/issues/168)) ([9cc1257](https://github.com/anthropics/anthropic-sdk-go/commit/9cc1257a67340e446ac415ec9ddddded24bb1f9a))
+* handle empty bodies in WithJSONSet ([0bad01e](https://github.com/anthropics/anthropic-sdk-go/commit/0bad01e40a2a4b5b376ba27513d7e16d604459d9))
+* **internal:** fix type changes ([d8ef353](https://github.com/anthropics/anthropic-sdk-go/commit/d8ef3531840ac1dc0541d3b1cf0015d1db29e2b6))
+* **pagination:** handle errors when applying options ([2381476](https://github.com/anthropics/anthropic-sdk-go/commit/2381476e64991e781b696890c98f78001e256b3b))
+
+
+### Chores
+
+* **ci:** add timeout thresholds for CI jobs ([335e9f0](https://github.com/anthropics/anthropic-sdk-go/commit/335e9f0af2275f1af21aa7062afb50bee81771b6))
+* **ci:** only use depot for staging repos ([6818451](https://github.com/anthropics/anthropic-sdk-go/commit/68184515143aa1e4473208f794fa593668c94df4))
+* **ci:** run on more branches and use depot runners ([b0ca09d](https://github.com/anthropics/anthropic-sdk-go/commit/b0ca09d1d39a8de390c47be804847a7647ca3c67))
+* **client:** use new opt conversion ([#184](https://github.com/anthropics/anthropic-sdk-go/issues/184)) ([58dc74f](https://github.com/anthropics/anthropic-sdk-go/commit/58dc74f951aa6a0eb4355a0213c8695bfa7cb0ed))
+* **docs:** doc improvements ([#173](https://github.com/anthropics/anthropic-sdk-go/issues/173)) ([aebe8f6](https://github.com/anthropics/anthropic-sdk-go/commit/aebe8f68afa3de4460cda6e4032c7859e13cda81))
+* **docs:** document pre-request options ([8f5eb18](https://github.com/anthropics/anthropic-sdk-go/commit/8f5eb188146bd46ba990558a7e2348c8697d6405))
+* **docs:** readme improvements ([#176](https://github.com/anthropics/anthropic-sdk-go/issues/176)) ([b5769ff](https://github.com/anthropics/anthropic-sdk-go/commit/b5769ffcf5ef5345659ae848b875227718ea2425))
+* **docs:** update file uploads in README ([#166](https://github.com/anthropics/anthropic-sdk-go/issues/166)) ([a4a36bf](https://github.com/anthropics/anthropic-sdk-go/commit/a4a36bfbefa5a166774c23d8c5428fb55c1b4abe))
+* **docs:** update respjson package name ([28910b5](https://github.com/anthropics/anthropic-sdk-go/commit/28910b57821cab670561a25bee413375187ed747))
+* **internal:** expand CI branch coverage ([#178](https://github.com/anthropics/anthropic-sdk-go/issues/178)) ([900e2df](https://github.com/anthropics/anthropic-sdk-go/commit/900e2df3eb2d3e1309d85fdcf807998f701bea8a))
+* **internal:** reduce CI branch coverage ([343f6c6](https://github.com/anthropics/anthropic-sdk-go/commit/343f6c6c295dc3d39f65aae481bc10969dbb5694))
+* **internal:** remove CI condition ([#160](https://github.com/anthropics/anthropic-sdk-go/issues/160)) ([adfa1e2](https://github.com/anthropics/anthropic-sdk-go/commit/adfa1e2e349842aa88262af70b209d1a59dbb419))
+* **internal:** update config ([#157](https://github.com/anthropics/anthropic-sdk-go/issues/157)) ([46f0194](https://github.com/anthropics/anthropic-sdk-go/commit/46f019497bd9533390c4b9f0ebee6863263ce009))
+* **readme:** improve formatting ([66be9bb](https://github.com/anthropics/anthropic-sdk-go/commit/66be9bbb17ccc9d878e79b3c39605da3e2846297))
+
+
+### Documentation
+
+* remove or fix invalid readme examples ([142576c](https://github.com/anthropics/anthropic-sdk-go/commit/142576c73b4dab5b84a2bf2481506ad642ad31cc))
+* update documentation links to be more uniform ([457122b](https://github.com/anthropics/anthropic-sdk-go/commit/457122b79646dc17fa8752c98dbf4991edffc548))
+
+## 0.2.0-beta.3 (2025-03-27)
+
+Full Changelog: [v0.2.0-beta.2...v0.2.0-beta.3](https://github.com/anthropics/anthropic-sdk-go/compare/v0.2.0-beta.2...v0.2.0-beta.3)
+
+### Chores
+
+* add hash of OpenAPI spec/config inputs to .stats.yml ([#154](https://github.com/anthropics/anthropic-sdk-go/issues/154)) ([76b91b5](https://github.com/anthropics/anthropic-sdk-go/commit/76b91b56fbf42fe8982e7b861885db179b1bdcc5))
+* fix typos ([#152](https://github.com/anthropics/anthropic-sdk-go/issues/152)) ([1cf6a6a](https://github.com/anthropics/anthropic-sdk-go/commit/1cf6a6ae25231b88d2eedbe0758f1281cbe439d8))
+
+## 0.2.0-beta.2 (2025-03-25)
+
+Full Changelog: [v0.2.0-beta.1...v0.2.0-beta.2](https://github.com/anthropics/anthropic-sdk-go/compare/v0.2.0-beta.1...v0.2.0-beta.2)
+
+### Bug Fixes
+
+* **client:** use raw json for tool input ([1013c2b](https://github.com/anthropics/anthropic-sdk-go/commit/1013c2bdb87a27d2420dbe0dcadc57d1fe3589f2))
+
+
+### Chores
+
+* add request options to client tests ([#150](https://github.com/anthropics/anthropic-sdk-go/issues/150)) ([7c70ae1](https://github.com/anthropics/anthropic-sdk-go/commit/7c70ae134a345aff775694abcad255c76e7dfcba))
+
+## 0.2.0-beta.1 (2025-03-25)
+
+Full Changelog: [v0.2.0-alpha.13...v0.2.0-beta.1](https://github.com/anthropics/anthropic-sdk-go/compare/v0.2.0-alpha.13...v0.2.0-beta.1)
+
+### ⚠ BREAKING CHANGES
+
+* **api:** migrate to v2
+
+### Features
+
+* add SKIP_BREW env var to ./scripts/bootstrap ([#137](https://github.com/anthropics/anthropic-sdk-go/issues/137)) ([4057111](https://github.com/anthropics/anthropic-sdk-go/commit/40571110129d5c66f171ead36f5d725663262bc4))
+* **api:** migrate to v2 ([fcd95eb](https://github.com/anthropics/anthropic-sdk-go/commit/fcd95eb8f45d0ffedcd1e47cd0879d7e66783540))
+* **client:** accept RFC6838 JSON content types ([#139](https://github.com/anthropics/anthropic-sdk-go/issues/139)) ([78d17cd](https://github.com/anthropics/anthropic-sdk-go/commit/78d17cd4122893ba62b1e14714a1da004c128344))
+* **client:** allow custom baseurls without trailing slash ([#135](https://github.com/anthropics/anthropic-sdk-go/issues/135)) ([9b30fce](https://github.com/anthropics/anthropic-sdk-go/commit/9b30fce0a71a35910315e02cd3a2f2afc1fd7962))
+* **client:** improve default client options support ([07f82a6](https://github.com/anthropics/anthropic-sdk-go/commit/07f82a6f9e07bf9aadf4ca150287887cb9e75bc4))
+* **client:** improve default client options support ([#142](https://github.com/anthropics/anthropic-sdk-go/issues/142)) ([f261355](https://github.com/anthropics/anthropic-sdk-go/commit/f261355e497748bcb112eecb67a95d7c7c5075c0))
+* **client:** support v2 ([#147](https://github.com/anthropics/anthropic-sdk-go/issues/147)) ([6b3af98](https://github.com/anthropics/anthropic-sdk-go/commit/6b3af98e02a9b6126bd715d43f83b8adf8b861e8))
+
+
+### Chores
+
+* **docs:** clarify breaking changes ([#146](https://github.com/anthropics/anthropic-sdk-go/issues/146)) ([a2586b4](https://github.com/anthropics/anthropic-sdk-go/commit/a2586b4beb2b9a0ad252e90223fbb471e6c25bc1))
+* **internal:** codegen metadata ([ce0eca2](https://github.com/anthropics/anthropic-sdk-go/commit/ce0eca25c6a83fca9ececccb41faf04e74566e2d))
+* **internal:** remove extra empty newlines ([#143](https://github.com/anthropics/anthropic-sdk-go/issues/143)) ([2ed1584](https://github.com/anthropics/anthropic-sdk-go/commit/2ed1584c7d80fddf2ef5143eabbd33b8f1a4603d))
+
+
+### Refactors
+
+* tidy up dependencies ([#140](https://github.com/anthropics/anthropic-sdk-go/issues/140)) ([289cc1b](https://github.com/anthropics/anthropic-sdk-go/commit/289cc1b007094421305dfc4ef01ae68bb2d50ee5))

vendor/github.com/anthropics/anthropic-sdk-go/CONTRIBUTING.md 🔗

@@ -0,0 +1,66 @@
+## Setting up the environment
+
+To set up the repository, run:
+
+```sh
+$ ./scripts/bootstrap
+$ ./scripts/lint
+```
+
+This will install all the required dependencies and build the SDK.
+
+You can also [install go 1.18+ manually](https://go.dev/doc/install).
+
+## Modifying/Adding code
+
+Most of the SDK is generated code. Modifications to code will be persisted between generations, but may
+result in merge conflicts between manual patches and changes from the generator. The generator will never
+modify the contents of the `lib/` and `examples/` directories.
+
+## Adding and running examples
+
+Files in the `examples/` directory are never modified by the generator and can be freely edited or added to.
+
+```go
+// Add an example to examples/<your-example>/main.go
+
+package main
+
+func main() {
+  // ...
+}
+```
+
+```sh
+$ go run ./examples/<your-example>
+```
+
+## Using the repository from source
+
+To use a local version of this library from source in another project, edit the `go.mod` with a replace
+directive. This can be done through the CLI with the following:
+
+```sh
+$ go mod edit -replace github.com/anthropics/anthropic-sdk-go=/path/to/anthropic-sdk-go
+```
+
+## Running tests
+
+Most tests require a [mock server](https://github.com/stoplightio/prism) running against the OpenAPI spec.
+
+```sh
+# you will need npm installed
+$ npx prism mock path/to/your/openapi.yml
+```
+
+```sh
+$ ./scripts/test
+```
+
+## Formatting
+
+This library uses the standard gofmt code formatter:
+
+```sh
+$ ./scripts/format
+```

vendor/github.com/anthropics/anthropic-sdk-go/LICENSE 🔗

@@ -0,0 +1,8 @@
+Copyright 2023 Anthropic, PBC.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+

vendor/github.com/anthropics/anthropic-sdk-go/README.md 🔗

@@ -0,0 +1,876 @@
+# Anthropic Go API Library
+
+<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go"><img src="https://pkg.go.dev/badge/github.com/anthropics/anthropic-sdk-go.svg" alt="Go Reference"></a>
+
+The Anthropic Go library provides convenient access to the [Anthropic REST API](https://docs.anthropic.com/claude/reference/)
+from applications written in Go.
+
+## Installation
+
+<!-- x-release-please-start-version -->
+
+```go
+import (
+	"github.com/anthropics/anthropic-sdk-go" // imported as anthropic
+)
+```
+
+<!-- x-release-please-end -->
+
+Or to pin the version:
+
+<!-- x-release-please-start-version -->
+
+```sh
+go get -u 'github.com/anthropics/anthropic-sdk-go@v1.4.0'
+```
+
+<!-- x-release-please-end -->
+
+## Requirements
+
+This library requires Go 1.18+.
+
+## Usage
+
+The full API of this library can be found in [api.md](api.md).
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/anthropics/anthropic-sdk-go"
+	"github.com/anthropics/anthropic-sdk-go/option"
+)
+
+func main() {
+	client := anthropic.NewClient(
+		option.WithAPIKey("my-anthropic-api-key"), // defaults to os.LookupEnv("ANTHROPIC_API_KEY")
+	)
+	message, err := client.Messages.New(context.TODO(), anthropic.MessageNewParams{
+		MaxTokens: 1024,
+		Messages: []anthropic.MessageParam{{
+			Content: []anthropic.ContentBlockParamUnion{{
+				OfRequestTextBlock: &anthropic.TextBlockParam{Text: "What is a quaternion?"},
+			}},
+			Role: anthropic.MessageParamRoleUser,
+		}},
+		Model: anthropic.ModelClaude3_7SonnetLatest,
+	})
+	if err != nil {
+		panic(err.Error())
+	}
+	fmt.Printf("%+v\n", message.Content)
+}
+
+```
+
+<details>
+<summary>Conversations</summary>
+
+```go
+messages := []anthropic.MessageParam{
+    anthropic.NewUserMessage(anthropic.NewTextBlock("What is my first name?")),
+}
+
+message, err := client.Messages.New(context.TODO(), anthropic.MessageNewParams{
+    Model:     anthropic.ModelClaude3_7SonnetLatest,
+    Messages:  messages,
+    MaxTokens: 1024,
+})
+if err != nil {
+    panic(err)
+}
+
+fmt.Printf("%+v\n", message.Content)
+
+messages = append(messages, message.ToParam())
+messages = append(messages, anthropic.NewUserMessage(
+    anthropic.NewTextBlock("My full name is John Doe"),
+))
+
+message, err = client.Messages.New(context.TODO(), anthropic.MessageNewParams{
+    Model:     anthropic.ModelClaude3_7SonnetLatest,
+    Messages:  messages,
+    MaxTokens: 1024,
+})
+if err != nil {
+    panic(err)
+}
+
+fmt.Printf("%+v\n", message.Content)
+```
+
+</details>
+
+<details>
+<summary>System prompts</summary>
+
+```go
+message, err := client.Messages.New(context.TODO(), anthropic.MessageNewParams{
+    Model:     anthropic.ModelClaude3_7SonnetLatest,
+    MaxTokens: 1024,
+    System: []anthropic.TextBlockParam{
+        {Text: "Be very serious at all times."},
+    },
+    Messages: messages,
+})
+```
+
+</details>
+
+<details>
+<summary>Streaming</summary>
+
+```go
+content := "What is a quaternion?"
+
+stream := client.Messages.NewStreaming(context.TODO(), anthropic.MessageNewParams{
+    Model:     anthropic.ModelClaude3_7SonnetLatest,
+    MaxTokens: 1024,
+    Messages: []anthropic.MessageParam{
+        anthropic.NewUserMessage(anthropic.NewTextBlock(content)),
+    },
+})
+
+message := anthropic.Message{}
+for stream.Next() {
+    event := stream.Current()
+    err := message.Accumulate(event)
+    if err != nil {
+        panic(err)
+    }
+
+    switch eventVariant := event.AsAny().(type) {
+    case anthropic.ContentBlockDeltaEvent:
+        switch deltaVariant := eventVariant.Delta.AsAny().(type) {
+        case anthropic.TextDelta:
+            print(deltaVariant.Text)
+        }
+    }
+}
+
+if stream.Err() != nil {
+    panic(stream.Err())
+}
+```
+
+</details>
+
+<details>
+<summary>Tool calling</summary>
+
+```go
+package main
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+
+	"github.com/anthropics/anthropic-sdk-go"
+	"github.com/invopop/jsonschema"
+)
+
+func main() {
+	client := anthropic.NewClient()
+
+	content := "Where is San Francisco?"
+
+	println("[user]: " + content)
+
+	messages := []anthropic.MessageParam{
+		anthropic.NewUserMessage(anthropic.NewTextBlock(content)),
+	}
+
+	toolParams := []anthropic.ToolParam{
+		{
+			Name:        "get_coordinates",
+			Description: anthropic.String("Accepts a place as an address, then returns the latitude and longitude coordinates."),
+			InputSchema: GetCoordinatesInputSchema,
+		},
+	}
+	tools := make([]anthropic.ToolUnionParam, len(toolParams))
+	for i, toolParam := range toolParams {
+		tools[i] = anthropic.ToolUnionParam{OfTool: &toolParam}
+	}
+
+	for {
+		message, err := client.Messages.New(context.TODO(), anthropic.MessageNewParams{
+			Model:     anthropic.ModelClaude3_7SonnetLatest,
+			MaxTokens: 1024,
+			Messages:  messages,
+			Tools:     tools,
+		})
+
+		if err != nil {
+			panic(err)
+		}
+
+		print(color("[assistant]: "))
+		for _, block := range message.Content {
+			switch block := block.AsAny().(type) {
+			case anthropic.TextBlock:
+				println(block.Text)
+				println()
+			case anthropic.ToolUseBlock:
+				inputJSON, _ := json.Marshal(block.Input)
+				println(block.Name + ": " + string(inputJSON))
+				println()
+			}
+		}
+
+		messages = append(messages, message.ToParam())
+		toolResults := []anthropic.ContentBlockParamUnion{}
+
+		for _, block := range message.Content {
+			switch variant := block.AsAny().(type) {
+			case anthropic.ToolUseBlock:
+				print(color("[user (" + block.Name + ")]: "))
+
+				var response interface{}
+				switch block.Name {
+				case "get_coordinates":
+					var input struct {
+						Location string `json:"location"`
+					}
+
+					err := json.Unmarshal([]byte(variant.JSON.Input.Raw()), &input)
+					if err != nil {
+						panic(err)
+					}
+
+					response = GetCoordinates(input.Location)
+				}
+
+				b, err := json.Marshal(response)
+				if err != nil {
+					panic(err)
+				}
+
+				println(string(b))
+
+				toolResults = append(toolResults, anthropic.NewToolResultBlock(block.ID, string(b), false))
+			}
+
+		}
+		if len(toolResults) == 0 {
+			break
+		}
+		messages = append(messages, anthropic.NewUserMessage(toolResults...))
+	}
+}
+
+type GetCoordinatesInput struct {
+	Location string `json:"location" jsonschema_description:"The location to look up."`
+}
+
+var GetCoordinatesInputSchema = GenerateSchema[GetCoordinatesInput]()
+
+type GetCoordinateResponse struct {
+	Long float64 `json:"long"`
+	Lat  float64 `json:"lat"`
+}
+
+func GetCoordinates(location string) GetCoordinateResponse {
+	return GetCoordinateResponse{
+		Long: -122.4194,
+		Lat:  37.7749,
+	}
+}
+
+func GenerateSchema[T any]() anthropic.ToolInputSchemaParam {
+	reflector := jsonschema.Reflector{
+		AllowAdditionalProperties: false,
+		DoNotReference:            true,
+	}
+	var v T
+
+	schema := reflector.Reflect(v)
+
+	return anthropic.ToolInputSchemaParam{
+		Properties: schema.Properties,
+	}
+}
+
+func color(s string) string {
+	return fmt.Sprintf("\033[1;%sm%s\033[0m", "33", s)
+}
+```
+
+</details>
+
+### Request fields
+
+The anthropic library uses the [`omitzero`](https://tip.golang.org/doc/go1.24#encodingjsonpkgencodingjson)
+semantics from the Go 1.24+ `encoding/json` release for request fields.
+
+Required primitive fields (`int64`, `string`, etc.) feature the tag <code>\`json:"...,required"\`</code>. These
+fields are always serialized, even when set to their zero values.
+
+Optional primitive types are wrapped in a `param.Opt[T]`. These fields can be set with the provided constructors, `anthropic.String(string)`, `anthropic.Int(int64)`, etc.
+
+Any `param.Opt[T]`, map, slice, struct or string enum uses the
+tag <code>\`json:"...,omitzero"\`</code>. Its zero value is considered omitted.
+
+The `param.IsOmitted(any)` function can confirm the presence of any `omitzero` field.
+
+```go
+p := anthropic.ExampleParams{
+	ID:   "id_xxx",                // required property
+	Name: anthropic.String("..."), // optional property
+
+	Point: anthropic.Point{
+		X: 0,                // required field will serialize as 0
+		Y: anthropic.Int(1), // optional field will serialize as 1
+		// ... omitted non-required fields will not be serialized
+	},
+
+	Origin: anthropic.Origin{}, // the zero value of [Origin] is considered omitted
+}
+```
+
+To send `null` instead of a `param.Opt[T]`, use `param.Null[T]()`.
+To send `null` instead of a struct `T`, use `param.NullStruct[T]()`.
+
+```go
+p.Name = param.Null[string]()       // 'null' instead of string
+p.Point = param.NullStruct[Point]() // 'null' instead of struct
+
+param.IsNull(p.Name)  // true
+param.IsNull(p.Point) // true
+```
+
+Request structs contain a `.SetExtraFields(map[string]any)` method which can send non-conforming
+fields in the request body. Extra fields overwrite any struct fields with a matching
+key.
+
+> [!WARNING]
+> For security reasons, only use `SetExtraFields` with trusted data.
+
+To send a custom value instead of a struct, use `param.Override[T](value)`.
+
+```go
+// In cases where the API specifies a given type,
+// but you want to send something else, use [SetExtraFields]:
+p.SetExtraFields(map[string]any{
+	"x": 0.01, // send "x" as a float instead of int
+})
+
+// Send a number instead of an object
+custom := param.Override[anthropic.FooParams](12)
+```
+
+### Request unions
+
+Unions are represented as a struct with fields prefixed by "Of" for each of its variants;
+only one field may be non-zero at a time, and that non-zero field is the one serialized.
+
+Sub-properties of the union can be accessed via methods on the union struct.
+These methods return a mutable pointer to the underlying data, if present.
+
+```go
+// Only one field can be non-zero, use param.IsOmitted() to check if a field is set
+type AnimalUnionParam struct {
+	OfCat *Cat `json:",omitzero,inline"`
+	OfDog *Dog `json:",omitzero,inline"`
+}
+
+animal := AnimalUnionParam{
+	OfCat: &Cat{
+		Name: "Whiskers",
+		Owner: PersonParam{
+			Address: AddressParam{Street: "3333 Coyote Hill Rd", ZipCode: 0},
+		},
+	},
+}
+
+// Mutating a field
+if address := animal.GetOwner().GetAddress(); address != nil {
+	address.ZipCode = 94304
+}
+```
+
+### Response objects
+
+All fields in response structs are ordinary value types (not pointers or wrappers).
+Response structs also include a special `JSON` field containing metadata about
+each property.
+
+```go
+type Animal struct {
+	Name   string `json:"name,nullable"`
+	Owners int    `json:"owners"`
+	Age    int    `json:"age"`
+	JSON   struct {
+		Name        respjson.Field
+		Owners      respjson.Field
+		Age         respjson.Field
+		ExtraFields map[string]respjson.Field
+	} `json:"-"`
+}
+```
+
+To handle optional data, use the `.Valid()` method on the JSON field.
+`.Valid()` returns false if a field is `null`, not present, or couldn't be parsed.
+
+If `.Valid()` is false, the corresponding field will simply be its zero value.
+
+```go
+raw := `{"owners": 1, "name": null}`
+
+var res Animal
+json.Unmarshal([]byte(raw), &res)
+
+// Accessing regular fields
+
+res.Owners // 1
+res.Name   // ""
+res.Age    // 0
+
+// Optional field checks
+
+res.JSON.Owners.Valid() // true
+res.JSON.Name.Valid()   // false
+res.JSON.Age.Valid()    // false
+
+// Raw JSON values
+
+res.JSON.Owners.Raw()                  // "1"
+res.JSON.Name.Raw() == "null"          // true
+res.JSON.Name.Raw() == respjson.Null   // true
+res.JSON.Age.Raw() == ""               // true
+res.JSON.Age.Raw() == respjson.Omitted // true
+```
+
+These `.JSON` structs also include an `ExtraFields` map containing
+any properties in the JSON response that were not specified
+in the struct. This can be useful for API features not yet
+present in the SDK.
+
+```go
+body := res.JSON.ExtraFields["my_unexpected_field"].Raw()
+```
+
+### Response Unions
+
+In responses, unions are represented by a flattened struct containing all possible fields from each of the
+object variants.
+To convert the union to a specific variant, use the `.AsFooVariant()` methods, or the `.AsAny()` method if present.
+
+If a response union contains primitive values, the primitive fields appear alongside
+the object properties, prefixed with `Of` and carrying the tag `json:"...,inline"`.
+
+```go
+type AnimalUnion struct {
+	// From variants [Dog], [Cat]
+	Owner Person `json:"owner"`
+	// From variant [Dog]
+	DogBreed string `json:"dog_breed"`
+	// From variant [Cat]
+	CatBreed string `json:"cat_breed"`
+	// ...
+
+	JSON struct {
+		Owner respjson.Field
+		// ...
+	} `json:"-"`
+}
+
+// If animal variant
+if animal.Owner.Address.ZipCode == "" {
+	panic("missing zip code")
+}
+
+// Switch on the variant
+switch variant := animal.AsAny().(type) {
+case Dog:
+case Cat:
+default:
+	panic("unexpected type")
+}
+```
+
+### RequestOptions
+
+This library uses the functional options pattern. Functions defined in the
+`option` package return a `RequestOption`, which is a closure that mutates a
+`RequestConfig`. These options can be supplied to the client or at individual
+requests. For example:
+
+```go
+client := anthropic.NewClient(
+	// Adds a header to every request made by the client
+	option.WithHeader("X-Some-Header", "custom_header_info"),
+)
+
+client.Messages.New(context.TODO(), ...,
+	// Override the header
+	option.WithHeader("X-Some-Header", "some_other_custom_header_info"),
+	// Add an undocumented field to the request body, using sjson syntax
+	option.WithJSONSet("some.json.path", map[string]string{"my": "object"}),
+)
+```
+
+See the [full list of request options](https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/option).
+
+### Pagination
+
+This library provides some conveniences for working with paginated list endpoints.
+
+You can use `.ListAutoPaging()` methods to iterate through items across all pages:
+
+```go
+iter := client.Beta.Messages.Batches.ListAutoPaging(context.TODO(), anthropic.BetaMessageBatchListParams{
+	Limit: anthropic.Int(20),
+})
+// Automatically fetches more pages as needed.
+for iter.Next() {
+	betaMessageBatch := iter.Current()
+	fmt.Printf("%+v\n", betaMessageBatch)
+}
+if err := iter.Err(); err != nil {
+	panic(err.Error())
+}
+```
+
+Or you can use simple `.List()` methods to fetch a single page and receive a standard response object
+with additional helper methods like `.GetNextPage()`, e.g.:
+
+```go
+page, err := client.Beta.Messages.Batches.List(context.TODO(), anthropic.BetaMessageBatchListParams{
+	Limit: anthropic.Int(20),
+})
+for page != nil {
+	for _, batch := range page.Data {
+		fmt.Printf("%+v\n", batch)
+	}
+	page, err = page.GetNextPage()
+}
+if err != nil {
+	panic(err.Error())
+}
+```
+
+### Errors
+
+When the API returns a non-success status code, we return an error with type
+`*anthropic.Error`. This contains the `StatusCode`, `*http.Request`, and
+`*http.Response` values of the request, as well as the JSON of the error body
+(much like other response objects in the SDK).
+
+To handle errors, we recommend that you use the `errors.As` pattern:
+
+```go
+_, err := client.Messages.New(context.TODO(), anthropic.MessageNewParams{
+	MaxTokens: 1024,
+	Messages: []anthropic.MessageParam{{
+		Content: []anthropic.ContentBlockParamUnion{{
+			OfText: &anthropic.TextBlockParam{Text: "What is a quaternion?", CacheControl: anthropic.NewCacheControlEphemeralParam(), Citations: []anthropic.TextCitationParamUnion{{
+				OfCharLocation: &anthropic.CitationCharLocationParam{CitedText: "cited_text", DocumentIndex: 0, DocumentTitle: anthropic.String("x"), EndCharIndex: 0, StartCharIndex: 0},
+			}}},
+		}},
+		Role: anthropic.MessageParamRoleUser,
+	}},
+	Model: anthropic.ModelClaude3_7SonnetLatest,
+})
+if err != nil {
+	var apierr *anthropic.Error
+	if errors.As(err, &apierr) {
+		println(string(apierr.DumpRequest(true)))  // Prints the serialized HTTP request
+		println(string(apierr.DumpResponse(true))) // Prints the serialized HTTP response
+	}
+	panic(err.Error()) // GET "/v1/messages": 400 Bad Request { ... }
+}
+```
+
+When other errors occur, they are returned unwrapped; for example,
+if HTTP transport fails, you might receive `*url.Error` wrapping `*net.OpError`.
+
+### Timeouts
+
+Requests do not time out by default; use context to configure a timeout for a request lifecycle.
+
+Note that if a request is [retried](#retries), the context timeout does not start over.
+To set a per-retry timeout, use `option.WithRequestTimeout()`.
+
+```go
+// This sets the timeout for the request, including all the retries.
+ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
+defer cancel()
+client.Messages.New(
+	ctx,
+	anthropic.MessageNewParams{
+		MaxTokens: 1024,
+		Messages: []anthropic.MessageParam{{
+			Content: []anthropic.ContentBlockParamUnion{{
+				OfRequestTextBlock: &anthropic.TextBlockParam{Text: "What is a quaternion?"},
+			}},
+			Role: anthropic.MessageParamRoleUser,
+		}},
+		Model: anthropic.ModelClaude3_7SonnetLatest,
+	},
+	// This sets the per-retry timeout
+	option.WithRequestTimeout(20*time.Second),
+)
+```
+
+### Long Requests
+
+> [!IMPORTANT]
+> We highly encourage you to use the streaming Messages API for longer-running requests.
+
+We do not recommend setting a large `MaxTokens` value without using streaming, as some networks may drop idle connections after a certain period of time, which
+can cause the request to fail or [time out](#timeouts) before a response is received from Anthropic.
+
+This SDK will also return an error if a non-streaming request is expected to take longer than roughly 10 minutes.
+Calling `.Messages.NewStreaming()` or [setting a custom timeout](#timeouts) disables this error.
+
+### File uploads
+
+Request parameters that correspond to file uploads in multipart requests are typed as
+`io.Reader`. By default, the contents of the `io.Reader` are sent as a multipart form
+part with the file name "anonymous_file" and content type "application/octet-stream", so we
+recommend always specifying a custom content type with the `anthropic.File(reader io.Reader, filename string, contentType string)`
+helper we provide, which wraps any `io.Reader` with the appropriate file name and content type.
+
+```go
+// A file from the file system
+file, err := os.Open("/path/to/file.json")
+anthropic.BetaFileUploadParams{
+	File: anthropic.File(file, "custom-name.json", "application/json"),
+	Betas: []anthropic.AnthropicBeta{anthropic.AnthropicBetaFilesAPI2025_04_14},
+}
+
+// A file from a string
+anthropic.BetaFileUploadParams{
+	File: anthropic.File(strings.NewReader("my file contents"), "custom-name.json", "application/json"),
+	Betas: []anthropic.AnthropicBeta{anthropic.AnthropicBetaFilesAPI2025_04_14},
+}
+```
+
+The file name and content-type can also be customized by implementing `Name() string` or `ContentType()
+string` on the run-time type of `io.Reader`. Note that `os.File` implements `Name() string`, so a
+file returned by `os.Open` will be sent with the file name on disk.
+
+### Retries
+
+Certain errors will be automatically retried 2 times by default, with a short exponential backoff.
+We retry by default all connection errors, 408 Request Timeout, 409 Conflict, 429 Rate Limit,
+and >=500 Internal errors.
+
+You can use the `WithMaxRetries` option to configure or disable this:
+
+```go
+// Configure the default for all requests:
+client := anthropic.NewClient(
+	option.WithMaxRetries(0), // default is 2
+)
+
+// Override per-request:
+client.Messages.New(
+	context.TODO(),
+	anthropic.MessageNewParams{
+		MaxTokens: 1024,
+		Messages: []anthropic.MessageParam{{
+			Content: []anthropic.ContentBlockParamUnion{{
+				OfRequestTextBlock: &anthropic.TextBlockParam{Text: "What is a quaternion?"},
+			}},
+			Role: anthropic.MessageParamRoleUser,
+		}},
+		Model: anthropic.ModelClaude3_7SonnetLatest,
+	},
+	option.WithMaxRetries(5),
+)
+```
+
+### Accessing raw response data (e.g. response headers)
+
+You can access the raw HTTP response data by using the `option.WithResponseInto()` request option. This is useful when
+you need to examine response headers, status codes, or other details.
+
+```go
+// Create a variable to store the HTTP response
+var response *http.Response
+message, err := client.Messages.New(
+	context.TODO(),
+	anthropic.MessageNewParams{
+		MaxTokens: 1024,
+		Messages: []anthropic.MessageParam{{
+			Content: []anthropic.ContentBlockParamUnion{{
+				OfText: &anthropic.TextBlockParam{Text: "What is a quaternion?", CacheControl: anthropic.NewCacheControlEphemeralParam(), Citations: []anthropic.TextCitationParamUnion{{
+					OfCharLocation: &anthropic.CitationCharLocationParam{CitedText: "cited_text", DocumentIndex: 0, DocumentTitle: anthropic.String("x"), EndCharIndex: 0, StartCharIndex: 0},
+				}}},
+			}},
+			Role: anthropic.MessageParamRoleUser,
+		}},
+		Model: anthropic.ModelClaude3_7SonnetLatest,
+	},
+	option.WithResponseInto(&response),
+)
+if err != nil {
+	// handle error
+}
+fmt.Printf("%+v\n", message)
+
+fmt.Printf("Status Code: %d\n", response.StatusCode)
+fmt.Printf("Headers: %+#v\n", response.Header)
+```
+
+### Making custom/undocumented requests
+
+This library is typed for convenient access to the documented API. If you need to access undocumented
+endpoints, params, or response properties, the library can still be used.
+
+#### Undocumented endpoints
+
+To make requests to undocumented endpoints, you can use `client.Get`, `client.Post`, and other HTTP verbs.
+`RequestOptions` on the client, such as retries, will be respected when making these requests.
+
+```go
+var (
+    // params can be an io.Reader, a []byte, an encoding/json serializable object,
+    // or a "…Params" struct defined in this library.
+    params map[string]any
+
+    // result can be a []byte, an *http.Response, an encoding/json deserializable object,
+    // or a model defined in this library.
+    result *http.Response
+)
+err := client.Post(context.Background(), "/unspecified", params, &result)
+if err != nil {
+    …
+}
+```
+
+#### Undocumented request params
+
+To make requests using undocumented parameters, you may use either the `option.WithQuerySet()`
+or the `option.WithJSONSet()` methods.
+
+```go
+params := FooNewParams{
+    ID:   "id_xxxx",
+    Data: FooNewParamsData{
+        FirstName: anthropic.String("John"),
+    },
+}
+client.Foo.New(context.Background(), params, option.WithJSONSet("data.last_name", "Doe"))
+```
+
+#### Undocumented response properties
+
+To access undocumented response properties, you may either access the raw JSON of the response as a string
+with `result.JSON.RawJSON()`, or get the raw JSON of a particular field on the result with
+`result.JSON.Foo.Raw()`.
+
+Any fields that are not present on the response struct will be saved and can be accessed by `result.JSON.ExtraFields()`, which returns the extra fields as a `map[string]respjson.Field`.
+
+### Middleware
+
+We provide `option.WithMiddleware` which applies the given
+middleware to requests.
+
+```go
+func Logger(req *http.Request, next option.MiddlewareNext) (res *http.Response, err error) {
+	// Before the request
+	start := time.Now()
+	LogReq(req)
+
+	// Forward the request to the next handler
+	res, err = next(req)
+
+	// Handle stuff after the request
+	LogRes(res, err, time.Since(start))
+
+	return res, err
+}
+
+client := anthropic.NewClient(
+	option.WithMiddleware(Logger),
+)
+```
+
+When multiple middlewares are provided as variadic arguments, the middlewares
+are applied left to right. If `option.WithMiddleware` is given
+multiple times, for example first in the client then the method, the
+middleware in the client will run first and the middleware given in the method
+will run next.
+
+You may also replace the default `http.Client` with
+`option.WithHTTPClient(client)`. Only one HTTP client is
+accepted (this overwrites any previous client), and it receives requests after any
+middleware has been applied.
+
+## Amazon Bedrock
+
+To use this library with [Amazon Bedrock](https://aws.amazon.com/bedrock/claude/),
+use the bedrock request option `bedrock.WithLoadDefaultConfig(…)` which reads the
+[default config](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
+
+Importing the `bedrock` library also globally registers a decoder for `application/vnd.amazon.eventstream` for
+streaming.
+
+```go
+package main
+
+import (
+	"github.com/anthropics/anthropic-sdk-go"
+	"github.com/anthropics/anthropic-sdk-go/bedrock"
+)
+
+func main() {
+	client := anthropic.NewClient(
+		bedrock.WithLoadDefaultConfig(context.Background()),
+	)
+}
+```
+
+If you already have an `aws.Config`, you can also use it directly with `bedrock.WithConfig(cfg)`.
+
+Read more about Anthropic and Amazon Bedrock [here](https://docs.anthropic.com/en/api/claude-on-amazon-bedrock).
+
+## Google Vertex AI
+
+To use this library with [Google Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude),
+use the request option `vertex.WithGoogleAuth(…)` which reads the
+[Application Default Credentials](https://cloud.google.com/docs/authentication/application-default-credentials).
+
+```go
+package main
+
+import (
+	"context"
+
+	"github.com/anthropics/anthropic-sdk-go"
+	"github.com/anthropics/anthropic-sdk-go/vertex"
+)
+
+func main() {
+	client := anthropic.NewClient(
+		vertex.WithGoogleAuth(context.Background(), "us-central1", "id-xxx"),
+	)
+}
+```
+
+If you already have a `*google.Credentials` value, you can use it directly with
+`vertex.WithCredentials(ctx, region, projectId, creds)`.
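+A minimal sketch of that path, assuming credentials are obtained via
+`golang.org/x/oauth2/google`:
+
+```go
+ctx := context.Background()
+creds, err := google.FindDefaultCredentials(ctx)
+if err != nil {
+	// handle error
+}
+client := anthropic.NewClient(
+	vertex.WithCredentials(ctx, "us-central1", "id-xxx", creds),
+)
+```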
+
+Read more about Anthropic and Google Vertex [here](https://docs.anthropic.com/en/api/claude-on-vertex-ai).
+
+## Semantic versioning
+
+This package generally follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions, though certain backwards-incompatible changes may be released as minor versions:
+
+1. Changes to library internals which are technically public but not intended or documented for external use. _(Please open a GitHub issue to let us know if you are relying on such internals.)_
+2. Changes that we do not expect to impact the vast majority of users in practice.
+
+We take backwards-compatibility seriously and work hard to ensure you can rely on a smooth upgrade experience.
+
+We are keen for your feedback; please open an [issue](https://www.github.com/anthropics/anthropic-sdk-go/issues) with questions, bugs, or suggestions.
+
+## Contributing
+
+See [the contributing documentation](./CONTRIBUTING.md).

vendor/github.com/anthropics/anthropic-sdk-go/SECURITY.md 🔗

@@ -0,0 +1,27 @@
+# Security Policy
+
+## Reporting Security Issues
+
+This SDK is generated by [Stainless Software Inc](http://stainless.com). Stainless takes security seriously, and encourages you to report any security vulnerability promptly so that appropriate action can be taken.
+
+To report a security issue, please contact the Stainless team at security@stainless.com.
+
+## Responsible Disclosure
+
+We appreciate the efforts of security researchers and individuals who help us maintain the security of
+SDKs we generate. If you believe you have found a security vulnerability, please adhere to responsible
+disclosure practices by allowing us a reasonable amount of time to investigate and address the issue
+before making any information public.
+
+## Reporting Non-SDK Related Security Issues
+
+If you encounter security issues that are not directly related to SDKs but pertain to the services
+or products provided by Anthropic, please follow the respective company's security reporting guidelines.
+
+### Anthropic Terms and Policies
+
+Please contact support@anthropic.com for any questions or concerns regarding the security of our services.
+
+---
+
+Thank you for helping us keep the SDKs and systems they interact with secure.

vendor/github.com/anthropics/anthropic-sdk-go/aliases.go 🔗

@@ -0,0 +1,50 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package anthropic
+
+import (
+	"github.com/anthropics/anthropic-sdk-go/internal/apierror"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/anthropics/anthropic-sdk-go/shared"
+)
+
+// aliased to make [param.APIUnion] private when embedding
+type paramUnion = param.APIUnion
+
+// aliased to make [param.APIObject] private when embedding
+type paramObj = param.APIObject
+
+type Error = apierror.Error
+
+// This is an alias to an internal type.
+type APIErrorObject = shared.APIErrorObject
+
+// This is an alias to an internal type.
+type AuthenticationError = shared.AuthenticationError
+
+// This is an alias to an internal type.
+type BillingError = shared.BillingError
+
+// This is an alias to an internal type.
+type ErrorObjectUnion = shared.ErrorObjectUnion
+
+// This is an alias to an internal type.
+type ErrorResponse = shared.ErrorResponse
+
+// This is an alias to an internal type.
+type GatewayTimeoutError = shared.GatewayTimeoutError
+
+// This is an alias to an internal type.
+type InvalidRequestError = shared.InvalidRequestError
+
+// This is an alias to an internal type.
+type NotFoundError = shared.NotFoundError
+
+// This is an alias to an internal type.
+type OverloadedError = shared.OverloadedError
+
+// This is an alias to an internal type.
+type PermissionError = shared.PermissionError
+
+// This is an alias to an internal type.
+type RateLimitError = shared.RateLimitError

vendor/github.com/anthropics/anthropic-sdk-go/api.md 🔗

@@ -0,0 +1,329 @@
+# Shared Response Types
+
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared">shared</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared#APIErrorObject">APIErrorObject</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared">shared</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared#AuthenticationError">AuthenticationError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared">shared</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared#BillingError">BillingError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared">shared</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared#ErrorObjectUnion">ErrorObjectUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared">shared</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared#ErrorResponse">ErrorResponse</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared">shared</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared#GatewayTimeoutError">GatewayTimeoutError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared">shared</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared#InvalidRequestError">InvalidRequestError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared">shared</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared#NotFoundError">NotFoundError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared">shared</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared#OverloadedError">OverloadedError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared">shared</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared#PermissionError">PermissionError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared">shared</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/shared#RateLimitError">RateLimitError</a>
+
+# Messages
+
+Params Types:
+
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#Base64ImageSourceParam">Base64ImageSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#Base64PDFSourceParam">Base64PDFSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#CacheControlEphemeralParam">CacheControlEphemeralParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#CitationCharLocationParam">CitationCharLocationParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#CitationContentBlockLocationParam">CitationContentBlockLocationParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#CitationPageLocationParam">CitationPageLocationParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#CitationWebSearchResultLocationParam">CitationWebSearchResultLocationParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#CitationsConfigParam">CitationsConfigParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ContentBlockParamUnion">ContentBlockParamUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ContentBlockSourceParam">ContentBlockSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ContentBlockSourceContentUnionParam">ContentBlockSourceContentUnionParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#DocumentBlockParam">DocumentBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ImageBlockParam">ImageBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageCountTokensToolUnionParam">MessageCountTokensToolUnionParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageParam">MessageParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MetadataParam">MetadataParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#Model">Model</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#PlainTextSourceParam">PlainTextSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#RedactedThinkingBlockParam">RedactedThinkingBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ServerToolUseBlockParam">ServerToolUseBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#TextBlockParam">TextBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#TextCitationParamUnion">TextCitationParamUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ThinkingBlockParam">ThinkingBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ThinkingConfigDisabledParam">ThinkingConfigDisabledParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ThinkingConfigEnabledParam">ThinkingConfigEnabledParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ThinkingConfigParamUnion">ThinkingConfigParamUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ToolParam">ToolParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ToolBash20250124Param">ToolBash20250124Param</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ToolChoiceUnionParam">ToolChoiceUnionParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ToolChoiceAnyParam">ToolChoiceAnyParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ToolChoiceAutoParam">ToolChoiceAutoParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ToolChoiceNoneParam">ToolChoiceNoneParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ToolChoiceToolParam">ToolChoiceToolParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ToolResultBlockParam">ToolResultBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ToolTextEditor20250124Param">ToolTextEditor20250124Param</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ToolUnionParam">ToolUnionParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ToolUseBlockParam">ToolUseBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#URLImageSourceParam">URLImageSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#URLPDFSourceParam">URLPDFSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#WebSearchResultBlockParam">WebSearchResultBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#WebSearchTool20250305Param">WebSearchTool20250305Param</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#WebSearchToolRequestErrorParam">WebSearchToolRequestErrorParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#WebSearchToolResultBlockParam">WebSearchToolResultBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#WebSearchToolResultBlockParamContentUnion">WebSearchToolResultBlockParamContentUnion</a>
+
+Response Types:
+
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#CitationCharLocation">CitationCharLocation</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#CitationContentBlockLocation">CitationContentBlockLocation</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#CitationPageLocation">CitationPageLocation</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#CitationsDelta">CitationsDelta</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#CitationsWebSearchResultLocation">CitationsWebSearchResultLocation</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ContentBlockUnion">ContentBlockUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#InputJSONDelta">InputJSONDelta</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#Message">Message</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageDeltaUsage">MessageDeltaUsage</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageTokensCount">MessageTokensCount</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#Model">Model</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#RawContentBlockDeltaUnion">RawContentBlockDeltaUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ContentBlockDeltaEvent">ContentBlockDeltaEvent</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ContentBlockStartEvent">ContentBlockStartEvent</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ContentBlockStopEvent">ContentBlockStopEvent</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageDeltaEvent">MessageDeltaEvent</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageStartEvent">MessageStartEvent</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageStopEvent">MessageStopEvent</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageStreamEventUnion">MessageStreamEventUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#RedactedThinkingBlock">RedactedThinkingBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ServerToolUsage">ServerToolUsage</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ServerToolUseBlock">ServerToolUseBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#SignatureDelta">SignatureDelta</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#StopReason">StopReason</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#TextBlock">TextBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#TextCitationUnion">TextCitationUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#TextDelta">TextDelta</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ThinkingBlock">ThinkingBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ThinkingDelta">ThinkingDelta</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ToolUseBlock">ToolUseBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#Usage">Usage</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#WebSearchResultBlock">WebSearchResultBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#WebSearchToolResultBlock">WebSearchToolResultBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#WebSearchToolResultBlockContentUnion">WebSearchToolResultBlockContentUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#WebSearchToolResultError">WebSearchToolResultError</a>
+
+Methods:
+
+- <code title="post /v1/messages">client.Messages.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageService.New">New</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, body <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageNewParams">MessageNewParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#Message">Message</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="post /v1/messages/count_tokens">client.Messages.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageService.CountTokens">CountTokens</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, body <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageCountTokensParams">MessageCountTokensParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageTokensCount">MessageTokensCount</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+
+## Batches
+
+Response Types:
+
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#DeletedMessageBatch">DeletedMessageBatch</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatch">MessageBatch</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchCanceledResult">MessageBatchCanceledResult</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchErroredResult">MessageBatchErroredResult</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchExpiredResult">MessageBatchExpiredResult</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchIndividualResponse">MessageBatchIndividualResponse</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchRequestCounts">MessageBatchRequestCounts</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchResultUnion">MessageBatchResultUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchSucceededResult">MessageBatchSucceededResult</a>
+
+Methods:
+
+- <code title="post /v1/messages/batches">client.Messages.Batches.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchService.New">New</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, body <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchNewParams">MessageBatchNewParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatch">MessageBatch</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="get /v1/messages/batches/{message_batch_id}">client.Messages.Batches.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchService.Get">Get</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, messageBatchID <a href="https://pkg.go.dev/builtin#string">string</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatch">MessageBatch</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="get /v1/messages/batches">client.Messages.Batches.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchService.List">List</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, query <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchListParams">MessageBatchListParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/packages/pagination">pagination</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/packages/pagination#Page">Page</a>[<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatch">MessageBatch</a>], <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="delete /v1/messages/batches/{message_batch_id}">client.Messages.Batches.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchService.Delete">Delete</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, messageBatchID <a href="https://pkg.go.dev/builtin#string">string</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#DeletedMessageBatch">DeletedMessageBatch</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="post /v1/messages/batches/{message_batch_id}/cancel">client.Messages.Batches.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchService.Cancel">Cancel</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, messageBatchID <a href="https://pkg.go.dev/builtin#string">string</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatch">MessageBatch</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="get /v1/messages/batches/{message_batch_id}/results">client.Messages.Batches.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchService.Results">Results</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, messageBatchID <a href="https://pkg.go.dev/builtin#string">string</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#MessageBatchIndividualResponse">MessageBatchIndividualResponse</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+
+# Models
+
+Response Types:
+
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ModelInfo">ModelInfo</a>
+
+Methods:
+
+- <code title="get /v1/models/{model_id}">client.Models.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ModelService.Get">Get</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, modelID <a href="https://pkg.go.dev/builtin#string">string</a>, query <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ModelGetParams">ModelGetParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ModelInfo">ModelInfo</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="get /v1/models">client.Models.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ModelService.List">List</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, params <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ModelListParams">ModelListParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/packages/pagination">pagination</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/packages/pagination#Page">Page</a>[<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#ModelInfo">ModelInfo</a>], <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+
+# Beta
+
+Params Types:
+
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#AnthropicBeta">AnthropicBeta</a>
+
+Response Types:
+
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaAPIError">BetaAPIError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaAuthenticationError">BetaAuthenticationError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaBillingError">BetaBillingError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaErrorUnion">BetaErrorUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaErrorResponse">BetaErrorResponse</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaGatewayTimeoutError">BetaGatewayTimeoutError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaInvalidRequestError">BetaInvalidRequestError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaNotFoundError">BetaNotFoundError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaOverloadedError">BetaOverloadedError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaPermissionError">BetaPermissionError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRateLimitError">BetaRateLimitError</a>
+
+## Models
+
+Response Types:
+
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaModelInfo">BetaModelInfo</a>
+
+Methods:
+
+- <code title="get /v1/models/{model_id}?beta=true">client.Beta.Models.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaModelService.Get">Get</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, modelID <a href="https://pkg.go.dev/builtin#string">string</a>, query <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaModelGetParams">BetaModelGetParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaModelInfo">BetaModelInfo</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="get /v1/models?beta=true">client.Beta.Models.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaModelService.List">List</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, params <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaModelListParams">BetaModelListParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/packages/pagination">pagination</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/packages/pagination#Page">Page</a>[<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaModelInfo">BetaModelInfo</a>], <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+
+## Messages
+
+Params Types:
+
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaBase64ImageSourceParam">BetaBase64ImageSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaBase64PDFBlockParam">BetaBase64PDFBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaBase64PDFSourceParam">BetaBase64PDFSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCacheControlEphemeralParam">BetaCacheControlEphemeralParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCitationCharLocationParam">BetaCitationCharLocationParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCitationContentBlockLocationParam">BetaCitationContentBlockLocationParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCitationPageLocationParam">BetaCitationPageLocationParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCitationWebSearchResultLocationParam">BetaCitationWebSearchResultLocationParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCitationsConfigParam">BetaCitationsConfigParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionOutputBlockParam">BetaCodeExecutionOutputBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionResultBlockParam">BetaCodeExecutionResultBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionTool20250522Param">BetaCodeExecutionTool20250522Param</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionToolResultBlockParam">BetaCodeExecutionToolResultBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionToolResultBlockParamContentUnion">BetaCodeExecutionToolResultBlockParamContentUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionToolResultErrorCode">BetaCodeExecutionToolResultErrorCode</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionToolResultErrorParam">BetaCodeExecutionToolResultErrorParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaContainerUploadBlockParam">BetaContainerUploadBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaContentBlockParamUnion">BetaContentBlockParamUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaContentBlockSourceParam">BetaContentBlockSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaContentBlockSourceContentUnionParam">BetaContentBlockSourceContentUnionParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaFileDocumentSourceParam">BetaFileDocumentSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaFileImageSourceParam">BetaFileImageSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaImageBlockParam">BetaImageBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMCPToolUseBlockParam">BetaMCPToolUseBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageParam">BetaMessageParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMetadataParam">BetaMetadataParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaPlainTextSourceParam">BetaPlainTextSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRedactedThinkingBlockParam">BetaRedactedThinkingBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRequestMCPServerToolConfigurationParam">BetaRequestMCPServerToolConfigurationParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRequestMCPServerURLDefinitionParam">BetaRequestMCPServerURLDefinitionParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRequestMCPToolResultBlockParam">BetaRequestMCPToolResultBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaServerToolUseBlockParam">BetaServerToolUseBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaTextBlockParam">BetaTextBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaTextCitationParamUnion">BetaTextCitationParamUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaThinkingBlockParam">BetaThinkingBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaThinkingConfigDisabledParam">BetaThinkingConfigDisabledParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaThinkingConfigEnabledParam">BetaThinkingConfigEnabledParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaThinkingConfigParamUnion">BetaThinkingConfigParamUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolParam">BetaToolParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolBash20241022Param">BetaToolBash20241022Param</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolBash20250124Param">BetaToolBash20250124Param</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolChoiceUnionParam">BetaToolChoiceUnionParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolChoiceAnyParam">BetaToolChoiceAnyParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolChoiceAutoParam">BetaToolChoiceAutoParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolChoiceNoneParam">BetaToolChoiceNoneParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolChoiceToolParam">BetaToolChoiceToolParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolComputerUse20241022Param">BetaToolComputerUse20241022Param</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolComputerUse20250124Param">BetaToolComputerUse20250124Param</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolResultBlockParam">BetaToolResultBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolTextEditor20241022Param">BetaToolTextEditor20241022Param</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolTextEditor20250124Param">BetaToolTextEditor20250124Param</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolTextEditor20250429Param">BetaToolTextEditor20250429Param</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolUnionParam">BetaToolUnionParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolUseBlockParam">BetaToolUseBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaURLImageSourceParam">BetaURLImageSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaURLPDFSourceParam">BetaURLPDFSourceParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaWebSearchResultBlockParam">BetaWebSearchResultBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaWebSearchTool20250305Param">BetaWebSearchTool20250305Param</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaWebSearchToolRequestErrorParam">BetaWebSearchToolRequestErrorParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaWebSearchToolResultBlockParam">BetaWebSearchToolResultBlockParam</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaWebSearchToolResultBlockParamContentUnion">BetaWebSearchToolResultBlockParamContentUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaWebSearchToolResultErrorCode">BetaWebSearchToolResultErrorCode</a>
+
+Response Types:
+
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCacheCreation">BetaCacheCreation</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCitationCharLocation">BetaCitationCharLocation</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCitationContentBlockLocation">BetaCitationContentBlockLocation</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCitationPageLocation">BetaCitationPageLocation</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCitationsDelta">BetaCitationsDelta</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCitationsWebSearchResultLocation">BetaCitationsWebSearchResultLocation</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionOutputBlock">BetaCodeExecutionOutputBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionResultBlock">BetaCodeExecutionResultBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionToolResultBlock">BetaCodeExecutionToolResultBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionToolResultBlockContentUnion">BetaCodeExecutionToolResultBlockContentUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionToolResultError">BetaCodeExecutionToolResultError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaCodeExecutionToolResultErrorCode">BetaCodeExecutionToolResultErrorCode</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaContainer">BetaContainer</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaContainerUploadBlock">BetaContainerUploadBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaContentBlockUnion">BetaContentBlockUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaInputJSONDelta">BetaInputJSONDelta</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMCPToolResultBlock">BetaMCPToolResultBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMCPToolUseBlock">BetaMCPToolUseBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessage">BetaMessage</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageDeltaUsage">BetaMessageDeltaUsage</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageTokensCount">BetaMessageTokensCount</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRawContentBlockDeltaUnion">BetaRawContentBlockDeltaUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRawContentBlockDeltaEvent">BetaRawContentBlockDeltaEvent</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRawContentBlockStartEvent">BetaRawContentBlockStartEvent</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRawContentBlockStopEvent">BetaRawContentBlockStopEvent</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRawMessageDeltaEvent">BetaRawMessageDeltaEvent</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRawMessageStartEvent">BetaRawMessageStartEvent</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRawMessageStopEvent">BetaRawMessageStopEvent</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRawMessageStreamEventUnion">BetaRawMessageStreamEventUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaRedactedThinkingBlock">BetaRedactedThinkingBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaServerToolUsage">BetaServerToolUsage</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaServerToolUseBlock">BetaServerToolUseBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaSignatureDelta">BetaSignatureDelta</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaStopReason">BetaStopReason</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaTextBlock">BetaTextBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaTextCitationUnion">BetaTextCitationUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaTextDelta">BetaTextDelta</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaThinkingBlock">BetaThinkingBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaThinkingDelta">BetaThinkingDelta</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaToolUseBlock">BetaToolUseBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaUsage">BetaUsage</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaWebSearchResultBlock">BetaWebSearchResultBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaWebSearchToolResultBlock">BetaWebSearchToolResultBlock</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaWebSearchToolResultBlockContentUnion">BetaWebSearchToolResultBlockContentUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaWebSearchToolResultError">BetaWebSearchToolResultError</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaWebSearchToolResultErrorCode">BetaWebSearchToolResultErrorCode</a>
+
+Methods:
+
+- <code title="post /v1/messages?beta=true">client.Beta.Messages.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageService.New">New</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, params <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageNewParams">BetaMessageNewParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessage">BetaMessage</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="post /v1/messages/count_tokens?beta=true">client.Beta.Messages.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageService.CountTokens">CountTokens</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, params <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageCountTokensParams">BetaMessageCountTokensParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageTokensCount">BetaMessageTokensCount</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
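+A minimal sketch of `client.Beta.Messages.New` using the `BetaMessageNewParams` type listed above. The specific field names (`MaxTokens`, `Messages`, `Model`, the `OfText` union variant) and the model identifier follow the SDK's README conventions but are assumptions here, not guaranteed by this index; check the linked godoc before relying on them.
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/anthropics/anthropic-sdk-go"
+)
+
+func main() {
+	client := anthropic.NewClient() // reads ANTHROPIC_API_KEY from the environment
+
+	// Field names below are assumed from the SDK README; verify against godoc.
+	message, err := client.Beta.Messages.New(context.TODO(), anthropic.BetaMessageNewParams{
+		MaxTokens: 1024,
+		Messages: []anthropic.BetaMessageParam{{
+			Role: anthropic.BetaMessageParamRoleUser,
+			Content: []anthropic.BetaContentBlockParamUnion{{
+				OfText: &anthropic.BetaTextBlockParam{Text: "Say hello."},
+			}},
+		}},
+		Model: anthropic.ModelClaudeSonnet4_20250514, // hypothetical constant; pick any listed model
+	})
+	if err != nil {
+		panic(err)
+	}
+	fmt.Printf("%+v\n", message.Content)
+}
+```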
+
+### Batches
+
+Response Types:
+
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaDeletedMessageBatch">BetaDeletedMessageBatch</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatch">BetaMessageBatch</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchCanceledResult">BetaMessageBatchCanceledResult</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchErroredResult">BetaMessageBatchErroredResult</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchExpiredResult">BetaMessageBatchExpiredResult</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchIndividualResponse">BetaMessageBatchIndividualResponse</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchRequestCounts">BetaMessageBatchRequestCounts</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchResultUnion">BetaMessageBatchResultUnion</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchSucceededResult">BetaMessageBatchSucceededResult</a>
+
+Methods:
+
+- <code title="post /v1/messages/batches?beta=true">client.Beta.Messages.Batches.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchService.New">New</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, params <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchNewParams">BetaMessageBatchNewParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatch">BetaMessageBatch</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="get /v1/messages/batches/{message_batch_id}?beta=true">client.Beta.Messages.Batches.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchService.Get">Get</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, messageBatchID <a href="https://pkg.go.dev/builtin#string">string</a>, query <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchGetParams">BetaMessageBatchGetParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatch">BetaMessageBatch</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="get /v1/messages/batches?beta=true">client.Beta.Messages.Batches.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchService.List">List</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, params <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchListParams">BetaMessageBatchListParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/packages/pagination">pagination</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/packages/pagination#Page">Page</a>[<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatch">BetaMessageBatch</a>], <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="delete /v1/messages/batches/{message_batch_id}?beta=true">client.Beta.Messages.Batches.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchService.Delete">Delete</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, messageBatchID <a href="https://pkg.go.dev/builtin#string">string</a>, body <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchDeleteParams">BetaMessageBatchDeleteParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaDeletedMessageBatch">BetaDeletedMessageBatch</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="post /v1/messages/batches/{message_batch_id}/cancel?beta=true">client.Beta.Messages.Batches.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchService.Cancel">Cancel</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, messageBatchID <a href="https://pkg.go.dev/builtin#string">string</a>, body <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchCancelParams">BetaMessageBatchCancelParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatch">BetaMessageBatch</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="get /v1/messages/batches/{message_batch_id}/results?beta=true">client.Beta.Messages.Batches.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchService.Results">Results</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, messageBatchID <a href="https://pkg.go.dev/builtin#string">string</a>, query <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchResultsParams">BetaMessageBatchResultsParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaMessageBatchIndividualResponse">BetaMessageBatchIndividualResponse</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+
+## Files
+
+Response Types:
+
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#DeletedFile">DeletedFile</a>
+- <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#FileMetadata">FileMetadata</a>
+
+Methods:
+
+- <code title="get /v1/files?beta=true">client.Beta.Files.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaFileService.List">List</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, params <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaFileListParams">BetaFileListParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/packages/pagination">pagination</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go/packages/pagination#Page">Page</a>[<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#FileMetadata">FileMetadata</a>], <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="delete /v1/files/{file_id}?beta=true">client.Beta.Files.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaFileService.Delete">Delete</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, fileID <a href="https://pkg.go.dev/builtin#string">string</a>, body <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaFileDeleteParams">BetaFileDeleteParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#DeletedFile">DeletedFile</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="get /v1/files/{file_id}/content?beta=true">client.Beta.Files.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaFileService.Download">Download</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, fileID <a href="https://pkg.go.dev/builtin#string">string</a>, query <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaFileDownloadParams">BetaFileDownloadParams</a>) (http.Response, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="get /v1/files/{file_id}?beta=true">client.Beta.Files.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaFileService.GetMetadata">GetMetadata</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, fileID <a href="https://pkg.go.dev/builtin#string">string</a>, query <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaFileGetMetadataParams">BetaFileGetMetadataParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#FileMetadata">FileMetadata</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>
+- <code title="post /v1/files?beta=true">client.Beta.Files.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaFileService.Upload">Upload</a>(ctx <a href="https://pkg.go.dev/context">context</a>.<a href="https://pkg.go.dev/context#Context">Context</a>, params <a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#BetaFileUploadParams">BetaFileUploadParams</a>) (<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go">anthropic</a>.<a href="https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#FileMetadata">FileMetadata</a>, <a href="https://pkg.go.dev/builtin#error">error</a>)</code>

vendor/github.com/anthropics/anthropic-sdk-go/bedrock/bedrock.go 🔗

@@ -0,0 +1,245 @@
+package bedrock
+
+import (
+	"bytes"
+	"context"
+	"crypto/sha256"
+	"encoding/base64"
+	"encoding/hex"
+	"encoding/json"
+	"fmt"
+	"io"
+	"net/http"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream"
+	"github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/eventstreamapi"
+	v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
+	"github.com/aws/aws-sdk-go-v2/config"
+	"github.com/tidwall/gjson"
+	"github.com/tidwall/sjson"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/requestconfig"
+	"github.com/anthropics/anthropic-sdk-go/option"
+	"github.com/anthropics/anthropic-sdk-go/packages/ssestream"
+)
+
+const DefaultVersion = "bedrock-2023-05-31"
+
+var DefaultEndpoints = map[string]bool{
+	"/v1/complete": true,
+	"/v1/messages": true,
+}
+
+type eventstreamChunk struct {
+	Bytes string `json:"bytes"`
+	P     string `json:"p"`
+}
+
+type eventstreamDecoder struct {
+	eventstream.Decoder
+
+	rc  io.ReadCloser
+	evt ssestream.Event
+	err error
+}
+
+func (e *eventstreamDecoder) Close() error {
+	return e.rc.Close()
+}
+
+func (e *eventstreamDecoder) Err() error {
+	return e.err
+}
+
+func (e *eventstreamDecoder) Next() bool {
+	if e.err != nil {
+		return false
+	}
+
+	msg, err := e.Decoder.Decode(e.rc, nil)
+	if err != nil {
+		e.err = err
+		return false
+	}
+
+	messageType := msg.Headers.Get(eventstreamapi.MessageTypeHeader)
+	if messageType == nil {
+		e.err = fmt.Errorf("%s event header not present", eventstreamapi.MessageTypeHeader)
+		return false
+	}
+
+	switch messageType.String() {
+	case eventstreamapi.EventMessageType:
+		eventType := msg.Headers.Get(eventstreamapi.EventTypeHeader)
+		if eventType == nil {
+			e.err = fmt.Errorf("%s event header not present", eventstreamapi.EventTypeHeader)
+			return false
+		}
+
+		if eventType.String() == "chunk" {
+			chunk := eventstreamChunk{}
+			err = json.Unmarshal(msg.Payload, &chunk)
+			if err != nil {
+				e.err = err
+				return false
+			}
+			decoded, err := base64.StdEncoding.DecodeString(chunk.Bytes)
+			if err != nil {
+				e.err = err
+				return false
+			}
+			e.evt = ssestream.Event{
+				Type: gjson.GetBytes(decoded, "type").String(),
+				Data: decoded,
+			}
+		}
+
+	case eventstreamapi.ExceptionMessageType:
+		// See https://github.com/aws/aws-sdk-go-v2/blob/885de40869f9bcee29ad11d60967aa0f1b571d46/service/iotsitewise/deserializers.go#L15511C1-L15567C2
+		exceptionType := msg.Headers.Get(eventstreamapi.ExceptionTypeHeader)
+		if exceptionType == nil {
+			e.err = fmt.Errorf("%s event header not present", eventstreamapi.ExceptionTypeHeader)
+			return false
+		}
+
+		// See https://github.com/aws/aws-sdk-go-v2/blob/885de40869f9bcee29ad11d60967aa0f1b571d46/aws/protocol/restjson/decoder_util.go#L15-L48
+		var errInfo struct {
+			Code    string
+			Type    string `json:"__type"`
+			Message string
+		}
+		err = json.Unmarshal(msg.Payload, &errInfo)
+		if err != nil && err != io.EOF {
+			e.err = fmt.Errorf("received exception %s: parsing exception payload failed: %w", exceptionType.String(), err)
+			return false
+		}
+
+		errorCode := "UnknownError"
+		errorMessage := errorCode
+		if ev := exceptionType.String(); len(ev) > 0 {
+			errorCode = ev
+		} else if len(errInfo.Code) > 0 {
+			errorCode = errInfo.Code
+		} else if len(errInfo.Type) > 0 {
+			errorCode = errInfo.Type
+		}
+
+		if len(errInfo.Message) > 0 {
+			errorMessage = errInfo.Message
+		}
+		e.err = fmt.Errorf("received exception %s: %s", errorCode, errorMessage)
+		return false
+
+	case eventstreamapi.ErrorMessageType:
+		errorCode := "UnknownError"
+		errorMessage := errorCode
+		if header := msg.Headers.Get(eventstreamapi.ErrorCodeHeader); header != nil {
+			errorCode = header.String()
+		}
+		if header := msg.Headers.Get(eventstreamapi.ErrorMessageHeader); header != nil {
+			errorMessage = header.String()
+		}
+		e.err = fmt.Errorf("received error %s: %s", errorCode, errorMessage)
+		return false
+	}
+
+	return true
+}
+
+func (e *eventstreamDecoder) Event() ssestream.Event {
+	return e.evt
+}
+
+var (
+	_ ssestream.Decoder = &eventstreamDecoder{}
+)
+
+func init() {
+	ssestream.RegisterDecoder("application/vnd.amazon.eventstream", func(rc io.ReadCloser) ssestream.Decoder {
+		return &eventstreamDecoder{rc: rc}
+	})
+}
+
+// WithLoadDefaultConfig returns a request option which loads the default config for Amazon and registers
+// middleware that intercepts requests to the Messages API so that this SDK can be used with Amazon Bedrock.
+//
+// If you already have an [aws.Config], it is recommended that you instead call [WithConfig] directly.
+func WithLoadDefaultConfig(ctx context.Context, optFns ...func(*config.LoadOptions) error) option.RequestOption {
+	cfg, err := config.LoadDefaultConfig(ctx, optFns...)
+	if err != nil {
+		panic(err)
+	}
+	return WithConfig(cfg)
+}
+
+// WithConfig returns a request option which uses the provided config and registers middleware that
+// intercepts requests to the Messages API so that this SDK can be used with Amazon Bedrock.
+func WithConfig(cfg aws.Config) option.RequestOption {
+	signer := v4.NewSigner()
+	middleware := bedrockMiddleware(signer, cfg)
+
+	return requestconfig.RequestOptionFunc(func(rc *requestconfig.RequestConfig) error {
+		return rc.Apply(
+			option.WithBaseURL(fmt.Sprintf("https://bedrock-runtime.%s.amazonaws.com", cfg.Region)),
+			option.WithMiddleware(middleware),
+		)
+	})
+}
+
+func bedrockMiddleware(signer *v4.Signer, cfg aws.Config) option.Middleware {
+	return func(r *http.Request, next option.MiddlewareNext) (res *http.Response, err error) {
+		var body []byte
+		if r.Body != nil {
+			body, err = io.ReadAll(r.Body)
+			if err != nil {
+				return nil, err
+			}
+			r.Body.Close()
+
+			if !gjson.GetBytes(body, "anthropic_version").Exists() {
+				body, _ = sjson.SetBytes(body, "anthropic_version", DefaultVersion)
+			}
+
+			if r.Method == http.MethodPost && DefaultEndpoints[r.URL.Path] {
+				model := gjson.GetBytes(body, "model").String()
+				stream := gjson.GetBytes(body, "stream").Bool()
+
+				body, _ = sjson.DeleteBytes(body, "model")
+				body, _ = sjson.DeleteBytes(body, "stream")
+
+				var path string
+				if stream {
+					path = fmt.Sprintf("/model/%s/invoke-with-response-stream", model)
+				} else {
+					path = fmt.Sprintf("/model/%s/invoke", model)
+				}
+
+				r.URL.Path = path
+			}
+
+			reader := bytes.NewReader(body)
+			r.Body = io.NopCloser(reader)
+			r.GetBody = func() (io.ReadCloser, error) {
+				_, err := reader.Seek(0, 0)
+				return io.NopCloser(reader), err
+			}
+			r.ContentLength = int64(len(body))
+		}
+
+		ctx := r.Context()
+		credentials, err := cfg.Credentials.Retrieve(ctx)
+		if err != nil {
+			return nil, err
+		}
+
+		hash := sha256.Sum256(body)
+		err = signer.SignHTTP(ctx, credentials, r, hex.EncodeToString(hash[:]), "bedrock", cfg.Region, time.Now())
+		if err != nil {
+			return nil, err
+		}
+
+		return next(r)
+	}
+}
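The middleware above routes Messages API calls onto Bedrock's invoke endpoints: it strips `model` and `stream` out of the JSON body and encodes them in the URL path instead. A minimal sketch of that path derivation using only the standard library (the `invokePath` helper is illustrative, not part of the SDK, which does this inline with gjson/sjson):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// invokePath mirrors the routing decision in bedrockMiddleware: it reads the
// "model" and "stream" fields from a Messages API request body and returns
// the Bedrock runtime path the request should be rewritten to.
func invokePath(body []byte) (string, error) {
	var req struct {
		Model  string `json:"model"`
		Stream bool   `json:"stream"`
	}
	if err := json.Unmarshal(body, &req); err != nil {
		return "", err
	}
	if req.Stream {
		return fmt.Sprintf("/model/%s/invoke-with-response-stream", req.Model), nil
	}
	return fmt.Sprintf("/model/%s/invoke", req.Model), nil
}

func main() {
	body := []byte(`{"model":"anthropic.claude-3-5-sonnet-20240620-v1:0","stream":true,"max_tokens":256}`)
	path, err := invokePath(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(path)
	// → /model/anthropic.claude-3-5-sonnet-20240620-v1:0/invoke-with-response-stream
}
```

Note that the real middleware also deletes both fields from the body (Bedrock rejects unknown top-level keys) and re-signs the request with SigV4 over the SHA-256 of the rewritten payload.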

vendor/github.com/anthropics/anthropic-sdk-go/beta.go 🔗

@@ -0,0 +1,364 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package anthropic
+
+import (
+	"encoding/json"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/apijson"
+	"github.com/anthropics/anthropic-sdk-go/option"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+	"github.com/anthropics/anthropic-sdk-go/shared/constant"
+)
+
+// BetaService contains methods and other services that help with interacting with
+// the anthropic API.
+//
+// Note, unlike clients, this service does not read variables from the environment
+// automatically. You should not instantiate this service directly; use the
+// [NewBetaService] method instead.
+type BetaService struct {
+	Options  []option.RequestOption
+	Models   BetaModelService
+	Messages BetaMessageService
+	Files    BetaFileService
+}
+
+// NewBetaService generates a new service that applies the given options to each
+// request. These options are applied after the parent client's options (if there
+// is one), and before any request-specific options.
+func NewBetaService(opts ...option.RequestOption) (r BetaService) {
+	r = BetaService{}
+	r.Options = opts
+	r.Models = NewBetaModelService(opts...)
+	r.Messages = NewBetaMessageService(opts...)
+	r.Files = NewBetaFileService(opts...)
+	return
+}
+
+type AnthropicBeta = string
+
+const (
+	AnthropicBetaMessageBatches2024_09_24      AnthropicBeta = "message-batches-2024-09-24"
+	AnthropicBetaPromptCaching2024_07_31       AnthropicBeta = "prompt-caching-2024-07-31"
+	AnthropicBetaComputerUse2024_10_22         AnthropicBeta = "computer-use-2024-10-22"
+	AnthropicBetaComputerUse2025_01_24         AnthropicBeta = "computer-use-2025-01-24"
+	AnthropicBetaPDFs2024_09_25                AnthropicBeta = "pdfs-2024-09-25"
+	AnthropicBetaTokenCounting2024_11_01       AnthropicBeta = "token-counting-2024-11-01"
+	AnthropicBetaTokenEfficientTools2025_02_19 AnthropicBeta = "token-efficient-tools-2025-02-19"
+	AnthropicBetaOutput128k2025_02_19          AnthropicBeta = "output-128k-2025-02-19"
+	AnthropicBetaFilesAPI2025_04_14            AnthropicBeta = "files-api-2025-04-14"
+	AnthropicBetaMCPClient2025_04_04           AnthropicBeta = "mcp-client-2025-04-04"
+	AnthropicBetaDevFullThinking2025_05_14     AnthropicBeta = "dev-full-thinking-2025-05-14"
+	AnthropicBetaInterleavedThinking2025_05_14 AnthropicBeta = "interleaved-thinking-2025-05-14"
+	AnthropicBetaCodeExecution2025_05_22       AnthropicBeta = "code-execution-2025-05-22"
+	AnthropicBetaExtendedCacheTTL2025_04_11    AnthropicBeta = "extended-cache-ttl-2025-04-11"
+)
+
+type BetaAPIError struct {
+	Message string            `json:"message,required"`
+	Type    constant.APIError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaAPIError) RawJSON() string { return r.JSON.raw }
+func (r *BetaAPIError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaAuthenticationError struct {
+	Message string                       `json:"message,required"`
+	Type    constant.AuthenticationError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaAuthenticationError) RawJSON() string { return r.JSON.raw }
+func (r *BetaAuthenticationError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaBillingError struct {
+	Message string                `json:"message,required"`
+	Type    constant.BillingError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaBillingError) RawJSON() string { return r.JSON.raw }
+func (r *BetaBillingError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaErrorUnion contains all possible properties and values from
+// [BetaInvalidRequestError], [BetaAuthenticationError], [BetaBillingError],
+// [BetaPermissionError], [BetaNotFoundError], [BetaRateLimitError],
+// [BetaGatewayTimeoutError], [BetaAPIError], [BetaOverloadedError].
+//
+// Use the [BetaErrorUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type BetaErrorUnion struct {
+	Message string `json:"message"`
+	// Any of "invalid_request_error", "authentication_error", "billing_error",
+	// "permission_error", "not_found_error", "rate_limit_error", "timeout_error",
+	// "api_error", "overloaded_error".
+	Type string `json:"type"`
+	JSON struct {
+		Message respjson.Field
+		Type    respjson.Field
+		raw     string
+	} `json:"-"`
+}
+
+// anyBetaError is implemented by each variant of [BetaErrorUnion] to add type
+// safety for the return type of [BetaErrorUnion.AsAny]
+type anyBetaError interface {
+	implBetaErrorUnion()
+}
+
+func (BetaInvalidRequestError) implBetaErrorUnion() {}
+func (BetaAuthenticationError) implBetaErrorUnion() {}
+func (BetaBillingError) implBetaErrorUnion()        {}
+func (BetaPermissionError) implBetaErrorUnion()     {}
+func (BetaNotFoundError) implBetaErrorUnion()       {}
+func (BetaRateLimitError) implBetaErrorUnion()      {}
+func (BetaGatewayTimeoutError) implBetaErrorUnion() {}
+func (BetaAPIError) implBetaErrorUnion()            {}
+func (BetaOverloadedError) implBetaErrorUnion()     {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := BetaErrorUnion.AsAny().(type) {
+//	case anthropic.BetaInvalidRequestError:
+//	case anthropic.BetaAuthenticationError:
+//	case anthropic.BetaBillingError:
+//	case anthropic.BetaPermissionError:
+//	case anthropic.BetaNotFoundError:
+//	case anthropic.BetaRateLimitError:
+//	case anthropic.BetaGatewayTimeoutError:
+//	case anthropic.BetaAPIError:
+//	case anthropic.BetaOverloadedError:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u BetaErrorUnion) AsAny() anyBetaError {
+	switch u.Type {
+	case "invalid_request_error":
+		return u.AsInvalidRequestError()
+	case "authentication_error":
+		return u.AsAuthenticationError()
+	case "billing_error":
+		return u.AsBillingError()
+	case "permission_error":
+		return u.AsPermissionError()
+	case "not_found_error":
+		return u.AsNotFoundError()
+	case "rate_limit_error":
+		return u.AsRateLimitError()
+	case "timeout_error":
+		return u.AsTimeoutError()
+	case "api_error":
+		return u.AsAPIError()
+	case "overloaded_error":
+		return u.AsOverloadedError()
+	}
+	return nil
+}
+
+func (u BetaErrorUnion) AsInvalidRequestError() (v BetaInvalidRequestError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaErrorUnion) AsAuthenticationError() (v BetaAuthenticationError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaErrorUnion) AsBillingError() (v BetaBillingError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaErrorUnion) AsPermissionError() (v BetaPermissionError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaErrorUnion) AsNotFoundError() (v BetaNotFoundError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaErrorUnion) AsRateLimitError() (v BetaRateLimitError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaErrorUnion) AsTimeoutError() (v BetaGatewayTimeoutError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaErrorUnion) AsAPIError() (v BetaAPIError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaErrorUnion) AsOverloadedError() (v BetaOverloadedError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u BetaErrorUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *BetaErrorUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaErrorResponse struct {
+	Error BetaErrorUnion `json:"error,required"`
+	Type  constant.Error `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Error       respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaErrorResponse) RawJSON() string { return r.JSON.raw }
+func (r *BetaErrorResponse) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaGatewayTimeoutError struct {
+	Message string                `json:"message,required"`
+	Type    constant.TimeoutError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaGatewayTimeoutError) RawJSON() string { return r.JSON.raw }
+func (r *BetaGatewayTimeoutError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaInvalidRequestError struct {
+	Message string                       `json:"message,required"`
+	Type    constant.InvalidRequestError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaInvalidRequestError) RawJSON() string { return r.JSON.raw }
+func (r *BetaInvalidRequestError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaNotFoundError struct {
+	Message string                 `json:"message,required"`
+	Type    constant.NotFoundError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaNotFoundError) RawJSON() string { return r.JSON.raw }
+func (r *BetaNotFoundError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaOverloadedError struct {
+	Message string                   `json:"message,required"`
+	Type    constant.OverloadedError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaOverloadedError) RawJSON() string { return r.JSON.raw }
+func (r *BetaOverloadedError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaPermissionError struct {
+	Message string                   `json:"message,required"`
+	Type    constant.PermissionError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaPermissionError) RawJSON() string { return r.JSON.raw }
+func (r *BetaPermissionError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaRateLimitError struct {
+	Message string                  `json:"message,required"`
+	Type    constant.RateLimitError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaRateLimitError) RawJSON() string { return r.JSON.raw }
+func (r *BetaRateLimitError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}

vendor/github.com/anthropics/anthropic-sdk-go/betafile.go 🔗

@@ -0,0 +1,270 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package anthropic
+
+import (
+	"bytes"
+	"context"
+	"errors"
+	"fmt"
+	"io"
+	"mime/multipart"
+	"net/http"
+	"net/url"
+	"time"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/apiform"
+	"github.com/anthropics/anthropic-sdk-go/internal/apijson"
+	"github.com/anthropics/anthropic-sdk-go/internal/apiquery"
+	"github.com/anthropics/anthropic-sdk-go/internal/requestconfig"
+	"github.com/anthropics/anthropic-sdk-go/option"
+	"github.com/anthropics/anthropic-sdk-go/packages/pagination"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+	"github.com/anthropics/anthropic-sdk-go/shared/constant"
+)
+
+// BetaFileService contains methods and other services that help with interacting
+// with the anthropic API.
+//
+// Note, unlike clients, this service does not read variables from the environment
+// automatically. You should not instantiate this service directly; use the
+// [NewBetaFileService] method instead.
+type BetaFileService struct {
+	Options []option.RequestOption
+}
+
+// NewBetaFileService generates a new service that applies the given options to
+// each request. These options are applied after the parent client's options (if
+// there is one), and before any request-specific options.
+func NewBetaFileService(opts ...option.RequestOption) (r BetaFileService) {
+	r = BetaFileService{}
+	r.Options = opts
+	return
+}
+
+// List Files
+func (r *BetaFileService) List(ctx context.Context, params BetaFileListParams, opts ...option.RequestOption) (res *pagination.Page[FileMetadata], err error) {
+	var raw *http.Response
+	for _, v := range params.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithHeader("anthropic-beta", "files-api-2025-04-14"), option.WithResponseInto(&raw)}, opts...)
+	path := "v1/files?beta=true"
+	cfg, err := requestconfig.NewRequestConfig(ctx, http.MethodGet, path, params, &res, opts...)
+	if err != nil {
+		return nil, err
+	}
+	err = cfg.Execute()
+	if err != nil {
+		return nil, err
+	}
+	res.SetPageConfig(cfg, raw)
+	return res, nil
+}
+
+// List Files
+func (r *BetaFileService) ListAutoPaging(ctx context.Context, params BetaFileListParams, opts ...option.RequestOption) *pagination.PageAutoPager[FileMetadata] {
+	return pagination.NewPageAutoPager(r.List(ctx, params, opts...))
+}
+
+// Delete File
+func (r *BetaFileService) Delete(ctx context.Context, fileID string, body BetaFileDeleteParams, opts ...option.RequestOption) (res *DeletedFile, err error) {
+	for _, v := range body.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithHeader("anthropic-beta", "files-api-2025-04-14")}, opts...)
+	if fileID == "" {
+		err = errors.New("missing required file_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/files/%s?beta=true", fileID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodDelete, path, nil, &res, opts...)
+	return
+}
+
+// Download File
+func (r *BetaFileService) Download(ctx context.Context, fileID string, query BetaFileDownloadParams, opts ...option.RequestOption) (res *http.Response, err error) {
+	for _, v := range query.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithHeader("anthropic-beta", "files-api-2025-04-14"), option.WithHeader("Accept", "application/binary")}, opts...)
+	if fileID == "" {
+		err = errors.New("missing required file_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/files/%s/content?beta=true", fileID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodGet, path, nil, &res, opts...)
+	return
+}
+
+// Get File Metadata
+func (r *BetaFileService) GetMetadata(ctx context.Context, fileID string, query BetaFileGetMetadataParams, opts ...option.RequestOption) (res *FileMetadata, err error) {
+	for _, v := range query.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithHeader("anthropic-beta", "files-api-2025-04-14")}, opts...)
+	if fileID == "" {
+		err = errors.New("missing required file_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/files/%s?beta=true", fileID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodGet, path, nil, &res, opts...)
+	return
+}
+
+// Upload File
+func (r *BetaFileService) Upload(ctx context.Context, params BetaFileUploadParams, opts ...option.RequestOption) (res *FileMetadata, err error) {
+	for _, v := range params.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithHeader("anthropic-beta", "files-api-2025-04-14")}, opts...)
+	path := "v1/files?beta=true"
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, params, &res, opts...)
+	return
+}
+
+type DeletedFile struct {
+	// ID of the deleted file.
+	ID string `json:"id,required"`
+	// Deleted object type.
+	//
+	// For file deletion, this is always `"file_deleted"`.
+	//
+	// Any of "file_deleted".
+	Type DeletedFileType `json:"type"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID          respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r DeletedFile) RawJSON() string { return r.JSON.raw }
+func (r *DeletedFile) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Deleted object type.
+//
+// For file deletion, this is always `"file_deleted"`.
+type DeletedFileType string
+
+const (
+	DeletedFileTypeFileDeleted DeletedFileType = "file_deleted"
+)
+
+type FileMetadata struct {
+	// Unique object identifier.
+	//
+	// The format and length of IDs may change over time.
+	ID string `json:"id,required"`
+	// RFC 3339 datetime string representing when the file was created.
+	CreatedAt time.Time `json:"created_at,required" format:"date-time"`
+	// Original filename of the uploaded file.
+	Filename string `json:"filename,required"`
+	// MIME type of the file.
+	MimeType string `json:"mime_type,required"`
+	// Size of the file in bytes.
+	SizeBytes int64 `json:"size_bytes,required"`
+	// Object type.
+	//
+	// For files, this is always `"file"`.
+	Type constant.File `json:"type,required"`
+	// Whether the file can be downloaded.
+	Downloadable bool `json:"downloadable"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID           respjson.Field
+		CreatedAt    respjson.Field
+		Filename     respjson.Field
+		MimeType     respjson.Field
+		SizeBytes    respjson.Field
+		Type         respjson.Field
+		Downloadable respjson.Field
+		ExtraFields  map[string]respjson.Field
+		raw          string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r FileMetadata) RawJSON() string { return r.JSON.raw }
+func (r *FileMetadata) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaFileListParams struct {
+	// ID of the object to use as a cursor for pagination. When provided, returns the
+	// page of results immediately after this object.
+	AfterID param.Opt[string] `query:"after_id,omitzero" json:"-"`
+	// ID of the object to use as a cursor for pagination. When provided, returns the
+	// page of results immediately before this object.
+	BeforeID param.Opt[string] `query:"before_id,omitzero" json:"-"`
+	// Number of items to return per page.
+	//
+	// Defaults to `20`. Ranges from `1` to `1000`.
+	Limit param.Opt[int64] `query:"limit,omitzero" json:"-"`
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+// URLQuery serializes [BetaFileListParams]'s query parameters as `url.Values`.
+func (r BetaFileListParams) URLQuery() (v url.Values, err error) {
+	return apiquery.MarshalWithSettings(r, apiquery.QuerySettings{
+		ArrayFormat:  apiquery.ArrayQueryFormatComma,
+		NestedFormat: apiquery.NestedQueryFormatBrackets,
+	})
+}
+
+type BetaFileDeleteParams struct {
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+type BetaFileDownloadParams struct {
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+type BetaFileGetMetadataParams struct {
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+type BetaFileUploadParams struct {
+	// The file to upload
+	File io.Reader `json:"file,omitzero,required" format:"binary"`
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+func (r BetaFileUploadParams) MarshalMultipart() (data []byte, contentType string, err error) {
+	buf := bytes.NewBuffer(nil)
+	writer := multipart.NewWriter(buf)
+	err = apiform.MarshalRoot(r, writer)
+	if err == nil {
+		err = apiform.WriteExtras(writer, r.ExtraFields())
+	}
+	if err != nil {
+		writer.Close()
+		return nil, "", err
+	}
+	err = writer.Close()
+	if err != nil {
+		return nil, "", err
+	}
+	return buf.Bytes(), writer.FormDataContentType(), nil
+}

vendor/github.com/anthropics/anthropic-sdk-go/betamessage.go 🔗

@@ -0,0 +1,5823 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package anthropic
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"net/http"
+	"time"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/apijson"
+	"github.com/anthropics/anthropic-sdk-go/internal/paramutil"
+	"github.com/anthropics/anthropic-sdk-go/internal/requestconfig"
+	"github.com/anthropics/anthropic-sdk-go/option"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+	"github.com/anthropics/anthropic-sdk-go/packages/ssestream"
+	"github.com/anthropics/anthropic-sdk-go/shared/constant"
+)
+
+// BetaMessageService contains methods and other services that help with
+// interacting with the anthropic API.
+//
+// Note, unlike clients, this service does not read variables from the environment
+// automatically. You should not instantiate this service directly; use the
+// [NewBetaMessageService] method instead.
+type BetaMessageService struct {
+	Options []option.RequestOption
+	Batches BetaMessageBatchService
+}
+
+// NewBetaMessageService generates a new service that applies the given options to
+// each request. These options are applied after the parent client's options (if
+// there is one), and before any request-specific options.
+func NewBetaMessageService(opts ...option.RequestOption) (r BetaMessageService) {
+	r = BetaMessageService{}
+	r.Options = opts
+	r.Batches = NewBetaMessageBatchService(opts...)
+	return
+}
+
+// Send a structured list of input messages with text and/or image content, and the
+// model will generate the next message in the conversation.
+//
+// The Messages API can be used for either single queries or stateless multi-turn
+// conversations.
+//
+// Learn more about the Messages API in our [user guide](/en/docs/initial-setup)
+//
+// Note: If you choose to set a timeout for this request, we recommend 10 minutes.
+func (r *BetaMessageService) New(ctx context.Context, params BetaMessageNewParams, opts ...option.RequestOption) (res *BetaMessage, err error) {
+	for _, v := range params.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+
+	// For non-streaming requests, calculate the appropriate timeout based on maxTokens
+	// and check against model-specific limits
+	timeout, timeoutErr := CalculateNonStreamingTimeout(int(params.MaxTokens), params.Model, opts)
+	if timeoutErr != nil {
+		return nil, timeoutErr
+	}
+	opts = append(opts, option.WithRequestTimeout(timeout))
+
+	path := "v1/messages?beta=true"
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, params, &res, opts...)
+	return
+}
+
+// Send a structured list of input messages with text and/or image content, and the
+// model will generate the next message in the conversation.
+//
+// The Messages API can be used for either single queries or stateless multi-turn
+// conversations.
+//
+// Learn more about the Messages API in our [user guide](/en/docs/initial-setup)
+//
+// Note: If you choose to set a timeout for this request, we recommend 10 minutes.
+func (r *BetaMessageService) NewStreaming(ctx context.Context, params BetaMessageNewParams, opts ...option.RequestOption) (stream *ssestream.Stream[BetaRawMessageStreamEventUnion]) {
+	var (
+		raw *http.Response
+		err error
+	)
+	for _, v := range params.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithJSONSet("stream", true)}, opts...)
+	path := "v1/messages?beta=true"
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, params, &raw, opts...)
+	return ssestream.NewStream[BetaRawMessageStreamEventUnion](ssestream.NewDecoder(raw), err)
+}
+
+// Count the number of tokens in a Message.
+//
+// The Token Count API can be used to count the number of tokens in a Message,
+// including tools, images, and documents, without creating it.
+//
+// Learn more about token counting in our
+// [user guide](/en/docs/build-with-claude/token-counting)
+func (r *BetaMessageService) CountTokens(ctx context.Context, params BetaMessageCountTokensParams, opts ...option.RequestOption) (res *BetaMessageTokensCount, err error) {
+	for _, v := range params.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	path := "v1/messages/count_tokens?beta=true"
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, params, &res, opts...)
+	return
+}
+
+// The properties Data, MediaType, Type are required.
+type BetaBase64ImageSourceParam struct {
+	Data string `json:"data,required" format:"byte"`
+	// Any of "image/jpeg", "image/png", "image/gif", "image/webp".
+	MediaType BetaBase64ImageSourceMediaType `json:"media_type,omitzero,required"`
+	// This field can be elided, and will marshal its zero value as "base64".
+	Type constant.Base64 `json:"type,required"`
+	paramObj
+}
+
+func (r BetaBase64ImageSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaBase64ImageSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaBase64ImageSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaBase64ImageSourceMediaType string
+
+const (
+	BetaBase64ImageSourceMediaTypeImageJPEG BetaBase64ImageSourceMediaType = "image/jpeg"
+	BetaBase64ImageSourceMediaTypeImagePNG  BetaBase64ImageSourceMediaType = "image/png"
+	BetaBase64ImageSourceMediaTypeImageGIF  BetaBase64ImageSourceMediaType = "image/gif"
+	BetaBase64ImageSourceMediaTypeImageWebP BetaBase64ImageSourceMediaType = "image/webp"
+)
+
+// The properties Source, Type are required.
+type BetaBase64PDFBlockParam struct {
+	Source  BetaBase64PDFBlockSourceUnionParam `json:"source,omitzero,required"`
+	Context param.Opt[string]                  `json:"context,omitzero"`
+	Title   param.Opt[string]                  `json:"title,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	Citations    BetaCitationsConfigParam       `json:"citations,omitzero"`
+	// This field can be elided, and will marshal its zero value as "document".
+	Type constant.Document `json:"type,required"`
+	paramObj
+}
+
+func (r BetaBase64PDFBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaBase64PDFBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaBase64PDFBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaBase64PDFBlockSourceUnionParam struct {
+	OfBase64  *BetaBase64PDFSourceParam    `json:",omitzero,inline"`
+	OfText    *BetaPlainTextSourceParam    `json:",omitzero,inline"`
+	OfContent *BetaContentBlockSourceParam `json:",omitzero,inline"`
+	OfURL     *BetaURLPDFSourceParam       `json:",omitzero,inline"`
+	OfFile    *BetaFileDocumentSourceParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaBase64PDFBlockSourceUnionParam) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfBase64,
+		u.OfText,
+		u.OfContent,
+		u.OfURL,
+		u.OfFile)
+}
+func (u *BetaBase64PDFBlockSourceUnionParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaBase64PDFBlockSourceUnionParam) asAny() any {
+	if !param.IsOmitted(u.OfBase64) {
+		return u.OfBase64
+	} else if !param.IsOmitted(u.OfText) {
+		return u.OfText
+	} else if !param.IsOmitted(u.OfContent) {
+		return u.OfContent
+	} else if !param.IsOmitted(u.OfURL) {
+		return u.OfURL
+	} else if !param.IsOmitted(u.OfFile) {
+		return u.OfFile
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaBase64PDFBlockSourceUnionParam) GetContent() *BetaContentBlockSourceContentUnionParam {
+	if vt := u.OfContent; vt != nil {
+		return &vt.Content
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaBase64PDFBlockSourceUnionParam) GetURL() *string {
+	if vt := u.OfURL; vt != nil {
+		return &vt.URL
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaBase64PDFBlockSourceUnionParam) GetFileID() *string {
+	if vt := u.OfFile; vt != nil {
+		return &vt.FileID
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaBase64PDFBlockSourceUnionParam) GetData() *string {
+	if vt := u.OfBase64; vt != nil {
+		return (*string)(&vt.Data)
+	} else if vt := u.OfText; vt != nil {
+		return (*string)(&vt.Data)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaBase64PDFBlockSourceUnionParam) GetMediaType() *string {
+	if vt := u.OfBase64; vt != nil {
+		return (*string)(&vt.MediaType)
+	} else if vt := u.OfText; vt != nil {
+		return (*string)(&vt.MediaType)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaBase64PDFBlockSourceUnionParam) GetType() *string {
+	if vt := u.OfBase64; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfText; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfContent; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfURL; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfFile; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+func init() {
+	apijson.RegisterUnion[BetaBase64PDFBlockSourceUnionParam](
+		"type",
+		apijson.Discriminator[BetaBase64PDFSourceParam]("base64"),
+		apijson.Discriminator[BetaPlainTextSourceParam]("text"),
+		apijson.Discriminator[BetaContentBlockSourceParam]("content"),
+		apijson.Discriminator[BetaURLPDFSourceParam]("url"),
+		apijson.Discriminator[BetaFileDocumentSourceParam]("file"),
+	)
+}
+
+func init() {
+	apijson.RegisterUnion[BetaContentBlockParamUnion](
+		"type",
+		apijson.Discriminator[BetaServerToolUseBlockParam]("server_tool_use"),
+		apijson.Discriminator[BetaWebSearchToolResultBlockParam]("web_search_tool_result"),
+		apijson.Discriminator[BetaCodeExecutionToolResultBlockParam]("code_execution_tool_result"),
+		apijson.Discriminator[BetaMCPToolUseBlockParam]("mcp_tool_use"),
+		apijson.Discriminator[BetaRequestMCPToolResultBlockParam]("mcp_tool_result"),
+		apijson.Discriminator[BetaTextBlockParam]("text"),
+		apijson.Discriminator[BetaImageBlockParam]("image"),
+		apijson.Discriminator[BetaToolUseBlockParam]("tool_use"),
+		apijson.Discriminator[BetaToolResultBlockParam]("tool_result"),
+		apijson.Discriminator[BetaBase64PDFBlockParam]("document"),
+		apijson.Discriminator[BetaThinkingBlockParam]("thinking"),
+		apijson.Discriminator[BetaRedactedThinkingBlockParam]("redacted_thinking"),
+		apijson.Discriminator[BetaContainerUploadBlockParam]("container_upload"),
+	)
+}
+
+func init() {
+	apijson.RegisterUnion[BetaImageBlockParamSourceUnion](
+		"type",
+		apijson.Discriminator[BetaBase64ImageSourceParam]("base64"),
+		apijson.Discriminator[BetaURLImageSourceParam]("url"),
+		apijson.Discriminator[BetaFileImageSourceParam]("file"),
+	)
+}
+
+func init() {
+	apijson.RegisterUnion[BetaTextCitationParamUnion](
+		"type",
+		apijson.Discriminator[BetaCitationCharLocationParam]("char_location"),
+		apijson.Discriminator[BetaCitationPageLocationParam]("page_location"),
+		apijson.Discriminator[BetaCitationContentBlockLocationParam]("content_block_location"),
+		apijson.Discriminator[BetaCitationWebSearchResultLocationParam]("web_search_result_location"),
+	)
+}
+
+func init() {
+	apijson.RegisterUnion[BetaThinkingConfigParamUnion](
+		"type",
+		apijson.Discriminator[BetaThinkingConfigEnabledParam]("enabled"),
+		apijson.Discriminator[BetaThinkingConfigDisabledParam]("disabled"),
+	)
+}
+
+func init() {
+	apijson.RegisterUnion[BetaToolChoiceUnionParam](
+		"type",
+		apijson.Discriminator[BetaToolChoiceAutoParam]("auto"),
+		apijson.Discriminator[BetaToolChoiceAnyParam]("any"),
+		apijson.Discriminator[BetaToolChoiceToolParam]("tool"),
+		apijson.Discriminator[BetaToolChoiceNoneParam]("none"),
+	)
+}
+
+func init() {
+	apijson.RegisterUnion[BetaToolResultBlockParamContentUnion](
+		"type",
+		apijson.Discriminator[BetaTextBlockParam]("text"),
+		apijson.Discriminator[BetaImageBlockParam]("image"),
+	)
+}
+
+// The properties Data, MediaType, Type are required.
+type BetaBase64PDFSourceParam struct {
+	Data string `json:"data,required" format:"byte"`
+	// This field can be elided, and will marshal its zero value as "application/pdf".
+	MediaType constant.ApplicationPDF `json:"media_type,required"`
+	// This field can be elided, and will marshal its zero value as "base64".
+	Type constant.Base64 `json:"type,required"`
+	paramObj
+}
+
+func (r BetaBase64PDFSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaBase64PDFSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaBase64PDFSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func NewBetaCacheControlEphemeralParam() BetaCacheControlEphemeralParam {
+	return BetaCacheControlEphemeralParam{
+		Type: "ephemeral",
+	}
+}
+
+// This struct has a constant value, construct it with
+// [NewBetaCacheControlEphemeralParam].
+type BetaCacheControlEphemeralParam struct {
+	// The time-to-live for the cache control breakpoint.
+	//
+	// This may be one of the following values:
+	//
+	// - `5m`: 5 minutes
+	// - `1h`: 1 hour
+	//
+	// Defaults to `5m`.
+	//
+	// Any of "5m", "1h".
+	TTL  BetaCacheControlEphemeralTTL `json:"ttl,omitzero"`
+	Type constant.Ephemeral           `json:"type,required"`
+	paramObj
+}
+
+func (r BetaCacheControlEphemeralParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaCacheControlEphemeralParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaCacheControlEphemeralParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The time-to-live for the cache control breakpoint.
+//
+// This may be one of the following values:
+//
+// - `5m`: 5 minutes
+// - `1h`: 1 hour
+//
+// Defaults to `5m`.
+type BetaCacheControlEphemeralTTL string
+
+const (
+	BetaCacheControlEphemeralTTLTTL5m BetaCacheControlEphemeralTTL = "5m"
+	BetaCacheControlEphemeralTTLTTL1h BetaCacheControlEphemeralTTL = "1h"
+)
+
+type BetaCacheCreation struct {
+	// The number of input tokens used to create the 1 hour cache entry.
+	Ephemeral1hInputTokens int64 `json:"ephemeral_1h_input_tokens,required"`
+	// The number of input tokens used to create the 5 minute cache entry.
+	Ephemeral5mInputTokens int64 `json:"ephemeral_5m_input_tokens,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Ephemeral1hInputTokens respjson.Field
+		Ephemeral5mInputTokens respjson.Field
+		ExtraFields            map[string]respjson.Field
+		raw                    string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaCacheCreation) RawJSON() string { return r.JSON.raw }
+func (r *BetaCacheCreation) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaCitationCharLocation struct {
+	CitedText      string                `json:"cited_text,required"`
+	DocumentIndex  int64                 `json:"document_index,required"`
+	DocumentTitle  string                `json:"document_title,required"`
+	EndCharIndex   int64                 `json:"end_char_index,required"`
+	StartCharIndex int64                 `json:"start_char_index,required"`
+	Type           constant.CharLocation `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CitedText      respjson.Field
+		DocumentIndex  respjson.Field
+		DocumentTitle  respjson.Field
+		EndCharIndex   respjson.Field
+		StartCharIndex respjson.Field
+		Type           respjson.Field
+		ExtraFields    map[string]respjson.Field
+		raw            string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaCitationCharLocation) RawJSON() string { return r.JSON.raw }
+func (r *BetaCitationCharLocation) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties CitedText, DocumentIndex, DocumentTitle, EndCharIndex,
+// StartCharIndex, Type are required.
+type BetaCitationCharLocationParam struct {
+	DocumentTitle  param.Opt[string] `json:"document_title,omitzero,required"`
+	CitedText      string            `json:"cited_text,required"`
+	DocumentIndex  int64             `json:"document_index,required"`
+	EndCharIndex   int64             `json:"end_char_index,required"`
+	StartCharIndex int64             `json:"start_char_index,required"`
+	// This field can be elided, and will marshal its zero value as "char_location".
+	Type constant.CharLocation `json:"type,required"`
+	paramObj
+}
+
+func (r BetaCitationCharLocationParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaCitationCharLocationParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaCitationCharLocationParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaCitationContentBlockLocation struct {
+	CitedText       string                        `json:"cited_text,required"`
+	DocumentIndex   int64                         `json:"document_index,required"`
+	DocumentTitle   string                        `json:"document_title,required"`
+	EndBlockIndex   int64                         `json:"end_block_index,required"`
+	StartBlockIndex int64                         `json:"start_block_index,required"`
+	Type            constant.ContentBlockLocation `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CitedText       respjson.Field
+		DocumentIndex   respjson.Field
+		DocumentTitle   respjson.Field
+		EndBlockIndex   respjson.Field
+		StartBlockIndex respjson.Field
+		Type            respjson.Field
+		ExtraFields     map[string]respjson.Field
+		raw             string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaCitationContentBlockLocation) RawJSON() string { return r.JSON.raw }
+func (r *BetaCitationContentBlockLocation) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties CitedText, DocumentIndex, DocumentTitle, EndBlockIndex,
+// StartBlockIndex, Type are required.
+type BetaCitationContentBlockLocationParam struct {
+	DocumentTitle   param.Opt[string] `json:"document_title,omitzero,required"`
+	CitedText       string            `json:"cited_text,required"`
+	DocumentIndex   int64             `json:"document_index,required"`
+	EndBlockIndex   int64             `json:"end_block_index,required"`
+	StartBlockIndex int64             `json:"start_block_index,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "content_block_location".
+	Type constant.ContentBlockLocation `json:"type,required"`
+	paramObj
+}
+
+func (r BetaCitationContentBlockLocationParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaCitationContentBlockLocationParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaCitationContentBlockLocationParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaCitationPageLocation struct {
+	CitedText       string                `json:"cited_text,required"`
+	DocumentIndex   int64                 `json:"document_index,required"`
+	DocumentTitle   string                `json:"document_title,required"`
+	EndPageNumber   int64                 `json:"end_page_number,required"`
+	StartPageNumber int64                 `json:"start_page_number,required"`
+	Type            constant.PageLocation `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CitedText       respjson.Field
+		DocumentIndex   respjson.Field
+		DocumentTitle   respjson.Field
+		EndPageNumber   respjson.Field
+		StartPageNumber respjson.Field
+		Type            respjson.Field
+		ExtraFields     map[string]respjson.Field
+		raw             string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaCitationPageLocation) RawJSON() string { return r.JSON.raw }
+func (r *BetaCitationPageLocation) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties CitedText, DocumentIndex, DocumentTitle, EndPageNumber,
+// StartPageNumber, Type are required.
+type BetaCitationPageLocationParam struct {
+	DocumentTitle   param.Opt[string] `json:"document_title,omitzero,required"`
+	CitedText       string            `json:"cited_text,required"`
+	DocumentIndex   int64             `json:"document_index,required"`
+	EndPageNumber   int64             `json:"end_page_number,required"`
+	StartPageNumber int64             `json:"start_page_number,required"`
+	// This field can be elided, and will marshal its zero value as "page_location".
+	Type constant.PageLocation `json:"type,required"`
+	paramObj
+}
+
+func (r BetaCitationPageLocationParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaCitationPageLocationParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaCitationPageLocationParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties CitedText, EncryptedIndex, Title, Type, URL are required.
+type BetaCitationWebSearchResultLocationParam struct {
+	Title          param.Opt[string] `json:"title,omitzero,required"`
+	CitedText      string            `json:"cited_text,required"`
+	EncryptedIndex string            `json:"encrypted_index,required"`
+	URL            string            `json:"url,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "web_search_result_location".
+	Type constant.WebSearchResultLocation `json:"type,required"`
+	paramObj
+}
+
+func (r BetaCitationWebSearchResultLocationParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaCitationWebSearchResultLocationParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaCitationWebSearchResultLocationParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaCitationsConfigParam struct {
+	Enabled param.Opt[bool] `json:"enabled,omitzero"`
+	paramObj
+}
+
+func (r BetaCitationsConfigParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaCitationsConfigParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaCitationsConfigParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaCitationsDelta struct {
+	Citation BetaCitationsDeltaCitationUnion `json:"citation,required"`
+	Type     constant.CitationsDelta         `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Citation    respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaCitationsDelta) RawJSON() string { return r.JSON.raw }
+func (r *BetaCitationsDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaCitationsDeltaCitationUnion contains all possible properties and values from
+// [BetaCitationCharLocation], [BetaCitationPageLocation],
+// [BetaCitationContentBlockLocation], [BetaCitationsWebSearchResultLocation].
+//
+// Use the [BetaCitationsDeltaCitationUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type BetaCitationsDeltaCitationUnion struct {
+	CitedText     string `json:"cited_text"`
+	DocumentIndex int64  `json:"document_index"`
+	DocumentTitle string `json:"document_title"`
+	// This field is from variant [BetaCitationCharLocation].
+	EndCharIndex int64 `json:"end_char_index"`
+	// This field is from variant [BetaCitationCharLocation].
+	StartCharIndex int64 `json:"start_char_index"`
+	// Any of "char_location", "page_location", "content_block_location",
+	// "web_search_result_location".
+	Type string `json:"type"`
+	// This field is from variant [BetaCitationPageLocation].
+	EndPageNumber int64 `json:"end_page_number"`
+	// This field is from variant [BetaCitationPageLocation].
+	StartPageNumber int64 `json:"start_page_number"`
+	// This field is from variant [BetaCitationContentBlockLocation].
+	EndBlockIndex int64 `json:"end_block_index"`
+	// This field is from variant [BetaCitationContentBlockLocation].
+	StartBlockIndex int64 `json:"start_block_index"`
+	// This field is from variant [BetaCitationsWebSearchResultLocation].
+	EncryptedIndex string `json:"encrypted_index"`
+	// This field is from variant [BetaCitationsWebSearchResultLocation].
+	Title string `json:"title"`
+	// This field is from variant [BetaCitationsWebSearchResultLocation].
+	URL  string `json:"url"`
+	JSON struct {
+		CitedText       respjson.Field
+		DocumentIndex   respjson.Field
+		DocumentTitle   respjson.Field
+		EndCharIndex    respjson.Field
+		StartCharIndex  respjson.Field
+		Type            respjson.Field
+		EndPageNumber   respjson.Field
+		StartPageNumber respjson.Field
+		EndBlockIndex   respjson.Field
+		StartBlockIndex respjson.Field
+		EncryptedIndex  respjson.Field
+		Title           respjson.Field
+		URL             respjson.Field
+		raw             string
+	} `json:"-"`
+}
+
+// anyBetaCitationsDeltaCitation is implemented by each variant of
+// [BetaCitationsDeltaCitationUnion] to add type safety for the return type of
+// [BetaCitationsDeltaCitationUnion.AsAny]
+type anyBetaCitationsDeltaCitation interface {
+	implBetaCitationsDeltaCitationUnion()
+}
+
+func (BetaCitationCharLocation) implBetaCitationsDeltaCitationUnion()             {}
+func (BetaCitationPageLocation) implBetaCitationsDeltaCitationUnion()             {}
+func (BetaCitationContentBlockLocation) implBetaCitationsDeltaCitationUnion()     {}
+func (BetaCitationsWebSearchResultLocation) implBetaCitationsDeltaCitationUnion() {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := BetaCitationsDeltaCitationUnion.AsAny().(type) {
+//	case anthropic.BetaCitationCharLocation:
+//	case anthropic.BetaCitationPageLocation:
+//	case anthropic.BetaCitationContentBlockLocation:
+//	case anthropic.BetaCitationsWebSearchResultLocation:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u BetaCitationsDeltaCitationUnion) AsAny() anyBetaCitationsDeltaCitation {
+	switch u.Type {
+	case "char_location":
+		return u.AsCharLocation()
+	case "page_location":
+		return u.AsPageLocation()
+	case "content_block_location":
+		return u.AsContentBlockLocation()
+	case "web_search_result_location":
+		return u.AsWebSearchResultLocation()
+	}
+	return nil
+}
+
+func (u BetaCitationsDeltaCitationUnion) AsCharLocation() (v BetaCitationCharLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaCitationsDeltaCitationUnion) AsPageLocation() (v BetaCitationPageLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaCitationsDeltaCitationUnion) AsContentBlockLocation() (v BetaCitationContentBlockLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaCitationsDeltaCitationUnion) AsWebSearchResultLocation() (v BetaCitationsWebSearchResultLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u BetaCitationsDeltaCitationUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *BetaCitationsDeltaCitationUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaCitationsWebSearchResultLocation struct {
+	CitedText      string                           `json:"cited_text,required"`
+	EncryptedIndex string                           `json:"encrypted_index,required"`
+	Title          string                           `json:"title,required"`
+	Type           constant.WebSearchResultLocation `json:"type,required"`
+	URL            string                           `json:"url,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CitedText      respjson.Field
+		EncryptedIndex respjson.Field
+		Title          respjson.Field
+		Type           respjson.Field
+		URL            respjson.Field
+		ExtraFields    map[string]respjson.Field
+		raw            string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaCitationsWebSearchResultLocation) RawJSON() string { return r.JSON.raw }
+func (r *BetaCitationsWebSearchResultLocation) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaCodeExecutionOutputBlock struct {
+	FileID string                       `json:"file_id,required"`
+	Type   constant.CodeExecutionOutput `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		FileID      respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaCodeExecutionOutputBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaCodeExecutionOutputBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties FileID, Type are required.
+type BetaCodeExecutionOutputBlockParam struct {
+	FileID string `json:"file_id,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "code_execution_output".
+	Type constant.CodeExecutionOutput `json:"type,required"`
+	paramObj
+}
+
+func (r BetaCodeExecutionOutputBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaCodeExecutionOutputBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaCodeExecutionOutputBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaCodeExecutionResultBlock struct {
+	Content    []BetaCodeExecutionOutputBlock `json:"content,required"`
+	ReturnCode int64                          `json:"return_code,required"`
+	Stderr     string                         `json:"stderr,required"`
+	Stdout     string                         `json:"stdout,required"`
+	Type       constant.CodeExecutionResult   `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Content     respjson.Field
+		ReturnCode  respjson.Field
+		Stderr      respjson.Field
+		Stdout      respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaCodeExecutionResultBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaCodeExecutionResultBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Content, ReturnCode, Stderr, Stdout, Type are required.
+type BetaCodeExecutionResultBlockParam struct {
+	Content    []BetaCodeExecutionOutputBlockParam `json:"content,omitzero,required"`
+	ReturnCode int64                               `json:"return_code,required"`
+	Stderr     string                              `json:"stderr,required"`
+	Stdout     string                              `json:"stdout,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "code_execution_result".
+	Type constant.CodeExecutionResult `json:"type,required"`
+	paramObj
+}
+
+func (r BetaCodeExecutionResultBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaCodeExecutionResultBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaCodeExecutionResultBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Name, Type are required.
+type BetaCodeExecutionTool20250522Param struct {
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as "code_execution".
+	Name constant.CodeExecution `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "code_execution_20250522".
+	Type constant.CodeExecution20250522 `json:"type,required"`
+	paramObj
+}
+
+func (r BetaCodeExecutionTool20250522Param) MarshalJSON() (data []byte, err error) {
+	type shadow BetaCodeExecutionTool20250522Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaCodeExecutionTool20250522Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaCodeExecutionToolResultBlock struct {
+	Content   BetaCodeExecutionToolResultBlockContentUnion `json:"content,required"`
+	ToolUseID string                                       `json:"tool_use_id,required"`
+	Type      constant.CodeExecutionToolResult             `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Content     respjson.Field
+		ToolUseID   respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaCodeExecutionToolResultBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaCodeExecutionToolResultBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaCodeExecutionToolResultBlockContentUnion contains all possible properties
+// and values from [BetaCodeExecutionToolResultError],
+// [BetaCodeExecutionResultBlock].
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type BetaCodeExecutionToolResultBlockContentUnion struct {
+	// This field is from variant [BetaCodeExecutionToolResultError].
+	ErrorCode BetaCodeExecutionToolResultErrorCode `json:"error_code"`
+	Type      string                               `json:"type"`
+	// This field is from variant [BetaCodeExecutionResultBlock].
+	Content []BetaCodeExecutionOutputBlock `json:"content"`
+	// This field is from variant [BetaCodeExecutionResultBlock].
+	ReturnCode int64 `json:"return_code"`
+	// This field is from variant [BetaCodeExecutionResultBlock].
+	Stderr string `json:"stderr"`
+	// This field is from variant [BetaCodeExecutionResultBlock].
+	Stdout string `json:"stdout"`
+	JSON   struct {
+		ErrorCode  respjson.Field
+		Type       respjson.Field
+		Content    respjson.Field
+		ReturnCode respjson.Field
+		Stderr     respjson.Field
+		Stdout     respjson.Field
+		raw        string
+	} `json:"-"`
+}
+
+func (u BetaCodeExecutionToolResultBlockContentUnion) AsResponseCodeExecutionToolResultError() (v BetaCodeExecutionToolResultError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaCodeExecutionToolResultBlockContentUnion) AsResponseCodeExecutionResultBlock() (v BetaCodeExecutionResultBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u BetaCodeExecutionToolResultBlockContentUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *BetaCodeExecutionToolResultBlockContentUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Content, ToolUseID, Type are required.
+type BetaCodeExecutionToolResultBlockParam struct {
+	Content   BetaCodeExecutionToolResultBlockParamContentUnion `json:"content,omitzero,required"`
+	ToolUseID string                                            `json:"tool_use_id,required"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as
+	// "code_execution_tool_result".
+	Type constant.CodeExecutionToolResult `json:"type,required"`
+	paramObj
+}
+
+func (r BetaCodeExecutionToolResultBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaCodeExecutionToolResultBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaCodeExecutionToolResultBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func BetaNewCodeExecutionToolRequestError(errorCode BetaCodeExecutionToolResultErrorCode) BetaCodeExecutionToolResultBlockParamContentUnion {
+	var variant BetaCodeExecutionToolResultErrorParam
+	variant.ErrorCode = errorCode
+	return BetaCodeExecutionToolResultBlockParamContentUnion{OfError: &variant}
+}
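+
+// Illustrative usage sketch (not generated code; the tool-use ID below is a
+// placeholder): building the content for a code execution tool result that
+// reports an error, then attaching it to a result block param.
+//
+//	content := BetaNewCodeExecutionToolRequestError(
+//		BetaCodeExecutionToolResultErrorCodeUnavailable,
+//	)
+//	_ = BetaCodeExecutionToolResultBlockParam{
+//		Content:   content,
+//		ToolUseID: "toolu_example", // placeholder tool-use ID
+//	}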
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaCodeExecutionToolResultBlockParamContentUnion struct {
+	OfError       *BetaCodeExecutionToolResultErrorParam `json:",omitzero,inline"`
+	OfResultBlock *BetaCodeExecutionResultBlockParam     `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaCodeExecutionToolResultBlockParamContentUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfError, u.OfResultBlock)
+}
+func (u *BetaCodeExecutionToolResultBlockParamContentUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaCodeExecutionToolResultBlockParamContentUnion) asAny() any {
+	if !param.IsOmitted(u.OfError) {
+		return u.OfError
+	} else if !param.IsOmitted(u.OfResultBlock) {
+		return u.OfResultBlock
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaCodeExecutionToolResultBlockParamContentUnion) GetErrorCode() *string {
+	if vt := u.OfError; vt != nil {
+		return (*string)(&vt.ErrorCode)
+	}
+	return nil
+}
+
+// Returns the underlying variant's property slice, if present.
+func (u BetaCodeExecutionToolResultBlockParamContentUnion) GetContent() []BetaCodeExecutionOutputBlockParam {
+	if vt := u.OfResultBlock; vt != nil {
+		return vt.Content
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaCodeExecutionToolResultBlockParamContentUnion) GetReturnCode() *int64 {
+	if vt := u.OfResultBlock; vt != nil {
+		return &vt.ReturnCode
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaCodeExecutionToolResultBlockParamContentUnion) GetStderr() *string {
+	if vt := u.OfResultBlock; vt != nil {
+		return &vt.Stderr
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaCodeExecutionToolResultBlockParamContentUnion) GetStdout() *string {
+	if vt := u.OfResultBlock; vt != nil {
+		return &vt.Stdout
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaCodeExecutionToolResultBlockParamContentUnion) GetType() *string {
+	if vt := u.OfError; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfResultBlock; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
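+
+// Illustrative usage sketch (not generated code; "content" is an assumed
+// BetaCodeExecutionToolResultBlockParamContentUnion value): the accessors
+// above let callers read shared properties without switching on the variant.
+//
+//	if code := content.GetReturnCode(); code != nil && *code != 0 {
+//		// GetStderr is non-nil whenever GetReturnCode is, since both
+//		// come from the OfResultBlock variant.
+//		fmt.Println("execution failed:", *content.GetStderr())
+//	}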
+
+type BetaCodeExecutionToolResultError struct {
+	// Any of "invalid_tool_input", "unavailable", "too_many_requests",
+	// "execution_time_exceeded".
+	ErrorCode BetaCodeExecutionToolResultErrorCode  `json:"error_code,required"`
+	Type      constant.CodeExecutionToolResultError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ErrorCode   respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaCodeExecutionToolResultError) RawJSON() string { return r.JSON.raw }
+func (r *BetaCodeExecutionToolResultError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaCodeExecutionToolResultErrorCode string
+
+const (
+	BetaCodeExecutionToolResultErrorCodeInvalidToolInput      BetaCodeExecutionToolResultErrorCode = "invalid_tool_input"
+	BetaCodeExecutionToolResultErrorCodeUnavailable           BetaCodeExecutionToolResultErrorCode = "unavailable"
+	BetaCodeExecutionToolResultErrorCodeTooManyRequests       BetaCodeExecutionToolResultErrorCode = "too_many_requests"
+	BetaCodeExecutionToolResultErrorCodeExecutionTimeExceeded BetaCodeExecutionToolResultErrorCode = "execution_time_exceeded"
+)
+
+// The properties ErrorCode, Type are required.
+type BetaCodeExecutionToolResultErrorParam struct {
+	// Any of "invalid_tool_input", "unavailable", "too_many_requests",
+	// "execution_time_exceeded".
+	ErrorCode BetaCodeExecutionToolResultErrorCode `json:"error_code,omitzero,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "code_execution_tool_result_error".
+	Type constant.CodeExecutionToolResultError `json:"type,required"`
+	paramObj
+}
+
+func (r BetaCodeExecutionToolResultErrorParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaCodeExecutionToolResultErrorParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaCodeExecutionToolResultErrorParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Information about the container used in the request (for the code execution
+// tool)
+type BetaContainer struct {
+	// Identifier for the container used in this request
+	ID string `json:"id,required"`
+	// The time at which the container will expire.
+	ExpiresAt time.Time `json:"expires_at,required" format:"date-time"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID          respjson.Field
+		ExpiresAt   respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaContainer) RawJSON() string { return r.JSON.raw }
+func (r *BetaContainer) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Response model for a file uploaded to the container.
+type BetaContainerUploadBlock struct {
+	FileID string                   `json:"file_id,required"`
+	Type   constant.ContainerUpload `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		FileID      respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaContainerUploadBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaContainerUploadBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// A content block that represents a file to be uploaded to the container.
+// Files uploaded via this block will be available in the container's input
+// directory.
+//
+// The properties FileID, Type are required.
+type BetaContainerUploadBlockParam struct {
+	FileID string `json:"file_id,required"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as "container_upload".
+	Type constant.ContainerUpload `json:"type,required"`
+	paramObj
+}
+
+func (r BetaContainerUploadBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaContainerUploadBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaContainerUploadBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaContentBlockUnion contains all possible properties and values from
+// [BetaTextBlock], [BetaToolUseBlock], [BetaServerToolUseBlock],
+// [BetaWebSearchToolResultBlock], [BetaCodeExecutionToolResultBlock],
+// [BetaMCPToolUseBlock], [BetaMCPToolResultBlock], [BetaContainerUploadBlock],
+// [BetaThinkingBlock], [BetaRedactedThinkingBlock].
+//
+// Use the [BetaContentBlockUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type BetaContentBlockUnion struct {
+	// This field is from variant [BetaTextBlock].
+	Citations []BetaTextCitationUnion `json:"citations"`
+	// This field is from variant [BetaTextBlock].
+	Text string `json:"text"`
+	// Any of "text", "tool_use", "server_tool_use", "web_search_tool_result",
+	// "code_execution_tool_result", "mcp_tool_use", "mcp_tool_result",
+	// "container_upload", "thinking", "redacted_thinking".
+	Type  string          `json:"type"`
+	ID    string          `json:"id"`
+	Input json.RawMessage `json:"input"`
+	Name  string          `json:"name"`
+	// This field is a union of [BetaWebSearchToolResultBlockContentUnion],
+	// [BetaCodeExecutionToolResultBlockContentUnion],
+	// [BetaMCPToolResultBlockContentUnion]
+	Content   BetaContentBlockUnionContent `json:"content"`
+	ToolUseID string                       `json:"tool_use_id"`
+	// This field is from variant [BetaMCPToolUseBlock].
+	ServerName string `json:"server_name"`
+	// This field is from variant [BetaMCPToolResultBlock].
+	IsError bool `json:"is_error"`
+	// This field is from variant [BetaContainerUploadBlock].
+	FileID string `json:"file_id"`
+	// This field is from variant [BetaThinkingBlock].
+	Signature string `json:"signature"`
+	// This field is from variant [BetaThinkingBlock].
+	Thinking string `json:"thinking"`
+	// This field is from variant [BetaRedactedThinkingBlock].
+	Data string `json:"data"`
+	JSON struct {
+		Citations  respjson.Field
+		Text       respjson.Field
+		Type       respjson.Field
+		ID         respjson.Field
+		Input      respjson.Field
+		Name       respjson.Field
+		Content    respjson.Field
+		ToolUseID  respjson.Field
+		ServerName respjson.Field
+		IsError    respjson.Field
+		FileID     respjson.Field
+		Signature  respjson.Field
+		Thinking   respjson.Field
+		Data       respjson.Field
+		raw        string
+	} `json:"-"`
+}
+
+func (r BetaContentBlockUnion) ToParam() BetaContentBlockParamUnion {
+	switch variant := r.AsAny().(type) {
+	case BetaTextBlock:
+		p := variant.ToParam()
+		return BetaContentBlockParamUnion{OfText: &p}
+	case BetaToolUseBlock:
+		p := variant.ToParam()
+		return BetaContentBlockParamUnion{OfToolUse: &p}
+	case BetaThinkingBlock:
+		p := variant.ToParam()
+		return BetaContentBlockParamUnion{OfThinking: &p}
+	case BetaRedactedThinkingBlock:
+		p := variant.ToParam()
+		return BetaContentBlockParamUnion{OfRedactedThinking: &p}
+	}
+	return BetaContentBlockParamUnion{}
+}
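+
+// Illustrative usage sketch (not generated code; "message" is an assumed
+// received beta message): replaying an assistant turn's blocks as params when
+// continuing a conversation. Note that ToParam returns a zero-value union for
+// variants not covered by the switch above.
+//
+//	params := make([]BetaContentBlockParamUnion, 0, len(message.Content))
+//	for _, block := range message.Content {
+//		params = append(params, block.ToParam())
+//	}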
+
+// anyBetaContentBlock is implemented by each variant of [BetaContentBlockUnion] to
+// add type safety for the return type of [BetaContentBlockUnion.AsAny]
+type anyBetaContentBlock interface {
+	implBetaContentBlockUnion()
+}
+
+func (BetaTextBlock) implBetaContentBlockUnion()                    {}
+func (BetaToolUseBlock) implBetaContentBlockUnion()                 {}
+func (BetaServerToolUseBlock) implBetaContentBlockUnion()           {}
+func (BetaWebSearchToolResultBlock) implBetaContentBlockUnion()     {}
+func (BetaCodeExecutionToolResultBlock) implBetaContentBlockUnion() {}
+func (BetaMCPToolUseBlock) implBetaContentBlockUnion()              {}
+func (BetaMCPToolResultBlock) implBetaContentBlockUnion()           {}
+func (BetaContainerUploadBlock) implBetaContentBlockUnion()         {}
+func (BetaThinkingBlock) implBetaContentBlockUnion()                {}
+func (BetaRedactedThinkingBlock) implBetaContentBlockUnion()        {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := BetaContentBlockUnion.AsAny().(type) {
+//	case anthropic.BetaTextBlock:
+//	case anthropic.BetaToolUseBlock:
+//	case anthropic.BetaServerToolUseBlock:
+//	case anthropic.BetaWebSearchToolResultBlock:
+//	case anthropic.BetaCodeExecutionToolResultBlock:
+//	case anthropic.BetaMCPToolUseBlock:
+//	case anthropic.BetaMCPToolResultBlock:
+//	case anthropic.BetaContainerUploadBlock:
+//	case anthropic.BetaThinkingBlock:
+//	case anthropic.BetaRedactedThinkingBlock:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u BetaContentBlockUnion) AsAny() anyBetaContentBlock {
+	switch u.Type {
+	case "text":
+		return u.AsText()
+	case "tool_use":
+		return u.AsToolUse()
+	case "server_tool_use":
+		return u.AsServerToolUse()
+	case "web_search_tool_result":
+		return u.AsWebSearchToolResult()
+	case "code_execution_tool_result":
+		return u.AsCodeExecutionToolResult()
+	case "mcp_tool_use":
+		return u.AsMCPToolUse()
+	case "mcp_tool_result":
+		return u.AsMCPToolResult()
+	case "container_upload":
+		return u.AsContainerUpload()
+	case "thinking":
+		return u.AsThinking()
+	case "redacted_thinking":
+		return u.AsRedactedThinking()
+	}
+	return nil
+}
+
+func (u BetaContentBlockUnion) AsText() (v BetaTextBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaContentBlockUnion) AsToolUse() (v BetaToolUseBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaContentBlockUnion) AsServerToolUse() (v BetaServerToolUseBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaContentBlockUnion) AsWebSearchToolResult() (v BetaWebSearchToolResultBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaContentBlockUnion) AsCodeExecutionToolResult() (v BetaCodeExecutionToolResultBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaContentBlockUnion) AsMCPToolUse() (v BetaMCPToolUseBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaContentBlockUnion) AsMCPToolResult() (v BetaMCPToolResultBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaContentBlockUnion) AsContainerUpload() (v BetaContainerUploadBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaContentBlockUnion) AsThinking() (v BetaThinkingBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaContentBlockUnion) AsRedactedThinking() (v BetaRedactedThinkingBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u BetaContentBlockUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *BetaContentBlockUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaContentBlockUnionContent is an implicit subunion of [BetaContentBlockUnion].
+// BetaContentBlockUnionContent provides convenient access to the sub-properties of
+// the union.
+//
+// For type safety it is recommended to directly use a variant of the
+// [BetaContentBlockUnion].
+//
+// If the underlying value is not a json object, one of the following properties
+// will be valid: OfBetaWebSearchResultBlockArray OfString
+// OfBetaMCPToolResultBlockContent
+type BetaContentBlockUnionContent struct {
+	// This field will be present if the value is a [[]BetaWebSearchResultBlock]
+	// instead of an object.
+	OfBetaWebSearchResultBlockArray []BetaWebSearchResultBlock `json:",inline"`
+	// This field will be present if the value is a [string] instead of an object.
+	OfString string `json:",inline"`
+	// This field will be present if the value is a [[]BetaTextBlock] instead of an
+	// object.
+	OfBetaMCPToolResultBlockContent []BetaTextBlock `json:",inline"`
+	ErrorCode                       string          `json:"error_code"`
+	Type                            string          `json:"type"`
+	// This field is from variant [BetaCodeExecutionToolResultBlockContentUnion].
+	Content []BetaCodeExecutionOutputBlock `json:"content"`
+	// This field is from variant [BetaCodeExecutionToolResultBlockContentUnion].
+	ReturnCode int64 `json:"return_code"`
+	// This field is from variant [BetaCodeExecutionToolResultBlockContentUnion].
+	Stderr string `json:"stderr"`
+	// This field is from variant [BetaCodeExecutionToolResultBlockContentUnion].
+	Stdout string `json:"stdout"`
+	JSON   struct {
+		OfBetaWebSearchResultBlockArray respjson.Field
+		OfString                        respjson.Field
+		OfBetaMCPToolResultBlockContent respjson.Field
+		ErrorCode                       respjson.Field
+		Type                            respjson.Field
+		Content                         respjson.Field
+		ReturnCode                      respjson.Field
+		Stderr                          respjson.Field
+		Stdout                          respjson.Field
+		raw                             string
+	} `json:"-"`
+}
+
+func (r *BetaContentBlockUnionContent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func NewBetaServerToolUseBlock(id string, input any, name BetaServerToolUseBlockParamName) BetaContentBlockParamUnion {
+	var serverToolUse BetaServerToolUseBlockParam
+	serverToolUse.ID = id
+	serverToolUse.Input = input
+	serverToolUse.Name = name
+	return BetaContentBlockParamUnion{OfServerToolUse: &serverToolUse}
+}
+
+func NewBetaWebSearchToolResultBlock[
+	T []BetaWebSearchResultBlockParam | BetaWebSearchToolRequestErrorParam,
+](content T, toolUseID string) BetaContentBlockParamUnion {
+	var webSearchToolResult BetaWebSearchToolResultBlockParam
+	switch v := any(content).(type) {
+	case []BetaWebSearchResultBlockParam:
+		webSearchToolResult.Content.OfResultBlock = v
+	case BetaWebSearchToolRequestErrorParam:
+		webSearchToolResult.Content.OfError = &v
+	}
+	webSearchToolResult.ToolUseID = toolUseID
+	return BetaContentBlockParamUnion{OfWebSearchToolResult: &webSearchToolResult}
+}
+
+func NewBetaCodeExecutionToolResultBlock[
+	T BetaCodeExecutionToolResultErrorParam | BetaCodeExecutionResultBlockParam,
+](content T, toolUseID string) BetaContentBlockParamUnion {
+	var codeExecutionToolResult BetaCodeExecutionToolResultBlockParam
+	switch v := any(content).(type) {
+	case BetaCodeExecutionToolResultErrorParam:
+		codeExecutionToolResult.Content.OfError = &v
+	case BetaCodeExecutionResultBlockParam:
+		codeExecutionToolResult.Content.OfResultBlock = &v
+	}
+	codeExecutionToolResult.ToolUseID = toolUseID
+	return BetaContentBlockParamUnion{OfCodeExecutionToolResult: &codeExecutionToolResult}
+}
+
+func NewBetaMCPToolResultBlock(toolUseID string) BetaContentBlockParamUnion {
+	var mcpToolResult BetaRequestMCPToolResultBlockParam
+	mcpToolResult.ToolUseID = toolUseID
+	return BetaContentBlockParamUnion{OfMCPToolResult: &mcpToolResult}
+}
+
+func NewBetaTextBlock(text string) BetaContentBlockParamUnion {
+	var variant BetaTextBlockParam
+	variant.Text = text
+	return BetaContentBlockParamUnion{OfText: &variant}
+}
+
+func NewBetaImageBlock[
+	T BetaBase64ImageSourceParam | BetaURLImageSourceParam | BetaFileImageSourceParam,
+](source T) BetaContentBlockParamUnion {
+	var image BetaImageBlockParam
+	switch v := any(source).(type) {
+	case BetaBase64ImageSourceParam:
+		image.Source.OfBase64 = &v
+	case BetaURLImageSourceParam:
+		image.Source.OfURL = &v
+	case BetaFileImageSourceParam:
+		image.Source.OfFile = &v
+	}
+	return BetaContentBlockParamUnion{OfImage: &image}
+}
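+
+// Illustrative usage sketch (not generated code; the URL and the field name on
+// the source param are assumptions): combining the constructors above into the
+// content of a single user turn.
+//
+//	blocks := []BetaContentBlockParamUnion{
+//		NewBetaTextBlock("What is in this image?"),
+//		NewBetaImageBlock(BetaURLImageSourceParam{
+//			URL: "https://example.com/image.png", // placeholder URL
+//		}),
+//	}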
+
+func NewBetaToolUseBlock(id string, input any, name string) BetaContentBlockParamUnion {
+	var toolUse BetaToolUseBlockParam
+	toolUse.ID = id
+	toolUse.Input = input
+	toolUse.Name = name
+	return BetaContentBlockParamUnion{OfToolUse: &toolUse}
+}
+
+func NewBetaToolResultBlock(toolUseID string, content string, isError bool) BetaContentBlockParamUnion {
+	toolResult := BetaToolResultBlockParam{
+		Content: []BetaToolResultBlockParamContentUnion{
+			{OfText: &BetaTextBlockParam{Text: content}},
+		},
+		ToolUseID: toolUseID,
+		IsError:   Bool(isError),
+	}
+	return BetaContentBlockParamUnion{OfToolResult: &toolResult}
+}
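+
+// Illustrative usage sketch (not generated code; "block" is an assumed
+// BetaContentBlockUnion from an assistant reply, and runTool is a hypothetical
+// helper): answering a tool_use block with a tool result on the next turn.
+//
+//	if toolUse, ok := block.AsAny().(BetaToolUseBlock); ok {
+//		output := runTool(toolUse.Name, toolUse.Input) // hypothetical helper
+//		_ = NewBetaToolResultBlock(toolUse.ID, output, false)
+//	}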
+
+func NewBetaDocumentBlock[
+	T BetaBase64PDFSourceParam | BetaPlainTextSourceParam | BetaContentBlockSourceParam | BetaURLPDFSourceParam | BetaFileDocumentSourceParam,
+](source T) BetaContentBlockParamUnion {
+	var document BetaBase64PDFBlockParam
+	switch v := any(source).(type) {
+	case BetaBase64PDFSourceParam:
+		document.Source.OfBase64 = &v
+	case BetaPlainTextSourceParam:
+		document.Source.OfText = &v
+	case BetaContentBlockSourceParam:
+		document.Source.OfContent = &v
+	case BetaURLPDFSourceParam:
+		document.Source.OfURL = &v
+	case BetaFileDocumentSourceParam:
+		document.Source.OfFile = &v
+	}
+	return BetaContentBlockParamUnion{OfDocument: &document}
+}
+
+func NewBetaThinkingBlock(signature string, thinking string) BetaContentBlockParamUnion {
+	var variant BetaThinkingBlockParam
+	variant.Signature = signature
+	variant.Thinking = thinking
+	return BetaContentBlockParamUnion{OfThinking: &variant}
+}
+
+func NewBetaRedactedThinkingBlock(data string) BetaContentBlockParamUnion {
+	var redactedThinking BetaRedactedThinkingBlockParam
+	redactedThinking.Data = data
+	return BetaContentBlockParamUnion{OfRedactedThinking: &redactedThinking}
+}
+
+func NewBetaContainerUploadBlock(fileID string) BetaContentBlockParamUnion {
+	var containerUpload BetaContainerUploadBlockParam
+	containerUpload.FileID = fileID
+	return BetaContentBlockParamUnion{OfContainerUpload: &containerUpload}
+}
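+
+// Illustrative usage sketch (comment only, not part of the generated code):
+// each constructor above populates exactly one variant of
+// BetaContentBlockParamUnion. For example, answering a prior tool_use block
+// whose ID "toolu_123" is hypothetical:
+//
+//	result := NewBetaToolResultBlock("toolu_123", "the file was written", false)
+//	redacted := NewBetaRedactedThinkingBlock("opaque-signature-data")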
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaContentBlockParamUnion struct {
+	OfServerToolUse           *BetaServerToolUseBlockParam           `json:",omitzero,inline"`
+	OfWebSearchToolResult     *BetaWebSearchToolResultBlockParam     `json:",omitzero,inline"`
+	OfCodeExecutionToolResult *BetaCodeExecutionToolResultBlockParam `json:",omitzero,inline"`
+	OfMCPToolUse              *BetaMCPToolUseBlockParam              `json:",omitzero,inline"`
+	OfMCPToolResult           *BetaRequestMCPToolResultBlockParam    `json:",omitzero,inline"`
+	OfText                    *BetaTextBlockParam                    `json:",omitzero,inline"`
+	OfImage                   *BetaImageBlockParam                   `json:",omitzero,inline"`
+	OfToolUse                 *BetaToolUseBlockParam                 `json:",omitzero,inline"`
+	OfToolResult              *BetaToolResultBlockParam              `json:",omitzero,inline"`
+	OfDocument                *BetaBase64PDFBlockParam               `json:",omitzero,inline"`
+	OfThinking                *BetaThinkingBlockParam                `json:",omitzero,inline"`
+	OfRedactedThinking        *BetaRedactedThinkingBlockParam        `json:",omitzero,inline"`
+	OfContainerUpload         *BetaContainerUploadBlockParam         `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaContentBlockParamUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfServerToolUse,
+		u.OfWebSearchToolResult,
+		u.OfCodeExecutionToolResult,
+		u.OfMCPToolUse,
+		u.OfMCPToolResult,
+		u.OfText,
+		u.OfImage,
+		u.OfToolUse,
+		u.OfToolResult,
+		u.OfDocument,
+		u.OfThinking,
+		u.OfRedactedThinking,
+		u.OfContainerUpload)
+}
+func (u *BetaContentBlockParamUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaContentBlockParamUnion) asAny() any {
+	if !param.IsOmitted(u.OfServerToolUse) {
+		return u.OfServerToolUse
+	} else if !param.IsOmitted(u.OfWebSearchToolResult) {
+		return u.OfWebSearchToolResult
+	} else if !param.IsOmitted(u.OfCodeExecutionToolResult) {
+		return u.OfCodeExecutionToolResult
+	} else if !param.IsOmitted(u.OfMCPToolUse) {
+		return u.OfMCPToolUse
+	} else if !param.IsOmitted(u.OfMCPToolResult) {
+		return u.OfMCPToolResult
+	} else if !param.IsOmitted(u.OfText) {
+		return u.OfText
+	} else if !param.IsOmitted(u.OfImage) {
+		return u.OfImage
+	} else if !param.IsOmitted(u.OfToolUse) {
+		return u.OfToolUse
+	} else if !param.IsOmitted(u.OfToolResult) {
+		return u.OfToolResult
+	} else if !param.IsOmitted(u.OfDocument) {
+		return u.OfDocument
+	} else if !param.IsOmitted(u.OfThinking) {
+		return u.OfThinking
+	} else if !param.IsOmitted(u.OfRedactedThinking) {
+		return u.OfRedactedThinking
+	} else if !param.IsOmitted(u.OfContainerUpload) {
+		return u.OfContainerUpload
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetServerName() *string {
+	if vt := u.OfMCPToolUse; vt != nil {
+		return &vt.ServerName
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetText() *string {
+	if vt := u.OfText; vt != nil {
+		return &vt.Text
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetContext() *string {
+	if vt := u.OfDocument; vt != nil && vt.Context.Valid() {
+		return &vt.Context.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetTitle() *string {
+	if vt := u.OfDocument; vt != nil && vt.Title.Valid() {
+		return &vt.Title.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetSignature() *string {
+	if vt := u.OfThinking; vt != nil {
+		return &vt.Signature
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetThinking() *string {
+	if vt := u.OfThinking; vt != nil {
+		return &vt.Thinking
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetData() *string {
+	if vt := u.OfRedactedThinking; vt != nil {
+		return &vt.Data
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetFileID() *string {
+	if vt := u.OfContainerUpload; vt != nil {
+		return &vt.FileID
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetID() *string {
+	if vt := u.OfServerToolUse; vt != nil {
+		return (*string)(&vt.ID)
+	} else if vt := u.OfMCPToolUse; vt != nil {
+		return (*string)(&vt.ID)
+	} else if vt := u.OfToolUse; vt != nil {
+		return (*string)(&vt.ID)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetName() *string {
+	if vt := u.OfServerToolUse; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfMCPToolUse; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfToolUse; vt != nil {
+		return (*string)(&vt.Name)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetType() *string {
+	if vt := u.OfServerToolUse; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfWebSearchToolResult; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfCodeExecutionToolResult; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfMCPToolUse; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfMCPToolResult; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfText; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfImage; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfToolUse; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfToolResult; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfDocument; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfThinking; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfRedactedThinking; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfContainerUpload; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetToolUseID() *string {
+	if vt := u.OfWebSearchToolResult; vt != nil {
+		return (*string)(&vt.ToolUseID)
+	} else if vt := u.OfCodeExecutionToolResult; vt != nil {
+		return (*string)(&vt.ToolUseID)
+	} else if vt := u.OfMCPToolResult; vt != nil {
+		return (*string)(&vt.ToolUseID)
+	} else if vt := u.OfToolResult; vt != nil {
+		return (*string)(&vt.ToolUseID)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaContentBlockParamUnion) GetIsError() *bool {
+	if vt := u.OfMCPToolResult; vt != nil && vt.IsError.Valid() {
+		return &vt.IsError.Value
+	} else if vt := u.OfToolResult; vt != nil && vt.IsError.Valid() {
+		return &vt.IsError.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's Input property, if present.
+func (u BetaContentBlockParamUnion) GetInput() *any {
+	if vt := u.OfServerToolUse; vt != nil {
+		return &vt.Input
+	} else if vt := u.OfMCPToolUse; vt != nil {
+		return &vt.Input
+	} else if vt := u.OfToolUse; vt != nil {
+		return &vt.Input
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's CacheControl property, if present.
+func (u BetaContentBlockParamUnion) GetCacheControl() *BetaCacheControlEphemeralParam {
+	if vt := u.OfServerToolUse; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfWebSearchToolResult; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfCodeExecutionToolResult; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfMCPToolUse; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfMCPToolResult; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfText; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfImage; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfToolUse; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfToolResult; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfDocument; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfContainerUpload; vt != nil {
+		return &vt.CacheControl
+	}
+	return nil
+}
+
+// Returns a subunion which exports methods to access subproperties
+//
+// Or use AsAny() to get the underlying value
+func (u BetaContentBlockParamUnion) GetContent() (res betaContentBlockParamUnionContent) {
+	if vt := u.OfWebSearchToolResult; vt != nil {
+		res.any = vt.Content.asAny()
+	} else if vt := u.OfCodeExecutionToolResult; vt != nil {
+		res.any = vt.Content.asAny()
+	} else if vt := u.OfMCPToolResult; vt != nil {
+		res.any = vt.Content.asAny()
+	} else if vt := u.OfToolResult; vt != nil {
+		res.any = &vt.Content
+	}
+	return
+}
+
+// Can have the runtime types [*[]BetaWebSearchResultBlockParam],
+// [*BetaCodeExecutionToolResultErrorParam], [*BetaCodeExecutionResultBlockParam],
+// [*string], [*[]BetaTextBlockParam], [*[]BetaToolResultBlockParamContentUnion]
+type betaContentBlockParamUnionContent struct{ any }
+
+// Use the following switch statement to get the type of the union:
+//
+//	switch u.AsAny().(type) {
+//	case *[]anthropic.BetaWebSearchResultBlockParam:
+//	case *anthropic.BetaCodeExecutionToolResultErrorParam:
+//	case *anthropic.BetaCodeExecutionResultBlockParam:
+//	case *string:
+//	case *[]anthropic.BetaTextBlockParam:
+//	case *[]anthropic.BetaToolResultBlockParamContentUnion:
+//	default:
+//	    fmt.Errorf("not present")
+//	}
+func (u betaContentBlockParamUnionContent) AsAny() any { return u.any }
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u betaContentBlockParamUnionContent) GetContent() []BetaCodeExecutionOutputBlockParam {
+	switch vt := u.any.(type) {
+	case *BetaCodeExecutionToolResultBlockParamContentUnion:
+		return vt.GetContent()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u betaContentBlockParamUnionContent) GetReturnCode() *int64 {
+	switch vt := u.any.(type) {
+	case *BetaCodeExecutionToolResultBlockParamContentUnion:
+		return vt.GetReturnCode()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u betaContentBlockParamUnionContent) GetStderr() *string {
+	switch vt := u.any.(type) {
+	case *BetaCodeExecutionToolResultBlockParamContentUnion:
+		return vt.GetStderr()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u betaContentBlockParamUnionContent) GetStdout() *string {
+	switch vt := u.any.(type) {
+	case *BetaCodeExecutionToolResultBlockParamContentUnion:
+		return vt.GetStdout()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u betaContentBlockParamUnionContent) GetErrorCode() *string {
+	switch vt := u.any.(type) {
+	case *BetaWebSearchToolResultBlockParamContentUnion:
+		if vt := vt.OfError; vt != nil {
+			return (*string)(&vt.ErrorCode)
+		}
+	case *BetaCodeExecutionToolResultBlockParamContentUnion:
+		return vt.GetErrorCode()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u betaContentBlockParamUnionContent) GetType() *string {
+	switch vt := u.any.(type) {
+	case *BetaWebSearchToolResultBlockParamContentUnion:
+		if vt := vt.OfError; vt != nil {
+			return (*string)(&vt.Type)
+		}
+	case *BetaCodeExecutionToolResultBlockParamContentUnion:
+		return vt.GetType()
+	}
+	return nil
+}
+
+// Returns a subunion which exports methods to access subproperties
+//
+// Or use AsAny() to get the underlying value
+func (u BetaContentBlockParamUnion) GetCitations() (res betaContentBlockParamUnionCitations) {
+	if vt := u.OfText; vt != nil {
+		res.any = &vt.Citations
+	} else if vt := u.OfDocument; vt != nil {
+		res.any = &vt.Citations
+	}
+	return
+}
+
+// Can have the runtime types [*[]BetaTextCitationParamUnion],
+// [*BetaCitationsConfigParam]
+type betaContentBlockParamUnionCitations struct{ any }
+
+// Use the following switch statement to get the type of the union:
+//
+//	switch u.AsAny().(type) {
+//	case *[]anthropic.BetaTextCitationParamUnion:
+//	case *anthropic.BetaCitationsConfigParam:
+//	default:
+//	    fmt.Errorf("not present")
+//	}
+func (u betaContentBlockParamUnionCitations) AsAny() any { return u.any }
+
+// Returns a subunion which exports methods to access subproperties
+//
+// Or use AsAny() to get the underlying value
+func (u BetaContentBlockParamUnion) GetSource() (res betaContentBlockParamUnionSource) {
+	if vt := u.OfImage; vt != nil {
+		res.any = vt.Source.asAny()
+	} else if vt := u.OfDocument; vt != nil {
+		res.any = vt.Source.asAny()
+	}
+	return
+}
+
+// Can have the runtime types [*BetaBase64ImageSourceParam],
+// [*BetaURLImageSourceParam], [*BetaFileImageSourceParam],
+// [*BetaBase64PDFSourceParam], [*BetaPlainTextSourceParam],
+// [*BetaContentBlockSourceParam], [*BetaURLPDFSourceParam],
+// [*BetaFileDocumentSourceParam]
+type betaContentBlockParamUnionSource struct{ any }
+
+// Use the following switch statement to get the type of the union:
+//
+//	switch u.AsAny().(type) {
+//	case *anthropic.BetaBase64ImageSourceParam:
+//	case *anthropic.BetaURLImageSourceParam:
+//	case *anthropic.BetaFileImageSourceParam:
+//	case *anthropic.BetaBase64PDFSourceParam:
+//	case *anthropic.BetaPlainTextSourceParam:
+//	case *anthropic.BetaContentBlockSourceParam:
+//	case *anthropic.BetaURLPDFSourceParam:
+//	case *anthropic.BetaFileDocumentSourceParam:
+//	default:
+//	    fmt.Errorf("not present")
+//	}
+func (u betaContentBlockParamUnionSource) AsAny() any { return u.any }
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u betaContentBlockParamUnionSource) GetContent() *BetaContentBlockSourceContentUnionParam {
+	switch vt := u.any.(type) {
+	case *BetaBase64PDFBlockSourceUnionParam:
+		return vt.GetContent()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u betaContentBlockParamUnionSource) GetData() *string {
+	switch vt := u.any.(type) {
+	case *BetaImageBlockParamSourceUnion:
+		return vt.GetData()
+	case *BetaBase64PDFBlockSourceUnionParam:
+		return vt.GetData()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u betaContentBlockParamUnionSource) GetMediaType() *string {
+	switch vt := u.any.(type) {
+	case *BetaImageBlockParamSourceUnion:
+		return vt.GetMediaType()
+	case *BetaBase64PDFBlockSourceUnionParam:
+		return vt.GetMediaType()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u betaContentBlockParamUnionSource) GetType() *string {
+	switch vt := u.any.(type) {
+	case *BetaImageBlockParamSourceUnion:
+		return vt.GetType()
+	case *BetaBase64PDFBlockSourceUnionParam:
+		return vt.GetType()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u betaContentBlockParamUnionSource) GetURL() *string {
+	switch vt := u.any.(type) {
+	case *BetaImageBlockParamSourceUnion:
+		return vt.GetURL()
+	case *BetaBase64PDFBlockSourceUnionParam:
+		return vt.GetURL()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u betaContentBlockParamUnionSource) GetFileID() *string {
+	switch vt := u.any.(type) {
+	case *BetaImageBlockParamSourceUnion:
+		return vt.GetFileID()
+	case *BetaBase64PDFBlockSourceUnionParam:
+		return vt.GetFileID()
+	}
+	return nil
+}
+
+// The properties Content, Type are required.
+type BetaContentBlockSourceParam struct {
+	Content BetaContentBlockSourceContentUnionParam `json:"content,omitzero,required"`
+	// This field can be elided, and will marshal its zero value as "content".
+	Type constant.Content `json:"type,required"`
+	paramObj
+}
+
+func (r BetaContentBlockSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaContentBlockSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaContentBlockSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaContentBlockSourceContentUnionParam struct {
+	OfString                        param.Opt[string]                         `json:",omitzero,inline"`
+	OfBetaContentBlockSourceContent []BetaContentBlockSourceContentUnionParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaContentBlockSourceContentUnionParam) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfString, u.OfBetaContentBlockSourceContent)
+}
+func (u *BetaContentBlockSourceContentUnionParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaContentBlockSourceContentUnionParam) asAny() any {
+	if !param.IsOmitted(u.OfString) {
+		return &u.OfString.Value
+	} else if !param.IsOmitted(u.OfBetaContentBlockSourceContent) {
+		return &u.OfBetaContentBlockSourceContent
+	}
+	return nil
+}
+
+// The properties FileID, Type are required.
+type BetaFileDocumentSourceParam struct {
+	FileID string `json:"file_id,required"`
+	// This field can be elided, and will marshal its zero value as "file".
+	Type constant.File `json:"type,required"`
+	paramObj
+}
+
+func (r BetaFileDocumentSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaFileDocumentSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaFileDocumentSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties FileID, Type are required.
+type BetaFileImageSourceParam struct {
+	FileID string `json:"file_id,required"`
+	// This field can be elided, and will marshal its zero value as "file".
+	Type constant.File `json:"type,required"`
+	paramObj
+}
+
+func (r BetaFileImageSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaFileImageSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaFileImageSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Source, Type are required.
+type BetaImageBlockParam struct {
+	Source BetaImageBlockParamSourceUnion `json:"source,omitzero,required"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as "image".
+	Type constant.Image `json:"type,required"`
+	paramObj
+}
+
+func (r BetaImageBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaImageBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaImageBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaImageBlockParamSourceUnion struct {
+	OfBase64 *BetaBase64ImageSourceParam `json:",omitzero,inline"`
+	OfURL    *BetaURLImageSourceParam    `json:",omitzero,inline"`
+	OfFile   *BetaFileImageSourceParam   `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaImageBlockParamSourceUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfBase64, u.OfURL, u.OfFile)
+}
+func (u *BetaImageBlockParamSourceUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaImageBlockParamSourceUnion) asAny() any {
+	if !param.IsOmitted(u.OfBase64) {
+		return u.OfBase64
+	} else if !param.IsOmitted(u.OfURL) {
+		return u.OfURL
+	} else if !param.IsOmitted(u.OfFile) {
+		return u.OfFile
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaImageBlockParamSourceUnion) GetData() *string {
+	if vt := u.OfBase64; vt != nil {
+		return &vt.Data
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaImageBlockParamSourceUnion) GetMediaType() *string {
+	if vt := u.OfBase64; vt != nil {
+		return (*string)(&vt.MediaType)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaImageBlockParamSourceUnion) GetURL() *string {
+	if vt := u.OfURL; vt != nil {
+		return &vt.URL
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaImageBlockParamSourceUnion) GetFileID() *string {
+	if vt := u.OfFile; vt != nil {
+		return &vt.FileID
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaImageBlockParamSourceUnion) GetType() *string {
+	if vt := u.OfBase64; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfURL; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfFile; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+type BetaInputJSONDelta struct {
+	PartialJSON string                  `json:"partial_json,required"`
+	Type        constant.InputJSONDelta `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		PartialJSON respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaInputJSONDelta) RawJSON() string { return r.JSON.raw }
+func (r *BetaInputJSONDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaMCPToolResultBlock struct {
+	Content   BetaMCPToolResultBlockContentUnion `json:"content,required"`
+	IsError   bool                               `json:"is_error,required"`
+	ToolUseID string                             `json:"tool_use_id,required"`
+	Type      constant.MCPToolResult             `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Content     respjson.Field
+		IsError     respjson.Field
+		ToolUseID   respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaMCPToolResultBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaMCPToolResultBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaMCPToolResultBlockContentUnion contains all possible properties and values
+// from [string], [[]BetaTextBlock].
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+//
+// If the underlying value is not a JSON object, one of the following properties
+// will be valid: OfString, OfBetaMCPToolResultBlockContent.
+type BetaMCPToolResultBlockContentUnion struct {
+	// This field will be present if the value is a [string] instead of an object.
+	OfString string `json:",inline"`
+	// This field will be present if the value is a [[]BetaTextBlock] instead of an
+	// object.
+	OfBetaMCPToolResultBlockContent []BetaTextBlock `json:",inline"`
+	JSON                            struct {
+		OfString                        respjson.Field
+		OfBetaMCPToolResultBlockContent respjson.Field
+		raw                             string
+	} `json:"-"`
+}
+
+func (u BetaMCPToolResultBlockContentUnion) AsString() (v string) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaMCPToolResultBlockContentUnion) AsBetaMCPToolResultBlockContent() (v []BetaTextBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u BetaMCPToolResultBlockContentUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *BetaMCPToolResultBlockContentUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaMCPToolUseBlock struct {
+	ID    string `json:"id,required"`
+	Input any    `json:"input,required"`
+	// The name of the MCP tool
+	Name string `json:"name,required"`
+	// The name of the MCP server
+	ServerName string              `json:"server_name,required"`
+	Type       constant.MCPToolUse `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID          respjson.Field
+		Input       respjson.Field
+		Name        respjson.Field
+		ServerName  respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaMCPToolUseBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaMCPToolUseBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties ID, Input, Name, ServerName, Type are required.
+type BetaMCPToolUseBlockParam struct {
+	ID    string `json:"id,required"`
+	Input any    `json:"input,omitzero,required"`
+	Name  string `json:"name,required"`
+	// The name of the MCP server
+	ServerName string `json:"server_name,required"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as "mcp_tool_use".
+	Type constant.MCPToolUse `json:"type,required"`
+	paramObj
+}
+
+func (r BetaMCPToolUseBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaMCPToolUseBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaMCPToolUseBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaMessage struct {
+	// Unique object identifier.
+	//
+	// The format and length of IDs may change over time.
+	ID string `json:"id,required"`
+	// Information about the container used in the request (for the code execution
+	// tool)
+	Container BetaContainer `json:"container,required"`
+	// Content generated by the model.
+	//
+	// This is an array of content blocks, each of which has a `type` that determines
+	// its shape.
+	//
+	// Example:
+	//
+	// ```json
+	// [{ "type": "text", "text": "Hi, I'm Claude." }]
+	// ```
+	//
+	// If the request input `messages` ended with an `assistant` turn, then the
+	// response `content` will continue directly from that last turn. You can use this
+	// to constrain the model's output.
+	//
+	// For example, if the input `messages` were:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
+	//	},
+	//	{ "role": "assistant", "content": "The best answer is (" }
+	//
+	// ]
+	// ```
+	//
+	// Then the response `content` might be:
+	//
+	// ```json
+	// [{ "type": "text", "text": "B)" }]
+	// ```
+	Content []BetaContentBlockUnion `json:"content,required"`
+	// The model that will complete your prompt.
+	//
+	// See [models](https://docs.anthropic.com/en/docs/models-overview) for
+	// additional details and options.
+	Model Model `json:"model,required"`
+	// Conversational role of the generated message.
+	//
+	// This will always be `"assistant"`.
+	Role constant.Assistant `json:"role,required"`
+	// The reason that we stopped.
+	//
+	// This may be one of the following values:
+	//
+	// - `"end_turn"`: the model reached a natural stopping point
+	// - `"max_tokens"`: we exceeded the requested `max_tokens` or the model's maximum
+	// - `"stop_sequence"`: one of your provided custom `stop_sequences` was generated
+	// - `"tool_use"`: the model invoked one or more tools
+	// - `"pause_turn"`: we paused a long-running turn
+	// - `"refusal"`: the model refused to generate a response
+	//
+	// In non-streaming mode this value is always non-null. In streaming mode, it is
+	// null in the `message_start` event and non-null otherwise.
+	//
+	// Any of "end_turn", "max_tokens", "stop_sequence", "tool_use", "pause_turn",
+	// "refusal".
+	StopReason BetaStopReason `json:"stop_reason,required"`
+	// Which custom stop sequence was generated, if any.
+	//
+	// This value will be a non-null string if one of your custom stop sequences was
+	// generated.
+	StopSequence string `json:"stop_sequence,required"`
+	// Object type.
+	//
+	// For Messages, this is always `"message"`.
+	Type constant.Message `json:"type,required"`
+	// Billing and rate-limit usage.
+	//
+	// Anthropic's API bills and rate-limits by token counts, as tokens represent the
+	// underlying cost to our systems.
+	//
+	// Under the hood, the API transforms requests into a format suitable for the
+	// model. The model's output then goes through a parsing stage before becoming an
+	// API response. As a result, the token counts in `usage` will not match one-to-one
+	// with the exact visible content of an API request or response.
+	//
+	// For example, `output_tokens` will be non-zero, even for an empty string response
+	// from Claude.
+	//
+	// The total number of input tokens in a request is the sum of `input_tokens`,
+	// `cache_creation_input_tokens`, and `cache_read_input_tokens`.
+	Usage BetaUsage `json:"usage,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID           respjson.Field
+		Container    respjson.Field
+		Content      respjson.Field
+		Model        respjson.Field
+		Role         respjson.Field
+		StopReason   respjson.Field
+		StopSequence respjson.Field
+		Type         respjson.Field
+		Usage        respjson.Field
+		ExtraFields  map[string]respjson.Field
+		raw          string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaMessage) RawJSON() string { return r.JSON.raw }
+func (r *BetaMessage) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (r BetaMessage) ToParam() BetaMessageParam {
+	var p BetaMessageParam
+	p.Role = BetaMessageParamRole(r.Role)
+	p.Content = make([]BetaContentBlockParamUnion, len(r.Content))
+	for i, c := range r.Content {
+		contentParams := c.ToParam()
+		p.Content[i] = contentParams
+	}
+	return p
+}
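+
+// Illustrative usage sketch (comment only, not part of the generated code):
+// ToParam converts a response message back into request form, which is the
+// usual way to append the assistant's turn to the conversation history before
+// the next request; `message`, `history`, `id`, and `out` are placeholders:
+//
+//	history = append(history, message.ToParam())
+//	history = append(history, NewBetaUserMessage(NewBetaToolResultBlock(id, out, false)))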
+
+// The reason that we stopped.
+//
+// This may be one of the following values:
+//
+// - `"end_turn"`: the model reached a natural stopping point
+// - `"max_tokens"`: we exceeded the requested `max_tokens` or the model's maximum
+// - `"stop_sequence"`: one of your provided custom `stop_sequences` was generated
+// - `"tool_use"`: the model invoked one or more tools
+//
+// In non-streaming mode this value is always non-null. In streaming mode, it is
+// null in the `message_start` event and non-null otherwise.
+type BetaMessageStopReason string
+
+const (
+	BetaMessageStopReasonEndTurn      BetaMessageStopReason = "end_turn"
+	BetaMessageStopReasonMaxTokens    BetaMessageStopReason = "max_tokens"
+	BetaMessageStopReasonStopSequence BetaMessageStopReason = "stop_sequence"
+	BetaMessageStopReasonToolUse      BetaMessageStopReason = "tool_use"
+)
+
+type BetaMessageDeltaUsage struct {
+	// The cumulative number of input tokens used to create the cache entry.
+	CacheCreationInputTokens int64 `json:"cache_creation_input_tokens,required"`
+	// The cumulative number of input tokens read from the cache.
+	CacheReadInputTokens int64 `json:"cache_read_input_tokens,required"`
+	// The cumulative number of input tokens which were used.
+	InputTokens int64 `json:"input_tokens,required"`
+	// The cumulative number of output tokens which were used.
+	OutputTokens int64 `json:"output_tokens,required"`
+	// The number of server tool requests.
+	ServerToolUse BetaServerToolUsage `json:"server_tool_use,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CacheCreationInputTokens respjson.Field
+		CacheReadInputTokens     respjson.Field
+		InputTokens              respjson.Field
+		OutputTokens             respjson.Field
+		ServerToolUse            respjson.Field
+		ExtraFields              map[string]respjson.Field
+		raw                      string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaMessageDeltaUsage) RawJSON() string { return r.JSON.raw }
+func (r *BetaMessageDeltaUsage) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Content, Role are required.
+type BetaMessageParam struct {
+	Content []BetaContentBlockParamUnion `json:"content,omitzero,required"`
+	// Any of "user", "assistant".
+	Role BetaMessageParamRole `json:"role,omitzero,required"`
+	paramObj
+}
+
+func NewBetaUserMessage(blocks ...BetaContentBlockParamUnion) BetaMessageParam {
+	return BetaMessageParam{
+		Role:    BetaMessageParamRoleUser,
+		Content: blocks,
+	}
+}
+
+func (r BetaMessageParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaMessageParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaMessageParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaMessageParamRole string
+
+const (
+	BetaMessageParamRoleUser      BetaMessageParamRole = "user"
+	BetaMessageParamRoleAssistant BetaMessageParamRole = "assistant"
+)
+
+type BetaMessageTokensCount struct {
+	// The total number of tokens across the provided list of messages, system prompt,
+	// and tools.
+	InputTokens int64 `json:"input_tokens,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		InputTokens respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaMessageTokensCount) RawJSON() string { return r.JSON.raw }
+func (r *BetaMessageTokensCount) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaMetadataParam struct {
+	// An external identifier for the user who is associated with the request.
+	//
+	// This should be a uuid, hash value, or other opaque identifier. Anthropic may use
+	// this id to help detect abuse. Do not include any identifying information such as
+	// name, email address, or phone number.
+	UserID param.Opt[string] `json:"user_id,omitzero"`
+	paramObj
+}
+
+func (r BetaMetadataParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaMetadataParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaMetadataParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Data, MediaType, Type are required.
+type BetaPlainTextSourceParam struct {
+	Data string `json:"data,required"`
+	// This field can be elided, and will marshal its zero value as "text/plain".
+	MediaType constant.TextPlain `json:"media_type,required"`
+	// This field can be elided, and will marshal its zero value as "text".
+	Type constant.Text `json:"type,required"`
+	paramObj
+}
+
+func (r BetaPlainTextSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaPlainTextSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaPlainTextSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaRawContentBlockDeltaUnion contains all possible properties and values from
+// [BetaTextDelta], [BetaInputJSONDelta], [BetaCitationsDelta],
+// [BetaThinkingDelta], [BetaSignatureDelta].
+//
+// Use the [BetaRawContentBlockDeltaUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type BetaRawContentBlockDeltaUnion struct {
+	// This field is from variant [BetaTextDelta].
+	Text string `json:"text"`
+	// Any of "text_delta", "input_json_delta", "citations_delta", "thinking_delta",
+	// "signature_delta".
+	Type string `json:"type"`
+	// This field is from variant [BetaInputJSONDelta].
+	PartialJSON string `json:"partial_json"`
+	// This field is from variant [BetaCitationsDelta].
+	Citation BetaCitationsDeltaCitationUnion `json:"citation"`
+	// This field is from variant [BetaThinkingDelta].
+	Thinking string `json:"thinking"`
+	// This field is from variant [BetaSignatureDelta].
+	Signature string `json:"signature"`
+	JSON      struct {
+		Text        respjson.Field
+		Type        respjson.Field
+		PartialJSON respjson.Field
+		Citation    respjson.Field
+		Thinking    respjson.Field
+		Signature   respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// anyBetaRawContentBlockDelta is implemented by each variant of
+// [BetaRawContentBlockDeltaUnion] to add type safety for the return type of
+// [BetaRawContentBlockDeltaUnion.AsAny]
+type anyBetaRawContentBlockDelta interface {
+	implBetaRawContentBlockDeltaUnion()
+}
+
+func (BetaTextDelta) implBetaRawContentBlockDeltaUnion()      {}
+func (BetaInputJSONDelta) implBetaRawContentBlockDeltaUnion() {}
+func (BetaCitationsDelta) implBetaRawContentBlockDeltaUnion() {}
+func (BetaThinkingDelta) implBetaRawContentBlockDeltaUnion()  {}
+func (BetaSignatureDelta) implBetaRawContentBlockDeltaUnion() {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := BetaRawContentBlockDeltaUnion.AsAny().(type) {
+//	case anthropic.BetaTextDelta:
+//	case anthropic.BetaInputJSONDelta:
+//	case anthropic.BetaCitationsDelta:
+//	case anthropic.BetaThinkingDelta:
+//	case anthropic.BetaSignatureDelta:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u BetaRawContentBlockDeltaUnion) AsAny() anyBetaRawContentBlockDelta {
+	switch u.Type {
+	case "text_delta":
+		return u.AsTextDelta()
+	case "input_json_delta":
+		return u.AsInputJSONDelta()
+	case "citations_delta":
+		return u.AsCitationsDelta()
+	case "thinking_delta":
+		return u.AsThinkingDelta()
+	case "signature_delta":
+		return u.AsSignatureDelta()
+	}
+	return nil
+}
+
+func (u BetaRawContentBlockDeltaUnion) AsTextDelta() (v BetaTextDelta) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockDeltaUnion) AsInputJSONDelta() (v BetaInputJSONDelta) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockDeltaUnion) AsCitationsDelta() (v BetaCitationsDelta) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockDeltaUnion) AsThinkingDelta() (v BetaThinkingDelta) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockDeltaUnion) AsSignatureDelta() (v BetaSignatureDelta) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u BetaRawContentBlockDeltaUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *BetaRawContentBlockDeltaUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaRawContentBlockDeltaEvent struct {
+	Delta BetaRawContentBlockDeltaUnion `json:"delta,required"`
+	Index int64                         `json:"index,required"`
+	Type  constant.ContentBlockDelta    `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Delta       respjson.Field
+		Index       respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaRawContentBlockDeltaEvent) RawJSON() string { return r.JSON.raw }
+func (r *BetaRawContentBlockDeltaEvent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaRawContentBlockStartEvent struct {
+	// The content block that started streaming at this index.
+	ContentBlock BetaRawContentBlockStartEventContentBlockUnion `json:"content_block,required"`
+	Index        int64                                          `json:"index,required"`
+	Type         constant.ContentBlockStart                     `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ContentBlock respjson.Field
+		Index        respjson.Field
+		Type         respjson.Field
+		ExtraFields  map[string]respjson.Field
+		raw          string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaRawContentBlockStartEvent) RawJSON() string { return r.JSON.raw }
+func (r *BetaRawContentBlockStartEvent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaRawContentBlockStartEventContentBlockUnion contains all possible properties
+// and values from [BetaTextBlock], [BetaToolUseBlock], [BetaServerToolUseBlock],
+// [BetaWebSearchToolResultBlock], [BetaCodeExecutionToolResultBlock],
+// [BetaMCPToolUseBlock], [BetaMCPToolResultBlock], [BetaContainerUploadBlock],
+// [BetaThinkingBlock], [BetaRedactedThinkingBlock].
+//
+// Use the [BetaRawContentBlockStartEventContentBlockUnion.AsAny] method to switch
+// on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type BetaRawContentBlockStartEventContentBlockUnion struct {
+	// This field is from variant [BetaTextBlock].
+	Citations []BetaTextCitationUnion `json:"citations"`
+	// This field is from variant [BetaTextBlock].
+	Text string `json:"text"`
+	// Any of "text", "tool_use", "server_tool_use", "web_search_tool_result",
+	// "code_execution_tool_result", "mcp_tool_use", "mcp_tool_result",
+	// "container_upload", "thinking", "redacted_thinking".
+	Type  string `json:"type"`
+	ID    string `json:"id"`
+	Input any    `json:"input"`
+	Name  string `json:"name"`
+	// This field is a union of [BetaWebSearchToolResultBlockContentUnion],
+	// [BetaCodeExecutionToolResultBlockContentUnion],
+	// [BetaMCPToolResultBlockContentUnion]
+	Content   BetaRawContentBlockStartEventContentBlockUnionContent `json:"content"`
+	ToolUseID string                                                `json:"tool_use_id"`
+	// This field is from variant [BetaMCPToolUseBlock].
+	ServerName string `json:"server_name"`
+	// This field is from variant [BetaMCPToolResultBlock].
+	IsError bool `json:"is_error"`
+	// This field is from variant [BetaContainerUploadBlock].
+	FileID string `json:"file_id"`
+	// This field is from variant [BetaThinkingBlock].
+	Signature string `json:"signature"`
+	// This field is from variant [BetaThinkingBlock].
+	Thinking string `json:"thinking"`
+	// This field is from variant [BetaRedactedThinkingBlock].
+	Data string `json:"data"`
+	JSON struct {
+		Citations  respjson.Field
+		Text       respjson.Field
+		Type       respjson.Field
+		ID         respjson.Field
+		Input      respjson.Field
+		Name       respjson.Field
+		Content    respjson.Field
+		ToolUseID  respjson.Field
+		ServerName respjson.Field
+		IsError    respjson.Field
+		FileID     respjson.Field
+		Signature  respjson.Field
+		Thinking   respjson.Field
+		Data       respjson.Field
+		raw        string
+	} `json:"-"`
+}
+
+// anyBetaRawContentBlockStartEventContentBlock is implemented by each variant of
+// [BetaRawContentBlockStartEventContentBlockUnion] to add type safety for the
+// return type of [BetaRawContentBlockStartEventContentBlockUnion.AsAny]
+type anyBetaRawContentBlockStartEventContentBlock interface {
+	implBetaRawContentBlockStartEventContentBlockUnion()
+}
+
+func (BetaTextBlock) implBetaRawContentBlockStartEventContentBlockUnion()                    {}
+func (BetaToolUseBlock) implBetaRawContentBlockStartEventContentBlockUnion()                 {}
+func (BetaServerToolUseBlock) implBetaRawContentBlockStartEventContentBlockUnion()           {}
+func (BetaWebSearchToolResultBlock) implBetaRawContentBlockStartEventContentBlockUnion()     {}
+func (BetaCodeExecutionToolResultBlock) implBetaRawContentBlockStartEventContentBlockUnion() {}
+func (BetaMCPToolUseBlock) implBetaRawContentBlockStartEventContentBlockUnion()              {}
+func (BetaMCPToolResultBlock) implBetaRawContentBlockStartEventContentBlockUnion()           {}
+func (BetaContainerUploadBlock) implBetaRawContentBlockStartEventContentBlockUnion()         {}
+func (BetaThinkingBlock) implBetaRawContentBlockStartEventContentBlockUnion()                {}
+func (BetaRedactedThinkingBlock) implBetaRawContentBlockStartEventContentBlockUnion()        {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := BetaRawContentBlockStartEventContentBlockUnion.AsAny().(type) {
+//	case anthropic.BetaTextBlock:
+//	case anthropic.BetaToolUseBlock:
+//	case anthropic.BetaServerToolUseBlock:
+//	case anthropic.BetaWebSearchToolResultBlock:
+//	case anthropic.BetaCodeExecutionToolResultBlock:
+//	case anthropic.BetaMCPToolUseBlock:
+//	case anthropic.BetaMCPToolResultBlock:
+//	case anthropic.BetaContainerUploadBlock:
+//	case anthropic.BetaThinkingBlock:
+//	case anthropic.BetaRedactedThinkingBlock:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u BetaRawContentBlockStartEventContentBlockUnion) AsAny() anyBetaRawContentBlockStartEventContentBlock {
+	switch u.Type {
+	case "text":
+		return u.AsText()
+	case "tool_use":
+		return u.AsToolUse()
+	case "server_tool_use":
+		return u.AsServerToolUse()
+	case "web_search_tool_result":
+		return u.AsWebSearchToolResult()
+	case "code_execution_tool_result":
+		return u.AsCodeExecutionToolResult()
+	case "mcp_tool_use":
+		return u.AsMCPToolUse()
+	case "mcp_tool_result":
+		return u.AsMCPToolResult()
+	case "container_upload":
+		return u.AsContainerUpload()
+	case "thinking":
+		return u.AsThinking()
+	case "redacted_thinking":
+		return u.AsRedactedThinking()
+	}
+	return nil
+}
+
+func (u BetaRawContentBlockStartEventContentBlockUnion) AsText() (v BetaTextBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockStartEventContentBlockUnion) AsToolUse() (v BetaToolUseBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockStartEventContentBlockUnion) AsServerToolUse() (v BetaServerToolUseBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockStartEventContentBlockUnion) AsWebSearchToolResult() (v BetaWebSearchToolResultBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockStartEventContentBlockUnion) AsCodeExecutionToolResult() (v BetaCodeExecutionToolResultBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockStartEventContentBlockUnion) AsMCPToolUse() (v BetaMCPToolUseBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockStartEventContentBlockUnion) AsMCPToolResult() (v BetaMCPToolResultBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockStartEventContentBlockUnion) AsContainerUpload() (v BetaContainerUploadBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockStartEventContentBlockUnion) AsThinking() (v BetaThinkingBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawContentBlockStartEventContentBlockUnion) AsRedactedThinking() (v BetaRedactedThinkingBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u BetaRawContentBlockStartEventContentBlockUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *BetaRawContentBlockStartEventContentBlockUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaRawContentBlockStartEventContentBlockUnionContent is an implicit subunion of
+// [BetaRawContentBlockStartEventContentBlockUnion].
+// BetaRawContentBlockStartEventContentBlockUnionContent provides convenient access
+// to the sub-properties of the union.
+//
+// For type safety it is recommended to directly use a variant of the
+// [BetaRawContentBlockStartEventContentBlockUnion].
+//
+// If the underlying value is not a json object, one of the following properties
+// will be valid: OfBetaWebSearchResultBlockArray, OfString, or
+// OfBetaMCPToolResultBlockContent.
+type BetaRawContentBlockStartEventContentBlockUnionContent struct {
+	// This field will be present if the value is a [[]BetaWebSearchResultBlock]
+	// instead of an object.
+	OfBetaWebSearchResultBlockArray []BetaWebSearchResultBlock `json:",inline"`
+	// This field will be present if the value is a [string] instead of an object.
+	OfString string `json:",inline"`
+	// This field will be present if the value is a [[]BetaTextBlock] instead of an
+	// object.
+	OfBetaMCPToolResultBlockContent []BetaTextBlock `json:",inline"`
+	ErrorCode                       string          `json:"error_code"`
+	Type                            string          `json:"type"`
+	// This field is from variant [BetaCodeExecutionToolResultBlockContentUnion].
+	Content []BetaCodeExecutionOutputBlock `json:"content"`
+	// This field is from variant [BetaCodeExecutionToolResultBlockContentUnion].
+	ReturnCode int64 `json:"return_code"`
+	// This field is from variant [BetaCodeExecutionToolResultBlockContentUnion].
+	Stderr string `json:"stderr"`
+	// This field is from variant [BetaCodeExecutionToolResultBlockContentUnion].
+	Stdout string `json:"stdout"`
+	JSON   struct {
+		OfBetaWebSearchResultBlockArray respjson.Field
+		OfString                        respjson.Field
+		OfBetaMCPToolResultBlockContent respjson.Field
+		ErrorCode                       respjson.Field
+		Type                            respjson.Field
+		Content                         respjson.Field
+		ReturnCode                      respjson.Field
+		Stderr                          respjson.Field
+		Stdout                          respjson.Field
+		raw                             string
+	} `json:"-"`
+}
+
+func (r *BetaRawContentBlockStartEventContentBlockUnionContent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaRawContentBlockStopEvent struct {
+	Index int64                     `json:"index,required"`
+	Type  constant.ContentBlockStop `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Index       respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaRawContentBlockStopEvent) RawJSON() string { return r.JSON.raw }
+func (r *BetaRawContentBlockStopEvent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaRawMessageDeltaEvent struct {
+	Delta BetaRawMessageDeltaEventDelta `json:"delta,required"`
+	Type  constant.MessageDelta         `json:"type,required"`
+	// Billing and rate-limit usage.
+	//
+	// Anthropic's API bills and rate-limits by token counts, as tokens represent the
+	// underlying cost to our systems.
+	//
+	// Under the hood, the API transforms requests into a format suitable for the
+	// model. The model's output then goes through a parsing stage before becoming an
+	// API response. As a result, the token counts in `usage` will not match one-to-one
+	// with the exact visible content of an API request or response.
+	//
+	// For example, `output_tokens` will be non-zero, even for an empty string response
+	// from Claude.
+	//
+	// Total input tokens in a request is the summation of `input_tokens`,
+	// `cache_creation_input_tokens`, and `cache_read_input_tokens`.
+	Usage BetaMessageDeltaUsage `json:"usage,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Delta       respjson.Field
+		Type        respjson.Field
+		Usage       respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaRawMessageDeltaEvent) RawJSON() string { return r.JSON.raw }
+func (r *BetaRawMessageDeltaEvent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaRawMessageDeltaEventDelta struct {
+	// Information about the container used in the request (for the code execution
+	// tool).
+	Container BetaContainer `json:"container,required"`
+	// Any of "end_turn", "max_tokens", "stop_sequence", "tool_use", "pause_turn",
+	// "refusal".
+	StopReason   BetaStopReason `json:"stop_reason,required"`
+	StopSequence string         `json:"stop_sequence,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Container    respjson.Field
+		StopReason   respjson.Field
+		StopSequence respjson.Field
+		ExtraFields  map[string]respjson.Field
+		raw          string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaRawMessageDeltaEventDelta) RawJSON() string { return r.JSON.raw }
+func (r *BetaRawMessageDeltaEventDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaRawMessageStartEvent struct {
+	Message BetaMessage           `json:"message,required"`
+	Type    constant.MessageStart `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaRawMessageStartEvent) RawJSON() string { return r.JSON.raw }
+func (r *BetaRawMessageStartEvent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaRawMessageStopEvent struct {
+	Type constant.MessageStop `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaRawMessageStopEvent) RawJSON() string { return r.JSON.raw }
+func (r *BetaRawMessageStopEvent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaRawMessageStreamEventUnion contains all possible properties and values from
+// [BetaRawMessageStartEvent], [BetaRawMessageDeltaEvent],
+// [BetaRawMessageStopEvent], [BetaRawContentBlockStartEvent],
+// [BetaRawContentBlockDeltaEvent], [BetaRawContentBlockStopEvent].
+//
+// Use the [BetaRawMessageStreamEventUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type BetaRawMessageStreamEventUnion struct {
+	// This field is from variant [BetaRawMessageStartEvent].
+	Message BetaMessage `json:"message"`
+	// Any of "message_start", "message_delta", "message_stop", "content_block_start",
+	// "content_block_delta", "content_block_stop".
+	Type string `json:"type"`
+	// This field is a union of [BetaRawMessageDeltaEventDelta],
+	// [BetaRawContentBlockDeltaUnion]
+	Delta BetaRawMessageStreamEventUnionDelta `json:"delta"`
+	// This field is from variant [BetaRawMessageDeltaEvent].
+	Usage BetaMessageDeltaUsage `json:"usage"`
+	// This field is from variant [BetaRawContentBlockStartEvent].
+	ContentBlock BetaRawContentBlockStartEventContentBlockUnion `json:"content_block"`
+	Index        int64                                          `json:"index"`
+	JSON         struct {
+		Message      respjson.Field
+		Type         respjson.Field
+		Delta        respjson.Field
+		Usage        respjson.Field
+		ContentBlock respjson.Field
+		Index        respjson.Field
+		raw          string
+	} `json:"-"`
+}
+
+// anyBetaRawMessageStreamEvent is implemented by each variant of
+// [BetaRawMessageStreamEventUnion] to add type safety for the return type of
+// [BetaRawMessageStreamEventUnion.AsAny]
+type anyBetaRawMessageStreamEvent interface {
+	implBetaRawMessageStreamEventUnion()
+}
+
+func (BetaRawMessageStartEvent) implBetaRawMessageStreamEventUnion()      {}
+func (BetaRawMessageDeltaEvent) implBetaRawMessageStreamEventUnion()      {}
+func (BetaRawMessageStopEvent) implBetaRawMessageStreamEventUnion()       {}
+func (BetaRawContentBlockStartEvent) implBetaRawMessageStreamEventUnion() {}
+func (BetaRawContentBlockDeltaEvent) implBetaRawMessageStreamEventUnion() {}
+func (BetaRawContentBlockStopEvent) implBetaRawMessageStreamEventUnion()  {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := BetaRawMessageStreamEventUnion.AsAny().(type) {
+//	case anthropic.BetaRawMessageStartEvent:
+//	case anthropic.BetaRawMessageDeltaEvent:
+//	case anthropic.BetaRawMessageStopEvent:
+//	case anthropic.BetaRawContentBlockStartEvent:
+//	case anthropic.BetaRawContentBlockDeltaEvent:
+//	case anthropic.BetaRawContentBlockStopEvent:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u BetaRawMessageStreamEventUnion) AsAny() anyBetaRawMessageStreamEvent {
+	switch u.Type {
+	case "message_start":
+		return u.AsMessageStart()
+	case "message_delta":
+		return u.AsMessageDelta()
+	case "message_stop":
+		return u.AsMessageStop()
+	case "content_block_start":
+		return u.AsContentBlockStart()
+	case "content_block_delta":
+		return u.AsContentBlockDelta()
+	case "content_block_stop":
+		return u.AsContentBlockStop()
+	}
+	return nil
+}
+
+func (u BetaRawMessageStreamEventUnion) AsMessageStart() (v BetaRawMessageStartEvent) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawMessageStreamEventUnion) AsMessageDelta() (v BetaRawMessageDeltaEvent) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawMessageStreamEventUnion) AsMessageStop() (v BetaRawMessageStopEvent) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawMessageStreamEventUnion) AsContentBlockStart() (v BetaRawContentBlockStartEvent) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawMessageStreamEventUnion) AsContentBlockDelta() (v BetaRawContentBlockDeltaEvent) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaRawMessageStreamEventUnion) AsContentBlockStop() (v BetaRawContentBlockStopEvent) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u BetaRawMessageStreamEventUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *BetaRawMessageStreamEventUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaRawMessageStreamEventUnionDelta is an implicit subunion of
+// [BetaRawMessageStreamEventUnion]. BetaRawMessageStreamEventUnionDelta provides
+// convenient access to the sub-properties of the union.
+//
+// For type safety it is recommended to directly use a variant of the
+// [BetaRawMessageStreamEventUnion].
+type BetaRawMessageStreamEventUnionDelta struct {
+	// This field is from variant [BetaRawMessageDeltaEventDelta].
+	Container BetaContainer `json:"container"`
+	// This field is from variant [BetaRawMessageDeltaEventDelta].
+	StopReason BetaStopReason `json:"stop_reason"`
+	// This field is from variant [BetaRawMessageDeltaEventDelta].
+	StopSequence string `json:"stop_sequence"`
+	// This field is from variant [BetaRawContentBlockDeltaUnion].
+	Text string `json:"text"`
+	Type string `json:"type"`
+	// This field is from variant [BetaRawContentBlockDeltaUnion].
+	PartialJSON string `json:"partial_json"`
+	// This field is from variant [BetaRawContentBlockDeltaUnion].
+	Citation BetaCitationsDeltaCitationUnion `json:"citation"`
+	// This field is from variant [BetaRawContentBlockDeltaUnion].
+	Thinking string `json:"thinking"`
+	// This field is from variant [BetaRawContentBlockDeltaUnion].
+	Signature string `json:"signature"`
+	JSON      struct {
+		Container    respjson.Field
+		StopReason   respjson.Field
+		StopSequence respjson.Field
+		Text         respjson.Field
+		Type         respjson.Field
+		PartialJSON  respjson.Field
+		Citation     respjson.Field
+		Thinking     respjson.Field
+		Signature    respjson.Field
+		raw          string
+	} `json:"-"`
+}
+
+func (r *BetaRawMessageStreamEventUnionDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Accumulate builds up the Message incrementally from a MessageStreamEvent. The
+// Message can then be used like any other Message, with the caveat that the
+// Message.JSON field, which is normally used to inspect the JSON sent over the
+// network, may not be fully populated.
+//
+//	message := anthropic.Message{}
+//	for stream.Next() {
+//		event := stream.Current()
+//		message.Accumulate(event)
+//	}
+func (acc *BetaMessage) Accumulate(event BetaRawMessageStreamEventUnion) error {
+	if acc == nil {
+		return fmt.Errorf("accumulate: cannot accumulate into nil Message")
+	}
+
+	switch event := event.AsAny().(type) {
+	case BetaRawMessageStartEvent:
+		*acc = event.Message
+	case BetaRawMessageDeltaEvent:
+		acc.StopReason = event.Delta.StopReason
+		acc.StopSequence = event.Delta.StopSequence
+		acc.Usage.OutputTokens = event.Usage.OutputTokens
+	case BetaRawMessageStopEvent:
+		accJson, err := json.Marshal(acc)
+		if err != nil {
+			return fmt.Errorf("error converting content block to JSON: %w", err)
+		}
+		acc.JSON.raw = string(accJson)
+	case BetaRawContentBlockStartEvent:
+		acc.Content = append(acc.Content, BetaContentBlockUnion{})
+		err := acc.Content[len(acc.Content)-1].UnmarshalJSON([]byte(event.ContentBlock.RawJSON()))
+		if err != nil {
+			return err
+		}
+	case BetaRawContentBlockDeltaEvent:
+		if len(acc.Content) == 0 {
+			return fmt.Errorf("received event of type %s but there was no content block", event.Type)
+		}
+		cb := &acc.Content[len(acc.Content)-1]
+		switch delta := event.Delta.AsAny().(type) {
+		case BetaTextDelta:
+			cb.Text += delta.Text
+		case BetaInputJSONDelta:
+			if len(delta.PartialJSON) != 0 {
+				if string(cb.Input) == "{}" {
+					cb.Input = []byte(delta.PartialJSON)
+				} else {
+					cb.Input = append(cb.Input, []byte(delta.PartialJSON)...)
+				}
+			}
+		case BetaThinkingDelta:
+			cb.Thinking += delta.Thinking
+		case BetaSignatureDelta:
+			cb.Signature += delta.Signature
+		case BetaCitationsDelta:
+			citation := BetaTextCitationUnion{}
+			err := citation.UnmarshalJSON([]byte(delta.Citation.RawJSON()))
+			if err != nil {
+				return fmt.Errorf("could not unmarshal citation delta into citation type: %w", err)
+			}
+			cb.Citations = append(cb.Citations, citation)
+		}
+	case BetaRawContentBlockStopEvent:
+		if len(acc.Content) == 0 {
+			return fmt.Errorf("received event of type %s but there was no content block", event.Type)
+		}
+		contentBlock := &acc.Content[len(acc.Content)-1]
+		cbJson, err := json.Marshal(contentBlock)
+		if err != nil {
+			return fmt.Errorf("error converting content block to JSON: %w", err)
+		}
+		contentBlock.JSON.raw = string(cbJson)
+	}
+
+	return nil
+}
+
+type BetaRedactedThinkingBlock struct {
+	Data string                    `json:"data,required"`
+	Type constant.RedactedThinking `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Data        respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaRedactedThinkingBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaRedactedThinkingBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (r BetaRedactedThinkingBlock) ToParam() BetaRedactedThinkingBlockParam {
+	var p BetaRedactedThinkingBlockParam
+	p.Type = r.Type
+	p.Data = r.Data
+	return p
+}
+
+// The properties Data, Type are required.
+type BetaRedactedThinkingBlockParam struct {
+	Data string `json:"data,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "redacted_thinking".
+	Type constant.RedactedThinking `json:"type,required"`
+	paramObj
+}
+
+func (r BetaRedactedThinkingBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaRedactedThinkingBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaRedactedThinkingBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaRequestMCPServerToolConfigurationParam struct {
+	Enabled      param.Opt[bool] `json:"enabled,omitzero"`
+	AllowedTools []string        `json:"allowed_tools,omitzero"`
+	paramObj
+}
+
+func (r BetaRequestMCPServerToolConfigurationParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaRequestMCPServerToolConfigurationParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaRequestMCPServerToolConfigurationParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Name, Type, URL are required.
+type BetaRequestMCPServerURLDefinitionParam struct {
+	Name               string                                     `json:"name,required"`
+	URL                string                                     `json:"url,required"`
+	AuthorizationToken param.Opt[string]                          `json:"authorization_token,omitzero"`
+	ToolConfiguration  BetaRequestMCPServerToolConfigurationParam `json:"tool_configuration,omitzero"`
+	// This field can be elided, and will marshal its zero value as "url".
+	Type constant.URL `json:"type,required"`
+	paramObj
+}
+
+func (r BetaRequestMCPServerURLDefinitionParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaRequestMCPServerURLDefinitionParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaRequestMCPServerURLDefinitionParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties ToolUseID, Type are required.
+type BetaRequestMCPToolResultBlockParam struct {
+	ToolUseID string          `json:"tool_use_id,required"`
+	IsError   param.Opt[bool] `json:"is_error,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam                 `json:"cache_control,omitzero"`
+	Content      BetaRequestMCPToolResultBlockParamContentUnion `json:"content,omitzero"`
+	// This field can be elided, and will marshal its zero value as "mcp_tool_result".
+	Type constant.MCPToolResult `json:"type,required"`
+	paramObj
+}
+
+func (r BetaRequestMCPToolResultBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaRequestMCPToolResultBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaRequestMCPToolResultBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaRequestMCPToolResultBlockParamContentUnion struct {
+	OfString                        param.Opt[string]    `json:",omitzero,inline"`
+	OfBetaMCPToolResultBlockContent []BetaTextBlockParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaRequestMCPToolResultBlockParamContentUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfString, u.OfBetaMCPToolResultBlockContent)
+}
+func (u *BetaRequestMCPToolResultBlockParamContentUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaRequestMCPToolResultBlockParamContentUnion) asAny() any {
+	if !param.IsOmitted(u.OfString) {
+		return &u.OfString.Value
+	} else if !param.IsOmitted(u.OfBetaMCPToolResultBlockContent) {
+		return &u.OfBetaMCPToolResultBlockContent
+	}
+	return nil
+}
+
+type BetaServerToolUsage struct {
+	// The number of web search tool requests.
+	WebSearchRequests int64 `json:"web_search_requests,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		WebSearchRequests respjson.Field
+		ExtraFields       map[string]respjson.Field
+		raw               string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaServerToolUsage) RawJSON() string { return r.JSON.raw }
+func (r *BetaServerToolUsage) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaServerToolUseBlock struct {
+	ID    string `json:"id,required"`
+	Input any    `json:"input,required"`
+	// Any of "web_search", "code_execution".
+	Name BetaServerToolUseBlockName `json:"name,required"`
+	Type constant.ServerToolUse     `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID          respjson.Field
+		Input       respjson.Field
+		Name        respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaServerToolUseBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaServerToolUseBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaServerToolUseBlockName string
+
+const (
+	BetaServerToolUseBlockNameWebSearch     BetaServerToolUseBlockName = "web_search"
+	BetaServerToolUseBlockNameCodeExecution BetaServerToolUseBlockName = "code_execution"
+)
+
+// The properties ID, Input, Name, Type are required.
+type BetaServerToolUseBlockParam struct {
+	ID    string `json:"id,required"`
+	Input any    `json:"input,omitzero,required"`
+	// Any of "web_search", "code_execution".
+	Name BetaServerToolUseBlockParamName `json:"name,omitzero,required"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as "server_tool_use".
+	Type constant.ServerToolUse `json:"type,required"`
+	paramObj
+}
+
+func (r BetaServerToolUseBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaServerToolUseBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaServerToolUseBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaServerToolUseBlockParamName string
+
+const (
+	BetaServerToolUseBlockParamNameWebSearch     BetaServerToolUseBlockParamName = "web_search"
+	BetaServerToolUseBlockParamNameCodeExecution BetaServerToolUseBlockParamName = "code_execution"
+)
+
+type BetaSignatureDelta struct {
+	Signature string                  `json:"signature,required"`
+	Type      constant.SignatureDelta `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Signature   respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaSignatureDelta) RawJSON() string { return r.JSON.raw }
+func (r *BetaSignatureDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaStopReason string
+
+const (
+	BetaStopReasonEndTurn      BetaStopReason = "end_turn"
+	BetaStopReasonMaxTokens    BetaStopReason = "max_tokens"
+	BetaStopReasonStopSequence BetaStopReason = "stop_sequence"
+	BetaStopReasonToolUse      BetaStopReason = "tool_use"
+	BetaStopReasonPauseTurn    BetaStopReason = "pause_turn"
+	BetaStopReasonRefusal      BetaStopReason = "refusal"
+)
+
+type BetaTextBlock struct {
+	// Citations supporting the text block.
+	//
+	// The type of citation returned will depend on the type of document being cited.
+	// Citing a PDF results in `page_location`, plain text results in `char_location`,
+	// and content document results in `content_block_location`.
+	Citations []BetaTextCitationUnion `json:"citations,required"`
+	Text      string                  `json:"text,required"`
+	Type      constant.Text           `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Citations   respjson.Field
+		Text        respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaTextBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaTextBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (r BetaTextBlock) ToParam() BetaTextBlockParam {
+	var p BetaTextBlockParam
+	p.Type = r.Type
+	p.Text = r.Text
+
+	// Distinguish between a nil and zero length slice, since some compatible
+	// APIs may not require citations.
+	if r.Citations != nil {
+		p.Citations = make([]BetaTextCitationParamUnion, len(r.Citations))
+	}
+
+	for i, citation := range r.Citations {
+		switch citationVariant := citation.AsAny().(type) {
+		case BetaCitationCharLocation:
+			var citationParam BetaCitationCharLocationParam
+			citationParam.Type = citationVariant.Type
+			citationParam.DocumentTitle = paramutil.ToOpt(citationVariant.DocumentTitle, citationVariant.JSON.DocumentTitle)
+			citationParam.CitedText = citationVariant.CitedText
+			citationParam.DocumentIndex = citationVariant.DocumentIndex
+			citationParam.EndCharIndex = citationVariant.EndCharIndex
+			citationParam.StartCharIndex = citationVariant.StartCharIndex
+			p.Citations[i] = BetaTextCitationParamUnion{OfCharLocation: &citationParam}
+		case BetaCitationPageLocation:
+			var citationParam BetaCitationPageLocationParam
+			citationParam.Type = citationVariant.Type
+			citationParam.DocumentTitle = paramutil.ToOpt(citationVariant.DocumentTitle, citationVariant.JSON.DocumentTitle)
+			citationParam.DocumentIndex = citationVariant.DocumentIndex
+			citationParam.EndPageNumber = citationVariant.EndPageNumber
+			citationParam.StartPageNumber = citationVariant.StartPageNumber
+			p.Citations[i] = BetaTextCitationParamUnion{OfPageLocation: &citationParam}
+		case BetaCitationContentBlockLocation:
+			var citationParam BetaCitationContentBlockLocationParam
+			citationParam.Type = citationVariant.Type
+			citationParam.DocumentTitle = paramutil.ToOpt(citationVariant.DocumentTitle, citationVariant.JSON.DocumentTitle)
+			citationParam.CitedText = citationVariant.CitedText
+			citationParam.DocumentIndex = citationVariant.DocumentIndex
+			citationParam.EndBlockIndex = citationVariant.EndBlockIndex
+			citationParam.StartBlockIndex = citationVariant.StartBlockIndex
+			p.Citations[i] = BetaTextCitationParamUnion{OfContentBlockLocation: &citationParam}
+		}
+	}
+	return p
+}
+
+// The properties Text, Type are required.
+type BetaTextBlockParam struct {
+	Text      string                       `json:"text,required"`
+	Citations []BetaTextCitationParamUnion `json:"citations,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as "text".
+	Type constant.Text `json:"type,required"`
+	paramObj
+}
+
+func (r BetaTextBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaTextBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaTextBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaTextCitationUnion contains all possible properties and values from
+// [BetaCitationCharLocation], [BetaCitationPageLocation],
+// [BetaCitationContentBlockLocation], [BetaCitationsWebSearchResultLocation].
+//
+// Use the [BetaTextCitationUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type BetaTextCitationUnion struct {
+	CitedText     string `json:"cited_text"`
+	DocumentIndex int64  `json:"document_index"`
+	DocumentTitle string `json:"document_title"`
+	// This field is from variant [BetaCitationCharLocation].
+	EndCharIndex int64 `json:"end_char_index"`
+	// This field is from variant [BetaCitationCharLocation].
+	StartCharIndex int64 `json:"start_char_index"`
+	// Any of "char_location", "page_location", "content_block_location",
+	// "web_search_result_location".
+	Type string `json:"type"`
+	// This field is from variant [BetaCitationPageLocation].
+	EndPageNumber int64 `json:"end_page_number"`
+	// This field is from variant [BetaCitationPageLocation].
+	StartPageNumber int64 `json:"start_page_number"`
+	// This field is from variant [BetaCitationContentBlockLocation].
+	EndBlockIndex int64 `json:"end_block_index"`
+	// This field is from variant [BetaCitationContentBlockLocation].
+	StartBlockIndex int64 `json:"start_block_index"`
+	// This field is from variant [BetaCitationsWebSearchResultLocation].
+	EncryptedIndex string `json:"encrypted_index"`
+	// This field is from variant [BetaCitationsWebSearchResultLocation].
+	Title string `json:"title"`
+	// This field is from variant [BetaCitationsWebSearchResultLocation].
+	URL  string `json:"url"`
+	JSON struct {
+		CitedText       respjson.Field
+		DocumentIndex   respjson.Field
+		DocumentTitle   respjson.Field
+		EndCharIndex    respjson.Field
+		StartCharIndex  respjson.Field
+		Type            respjson.Field
+		EndPageNumber   respjson.Field
+		StartPageNumber respjson.Field
+		EndBlockIndex   respjson.Field
+		StartBlockIndex respjson.Field
+		EncryptedIndex  respjson.Field
+		Title           respjson.Field
+		URL             respjson.Field
+		raw             string
+	} `json:"-"`
+}
+
+// anyBetaTextCitation is implemented by each variant of [BetaTextCitationUnion] to
+// add type safety for the return type of [BetaTextCitationUnion.AsAny]
+type anyBetaTextCitation interface {
+	implBetaTextCitationUnion()
+}
+
+func (BetaCitationCharLocation) implBetaTextCitationUnion()             {}
+func (BetaCitationPageLocation) implBetaTextCitationUnion()             {}
+func (BetaCitationContentBlockLocation) implBetaTextCitationUnion()     {}
+func (BetaCitationsWebSearchResultLocation) implBetaTextCitationUnion() {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := BetaTextCitationUnion.AsAny().(type) {
+//	case anthropic.BetaCitationCharLocation:
+//	case anthropic.BetaCitationPageLocation:
+//	case anthropic.BetaCitationContentBlockLocation:
+//	case anthropic.BetaCitationsWebSearchResultLocation:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u BetaTextCitationUnion) AsAny() anyBetaTextCitation {
+	switch u.Type {
+	case "char_location":
+		return u.AsCharLocation()
+	case "page_location":
+		return u.AsPageLocation()
+	case "content_block_location":
+		return u.AsContentBlockLocation()
+	case "web_search_result_location":
+		return u.AsWebSearchResultLocation()
+	}
+	return nil
+}
+
+func (u BetaTextCitationUnion) AsCharLocation() (v BetaCitationCharLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaTextCitationUnion) AsPageLocation() (v BetaCitationPageLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaTextCitationUnion) AsContentBlockLocation() (v BetaCitationContentBlockLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaTextCitationUnion) AsWebSearchResultLocation() (v BetaCitationsWebSearchResultLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u BetaTextCitationUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *BetaTextCitationUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
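The `AsAny` methods above all follow the same discriminated-union pattern: read the `"type"` field from the retained raw JSON, then re-unmarshal the whole payload into the matching concrete variant. A self-contained sketch of that dispatch, assuming only `encoding/json` (the `charLocation` type here is an illustrative stand-in, not the SDK's `BetaCitationCharLocation`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// charLocation is a simplified stand-in for one union variant.
type charLocation struct {
	CitedText      string `json:"cited_text"`
	StartCharIndex int64  `json:"start_char_index"`
	EndCharIndex   int64  `json:"end_char_index"`
}

// asAny peeks at the "type" discriminator, then decodes the raw JSON into
// the variant that discriminator names.
func asAny(raw []byte) (any, error) {
	var head struct {
		Type string `json:"type"`
	}
	if err := json.Unmarshal(raw, &head); err != nil {
		return nil, err
	}
	switch head.Type {
	case "char_location":
		var v charLocation
		if err := json.Unmarshal(raw, &v); err != nil {
			return nil, err
		}
		return v, nil
	}
	return nil, fmt.Errorf("unknown citation type %q", head.Type)
}

func main() {
	raw := []byte(`{"type":"char_location","cited_text":"hello","start_char_index":0,"end_char_index":5}`)
	v, err := asAny(raw)
	if err != nil {
		panic(err)
	}
	if c, ok := v.(charLocation); ok {
		fmt.Println(c.CitedText, c.StartCharIndex, c.EndCharIndex)
	}
}
```

Keeping the raw JSON around (as the generated `JSON.raw` field does) is what makes this cheap: each `As…` call can decode on demand without re-fetching anything.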
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaTextCitationParamUnion struct {
+	OfCharLocation            *BetaCitationCharLocationParam            `json:",omitzero,inline"`
+	OfPageLocation            *BetaCitationPageLocationParam            `json:",omitzero,inline"`
+	OfContentBlockLocation    *BetaCitationContentBlockLocationParam    `json:",omitzero,inline"`
+	OfWebSearchResultLocation *BetaCitationWebSearchResultLocationParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaTextCitationParamUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfCharLocation, u.OfPageLocation, u.OfContentBlockLocation, u.OfWebSearchResultLocation)
+}
+func (u *BetaTextCitationParamUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaTextCitationParamUnion) asAny() any {
+	if !param.IsOmitted(u.OfCharLocation) {
+		return u.OfCharLocation
+	} else if !param.IsOmitted(u.OfPageLocation) {
+		return u.OfPageLocation
+	} else if !param.IsOmitted(u.OfContentBlockLocation) {
+		return u.OfContentBlockLocation
+	} else if !param.IsOmitted(u.OfWebSearchResultLocation) {
+		return u.OfWebSearchResultLocation
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetEndCharIndex() *int64 {
+	if vt := u.OfCharLocation; vt != nil {
+		return &vt.EndCharIndex
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetStartCharIndex() *int64 {
+	if vt := u.OfCharLocation; vt != nil {
+		return &vt.StartCharIndex
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetEndPageNumber() *int64 {
+	if vt := u.OfPageLocation; vt != nil {
+		return &vt.EndPageNumber
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetStartPageNumber() *int64 {
+	if vt := u.OfPageLocation; vt != nil {
+		return &vt.StartPageNumber
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetEndBlockIndex() *int64 {
+	if vt := u.OfContentBlockLocation; vt != nil {
+		return &vt.EndBlockIndex
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetStartBlockIndex() *int64 {
+	if vt := u.OfContentBlockLocation; vt != nil {
+		return &vt.StartBlockIndex
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetEncryptedIndex() *string {
+	if vt := u.OfWebSearchResultLocation; vt != nil {
+		return &vt.EncryptedIndex
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetTitle() *string {
+	if vt := u.OfWebSearchResultLocation; vt != nil && vt.Title.Valid() {
+		return &vt.Title.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetURL() *string {
+	if vt := u.OfWebSearchResultLocation; vt != nil {
+		return &vt.URL
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetCitedText() *string {
+	if vt := u.OfCharLocation; vt != nil {
+		return (*string)(&vt.CitedText)
+	} else if vt := u.OfPageLocation; vt != nil {
+		return (*string)(&vt.CitedText)
+	} else if vt := u.OfContentBlockLocation; vt != nil {
+		return (*string)(&vt.CitedText)
+	} else if vt := u.OfWebSearchResultLocation; vt != nil {
+		return (*string)(&vt.CitedText)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetDocumentIndex() *int64 {
+	if vt := u.OfCharLocation; vt != nil {
+		return (*int64)(&vt.DocumentIndex)
+	} else if vt := u.OfPageLocation; vt != nil {
+		return (*int64)(&vt.DocumentIndex)
+	} else if vt := u.OfContentBlockLocation; vt != nil {
+		return (*int64)(&vt.DocumentIndex)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetDocumentTitle() *string {
+	if vt := u.OfCharLocation; vt != nil && vt.DocumentTitle.Valid() {
+		return &vt.DocumentTitle.Value
+	} else if vt := u.OfPageLocation; vt != nil && vt.DocumentTitle.Valid() {
+		return &vt.DocumentTitle.Value
+	} else if vt := u.OfContentBlockLocation; vt != nil && vt.DocumentTitle.Valid() {
+		return &vt.DocumentTitle.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaTextCitationParamUnion) GetType() *string {
+	if vt := u.OfCharLocation; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfPageLocation; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfContentBlockLocation; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfWebSearchResultLocation; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+type BetaTextDelta struct {
+	Text string             `json:"text,required"`
+	Type constant.TextDelta `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Text        respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaTextDelta) RawJSON() string { return r.JSON.raw }
+func (r *BetaTextDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaThinkingBlock struct {
+	Signature string            `json:"signature,required"`
+	Thinking  string            `json:"thinking,required"`
+	Type      constant.Thinking `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Signature   respjson.Field
+		Thinking    respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaThinkingBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaThinkingBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (r BetaThinkingBlock) ToParam() BetaThinkingBlockParam {
+	var p BetaThinkingBlockParam
+	p.Type = r.Type
+	p.Signature = r.Signature
+	p.Thinking = r.Thinking
+	return p
+}
+
+// The properties Signature, Thinking, Type are required.
+type BetaThinkingBlockParam struct {
+	Signature string `json:"signature,required"`
+	Thinking  string `json:"thinking,required"`
+	// This field can be elided, and will marshal its zero value as "thinking".
+	Type constant.Thinking `json:"type,required"`
+	paramObj
+}
+
+func (r BetaThinkingBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaThinkingBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaThinkingBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func NewBetaThinkingConfigDisabledParam() BetaThinkingConfigDisabledParam {
+	return BetaThinkingConfigDisabledParam{
+		Type: "disabled",
+	}
+}
+
+// This struct has a constant value, construct it with
+// [NewBetaThinkingConfigDisabledParam].
+type BetaThinkingConfigDisabledParam struct {
+	Type constant.Disabled `json:"type,required"`
+	paramObj
+}
+
+func (r BetaThinkingConfigDisabledParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaThinkingConfigDisabledParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaThinkingConfigDisabledParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties BudgetTokens, Type are required.
+type BetaThinkingConfigEnabledParam struct {
+	// Determines how many tokens Claude can use for its internal reasoning process.
+	// Larger budgets can enable more thorough analysis for complex problems, improving
+	// response quality.
+	//
+	// Must be ≥1024 and less than `max_tokens`.
+	//
+	// See
+	// [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking)
+	// for details.
+	BudgetTokens int64 `json:"budget_tokens,required"`
+	// This field can be elided, and will marshal its zero value as "enabled".
+	Type constant.Enabled `json:"type,required"`
+	paramObj
+}
+
+func (r BetaThinkingConfigEnabledParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaThinkingConfigEnabledParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaThinkingConfigEnabledParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func BetaThinkingConfigParamOfEnabled(budgetTokens int64) BetaThinkingConfigParamUnion {
+	var enabled BetaThinkingConfigEnabledParam
+	enabled.BudgetTokens = budgetTokens
+	return BetaThinkingConfigParamUnion{OfEnabled: &enabled}
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaThinkingConfigParamUnion struct {
+	OfEnabled  *BetaThinkingConfigEnabledParam  `json:",omitzero,inline"`
+	OfDisabled *BetaThinkingConfigDisabledParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaThinkingConfigParamUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfEnabled, u.OfDisabled)
+}
+func (u *BetaThinkingConfigParamUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaThinkingConfigParamUnion) asAny() any {
+	if !param.IsOmitted(u.OfEnabled) {
+		return u.OfEnabled
+	} else if !param.IsOmitted(u.OfDisabled) {
+		return u.OfDisabled
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaThinkingConfigParamUnion) GetBudgetTokens() *int64 {
+	if vt := u.OfEnabled; vt != nil {
+		return &vt.BudgetTokens
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaThinkingConfigParamUnion) GetType() *string {
+	if vt := u.OfEnabled; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfDisabled; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+type BetaThinkingDelta struct {
+	Thinking string                 `json:"thinking,required"`
+	Type     constant.ThinkingDelta `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Thinking    respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaThinkingDelta) RawJSON() string { return r.JSON.raw }
+func (r *BetaThinkingDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties InputSchema, Name are required.
+type BetaToolParam struct {
+	// [JSON schema](https://json-schema.org/draft/2020-12) for this tool's input.
+	//
+	// This defines the shape of the `input` that your tool accepts and that the model
+	// will produce.
+	InputSchema BetaToolInputSchemaParam `json:"input_schema,omitzero,required"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	Name string `json:"name,required"`
+	// Description of what this tool does.
+	//
+	// Tool descriptions should be as detailed as possible. The more information that
+	// the model has about what the tool is and how to use it, the better it will
+	// perform. You can use natural language descriptions to reinforce important
+	// aspects of the tool input JSON schema.
+	Description param.Opt[string] `json:"description,omitzero"`
+	// Any of "custom".
+	Type BetaToolType `json:"type,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	paramObj
+}
+
+func (r BetaToolParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// [JSON schema](https://json-schema.org/draft/2020-12) for this tool's input.
+//
+// This defines the shape of the `input` that your tool accepts and that the model
+// will produce.
+//
+// The property Type is required.
+type BetaToolInputSchemaParam struct {
+	Properties any      `json:"properties,omitzero"`
+	Required   []string `json:"required,omitzero"`
+	// This field can be elided, and will marshal its zero value as "object".
+	Type        constant.Object `json:"type,required"`
+	ExtraFields map[string]any  `json:"-"`
+	paramObj
+}
+
+func (r BetaToolInputSchemaParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolInputSchemaParam
+	return param.MarshalWithExtras(r, (*shadow)(&r), r.ExtraFields)
+}
+func (r *BetaToolInputSchemaParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaToolType string
+
+const (
+	BetaToolTypeCustom BetaToolType = "custom"
+)
+
+// The properties Name, Type are required.
+type BetaToolBash20241022Param struct {
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as "bash".
+	Name constant.Bash `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as "bash_20241022".
+	Type constant.Bash20241022 `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolBash20241022Param) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolBash20241022Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolBash20241022Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Name, Type are required.
+type BetaToolBash20250124Param struct {
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as "bash".
+	Name constant.Bash `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as "bash_20250124".
+	Type constant.Bash20250124 `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolBash20250124Param) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolBash20250124Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolBash20250124Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func BetaToolChoiceParamOfTool(name string) BetaToolChoiceUnionParam {
+	var tool BetaToolChoiceToolParam
+	tool.Name = name
+	return BetaToolChoiceUnionParam{OfTool: &tool}
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaToolChoiceUnionParam struct {
+	OfAuto *BetaToolChoiceAutoParam `json:",omitzero,inline"`
+	OfAny  *BetaToolChoiceAnyParam  `json:",omitzero,inline"`
+	OfTool *BetaToolChoiceToolParam `json:",omitzero,inline"`
+	OfNone *BetaToolChoiceNoneParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaToolChoiceUnionParam) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfAuto, u.OfAny, u.OfTool, u.OfNone)
+}
+func (u *BetaToolChoiceUnionParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaToolChoiceUnionParam) asAny() any {
+	if !param.IsOmitted(u.OfAuto) {
+		return u.OfAuto
+	} else if !param.IsOmitted(u.OfAny) {
+		return u.OfAny
+	} else if !param.IsOmitted(u.OfTool) {
+		return u.OfTool
+	} else if !param.IsOmitted(u.OfNone) {
+		return u.OfNone
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolChoiceUnionParam) GetName() *string {
+	if vt := u.OfTool; vt != nil {
+		return &vt.Name
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolChoiceUnionParam) GetType() *string {
+	if vt := u.OfAuto; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfAny; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfTool; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfNone; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolChoiceUnionParam) GetDisableParallelToolUse() *bool {
+	if vt := u.OfAuto; vt != nil && vt.DisableParallelToolUse.Valid() {
+		return &vt.DisableParallelToolUse.Value
+	} else if vt := u.OfAny; vt != nil && vt.DisableParallelToolUse.Valid() {
+		return &vt.DisableParallelToolUse.Value
+	} else if vt := u.OfTool; vt != nil && vt.DisableParallelToolUse.Valid() {
+		return &vt.DisableParallelToolUse.Value
+	}
+	return nil
+}
+
+// The model will use any available tools.
+//
+// The property Type is required.
+type BetaToolChoiceAnyParam struct {
+	// Whether to disable parallel tool use.
+	//
+	// Defaults to `false`. If set to `true`, the model will output exactly one tool
+	// use.
+	DisableParallelToolUse param.Opt[bool] `json:"disable_parallel_tool_use,omitzero"`
+	// This field can be elided, and will marshal its zero value as "any".
+	Type constant.Any `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolChoiceAnyParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolChoiceAnyParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolChoiceAnyParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The model will automatically decide whether to use tools.
+//
+// The property Type is required.
+type BetaToolChoiceAutoParam struct {
+	// Whether to disable parallel tool use.
+	//
+	// Defaults to `false`. If set to `true`, the model will output at most one tool
+	// use.
+	DisableParallelToolUse param.Opt[bool] `json:"disable_parallel_tool_use,omitzero"`
+	// This field can be elided, and will marshal its zero value as "auto".
+	Type constant.Auto `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolChoiceAutoParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolChoiceAutoParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolChoiceAutoParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func NewBetaToolChoiceNoneParam() BetaToolChoiceNoneParam {
+	return BetaToolChoiceNoneParam{
+		Type: "none",
+	}
+}
+
+// The model will not be allowed to use tools.
+//
+// This struct has a constant value, construct it with
+// [NewBetaToolChoiceNoneParam].
+type BetaToolChoiceNoneParam struct {
+	Type constant.None `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolChoiceNoneParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolChoiceNoneParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolChoiceNoneParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The model will use the specified tool with `tool_choice.name`.
+//
+// The properties Name, Type are required.
+type BetaToolChoiceToolParam struct {
+	// The name of the tool to use.
+	Name string `json:"name,required"`
+	// Whether to disable parallel tool use.
+	//
+	// Defaults to `false`. If set to `true`, the model will output exactly one tool
+	// use.
+	DisableParallelToolUse param.Opt[bool] `json:"disable_parallel_tool_use,omitzero"`
+	// This field can be elided, and will marshal its zero value as "tool".
+	Type constant.Tool `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolChoiceToolParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolChoiceToolParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolChoiceToolParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties DisplayHeightPx, DisplayWidthPx, Name, Type are required.
+type BetaToolComputerUse20241022Param struct {
+	// The height of the display in pixels.
+	DisplayHeightPx int64 `json:"display_height_px,required"`
+	// The width of the display in pixels.
+	DisplayWidthPx int64 `json:"display_width_px,required"`
+	// The X11 display number (e.g. 0, 1) for the display.
+	DisplayNumber param.Opt[int64] `json:"display_number,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as "computer".
+	Name constant.Computer `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "computer_20241022".
+	Type constant.Computer20241022 `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolComputerUse20241022Param) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolComputerUse20241022Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolComputerUse20241022Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties DisplayHeightPx, DisplayWidthPx, Name, Type are required.
+type BetaToolComputerUse20250124Param struct {
+	// The height of the display in pixels.
+	DisplayHeightPx int64 `json:"display_height_px,required"`
+	// The width of the display in pixels.
+	DisplayWidthPx int64 `json:"display_width_px,required"`
+	// The X11 display number (e.g. 0, 1) for the display.
+	DisplayNumber param.Opt[int64] `json:"display_number,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as "computer".
+	Name constant.Computer `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "computer_20250124".
+	Type constant.Computer20250124 `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolComputerUse20250124Param) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolComputerUse20250124Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolComputerUse20250124Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties ToolUseID, Type are required.
+type BetaToolResultBlockParam struct {
+	ToolUseID string          `json:"tool_use_id,required"`
+	IsError   param.Opt[bool] `json:"is_error,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam         `json:"cache_control,omitzero"`
+	Content      []BetaToolResultBlockParamContentUnion `json:"content,omitzero"`
+	// This field can be elided, and will marshal its zero value as "tool_result".
+	Type constant.ToolResult `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolResultBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolResultBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolResultBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaToolResultBlockParamContentUnion struct {
+	OfText  *BetaTextBlockParam  `json:",omitzero,inline"`
+	OfImage *BetaImageBlockParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaToolResultBlockParamContentUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfText, u.OfImage)
+}
+func (u *BetaToolResultBlockParamContentUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaToolResultBlockParamContentUnion) asAny() any {
+	if !param.IsOmitted(u.OfText) {
+		return u.OfText
+	} else if !param.IsOmitted(u.OfImage) {
+		return u.OfImage
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolResultBlockParamContentUnion) GetText() *string {
+	if vt := u.OfText; vt != nil {
+		return &vt.Text
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolResultBlockParamContentUnion) GetCitations() []BetaTextCitationParamUnion {
+	if vt := u.OfText; vt != nil {
+		return vt.Citations
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolResultBlockParamContentUnion) GetSource() *BetaImageBlockParamSourceUnion {
+	if vt := u.OfImage; vt != nil {
+		return &vt.Source
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolResultBlockParamContentUnion) GetType() *string {
+	if vt := u.OfText; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfImage; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's CacheControl property, if present.
+func (u BetaToolResultBlockParamContentUnion) GetCacheControl() *BetaCacheControlEphemeralParam {
+	if vt := u.OfText; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfImage; vt != nil {
+		return &vt.CacheControl
+	}
+	return nil
+}
+
+// The properties Name, Type are required.
+type BetaToolTextEditor20241022Param struct {
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as
+	// "str_replace_editor".
+	Name constant.StrReplaceEditor `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "text_editor_20241022".
+	Type constant.TextEditor20241022 `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolTextEditor20241022Param) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolTextEditor20241022Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolTextEditor20241022Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Name, Type are required.
+type BetaToolTextEditor20250124Param struct {
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as
+	// "str_replace_editor".
+	Name constant.StrReplaceEditor `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "text_editor_20250124".
+	Type constant.TextEditor20250124 `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolTextEditor20250124Param) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolTextEditor20250124Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolTextEditor20250124Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Name, Type are required.
+type BetaToolTextEditor20250429Param struct {
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as
+	// "str_replace_based_edit_tool".
+	Name constant.StrReplaceBasedEditTool `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "text_editor_20250429".
+	Type constant.TextEditor20250429 `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolTextEditor20250429Param) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolTextEditor20250429Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolTextEditor20250429Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func BetaToolUnionParamOfTool(inputSchema BetaToolInputSchemaParam, name string) BetaToolUnionParam {
+	var variant BetaToolParam
+	variant.InputSchema = inputSchema
+	variant.Name = name
+	return BetaToolUnionParam{OfTool: &variant}
+}
+
+func BetaToolUnionParamOfComputerUseTool20241022(displayHeightPx int64, displayWidthPx int64) BetaToolUnionParam {
+	var variant BetaToolComputerUse20241022Param
+	variant.DisplayHeightPx = displayHeightPx
+	variant.DisplayWidthPx = displayWidthPx
+	return BetaToolUnionParam{OfComputerUseTool20241022: &variant}
+}
+
+func BetaToolUnionParamOfComputerUseTool20250124(displayHeightPx int64, displayWidthPx int64) BetaToolUnionParam {
+	var variant BetaToolComputerUse20250124Param
+	variant.DisplayHeightPx = displayHeightPx
+	variant.DisplayWidthPx = displayWidthPx
+	return BetaToolUnionParam{OfComputerUseTool20250124: &variant}
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaToolUnionParam struct {
+	OfTool                      *BetaToolParam                      `json:",omitzero,inline"`
+	OfComputerUseTool20241022   *BetaToolComputerUse20241022Param   `json:",omitzero,inline"`
+	OfBashTool20241022          *BetaToolBash20241022Param          `json:",omitzero,inline"`
+	OfTextEditor20241022        *BetaToolTextEditor20241022Param    `json:",omitzero,inline"`
+	OfComputerUseTool20250124   *BetaToolComputerUse20250124Param   `json:",omitzero,inline"`
+	OfBashTool20250124          *BetaToolBash20250124Param          `json:",omitzero,inline"`
+	OfTextEditor20250124        *BetaToolTextEditor20250124Param    `json:",omitzero,inline"`
+	OfTextEditor20250429        *BetaToolTextEditor20250429Param    `json:",omitzero,inline"`
+	OfWebSearchTool20250305     *BetaWebSearchTool20250305Param     `json:",omitzero,inline"`
+	OfCodeExecutionTool20250522 *BetaCodeExecutionTool20250522Param `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaToolUnionParam) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfTool,
+		u.OfComputerUseTool20241022,
+		u.OfBashTool20241022,
+		u.OfTextEditor20241022,
+		u.OfComputerUseTool20250124,
+		u.OfBashTool20250124,
+		u.OfTextEditor20250124,
+		u.OfTextEditor20250429,
+		u.OfWebSearchTool20250305,
+		u.OfCodeExecutionTool20250522)
+}
+func (u *BetaToolUnionParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaToolUnionParam) asAny() any {
+	if !param.IsOmitted(u.OfTool) {
+		return u.OfTool
+	} else if !param.IsOmitted(u.OfComputerUseTool20241022) {
+		return u.OfComputerUseTool20241022
+	} else if !param.IsOmitted(u.OfBashTool20241022) {
+		return u.OfBashTool20241022
+	} else if !param.IsOmitted(u.OfTextEditor20241022) {
+		return u.OfTextEditor20241022
+	} else if !param.IsOmitted(u.OfComputerUseTool20250124) {
+		return u.OfComputerUseTool20250124
+	} else if !param.IsOmitted(u.OfBashTool20250124) {
+		return u.OfBashTool20250124
+	} else if !param.IsOmitted(u.OfTextEditor20250124) {
+		return u.OfTextEditor20250124
+	} else if !param.IsOmitted(u.OfTextEditor20250429) {
+		return u.OfTextEditor20250429
+	} else if !param.IsOmitted(u.OfWebSearchTool20250305) {
+		return u.OfWebSearchTool20250305
+	} else if !param.IsOmitted(u.OfCodeExecutionTool20250522) {
+		return u.OfCodeExecutionTool20250522
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolUnionParam) GetInputSchema() *BetaToolInputSchemaParam {
+	if vt := u.OfTool; vt != nil {
+		return &vt.InputSchema
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolUnionParam) GetDescription() *string {
+	if vt := u.OfTool; vt != nil && vt.Description.Valid() {
+		return &vt.Description.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolUnionParam) GetAllowedDomains() []string {
+	if vt := u.OfWebSearchTool20250305; vt != nil {
+		return vt.AllowedDomains
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolUnionParam) GetBlockedDomains() []string {
+	if vt := u.OfWebSearchTool20250305; vt != nil {
+		return vt.BlockedDomains
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolUnionParam) GetMaxUses() *int64 {
+	if vt := u.OfWebSearchTool20250305; vt != nil && vt.MaxUses.Valid() {
+		return &vt.MaxUses.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolUnionParam) GetUserLocation() *BetaWebSearchTool20250305UserLocationParam {
+	if vt := u.OfWebSearchTool20250305; vt != nil {
+		return &vt.UserLocation
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolUnionParam) GetName() *string {
+	if vt := u.OfTool; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfComputerUseTool20241022; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfBashTool20241022; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfTextEditor20241022; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfComputerUseTool20250124; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfBashTool20250124; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfTextEditor20250124; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfTextEditor20250429; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfWebSearchTool20250305; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfCodeExecutionTool20250522; vt != nil {
+		return (*string)(&vt.Name)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolUnionParam) GetType() *string {
+	if vt := u.OfTool; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfComputerUseTool20241022; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfBashTool20241022; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfTextEditor20241022; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfComputerUseTool20250124; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfBashTool20250124; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfTextEditor20250124; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfTextEditor20250429; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfWebSearchTool20250305; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfCodeExecutionTool20250522; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolUnionParam) GetDisplayHeightPx() *int64 {
+	if vt := u.OfComputerUseTool20241022; vt != nil {
+		return (*int64)(&vt.DisplayHeightPx)
+	} else if vt := u.OfComputerUseTool20250124; vt != nil {
+		return (*int64)(&vt.DisplayHeightPx)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolUnionParam) GetDisplayWidthPx() *int64 {
+	if vt := u.OfComputerUseTool20241022; vt != nil {
+		return (*int64)(&vt.DisplayWidthPx)
+	} else if vt := u.OfComputerUseTool20250124; vt != nil {
+		return (*int64)(&vt.DisplayWidthPx)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaToolUnionParam) GetDisplayNumber() *int64 {
+	if vt := u.OfComputerUseTool20241022; vt != nil && vt.DisplayNumber.Valid() {
+		return &vt.DisplayNumber.Value
+	} else if vt := u.OfComputerUseTool20250124; vt != nil && vt.DisplayNumber.Valid() {
+		return &vt.DisplayNumber.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's CacheControl property, if present.
+func (u BetaToolUnionParam) GetCacheControl() *BetaCacheControlEphemeralParam {
+	if vt := u.OfTool; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfComputerUseTool20241022; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfBashTool20241022; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfTextEditor20241022; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfComputerUseTool20250124; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfBashTool20250124; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfTextEditor20250124; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfTextEditor20250429; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfWebSearchTool20250305; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfCodeExecutionTool20250522; vt != nil {
+		return &vt.CacheControl
+	}
+	return nil
+}
+
+type BetaToolUseBlock struct {
+	ID    string           `json:"id,required"`
+	Input any              `json:"input,required"`
+	Name  string           `json:"name,required"`
+	Type  constant.ToolUse `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID          respjson.Field
+		Input       respjson.Field
+		Name        respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaToolUseBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaToolUseBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (r BetaToolUseBlock) ToParam() BetaToolUseBlockParam {
+	var p BetaToolUseBlockParam
+	p.Type = r.Type
+	p.ID = r.ID
+	p.Input = r.Input
+	p.Name = r.Name
+	return p
+}
+
+// The properties ID, Input, Name, Type are required.
+type BetaToolUseBlockParam struct {
+	ID    string `json:"id,required"`
+	Input any    `json:"input,omitzero,required"`
+	Name  string `json:"name,required"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as "tool_use".
+	Type constant.ToolUse `json:"type,required"`
+	paramObj
+}
+
+func (r BetaToolUseBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaToolUseBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaToolUseBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Type, URL are required.
+type BetaURLImageSourceParam struct {
+	URL string `json:"url,required"`
+	// This field can be elided, and will marshal its zero value as "url".
+	Type constant.URL `json:"type,required"`
+	paramObj
+}
+
+func (r BetaURLImageSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaURLImageSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaURLImageSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Type, URL are required.
+type BetaURLPDFSourceParam struct {
+	URL string `json:"url,required"`
+	// This field can be elided, and will marshal its zero value as "url".
+	Type constant.URL `json:"type,required"`
+	paramObj
+}
+
+func (r BetaURLPDFSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaURLPDFSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaURLPDFSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaUsage struct {
+	// Breakdown of cached tokens by TTL.
+	CacheCreation BetaCacheCreation `json:"cache_creation,required"`
+	// The number of input tokens used to create the cache entry.
+	CacheCreationInputTokens int64 `json:"cache_creation_input_tokens,required"`
+	// The number of input tokens read from the cache.
+	CacheReadInputTokens int64 `json:"cache_read_input_tokens,required"`
+	// The number of input tokens which were used.
+	InputTokens int64 `json:"input_tokens,required"`
+	// The number of output tokens which were used.
+	OutputTokens int64 `json:"output_tokens,required"`
+	// The number of server tool requests.
+	ServerToolUse BetaServerToolUsage `json:"server_tool_use,required"`
+	// If the request used the priority, standard, or batch tier.
+	//
+	// Any of "standard", "priority", "batch".
+	ServiceTier BetaUsageServiceTier `json:"service_tier,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CacheCreation            respjson.Field
+		CacheCreationInputTokens respjson.Field
+		CacheReadInputTokens     respjson.Field
+		InputTokens              respjson.Field
+		OutputTokens             respjson.Field
+		ServerToolUse            respjson.Field
+		ServiceTier              respjson.Field
+		ExtraFields              map[string]respjson.Field
+		raw                      string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaUsage) RawJSON() string { return r.JSON.raw }
+func (r *BetaUsage) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// If the request used the priority, standard, or batch tier.
+type BetaUsageServiceTier string
+
+const (
+	BetaUsageServiceTierStandard BetaUsageServiceTier = "standard"
+	BetaUsageServiceTierPriority BetaUsageServiceTier = "priority"
+	BetaUsageServiceTierBatch    BetaUsageServiceTier = "batch"
+)
+
+type BetaWebSearchResultBlock struct {
+	EncryptedContent string                   `json:"encrypted_content,required"`
+	PageAge          string                   `json:"page_age,required"`
+	Title            string                   `json:"title,required"`
+	Type             constant.WebSearchResult `json:"type,required"`
+	URL              string                   `json:"url,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		EncryptedContent respjson.Field
+		PageAge          respjson.Field
+		Title            respjson.Field
+		Type             respjson.Field
+		URL              respjson.Field
+		ExtraFields      map[string]respjson.Field
+		raw              string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaWebSearchResultBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaWebSearchResultBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties EncryptedContent, Title, Type, URL are required.
+type BetaWebSearchResultBlockParam struct {
+	EncryptedContent string            `json:"encrypted_content,required"`
+	Title            string            `json:"title,required"`
+	URL              string            `json:"url,required"`
+	PageAge          param.Opt[string] `json:"page_age,omitzero"`
+	// This field can be elided, and will marshal its zero value as
+	// "web_search_result".
+	Type constant.WebSearchResult `json:"type,required"`
+	paramObj
+}
+
+func (r BetaWebSearchResultBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaWebSearchResultBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaWebSearchResultBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Name, Type are required.
+type BetaWebSearchTool20250305Param struct {
+	// Maximum number of times the tool can be used in the API request.
+	MaxUses param.Opt[int64] `json:"max_uses,omitzero"`
+	// If provided, only these domains will be included in results. Cannot be used
+	// alongside `blocked_domains`.
+	AllowedDomains []string `json:"allowed_domains,omitzero"`
+	// If provided, these domains will never appear in results. Cannot be used
+	// alongside `allowed_domains`.
+	BlockedDomains []string `json:"blocked_domains,omitzero"`
+	// Parameters for the user's location. Used to provide more relevant search
+	// results.
+	UserLocation BetaWebSearchTool20250305UserLocationParam `json:"user_location,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as "web_search".
+	Name constant.WebSearch `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "web_search_20250305".
+	Type constant.WebSearch20250305 `json:"type,required"`
+	paramObj
+}
+
+func (r BetaWebSearchTool20250305Param) MarshalJSON() (data []byte, err error) {
+	type shadow BetaWebSearchTool20250305Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaWebSearchTool20250305Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Parameters for the user's location. Used to provide more relevant search
+// results.
+//
+// The property Type is required.
+type BetaWebSearchTool20250305UserLocationParam struct {
+	// The city of the user.
+	City param.Opt[string] `json:"city,omitzero"`
+	// The two-letter
+	// [ISO country code](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) of the
+	// user.
+	Country param.Opt[string] `json:"country,omitzero"`
+	// The region of the user.
+	Region param.Opt[string] `json:"region,omitzero"`
+	// The [IANA timezone](https://nodatime.org/TimeZones) of the user.
+	Timezone param.Opt[string] `json:"timezone,omitzero"`
+	// This field can be elided, and will marshal its zero value as "approximate".
+	Type constant.Approximate `json:"type,required"`
+	paramObj
+}
+
+func (r BetaWebSearchTool20250305UserLocationParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaWebSearchTool20250305UserLocationParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaWebSearchTool20250305UserLocationParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties ErrorCode, Type are required.
+type BetaWebSearchToolRequestErrorParam struct {
+	// Any of "invalid_tool_input", "unavailable", "max_uses_exceeded",
+	// "too_many_requests", "query_too_long".
+	ErrorCode BetaWebSearchToolResultErrorCode `json:"error_code,omitzero,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "web_search_tool_result_error".
+	Type constant.WebSearchToolResultError `json:"type,required"`
+	paramObj
+}
+
+func (r BetaWebSearchToolRequestErrorParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaWebSearchToolRequestErrorParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaWebSearchToolRequestErrorParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaWebSearchToolResultBlock struct {
+	Content   BetaWebSearchToolResultBlockContentUnion `json:"content,required"`
+	ToolUseID string                                   `json:"tool_use_id,required"`
+	Type      constant.WebSearchToolResult             `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Content     respjson.Field
+		ToolUseID   respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaWebSearchToolResultBlock) RawJSON() string { return r.JSON.raw }
+func (r *BetaWebSearchToolResultBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaWebSearchToolResultBlockContentUnion contains all possible properties and
+// values from [BetaWebSearchToolResultError], [[]BetaWebSearchResultBlock].
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+//
+// If the underlying value is not a json object, one of the following properties
+// will be valid: OfBetaWebSearchResultBlockArray
+type BetaWebSearchToolResultBlockContentUnion struct {
+	// This field will be present if the value is a [[]BetaWebSearchResultBlock]
+	// instead of an object.
+	OfBetaWebSearchResultBlockArray []BetaWebSearchResultBlock `json:",inline"`
+	// This field is from variant [BetaWebSearchToolResultError].
+	ErrorCode BetaWebSearchToolResultErrorCode `json:"error_code"`
+	// This field is from variant [BetaWebSearchToolResultError].
+	Type constant.WebSearchToolResultError `json:"type"`
+	JSON struct {
+		OfBetaWebSearchResultBlockArray respjson.Field
+		ErrorCode                       respjson.Field
+		Type                            respjson.Field
+		raw                             string
+	} `json:"-"`
+}
+
+func (u BetaWebSearchToolResultBlockContentUnion) AsResponseWebSearchToolResultError() (v BetaWebSearchToolResultError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaWebSearchToolResultBlockContentUnion) AsBetaWebSearchResultBlockArray() (v []BetaWebSearchResultBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u BetaWebSearchToolResultBlockContentUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *BetaWebSearchToolResultBlockContentUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Content, ToolUseID, Type are required.
+type BetaWebSearchToolResultBlockParam struct {
+	Content   BetaWebSearchToolResultBlockParamContentUnion `json:"content,omitzero,required"`
+	ToolUseID string                                        `json:"tool_use_id,required"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl BetaCacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as
+	// "web_search_tool_result".
+	Type constant.WebSearchToolResult `json:"type,required"`
+	paramObj
+}
+
+func (r BetaWebSearchToolResultBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow BetaWebSearchToolResultBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaWebSearchToolResultBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func BetaNewWebSearchToolRequestError(errorCode BetaWebSearchToolResultErrorCode) BetaWebSearchToolResultBlockParamContentUnion {
+	var variant BetaWebSearchToolRequestErrorParam
+	variant.ErrorCode = errorCode
+	return BetaWebSearchToolResultBlockParamContentUnion{OfError: &variant}
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaWebSearchToolResultBlockParamContentUnion struct {
+	OfResultBlock []BetaWebSearchResultBlockParam     `json:",omitzero,inline"`
+	OfError       *BetaWebSearchToolRequestErrorParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaWebSearchToolResultBlockParamContentUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfResultBlock, u.OfError)
+}
+func (u *BetaWebSearchToolResultBlockParamContentUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaWebSearchToolResultBlockParamContentUnion) asAny() any {
+	if !param.IsOmitted(u.OfResultBlock) {
+		return &u.OfResultBlock
+	} else if !param.IsOmitted(u.OfError) {
+		return u.OfError
+	}
+	return nil
+}
+
+type BetaWebSearchToolResultError struct {
+	// Any of "invalid_tool_input", "unavailable", "max_uses_exceeded",
+	// "too_many_requests", "query_too_long".
+	ErrorCode BetaWebSearchToolResultErrorCode  `json:"error_code,required"`
+	Type      constant.WebSearchToolResultError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ErrorCode   respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaWebSearchToolResultError) RawJSON() string { return r.JSON.raw }
+func (r *BetaWebSearchToolResultError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaWebSearchToolResultErrorCode string
+
+const (
+	BetaWebSearchToolResultErrorCodeInvalidToolInput BetaWebSearchToolResultErrorCode = "invalid_tool_input"
+	BetaWebSearchToolResultErrorCodeUnavailable      BetaWebSearchToolResultErrorCode = "unavailable"
+	BetaWebSearchToolResultErrorCodeMaxUsesExceeded  BetaWebSearchToolResultErrorCode = "max_uses_exceeded"
+	BetaWebSearchToolResultErrorCodeTooManyRequests  BetaWebSearchToolResultErrorCode = "too_many_requests"
+	BetaWebSearchToolResultErrorCodeQueryTooLong     BetaWebSearchToolResultErrorCode = "query_too_long"
+)
+
+type BetaMessageNewParams struct {
+	// The maximum number of tokens to generate before stopping.
+	//
+	// Note that our models may stop _before_ reaching this maximum. This parameter
+	// only specifies the absolute maximum number of tokens to generate.
+	//
+	// Different models have different maximum values for this parameter. See
+	// [models](https://docs.anthropic.com/en/docs/models-overview) for details.
+	MaxTokens int64 `json:"max_tokens,required"`
+	// Input messages.
+	//
+	// Our models are trained to operate on alternating `user` and `assistant`
+	// conversational turns. When creating a new `Message`, you specify the prior
+	// conversational turns with the `messages` parameter, and the model then generates
+	// the next `Message` in the conversation. Consecutive `user` or `assistant` turns
+	// in your request will be combined into a single turn.
+	//
+	// Each input message must be an object with a `role` and `content`. You can
+	// specify a single `user`-role message, or you can include multiple `user` and
+	// `assistant` messages.
+	//
+	// If the final message uses the `assistant` role, the response content will
+	// continue immediately from the content in that message. This can be used to
+	// constrain part of the model's response.
+	//
+	// Example with a single `user` message:
+	//
+	// ```json
+	// [{ "role": "user", "content": "Hello, Claude" }]
+	// ```
+	//
+	// Example with multiple conversational turns:
+	//
+	// ```json
+	// [
+	//
+	//	{ "role": "user", "content": "Hello there." },
+	//	{ "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
+	//	{ "role": "user", "content": "Can you explain LLMs in plain English?" }
+	//
+	// ]
+	// ```
+	//
+	// Example with a partially-filled response from Claude:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
+	//	},
+	//	{ "role": "assistant", "content": "The best answer is (" }
+	//
+	// ]
+	// ```
+	//
+	// Each input message `content` may be either a single `string` or an array of
+	// content blocks, where each block has a specific `type`. Using a `string` for
+	// `content` is shorthand for an array of one content block of type `"text"`. The
+	// following input messages are equivalent:
+	//
+	// ```json
+	// { "role": "user", "content": "Hello, Claude" }
+	// ```
+	//
+	// ```json
+	// { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
+	// ```
+	//
+	// Starting with Claude 3 models, you can also send image content blocks:
+	//
+	// ```json
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": [
+	//	    {
+	//	      "type": "image",
+	//	      "source": {
+	//	        "type": "base64",
+	//	        "media_type": "image/jpeg",
+	//	        "data": "/9j/4AAQSkZJRg..."
+	//	      }
+	//	    },
+	//	    { "type": "text", "text": "What is in this image?" }
+	//	  ]
+	//	}
+	//
+	// ```
+	//
+	// We currently support the `base64` source type for images, and the `image/jpeg`,
+	// `image/png`, `image/gif`, and `image/webp` media types.
+	//
+	// See [examples](https://docs.anthropic.com/en/api/messages-examples#vision) for
+	// more input examples.
+	//
+	// Note that if you want to include a
+	// [system prompt](https://docs.anthropic.com/en/docs/system-prompts), you can use
+	// the top-level `system` parameter — there is no `"system"` role for input
+	// messages in the Messages API.
+	//
+	// There is a limit of 100000 messages in a single request.
+	Messages []BetaMessageParam `json:"messages,omitzero,required"`
+	// The model that will complete your prompt. See
+	// [models](https://docs.anthropic.com/en/docs/models-overview) for additional
+	// details and options.
+	Model Model `json:"model,omitzero,required"`
+	// Container identifier for reuse across requests.
+	Container param.Opt[string] `json:"container,omitzero"`
+	// Amount of randomness injected into the response.
+	//
+	// Defaults to `1.0`. Ranges from `0.0` to `1.0`. Use `temperature` closer to `0.0`
+	// for analytical / multiple choice, and closer to `1.0` for creative and
+	// generative tasks.
+	//
+	// Note that even with `temperature` of `0.0`, the results will not be fully
+	// deterministic.
+	Temperature param.Opt[float64] `json:"temperature,omitzero"`
+	// Only sample from the top K options for each subsequent token.
+	//
+	// Used to remove "long tail" low probability responses.
+	// [Learn more technical details here](https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277).
+	//
+	// Recommended for advanced use cases only. You usually only need to use
+	// `temperature`.
+	TopK param.Opt[int64] `json:"top_k,omitzero"`
+	// Use nucleus sampling.
+	//
+	// In nucleus sampling, we compute the cumulative distribution over all the options
+	// for each subsequent token in decreasing probability order and cut it off once it
+	// reaches a particular probability specified by `top_p`. You should either alter
+	// `temperature` or `top_p`, but not both.
+	//
+	// Recommended for advanced use cases only. You usually only need to use
+	// `temperature`.
+	TopP param.Opt[float64] `json:"top_p,omitzero"`
+	// MCP servers to be utilized in this request.
+	MCPServers []BetaRequestMCPServerURLDefinitionParam `json:"mcp_servers,omitzero"`
+	// An object describing metadata about the request.
+	Metadata BetaMetadataParam `json:"metadata,omitzero"`
+	// Determines whether to use priority capacity (if available) or standard capacity
+	// for this request.
+	//
+	// Anthropic offers different levels of service for your API requests. See
+	// [service-tiers](https://docs.anthropic.com/en/api/service-tiers) for details.
+	//
+	// Any of "auto", "standard_only".
+	ServiceTier BetaMessageNewParamsServiceTier `json:"service_tier,omitzero"`
+	// Custom text sequences that will cause the model to stop generating.
+	//
+	// Our models will normally stop when they have naturally completed their turn,
+	// which will result in a response `stop_reason` of `"end_turn"`.
+	//
+	// If you want the model to stop generating when it encounters custom strings of
+	// text, you can use the `stop_sequences` parameter. If the model encounters one of
+	// the custom sequences, the response `stop_reason` value will be `"stop_sequence"`
+	// and the response `stop_sequence` value will contain the matched stop sequence.
+	StopSequences []string `json:"stop_sequences,omitzero"`
+	// System prompt.
+	//
+	// A system prompt is a way of providing context and instructions to Claude, such
+	// as specifying a particular goal or role. See our
+	// [guide to system prompts](https://docs.anthropic.com/en/docs/system-prompts).
+	System []BetaTextBlockParam `json:"system,omitzero"`
+	// Configuration for enabling Claude's extended thinking.
+	//
+	// When enabled, responses include `thinking` content blocks showing Claude's
+	// thinking process before the final answer. Requires a minimum budget of 1,024
+	// tokens and counts towards your `max_tokens` limit.
+	//
+	// See
+	// [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking)
+	// for details.
+	Thinking BetaThinkingConfigParamUnion `json:"thinking,omitzero"`
+	// How the model should use the provided tools. The model can use a specific tool,
+	// any available tool, decide by itself, or not use tools at all.
+	ToolChoice BetaToolChoiceUnionParam `json:"tool_choice,omitzero"`
+	// Definitions of tools that the model may use.
+	//
+	// If you include `tools` in your API request, the model may return `tool_use`
+	// content blocks that represent the model's use of those tools. You can then run
+	// those tools using the tool input generated by the model and then optionally
+	// return results back to the model using `tool_result` content blocks.
+	//
+	// Each tool definition includes:
+	//
+	//   - `name`: Name of the tool.
+	//   - `description`: Optional, but strongly-recommended description of the tool.
+	//   - `input_schema`: [JSON schema](https://json-schema.org/draft/2020-12) for the
+	//     tool `input` shape that the model will produce in `tool_use` output content
+	//     blocks.
+	//
+	// For example, if you defined `tools` as:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "name": "get_stock_price",
+	//	  "description": "Get the current stock price for a given ticker symbol.",
+	//	  "input_schema": {
+	//	    "type": "object",
+	//	    "properties": {
+	//	      "ticker": {
+	//	        "type": "string",
+	//	        "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
+	//	      }
+	//	    },
+	//	    "required": ["ticker"]
+	//	  }
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// And then asked the model "What's the S&P 500 at today?", the model might produce
+	// `tool_use` content blocks in the response like this:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "type": "tool_use",
+	//	  "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
+	//	  "name": "get_stock_price",
+	//	  "input": { "ticker": "^GSPC" }
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// You might then run your `get_stock_price` tool with `{"ticker": "^GSPC"}` as an
+	// input, and return the following back to the model in a subsequent `user`
+	// message:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "type": "tool_result",
+	//	  "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
+	//	  "content": "259.75 USD"
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// Tools can be used for workflows that include running client-side tools and
+	// functions, or more generally whenever you want the model to produce a particular
+	// JSON structure of output.
+	//
+	// See our [guide](https://docs.anthropic.com/en/docs/tool-use) for more details.
+	Tools []BetaToolUnionParam `json:"tools,omitzero"`
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+func (r BetaMessageNewParams) MarshalJSON() (data []byte, err error) {
+	type shadow BetaMessageNewParams
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaMessageNewParams) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Determines whether to use priority capacity (if available) or standard capacity
+// for this request.
+//
+// Anthropic offers different levels of service for your API requests. See
+// [service-tiers](https://docs.anthropic.com/en/api/service-tiers) for details.
+type BetaMessageNewParamsServiceTier string
+
+const (
+	BetaMessageNewParamsServiceTierAuto         BetaMessageNewParamsServiceTier = "auto"
+	BetaMessageNewParamsServiceTierStandardOnly BetaMessageNewParamsServiceTier = "standard_only"
+)
+
+type BetaMessageCountTokensParams struct {
+	// Input messages.
+	//
+	// Our models are trained to operate on alternating `user` and `assistant`
+	// conversational turns. When creating a new `Message`, you specify the prior
+	// conversational turns with the `messages` parameter, and the model then generates
+	// the next `Message` in the conversation. Consecutive `user` or `assistant` turns
+	// in your request will be combined into a single turn.
+	//
+	// Each input message must be an object with a `role` and `content`. You can
+	// specify a single `user`-role message, or you can include multiple `user` and
+	// `assistant` messages.
+	//
+	// If the final message uses the `assistant` role, the response content will
+	// continue immediately from the content in that message. This can be used to
+	// constrain part of the model's response.
+	//
+	// Example with a single `user` message:
+	//
+	// ```json
+	// [{ "role": "user", "content": "Hello, Claude" }]
+	// ```
+	//
+	// Example with multiple conversational turns:
+	//
+	// ```json
+	// [
+	//
+	//	{ "role": "user", "content": "Hello there." },
+	//	{ "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
+	//	{ "role": "user", "content": "Can you explain LLMs in plain English?" }
+	//
+	// ]
+	// ```
+	//
+	// Example with a partially-filled response from Claude:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
+	//	},
+	//	{ "role": "assistant", "content": "The best answer is (" }
+	//
+	// ]
+	// ```
+	//
+	// Each input message `content` may be either a single `string` or an array of
+	// content blocks, where each block has a specific `type`. Using a `string` for
+	// `content` is shorthand for an array of one content block of type `"text"`. The
+	// following input messages are equivalent:
+	//
+	// ```json
+	// { "role": "user", "content": "Hello, Claude" }
+	// ```
+	//
+	// ```json
+	// { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
+	// ```
+	//
+	// Starting with Claude 3 models, you can also send image content blocks:
+	//
+	// ```json
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": [
+	//	    {
+	//	      "type": "image",
+	//	      "source": {
+	//	        "type": "base64",
+	//	        "media_type": "image/jpeg",
+	//	        "data": "/9j/4AAQSkZJRg..."
+	//	      }
+	//	    },
+	//	    { "type": "text", "text": "What is in this image?" }
+	//	  ]
+	//	}
+	//
+	// ```
+	//
+	// We currently support the `base64` source type for images, and the `image/jpeg`,
+	// `image/png`, `image/gif`, and `image/webp` media types.
+	//
+	// See [examples](https://docs.anthropic.com/en/api/messages-examples#vision) for
+	// more input examples.
+	//
+	// Note that if you want to include a
+	// [system prompt](https://docs.anthropic.com/en/docs/system-prompts), you can use
+	// the top-level `system` parameter — there is no `"system"` role for input
+	// messages in the Messages API.
+	//
+	// There is a limit of 100000 messages in a single request.
+	Messages []BetaMessageParam `json:"messages,omitzero,required"`
+	// The model that will complete your prompt. See
+	// [models](https://docs.anthropic.com/en/docs/models-overview) for additional
+	// details and options.
+	Model Model `json:"model,omitzero,required"`
+	// MCP servers to be utilized in this request.
+	MCPServers []BetaRequestMCPServerURLDefinitionParam `json:"mcp_servers,omitzero"`
+	// System prompt.
+	//
+	// A system prompt is a way of providing context and instructions to Claude, such
+	// as specifying a particular goal or role. See our
+	// [guide to system prompts](https://docs.anthropic.com/en/docs/system-prompts).
+	System BetaMessageCountTokensParamsSystemUnion `json:"system,omitzero"`
+	// Configuration for enabling Claude's extended thinking.
+	//
+	// When enabled, responses include `thinking` content blocks showing Claude's
+	// thinking process before the final answer. Requires a minimum budget of 1,024
+	// tokens and counts towards your `max_tokens` limit.
+	//
+	// See
+	// [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking)
+	// for details.
+	Thinking BetaThinkingConfigParamUnion `json:"thinking,omitzero"`
+	// How the model should use the provided tools. The model can use a specific tool,
+	// any available tool, decide by itself, or not use tools at all.
+	ToolChoice BetaToolChoiceUnionParam `json:"tool_choice,omitzero"`
+	// Definitions of tools that the model may use.
+	//
+	// If you include `tools` in your API request, the model may return `tool_use`
+	// content blocks that represent the model's use of those tools. You can then run
+	// those tools using the tool input generated by the model and then optionally
+	// return results back to the model using `tool_result` content blocks.
+	//
+	// Each tool definition includes:
+	//
+	//   - `name`: Name of the tool.
+	//   - `description`: Optional, but strongly-recommended description of the tool.
+	//   - `input_schema`: [JSON schema](https://json-schema.org/draft/2020-12) for the
+	//     tool `input` shape that the model will produce in `tool_use` output content
+	//     blocks.
+	//
+	// For example, if you defined `tools` as:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "name": "get_stock_price",
+	//	  "description": "Get the current stock price for a given ticker symbol.",
+	//	  "input_schema": {
+	//	    "type": "object",
+	//	    "properties": {
+	//	      "ticker": {
+	//	        "type": "string",
+	//	        "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
+	//	      }
+	//	    },
+	//	    "required": ["ticker"]
+	//	  }
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// And then asked the model "What's the S&P 500 at today?", the model might produce
+	// `tool_use` content blocks in the response like this:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "type": "tool_use",
+	//	  "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
+	//	  "name": "get_stock_price",
+	//	  "input": { "ticker": "^GSPC" }
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// You might then run your `get_stock_price` tool with `{"ticker": "^GSPC"}` as an
+	// input, and return the following back to the model in a subsequent `user`
+	// message:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "type": "tool_result",
+	//	  "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
+	//	  "content": "259.75 USD"
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// Tools can be used for workflows that include running client-side tools and
+	// functions, or more generally whenever you want the model to produce a particular
+	// JSON structure of output.
+	//
+	// See our [guide](https://docs.anthropic.com/en/docs/tool-use) for more details.
+	Tools []BetaMessageCountTokensParamsToolUnion `json:"tools,omitzero"`
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+func (r BetaMessageCountTokensParams) MarshalJSON() (data []byte, err error) {
+	type shadow BetaMessageCountTokensParams
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaMessageCountTokensParams) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaMessageCountTokensParamsSystemUnion struct {
+	OfString             param.Opt[string]    `json:",omitzero,inline"`
+	OfBetaTextBlockArray []BetaTextBlockParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaMessageCountTokensParamsSystemUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfString, u.OfBetaTextBlockArray)
+}
+func (u *BetaMessageCountTokensParamsSystemUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaMessageCountTokensParamsSystemUnion) asAny() any {
+	if !param.IsOmitted(u.OfString) {
+		return &u.OfString.Value
+	} else if !param.IsOmitted(u.OfBetaTextBlockArray) {
+		return &u.OfBetaTextBlockArray
+	}
+	return nil
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type BetaMessageCountTokensParamsToolUnion struct {
+	OfTool                      *BetaToolParam                      `json:",omitzero,inline"`
+	OfComputerUseTool20241022   *BetaToolComputerUse20241022Param   `json:",omitzero,inline"`
+	OfBashTool20241022          *BetaToolBash20241022Param          `json:",omitzero,inline"`
+	OfTextEditor20241022        *BetaToolTextEditor20241022Param    `json:",omitzero,inline"`
+	OfComputerUseTool20250124   *BetaToolComputerUse20250124Param   `json:",omitzero,inline"`
+	OfBashTool20250124          *BetaToolBash20250124Param          `json:",omitzero,inline"`
+	OfTextEditor20250124        *BetaToolTextEditor20250124Param    `json:",omitzero,inline"`
+	OfTextEditor20250429        *BetaToolTextEditor20250429Param    `json:",omitzero,inline"`
+	OfWebSearchTool20250305     *BetaWebSearchTool20250305Param     `json:",omitzero,inline"`
+	OfCodeExecutionTool20250522 *BetaCodeExecutionTool20250522Param `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u BetaMessageCountTokensParamsToolUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfTool,
+		u.OfComputerUseTool20241022,
+		u.OfBashTool20241022,
+		u.OfTextEditor20241022,
+		u.OfComputerUseTool20250124,
+		u.OfBashTool20250124,
+		u.OfTextEditor20250124,
+		u.OfTextEditor20250429,
+		u.OfWebSearchTool20250305,
+		u.OfCodeExecutionTool20250522)
+}
+func (u *BetaMessageCountTokensParamsToolUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *BetaMessageCountTokensParamsToolUnion) asAny() any {
+	if !param.IsOmitted(u.OfTool) {
+		return u.OfTool
+	} else if !param.IsOmitted(u.OfComputerUseTool20241022) {
+		return u.OfComputerUseTool20241022
+	} else if !param.IsOmitted(u.OfBashTool20241022) {
+		return u.OfBashTool20241022
+	} else if !param.IsOmitted(u.OfTextEditor20241022) {
+		return u.OfTextEditor20241022
+	} else if !param.IsOmitted(u.OfComputerUseTool20250124) {
+		return u.OfComputerUseTool20250124
+	} else if !param.IsOmitted(u.OfBashTool20250124) {
+		return u.OfBashTool20250124
+	} else if !param.IsOmitted(u.OfTextEditor20250124) {
+		return u.OfTextEditor20250124
+	} else if !param.IsOmitted(u.OfTextEditor20250429) {
+		return u.OfTextEditor20250429
+	} else if !param.IsOmitted(u.OfWebSearchTool20250305) {
+		return u.OfWebSearchTool20250305
+	} else if !param.IsOmitted(u.OfCodeExecutionTool20250522) {
+		return u.OfCodeExecutionTool20250522
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaMessageCountTokensParamsToolUnion) GetInputSchema() *BetaToolInputSchemaParam {
+	if vt := u.OfTool; vt != nil {
+		return &vt.InputSchema
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaMessageCountTokensParamsToolUnion) GetDescription() *string {
+	if vt := u.OfTool; vt != nil && vt.Description.Valid() {
+		return &vt.Description.Value
+	}
+	return nil
+}
+
+// Returns the underlying variant's property, if present.
+func (u BetaMessageCountTokensParamsToolUnion) GetAllowedDomains() []string {
+	if vt := u.OfWebSearchTool20250305; vt != nil {
+		return vt.AllowedDomains
+	}
+	return nil
+}
+
+// Returns the underlying variant's property, if present.
+func (u BetaMessageCountTokensParamsToolUnion) GetBlockedDomains() []string {
+	if vt := u.OfWebSearchTool20250305; vt != nil {
+		return vt.BlockedDomains
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaMessageCountTokensParamsToolUnion) GetMaxUses() *int64 {
+	if vt := u.OfWebSearchTool20250305; vt != nil && vt.MaxUses.Valid() {
+		return &vt.MaxUses.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaMessageCountTokensParamsToolUnion) GetUserLocation() *BetaWebSearchTool20250305UserLocationParam {
+	if vt := u.OfWebSearchTool20250305; vt != nil {
+		return &vt.UserLocation
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaMessageCountTokensParamsToolUnion) GetName() *string {
+	if vt := u.OfTool; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfComputerUseTool20241022; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfBashTool20241022; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfTextEditor20241022; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfComputerUseTool20250124; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfBashTool20250124; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfTextEditor20250124; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfTextEditor20250429; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfWebSearchTool20250305; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfCodeExecutionTool20250522; vt != nil {
+		return (*string)(&vt.Name)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaMessageCountTokensParamsToolUnion) GetType() *string {
+	if vt := u.OfTool; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfComputerUseTool20241022; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfBashTool20241022; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfTextEditor20241022; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfComputerUseTool20250124; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfBashTool20250124; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfTextEditor20250124; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfTextEditor20250429; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfWebSearchTool20250305; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfCodeExecutionTool20250522; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaMessageCountTokensParamsToolUnion) GetDisplayHeightPx() *int64 {
+	if vt := u.OfComputerUseTool20241022; vt != nil {
+		return (*int64)(&vt.DisplayHeightPx)
+	} else if vt := u.OfComputerUseTool20250124; vt != nil {
+		return (*int64)(&vt.DisplayHeightPx)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaMessageCountTokensParamsToolUnion) GetDisplayWidthPx() *int64 {
+	if vt := u.OfComputerUseTool20241022; vt != nil {
+		return (*int64)(&vt.DisplayWidthPx)
+	} else if vt := u.OfComputerUseTool20250124; vt != nil {
+		return (*int64)(&vt.DisplayWidthPx)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u BetaMessageCountTokensParamsToolUnion) GetDisplayNumber() *int64 {
+	if vt := u.OfComputerUseTool20241022; vt != nil && vt.DisplayNumber.Valid() {
+		return &vt.DisplayNumber.Value
+	} else if vt := u.OfComputerUseTool20250124; vt != nil && vt.DisplayNumber.Valid() {
+		return &vt.DisplayNumber.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's CacheControl property, if present.
+func (u BetaMessageCountTokensParamsToolUnion) GetCacheControl() *BetaCacheControlEphemeralParam {
+	if vt := u.OfTool; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfComputerUseTool20241022; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfBashTool20241022; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfTextEditor20241022; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfComputerUseTool20250124; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfBashTool20250124; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfTextEditor20250124; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfTextEditor20250429; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfWebSearchTool20250305; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfCodeExecutionTool20250522; vt != nil {
+		return &vt.CacheControl
+	}
+	return nil
+}

vendor/github.com/anthropics/anthropic-sdk-go/betamessagebatch.go

@@ -0,0 +1,879 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package anthropic
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+	"time"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/apijson"
+	"github.com/anthropics/anthropic-sdk-go/internal/apiquery"
+	"github.com/anthropics/anthropic-sdk-go/internal/requestconfig"
+	"github.com/anthropics/anthropic-sdk-go/option"
+	"github.com/anthropics/anthropic-sdk-go/packages/jsonl"
+	"github.com/anthropics/anthropic-sdk-go/packages/pagination"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+	"github.com/anthropics/anthropic-sdk-go/shared/constant"
+)
+
+// BetaMessageBatchService contains methods and other services that help with
+// interacting with the anthropic API.
+//
+// Note, unlike clients, this service does not read variables from the environment
+// automatically. You should not instantiate this service directly; instead, use
+// the [NewBetaMessageBatchService] method.
+type BetaMessageBatchService struct {
+	Options []option.RequestOption
+}
+
+// NewBetaMessageBatchService generates a new service that applies the given
+// options to each request. These options are applied after the parent client's
+// options (if there is one), and before any request-specific options.
+func NewBetaMessageBatchService(opts ...option.RequestOption) (r BetaMessageBatchService) {
+	r = BetaMessageBatchService{}
+	r.Options = opts
+	return
+}
+
+// Send a batch of Message creation requests.
+//
+// The Message Batches API can be used to process multiple Messages API requests at
+// once. Once a Message Batch is created, it begins processing immediately. Batches
+// can take up to 24 hours to complete.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *BetaMessageBatchService) New(ctx context.Context, params BetaMessageBatchNewParams, opts ...option.RequestOption) (res *BetaMessageBatch, err error) {
+	for _, v := range params.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithHeader("anthropic-beta", "message-batches-2024-09-24")}, opts...)
+	path := "v1/messages/batches?beta=true"
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, params, &res, opts...)
+	return
+}
+
+// This endpoint is idempotent and can be used to poll for Message Batch
+// completion. To access the results of a Message Batch, make a request to the
+// `results_url` field in the response.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *BetaMessageBatchService) Get(ctx context.Context, messageBatchID string, query BetaMessageBatchGetParams, opts ...option.RequestOption) (res *BetaMessageBatch, err error) {
+	for _, v := range query.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithHeader("anthropic-beta", "message-batches-2024-09-24")}, opts...)
+	if messageBatchID == "" {
+		err = errors.New("missing required message_batch_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/messages/batches/%s?beta=true", messageBatchID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodGet, path, nil, &res, opts...)
+	return
+}
+
+// List all Message Batches within a Workspace. Most recently created batches are
+// returned first.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *BetaMessageBatchService) List(ctx context.Context, params BetaMessageBatchListParams, opts ...option.RequestOption) (res *pagination.Page[BetaMessageBatch], err error) {
+	var raw *http.Response
+	for _, v := range params.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithHeader("anthropic-beta", "message-batches-2024-09-24"), option.WithResponseInto(&raw)}, opts...)
+	path := "v1/messages/batches?beta=true"
+	cfg, err := requestconfig.NewRequestConfig(ctx, http.MethodGet, path, params, &res, opts...)
+	if err != nil {
+		return nil, err
+	}
+	err = cfg.Execute()
+	if err != nil {
+		return nil, err
+	}
+	res.SetPageConfig(cfg, raw)
+	return res, nil
+}
+
+// List all Message Batches within a Workspace. Most recently created batches are
+// returned first.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *BetaMessageBatchService) ListAutoPaging(ctx context.Context, params BetaMessageBatchListParams, opts ...option.RequestOption) *pagination.PageAutoPager[BetaMessageBatch] {
+	return pagination.NewPageAutoPager(r.List(ctx, params, opts...))
+}
+
+// Delete a Message Batch.
+//
+// Message Batches can only be deleted once they've finished processing. If you'd
+// like to delete an in-progress batch, you must first cancel it.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *BetaMessageBatchService) Delete(ctx context.Context, messageBatchID string, body BetaMessageBatchDeleteParams, opts ...option.RequestOption) (res *BetaDeletedMessageBatch, err error) {
+	for _, v := range body.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithHeader("anthropic-beta", "message-batches-2024-09-24")}, opts...)
+	if messageBatchID == "" {
+		err = errors.New("missing required message_batch_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/messages/batches/%s?beta=true", messageBatchID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodDelete, path, nil, &res, opts...)
+	return
+}
+
+// Batches may be canceled any time before processing ends. Once cancellation is
+// initiated, the batch enters a `canceling` state, at which time the system may
+// complete any in-progress, non-interruptible requests before finalizing
+// cancellation.
+//
+// The number of canceled requests is specified in `request_counts`. To determine
+// which requests were canceled, check the individual results within the batch.
+// Note that cancellation may not result in any canceled requests if they were
+// non-interruptible.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *BetaMessageBatchService) Cancel(ctx context.Context, messageBatchID string, body BetaMessageBatchCancelParams, opts ...option.RequestOption) (res *BetaMessageBatch, err error) {
+	for _, v := range body.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithHeader("anthropic-beta", "message-batches-2024-09-24")}, opts...)
+	if messageBatchID == "" {
+		err = errors.New("missing required message_batch_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/messages/batches/%s/cancel?beta=true", messageBatchID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, nil, &res, opts...)
+	return
+}
+
+// Streams the results of a Message Batch as a `.jsonl` file.
+//
+// Each line in the file is a JSON object containing the result of a single request
+// in the Message Batch. Results are not guaranteed to be in the same order as
+// requests. Use the `custom_id` field to match results to requests.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *BetaMessageBatchService) ResultsStreaming(ctx context.Context, messageBatchID string, query BetaMessageBatchResultsParams, opts ...option.RequestOption) (stream *jsonl.Stream[BetaMessageBatchIndividualResponse]) {
+	var (
+		raw *http.Response
+		err error
+	)
+	for _, v := range query.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithHeader("anthropic-beta", "message-batches-2024-09-24"), option.WithHeader("Accept", "application/x-jsonl")}, opts...)
+	if messageBatchID == "" {
+		err = errors.New("missing required message_batch_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/messages/batches/%s/results?beta=true", messageBatchID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodGet, path, nil, &raw, opts...)
+	return jsonl.NewStream[BetaMessageBatchIndividualResponse](raw, err)
+}
+
+type BetaDeletedMessageBatch struct {
+	// ID of the Message Batch.
+	ID string `json:"id,required"`
+	// Deleted object type.
+	//
+	// For Message Batches, this is always `"message_batch_deleted"`.
+	Type constant.MessageBatchDeleted `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID          respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaDeletedMessageBatch) RawJSON() string { return r.JSON.raw }
+func (r *BetaDeletedMessageBatch) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaMessageBatch struct {
+	// Unique object identifier.
+	//
+	// The format and length of IDs may change over time.
+	ID string `json:"id,required"`
+	// RFC 3339 datetime string representing the time at which the Message Batch was
+	// archived and its results became unavailable.
+	ArchivedAt time.Time `json:"archived_at,required" format:"date-time"`
+	// RFC 3339 datetime string representing the time at which cancellation was
+	// initiated for the Message Batch. Specified only if cancellation was initiated.
+	CancelInitiatedAt time.Time `json:"cancel_initiated_at,required" format:"date-time"`
+	// RFC 3339 datetime string representing the time at which the Message Batch was
+	// created.
+	CreatedAt time.Time `json:"created_at,required" format:"date-time"`
+	// RFC 3339 datetime string representing the time at which processing for the
+	// Message Batch ended. Specified only once processing ends.
+	//
+	// Processing ends when every request in a Message Batch has either succeeded,
+	// errored, been canceled, or expired.
+	EndedAt time.Time `json:"ended_at,required" format:"date-time"`
+	// RFC 3339 datetime string representing the time at which the Message Batch will
+	// expire and end processing, which is 24 hours after creation.
+	ExpiresAt time.Time `json:"expires_at,required" format:"date-time"`
+	// Processing status of the Message Batch.
+	//
+	// Any of "in_progress", "canceling", "ended".
+	ProcessingStatus BetaMessageBatchProcessingStatus `json:"processing_status,required"`
+	// Tallies requests within the Message Batch, categorized by their status.
+	//
+	// Requests start as `processing` and move to one of the other statuses only once
+	// processing of the entire batch ends. The sum of all values always matches the
+	// total number of requests in the batch.
+	RequestCounts BetaMessageBatchRequestCounts `json:"request_counts,required"`
+	// URL to a `.jsonl` file containing the results of the Message Batch requests.
+	// Specified only once processing ends.
+	//
+	// Results in the file are not guaranteed to be in the same order as requests. Use
+	// the `custom_id` field to match results to requests.
+	ResultsURL string `json:"results_url,required"`
+	// Object type.
+	//
+	// For Message Batches, this is always `"message_batch"`.
+	Type constant.MessageBatch `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID                respjson.Field
+		ArchivedAt        respjson.Field
+		CancelInitiatedAt respjson.Field
+		CreatedAt         respjson.Field
+		EndedAt           respjson.Field
+		ExpiresAt         respjson.Field
+		ProcessingStatus  respjson.Field
+		RequestCounts     respjson.Field
+		ResultsURL        respjson.Field
+		Type              respjson.Field
+		ExtraFields       map[string]respjson.Field
+		raw               string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaMessageBatch) RawJSON() string { return r.JSON.raw }
+func (r *BetaMessageBatch) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Processing status of the Message Batch.
+type BetaMessageBatchProcessingStatus string
+
+const (
+	BetaMessageBatchProcessingStatusInProgress BetaMessageBatchProcessingStatus = "in_progress"
+	BetaMessageBatchProcessingStatusCanceling  BetaMessageBatchProcessingStatus = "canceling"
+	BetaMessageBatchProcessingStatusEnded      BetaMessageBatchProcessingStatus = "ended"
+)
+
+type BetaMessageBatchCanceledResult struct {
+	Type constant.Canceled `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaMessageBatchCanceledResult) RawJSON() string { return r.JSON.raw }
+func (r *BetaMessageBatchCanceledResult) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaMessageBatchErroredResult struct {
+	Error BetaErrorResponse `json:"error,required"`
+	Type  constant.Errored  `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Error       respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaMessageBatchErroredResult) RawJSON() string { return r.JSON.raw }
+func (r *BetaMessageBatchErroredResult) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaMessageBatchExpiredResult struct {
+	Type constant.Expired `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaMessageBatchExpiredResult) RawJSON() string { return r.JSON.raw }
+func (r *BetaMessageBatchExpiredResult) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// This is a single line in the response `.jsonl` file and does not represent the
+// response as a whole.
+type BetaMessageBatchIndividualResponse struct {
+	// Developer-provided ID created for each request in a Message Batch. Useful for
+	// matching results to requests, as results may be given out of request order.
+	//
+	// Must be unique for each request within the Message Batch.
+	CustomID string `json:"custom_id,required"`
+	// Processing result for this request.
+	//
+	// Contains a Message output if processing was successful, an error response if
+	// processing failed, or the reason why processing was not attempted, such as
+	// cancellation or expiration.
+	Result BetaMessageBatchResultUnion `json:"result,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CustomID    respjson.Field
+		Result      respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaMessageBatchIndividualResponse) RawJSON() string { return r.JSON.raw }
+func (r *BetaMessageBatchIndividualResponse) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaMessageBatchRequestCounts struct {
+	// Number of requests in the Message Batch that have been canceled.
+	//
+	// This is zero until processing of the entire Message Batch has ended.
+	Canceled int64 `json:"canceled,required"`
+	// Number of requests in the Message Batch that encountered an error.
+	//
+	// This is zero until processing of the entire Message Batch has ended.
+	Errored int64 `json:"errored,required"`
+	// Number of requests in the Message Batch that have expired.
+	//
+	// This is zero until processing of the entire Message Batch has ended.
+	Expired int64 `json:"expired,required"`
+	// Number of requests in the Message Batch that are processing.
+	Processing int64 `json:"processing,required"`
+	// Number of requests in the Message Batch that have completed successfully.
+	//
+	// This is zero until processing of the entire Message Batch has ended.
+	Succeeded int64 `json:"succeeded,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Canceled    respjson.Field
+		Errored     respjson.Field
+		Expired     respjson.Field
+		Processing  respjson.Field
+		Succeeded   respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaMessageBatchRequestCounts) RawJSON() string { return r.JSON.raw }
+func (r *BetaMessageBatchRequestCounts) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// BetaMessageBatchResultUnion contains all possible properties and values from
+// [BetaMessageBatchSucceededResult], [BetaMessageBatchErroredResult],
+// [BetaMessageBatchCanceledResult], [BetaMessageBatchExpiredResult].
+//
+// Use the [BetaMessageBatchResultUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type BetaMessageBatchResultUnion struct {
+	// This field is from variant [BetaMessageBatchSucceededResult].
+	Message BetaMessage `json:"message"`
+	// Any of "succeeded", "errored", "canceled", "expired".
+	Type string `json:"type"`
+	// This field is from variant [BetaMessageBatchErroredResult].
+	Error BetaErrorResponse `json:"error"`
+	JSON  struct {
+		Message respjson.Field
+		Type    respjson.Field
+		Error   respjson.Field
+		raw     string
+	} `json:"-"`
+}
+
+// anyBetaMessageBatchResult is implemented by each variant of
+// [BetaMessageBatchResultUnion] to add type safety for the return type of
+// [BetaMessageBatchResultUnion.AsAny]
+type anyBetaMessageBatchResult interface {
+	implBetaMessageBatchResultUnion()
+}
+
+func (BetaMessageBatchSucceededResult) implBetaMessageBatchResultUnion() {}
+func (BetaMessageBatchErroredResult) implBetaMessageBatchResultUnion()   {}
+func (BetaMessageBatchCanceledResult) implBetaMessageBatchResultUnion()  {}
+func (BetaMessageBatchExpiredResult) implBetaMessageBatchResultUnion()   {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := BetaMessageBatchResultUnion.AsAny().(type) {
+//	case anthropic.BetaMessageBatchSucceededResult:
+//	case anthropic.BetaMessageBatchErroredResult:
+//	case anthropic.BetaMessageBatchCanceledResult:
+//	case anthropic.BetaMessageBatchExpiredResult:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u BetaMessageBatchResultUnion) AsAny() anyBetaMessageBatchResult {
+	switch u.Type {
+	case "succeeded":
+		return u.AsSucceeded()
+	case "errored":
+		return u.AsErrored()
+	case "canceled":
+		return u.AsCanceled()
+	case "expired":
+		return u.AsExpired()
+	}
+	return nil
+}
+
+func (u BetaMessageBatchResultUnion) AsSucceeded() (v BetaMessageBatchSucceededResult) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaMessageBatchResultUnion) AsErrored() (v BetaMessageBatchErroredResult) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaMessageBatchResultUnion) AsCanceled() (v BetaMessageBatchCanceledResult) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u BetaMessageBatchResultUnion) AsExpired() (v BetaMessageBatchExpiredResult) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u BetaMessageBatchResultUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *BetaMessageBatchResultUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaMessageBatchSucceededResult struct {
+	Message BetaMessage        `json:"message,required"`
+	Type    constant.Succeeded `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaMessageBatchSucceededResult) RawJSON() string { return r.JSON.raw }
+func (r *BetaMessageBatchSucceededResult) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaMessageBatchNewParams struct {
+	// List of requests for prompt completion. Each is an individual request to create
+	// a Message.
+	Requests []BetaMessageBatchNewParamsRequest `json:"requests,omitzero,required"`
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+func (r BetaMessageBatchNewParams) MarshalJSON() (data []byte, err error) {
+	type shadow BetaMessageBatchNewParams
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaMessageBatchNewParams) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties CustomID, Params are required.
+type BetaMessageBatchNewParamsRequest struct {
+	// Developer-provided ID created for each request in a Message Batch. Useful for
+	// matching results to requests, as results may be given out of request order.
+	//
+	// Must be unique for each request within the Message Batch.
+	CustomID string `json:"custom_id,required"`
+	// Messages API creation parameters for the individual request.
+	//
+	// See the [Messages API reference](/en/api/messages) for full documentation on
+	// available parameters.
+	Params BetaMessageBatchNewParamsRequestParams `json:"params,omitzero,required"`
+	paramObj
+}
+
+func (r BetaMessageBatchNewParamsRequest) MarshalJSON() (data []byte, err error) {
+	type shadow BetaMessageBatchNewParamsRequest
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaMessageBatchNewParamsRequest) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Messages API creation parameters for the individual request.
+//
+// See the [Messages API reference](/en/api/messages) for full documentation on
+// available parameters.
+//
+// The properties MaxTokens, Messages, Model are required.
+type BetaMessageBatchNewParamsRequestParams struct {
+	// The maximum number of tokens to generate before stopping.
+	//
+	// Note that our models may stop _before_ reaching this maximum. This parameter
+	// only specifies the absolute maximum number of tokens to generate.
+	//
+	// Different models have different maximum values for this parameter. See
+	// [models](https://docs.anthropic.com/en/docs/models-overview) for details.
+	MaxTokens int64 `json:"max_tokens,required"`
+	// Input messages.
+	//
+	// Our models are trained to operate on alternating `user` and `assistant`
+	// conversational turns. When creating a new `Message`, you specify the prior
+	// conversational turns with the `messages` parameter, and the model then generates
+	// the next `Message` in the conversation. Consecutive `user` or `assistant` turns
+	// in your request will be combined into a single turn.
+	//
+	// Each input message must be an object with a `role` and `content`. You can
+	// specify a single `user`-role message, or you can include multiple `user` and
+	// `assistant` messages.
+	//
+	// If the final message uses the `assistant` role, the response content will
+	// continue immediately from the content in that message. This can be used to
+	// constrain part of the model's response.
+	//
+	// Example with a single `user` message:
+	//
+	// ```json
+	// [{ "role": "user", "content": "Hello, Claude" }]
+	// ```
+	//
+	// Example with multiple conversational turns:
+	//
+	// ```json
+	// [
+	//
+	//	{ "role": "user", "content": "Hello there." },
+	//	{ "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
+	//	{ "role": "user", "content": "Can you explain LLMs in plain English?" }
+	//
+	// ]
+	// ```
+	//
+	// Example with a partially-filled response from Claude:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
+	//	},
+	//	{ "role": "assistant", "content": "The best answer is (" }
+	//
+	// ]
+	// ```
+	//
+	// Each input message `content` may be either a single `string` or an array of
+	// content blocks, where each block has a specific `type`. Using a `string` for
+	// `content` is shorthand for an array of one content block of type `"text"`. The
+	// following input messages are equivalent:
+	//
+	// ```json
+	// { "role": "user", "content": "Hello, Claude" }
+	// ```
+	//
+	// ```json
+	// { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
+	// ```
+	//
+	// Starting with Claude 3 models, you can also send image content blocks:
+	//
+	// ```json
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": [
+	//	    {
+	//	      "type": "image",
+	//	      "source": {
+	//	        "type": "base64",
+	//	        "media_type": "image/jpeg",
+	//	        "data": "/9j/4AAQSkZJRg..."
+	//	      }
+	//	    },
+	//	    { "type": "text", "text": "What is in this image?" }
+	//	  ]
+	//	}
+	//
+	// ```
+	//
+	// We currently support the `base64` source type for images, and the `image/jpeg`,
+	// `image/png`, `image/gif`, and `image/webp` media types.
+	//
+	// See [examples](https://docs.anthropic.com/en/api/messages-examples#vision) for
+	// more input examples.
+	//
+	// Note that if you want to include a
+	// [system prompt](https://docs.anthropic.com/en/docs/system-prompts), you can use
+	// the top-level `system` parameter — there is no `"system"` role for input
+	// messages in the Messages API.
+	//
+	// There is a limit of 100000 messages in a single request.
+	Messages []BetaMessageParam `json:"messages,omitzero,required"`
+	// The model that will complete your prompt.
+	//
+	// See [models](https://docs.anthropic.com/en/docs/models-overview) for
+	// additional details and options.
+	Model Model `json:"model,omitzero,required"`
+	// Container identifier for reuse across requests.
+	Container param.Opt[string] `json:"container,omitzero"`
+	// Whether to incrementally stream the response using server-sent events.
+	//
+	// See [streaming](https://docs.anthropic.com/en/api/messages-streaming) for
+	// details.
+	Stream param.Opt[bool] `json:"stream,omitzero"`
+	// Amount of randomness injected into the response.
+	//
+	// Defaults to `1.0`. Ranges from `0.0` to `1.0`. Use `temperature` closer to `0.0`
+	// for analytical / multiple choice, and closer to `1.0` for creative and
+	// generative tasks.
+	//
+	// Note that even with `temperature` of `0.0`, the results will not be fully
+	// deterministic.
+	Temperature param.Opt[float64] `json:"temperature,omitzero"`
+	// Only sample from the top K options for each subsequent token.
+	//
+	// Used to remove "long tail" low probability responses.
+	// [Learn more technical details here](https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277).
+	//
+	// Recommended for advanced use cases only. You usually only need to use
+	// `temperature`.
+	TopK param.Opt[int64] `json:"top_k,omitzero"`
+	// Use nucleus sampling.
+	//
+	// In nucleus sampling, we compute the cumulative distribution over all the options
+	// for each subsequent token in decreasing probability order and cut it off once it
+	// reaches a particular probability specified by `top_p`. You should either alter
+	// `temperature` or `top_p`, but not both.
+	//
+	// Recommended for advanced use cases only. You usually only need to use
+	// `temperature`.
+	TopP param.Opt[float64] `json:"top_p,omitzero"`
+	// MCP servers to be utilized in this request
+	MCPServers []BetaRequestMCPServerURLDefinitionParam `json:"mcp_servers,omitzero"`
+	// An object describing metadata about the request.
+	Metadata BetaMetadataParam `json:"metadata,omitzero"`
+	// Determines whether to use priority capacity (if available) or standard capacity
+	// for this request.
+	//
+	// Anthropic offers different levels of service for your API requests. See
+	// [service-tiers](https://docs.anthropic.com/en/api/service-tiers) for details.
+	//
+	// Any of "auto", "standard_only".
+	ServiceTier string `json:"service_tier,omitzero"`
+	// Custom text sequences that will cause the model to stop generating.
+	//
+	// Our models will normally stop when they have naturally completed their turn,
+	// which will result in a response `stop_reason` of `"end_turn"`.
+	//
+	// If you want the model to stop generating when it encounters custom strings of
+	// text, you can use the `stop_sequences` parameter. If the model encounters one of
+	// the custom sequences, the response `stop_reason` value will be `"stop_sequence"`
+	// and the response `stop_sequence` value will contain the matched stop sequence.
+	StopSequences []string `json:"stop_sequences,omitzero"`
+	// System prompt.
+	//
+	// A system prompt is a way of providing context and instructions to Claude, such
+	// as specifying a particular goal or role. See our
+	// [guide to system prompts](https://docs.anthropic.com/en/docs/system-prompts).
+	System []BetaTextBlockParam `json:"system,omitzero"`
+	// Configuration for enabling Claude's extended thinking.
+	//
+	// When enabled, responses include `thinking` content blocks showing Claude's
+	// thinking process before the final answer. Requires a minimum budget of 1,024
+	// tokens and counts towards your `max_tokens` limit.
+	//
+	// See
+	// [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking)
+	// for details.
+	Thinking BetaThinkingConfigParamUnion `json:"thinking,omitzero"`
+	// How the model should use the provided tools. The model can use a specific tool,
+	// any available tool, decide by itself, or not use tools at all.
+	ToolChoice BetaToolChoiceUnionParam `json:"tool_choice,omitzero"`
+	// Definitions of tools that the model may use.
+	//
+	// If you include `tools` in your API request, the model may return `tool_use`
+	// content blocks that represent the model's use of those tools. You can then run
+	// those tools using the tool input generated by the model and then optionally
+	// return results back to the model using `tool_result` content blocks.
+	//
+	// Each tool definition includes:
+	//
+	//   - `name`: Name of the tool.
+	//   - `description`: Optional, but strongly recommended description of the tool.
+	//   - `input_schema`: [JSON schema](https://json-schema.org/draft/2020-12) for the
+	//     tool `input` shape that the model will produce in `tool_use` output content
+	//     blocks.
+	//
+	// For example, if you defined `tools` as:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "name": "get_stock_price",
+	//	  "description": "Get the current stock price for a given ticker symbol.",
+	//	  "input_schema": {
+	//	    "type": "object",
+	//	    "properties": {
+	//	      "ticker": {
+	//	        "type": "string",
+	//	        "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
+	//	      }
+	//	    },
+	//	    "required": ["ticker"]
+	//	  }
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// And then asked the model "What's the S&P 500 at today?", the model might produce
+	// `tool_use` content blocks in the response like this:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "type": "tool_use",
+	//	  "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
+	//	  "name": "get_stock_price",
+	//	  "input": { "ticker": "^GSPC" }
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// You might then run your `get_stock_price` tool with `{"ticker": "^GSPC"}` as an
+	// input, and return the following back to the model in a subsequent `user`
+	// message:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "type": "tool_result",
+	//	  "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
+	//	  "content": "259.75 USD"
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// Tools can be used for workflows that include running client-side tools and
+	// functions, or more generally whenever you want the model to produce a particular
+	// JSON structure of output.
+	//
+	// See our [guide](https://docs.anthropic.com/en/docs/tool-use) for more details.
+	Tools []BetaToolUnionParam `json:"tools,omitzero"`
+	paramObj
+}
+
+func (r BetaMessageBatchNewParamsRequestParams) MarshalJSON() (data []byte, err error) {
+	type shadow BetaMessageBatchNewParamsRequestParams
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *BetaMessageBatchNewParamsRequestParams) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func init() {
+	apijson.RegisterFieldValidator[BetaMessageBatchNewParamsRequestParams](
+		"service_tier", "auto", "standard_only",
+	)
+}
+
+type BetaMessageBatchGetParams struct {
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+type BetaMessageBatchListParams struct {
+	// ID of the object to use as a cursor for pagination. When provided, returns the
+	// page of results immediately after this object.
+	AfterID param.Opt[string] `query:"after_id,omitzero" json:"-"`
+	// ID of the object to use as a cursor for pagination. When provided, returns the
+	// page of results immediately before this object.
+	BeforeID param.Opt[string] `query:"before_id,omitzero" json:"-"`
+	// Number of items to return per page.
+	//
+	// Defaults to `20`. Ranges from `1` to `1000`.
+	Limit param.Opt[int64] `query:"limit,omitzero" json:"-"`
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+// URLQuery serializes [BetaMessageBatchListParams]'s query parameters as
+// `url.Values`.
+func (r BetaMessageBatchListParams) URLQuery() (v url.Values, err error) {
+	return apiquery.MarshalWithSettings(r, apiquery.QuerySettings{
+		ArrayFormat:  apiquery.ArrayQueryFormatComma,
+		NestedFormat: apiquery.NestedQueryFormatBrackets,
+	})
+}
+
+type BetaMessageBatchDeleteParams struct {
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+type BetaMessageBatchCancelParams struct {
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+type BetaMessageBatchResultsParams struct {
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
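The list params above page by object-ID cursor: `after_id` returns the page immediately after a given object, `before_id` the page before it, with `limit` ranging from 1 to 1000. A stdlib-only sketch of that cursor walk; the `listPage` helper and the IDs are hypothetical, not part of the SDK (which drives this through `pagination.Page`):

```go
package main

type page struct {
	Items   []string
	HasMore bool
	LastID  string
}

// listPage returns up to limit items strictly after the item whose ID equals
// afterID; an empty afterID starts from the beginning.
func listPage(ids []string, afterID string, limit int) page {
	start := 0
	if afterID != "" {
		for i, id := range ids {
			if id == afterID {
				start = i + 1
				break
			}
		}
	}
	end := start + limit
	if end > len(ids) {
		end = len(ids)
	}
	items := ids[start:end]
	p := page{Items: items, HasMore: end < len(ids)}
	if len(items) > 0 {
		p.LastID = items[len(items)-1]
	}
	return p
}

func main() {
	ids := []string{"msgbatch_a", "msgbatch_b", "msgbatch_c"}
	var got []string
	cursor := ""
	for {
		p := listPage(ids, cursor, 2)
		got = append(got, p.Items...)
		if !p.HasMore {
			break
		}
		cursor = p.LastID // feed the last ID back in as after_id
	}
	if len(got) != 3 || got[2] != "msgbatch_c" {
		panic("pagination walked the list incorrectly")
	}
}
```

The real endpoints behave the same way: each response reports whether more pages exist, and the last object's ID becomes the next request's `after_id`.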

vendor/github.com/anthropics/anthropic-sdk-go/betamodel.go 🔗

@@ -0,0 +1,149 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package anthropic
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+	"time"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/apijson"
+	"github.com/anthropics/anthropic-sdk-go/internal/apiquery"
+	"github.com/anthropics/anthropic-sdk-go/internal/requestconfig"
+	"github.com/anthropics/anthropic-sdk-go/option"
+	"github.com/anthropics/anthropic-sdk-go/packages/pagination"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+	"github.com/anthropics/anthropic-sdk-go/shared/constant"
+)
+
+// BetaModelService contains methods and other services that help with interacting
+// with the anthropic API.
+//
+// Note, unlike clients, this service does not read variables from the environment
+// automatically. You should not instantiate this service directly; use the
+// [NewBetaModelService] method instead.
+type BetaModelService struct {
+	Options []option.RequestOption
+}
+
+// NewBetaModelService generates a new service that applies the given options to
+// each request. These options are applied after the parent client's options (if
+// there is one), and before any request-specific options.
+func NewBetaModelService(opts ...option.RequestOption) (r BetaModelService) {
+	r = BetaModelService{}
+	r.Options = opts
+	return
+}
+
+// Get a specific model.
+//
+// The Models API response can be used to determine information about a specific
+// model or resolve a model alias to a model ID.
+func (r *BetaModelService) Get(ctx context.Context, modelID string, query BetaModelGetParams, opts ...option.RequestOption) (res *BetaModelInfo, err error) {
+	for _, v := range query.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	if modelID == "" {
+		err = errors.New("missing required model_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/models/%s?beta=true", modelID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodGet, path, nil, &res, opts...)
+	return
+}
+
+// List available models.
+//
+// The Models API response can be used to determine which models are available for
+// use in the API. More recently released models are listed first.
+func (r *BetaModelService) List(ctx context.Context, params BetaModelListParams, opts ...option.RequestOption) (res *pagination.Page[BetaModelInfo], err error) {
+	var raw *http.Response
+	for _, v := range params.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithResponseInto(&raw)}, opts...)
+	path := "v1/models?beta=true"
+	cfg, err := requestconfig.NewRequestConfig(ctx, http.MethodGet, path, params, &res, opts...)
+	if err != nil {
+		return nil, err
+	}
+	err = cfg.Execute()
+	if err != nil {
+		return nil, err
+	}
+	res.SetPageConfig(cfg, raw)
+	return res, nil
+}
+
+// List available models.
+//
+// The Models API response can be used to determine which models are available for
+// use in the API. More recently released models are listed first.
+func (r *BetaModelService) ListAutoPaging(ctx context.Context, params BetaModelListParams, opts ...option.RequestOption) *pagination.PageAutoPager[BetaModelInfo] {
+	return pagination.NewPageAutoPager(r.List(ctx, params, opts...))
+}
+
+type BetaModelInfo struct {
+	// Unique model identifier.
+	ID string `json:"id,required"`
+	// RFC 3339 datetime string representing the time at which the model was released.
+	// May be set to an epoch value if the release date is unknown.
+	CreatedAt time.Time `json:"created_at,required" format:"date-time"`
+	// A human-readable name for the model.
+	DisplayName string `json:"display_name,required"`
+	// Object type.
+	//
+	// For Models, this is always `"model"`.
+	Type constant.Model `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID          respjson.Field
+		CreatedAt   respjson.Field
+		DisplayName respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BetaModelInfo) RawJSON() string { return r.JSON.raw }
+func (r *BetaModelInfo) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type BetaModelGetParams struct {
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+type BetaModelListParams struct {
+	// ID of the object to use as a cursor for pagination. When provided, returns the
+	// page of results immediately after this object.
+	AfterID param.Opt[string] `query:"after_id,omitzero" json:"-"`
+	// ID of the object to use as a cursor for pagination. When provided, returns the
+	// page of results immediately before this object.
+	BeforeID param.Opt[string] `query:"before_id,omitzero" json:"-"`
+	// Number of items to return per page.
+	//
+	// Defaults to `20`. Ranges from `1` to `1000`.
+	Limit param.Opt[int64] `query:"limit,omitzero" json:"-"`
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+// URLQuery serializes [BetaModelListParams]'s query parameters as `url.Values`.
+func (r BetaModelListParams) URLQuery() (v url.Values, err error) {
+	return apiquery.MarshalWithSettings(r, apiquery.QuerySettings{
+		ArrayFormat:  apiquery.ArrayQueryFormatComma,
+		NestedFormat: apiquery.NestedQueryFormatBrackets,
+	})
+}

vendor/github.com/anthropics/anthropic-sdk-go/client.go 🔗

@@ -0,0 +1,126 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package anthropic
+
+import (
+	"context"
+	"net/http"
+	"os"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/requestconfig"
+	"github.com/anthropics/anthropic-sdk-go/option"
+)
+
+// Client creates a struct with services and top level methods that help with
+// interacting with the anthropic API. You should not instantiate this client
+// directly; use the [NewClient] method instead.
+type Client struct {
+	Options     []option.RequestOption
+	Completions CompletionService
+	Messages    MessageService
+	Models      ModelService
+	Beta        BetaService
+}
+
+// DefaultClientOptions reads from the environment (ANTHROPIC_API_KEY,
+// ANTHROPIC_AUTH_TOKEN, ANTHROPIC_BASE_URL). This should be used to initialize new
+// clients.
+func DefaultClientOptions() []option.RequestOption {
+	defaults := []option.RequestOption{option.WithEnvironmentProduction()}
+	if o, ok := os.LookupEnv("ANTHROPIC_BASE_URL"); ok {
+		defaults = append(defaults, option.WithBaseURL(o))
+	}
+	if o, ok := os.LookupEnv("ANTHROPIC_API_KEY"); ok {
+		defaults = append(defaults, option.WithAPIKey(o))
+	}
+	if o, ok := os.LookupEnv("ANTHROPIC_AUTH_TOKEN"); ok {
+		defaults = append(defaults, option.WithAuthToken(o))
+	}
+	return defaults
+}
+
+// NewClient generates a new client with the default options read from the
+// environment (ANTHROPIC_API_KEY, ANTHROPIC_AUTH_TOKEN, ANTHROPIC_BASE_URL). The
+// options passed in as arguments are applied after these defaults, and all
+// options will be passed down to the services and requests that this client makes.
+func NewClient(opts ...option.RequestOption) (r Client) {
+	opts = append(DefaultClientOptions(), opts...)
+
+	r = Client{Options: opts}
+
+	r.Completions = NewCompletionService(opts...)
+	r.Messages = NewMessageService(opts...)
+	r.Models = NewModelService(opts...)
+	r.Beta = NewBetaService(opts...)
+
+	return
+}
+
+// Execute makes a request with the given context, method, URL, request params,
+// response, and request options. This is useful for hitting undocumented endpoints
+// while retaining the base URL, auth, retries, and other options from the client.
+//
+// If a byte slice or an [io.Reader] is supplied to params, it will be used as-is
+// for the request body.
+//
+// The params is by default serialized into the body using [encoding/json]. If your
+// type implements a MarshalJSON function, it will be used instead to serialize the
+// request. If a URLQuery method is implemented, the returned [url.Values] will be
+// used as query strings to the url.
+//
+// If your params struct uses [param.Field], you must provide [MarshalJSON],
+// [URLQuery], and/or [MarshalForm] functions. It is undefined behavior to use a
+// struct that uses [param.Field] without specifying how it is serialized.
+//
+// Any "…Params" object defined in this library can be used as the request
+// argument. Note that 'path' arguments will not be forwarded into the url.
+//
+// The response body will be deserialized into the res variable, depending on its
+// type:
+//
+//   - A pointer to a [*http.Response] is populated by the raw response.
+//   - A pointer to a byte array will be populated with the contents of the
+//     response body.
+//   - A pointer to any other type uses this library's default JSON decoding, which
+//     respects UnmarshalJSON if it is defined on the type.
+//   - A nil value will not read the response body.
+//
+// For even greater flexibility, see [option.WithResponseInto] and
+// [option.WithResponseBodyInto].
+func (r *Client) Execute(ctx context.Context, method string, path string, params any, res any, opts ...option.RequestOption) error {
+	opts = append(r.Options, opts...)
+	return requestconfig.ExecuteNewRequest(ctx, method, path, params, res, opts...)
+}
+
+// Get makes a GET request with the given URL, params, and optionally deserializes
+// to a response. See [Execute] documentation on the params and response.
+func (r *Client) Get(ctx context.Context, path string, params any, res any, opts ...option.RequestOption) error {
+	return r.Execute(ctx, http.MethodGet, path, params, res, opts...)
+}
+
+// Post makes a POST request with the given URL, params, and optionally
+// deserializes to a response. See [Execute] documentation on the params and
+// response.
+func (r *Client) Post(ctx context.Context, path string, params any, res any, opts ...option.RequestOption) error {
+	return r.Execute(ctx, http.MethodPost, path, params, res, opts...)
+}
+
+// Put makes a PUT request with the given URL, params, and optionally deserializes
+// to a response. See [Execute] documentation on the params and response.
+func (r *Client) Put(ctx context.Context, path string, params any, res any, opts ...option.RequestOption) error {
+	return r.Execute(ctx, http.MethodPut, path, params, res, opts...)
+}
+
+// Patch makes a PATCH request with the given URL, params, and optionally
+// deserializes to a response. See [Execute] documentation on the params and
+// response.
+func (r *Client) Patch(ctx context.Context, path string, params any, res any, opts ...option.RequestOption) error {
+	return r.Execute(ctx, http.MethodPatch, path, params, res, opts...)
+}
+
+// Delete makes a DELETE request with the given URL, params, and optionally
+// deserializes to a response. See [Execute] documentation on the params and
+// response.
+func (r *Client) Delete(ctx context.Context, path string, params any, res any, opts ...option.RequestOption) error {
+	return r.Execute(ctx, http.MethodDelete, path, params, res, opts...)
+}

vendor/github.com/anthropics/anthropic-sdk-go/completion.go 🔗

@@ -0,0 +1,194 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package anthropic
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/apijson"
+	"github.com/anthropics/anthropic-sdk-go/internal/requestconfig"
+	"github.com/anthropics/anthropic-sdk-go/option"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+	"github.com/anthropics/anthropic-sdk-go/packages/ssestream"
+	"github.com/anthropics/anthropic-sdk-go/shared/constant"
+)
+
+// CompletionService contains methods and other services that help with interacting
+// with the anthropic API.
+//
+// Note, unlike clients, this service does not read variables from the environment
+// automatically. You should not instantiate this service directly; use the
+// [NewCompletionService] method instead.
+type CompletionService struct {
+	Options []option.RequestOption
+}
+
+// NewCompletionService generates a new service that applies the given options to
+// each request. These options are applied after the parent client's options (if
+// there is one), and before any request-specific options.
+func NewCompletionService(opts ...option.RequestOption) (r CompletionService) {
+	r = CompletionService{}
+	r.Options = opts
+	return
+}
+
+// [Legacy] Create a Text Completion.
+//
+// The Text Completions API is a legacy API. We recommend using the
+// [Messages API](https://docs.anthropic.com/en/api/messages) going forward.
+//
+// Future models and features will not be compatible with Text Completions. See our
+// [migration guide](https://docs.anthropic.com/en/api/migrating-from-text-completions-to-messages)
+// for guidance in migrating from Text Completions to Messages.
+//
+// Note: If you choose to set a timeout for this request, we recommend 10 minutes.
+func (r *CompletionService) New(ctx context.Context, params CompletionNewParams, opts ...option.RequestOption) (res *Completion, err error) {
+	for _, v := range params.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	path := "v1/complete"
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, params, &res, opts...)
+	return
+}
+
+// [Legacy] Create a Text Completion.
+//
+// The Text Completions API is a legacy API. We recommend using the
+// [Messages API](https://docs.anthropic.com/en/api/messages) going forward.
+//
+// Future models and features will not be compatible with Text Completions. See our
+// [migration guide](https://docs.anthropic.com/en/api/migrating-from-text-completions-to-messages)
+// for guidance in migrating from Text Completions to Messages.
+//
+// Note: If you choose to set a timeout for this request, we recommend 10 minutes.
+func (r *CompletionService) NewStreaming(ctx context.Context, params CompletionNewParams, opts ...option.RequestOption) (stream *ssestream.Stream[Completion]) {
+	var (
+		raw *http.Response
+		err error
+	)
+	for _, v := range params.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithJSONSet("stream", true)}, opts...)
+	path := "v1/complete"
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, params, &raw, opts...)
+	return ssestream.NewStream[Completion](ssestream.NewDecoder(raw), err)
+}
+
+type Completion struct {
+	// Unique object identifier.
+	//
+	// The format and length of IDs may change over time.
+	ID string `json:"id,required"`
+	// The resulting completion up to and excluding the stop sequences.
+	Completion string `json:"completion,required"`
+	// The model that will complete your prompt.
+	//
+	// See [models](https://docs.anthropic.com/en/docs/models-overview) for additional
+	// details and options.
+	Model Model `json:"model,required"`
+	// The reason that we stopped.
+	//
+	// This may be one of the following values:
+	//
+	//   - `"stop_sequence"`: we reached a stop sequence — either provided by you via the
+	//     `stop_sequences` parameter, or a stop sequence built into the model
+	//   - `"max_tokens"`: we exceeded `max_tokens_to_sample` or the model's maximum
+	StopReason string `json:"stop_reason,required"`
+	// Object type.
+	//
+	// For Text Completions, this is always `"completion"`.
+	Type constant.Completion `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID          respjson.Field
+		Completion  respjson.Field
+		Model       respjson.Field
+		StopReason  respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r Completion) RawJSON() string { return r.JSON.raw }
+func (r *Completion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type CompletionNewParams struct {
+	// The maximum number of tokens to generate before stopping.
+	//
+	// Note that our models may stop _before_ reaching this maximum. This parameter
+	// only specifies the absolute maximum number of tokens to generate.
+	MaxTokensToSample int64 `json:"max_tokens_to_sample,required"`
+	// The model that will complete your prompt.
+	//
+	// See [models](https://docs.anthropic.com/en/docs/models-overview) for additional
+	// details and options.
+	Model Model `json:"model,omitzero,required"`
+	// The prompt that you want Claude to complete.
+	//
+	// For proper response generation you will need to format your prompt using
+	// alternating `\n\nHuman:` and `\n\nAssistant:` conversational turns. For example:
+	//
+	// ```
+	// "\n\nHuman: {userQuestion}\n\nAssistant:"
+	// ```
+	//
+	// See [prompt validation](https://docs.anthropic.com/en/api/prompt-validation) and
+	// our guide to
+	// [prompt design](https://docs.anthropic.com/en/docs/intro-to-prompting) for more
+	// details.
+	Prompt string `json:"prompt,required"`
+	// Amount of randomness injected into the response.
+	//
+	// Defaults to `1.0`. Ranges from `0.0` to `1.0`. Use `temperature` closer to `0.0`
+	// for analytical / multiple choice, and closer to `1.0` for creative and
+	// generative tasks.
+	//
+	// Note that even with `temperature` of `0.0`, the results will not be fully
+	// deterministic.
+	Temperature param.Opt[float64] `json:"temperature,omitzero"`
+	// Only sample from the top K options for each subsequent token.
+	//
+	// Used to remove "long tail" low probability responses.
+	// [Learn more technical details here](https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277).
+	//
+	// Recommended for advanced use cases only. You usually only need to use
+	// `temperature`.
+	TopK param.Opt[int64] `json:"top_k,omitzero"`
+	// Use nucleus sampling.
+	//
+	// In nucleus sampling, we compute the cumulative distribution over all the options
+	// for each subsequent token in decreasing probability order and cut it off once it
+	// reaches a particular probability specified by `top_p`. You should either alter
+	// `temperature` or `top_p`, but not both.
+	//
+	// Recommended for advanced use cases only. You usually only need to use
+	// `temperature`.
+	TopP param.Opt[float64] `json:"top_p,omitzero"`
+	// An object describing metadata about the request.
+	Metadata MetadataParam `json:"metadata,omitzero"`
+	// Sequences that will cause the model to stop generating.
+	//
+	// Our models stop on `"\n\nHuman:"`, and may include additional built-in stop
+	// sequences in the future. By providing the stop_sequences parameter, you may
+	// include additional strings that will cause the model to stop generating.
+	StopSequences []string `json:"stop_sequences,omitzero"`
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+func (r CompletionNewParams) MarshalJSON() (data []byte, err error) {
+	type shadow CompletionNewParams
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *CompletionNewParams) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}

vendor/github.com/anthropics/anthropic-sdk-go/field.go 🔗

@@ -0,0 +1,45 @@
+package anthropic
+
+import (
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"io"
+	"time"
+)
+
+func String(s string) param.Opt[string]     { return param.NewOpt(s) }
+func Int(i int64) param.Opt[int64]          { return param.NewOpt(i) }
+func Bool(b bool) param.Opt[bool]           { return param.NewOpt(b) }
+func Float(f float64) param.Opt[float64]    { return param.NewOpt(f) }
+func Time(t time.Time) param.Opt[time.Time] { return param.NewOpt(t) }
+
+func Opt[T comparable](v T) param.Opt[T] { return param.NewOpt(v) }
+func Ptr[T any](v T) *T                  { return &v }
+
+func IntPtr(v int64) *int64          { return &v }
+func BoolPtr(v bool) *bool           { return &v }
+func FloatPtr(v float64) *float64    { return &v }
+func StringPtr(v string) *string     { return &v }
+func TimePtr(v time.Time) *time.Time { return &v }
+
+func File(rdr io.Reader, filename string, contentType string) file {
+	return file{rdr, filename, contentType}
+}
+
+type file struct {
+	io.Reader
+	name        string
+	contentType string
+}
+
+func (f file) Filename() string {
+	if f.name != "" {
+		return f.name
+	} else if named, ok := f.Reader.(interface{ Name() string }); ok {
+		return named.Name()
+	}
+	return ""
+}
+
+func (f file) ContentType() string {
+	return f.contentType
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apierror/apierror.go 🔗

@@ -0,0 +1,50 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package apierror
+
+import (
+	"fmt"
+	"net/http"
+	"net/http/httputil"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/apijson"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+)
+
+// Error represents an error that originates from the API, i.e. when a request is
+// made and the API returns a response with a HTTP status code. Other errors are
+// not wrapped by this SDK.
+type Error struct {
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+	StatusCode int
+	Request    *http.Request
+	Response   *http.Response
+}
+
+// Returns the unmodified JSON received from the API
+func (r Error) RawJSON() string { return r.JSON.raw }
+func (r *Error) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (r *Error) Error() string {
+	// Include the request method, URL, status, and raw JSON error body
+	return fmt.Sprintf("%s %q: %d %s %s", r.Request.Method, r.Request.URL, r.Response.StatusCode, http.StatusText(r.Response.StatusCode), r.JSON.raw)
+}
+
+func (r *Error) DumpRequest(body bool) []byte {
+	if r.Request.GetBody != nil {
+		r.Request.Body, _ = r.Request.GetBody()
+	}
+	out, _ := httputil.DumpRequestOut(r.Request, body)
+	return out
+}
+
+func (r *Error) DumpResponse(body bool) []byte {
+	out, _ := httputil.DumpResponse(r.Response, body)
+	return out
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apiform/encoder.go 🔗

@@ -0,0 +1,465 @@
+package apiform
+
+import (
+	"fmt"
+	"io"
+	"mime/multipart"
+	"net/textproto"
+	"path"
+	"reflect"
+	"sort"
+	"strconv"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+)
+
+var encoders sync.Map // map[encoderEntry]encoderFunc
+
+func Marshal(value any, writer *multipart.Writer) error {
+	e := &encoder{
+		dateFormat: time.RFC3339,
+		arrayFmt:   "comma",
+	}
+	return e.marshal(value, writer)
+}
+
+func MarshalRoot(value any, writer *multipart.Writer) error {
+	e := &encoder{
+		root:       true,
+		dateFormat: time.RFC3339,
+		arrayFmt:   "comma",
+	}
+	return e.marshal(value, writer)
+}
+
+func MarshalWithSettings(value any, writer *multipart.Writer, arrayFormat string) error {
+	e := &encoder{
+		arrayFmt:   arrayFormat,
+		dateFormat: time.RFC3339,
+	}
+	return e.marshal(value, writer)
+}
+
+type encoder struct {
+	arrayFmt   string
+	dateFormat string
+	root       bool
+}
+
+type encoderFunc func(key string, value reflect.Value, writer *multipart.Writer) error
+
+type encoderField struct {
+	tag parsedStructTag
+	fn  encoderFunc
+	idx []int
+}
+
+type encoderEntry struct {
+	reflect.Type
+	dateFormat string
+	root       bool
+}
+
+func (e *encoder) marshal(value any, writer *multipart.Writer) error {
+	val := reflect.ValueOf(value)
+	if !val.IsValid() {
+		return nil
+	}
+	typ := val.Type()
+	enc := e.typeEncoder(typ)
+	return enc("", val, writer)
+}
+
+func (e *encoder) typeEncoder(t reflect.Type) encoderFunc {
+	entry := encoderEntry{
+		Type:       t,
+		dateFormat: e.dateFormat,
+		root:       e.root,
+	}
+
+	if fi, ok := encoders.Load(entry); ok {
+		return fi.(encoderFunc)
+	}
+
+	// To deal with recursive types, populate the map with an
+	// indirect func before we build it. This type waits on the
+	// real func (f) to be ready and then calls it. This indirect
+	// func is only used for recursive types.
+	var (
+		wg sync.WaitGroup
+		f  encoderFunc
+	)
+	wg.Add(1)
+	fi, loaded := encoders.LoadOrStore(entry, encoderFunc(func(key string, v reflect.Value, writer *multipart.Writer) error {
+		wg.Wait()
+		return f(key, v, writer)
+	}))
+	if loaded {
+		return fi.(encoderFunc)
+	}
+
+	// Compute the real encoder and replace the indirect func with it.
+	f = e.newTypeEncoder(t)
+	wg.Done()
+	encoders.Store(entry, f)
+	return f
+}
+
+func (e *encoder) newTypeEncoder(t reflect.Type) encoderFunc {
+	if t.ConvertibleTo(reflect.TypeOf(time.Time{})) {
+		return e.newTimeTypeEncoder()
+	}
+	if t.Implements(reflect.TypeOf((*io.Reader)(nil)).Elem()) {
+		return e.newReaderTypeEncoder()
+	}
+	e.root = false
+	switch t.Kind() {
+	case reflect.Pointer:
+		inner := t.Elem()
+
+		innerEncoder := e.typeEncoder(inner)
+		return func(key string, v reflect.Value, writer *multipart.Writer) error {
+			if !v.IsValid() || v.IsNil() {
+				return nil
+			}
+			return innerEncoder(key, v.Elem(), writer)
+		}
+	case reflect.Struct:
+		return e.newStructTypeEncoder(t)
+	case reflect.Slice, reflect.Array:
+		return e.newArrayTypeEncoder(t)
+	case reflect.Map:
+		return e.newMapEncoder(t)
+	case reflect.Interface:
+		return e.newInterfaceEncoder()
+	default:
+		return e.newPrimitiveTypeEncoder(t)
+	}
+}
+
+func (e *encoder) newPrimitiveTypeEncoder(t reflect.Type) encoderFunc {
+	switch t.Kind() {
+	// Note that we could use `gjson` to encode these types but it would complicate our
+	// code more and this current code shouldn't cause any issues
+	case reflect.String:
+		return func(key string, v reflect.Value, writer *multipart.Writer) error {
+			return writer.WriteField(key, v.String())
+		}
+	case reflect.Bool:
+		return func(key string, v reflect.Value, writer *multipart.Writer) error {
+			if v.Bool() {
+				return writer.WriteField(key, "true")
+			}
+			return writer.WriteField(key, "false")
+		}
+	case reflect.Int, reflect.Int16, reflect.Int32, reflect.Int64:
+		return func(key string, v reflect.Value, writer *multipart.Writer) error {
+			return writer.WriteField(key, strconv.FormatInt(v.Int(), 10))
+		}
+	case reflect.Uint, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+		return func(key string, v reflect.Value, writer *multipart.Writer) error {
+			return writer.WriteField(key, strconv.FormatUint(v.Uint(), 10))
+		}
+	case reflect.Float32:
+		return func(key string, v reflect.Value, writer *multipart.Writer) error {
+			return writer.WriteField(key, strconv.FormatFloat(v.Float(), 'f', -1, 32))
+		}
+	case reflect.Float64:
+		return func(key string, v reflect.Value, writer *multipart.Writer) error {
+			return writer.WriteField(key, strconv.FormatFloat(v.Float(), 'f', -1, 64))
+		}
+	default:
+		return func(key string, v reflect.Value, writer *multipart.Writer) error {
+			return fmt.Errorf("unknown type received at primitive encoder: %s", t.String())
+		}
+	}
+}
+
+func arrayKeyEncoder(arrayFmt string) func(string, int) string {
+	var keyFn func(string, int) string
+	switch arrayFmt {
+	case "comma", "repeat":
+		keyFn = func(k string, _ int) string { return k }
+	case "brackets":
+		keyFn = func(key string, _ int) string { return key + "[]" }
+	case "indices:dots":
+		keyFn = func(k string, i int) string {
+			if k == "" {
+				return strconv.Itoa(i)
+			}
+			return k + "." + strconv.Itoa(i)
+		}
+	case "indices:brackets":
+		keyFn = func(k string, i int) string {
+			if k == "" {
+				return strconv.Itoa(i)
+			}
+			return k + "[" + strconv.Itoa(i) + "]"
+		}
+	}
+	return keyFn
+}
+
+func (e *encoder) newArrayTypeEncoder(t reflect.Type) encoderFunc {
+	itemEncoder := e.typeEncoder(t.Elem())
+	keyFn := arrayKeyEncoder(e.arrayFmt)
+	return func(key string, v reflect.Value, writer *multipart.Writer) error {
+		if keyFn == nil {
+			return fmt.Errorf("apiform: unsupported array format")
+		}
+		for i := 0; i < v.Len(); i++ {
+			err := itemEncoder(keyFn(key, i), v.Index(i), writer)
+			if err != nil {
+				return err
+			}
+		}
+		return nil
+	}
+}
+
+func (e *encoder) newStructTypeEncoder(t reflect.Type) encoderFunc {
+	if t.Implements(reflect.TypeOf((*param.Optional)(nil)).Elem()) {
+		return e.newRichFieldTypeEncoder(t)
+	}
+
+	for i := 0; i < t.NumField(); i++ {
+		if t.Field(i).Type == paramUnionType && t.Field(i).Anonymous {
+			return e.newStructUnionTypeEncoder(t)
+		}
+	}
+
+	encoderFields := []encoderField{}
+	extraEncoder := (*encoderField)(nil)
+
+	// This helper allows us to recursively collect field encoders into a flat
+	// array. The parameter `index` keeps track of the access patterns necessary
+	// to get to some field.
+	var collectEncoderFields func(r reflect.Type, index []int)
+	collectEncoderFields = func(r reflect.Type, index []int) {
+		for i := 0; i < r.NumField(); i++ {
+			idx := append(index, i)
+			field := t.FieldByIndex(idx)
+			if !field.IsExported() {
+				continue
+			}
+			// If this is an embedded struct, traverse one level deeper to extract
+			// the fields and get their encoders as well.
+			if field.Anonymous {
+				collectEncoderFields(field.Type, idx)
+				continue
+			}
+			// If json tag is not present, then we skip, which is intentionally
+			// different behavior from the stdlib.
+			ptag, ok := parseFormStructTag(field)
+			if !ok {
+				continue
+			}
+			// We only want to support unexported fields if they're tagged with
+			// `extras`, because those fields shouldn't be part of the public API.
+			// We also only keep the top-level extras.
+			if ptag.extras && len(index) == 0 {
+				extraEncoder = &encoderField{ptag, e.typeEncoder(field.Type.Elem()), idx}
+				continue
+			}
+			if ptag.name == "-" || ptag.name == "" {
+				continue
+			}
+
+			dateFormat, ok := parseFormatStructTag(field)
+			oldFormat := e.dateFormat
+			if ok {
+				switch dateFormat {
+				case "date-time":
+					e.dateFormat = time.RFC3339
+				case "date":
+					e.dateFormat = "2006-01-02"
+				}
+			}
+
+			var encoderFn encoderFunc
+			if ptag.omitzero {
+				typeEncoderFn := e.typeEncoder(field.Type)
+				encoderFn = func(key string, value reflect.Value, writer *multipart.Writer) error {
+					if value.IsZero() {
+						return nil
+					}
+					return typeEncoderFn(key, value, writer)
+				}
+			} else {
+				encoderFn = e.typeEncoder(field.Type)
+			}
+			encoderFields = append(encoderFields, encoderField{ptag, encoderFn, idx})
+			e.dateFormat = oldFormat
+		}
+	}
+	collectEncoderFields(t, []int{})
+
+	// Ensure deterministic output by sorting by lexicographic order
+	sort.Slice(encoderFields, func(i, j int) bool {
+		return encoderFields[i].tag.name < encoderFields[j].tag.name
+	})
+
+	return func(key string, value reflect.Value, writer *multipart.Writer) error {
+		if key != "" {
+			key = key + "."
+		}
+
+		for _, ef := range encoderFields {
+			field := value.FieldByIndex(ef.idx)
+			err := ef.fn(key+ef.tag.name, field, writer)
+			if err != nil {
+				return err
+			}
+		}
+
+		if extraEncoder != nil {
+			err := e.encodeMapEntries(key, value.FieldByIndex(extraEncoder.idx), writer)
+			if err != nil {
+				return err
+			}
+		}
+
+		return nil
+	}
+}
+
+var paramUnionType = reflect.TypeOf((*param.APIUnion)(nil)).Elem()
+
+func (e *encoder) newStructUnionTypeEncoder(t reflect.Type) encoderFunc {
+	var fieldEncoders []encoderFunc
+	for i := 0; i < t.NumField(); i++ {
+		field := t.Field(i)
+		if field.Type == paramUnionType && field.Anonymous {
+			fieldEncoders = append(fieldEncoders, nil)
+			continue
+		}
+		fieldEncoders = append(fieldEncoders, e.typeEncoder(field.Type))
+	}
+
+	return func(key string, value reflect.Value, writer *multipart.Writer) error {
+		for i := 0; i < t.NumField(); i++ {
+			if value.Field(i).Type() == paramUnionType {
+				continue
+			}
+			if !value.Field(i).IsZero() {
+				return fieldEncoders[i](key, value.Field(i), writer)
+			}
+		}
+		return fmt.Errorf("apiform: union %s has no field set", t.String())
+	}
+}
+
+func (e *encoder) newTimeTypeEncoder() encoderFunc {
+	format := e.dateFormat
+	return func(key string, value reflect.Value, writer *multipart.Writer) error {
+		return writer.WriteField(key, value.Convert(reflect.TypeOf(time.Time{})).Interface().(time.Time).Format(format))
+	}
+}
+
+func (e encoder) newInterfaceEncoder() encoderFunc {
+	return func(key string, value reflect.Value, writer *multipart.Writer) error {
+		value = value.Elem()
+		if !value.IsValid() {
+			return nil
+		}
+		return e.typeEncoder(value.Type())(key, value, writer)
+	}
+}
+
+var quoteEscaper = strings.NewReplacer("\\", "\\\\", `"`, "\\\"")
+
+func escapeQuotes(s string) string {
+	return quoteEscaper.Replace(s)
+}
+
+func (e *encoder) newReaderTypeEncoder() encoderFunc {
+	return func(key string, value reflect.Value, writer *multipart.Writer) error {
+		reader, ok := value.Convert(reflect.TypeOf((*io.Reader)(nil)).Elem()).Interface().(io.Reader)
+		if !ok {
+			return nil
+		}
+		filename := "anonymous_file"
+		contentType := "application/octet-stream"
+		if named, ok := reader.(interface{ Filename() string }); ok {
+			filename = named.Filename()
+		} else if named, ok := reader.(interface{ Name() string }); ok {
+			filename = path.Base(named.Name())
+		}
+		if typed, ok := reader.(interface{ ContentType() string }); ok {
+			contentType = typed.ContentType()
+		}
+
+		// Below is taken almost 1-for-1 from [multipart.CreateFormFile]
+		h := make(textproto.MIMEHeader)
+		h.Set("Content-Disposition", fmt.Sprintf(`form-data; name="%s"; filename="%s"`, escapeQuotes(key), escapeQuotes(filename)))
+		h.Set("Content-Type", contentType)
+		filewriter, err := writer.CreatePart(h)
+		if err != nil {
+			return err
+		}
+		_, err = io.Copy(filewriter, reader)
+		return err
+	}
+}
+
+// encodeMapEntries writes each entry of the given map value as a multipart
+// form field, prefixing every entry's key with the parent key.
+func (e *encoder) encodeMapEntries(key string, v reflect.Value, writer *multipart.Writer) error {
+	type mapPair struct {
+		key   string
+		value reflect.Value
+	}
+
+	if key != "" {
+		key = key + "."
+	}
+
+	pairs := []mapPair{}
+
+	iter := v.MapRange()
+	for iter.Next() {
+		if iter.Key().Type().Kind() == reflect.String {
+			pairs = append(pairs, mapPair{key: iter.Key().String(), value: iter.Value()})
+		} else {
+			return fmt.Errorf("cannot encode a map with a non string key")
+		}
+	}
+
+	// Ensure deterministic output
+	sort.Slice(pairs, func(i, j int) bool {
+		return pairs[i].key < pairs[j].key
+	})
+
+	elementEncoder := e.typeEncoder(v.Type().Elem())
+	for _, p := range pairs {
+		err := elementEncoder(key+string(p.key), p.value, writer)
+		if err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func (e *encoder) newMapEncoder(_ reflect.Type) encoderFunc {
+	return func(key string, value reflect.Value, writer *multipart.Writer) error {
+		return e.encodeMapEntries(key, value, writer)
+	}
+}
+
+func WriteExtras(writer *multipart.Writer, extras map[string]any) (err error) {
+	for k, v := range extras {
+		str, ok := v.(string)
+		if !ok {
+			break
+		}
+		err = writer.WriteField(k, str)
+		if err != nil {
+			break
+		}
+	}
+	return
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apiform/richparam.go

@@ -0,0 +1,20 @@
+package apiform
+
+import (
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"mime/multipart"
+	"reflect"
+)
+
+func (e *encoder) newRichFieldTypeEncoder(t reflect.Type) encoderFunc {
+	f, _ := t.FieldByName("Value")
+	enc := e.newPrimitiveTypeEncoder(f.Type)
+	return func(key string, value reflect.Value, writer *multipart.Writer) error {
+		if opt, ok := value.Interface().(param.Optional); ok && opt.Valid() {
+			return enc(key, value.FieldByIndex(f.Index), writer)
+		} else if ok && param.IsNull(opt) {
+			return writer.WriteField(key, "null")
+		}
+		return nil
+	}
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apiform/tag.go

@@ -0,0 +1,51 @@
+package apiform
+
+import (
+	"reflect"
+	"strings"
+)
+
+const jsonStructTag = "json"
+const formStructTag = "form"
+const formatStructTag = "format"
+
+type parsedStructTag struct {
+	name     string
+	required bool
+	extras   bool
+	metadata bool
+	omitzero bool
+}
+
+func parseFormStructTag(field reflect.StructField) (tag parsedStructTag, ok bool) {
+	raw, ok := field.Tag.Lookup(formStructTag)
+	if !ok {
+		raw, ok = field.Tag.Lookup(jsonStructTag)
+	}
+	if !ok {
+		return
+	}
+	parts := strings.Split(raw, ",")
+	if len(parts) == 0 {
+		return tag, false
+	}
+	tag.name = parts[0]
+	for _, part := range parts[1:] {
+		switch part {
+		case "required":
+			tag.required = true
+		case "extras":
+			tag.extras = true
+		case "metadata":
+			tag.metadata = true
+		case "omitzero":
+			tag.omitzero = true
+		}
+	}
+	return
+}
+
+func parseFormatStructTag(field reflect.StructField) (format string, ok bool) {
+	format, ok = field.Tag.Lookup(formatStructTag)
+	return
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/decoder.go

@@ -0,0 +1,691 @@
+// The deserialization algorithm from apijson may be subject to improvements
+// between minor versions, particularly with respect to calling [json.Unmarshal]
+// into param unions.
+
+package apijson
+
+import (
+	"encoding/json"
+	"fmt"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"reflect"
+	"strconv"
+	"sync"
+	"time"
+	"unsafe"
+
+	"github.com/tidwall/gjson"
+)
+
+// decoders is a synchronized map with roughly the following type:
+// map[decoderEntry]decoderFunc
+var decoders sync.Map
+
+// Unmarshal is similar to [encoding/json.Unmarshal] and parses the JSON-encoded
+// data and stores it in the given pointer.
+func Unmarshal(raw []byte, to any) error {
+	d := &decoderBuilder{dateFormat: time.RFC3339}
+	return d.unmarshal(raw, to)
+}
+
+// UnmarshalRoot is like Unmarshal, but doesn't try to call UnmarshalJSON on the
+// root element. Useful if a struct's UnmarshalJSON is overridden to use the
+// behavior of this decoder rather than the standard library's.
+func UnmarshalRoot(raw []byte, to any) error {
+	d := &decoderBuilder{dateFormat: time.RFC3339, root: true}
+	return d.unmarshal(raw, to)
+}
+
+// decoderBuilder contains the 'compile-time' state of the decoder.
+type decoderBuilder struct {
+	// Whether this is the root element, i.e. the decode was started by
+	// [UnmarshalRoot]; see the documentation there for why this is necessary.
+	root bool
+	// The dateFormat (a format string for [time.Format]) which is chosen by the
+	// last struct tag that was seen.
+	dateFormat string
+}
+
+// decoderState contains the 'run-time' state of the decoder.
+type decoderState struct {
+	strict    bool
+	exactness exactness
+	validator *validationEntry
+}
+
+// Exactness refers to how close to the type the result was if deserialization
+// was successful. This is useful in deserializing unions, where you want to try
+// each entry, first with strict, then with looser validation, without actually
+// having to do a lot of redundant work by marshalling twice (or maybe even more
+// times).
+type exactness int8
+
+const (
+	// Some values had to be fudged a bit, for example by converting a string to
+	// an int, or by accepting an enum with extra values.
+	loose exactness = iota
+	// There are some extra fields, but otherwise it matches the union.
+	extras
+	// Exactly right.
+	exact
+)
+
+type decoderFunc func(node gjson.Result, value reflect.Value, state *decoderState) error
+
+type decoderField struct {
+	tag    parsedStructTag
+	fn     decoderFunc
+	idx    []int
+	goname string
+}
+
+type decoderEntry struct {
+	reflect.Type
+	dateFormat string
+	root       bool
+}
+
+func (d *decoderBuilder) unmarshal(raw []byte, to any) error {
+	value := reflect.ValueOf(to).Elem()
+	result := gjson.ParseBytes(raw)
+	if !value.IsValid() {
+		return fmt.Errorf("apijson: cannot unmarshal into invalid value")
+	}
+	return d.typeDecoder(value.Type())(result, value, &decoderState{strict: false, exactness: exact})
+}
+
+// unmarshalWithExactness is used for internal testing purposes.
+func (d *decoderBuilder) unmarshalWithExactness(raw []byte, to any) (exactness, error) {
+	value := reflect.ValueOf(to).Elem()
+	result := gjson.ParseBytes(raw)
+	if !value.IsValid() {
+		return 0, fmt.Errorf("apijson: cannot unmarshal into invalid value")
+	}
+	state := decoderState{strict: false, exactness: exact}
+	err := d.typeDecoder(value.Type())(result, value, &state)
+	return state.exactness, err
+}
+
+func (d *decoderBuilder) typeDecoder(t reflect.Type) decoderFunc {
+	entry := decoderEntry{
+		Type:       t,
+		dateFormat: d.dateFormat,
+		root:       d.root,
+	}
+
+	if fi, ok := decoders.Load(entry); ok {
+		return fi.(decoderFunc)
+	}
+
+	// To deal with recursive types, populate the map with an
+	// indirect func before we build it. This type waits on the
+	// real func (f) to be ready and then calls it. This indirect
+	// func is only used for recursive types.
+	var (
+		wg sync.WaitGroup
+		f  decoderFunc
+	)
+	wg.Add(1)
+	fi, loaded := decoders.LoadOrStore(entry, decoderFunc(func(node gjson.Result, v reflect.Value, state *decoderState) error {
+		wg.Wait()
+		return f(node, v, state)
+	}))
+	if loaded {
+		return fi.(decoderFunc)
+	}
+
+	// Compute the real decoder and replace the indirect func with it.
+	f = d.newTypeDecoder(t)
+	wg.Done()
+	decoders.Store(entry, f)
+	return f
+}
+
+// validatedTypeDecoder wraps the type decoder with a validator. This is helpful
+// for ensuring that enum fields are correct.
+func (d *decoderBuilder) validatedTypeDecoder(t reflect.Type, entry *validationEntry) decoderFunc {
+	dec := d.typeDecoder(t)
+	if entry == nil {
+		return dec
+	}
+
+	// Thread the current validation entry through the decoder,
+	// but clean up in time for the next field.
+	return func(node gjson.Result, v reflect.Value, state *decoderState) error {
+		state.validator = entry
+		err := dec(node, v, state)
+		state.validator = nil
+		return err
+	}
+}
+
+func indirectUnmarshalerDecoder(n gjson.Result, v reflect.Value, state *decoderState) error {
+	return v.Addr().Interface().(json.Unmarshaler).UnmarshalJSON([]byte(n.Raw))
+}
+
+func unmarshalerDecoder(n gjson.Result, v reflect.Value, state *decoderState) error {
+	if v.Kind() == reflect.Pointer && v.CanSet() {
+		v.Set(reflect.New(v.Type().Elem()))
+	}
+	return v.Interface().(json.Unmarshaler).UnmarshalJSON([]byte(n.Raw))
+}
+
+func (d *decoderBuilder) newTypeDecoder(t reflect.Type) decoderFunc {
+	if t.ConvertibleTo(reflect.TypeOf(time.Time{})) {
+		return d.newTimeTypeDecoder(t)
+	}
+
+	if t.Implements(reflect.TypeOf((*param.Optional)(nil)).Elem()) {
+		return d.newOptTypeDecoder(t)
+	}
+
+	if !d.root && t.Implements(reflect.TypeOf((*json.Unmarshaler)(nil)).Elem()) {
+		return unmarshalerDecoder
+	}
+	if !d.root && reflect.PointerTo(t).Implements(reflect.TypeOf((*json.Unmarshaler)(nil)).Elem()) {
+		if _, ok := unionVariants[t]; !ok {
+			return indirectUnmarshalerDecoder
+		}
+	}
+	d.root = false
+
+	if _, ok := unionRegistry[t]; ok {
+		if isStructUnion(t) {
+			return d.newStructUnionDecoder(t)
+		}
+		return d.newUnionDecoder(t)
+	}
+
+	switch t.Kind() {
+	case reflect.Pointer:
+		inner := t.Elem()
+		innerDecoder := d.typeDecoder(inner)
+
+		return func(n gjson.Result, v reflect.Value, state *decoderState) error {
+			if !v.IsValid() {
+				return fmt.Errorf("apijson: unexpected invalid reflection value %+#v", v)
+			}
+
+			newValue := reflect.New(inner).Elem()
+			err := innerDecoder(n, newValue, state)
+			if err != nil {
+				return err
+			}
+
+			v.Set(newValue.Addr())
+			return nil
+		}
+	case reflect.Struct:
+		if isStructUnion(t) {
+			return d.newStructUnionDecoder(t)
+		}
+		return d.newStructTypeDecoder(t)
+	case reflect.Array:
+		fallthrough
+	case reflect.Slice:
+		return d.newArrayTypeDecoder(t)
+	case reflect.Map:
+		return d.newMapDecoder(t)
+	case reflect.Interface:
+		return func(node gjson.Result, value reflect.Value, state *decoderState) error {
+			if !value.IsValid() {
+				return fmt.Errorf("apijson: unexpected invalid value %+#v", value)
+			}
+			if node.Value() != nil && value.CanSet() {
+				value.Set(reflect.ValueOf(node.Value()))
+			}
+			return nil
+		}
+	default:
+		return d.newPrimitiveTypeDecoder(t)
+	}
+}
+
+func (d *decoderBuilder) newMapDecoder(t reflect.Type) decoderFunc {
+	keyType := t.Key()
+	itemType := t.Elem()
+	itemDecoder := d.typeDecoder(itemType)
+
+	return func(node gjson.Result, value reflect.Value, state *decoderState) (err error) {
+		mapValue := reflect.MakeMapWithSize(t, len(node.Map()))
+
+		node.ForEach(func(key, value gjson.Result) bool {
+			// It's fine to just use `ValueOf` here because the key types will always
+			// be primitive, so we don't need to decode them using the standard pattern.
+			keyValue := reflect.ValueOf(key.Value())
+			if !keyValue.IsValid() {
+				if err == nil {
+					err = fmt.Errorf("apijson: received invalid key type %v", keyValue.String())
+				}
+				return false
+			}
+			if keyValue.Type() != keyType {
+				if err == nil {
+					err = fmt.Errorf("apijson: expected key type %v but got %v", keyType, keyValue.Type())
+				}
+				return false
+			}
+
+			itemValue := reflect.New(itemType).Elem()
+			itemerr := itemDecoder(value, itemValue, state)
+			if itemerr != nil {
+				if err == nil {
+					err = itemerr
+				}
+				return false
+			}
+
+			mapValue.SetMapIndex(keyValue, itemValue)
+			return true
+		})
+
+		if err != nil {
+			return err
+		}
+		value.Set(mapValue)
+		return nil
+	}
+}
+
+func (d *decoderBuilder) newArrayTypeDecoder(t reflect.Type) decoderFunc {
+	itemDecoder := d.typeDecoder(t.Elem())
+
+	return func(node gjson.Result, value reflect.Value, state *decoderState) (err error) {
+		if !node.IsArray() {
+			return fmt.Errorf("apijson: could not deserialize to an array")
+		}
+
+		arrayNode := node.Array()
+
+		arrayValue := reflect.MakeSlice(reflect.SliceOf(t.Elem()), len(arrayNode), len(arrayNode))
+		for i, itemNode := range arrayNode {
+			err = itemDecoder(itemNode, arrayValue.Index(i), state)
+			if err != nil {
+				return err
+			}
+		}
+
+		value.Set(arrayValue)
+		return nil
+	}
+}
+
+func (d *decoderBuilder) newStructTypeDecoder(t reflect.Type) decoderFunc {
+	// map of json field name to struct field decoders
+	decoderFields := map[string]decoderField{}
+	anonymousDecoders := []decoderField{}
+	extraDecoder := (*decoderField)(nil)
+	var inlineDecoders []decoderField
+
+	validationEntries := validationRegistry[t]
+
+	for i := 0; i < t.NumField(); i++ {
+		idx := []int{i}
+		field := t.FieldByIndex(idx)
+		if !field.IsExported() {
+			continue
+		}
+
+		var validator *validationEntry
+		for _, entry := range validationEntries {
+			if entry.field.Offset == field.Offset {
+				validator = &entry
+				break
+			}
+		}
+
+		// If this is an embedded struct, traverse one level deeper to extract
+		// the fields and get their decoders as well.
+		if field.Anonymous {
+			anonymousDecoders = append(anonymousDecoders, decoderField{
+				fn:  d.typeDecoder(field.Type),
+				idx: idx[:],
+			})
+			continue
+		}
+		// If json tag is not present, then we skip, which is intentionally
+		// different behavior from the stdlib.
+		ptag, ok := parseJSONStructTag(field)
+		if !ok {
+			continue
+		}
+		// We only want to support unexported fields if they're tagged with
+		// `extras`, because those fields shouldn't be part of the public API.
+		if ptag.extras {
+			extraDecoder = &decoderField{ptag, d.typeDecoder(field.Type.Elem()), idx, field.Name}
+			continue
+		}
+		if ptag.inline {
+			df := decoderField{ptag, d.typeDecoder(field.Type), idx, field.Name}
+			inlineDecoders = append(inlineDecoders, df)
+			continue
+		}
+		if ptag.metadata {
+			continue
+		}
+
+		oldFormat := d.dateFormat
+		dateFormat, ok := parseFormatStructTag(field)
+		if ok {
+			switch dateFormat {
+			case "date-time":
+				d.dateFormat = time.RFC3339
+			case "date":
+				d.dateFormat = "2006-01-02"
+			}
+		}
+
+		decoderFields[ptag.name] = decoderField{
+			ptag,
+			d.validatedTypeDecoder(field.Type, validator),
+			idx, field.Name,
+		}
+
+		d.dateFormat = oldFormat
+	}
+
+	return func(node gjson.Result, value reflect.Value, state *decoderState) (err error) {
+		if field := value.FieldByName("JSON"); field.IsValid() {
+			if raw := field.FieldByName("raw"); raw.IsValid() {
+				setUnexportedField(raw, node.Raw)
+			}
+		}
+
+		for _, decoder := range anonymousDecoders {
+			// ignore errors
+			decoder.fn(node, value.FieldByIndex(decoder.idx), state)
+		}
+
+		for _, inlineDecoder := range inlineDecoders {
+			var meta Field
+			dest := value.FieldByIndex(inlineDecoder.idx)
+			isValid := false
+			if dest.IsValid() && node.Type != gjson.Null {
+				inlineState := decoderState{exactness: state.exactness, strict: true}
+				err = inlineDecoder.fn(node, dest, &inlineState)
+				if err == nil {
+					isValid = true
+				}
+			}
+
+			if node.Type == gjson.Null {
+				meta = Field{
+					raw:    node.Raw,
+					status: null,
+				}
+			} else if !isValid {
+				// If an inline decoder fails, unset the field and move on.
+				if dest.IsValid() {
+					dest.SetZero()
+				}
+				continue
+			} else if isValid {
+				meta = Field{
+					raw:    node.Raw,
+					status: valid,
+				}
+			}
+			setMetadataSubField(value, inlineDecoder.idx, inlineDecoder.goname, meta)
+		}
+
+		typedExtraType := reflect.Type(nil)
+		typedExtraFields := reflect.Value{}
+		if extraDecoder != nil {
+			typedExtraType = value.FieldByIndex(extraDecoder.idx).Type()
+			typedExtraFields = reflect.MakeMap(typedExtraType)
+		}
+		untypedExtraFields := map[string]Field{}
+
+		for fieldName, itemNode := range node.Map() {
+			df, explicit := decoderFields[fieldName]
+			var (
+				dest reflect.Value
+				fn   decoderFunc
+				meta Field
+			)
+			if explicit {
+				fn = df.fn
+				dest = value.FieldByIndex(df.idx)
+			}
+			if !explicit && extraDecoder != nil {
+				dest = reflect.New(typedExtraType.Elem()).Elem()
+				fn = extraDecoder.fn
+			}
+
+			isValid := false
+			if dest.IsValid() && itemNode.Type != gjson.Null {
+				err = fn(itemNode, dest, state)
+				if err == nil {
+					isValid = true
+				}
+			}
+
+			// Handle null [param.Opt]
+			if itemNode.Type == gjson.Null && dest.IsValid() && dest.Type().Implements(reflect.TypeOf((*param.Optional)(nil)).Elem()) {
+				dest.Addr().Interface().(json.Unmarshaler).UnmarshalJSON([]byte(itemNode.Raw))
+				continue
+			}
+
+			if itemNode.Type == gjson.Null {
+				meta = Field{
+					raw:    itemNode.Raw,
+					status: null,
+				}
+			} else if !isValid {
+				meta = Field{
+					raw:    itemNode.Raw,
+					status: invalid,
+				}
+			} else if isValid {
+				meta = Field{
+					raw:    itemNode.Raw,
+					status: valid,
+				}
+			}
+
+			if explicit {
+				setMetadataSubField(value, df.idx, df.goname, meta)
+			}
+			if !explicit {
+				untypedExtraFields[fieldName] = meta
+			}
+			if !explicit && extraDecoder != nil {
+				typedExtraFields.SetMapIndex(reflect.ValueOf(fieldName), dest)
+			}
+		}
+
+		if extraDecoder != nil && typedExtraFields.Len() > 0 {
+			value.FieldByIndex(extraDecoder.idx).Set(typedExtraFields)
+		}
+
+		// Set exactness to 'extras' if there are untyped, extra fields.
+		if len(untypedExtraFields) > 0 && state.exactness > extras {
+			state.exactness = extras
+		}
+
+		if len(untypedExtraFields) > 0 {
+			setMetadataExtraFields(value, []int{-1}, "ExtraFields", untypedExtraFields)
+		}
+		return nil
+	}
+}
+
+func (d *decoderBuilder) newPrimitiveTypeDecoder(t reflect.Type) decoderFunc {
+	switch t.Kind() {
+	case reflect.String:
+		return func(n gjson.Result, v reflect.Value, state *decoderState) error {
+			v.SetString(n.String())
+			if guardStrict(state, n.Type != gjson.String) {
+				return fmt.Errorf("apijson: failed to parse string strictly")
+			}
+			// Everything that is not an object can be loosely stringified.
+			if n.Type == gjson.JSON {
+				return fmt.Errorf("apijson: failed to parse string")
+			}
+
+			state.validateString(v)
+
+			if guardUnknown(state, v) {
+				return fmt.Errorf("apijson: failed string enum validation")
+			}
+			return nil
+		}
+	case reflect.Bool:
+		return func(n gjson.Result, v reflect.Value, state *decoderState) error {
+			v.SetBool(n.Bool())
+			if guardStrict(state, n.Type != gjson.True && n.Type != gjson.False) {
+				return fmt.Errorf("apijson: failed to parse bool strictly")
+			}
+			// Numbers and strings that are either 'true' or 'false' can be loosely
+			// deserialized as bool.
+			if n.Type == gjson.String && (n.Raw != "true" && n.Raw != "false") || n.Type == gjson.JSON {
+				return fmt.Errorf("apijson: failed to parse bool")
+			}
+
+			state.validateBool(v)
+
+			if guardUnknown(state, v) {
+				return fmt.Errorf("apijson: failed bool enum validation")
+			}
+			return nil
+		}
+	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+		return func(n gjson.Result, v reflect.Value, state *decoderState) error {
+			v.SetInt(n.Int())
+			if guardStrict(state, n.Type != gjson.Number || n.Num != float64(int(n.Num))) {
+				return fmt.Errorf("apijson: failed to parse int strictly")
+			}
+			// Numbers, booleans, and strings that look like numbers can be
+			// loosely deserialized as numbers.
+			if n.Type == gjson.JSON || (n.Type == gjson.String && !canParseAsNumber(n.Str)) {
+				return fmt.Errorf("apijson: failed to parse int")
+			}
+
+			state.validateInt(v)
+
+			if guardUnknown(state, v) {
+				return fmt.Errorf("apijson: failed int enum validation")
+			}
+			return nil
+		}
+	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+		return func(n gjson.Result, v reflect.Value, state *decoderState) error {
+			v.SetUint(n.Uint())
+			if guardStrict(state, n.Type != gjson.Number || n.Num != float64(int(n.Num)) || n.Num < 0) {
+				return fmt.Errorf("apijson: failed to parse uint strictly")
+			}
+			// Numbers, booleans, and strings that look like numbers can be
+			// loosely deserialized as uint.
+			if n.Type == gjson.JSON || (n.Type == gjson.String && !canParseAsNumber(n.Str)) {
+				return fmt.Errorf("apijson: failed to parse uint")
+			}
+			if guardUnknown(state, v) {
+				return fmt.Errorf("apijson: failed uint enum validation")
+			}
+			return nil
+		}
+	case reflect.Float32, reflect.Float64:
+		return func(n gjson.Result, v reflect.Value, state *decoderState) error {
+			v.SetFloat(n.Float())
+			if guardStrict(state, n.Type != gjson.Number) {
+				return fmt.Errorf("apijson: failed to parse float strictly")
+			}
+			// Numbers, booleans, and strings that look like numbers can be
+			// loosely deserialized as floats.
+			if n.Type == gjson.JSON || (n.Type == gjson.String && !canParseAsNumber(n.Str)) {
+				return fmt.Errorf("apijson: failed to parse float")
+			}
+			if guardUnknown(state, v) {
+				return fmt.Errorf("apijson: failed float enum validation")
+			}
+			return nil
+		}
+	default:
+		return func(node gjson.Result, v reflect.Value, state *decoderState) error {
+			return fmt.Errorf("unknown type received at primitive decoder: %s", t.String())
+		}
+	}
+}
+
+func (d *decoderBuilder) newOptTypeDecoder(t reflect.Type) decoderFunc {
+	for t.Kind() == reflect.Pointer {
+		t = t.Elem()
+	}
+	valueField, _ := t.FieldByName("Value")
+	return func(n gjson.Result, v reflect.Value, state *decoderState) error {
+		state.validateOptKind(n, valueField.Type)
+		return v.Addr().Interface().(json.Unmarshaler).UnmarshalJSON([]byte(n.Raw))
+	}
+}
+
+func (d *decoderBuilder) newTimeTypeDecoder(t reflect.Type) decoderFunc {
+	format := d.dateFormat
+	return func(n gjson.Result, v reflect.Value, state *decoderState) error {
+		parsed, err := time.Parse(format, n.Str)
+		if err == nil {
+			v.Set(reflect.ValueOf(parsed).Convert(t))
+			return nil
+		}
+
+		if guardStrict(state, true) {
+			return err
+		}
+
+		layouts := []string{
+			"2006-01-02",
+			"2006-01-02T15:04:05Z07:00",
+			"2006-01-02T15:04:05Z0700",
+			"2006-01-02T15:04:05",
+			"2006-01-02 15:04:05Z07:00",
+			"2006-01-02 15:04:05Z0700",
+			"2006-01-02 15:04:05",
+		}
+
+		for _, layout := range layouts {
+			parsed, err := time.Parse(layout, n.Str)
+			if err == nil {
+				v.Set(reflect.ValueOf(parsed).Convert(t))
+				return nil
+			}
+		}
+
+		return fmt.Errorf("unable to leniently parse date-time string: %s", n.Str)
+	}
+}
+
+func setUnexportedField(field reflect.Value, value any) {
+	reflect.NewAt(field.Type(), unsafe.Pointer(field.UnsafeAddr())).Elem().Set(reflect.ValueOf(value))
+}
+
+func guardStrict(state *decoderState, cond bool) bool {
+	if !cond {
+		return false
+	}
+
+	if state.strict {
+		return true
+	}
+
+	state.exactness = loose
+	return false
+}
+
+func canParseAsNumber(str string) bool {
+	_, err := strconv.ParseFloat(str, 64)
+	return err == nil
+}
+
+var stringType = reflect.TypeOf(string(""))
+
+func guardUnknown(state *decoderState, v reflect.Value) bool {
+	if have, ok := v.Interface().(interface{ IsKnown() bool }); guardStrict(state, ok && !have.IsKnown()) {
+		return true
+	}
+
+	constantString, ok := v.Interface().(interface{ Default() string })
+	named := v.Type() != stringType
+	if guardStrict(state, ok && named && v.Equal(reflect.ValueOf(constantString.Default()))) {
+		return true
+	}
+	return false
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/encoder.go

@@ -0,0 +1,392 @@
+package apijson
+
+import (
+	"bytes"
+	"encoding/json"
+	"fmt"
+	"reflect"
+	"sort"
+	"strconv"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/tidwall/sjson"
+)
+
+var encoders sync.Map // map[encoderEntry]encoderFunc
+
+func Marshal(value any) ([]byte, error) {
+	e := &encoder{dateFormat: time.RFC3339}
+	return e.marshal(value)
+}
+
+func MarshalRoot(value any) ([]byte, error) {
+	e := &encoder{root: true, dateFormat: time.RFC3339}
+	return e.marshal(value)
+}
+
+type encoder struct {
+	dateFormat string
+	root       bool
+}
+
+type encoderFunc func(value reflect.Value) ([]byte, error)
+
+type encoderField struct {
+	tag parsedStructTag
+	fn  encoderFunc
+	idx []int
+}
+
+type encoderEntry struct {
+	reflect.Type
+	dateFormat string
+	root       bool
+}
+
+func (e *encoder) marshal(value any) ([]byte, error) {
+	val := reflect.ValueOf(value)
+	if !val.IsValid() {
+		return nil, nil
+	}
+	typ := val.Type()
+	enc := e.typeEncoder(typ)
+	return enc(val)
+}
+
+func (e *encoder) typeEncoder(t reflect.Type) encoderFunc {
+	entry := encoderEntry{
+		Type:       t,
+		dateFormat: e.dateFormat,
+		root:       e.root,
+	}
+
+	if fi, ok := encoders.Load(entry); ok {
+		return fi.(encoderFunc)
+	}
+
+	// To deal with recursive types, populate the map with an
+	// indirect func before we build it. This type waits on the
+	// real func (f) to be ready and then calls it. This indirect
+	// func is only used for recursive types.
+	var (
+		wg sync.WaitGroup
+		f  encoderFunc
+	)
+	wg.Add(1)
+	fi, loaded := encoders.LoadOrStore(entry, encoderFunc(func(v reflect.Value) ([]byte, error) {
+		wg.Wait()
+		return f(v)
+	}))
+	if loaded {
+		return fi.(encoderFunc)
+	}
+
+	// Compute the real encoder and replace the indirect func with it.
+	f = e.newTypeEncoder(t)
+	wg.Done()
+	encoders.Store(entry, f)
+	return f
+}
+
+func marshalerEncoder(v reflect.Value) ([]byte, error) {
+	return v.Interface().(json.Marshaler).MarshalJSON()
+}
+
+func indirectMarshalerEncoder(v reflect.Value) ([]byte, error) {
+	return v.Addr().Interface().(json.Marshaler).MarshalJSON()
+}
+
+func (e *encoder) newTypeEncoder(t reflect.Type) encoderFunc {
+	if t.ConvertibleTo(reflect.TypeOf(time.Time{})) {
+		return e.newTimeTypeEncoder()
+	}
+	if !e.root && t.Implements(reflect.TypeOf((*json.Marshaler)(nil)).Elem()) {
+		return marshalerEncoder
+	}
+	if !e.root && reflect.PointerTo(t).Implements(reflect.TypeOf((*json.Marshaler)(nil)).Elem()) {
+		return indirectMarshalerEncoder
+	}
+	e.root = false
+	switch t.Kind() {
+	case reflect.Pointer:
+		inner := t.Elem()
+
+		innerEncoder := e.typeEncoder(inner)
+		return func(v reflect.Value) ([]byte, error) {
+			if !v.IsValid() || v.IsNil() {
+				return nil, nil
+			}
+			return innerEncoder(v.Elem())
+		}
+	case reflect.Struct:
+		return e.newStructTypeEncoder(t)
+	case reflect.Array:
+		fallthrough
+	case reflect.Slice:
+		return e.newArrayTypeEncoder(t)
+	case reflect.Map:
+		return e.newMapEncoder(t)
+	case reflect.Interface:
+		return e.newInterfaceEncoder()
+	default:
+		return e.newPrimitiveTypeEncoder(t)
+	}
+}
+
+func (e *encoder) newPrimitiveTypeEncoder(t reflect.Type) encoderFunc {
+	switch t.Kind() {
+	// Note that we could use `gjson` to encode these types, but doing so would
+	// complicate the code, and the current approach shouldn't cause any issues.
+	case reflect.String:
+		return func(v reflect.Value) ([]byte, error) {
+			return json.Marshal(v.Interface())
+		}
+	case reflect.Bool:
+		return func(v reflect.Value) ([]byte, error) {
+			if v.Bool() {
+				return []byte("true"), nil
+			}
+			return []byte("false"), nil
+		}
+	case reflect.Int, reflect.Int16, reflect.Int32, reflect.Int64:
+		return func(v reflect.Value) ([]byte, error) {
+			return []byte(strconv.FormatInt(v.Int(), 10)), nil
+		}
+	case reflect.Uint, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+		return func(v reflect.Value) ([]byte, error) {
+			return []byte(strconv.FormatUint(v.Uint(), 10)), nil
+		}
+	case reflect.Float32:
+		return func(v reflect.Value) ([]byte, error) {
+			return []byte(strconv.FormatFloat(v.Float(), 'f', -1, 32)), nil
+		}
+	case reflect.Float64:
+		return func(v reflect.Value) ([]byte, error) {
+			return []byte(strconv.FormatFloat(v.Float(), 'f', -1, 64)), nil
+		}
+	default:
+		return func(v reflect.Value) ([]byte, error) {
+			return nil, fmt.Errorf("unknown type received at primitive encoder: %s", t.String())
+		}
+	}
+}
+
+func (e *encoder) newArrayTypeEncoder(t reflect.Type) encoderFunc {
+	itemEncoder := e.typeEncoder(t.Elem())
+
+	return func(value reflect.Value) ([]byte, error) {
+		json := []byte("[]")
+		for i := 0; i < value.Len(); i++ {
+			var value, err = itemEncoder(value.Index(i))
+			if err != nil {
+				return nil, err
+			}
+			if value == nil {
+				// Assume that empty items should be inserted as `null` so that the output array
+				// will be the same length as the input array
+				value = []byte("null")
+			}
+
+			json, err = sjson.SetRawBytes(json, "-1", value)
+			if err != nil {
+				return nil, err
+			}
+		}
+
+		return json, nil
+	}
+}
+
+func (e *encoder) newStructTypeEncoder(t reflect.Type) encoderFunc {
+	encoderFields := []encoderField{}
+	extraEncoder := (*encoderField)(nil)
+
+	// This helper allows us to recursively collect field encoders into a flat
+	// array. The parameter `index` keeps track of the access patterns necessary
+	// to get to some field.
+	var collectEncoderFields func(r reflect.Type, index []int)
+	collectEncoderFields = func(r reflect.Type, index []int) {
+		for i := 0; i < r.NumField(); i++ {
+			idx := append(index, i)
+			field := t.FieldByIndex(idx)
+			if !field.IsExported() {
+				continue
+			}
+			// If this is an embedded struct, traverse one level deeper to extract
+			// its fields and collect their encoders as well.
+			if field.Anonymous {
+				collectEncoderFields(field.Type, idx)
+				continue
+			}
+			// If json tag is not present, then we skip, which is intentionally
+			// different behavior from the stdlib.
+			ptag, ok := parseJSONStructTag(field)
+			if !ok {
+				continue
+			}
+			// We only support unexported fields when they're tagged with
+			// `extras`, because such fields shouldn't be part of the public
+			// API. We also only keep the top-level extras.
+			if ptag.extras && len(index) == 0 {
+				extraEncoder = &encoderField{ptag, e.typeEncoder(field.Type.Elem()), idx}
+				continue
+			}
+			if ptag.name == "-" {
+				continue
+			}
+
+			dateFormat, ok := parseFormatStructTag(field)
+			oldFormat := e.dateFormat
+			if ok {
+				switch dateFormat {
+				case "date-time":
+					e.dateFormat = time.RFC3339
+				case "date":
+					e.dateFormat = "2006-01-02"
+				}
+			}
+			encoderFields = append(encoderFields, encoderField{ptag, e.typeEncoder(field.Type), idx})
+			e.dateFormat = oldFormat
+		}
+	}
+	collectEncoderFields(t, []int{})
+
+	// Ensure deterministic output by sorting by lexicographic order
+	sort.Slice(encoderFields, func(i, j int) bool {
+		return encoderFields[i].tag.name < encoderFields[j].tag.name
+	})
+
+	return func(value reflect.Value) (json []byte, err error) {
+		json = []byte("{}")
+
+		for _, ef := range encoderFields {
+			field := value.FieldByIndex(ef.idx)
+			encoded, err := ef.fn(field)
+			if err != nil {
+				return nil, err
+			}
+			if encoded == nil {
+				continue
+			}
+			json, err = sjson.SetRawBytes(json, ef.tag.name, encoded)
+			if err != nil {
+				return nil, err
+			}
+		}
+
+		if extraEncoder != nil {
+			json, err = e.encodeMapEntries(json, value.FieldByIndex(extraEncoder.idx))
+			if err != nil {
+				return nil, err
+			}
+		}
+		return
+	}
+}
+
+func (e *encoder) newFieldTypeEncoder(t reflect.Type) encoderFunc {
+	f, _ := t.FieldByName("Value")
+	enc := e.typeEncoder(f.Type)
+
+	return func(value reflect.Value) (json []byte, err error) {
+		present := value.FieldByName("Present")
+		if !present.Bool() {
+			return nil, nil
+		}
+		null := value.FieldByName("Null")
+		if null.Bool() {
+			return []byte("null"), nil
+		}
+		raw := value.FieldByName("Raw")
+		if !raw.IsNil() {
+			return e.typeEncoder(raw.Type())(raw)
+		}
+		return enc(value.FieldByName("Value"))
+	}
+}
+
+func (e *encoder) newTimeTypeEncoder() encoderFunc {
+	format := e.dateFormat
+	return func(value reflect.Value) (json []byte, err error) {
+		return []byte(`"` + value.Convert(reflect.TypeOf(time.Time{})).Interface().(time.Time).Format(format) + `"`), nil
+	}
+}
+
+func (e encoder) newInterfaceEncoder() encoderFunc {
+	return func(value reflect.Value) ([]byte, error) {
+		value = value.Elem()
+		if !value.IsValid() {
+			return nil, nil
+		}
+		return e.typeEncoder(value.Type())(value)
+	}
+}
+
+// Given a []byte of json (either an empty object or an object that already
+// contains entries), encode all of the entries in the map into the json byte array.
+func (e *encoder) encodeMapEntries(json []byte, v reflect.Value) ([]byte, error) {
+	type mapPair struct {
+		key   []byte
+		value reflect.Value
+	}
+
+	pairs := []mapPair{}
+	keyEncoder := e.typeEncoder(v.Type().Key())
+
+	iter := v.MapRange()
+	for iter.Next() {
+		var encodedKeyString string
+		if iter.Key().Type().Kind() == reflect.String {
+			encodedKeyString = iter.Key().String()
+		} else {
+			var err error
+			encodedKeyBytes, err := keyEncoder(iter.Key())
+			if err != nil {
+				return nil, err
+			}
+			encodedKeyString = string(encodedKeyBytes)
+		}
+		encodedKey := []byte(sjsonReplacer.Replace(encodedKeyString))
+		pairs = append(pairs, mapPair{key: encodedKey, value: iter.Value()})
+	}
+
+	// Ensure deterministic output
+	sort.Slice(pairs, func(i, j int) bool {
+		return bytes.Compare(pairs[i].key, pairs[j].key) < 0
+	})
+
+	elementEncoder := e.typeEncoder(v.Type().Elem())
+	for _, p := range pairs {
+		encodedValue, err := elementEncoder(p.value)
+		if err != nil {
+			return nil, err
+		}
+		if len(encodedValue) == 0 {
+			continue
+		}
+		json, err = sjson.SetRawBytes(json, string(p.key), encodedValue)
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	return json, nil
+}
+
+func (e *encoder) newMapEncoder(_ reflect.Type) encoderFunc {
+	return func(value reflect.Value) ([]byte, error) {
+		json := []byte("{}")
+		var err error
+		json, err = e.encodeMapEntries(json, value)
+		if err != nil {
+			return nil, err
+		}
+		return json, nil
+	}
+}
+
+// If we want to set a literal key value into JSON using sjson, we need to make sure it doesn't have
+// special characters that sjson interprets as a path.
+var sjsonReplacer *strings.Replacer = strings.NewReplacer(".", "\\.", ":", "\\:", "*", "\\*")

vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/enum.go

@@ -0,0 +1,145 @@
+package apijson
+
+import (
+	"fmt"
+	"reflect"
+	"slices"
+	"sync"
+
+	"github.com/tidwall/gjson"
+)
+
+/********************/
+/* Validating Enums */
+/********************/
+
+type validationEntry struct {
+	field       reflect.StructField
+	required    bool
+	legalValues struct {
+		strings []string
+		// 1 represents true, 0 represents false, -1 represents either
+		bools int
+		ints  []int64
+	}
+}
+
+type validatorFunc func(reflect.Value) exactness
+
+var validators sync.Map
+var validationRegistry = map[reflect.Type][]validationEntry{}
+
+func RegisterFieldValidator[T any, V string | bool | int](fieldName string, values ...V) {
+	var t T
+	parentType := reflect.TypeOf(t)
+
+	if _, ok := validationRegistry[parentType]; !ok {
+		validationRegistry[parentType] = []validationEntry{}
+	}
+
+	// The following checks run at initialization time, so if any tests pass,
+	// it is impossible for them to panic at runtime.
+	if parentType.Kind() != reflect.Struct {
+		panic(fmt.Sprintf("apijson: cannot initialize validator for non-struct %s", parentType.String()))
+	}
+
+	var field reflect.StructField
+	found := false
+	for i := 0; i < parentType.NumField(); i++ {
+		ptag, ok := parseJSONStructTag(parentType.Field(i))
+		if ok && ptag.name == fieldName {
+			field = parentType.Field(i)
+			found = true
+			break
+		}
+	}
+
+	if !found {
+		panic(fmt.Sprintf("apijson: cannot find field %s in struct %s", fieldName, parentType.String()))
+	}
+
+	newEntry := validationEntry{field: field}
+	newEntry.legalValues.bools = -1 // default to either
+
+	switch values := any(values).(type) {
+	case []string:
+		newEntry.legalValues.strings = values
+	case []int:
+		newEntry.legalValues.ints = make([]int64, len(values))
+		for i, value := range values {
+			newEntry.legalValues.ints[i] = int64(value)
+		}
+	case []bool:
+		for i, value := range values {
+			var next int
+			if value {
+				next = 1
+			}
+			if i > 0 && newEntry.legalValues.bools != next {
+				newEntry.legalValues.bools = -1 // accept either
+				break
+			}
+			newEntry.legalValues.bools = next
+		}
+	}
+
+	// Store the information necessary to create a validator, so that we can
+	// lazily create the validator function when needed.
+	validationRegistry[parentType] = append(validationRegistry[parentType], newEntry)
+}
+
+func (state *decoderState) validateString(v reflect.Value) {
+	if state.validator == nil {
+		return
+	}
+	if !slices.Contains(state.validator.legalValues.strings, v.String()) {
+		state.exactness = loose
+	}
+}
+
+func (state *decoderState) validateInt(v reflect.Value) {
+	if state.validator == nil {
+		return
+	}
+	if !slices.Contains(state.validator.legalValues.ints, v.Int()) {
+		state.exactness = loose
+	}
+}
+
+func (state *decoderState) validateBool(v reflect.Value) {
+	if state.validator == nil {
+		return
+	}
+	b := v.Bool()
+	if state.validator.legalValues.bools == 1 && !b {
+		state.exactness = loose
+	} else if state.validator.legalValues.bools == 0 && b {
+		state.exactness = loose
+	}
+}
+
+func (state *decoderState) validateOptKind(node gjson.Result, t reflect.Type) {
+	switch node.Type {
+	case gjson.JSON:
+		state.exactness = loose
+	case gjson.Null:
+		return
+	case gjson.False, gjson.True:
+		if t.Kind() != reflect.Bool {
+			state.exactness = loose
+		}
+	case gjson.Number:
+		switch t.Kind() {
+		case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
+			reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64,
+			reflect.Float32, reflect.Float64:
+			return
+		default:
+			state.exactness = loose
+		}
+	case gjson.String:
+		if t.Kind() != reflect.String {
+			state.exactness = loose
+		}
+	}
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/field.go

@@ -0,0 +1,23 @@
+package apijson
+
+type status uint8
+
+const (
+	missing status = iota
+	null
+	invalid
+	valid
+)
+
+type Field struct {
+	raw    string
+	status status
+}
+
+// IsNull returns true if the field is explicitly `null` _or_ if it is not
+// present at all (i.e., missing). To check whether the field's key is present
+// in the JSON with an explicit null value, you must check
+// `f.IsNull() && !f.IsMissing()`.
+func (j Field) IsNull() bool    { return j.status <= null }
+func (j Field) IsMissing() bool { return j.status == missing }
+func (j Field) IsInvalid() bool { return j.status == invalid }
+func (j Field) Raw() string     { return j.raw }

vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/port.go

@@ -0,0 +1,120 @@
+package apijson
+
+import (
+	"fmt"
+	"reflect"
+)
+
+// Port copies over values from one struct to another struct.
+func Port(from any, to any) error {
+	toVal := reflect.ValueOf(to)
+	fromVal := reflect.ValueOf(from)
+
+	if toVal.Kind() != reflect.Ptr || toVal.IsNil() {
+		return fmt.Errorf("destination must be a non-nil pointer")
+	}
+
+	for toVal.Kind() == reflect.Ptr {
+		toVal = toVal.Elem()
+	}
+	toType := toVal.Type()
+
+	for fromVal.Kind() == reflect.Ptr {
+		fromVal = fromVal.Elem()
+	}
+	fromType := fromVal.Type()
+
+	if toType.Kind() != reflect.Struct {
+		return fmt.Errorf("destination must be a non-nil pointer to a struct (%v %v)", toType, toType.Kind())
+	}
+
+	values := map[string]reflect.Value{}
+	fields := map[string]reflect.Value{}
+
+	fromJSON := fromVal.FieldByName("JSON")
+	toJSON := toVal.FieldByName("JSON")
+
+	// Iterate through the fields of v and load all the "normal" fields in the
+	// struct into the map from string to reflect.Value, as well as their raw
+	// .JSON.Foo counterparts indicated by j.
+	var getFields func(t reflect.Type, v reflect.Value)
+	getFields = func(t reflect.Type, v reflect.Value) {
+		j := v.FieldByName("JSON")
+
+		// Recurse into anonymous fields first, since the fields on the object should win over the fields in the
+		// embedded object.
+		for i := 0; i < t.NumField(); i++ {
+			field := t.Field(i)
+			if field.Anonymous {
+				getFields(field.Type, v.Field(i))
+				continue
+			}
+		}
+
+		for i := 0; i < t.NumField(); i++ {
+			field := t.Field(i)
+			ptag, ok := parseJSONStructTag(field)
+			if !ok || ptag.name == "-" || ptag.name == "" {
+				continue
+			}
+			values[ptag.name] = v.Field(i)
+			if j.IsValid() {
+				fields[ptag.name] = j.FieldByName(field.Name)
+			}
+		}
+	}
+	getFields(fromType, fromVal)
+
+	// Use the values from the previous step to populate the 'to' struct.
+	for i := 0; i < toType.NumField(); i++ {
+		field := toType.Field(i)
+		ptag, ok := parseJSONStructTag(field)
+		if !ok {
+			continue
+		}
+		if ptag.name == "-" {
+			continue
+		}
+		if value, ok := values[ptag.name]; ok {
+			delete(values, ptag.name)
+			if field.Type.Kind() == reflect.Interface {
+				toVal.Field(i).Set(value)
+			} else {
+				switch value.Kind() {
+				case reflect.String:
+					toVal.Field(i).SetString(value.String())
+				case reflect.Bool:
+					toVal.Field(i).SetBool(value.Bool())
+				case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+					toVal.Field(i).SetInt(value.Int())
+				case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+					toVal.Field(i).SetUint(value.Uint())
+				case reflect.Float32, reflect.Float64:
+					toVal.Field(i).SetFloat(value.Float())
+				default:
+					toVal.Field(i).Set(value)
+				}
+			}
+		}
+
+		if fromJSONField, ok := fields[ptag.name]; ok {
+			if toJSONField := toJSON.FieldByName(field.Name); toJSONField.IsValid() {
+				toJSONField.Set(fromJSONField)
+			}
+		}
+	}
+
+	// Finally, copy over the .JSON.raw and .JSON.ExtraFields
+	if toJSON.IsValid() {
+		if raw := toJSON.FieldByName("raw"); raw.IsValid() {
+			setUnexportedField(raw, fromJSON.Interface().(interface{ RawJSON() string }).RawJSON())
+		}
+
+		if toExtraFields := toJSON.FieldByName("ExtraFields"); toExtraFields.IsValid() {
+			if fromExtraFields := fromJSON.FieldByName("ExtraFields"); fromExtraFields.IsValid() {
+				setUnexportedField(toExtraFields, fromExtraFields.Interface())
+			}
+		}
+	}
+
+	return nil
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/registry.go

@@ -0,0 +1,51 @@
+package apijson
+
+import (
+	"reflect"
+
+	"github.com/tidwall/gjson"
+)
+
+type UnionVariant struct {
+	TypeFilter         gjson.Type
+	DiscriminatorValue any
+	Type               reflect.Type
+}
+
+var unionRegistry = map[reflect.Type]unionEntry{}
+var unionVariants = map[reflect.Type]any{}
+
+type unionEntry struct {
+	discriminatorKey string
+	variants         []UnionVariant
+}
+
+func Discriminator[T any](value any) UnionVariant {
+	var zero T
+	return UnionVariant{
+		TypeFilter:         gjson.JSON,
+		DiscriminatorValue: value,
+		Type:               reflect.TypeOf(zero),
+	}
+}
+
+func RegisterUnion[T any](discriminator string, variants ...UnionVariant) {
+	typ := reflect.TypeOf((*T)(nil)).Elem()
+	unionRegistry[typ] = unionEntry{
+		discriminatorKey: discriminator,
+		variants:         variants,
+	}
+	for _, variant := range variants {
+		unionVariants[variant.Type] = typ
+	}
+}
+
+// UnionUnmarshaler wraps a union type to force it to use [apijson.UnmarshalJSON],
+// since you cannot define an UnmarshalJSON method on the interface itself.
+type UnionUnmarshaler[T any] struct {
+	Value T
+}
+
+func (c *UnionUnmarshaler[T]) UnmarshalJSON(buf []byte) error {
+	return UnmarshalRoot(buf, &c.Value)
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/subfield.go

@@ -0,0 +1,67 @@
+package apijson
+
+import (
+	"reflect"
+
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+)
+
+func getSubField(root reflect.Value, index []int, name string) reflect.Value {
+	strct := root.FieldByIndex(index[:len(index)-1])
+	if !strct.IsValid() {
+		panic("couldn't find encapsulating struct for field " + name)
+	}
+	meta := strct.FieldByName("JSON")
+	if !meta.IsValid() {
+		return reflect.Value{}
+	}
+	field := meta.FieldByName(name)
+	if !field.IsValid() {
+		return reflect.Value{}
+	}
+	return field
+}
+
+func setMetadataSubField(root reflect.Value, index []int, name string, meta Field) {
+	target := getSubField(root, index, name)
+	if !target.IsValid() {
+		return
+	}
+
+	if target.Type() == reflect.TypeOf(meta) {
+		target.Set(reflect.ValueOf(meta))
+	} else if respMeta := meta.toRespField(); target.Type() == reflect.TypeOf(respMeta) {
+		target.Set(reflect.ValueOf(respMeta))
+	}
+}
+
+func setMetadataExtraFields(root reflect.Value, index []int, name string, metaExtras map[string]Field) {
+	target := getSubField(root, index, name)
+	if !target.IsValid() {
+		return
+	}
+
+	if target.Type() == reflect.TypeOf(metaExtras) {
+		target.Set(reflect.ValueOf(metaExtras))
+		return
+	}
+
+	newMap := make(map[string]respjson.Field, len(metaExtras))
+	if target.Type() == reflect.TypeOf(newMap) {
+		for k, v := range metaExtras {
+			newMap[k] = v.toRespField()
+		}
+		target.Set(reflect.ValueOf(newMap))
+	}
+}
+
+func (f Field) toRespField() respjson.Field {
+	if f.IsMissing() {
+		return respjson.Field{}
+	} else if f.IsNull() {
+		return respjson.NewField("null")
+	} else if f.IsInvalid() {
+		return respjson.NewInvalidField(f.raw)
+	} else {
+		return respjson.NewField(f.raw)
+	}
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/tag.go

@@ -0,0 +1,47 @@
+package apijson
+
+import (
+	"reflect"
+	"strings"
+)
+
+const jsonStructTag = "json"
+const formatStructTag = "format"
+
+type parsedStructTag struct {
+	name     string
+	required bool
+	extras   bool
+	metadata bool
+	inline   bool
+}
+
+func parseJSONStructTag(field reflect.StructField) (tag parsedStructTag, ok bool) {
+	raw, ok := field.Tag.Lookup(jsonStructTag)
+	if !ok {
+		return
+	}
+	parts := strings.Split(raw, ",")
+	if len(parts) == 0 {
+		return tag, false
+	}
+	tag.name = parts[0]
+	for _, part := range parts[1:] {
+		switch part {
+		case "required":
+			tag.required = true
+		case "extras":
+			tag.extras = true
+		case "metadata":
+			tag.metadata = true
+		case "inline":
+			tag.inline = true
+		}
+	}
+	return
+}
+
+func parseFormatStructTag(field reflect.StructField) (format string, ok bool) {
+	format, ok = field.Tag.Lookup(formatStructTag)
+	return
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apijson/union.go

@@ -0,0 +1,202 @@
+package apijson
+
+import (
+	"errors"
+	"reflect"
+
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/tidwall/gjson"
+)
+
+var apiUnionType = reflect.TypeOf(param.APIUnion{})
+
+func isStructUnion(t reflect.Type) bool {
+	if t.Kind() != reflect.Struct {
+		return false
+	}
+	for i := 0; i < t.NumField(); i++ {
+		if t.Field(i).Type == apiUnionType && t.Field(i).Anonymous {
+			return true
+		}
+	}
+	return false
+}
+
+func RegisterDiscriminatedUnion[T any](key string, mappings map[string]reflect.Type) {
+	var t T
+	entry := unionEntry{
+		discriminatorKey: key,
+		variants:         []UnionVariant{},
+	}
+	for k, typ := range mappings {
+		entry.variants = append(entry.variants, UnionVariant{
+			DiscriminatorValue: k,
+			Type:               typ,
+		})
+	}
+	unionRegistry[reflect.TypeOf(t)] = entry
+}
+
+func (d *decoderBuilder) newStructUnionDecoder(t reflect.Type) decoderFunc {
+	type variantDecoder struct {
+		decoder            decoderFunc
+		field              reflect.StructField
+		discriminatorValue any
+	}
+
+	variants := []variantDecoder{}
+	for i := 0; i < t.NumField(); i++ {
+		field := t.Field(i)
+
+		if field.Anonymous && field.Type == apiUnionType {
+			continue
+		}
+
+		decoder := d.typeDecoder(field.Type)
+		variants = append(variants, variantDecoder{
+			decoder: decoder,
+			field:   field,
+		})
+	}
+
+	unionEntry, discriminated := unionRegistry[t]
+	for _, unionVariant := range unionEntry.variants {
+		for i := 0; i < len(variants); i++ {
+			variant := &variants[i]
+			if variant.field.Type.Elem() == unionVariant.Type {
+				variant.discriminatorValue = unionVariant.DiscriminatorValue
+				break
+			}
+		}
+	}
+
+	return func(n gjson.Result, v reflect.Value, state *decoderState) error {
+		if discriminated && n.Type == gjson.JSON && len(unionEntry.discriminatorKey) != 0 {
+			discriminator := n.Get(unionEntry.discriminatorKey).Value()
+			for _, variant := range variants {
+				if discriminator == variant.discriminatorValue {
+					inner := v.FieldByIndex(variant.field.Index)
+					return variant.decoder(n, inner, state)
+				}
+			}
+			return errors.New("apijson: was not able to find discriminated union variant")
+		}
+
+		// Set bestExactness to worse than loose
+		bestExactness := loose - 1
+		bestVariant := -1
+		for i, variant := range variants {
+			// Pointers are used to discern JSON object variants from value variants
+			if n.Type != gjson.JSON && variant.field.Type.Kind() == reflect.Ptr {
+				continue
+			}
+
+			sub := decoderState{strict: state.strict, exactness: exact}
+			inner := v.FieldByIndex(variant.field.Index)
+			err := variant.decoder(n, inner, &sub)
+			if err != nil {
+				continue
+			}
+			if sub.exactness == exact {
+				bestExactness = exact
+				bestVariant = i
+				break
+			}
+			if sub.exactness > bestExactness {
+				bestExactness = sub.exactness
+				bestVariant = i
+			}
+		}
+
+		if bestExactness < loose {
+			return errors.New("apijson: was not able to coerce type as union")
+		}
+
+		if guardStrict(state, bestExactness != exact) {
+			return errors.New("apijson: was not able to coerce type as union strictly")
+		}
+
+		for i := 0; i < len(variants); i++ {
+			if i == bestVariant {
+				continue
+			}
+			v.FieldByIndex(variants[i].field.Index).SetZero()
+		}
+
+		return nil
+	}
+}
+
+// newUnionDecoder returns a decoderFunc that deserializes into a union using an
+// algorithm roughly similar to Pydantic's [smart algorithm].
+//
+// Conceptually this is equivalent to choosing the best schema based on how 'exact'
+// the deserialization is for each of the schemas.
+//
+// If there is a tie in the level of exactness, then the tie is broken
+// left-to-right.
+//
+// [smart algorithm]: https://docs.pydantic.dev/latest/concepts/unions/#smart-mode
+func (d *decoderBuilder) newUnionDecoder(t reflect.Type) decoderFunc {
+	unionEntry, ok := unionRegistry[t]
+	if !ok {
+		panic("apijson: couldn't find union of type " + t.String() + " in union registry")
+	}
+	decoders := []decoderFunc{}
+	for _, variant := range unionEntry.variants {
+		decoder := d.typeDecoder(variant.Type)
+		decoders = append(decoders, decoder)
+	}
+	return func(n gjson.Result, v reflect.Value, state *decoderState) error {
+		// If there is a discriminator match, circumvent the exactness logic entirely
+		for idx, variant := range unionEntry.variants {
+			decoder := decoders[idx]
+			if variant.TypeFilter != n.Type {
+				continue
+			}
+
+			if len(unionEntry.discriminatorKey) != 0 {
+				discriminatorValue := n.Get(unionEntry.discriminatorKey).Value()
+				if discriminatorValue == variant.DiscriminatorValue {
+					inner := reflect.New(variant.Type).Elem()
+					err := decoder(n, inner, state)
+					v.Set(inner)
+					return err
+				}
+			}
+		}
+
+		// Set bestExactness to worse than loose
+		bestExactness := loose - 1
+		for idx, variant := range unionEntry.variants {
+			decoder := decoders[idx]
+			if variant.TypeFilter != n.Type {
+				continue
+			}
+			sub := decoderState{strict: state.strict, exactness: exact}
+			inner := reflect.New(variant.Type).Elem()
+			err := decoder(n, inner, &sub)
+			if err != nil {
+				continue
+			}
+			if sub.exactness == exact {
+				v.Set(inner)
+				return nil
+			}
+			if sub.exactness > bestExactness {
+				v.Set(inner)
+				bestExactness = sub.exactness
+			}
+		}
+
+		if bestExactness < loose {
+			return errors.New("apijson: was not able to coerce type as union")
+		}
+
+		if guardStrict(state, bestExactness != exact) {
+			return errors.New("apijson: was not able to coerce type as union strictly")
+		}
+
+		return nil
+	}
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apiquery/encoder.go

@@ -0,0 +1,415 @@
+package apiquery
+
+import (
+	"encoding/json"
+	"fmt"
+	"reflect"
+	"strconv"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+)
+
+var encoders sync.Map // map[reflect.Type]encoderFunc
+
+type encoder struct {
+	dateFormat string
+	root       bool
+	settings   QuerySettings
+}
+
+type encoderFunc func(key string, value reflect.Value) ([]Pair, error)
+
+type encoderField struct {
+	tag parsedStructTag
+	fn  encoderFunc
+	idx []int
+}
+
+type encoderEntry struct {
+	reflect.Type
+	dateFormat string
+	root       bool
+	settings   QuerySettings
+}
+
+type Pair struct {
+	key   string
+	value string
+}
+
+func (e *encoder) typeEncoder(t reflect.Type) encoderFunc {
+	entry := encoderEntry{
+		Type:       t,
+		dateFormat: e.dateFormat,
+		root:       e.root,
+		settings:   e.settings,
+	}
+
+	if fi, ok := encoders.Load(entry); ok {
+		return fi.(encoderFunc)
+	}
+
+	// To deal with recursive types, populate the map with an
+	// indirect func before we build it. This type waits on the
+	// real func (f) to be ready and then calls it. This indirect
+	// func is only used for recursive types.
+	var (
+		wg sync.WaitGroup
+		f  encoderFunc
+	)
+	wg.Add(1)
+	fi, loaded := encoders.LoadOrStore(entry, encoderFunc(func(key string, v reflect.Value) ([]Pair, error) {
+		wg.Wait()
+		return f(key, v)
+	}))
+	if loaded {
+		return fi.(encoderFunc)
+	}
+
+	// Compute the real encoder and replace the indirect func with it.
+	f = e.newTypeEncoder(t)
+	wg.Done()
+	encoders.Store(entry, f)
+	return f
+}
+
+func marshalerEncoder(key string, value reflect.Value) ([]Pair, error) {
+	s, err := value.Interface().(json.Marshaler).MarshalJSON()
+	if err != nil {
+		return nil, fmt.Errorf("apiquery: json fallback marshal error %s", err)
+	}
+	return []Pair{{key, string(s)}}, nil
+}
+
+func (e *encoder) newTypeEncoder(t reflect.Type) encoderFunc {
+	if t.ConvertibleTo(reflect.TypeOf(time.Time{})) {
+		return e.newTimeTypeEncoder(t)
+	}
+
+	if t.Implements(reflect.TypeOf((*param.Optional)(nil)).Elem()) {
+		return e.newRichFieldTypeEncoder(t)
+	}
+
+	if !e.root && t.Implements(reflect.TypeOf((*json.Marshaler)(nil)).Elem()) {
+		return marshalerEncoder
+	}
+
+	e.root = false
+	switch t.Kind() {
+	case reflect.Pointer:
+		encoder := e.typeEncoder(t.Elem())
+		return func(key string, value reflect.Value) (pairs []Pair, err error) {
+			if !value.IsValid() || value.IsNil() {
+				return
+			}
+			return encoder(key, value.Elem())
+		}
+	case reflect.Struct:
+		return e.newStructTypeEncoder(t)
+	case reflect.Array:
+		fallthrough
+	case reflect.Slice:
+		return e.newArrayTypeEncoder(t)
+	case reflect.Map:
+		return e.newMapEncoder(t)
+	case reflect.Interface:
+		return e.newInterfaceEncoder()
+	default:
+		return e.newPrimitiveTypeEncoder(t)
+	}
+}
+
+func (e *encoder) newStructTypeEncoder(t reflect.Type) encoderFunc {
+	if t.Implements(reflect.TypeOf((*param.Optional)(nil)).Elem()) {
+		return e.newRichFieldTypeEncoder(t)
+	}
+
+	for i := 0; i < t.NumField(); i++ {
+		if t.Field(i).Type == paramUnionType && t.Field(i).Anonymous {
+			return e.newStructUnionTypeEncoder(t)
+		}
+	}
+
+	encoderFields := []encoderField{}
+
+	// This helper allows us to recursively collect field encoders into a flat
+	// array. The parameter `index` keeps track of the access patterns necessary
+	// to get to some field.
+	var collectEncoderFields func(r reflect.Type, index []int)
+	collectEncoderFields = func(r reflect.Type, index []int) {
+		for i := 0; i < r.NumField(); i++ {
+			idx := append(index, i)
+			field := t.FieldByIndex(idx)
+			if !field.IsExported() {
+				continue
+			}
+			// If this is an embedded struct, traverse one level deeper to extract
+			// its fields and collect their encoders as well.
+			if field.Anonymous {
+				collectEncoderFields(field.Type, idx)
+				continue
+			}
+			// If query tag is not present, then we skip, which is intentionally
+			// different behavior from the stdlib.
+			ptag, ok := parseQueryStructTag(field)
+			if !ok {
+				continue
+			}
+
+			if (ptag.name == "-" || ptag.name == "") && !ptag.inline {
+				continue
+			}
+
+			dateFormat, ok := parseFormatStructTag(field)
+			oldFormat := e.dateFormat
+			if ok {
+				switch dateFormat {
+				case "date-time":
+					e.dateFormat = time.RFC3339
+				case "date":
+					e.dateFormat = "2006-01-02"
+				}
+			}
+			var encoderFn encoderFunc
+			if ptag.omitzero {
+				typeEncoderFn := e.typeEncoder(field.Type)
+				encoderFn = func(key string, value reflect.Value) ([]Pair, error) {
+					if value.IsZero() {
+						return nil, nil
+					}
+					return typeEncoderFn(key, value)
+				}
+			} else {
+				encoderFn = e.typeEncoder(field.Type)
+			}
+			encoderFields = append(encoderFields, encoderField{ptag, encoderFn, idx})
+			e.dateFormat = oldFormat
+		}
+	}
+	collectEncoderFields(t, []int{})
+
+	return func(key string, value reflect.Value) (pairs []Pair, err error) {
+		for _, ef := range encoderFields {
+			subkey := e.renderKeyPath(key, ef.tag.name)
+			if ef.tag.inline {
+				subkey = key
+			}
+
+			field := value.FieldByIndex(ef.idx)
+			subpairs, suberr := ef.fn(subkey, field)
+			if suberr != nil {
+				err = suberr
+			}
+			pairs = append(pairs, subpairs...)
+		}
+		return
+	}
+}
+
+var paramUnionType = reflect.TypeOf((*param.APIUnion)(nil)).Elem()
+
+func (e *encoder) newStructUnionTypeEncoder(t reflect.Type) encoderFunc {
+	var fieldEncoders []encoderFunc
+	for i := 0; i < t.NumField(); i++ {
+		field := t.Field(i)
+		if field.Type == paramUnionType && field.Anonymous {
+			fieldEncoders = append(fieldEncoders, nil)
+			continue
+		}
+		fieldEncoders = append(fieldEncoders, e.typeEncoder(field.Type))
+	}
+
+	return func(key string, value reflect.Value) (pairs []Pair, err error) {
+		for i := 0; i < t.NumField(); i++ {
+			if value.Field(i).Type() == paramUnionType {
+				continue
+			}
+			if !value.Field(i).IsZero() {
+				return fieldEncoders[i](key, value.Field(i))
+			}
+		}
+		return nil, fmt.Errorf("apiquery: union %s has no field set", t.String())
+	}
+}
+
+func (e *encoder) newMapEncoder(t reflect.Type) encoderFunc {
+	keyEncoder := e.typeEncoder(t.Key())
+	elementEncoder := e.typeEncoder(t.Elem())
+	return func(key string, value reflect.Value) (pairs []Pair, err error) {
+		iter := value.MapRange()
+		for iter.Next() {
+			encodedKey, keyErr := keyEncoder("", iter.Key())
+			if keyErr != nil {
+				return nil, keyErr
+			}
+			if len(encodedKey) != 1 {
+				return nil, fmt.Errorf("apiquery: unexpected number of parts for encoded map key, map may contain non-primitive")
+			}
+			subkey := encodedKey[0].value
+			keyPath := e.renderKeyPath(key, subkey)
+			subpairs, suberr := elementEncoder(keyPath, iter.Value())
+			if suberr != nil {
+				err = suberr
+			}
+			pairs = append(pairs, subpairs...)
+		}
+		return
+	}
+}
+
+func (e *encoder) renderKeyPath(key string, subkey string) string {
+	if len(key) == 0 {
+		return subkey
+	}
+	if e.settings.NestedFormat == NestedQueryFormatDots {
+		return fmt.Sprintf("%s.%s", key, subkey)
+	}
+	return fmt.Sprintf("%s[%s]", key, subkey)
+}
+
+func (e *encoder) newArrayTypeEncoder(t reflect.Type) encoderFunc {
+	switch e.settings.ArrayFormat {
+	case ArrayQueryFormatComma:
+		innerEncoder := e.typeEncoder(t.Elem())
+		return func(key string, v reflect.Value) ([]Pair, error) {
+			elements := []string{}
+			for i := 0; i < v.Len(); i++ {
+				innerPairs, err := innerEncoder("", v.Index(i))
+				if err != nil {
+					return nil, err
+				}
+				for _, pair := range innerPairs {
+					elements = append(elements, pair.value)
+				}
+			}
+			if len(elements) == 0 {
+				return []Pair{}, nil
+			}
+			return []Pair{{key, strings.Join(elements, ",")}}, nil
+		}
+	case ArrayQueryFormatRepeat:
+		innerEncoder := e.typeEncoder(t.Elem())
+		return func(key string, value reflect.Value) (pairs []Pair, err error) {
+			for i := 0; i < value.Len(); i++ {
+				subpairs, suberr := innerEncoder(key, value.Index(i))
+				if suberr != nil {
+					err = suberr
+				}
+				pairs = append(pairs, subpairs...)
+			}
+			return
+		}
+	case ArrayQueryFormatIndices:
+		panic("The array indices format is not supported yet")
+	case ArrayQueryFormatBrackets:
+		innerEncoder := e.typeEncoder(t.Elem())
+		return func(key string, value reflect.Value) (pairs []Pair, err error) {
+			pairs = []Pair{}
+			for i := 0; i < value.Len(); i++ {
+				subpairs, suberr := innerEncoder(key+"[]", value.Index(i))
+				if suberr != nil {
+					err = suberr
+				}
+				pairs = append(pairs, subpairs...)
+			}
+			return
+		}
+	default:
+		panic(fmt.Sprintf("Unknown ArrayFormat value: %d", e.settings.ArrayFormat))
+	}
+}
+
+func (e *encoder) newPrimitiveTypeEncoder(t reflect.Type) encoderFunc {
+	switch t.Kind() {
+	case reflect.Pointer:
+		inner := t.Elem()
+
+		innerEncoder := e.newPrimitiveTypeEncoder(inner)
+		return func(key string, v reflect.Value) ([]Pair, error) {
+			if !v.IsValid() || v.IsNil() {
+				return nil, nil
+			}
+			return innerEncoder(key, v.Elem())
+		}
+	case reflect.String:
+		return func(key string, v reflect.Value) ([]Pair, error) {
+			return []Pair{{key, v.String()}}, nil
+		}
+	case reflect.Bool:
+		return func(key string, v reflect.Value) ([]Pair, error) {
+			if v.Bool() {
+				return []Pair{{key, "true"}}, nil
+			}
+			return []Pair{{key, "false"}}, nil
+		}
+	case reflect.Int, reflect.Int16, reflect.Int32, reflect.Int64:
+		return func(key string, v reflect.Value) ([]Pair, error) {
+			return []Pair{{key, strconv.FormatInt(v.Int(), 10)}}, nil
+		}
+	case reflect.Uint, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+		return func(key string, v reflect.Value) ([]Pair, error) {
+			return []Pair{{key, strconv.FormatUint(v.Uint(), 10)}}, nil
+		}
+	case reflect.Float32, reflect.Float64:
+		bitSize := 64
+		if t.Kind() == reflect.Float32 {
+			bitSize = 32
+		}
+		return func(key string, v reflect.Value) ([]Pair, error) {
+			return []Pair{{key, strconv.FormatFloat(v.Float(), 'f', -1, bitSize)}}, nil
+		}
+	case reflect.Complex64, reflect.Complex128:
+		bitSize := 64
+		if t.Kind() == reflect.Complex128 {
+			bitSize = 128
+		}
+		return func(key string, v reflect.Value) ([]Pair, error) {
+			return []Pair{{key, strconv.FormatComplex(v.Complex(), 'f', -1, bitSize)}}, nil
+		}
+	default:
+		return func(key string, v reflect.Value) ([]Pair, error) {
+			return nil, nil
+		}
+	}
+}
+
+func (e *encoder) newFieldTypeEncoder(t reflect.Type) encoderFunc {
+	f, _ := t.FieldByName("Value")
+	enc := e.typeEncoder(f.Type)
+
+	return func(key string, value reflect.Value) ([]Pair, error) {
+		present := value.FieldByName("Present")
+		if !present.Bool() {
+			return nil, nil
+		}
+		null := value.FieldByName("Null")
+		if null.Bool() {
+			return nil, fmt.Errorf("apiquery: field cannot be null")
+		}
+		raw := value.FieldByName("Raw")
+		if !raw.IsNil() {
+			return e.typeEncoder(raw.Type())(key, raw)
+		}
+		return enc(key, value.FieldByName("Value"))
+	}
+}
+
+func (e *encoder) newTimeTypeEncoder(_ reflect.Type) encoderFunc {
+	format := e.dateFormat
+	return func(key string, value reflect.Value) ([]Pair, error) {
+		return []Pair{{
+			key,
+			value.Convert(reflect.TypeOf(time.Time{})).Interface().(time.Time).Format(format),
+		}}, nil
+	}
+}
+
+func (e encoder) newInterfaceEncoder() encoderFunc {
+	return func(key string, value reflect.Value) ([]Pair, error) {
+		value = value.Elem()
+		if !value.IsValid() {
+			return nil, nil
+		}
+		return e.typeEncoder(value.Type())(key, value)
+	}
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apiquery/query.go 🔗

@@ -0,0 +1,55 @@
+package apiquery
+
+import (
+	"net/url"
+	"reflect"
+	"time"
+)
+
+func MarshalWithSettings(value any, settings QuerySettings) (url.Values, error) {
+	e := encoder{time.RFC3339, true, settings}
+	kv := url.Values{}
+	val := reflect.ValueOf(value)
+	if !val.IsValid() {
+		return nil, nil
+	}
+	typ := val.Type()
+
+	pairs, err := e.typeEncoder(typ)("", val)
+	if err != nil {
+		return nil, err
+	}
+	for _, pair := range pairs {
+		kv.Add(pair.key, pair.value)
+	}
+	return kv, nil
+}
+
+func Marshal(value any) (url.Values, error) {
+	return MarshalWithSettings(value, QuerySettings{})
+}
+
+type Queryer interface {
+	URLQuery() (url.Values, error)
+}
+
+type QuerySettings struct {
+	NestedFormat NestedQueryFormat
+	ArrayFormat  ArrayQueryFormat
+}
+
+type NestedQueryFormat int
+
+const (
+	NestedQueryFormatBrackets NestedQueryFormat = iota
+	NestedQueryFormatDots
+)
+
+type ArrayQueryFormat int
+
+const (
+	ArrayQueryFormatComma ArrayQueryFormat = iota
+	ArrayQueryFormatRepeat
+	ArrayQueryFormatIndices
+	ArrayQueryFormatBrackets
+)
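The `NestedQueryFormat` and `ArrayQueryFormat` settings above select between common query-string wire shapes. As a standalone illustration of those shapes (built directly with `net/url`, not the SDK's internal encoder, and with made-up `filter`/`ids` keys):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// encodeExamples returns one encoded query string per format:
// nested keys rendered as brackets vs. dots, and array values
// rendered comma-joined vs. as repeated keys.
func encodeExamples() (brackets, dots, comma, repeat string) {
	b := url.Values{}
	b.Add("filter[status]", "open") // NestedQueryFormatBrackets

	d := url.Values{}
	d.Add("filter.status", "open") // NestedQueryFormatDots

	c := url.Values{}
	c.Add("ids", strings.Join([]string{"1", "2", "3"}, ",")) // ArrayQueryFormatComma

	r := url.Values{}
	for _, id := range []string{"1", "2", "3"} { // ArrayQueryFormatRepeat
		r.Add("ids", id)
	}
	return b.Encode(), d.Encode(), c.Encode(), r.Encode()
}

func main() {
	b, d, c, r := encodeExamples()
	fmt.Println(b)
	fmt.Println(d)
	fmt.Println(c)
	fmt.Println(r)
}
```

Note that `url.Values.Encode` percent-escapes the brackets and commas (`%5B`, `%5D`, `%2C`); servers that accept these formats decode them transparently.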

vendor/github.com/anthropics/anthropic-sdk-go/internal/apiquery/richparam.go 🔗

@@ -0,0 +1,19 @@
+package apiquery
+
+import (
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"reflect"
+)
+
+func (e *encoder) newRichFieldTypeEncoder(t reflect.Type) encoderFunc {
+	f, _ := t.FieldByName("Value")
+	enc := e.typeEncoder(f.Type)
+	return func(key string, value reflect.Value) ([]Pair, error) {
+		if opt, ok := value.Interface().(param.Optional); ok && opt.Valid() {
+			return enc(key, value.FieldByIndex(f.Index))
+		} else if ok && param.IsNull(opt) {
+			return []Pair{{key, "null"}}, nil
+		}
+		return nil, nil
+	}
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/apiquery/tag.go 🔗

@@ -0,0 +1,44 @@
+package apiquery
+
+import (
+	"reflect"
+	"strings"
+)
+
+const queryStructTag = "query"
+const formatStructTag = "format"
+
+type parsedStructTag struct {
+	name      string
+	omitempty bool
+	omitzero  bool
+	inline    bool
+}
+
+func parseQueryStructTag(field reflect.StructField) (tag parsedStructTag, ok bool) {
+	raw, ok := field.Tag.Lookup(queryStructTag)
+	if !ok {
+		return
+	}
+	parts := strings.Split(raw, ",")
+	if len(parts) == 0 {
+		return tag, false
+	}
+	tag.name = parts[0]
+	for _, part := range parts[1:] {
+		switch part {
+		case "omitzero":
+			tag.omitzero = true
+		case "omitempty":
+			tag.omitempty = true
+		case "inline":
+			tag.inline = true
+		}
+	}
+	return
+}
+
+func parseFormatStructTag(field reflect.StructField) (format string, ok bool) {
+	format, ok = field.Tag.Lookup(formatStructTag)
+	return
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/decode.go 🔗

@@ -0,0 +1,1324 @@
+// Vendored from Go 1.24.0-pre-release
+// To find alterations, check package shims, and comments beginning in SHIM().
+//
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Represents JSON data structure using native Go types: booleans, floats,
+// strings, arrays, and maps.
+
+package json
+
+import (
+	"encoding"
+	"encoding/base64"
+	"fmt"
+	"github.com/anthropics/anthropic-sdk-go/internal/encoding/json/shims"
+	"reflect"
+	"strconv"
+	"strings"
+	"unicode"
+	"unicode/utf16"
+	"unicode/utf8"
+	_ "unsafe" // for linkname
+)
+
+// Unmarshal parses the JSON-encoded data and stores the result
+// in the value pointed to by v. If v is nil or not a pointer,
+// Unmarshal returns an [InvalidUnmarshalError].
+//
+// Unmarshal uses the inverse of the encodings that
+// [Marshal] uses, allocating maps, slices, and pointers as necessary,
+// with the following additional rules:
+//
+// To unmarshal JSON into a pointer, Unmarshal first handles the case of
+// the JSON being the JSON literal null. In that case, Unmarshal sets
+// the pointer to nil. Otherwise, Unmarshal unmarshals the JSON into
+// the value pointed at by the pointer. If the pointer is nil, Unmarshal
+// allocates a new value for it to point to.
+//
+// To unmarshal JSON into a value implementing [Unmarshaler],
+// Unmarshal calls that value's [Unmarshaler.UnmarshalJSON] method, including
+// when the input is a JSON null.
+// Otherwise, if the value implements [encoding.TextUnmarshaler]
+// and the input is a JSON quoted string, Unmarshal calls
+// [encoding.TextUnmarshaler.UnmarshalText] with the unquoted form of the string.
+//
+// To unmarshal JSON into a struct, Unmarshal matches incoming object
+// keys to the keys used by [Marshal] (either the struct field name or its tag),
+// preferring an exact match but also accepting a case-insensitive match. By
+// default, object keys which don't have a corresponding struct field are
+// ignored (see [Decoder.DisallowUnknownFields] for an alternative).
+//
+// To unmarshal JSON into an interface value,
+// Unmarshal stores one of these in the interface value:
+//
+//   - bool, for JSON booleans
+//   - float64, for JSON numbers
+//   - string, for JSON strings
+//   - []any, for JSON arrays
+//   - map[string]any, for JSON objects
+//   - nil for JSON null
+//
+// To unmarshal a JSON array into a slice, Unmarshal resets the slice length
+// to zero and then appends each element to the slice.
+// As a special case, to unmarshal an empty JSON array into a slice,
+// Unmarshal replaces the slice with a new empty slice.
+//
+// To unmarshal a JSON array into a Go array, Unmarshal decodes
+// JSON array elements into corresponding Go array elements.
+// If the Go array is smaller than the JSON array,
+// the additional JSON array elements are discarded.
+// If the JSON array is smaller than the Go array,
+// the additional Go array elements are set to zero values.
+//
+// To unmarshal a JSON object into a map, Unmarshal first establishes a map to
+// use. If the map is nil, Unmarshal allocates a new map. Otherwise Unmarshal
+// reuses the existing map, keeping existing entries. Unmarshal then stores
+// key-value pairs from the JSON object into the map. The map's key type must
+// either be any string type, an integer, or implement [encoding.TextUnmarshaler].
+//
+// If the JSON-encoded data contain a syntax error, Unmarshal returns a [SyntaxError].
+//
+// If a JSON value is not appropriate for a given target type,
+// or if a JSON number overflows the target type, Unmarshal
+// skips that field and completes the unmarshaling as best it can.
+// If no more serious errors are encountered, Unmarshal returns
+// an [UnmarshalTypeError] describing the earliest such error. In any
+// case, it's not guaranteed that all the remaining fields following
+// the problematic one will be unmarshaled into the target object.
+//
+// The JSON null value unmarshals into an interface, map, pointer, or slice
+// by setting that Go value to nil. Because null is often used in JSON to mean
+// “not present,” unmarshaling a JSON null into any other Go type has no effect
+// on the value and produces no error.
+//
+// When unmarshaling quoted strings, invalid UTF-8 or
+// invalid UTF-16 surrogate pairs are not treated as an error.
+// Instead, they are replaced by the Unicode replacement
+// character U+FFFD.
+func Unmarshal(data []byte, v any) error {
+	// Check for well-formedness.
+	// Avoids filling out half a data structure
+	// before discovering a JSON syntax error.
+	var d decodeState
+	err := checkValid(data, &d.scan)
+	if err != nil {
+		return err
+	}
+
+	d.init(data)
+	return d.unmarshal(v)
+}
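Since this file is vendored from a Go pre-release, the rules in the doc comment above match the standard library's `encoding/json` and can be exercised with it directly. A small sketch (the `item` struct and its JSON payload are illustrative) showing three of the documented behaviors: `null` leaves a pointer nil, object keys match struct fields case-insensitively, and unknown keys are ignored by default:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type item struct {
	Name  *string `json:"name"`
	Count int     `json:"count"`
}

// decode unmarshals data and reports whether the Name pointer stayed nil,
// plus the decoded Count.
func decode(data []byte) (nameIsNil bool, count int, err error) {
	var v item
	err = json.Unmarshal(data, &v)
	return v.Name == nil, v.Count, err
}

func main() {
	// "NAME" matches the "name" tag case-insensitively; null leaves the
	// pointer nil; "extra" has no corresponding field and is ignored.
	nilName, count, err := decode([]byte(`{"NAME": null, "Count": 3, "extra": true}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(nilName, count)
}
```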
+
+// Unmarshaler is the interface implemented by types
+// that can unmarshal a JSON description of themselves.
+// The input can be assumed to be a valid encoding of
+// a JSON value. UnmarshalJSON must copy the JSON data
+// if it wishes to retain the data after returning.
+//
+// By convention, to approximate the behavior of [Unmarshal] itself,
+// Unmarshalers implement UnmarshalJSON([]byte("null")) as a no-op.
+type Unmarshaler interface {
+	UnmarshalJSON([]byte) error
+}
+
+// An UnmarshalTypeError describes a JSON value that was
+// not appropriate for a value of a specific Go type.
+type UnmarshalTypeError struct {
+	Value  string       // description of JSON value - "bool", "array", "number -5"
+	Type   reflect.Type // type of Go value it could not be assigned to
+	Offset int64        // error occurred after reading Offset bytes
+	Struct string       // name of the struct type containing the field
+	Field  string       // the full path from root node to the field, including embedded structs
+}
+
+func (e *UnmarshalTypeError) Error() string {
+	if e.Struct != "" || e.Field != "" {
+		return "json: cannot unmarshal " + e.Value + " into Go struct field " + e.Struct + "." + e.Field + " of type " + e.Type.String()
+	}
+	return "json: cannot unmarshal " + e.Value + " into Go value of type " + e.Type.String()
+}
+
+// An UnmarshalFieldError describes a JSON object key that
+// led to an unexported (and therefore unwritable) struct field.
+//
+// Deprecated: No longer used; kept for compatibility.
+type UnmarshalFieldError struct {
+	Key   string
+	Type  reflect.Type
+	Field reflect.StructField
+}
+
+func (e *UnmarshalFieldError) Error() string {
+	return "json: cannot unmarshal object key " + strconv.Quote(e.Key) + " into unexported field " + e.Field.Name + " of type " + e.Type.String()
+}
+
+// An InvalidUnmarshalError describes an invalid argument passed to [Unmarshal].
+// (The argument to [Unmarshal] must be a non-nil pointer.)
+type InvalidUnmarshalError struct {
+	Type reflect.Type
+}
+
+func (e *InvalidUnmarshalError) Error() string {
+	if e.Type == nil {
+		return "json: Unmarshal(nil)"
+	}
+
+	if e.Type.Kind() != reflect.Pointer {
+		return "json: Unmarshal(non-pointer " + e.Type.String() + ")"
+	}
+	return "json: Unmarshal(nil " + e.Type.String() + ")"
+}
+
+func (d *decodeState) unmarshal(v any) error {
+	rv := reflect.ValueOf(v)
+	if rv.Kind() != reflect.Pointer || rv.IsNil() {
+		return &InvalidUnmarshalError{reflect.TypeOf(v)}
+	}
+
+	d.scan.reset()
+	d.scanWhile(scanSkipSpace)
+	// We decode rv not rv.Elem because the Unmarshaler interface
+	// test must be applied at the top level of the value.
+	err := d.value(rv)
+	if err != nil {
+		return d.addErrorContext(err)
+	}
+	return d.savedError
+}
+
+// A Number represents a JSON number literal.
+type Number string
+
+// String returns the literal text of the number.
+func (n Number) String() string { return string(n) }
+
+// Float64 returns the number as a float64.
+func (n Number) Float64() (float64, error) {
+	return strconv.ParseFloat(string(n), 64)
+}
+
+// Int64 returns the number as an int64.
+func (n Number) Int64() (int64, error) {
+	return strconv.ParseInt(string(n), 10, 64)
+}
+
+// An errorContext provides context for type errors during decoding.
+type errorContext struct {
+	Struct     reflect.Type
+	FieldStack []string
+}
+
+// decodeState represents the state while decoding a JSON value.
+type decodeState struct {
+	data                  []byte
+	off                   int // next read offset in data
+	opcode                int // last read result
+	scan                  scanner
+	errorContext          *errorContext
+	savedError            error
+	useNumber             bool
+	disallowUnknownFields bool
+}
+
+// readIndex returns the position of the last byte read.
+func (d *decodeState) readIndex() int {
+	return d.off - 1
+}
+
+// phasePanicMsg is used as a panic message when we end up with something that
+// shouldn't happen. It can indicate a bug in the JSON decoder, or that
+// something is editing the data slice while the decoder executes.
+const phasePanicMsg = "JSON decoder out of sync - data changing underfoot?"
+
+func (d *decodeState) init(data []byte) *decodeState {
+	d.data = data
+	d.off = 0
+	d.savedError = nil
+	if d.errorContext != nil {
+		d.errorContext.Struct = nil
+		// Reuse the allocated space for the FieldStack slice.
+		d.errorContext.FieldStack = d.errorContext.FieldStack[:0]
+	}
+	return d
+}
+
+// saveError saves the first err it is called with,
+// for reporting at the end of the unmarshal.
+func (d *decodeState) saveError(err error) {
+	if d.savedError == nil {
+		d.savedError = d.addErrorContext(err)
+	}
+}
+
+// addErrorContext returns a new error enhanced with information from d.errorContext
+func (d *decodeState) addErrorContext(err error) error {
+	if d.errorContext != nil && (d.errorContext.Struct != nil || len(d.errorContext.FieldStack) > 0) {
+		switch err := err.(type) {
+		case *UnmarshalTypeError:
+			err.Struct = d.errorContext.Struct.Name()
+			fieldStack := d.errorContext.FieldStack
+			if err.Field != "" {
+				fieldStack = append(fieldStack, err.Field)
+			}
+			err.Field = strings.Join(fieldStack, ".")
+		}
+	}
+	return err
+}
+
+// skip scans to the end of what was started.
+func (d *decodeState) skip() {
+	s, data, i := &d.scan, d.data, d.off
+	depth := len(s.parseState)
+	for {
+		op := s.step(s, data[i])
+		i++
+		if len(s.parseState) < depth {
+			d.off = i
+			d.opcode = op
+			return
+		}
+	}
+}
+
+// scanNext processes the byte at d.data[d.off].
+func (d *decodeState) scanNext() {
+	if d.off < len(d.data) {
+		d.opcode = d.scan.step(&d.scan, d.data[d.off])
+		d.off++
+	} else {
+		d.opcode = d.scan.eof()
+		d.off = len(d.data) + 1 // mark processed EOF with len+1
+	}
+}
+
+// scanWhile processes bytes in d.data[d.off:] until it
+// receives a scan code not equal to op.
+func (d *decodeState) scanWhile(op int) {
+	s, data, i := &d.scan, d.data, d.off
+	for i < len(data) {
+		newOp := s.step(s, data[i])
+		i++
+		if newOp != op {
+			d.opcode = newOp
+			d.off = i
+			return
+		}
+	}
+
+	d.off = len(data) + 1 // mark processed EOF with len+1
+	d.opcode = d.scan.eof()
+}
+
+// rescanLiteral is similar to scanWhile(scanContinue), but it specialises the
+// common case where we're decoding a literal. The decoder scans the input
+// twice, once for syntax errors and to check the length of the value, and the
+// second to perform the decoding.
+//
+// Only in the second step do we use decodeState to tokenize literals, so we
+// know there aren't any syntax errors. We can take advantage of that knowledge,
+// and scan a literal's bytes much more quickly.
+func (d *decodeState) rescanLiteral() {
+	data, i := d.data, d.off
+Switch:
+	switch data[i-1] {
+	case '"': // string
+		for ; i < len(data); i++ {
+			switch data[i] {
+			case '\\':
+				i++ // escaped char
+			case '"':
+				i++ // tokenize the closing quote too
+				break Switch
+			}
+		}
+	case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '-': // number
+		for ; i < len(data); i++ {
+			switch data[i] {
+			case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
+				'.', 'e', 'E', '+', '-':
+			default:
+				break Switch
+			}
+		}
+	case 't': // true
+		i += len("rue")
+	case 'f': // false
+		i += len("alse")
+	case 'n': // null
+		i += len("ull")
+	}
+	if i < len(data) {
+		d.opcode = stateEndValue(&d.scan, data[i])
+	} else {
+		d.opcode = scanEnd
+	}
+	d.off = i + 1
+}
+
+// value consumes a JSON value from d.data[d.off-1:], decoding into v, and
+// reads the following byte ahead. If v is invalid, the value is discarded.
+// The first byte of the value has been read already.
+func (d *decodeState) value(v reflect.Value) error {
+	switch d.opcode {
+	default:
+		panic(phasePanicMsg)
+
+	case scanBeginArray:
+		if v.IsValid() {
+			if err := d.array(v); err != nil {
+				return err
+			}
+		} else {
+			d.skip()
+		}
+		d.scanNext()
+
+	case scanBeginObject:
+		if v.IsValid() {
+			if err := d.object(v); err != nil {
+				return err
+			}
+		} else {
+			d.skip()
+		}
+		d.scanNext()
+
+	case scanBeginLiteral:
+		// All bytes inside literal return scanContinue op code.
+		start := d.readIndex()
+		d.rescanLiteral()
+
+		if v.IsValid() {
+			if err := d.literalStore(d.data[start:d.readIndex()], v, false); err != nil {
+				return err
+			}
+		}
+	}
+	return nil
+}
+
+type unquotedValue struct{}
+
+// valueQuoted is like value but decodes a
+// quoted string literal or literal null into an interface value.
+// If it finds anything other than a quoted string literal or null,
+// valueQuoted returns unquotedValue{}.
+func (d *decodeState) valueQuoted() any {
+	switch d.opcode {
+	default:
+		panic(phasePanicMsg)
+
+	case scanBeginArray, scanBeginObject:
+		d.skip()
+		d.scanNext()
+
+	case scanBeginLiteral:
+		v := d.literalInterface()
+		switch v.(type) {
+		case nil, string:
+			return v
+		}
+	}
+	return unquotedValue{}
+}
+
+// indirect walks down v allocating pointers as needed,
+// until it gets to a non-pointer.
+// If it encounters an Unmarshaler, indirect stops and returns that.
+// If decodingNull is true, indirect stops at the first settable pointer so it
+// can be set to nil.
+func indirect(v reflect.Value, decodingNull bool) (Unmarshaler, encoding.TextUnmarshaler, reflect.Value) {
+	// Issue #24153 indicates that it is generally not a guaranteed property
+	// that you may round-trip a reflect.Value by calling Value.Addr().Elem()
+	// and expect the value to still be settable for values derived from
+	// unexported embedded struct fields.
+	//
+	// The logic below effectively does this when it first addresses the value
+	// (to satisfy possible pointer methods) and continues to dereference
+	// subsequent pointers as necessary.
+	//
+	// After the first round-trip, we set v back to the original value to
+	// preserve the original RW flags contained in reflect.Value.
+	v0 := v
+	haveAddr := false
+
+	// If v is a named type and is addressable,
+	// start with its address, so that if the type has pointer methods,
+	// we find them.
+	if v.Kind() != reflect.Pointer && v.Type().Name() != "" && v.CanAddr() {
+		haveAddr = true
+		v = v.Addr()
+	}
+	for {
+		// Load value from interface, but only if the result will be
+		// usefully addressable.
+		if v.Kind() == reflect.Interface && !v.IsNil() {
+			e := v.Elem()
+			if e.Kind() == reflect.Pointer && !e.IsNil() && (!decodingNull || e.Elem().Kind() == reflect.Pointer) {
+				haveAddr = false
+				v = e
+				continue
+			}
+		}
+
+		if v.Kind() != reflect.Pointer {
+			break
+		}
+
+		if decodingNull && v.CanSet() {
+			break
+		}
+
+		// Prevent infinite loop if v is an interface pointing to its own address:
+		//     var v any
+		//     v = &v
+		if v.Elem().Kind() == reflect.Interface && v.Elem().Elem().Equal(v) {
+			v = v.Elem()
+			break
+		}
+		if v.IsNil() {
+			v.Set(reflect.New(v.Type().Elem()))
+		}
+		if v.Type().NumMethod() > 0 && v.CanInterface() {
+			if u, ok := v.Interface().(Unmarshaler); ok {
+				return u, nil, reflect.Value{}
+			}
+			if !decodingNull {
+				if u, ok := v.Interface().(encoding.TextUnmarshaler); ok {
+					return nil, u, reflect.Value{}
+				}
+			}
+		}
+
+		if haveAddr {
+			v = v0 // restore original value after round-trip Value.Addr().Elem()
+			haveAddr = false
+		} else {
+			v = v.Elem()
+		}
+	}
+	return nil, nil, v
+}
+
+// array consumes an array from d.data[d.off-1:], decoding into v.
+// The first byte of the array ('[') has been read already.
+func (d *decodeState) array(v reflect.Value) error {
+	// Check for unmarshaler.
+	u, ut, pv := indirect(v, false)
+	if u != nil {
+		start := d.readIndex()
+		d.skip()
+		return u.UnmarshalJSON(d.data[start:d.off])
+	}
+	if ut != nil {
+		d.saveError(&UnmarshalTypeError{Value: "array", Type: v.Type(), Offset: int64(d.off)})
+		d.skip()
+		return nil
+	}
+	v = pv
+
+	// Check type of target.
+	switch v.Kind() {
+	case reflect.Interface:
+		if v.NumMethod() == 0 {
+			// Decoding into nil interface? Switch to non-reflect code.
+			ai := d.arrayInterface()
+			v.Set(reflect.ValueOf(ai))
+			return nil
+		}
+		// Otherwise it's invalid.
+		fallthrough
+	default:
+		d.saveError(&UnmarshalTypeError{Value: "array", Type: v.Type(), Offset: int64(d.off)})
+		d.skip()
+		return nil
+	case reflect.Array, reflect.Slice:
+		break
+	}
+
+	i := 0
+	for {
+		// Look ahead for ] - can only happen on first iteration.
+		d.scanWhile(scanSkipSpace)
+		if d.opcode == scanEndArray {
+			break
+		}
+
+		// Expand slice length, growing the slice if necessary.
+		if v.Kind() == reflect.Slice {
+			if i >= v.Cap() {
+				v.Grow(1)
+			}
+			if i >= v.Len() {
+				v.SetLen(i + 1)
+			}
+		}
+
+		if i < v.Len() {
+			// Decode into element.
+			if err := d.value(v.Index(i)); err != nil {
+				return err
+			}
+		} else {
+			// Ran out of fixed array: skip.
+			if err := d.value(reflect.Value{}); err != nil {
+				return err
+			}
+		}
+		i++
+
+		// Next token must be , or ].
+		if d.opcode == scanSkipSpace {
+			d.scanWhile(scanSkipSpace)
+		}
+		if d.opcode == scanEndArray {
+			break
+		}
+		if d.opcode != scanArrayValue {
+			panic(phasePanicMsg)
+		}
+	}
+
+	if i < v.Len() {
+		if v.Kind() == reflect.Array {
+			for ; i < v.Len(); i++ {
+				v.Index(i).SetZero() // zero remainder of array
+			}
+		} else {
+			v.SetLen(i) // truncate the slice
+		}
+	}
+	if i == 0 && v.Kind() == reflect.Slice {
+		v.Set(reflect.MakeSlice(v.Type(), 0, 0))
+	}
+	return nil
+}
+
+var nullLiteral = []byte("null")
+
+// SHIM(reflect): reflect.TypeFor[T]() reflect.T
+var textUnmarshalerType = shims.TypeFor[encoding.TextUnmarshaler]()
+
+// object consumes an object from d.data[d.off-1:], decoding into v.
+// The first byte ('{') of the object has been read already.
+func (d *decodeState) object(v reflect.Value) error {
+	// Check for unmarshaler.
+	u, ut, pv := indirect(v, false)
+	if u != nil {
+		start := d.readIndex()
+		d.skip()
+		return u.UnmarshalJSON(d.data[start:d.off])
+	}
+	if ut != nil {
+		d.saveError(&UnmarshalTypeError{Value: "object", Type: v.Type(), Offset: int64(d.off)})
+		d.skip()
+		return nil
+	}
+	v = pv
+	t := v.Type()
+
+	// Decoding into nil interface? Switch to non-reflect code.
+	if v.Kind() == reflect.Interface && v.NumMethod() == 0 {
+		oi := d.objectInterface()
+		v.Set(reflect.ValueOf(oi))
+		return nil
+	}
+
+	var fields structFields
+
+	// Check type of target:
+	//   struct or
+	//   map[T1]T2 where T1 is string, an integer type,
+	//             or an encoding.TextUnmarshaler
+	switch v.Kind() {
+	case reflect.Map:
+		// Map key must either have string kind, have an integer kind,
+		// or be an encoding.TextUnmarshaler.
+		switch t.Key().Kind() {
+		case reflect.String,
+			reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
+			reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+		default:
+			if !reflect.PointerTo(t.Key()).Implements(textUnmarshalerType) {
+				d.saveError(&UnmarshalTypeError{Value: "object", Type: t, Offset: int64(d.off)})
+				d.skip()
+				return nil
+			}
+		}
+		if v.IsNil() {
+			v.Set(reflect.MakeMap(t))
+		}
+	case reflect.Struct:
+		fields = cachedTypeFields(t)
+		// ok
+	default:
+		d.saveError(&UnmarshalTypeError{Value: "object", Type: t, Offset: int64(d.off)})
+		d.skip()
+		return nil
+	}
+
+	var mapElem reflect.Value
+	var origErrorContext errorContext
+	if d.errorContext != nil {
+		origErrorContext = *d.errorContext
+	}
+
+	for {
+		// Read opening " of string key or closing }.
+		d.scanWhile(scanSkipSpace)
+		if d.opcode == scanEndObject {
+			// closing } - can only happen on first iteration.
+			break
+		}
+		if d.opcode != scanBeginLiteral {
+			panic(phasePanicMsg)
+		}
+
+		// Read key.
+		start := d.readIndex()
+		d.rescanLiteral()
+		item := d.data[start:d.readIndex()]
+		key, ok := unquoteBytes(item)
+		if !ok {
+			panic(phasePanicMsg)
+		}
+
+		// Figure out field corresponding to key.
+		var subv reflect.Value
+		destring := false // whether the value is wrapped in a string to be decoded first
+
+		if v.Kind() == reflect.Map {
+			elemType := t.Elem()
+			if !mapElem.IsValid() {
+				mapElem = reflect.New(elemType).Elem()
+			} else {
+				mapElem.SetZero()
+			}
+			subv = mapElem
+		} else {
+			f := fields.byExactName[string(key)]
+			if f == nil {
+				f = fields.byFoldedName[string(foldName(key))]
+			}
+			if f != nil {
+				subv = v
+				destring = f.quoted
+				if d.errorContext == nil {
+					d.errorContext = new(errorContext)
+				}
+				for i, ind := range f.index {
+					if subv.Kind() == reflect.Pointer {
+						if subv.IsNil() {
+							// If a struct embeds a pointer to an unexported type,
+							// it is not possible to set a newly allocated value
+							// since the field is unexported.
+							//
+							// See https://golang.org/issue/21357
+							if !subv.CanSet() {
+								d.saveError(fmt.Errorf("json: cannot set embedded pointer to unexported struct: %v", subv.Type().Elem()))
+								// Invalidate subv to ensure d.value(subv) skips over
+								// the JSON value without assigning it to subv.
+								subv = reflect.Value{}
+								destring = false
+								break
+							}
+							subv.Set(reflect.New(subv.Type().Elem()))
+						}
+						subv = subv.Elem()
+					}
+					if i < len(f.index)-1 {
+						d.errorContext.FieldStack = append(
+							d.errorContext.FieldStack,
+							subv.Type().Field(ind).Name,
+						)
+					}
+					subv = subv.Field(ind)
+				}
+				d.errorContext.Struct = t
+				d.errorContext.FieldStack = append(d.errorContext.FieldStack, f.name)
+			} else if d.disallowUnknownFields {
+				d.saveError(fmt.Errorf("json: unknown field %q", key))
+			}
+		}
+
+		// Read : before value.
+		if d.opcode == scanSkipSpace {
+			d.scanWhile(scanSkipSpace)
+		}
+		if d.opcode != scanObjectKey {
+			panic(phasePanicMsg)
+		}
+		d.scanWhile(scanSkipSpace)
+
+		if destring {
+			switch qv := d.valueQuoted().(type) {
+			case nil:
+				if err := d.literalStore(nullLiteral, subv, false); err != nil {
+					return err
+				}
+			case string:
+				if err := d.literalStore([]byte(qv), subv, true); err != nil {
+					return err
+				}
+			default:
+				d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal unquoted value into %v", subv.Type()))
+			}
+		} else {
+			if err := d.value(subv); err != nil {
+				return err
+			}
+		}
+
+		// Write value back to map;
+		// if using struct, subv points into struct already.
+		if v.Kind() == reflect.Map {
+			kt := t.Key()
+			var kv reflect.Value
+			if reflect.PointerTo(kt).Implements(textUnmarshalerType) {
+				kv = reflect.New(kt)
+				if err := d.literalStore(item, kv, true); err != nil {
+					return err
+				}
+				kv = kv.Elem()
+			} else {
+				switch kt.Kind() {
+				case reflect.String:
+					kv = reflect.New(kt).Elem()
+					kv.SetString(string(key))
+				case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+					s := string(key)
+					n, err := strconv.ParseInt(s, 10, 64)
+					// SHIM(reflect): reflect.Type.OverflowInt(int64) bool
+					okt := shims.OverflowableType{Type: kt}
+					if err != nil || okt.OverflowInt(n) {
+						d.saveError(&UnmarshalTypeError{Value: "number " + s, Type: kt, Offset: int64(start + 1)})
+						break
+					}
+					kv = reflect.New(kt).Elem()
+					kv.SetInt(n)
+				case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+					s := string(key)
+					n, err := strconv.ParseUint(s, 10, 64)
+					// SHIM(reflect): reflect.Type.OverflowUint(uint64) bool
+					okt := shims.OverflowableType{Type: kt}
+					if err != nil || okt.OverflowUint(n) {
+						d.saveError(&UnmarshalTypeError{Value: "number " + s, Type: kt, Offset: int64(start + 1)})
+						break
+					}
+					kv = reflect.New(kt).Elem()
+					kv.SetUint(n)
+				default:
+					panic("json: Unexpected key type") // should never occur
+				}
+			}
+			if kv.IsValid() {
+				v.SetMapIndex(kv, subv)
+			}
+		}
+
+		// Next token must be , or }.
+		if d.opcode == scanSkipSpace {
+			d.scanWhile(scanSkipSpace)
+		}
+		if d.errorContext != nil {
+			// Reset errorContext to its original state.
+			// Keep the same underlying array for FieldStack, to reuse the
+			// space and avoid unnecessary allocs.
+			d.errorContext.FieldStack = d.errorContext.FieldStack[:len(origErrorContext.FieldStack)]
+			d.errorContext.Struct = origErrorContext.Struct
+		}
+		if d.opcode == scanEndObject {
+			break
+		}
+		if d.opcode != scanObjectValue {
+			panic(phasePanicMsg)
+		}
+	}
+	return nil
+}
+
+// convertNumber converts the number literal s to a float64 or a Number
+// depending on the setting of d.useNumber.
+func (d *decodeState) convertNumber(s string) (any, error) {
+	if d.useNumber {
+		return Number(s), nil
+	}
+	f, err := strconv.ParseFloat(s, 64)
+	if err != nil {
+		// SHIM(reflect): reflect.TypeFor[T]() reflect.Type
+		return nil, &UnmarshalTypeError{Value: "number " + s, Type: shims.TypeFor[float64](), Offset: int64(d.off)}
+	}
+	return f, nil
+}
+
+// SHIM(reflect): TypeFor[T]() reflect.Type
+var numberType = shims.TypeFor[Number]()
+
+// literalStore decodes a literal stored in item into v.
+//
+// fromQuoted indicates whether this literal came from unwrapping a
+// string from the ",string" struct tag option. this is used only to
+// produce more helpful error messages.
+func (d *decodeState) literalStore(item []byte, v reflect.Value, fromQuoted bool) error {
+	// Check for unmarshaler.
+	if len(item) == 0 {
+		// Empty string given.
+		d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()))
+		return nil
+	}
+	isNull := item[0] == 'n' // null
+	u, ut, pv := indirect(v, isNull)
+	if u != nil {
+		return u.UnmarshalJSON(item)
+	}
+	if ut != nil {
+		if item[0] != '"' {
+			if fromQuoted {
+				d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()))
+				return nil
+			}
+			val := "number"
+			switch item[0] {
+			case 'n':
+				val = "null"
+			case 't', 'f':
+				val = "bool"
+			}
+			d.saveError(&UnmarshalTypeError{Value: val, Type: v.Type(), Offset: int64(d.readIndex())})
+			return nil
+		}
+		s, ok := unquoteBytes(item)
+		if !ok {
+			if fromQuoted {
+				return fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())
+			}
+			panic(phasePanicMsg)
+		}
+		return ut.UnmarshalText(s)
+	}
+
+	v = pv
+
+	switch c := item[0]; c {
+	case 'n': // null
+		// The main parser checks that only null can reach here,
+		// but if this was a quoted string input, it could be anything.
+		if fromQuoted && string(item) != "null" {
+			d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()))
+			break
+		}
+		switch v.Kind() {
+		case reflect.Interface, reflect.Pointer, reflect.Map, reflect.Slice:
+			v.SetZero()
+			// otherwise, ignore null for primitives/string
+		}
+	case 't', 'f': // true, false
+		value := item[0] == 't'
+		// The main parser checks that only true and false can reach here,
+		// but if this was a quoted string input, it could be anything.
+		if fromQuoted && string(item) != "true" && string(item) != "false" {
+			d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()))
+			break
+		}
+		switch v.Kind() {
+		default:
+			if fromQuoted {
+				d.saveError(fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type()))
+			} else {
+				d.saveError(&UnmarshalTypeError{Value: "bool", Type: v.Type(), Offset: int64(d.readIndex())})
+			}
+		case reflect.Bool:
+			v.SetBool(value)
+		case reflect.Interface:
+			if v.NumMethod() == 0 {
+				v.Set(reflect.ValueOf(value))
+			} else {
+				d.saveError(&UnmarshalTypeError{Value: "bool", Type: v.Type(), Offset: int64(d.readIndex())})
+			}
+		}
+
+	case '"': // string
+		s, ok := unquoteBytes(item)
+		if !ok {
+			if fromQuoted {
+				return fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())
+			}
+			panic(phasePanicMsg)
+		}
+		switch v.Kind() {
+		default:
+			d.saveError(&UnmarshalTypeError{Value: "string", Type: v.Type(), Offset: int64(d.readIndex())})
+		case reflect.Slice:
+			if v.Type().Elem().Kind() != reflect.Uint8 {
+				d.saveError(&UnmarshalTypeError{Value: "string", Type: v.Type(), Offset: int64(d.readIndex())})
+				break
+			}
+			b := make([]byte, base64.StdEncoding.DecodedLen(len(s)))
+			n, err := base64.StdEncoding.Decode(b, s)
+			if err != nil {
+				d.saveError(err)
+				break
+			}
+			v.SetBytes(b[:n])
+		case reflect.String:
+			t := string(s)
+			if v.Type() == numberType && !isValidNumber(t) {
+				return fmt.Errorf("json: invalid number literal, trying to unmarshal %q into Number", item)
+			}
+			v.SetString(t)
+		case reflect.Interface:
+			if v.NumMethod() == 0 {
+				v.Set(reflect.ValueOf(string(s)))
+			} else {
+				d.saveError(&UnmarshalTypeError{Value: "string", Type: v.Type(), Offset: int64(d.readIndex())})
+			}
+		}
+
+	default: // number
+		if c != '-' && (c < '0' || c > '9') {
+			if fromQuoted {
+				return fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())
+			}
+			panic(phasePanicMsg)
+		}
+		switch v.Kind() {
+		default:
+			if v.Kind() == reflect.String && v.Type() == numberType {
+				// s must be a valid number, because it's
+				// already been tokenized.
+				v.SetString(string(item))
+				break
+			}
+			if fromQuoted {
+				return fmt.Errorf("json: invalid use of ,string struct tag, trying to unmarshal %q into %v", item, v.Type())
+			}
+			d.saveError(&UnmarshalTypeError{Value: "number", Type: v.Type(), Offset: int64(d.readIndex())})
+		case reflect.Interface:
+			n, err := d.convertNumber(string(item))
+			if err != nil {
+				d.saveError(err)
+				break
+			}
+			if v.NumMethod() != 0 {
+				d.saveError(&UnmarshalTypeError{Value: "number", Type: v.Type(), Offset: int64(d.readIndex())})
+				break
+			}
+			v.Set(reflect.ValueOf(n))
+
+		case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+			n, err := strconv.ParseInt(string(item), 10, 64)
+			if err != nil || v.OverflowInt(n) {
+				d.saveError(&UnmarshalTypeError{Value: "number " + string(item), Type: v.Type(), Offset: int64(d.readIndex())})
+				break
+			}
+			v.SetInt(n)
+
+		case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+			n, err := strconv.ParseUint(string(item), 10, 64)
+			if err != nil || v.OverflowUint(n) {
+				d.saveError(&UnmarshalTypeError{Value: "number " + string(item), Type: v.Type(), Offset: int64(d.readIndex())})
+				break
+			}
+			v.SetUint(n)
+
+		case reflect.Float32, reflect.Float64:
+			n, err := strconv.ParseFloat(string(item), v.Type().Bits())
+			if err != nil || v.OverflowFloat(n) {
+				d.saveError(&UnmarshalTypeError{Value: "number " + string(item), Type: v.Type(), Offset: int64(d.readIndex())})
+				break
+			}
+			v.SetFloat(n)
+		}
+	}
+	return nil
+}
+
+// The xxxInterface routines build up a value to be stored
+// in an empty interface. They are not strictly necessary,
+// but they avoid the weight of reflection in this common case.
+
+// valueInterface is like value but returns any.
+func (d *decodeState) valueInterface() (val any) {
+	switch d.opcode {
+	default:
+		panic(phasePanicMsg)
+	case scanBeginArray:
+		val = d.arrayInterface()
+		d.scanNext()
+	case scanBeginObject:
+		val = d.objectInterface()
+		d.scanNext()
+	case scanBeginLiteral:
+		val = d.literalInterface()
+	}
+	return
+}
+
+// arrayInterface is like array but returns []any.
+func (d *decodeState) arrayInterface() []any {
+	var v = make([]any, 0)
+	for {
+		// Look ahead for ] - can only happen on first iteration.
+		d.scanWhile(scanSkipSpace)
+		if d.opcode == scanEndArray {
+			break
+		}
+
+		v = append(v, d.valueInterface())
+
+		// Next token must be , or ].
+		if d.opcode == scanSkipSpace {
+			d.scanWhile(scanSkipSpace)
+		}
+		if d.opcode == scanEndArray {
+			break
+		}
+		if d.opcode != scanArrayValue {
+			panic(phasePanicMsg)
+		}
+	}
+	return v
+}
+
+// objectInterface is like object but returns map[string]any.
+func (d *decodeState) objectInterface() map[string]any {
+	m := make(map[string]any)
+	for {
+		// Read opening " of string key or closing }.
+		d.scanWhile(scanSkipSpace)
+		if d.opcode == scanEndObject {
+			// closing } - can only happen on first iteration.
+			break
+		}
+		if d.opcode != scanBeginLiteral {
+			panic(phasePanicMsg)
+		}
+
+		// Read string key.
+		start := d.readIndex()
+		d.rescanLiteral()
+		item := d.data[start:d.readIndex()]
+		key, ok := unquote(item)
+		if !ok {
+			panic(phasePanicMsg)
+		}
+
+		// Read : before value.
+		if d.opcode == scanSkipSpace {
+			d.scanWhile(scanSkipSpace)
+		}
+		if d.opcode != scanObjectKey {
+			panic(phasePanicMsg)
+		}
+		d.scanWhile(scanSkipSpace)
+
+		// Read value.
+		m[key] = d.valueInterface()
+
+		// Next token must be , or }.
+		if d.opcode == scanSkipSpace {
+			d.scanWhile(scanSkipSpace)
+		}
+		if d.opcode == scanEndObject {
+			break
+		}
+		if d.opcode != scanObjectValue {
+			panic(phasePanicMsg)
+		}
+	}
+	return m
+}
+
+// literalInterface consumes and returns a literal from d.data[d.off-1:],
+// reading one byte past its end. The first byte of the literal has been
+// read already (that's how the caller knows it's a literal).
+func (d *decodeState) literalInterface() any {
+	// All bytes inside literal return scanContinue op code.
+	start := d.readIndex()
+	d.rescanLiteral()
+
+	item := d.data[start:d.readIndex()]
+
+	switch c := item[0]; c {
+	case 'n': // null
+		return nil
+
+	case 't', 'f': // true, false
+		return c == 't'
+
+	case '"': // string
+		s, ok := unquote(item)
+		if !ok {
+			panic(phasePanicMsg)
+		}
+		return s
+
+	default: // number
+		if c != '-' && (c < '0' || c > '9') {
+			panic(phasePanicMsg)
+		}
+		n, err := d.convertNumber(string(item))
+		if err != nil {
+			d.saveError(err)
+		}
+		return n
+	}
+}
+
+// getu4 decodes \uXXXX from the beginning of s, returning the hex value,
+// or -1 if the escape sequence is malformed.
+func getu4(s []byte) rune {
+	if len(s) < 6 || s[0] != '\\' || s[1] != 'u' {
+		return -1
+	}
+	var r rune
+	for _, c := range s[2:6] {
+		switch {
+		case '0' <= c && c <= '9':
+			c = c - '0'
+		case 'a' <= c && c <= 'f':
+			c = c - 'a' + 10
+		case 'A' <= c && c <= 'F':
+			c = c - 'A' + 10
+		default:
+			return -1
+		}
+		r = r*16 + rune(c)
+	}
+	return r
+}
+
+// unquote converts a quoted JSON string literal s into an actual string t.
+// The rules differ from Go string literals, so we cannot use strconv.Unquote.
+func unquote(s []byte) (t string, ok bool) {
+	s, ok = unquoteBytes(s)
+	t = string(s)
+	return
+}
+
+// unquoteBytes should be an internal detail,
+// but widely used packages access it using linkname.
+// Notable members of the hall of shame include:
+//   - github.com/bytedance/sonic
+//
+// Do not remove or change the type signature.
+// See go.dev/issue/67401.
+//
+//go:linkname unquoteBytes
+func unquoteBytes(s []byte) (t []byte, ok bool) {
+	if len(s) < 2 || s[0] != '"' || s[len(s)-1] != '"' {
+		return
+	}
+	s = s[1 : len(s)-1]
+
+	// Check for unusual characters. If there are none,
+	// then no unquoting is needed, so return a slice of the
+	// original bytes.
+	r := 0
+	for r < len(s) {
+		c := s[r]
+		if c == '\\' || c == '"' || c < ' ' {
+			break
+		}
+		if c < utf8.RuneSelf {
+			r++
+			continue
+		}
+		rr, size := utf8.DecodeRune(s[r:])
+		if rr == utf8.RuneError && size == 1 {
+			break
+		}
+		r += size
+	}
+	if r == len(s) {
+		return s, true
+	}
+
+	b := make([]byte, len(s)+2*utf8.UTFMax)
+	w := copy(b, s[0:r])
+	for r < len(s) {
+		// Out of room? Can only happen if s is full of
+		// malformed UTF-8 and we're replacing each
+		// byte with RuneError.
+		if w >= len(b)-2*utf8.UTFMax {
+			nb := make([]byte, (len(b)+utf8.UTFMax)*2)
+			copy(nb, b[0:w])
+			b = nb
+		}
+		switch c := s[r]; {
+		case c == '\\':
+			r++
+			if r >= len(s) {
+				return
+			}
+			switch s[r] {
+			default:
+				return
+			case '"', '\\', '/', '\'':
+				b[w] = s[r]
+				r++
+				w++
+			case 'b':
+				b[w] = '\b'
+				r++
+				w++
+			case 'f':
+				b[w] = '\f'
+				r++
+				w++
+			case 'n':
+				b[w] = '\n'
+				r++
+				w++
+			case 'r':
+				b[w] = '\r'
+				r++
+				w++
+			case 't':
+				b[w] = '\t'
+				r++
+				w++
+			case 'u':
+				r--
+				rr := getu4(s[r:])
+				if rr < 0 {
+					return
+				}
+				r += 6
+				if utf16.IsSurrogate(rr) {
+					rr1 := getu4(s[r:])
+					if dec := utf16.DecodeRune(rr, rr1); dec != unicode.ReplacementChar {
+						// A valid pair; consume.
+						r += 6
+						w += utf8.EncodeRune(b[w:], dec)
+						break
+					}
+					// Invalid surrogate; fall back to replacement rune.
+					rr = unicode.ReplacementChar
+				}
+				w += utf8.EncodeRune(b[w:], rr)
+			}
+
+		// Quote and control characters are invalid.
+		case c == '"', c < ' ':
+			return
+
+		// ASCII
+		case c < utf8.RuneSelf:
+			b[w] = c
+			r++
+			w++
+
+		// Coerce to well-formed UTF-8.
+		default:
+			rr, size := utf8.DecodeRune(s[r:])
+			r += size
+			w += utf8.EncodeRune(b[w:], rr)
+		}
+	}
+	return b[0:w], true
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/encode.go 🔗

@@ -0,0 +1,1398 @@
+// Vendored from Go 1.24.0-pre-release
+// To find alterations, check package shims, and comments beginning in SHIM().
+//
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package json implements encoding and decoding of JSON as defined in
+// RFC 7159. The mapping between JSON and Go values is described
+// in the documentation for the Marshal and Unmarshal functions.
+//
+// See "JSON and Go" for an introduction to this package:
+// https://golang.org/doc/articles/json_and_go.html
+package json
+
+import (
+	"bytes"
+	"cmp"
+	"encoding"
+	"encoding/base64"
+	"fmt"
+	"github.com/anthropics/anthropic-sdk-go/internal/encoding/json/sentinel"
+	"github.com/anthropics/anthropic-sdk-go/internal/encoding/json/shims"
+	"math"
+	"reflect"
+	"slices"
+	"strconv"
+	"strings"
+	"sync"
+	"unicode"
+	"unicode/utf8"
+	_ "unsafe" // for linkname
+)
+
+// Marshal returns the JSON encoding of v.
+//
+// Marshal traverses the value v recursively.
+// If an encountered value implements [Marshaler]
+// and is not a nil pointer, Marshal calls [Marshaler.MarshalJSON]
+// to produce JSON. If no [Marshaler.MarshalJSON] method is present but the
+// value implements [encoding.TextMarshaler] instead, Marshal calls
+// [encoding.TextMarshaler.MarshalText] and encodes the result as a JSON string.
+// The nil pointer exception is not strictly necessary
+// but mimics a similar, necessary exception in the behavior of
+// [Unmarshaler.UnmarshalJSON].
+//
+// Otherwise, Marshal uses the following type-dependent default encodings:
+//
+// Boolean values encode as JSON booleans.
+//
+// Floating point, integer, and [Number] values encode as JSON numbers.
+// NaN and +/-Inf values will return an [UnsupportedValueError].
+//
+// String values encode as JSON strings coerced to valid UTF-8,
+// replacing invalid bytes with the Unicode replacement rune.
+// So that the JSON will be safe to embed inside HTML <script> tags,
+// the string is encoded using [HTMLEscape],
+// which replaces "<", ">", "&", U+2028, and U+2029 are escaped
+// to "\u003c","\u003e", "\u0026", "\u2028", and "\u2029".
+// This replacement can be disabled when using an [Encoder],
+// by calling [Encoder.SetEscapeHTML](false).
+//
+// Array and slice values encode as JSON arrays, except that
+// []byte encodes as a base64-encoded string, and a nil slice
+// encodes as the null JSON value.
+//
+// Struct values encode as JSON objects.
+// Each exported struct field becomes a member of the object, using the
+// field name as the object key, unless the field is omitted for one of the
+// reasons given below.
+//
+// The encoding of each struct field can be customized by the format string
+// stored under the "json" key in the struct field's tag.
+// The format string gives the name of the field, possibly followed by a
+// comma-separated list of options. The name may be empty in order to
+// specify options without overriding the default field name.
+//
+// The "omitempty" option specifies that the field should be omitted
+// from the encoding if the field has an empty value, defined as
+// false, 0, a nil pointer, a nil interface value, and any array,
+// slice, map, or string of length zero.
+//
+// As a special case, if the field tag is "-", the field is always omitted.
+// Note that a field with name "-" can still be generated using the tag "-,".
+//
+// Examples of struct field tags and their meanings:
+//
+//	// Field appears in JSON as key "myName".
+//	Field int `json:"myName"`
+//
+//	// Field appears in JSON as key "myName" and
+//	// the field is omitted from the object if its value is empty,
+//	// as defined above.
+//	Field int `json:"myName,omitempty"`
+//
+//	// Field appears in JSON as key "Field" (the default), but
+//	// the field is skipped if empty.
+//	// Note the leading comma.
+//	Field int `json:",omitempty"`
+//
+//	// Field is ignored by this package.
+//	Field int `json:"-"`
+//
+//	// Field appears in JSON as key "-".
+//	Field int `json:"-,"`
+//
+// The "omitzero" option specifies that the field should be omitted
+// from the encoding if the field has a zero value, according to rules:
+//
+// 1) If the field type has an "IsZero() bool" method, that will be used to
+// determine whether the value is zero.
+//
+// 2) Otherwise, the value is zero if it is the zero value for its type.
+//
+// If both "omitempty" and "omitzero" are specified, the field will be omitted
+// if the value is either empty or zero (or both).
+//
+// The "string" option signals that a field is stored as JSON inside a
+// JSON-encoded string. It applies only to fields of string, floating point,
+// integer, or boolean types. This extra level of encoding is sometimes used
+// when communicating with JavaScript programs:
+//
+//	Int64String int64 `json:",string"`
+//
+// The key name will be used if it's a non-empty string consisting of
+// only Unicode letters, digits, and ASCII punctuation except quotation
+// marks, backslash, and comma.
+//
+// Embedded struct fields are usually marshaled as if their inner exported fields
+// were fields in the outer struct, subject to the usual Go visibility rules amended
+// as described in the next paragraph.
+// An anonymous struct field with a name given in its JSON tag is treated as
+// having that name, rather than being anonymous.
+// An anonymous struct field of interface type is treated the same as having
+// that type as its name, rather than being anonymous.
+//
+// The Go visibility rules for struct fields are amended for JSON when
+// deciding which field to marshal or unmarshal. If there are
+// multiple fields at the same level, and that level is the least
+// nested (and would therefore be the nesting level selected by the
+// usual Go rules), the following extra rules apply:
+//
+// 1) Of those fields, if any are JSON-tagged, only tagged fields are considered,
+// even if there are multiple untagged fields that would otherwise conflict.
+//
+// 2) If there is exactly one field (tagged or not according to the first rule), that is selected.
+//
+// 3) Otherwise there are multiple fields, and all are ignored; no error occurs.
+//
+// Handling of anonymous struct fields is new in Go 1.1.
+// Prior to Go 1.1, anonymous struct fields were ignored. To force ignoring of
+// an anonymous struct field in both current and earlier versions, give the field
+// a JSON tag of "-".
+//
+// Map values encode as JSON objects. The map's key type must either be a
+// string, an integer type, or implement [encoding.TextMarshaler]. The map keys
+// are sorted and used as JSON object keys by applying the following rules,
+// subject to the UTF-8 coercion described for string values above:
+//   - keys of any string type are used directly
+//   - keys that implement [encoding.TextMarshaler] are marshaled
+//   - integer keys are converted to strings
+//
+// Pointer values encode as the value pointed to.
+// A nil pointer encodes as the null JSON value.
+//
+// Interface values encode as the value contained in the interface.
+// A nil interface value encodes as the null JSON value.
+//
+// Channel, complex, and function values cannot be encoded in JSON.
+// Attempting to encode such a value causes Marshal to return
+// an [UnsupportedTypeError].
+//
+// JSON cannot represent cyclic data structures and Marshal does not
+// handle them. Passing cyclic structures to Marshal will result in
+// an error.
+func Marshal(v any) ([]byte, error) {
+	e := newEncodeState()
+	defer encodeStatePool.Put(e)
+
+	err := e.marshal(v, encOpts{escapeHTML: true})
+	if err != nil {
+		return nil, err
+	}
+	buf := append([]byte(nil), e.Bytes()...)
+
+	return buf, nil
+}
+
+// MarshalIndent is like [Marshal] but applies [Indent] to format the output.
+// Each JSON element in the output will begin on a new line beginning with prefix
+// followed by one or more copies of indent according to the indentation nesting.
+func MarshalIndent(v any, prefix, indent string) ([]byte, error) {
+	b, err := Marshal(v)
+	if err != nil {
+		return nil, err
+	}
+	b2 := make([]byte, 0, indentGrowthFactor*len(b))
+	b2, err = appendIndent(b2, b, prefix, indent)
+	if err != nil {
+		return nil, err
+	}
+	return b2, nil
+}
+
+// Marshaler is the interface implemented by types that
+// can marshal themselves into valid JSON.
+type Marshaler interface {
+	MarshalJSON() ([]byte, error)
+}
+
+// An UnsupportedTypeError is returned by [Marshal] when attempting
+// to encode an unsupported value type.
+type UnsupportedTypeError struct {
+	Type reflect.Type
+}
+
+func (e *UnsupportedTypeError) Error() string {
+	return "json: unsupported type: " + e.Type.String()
+}
+
+// An UnsupportedValueError is returned by [Marshal] when attempting
+// to encode an unsupported value.
+type UnsupportedValueError struct {
+	Value reflect.Value
+	Str   string
+}
+
+func (e *UnsupportedValueError) Error() string {
+	return "json: unsupported value: " + e.Str
+}
+
+// Before Go 1.2, an InvalidUTF8Error was returned by [Marshal] when
+// attempting to encode a string value with invalid UTF-8 sequences.
+// As of Go 1.2, [Marshal] instead coerces the string to valid UTF-8 by
+// replacing invalid bytes with the Unicode replacement rune U+FFFD.
+//
+// Deprecated: No longer used; kept for compatibility.
+type InvalidUTF8Error struct {
+	S string // the whole string value that caused the error
+}
+
+func (e *InvalidUTF8Error) Error() string {
+	return "json: invalid UTF-8 in string: " + strconv.Quote(e.S)
+}
+
+// A MarshalerError represents an error from calling a
+// [Marshaler.MarshalJSON] or [encoding.TextMarshaler.MarshalText] method.
+type MarshalerError struct {
+	Type       reflect.Type
+	Err        error
+	sourceFunc string
+}
+
+func (e *MarshalerError) Error() string {
+	srcFunc := e.sourceFunc
+	if srcFunc == "" {
+		srcFunc = "MarshalJSON"
+	}
+	return "json: error calling " + srcFunc +
+		" for type " + e.Type.String() +
+		": " + e.Err.Error()
+}
+
+// Unwrap returns the underlying error.
+func (e *MarshalerError) Unwrap() error { return e.Err }
+
+const hex = "0123456789abcdef"
+
+// An encodeState encodes JSON into a bytes.Buffer.
+type encodeState struct {
+	bytes.Buffer // accumulated output
+
+	// Keep track of what pointers we've seen in the current recursive call
+	// path, to avoid cycles that could lead to a stack overflow. Only do
+	// the relatively expensive map operations if ptrLevel is larger than
+	// startDetectingCyclesAfter, so that we skip the work if we're within a
+	// reasonable amount of nested pointers deep.
+	ptrLevel uint
+	ptrSeen  map[any]struct{}
+}
+
+const startDetectingCyclesAfter = 1000
+
+var encodeStatePool sync.Pool
+
+func newEncodeState() *encodeState {
+	if v := encodeStatePool.Get(); v != nil {
+		e := v.(*encodeState)
+		e.Reset()
+		if len(e.ptrSeen) > 0 {
+			panic("ptrEncoder.encode should have emptied ptrSeen via defers")
+		}
+		e.ptrLevel = 0
+		return e
+	}
+	return &encodeState{ptrSeen: make(map[any]struct{})}
+}
+
+// jsonError is an error wrapper type for internal use only.
+// Panics with errors are wrapped in jsonError so that the top-level recover
+// can distinguish intentional panics from this package.
+type jsonError struct{ error }
+
+func (e *encodeState) marshal(v any, opts encOpts) (err error) {
+	defer func() {
+		if r := recover(); r != nil {
+			if je, ok := r.(jsonError); ok {
+				err = je.error
+			} else {
+				panic(r)
+			}
+		}
+	}()
+	e.reflectValue(reflect.ValueOf(v), opts)
+	return nil
+}
+
+// error aborts the encoding by panicking with err wrapped in jsonError.
+func (e *encodeState) error(err error) {
+	panic(jsonError{err})
+}
+
+func isEmptyValue(v reflect.Value) bool {
+	switch v.Kind() {
+	case reflect.String:
+		return v.Len() == 0
+	case reflect.Array, reflect.Map, reflect.Slice:
+		return v.Len() == 0
+	case reflect.Bool,
+		reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
+		reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr,
+		reflect.Float32, reflect.Float64,
+		reflect.Interface, reflect.Pointer:
+		return v.IsZero()
+	}
+	return false
+}
+
+func (e *encodeState) reflectValue(v reflect.Value, opts encOpts) {
+	valueEncoder(v)(e, v, opts)
+}
+
+type encOpts struct {
+	// quoted causes primitive fields to be encoded inside JSON strings.
+	quoted bool
+	// escapeHTML causes '<', '>', and '&' to be escaped in JSON strings.
+	escapeHTML bool
+	// EDIT(begin): save the timefmt
+	timefmt string
+	// EDIT(end)
+}
+
+type encoderFunc func(e *encodeState, v reflect.Value, opts encOpts)
+
+var encoderCache sync.Map // map[reflect.Type]encoderFunc
+
+func valueEncoder(v reflect.Value) encoderFunc {
+	if !v.IsValid() {
+		return invalidValueEncoder
+	}
+	return typeEncoder(v.Type())
+}
+
+func typeEncoder(t reflect.Type) encoderFunc {
+	if fi, ok := encoderCache.Load(t); ok {
+		return fi.(encoderFunc)
+	}
+
+	// To deal with recursive types, populate the map with an
+	// indirect func before we build it. This func waits on the
+	// real func (f) to be ready and then calls it. This indirect
+	// func is only used for recursive types.
+	var (
+		wg sync.WaitGroup
+		f  encoderFunc
+	)
+	wg.Add(1)
+	fi, loaded := encoderCache.LoadOrStore(t, encoderFunc(func(e *encodeState, v reflect.Value, opts encOpts) {
+		wg.Wait()
+		f(e, v, opts)
+	}))
+	if loaded {
+		return fi.(encoderFunc)
+	}
+
+	// Compute the real encoder and replace the indirect func with it.
+	f = newTypeEncoder(t, true)
+	wg.Done()
+	encoderCache.Store(t, f)
+	return f
+}
+
+var (
+	// SHIM(begin): TypeFor[T]() reflect.Type
+	marshalerType     = shims.TypeFor[Marshaler]()
+	textMarshalerType = shims.TypeFor[encoding.TextMarshaler]()
+	// SHIM(end)
+)
+
+// newTypeEncoder constructs an encoderFunc for a type.
+// The returned encoder only checks CanAddr when allowAddr is true.
+func newTypeEncoder(t reflect.Type, allowAddr bool) encoderFunc {
+	// EDIT(begin): add custom time encoder
+	if t == timeType {
+		return newTimeEncoder()
+	}
+	// EDIT(end)
+
+	// If we have a non-pointer value whose type implements
+	// Marshaler with a value receiver, then we're better off taking
+	// the address of the value - otherwise we end up with an
+	// allocation as we cast the value to an interface.
+	if t.Kind() != reflect.Pointer && allowAddr && reflect.PointerTo(t).Implements(marshalerType) {
+		return newCondAddrEncoder(addrMarshalerEncoder, newTypeEncoder(t, false))
+	}
+
+	if t.Implements(marshalerType) {
+		return marshalerEncoder
+	}
+	if t.Kind() != reflect.Pointer && allowAddr && reflect.PointerTo(t).Implements(textMarshalerType) {
+		return newCondAddrEncoder(addrTextMarshalerEncoder, newTypeEncoder(t, false))
+	}
+	if t.Implements(textMarshalerType) {
+		return textMarshalerEncoder
+	}
+
+	switch t.Kind() {
+	case reflect.Bool:
+		return boolEncoder
+	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+		return intEncoder
+	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+		return uintEncoder
+	case reflect.Float32:
+		return float32Encoder
+	case reflect.Float64:
+		return float64Encoder
+	case reflect.String:
+		return stringEncoder
+	case reflect.Interface:
+		return interfaceEncoder
+	case reflect.Struct:
+		return newStructEncoder(t)
+	case reflect.Map:
+		return newMapEncoder(t)
+	case reflect.Slice:
+		return newSliceEncoder(t)
+	case reflect.Array:
+		return newArrayEncoder(t)
+	case reflect.Pointer:
+		return newPtrEncoder(t)
+	default:
+		return unsupportedTypeEncoder
+	}
+}
+
+func invalidValueEncoder(e *encodeState, v reflect.Value, _ encOpts) {
+	e.WriteString("null")
+}
+
+func marshalerEncoder(e *encodeState, v reflect.Value, opts encOpts) {
+	if v.Kind() == reflect.Pointer && v.IsNil() {
+		e.WriteString("null")
+		return
+	}
+	m, ok := v.Interface().(Marshaler)
+	if !ok {
+		e.WriteString("null")
+		return
+	}
+
+	// EDIT(begin): use custom time encoder
+	if timeMarshalEncoder(e, v, opts) {
+		return
+	}
+	// EDIT(end)
+
+	b, err := m.MarshalJSON()
+	if err == nil {
+		e.Grow(len(b))
+		out := e.AvailableBuffer()
+		out, err = appendCompact(out, b, opts.escapeHTML)
+		e.Buffer.Write(out)
+	}
+	if err != nil {
+		e.error(&MarshalerError{v.Type(), err, "MarshalJSON"})
+	}
+}
+
+func addrMarshalerEncoder(e *encodeState, v reflect.Value, opts encOpts) {
+	va := v.Addr()
+	if va.IsNil() {
+		e.WriteString("null")
+		return
+	}
+
+	// EDIT(begin): use custom time encoder
+	if timeMarshalEncoder(e, v, opts) {
+		return
+	}
+	// EDIT(end)
+
+	m := va.Interface().(Marshaler)
+	b, err := m.MarshalJSON()
+	if err == nil {
+		e.Grow(len(b))
+		out := e.AvailableBuffer()
+		out, err = appendCompact(out, b, opts.escapeHTML)
+		e.Buffer.Write(out)
+	}
+	if err != nil {
+		e.error(&MarshalerError{v.Type(), err, "MarshalJSON"})
+	}
+}
+
+func textMarshalerEncoder(e *encodeState, v reflect.Value, opts encOpts) {
+	if v.Kind() == reflect.Pointer && v.IsNil() {
+		e.WriteString("null")
+		return
+	}
+	m, ok := v.Interface().(encoding.TextMarshaler)
+	if !ok {
+		e.WriteString("null")
+		return
+	}
+	b, err := m.MarshalText()
+	if err != nil {
+		e.error(&MarshalerError{v.Type(), err, "MarshalText"})
+	}
+	e.Write(appendString(e.AvailableBuffer(), b, opts.escapeHTML))
+}
+
+func addrTextMarshalerEncoder(e *encodeState, v reflect.Value, opts encOpts) {
+	va := v.Addr()
+	if va.IsNil() {
+		e.WriteString("null")
+		return
+	}
+	m := va.Interface().(encoding.TextMarshaler)
+	b, err := m.MarshalText()
+	if err != nil {
+		e.error(&MarshalerError{v.Type(), err, "MarshalText"})
+	}
+	e.Write(appendString(e.AvailableBuffer(), b, opts.escapeHTML))
+}
+
+func boolEncoder(e *encodeState, v reflect.Value, opts encOpts) {
+	b := e.AvailableBuffer()
+	b = mayAppendQuote(b, opts.quoted)
+	b = strconv.AppendBool(b, v.Bool())
+	b = mayAppendQuote(b, opts.quoted)
+	e.Write(b)
+}
+
+func intEncoder(e *encodeState, v reflect.Value, opts encOpts) {
+	b := e.AvailableBuffer()
+	b = mayAppendQuote(b, opts.quoted)
+	b = strconv.AppendInt(b, v.Int(), 10)
+	b = mayAppendQuote(b, opts.quoted)
+	e.Write(b)
+}
+
+func uintEncoder(e *encodeState, v reflect.Value, opts encOpts) {
+	b := e.AvailableBuffer()
+	b = mayAppendQuote(b, opts.quoted)
+	b = strconv.AppendUint(b, v.Uint(), 10)
+	b = mayAppendQuote(b, opts.quoted)
+	e.Write(b)
+}
+
+type floatEncoder int // number of bits
+
+func (bits floatEncoder) encode(e *encodeState, v reflect.Value, opts encOpts) {
+	f := v.Float()
+	if math.IsInf(f, 0) || math.IsNaN(f) {
+		e.error(&UnsupportedValueError{v, strconv.FormatFloat(f, 'g', -1, int(bits))})
+	}
+
+	// Convert as if by ES6 number to string conversion.
+	// This matches most other JSON generators.
+	// See golang.org/issue/6384 and golang.org/issue/14135.
+	// Like fmt %g, but the exponent cutoffs are different
+	// and exponents themselves are not padded to two digits.
+	b := e.AvailableBuffer()
+	b = mayAppendQuote(b, opts.quoted)
+	abs := math.Abs(f)
+	fmt := byte('f')
+	// Note: Must use float32 comparisons for underlying float32 value to get precise cutoffs right.
+	if abs != 0 {
+		if bits == 64 && (abs < 1e-6 || abs >= 1e21) || bits == 32 && (float32(abs) < 1e-6 || float32(abs) >= 1e21) {
+			fmt = 'e'
+		}
+	}
+	b = strconv.AppendFloat(b, f, fmt, -1, int(bits))
+	if fmt == 'e' {
+		// clean up e-09 to e-9
+		n := len(b)
+		if n >= 4 && b[n-4] == 'e' && b[n-3] == '-' && b[n-2] == '0' {
+			b[n-2] = b[n-1]
+			b = b[:n-1]
+		}
+	}
+	b = mayAppendQuote(b, opts.quoted)
+	e.Write(b)
+}
+
+var (
+	float32Encoder = (floatEncoder(32)).encode
+	float64Encoder = (floatEncoder(64)).encode
+)
+
+func stringEncoder(e *encodeState, v reflect.Value, opts encOpts) {
+	if v.Type() == numberType {
+		numStr := v.String()
+		// In Go 1.5 the empty string encoded to "0". While this is not a valid
+		// number literal, we keep compatibility, so check validity after this.
+		if numStr == "" {
+			numStr = "0" // Number's zero-val
+		}
+		if !isValidNumber(numStr) {
+			e.error(fmt.Errorf("json: invalid number literal %q", numStr))
+		}
+		b := e.AvailableBuffer()
+		b = mayAppendQuote(b, opts.quoted)
+		b = append(b, numStr...)
+		b = mayAppendQuote(b, opts.quoted)
+		e.Write(b)
+		return
+	}
+	if opts.quoted {
+		b := appendString(nil, v.String(), opts.escapeHTML)
+		e.Write(appendString(e.AvailableBuffer(), b, false)) // no need to escape again since it is already escaped
+	} else {
+		e.Write(appendString(e.AvailableBuffer(), v.String(), opts.escapeHTML))
+	}
+}
+
+// isValidNumber reports whether s is a valid JSON number literal.
+//
+// isValidNumber should be an internal detail,
+// but widely used packages access it using linkname.
+// Notable members of the hall of shame include:
+//   - github.com/bytedance/sonic
+//
+// Do not remove or change the type signature.
+// See go.dev/issue/67401.
+//
+//go:linkname isValidNumber
+func isValidNumber(s string) bool {
+	// This function implements the JSON numbers grammar.
+	// See https://tools.ietf.org/html/rfc7159#section-6
+	// and https://www.json.org/img/number.png
+
+	if s == "" {
+		return false
+	}
+
+	// Optional -
+	if s[0] == '-' {
+		s = s[1:]
+		if s == "" {
+			return false
+		}
+	}
+
+	// Digits
+	switch {
+	default:
+		return false
+
+	case s[0] == '0':
+		s = s[1:]
+
+	case '1' <= s[0] && s[0] <= '9':
+		s = s[1:]
+		for len(s) > 0 && '0' <= s[0] && s[0] <= '9' {
+			s = s[1:]
+		}
+	}
+
+	// . followed by 1 or more digits.
+	if len(s) >= 2 && s[0] == '.' && '0' <= s[1] && s[1] <= '9' {
+		s = s[2:]
+		for len(s) > 0 && '0' <= s[0] && s[0] <= '9' {
+			s = s[1:]
+		}
+	}
+
+	// e or E followed by an optional - or + and
+	// 1 or more digits.
+	if len(s) >= 2 && (s[0] == 'e' || s[0] == 'E') {
+		s = s[1:]
+		if s[0] == '+' || s[0] == '-' {
+			s = s[1:]
+			if s == "" {
+				return false
+			}
+		}
+		for len(s) > 0 && '0' <= s[0] && s[0] <= '9' {
+			s = s[1:]
+		}
+	}
+
+	// Make sure we are at the end.
+	return s == ""
+}
+
+func interfaceEncoder(e *encodeState, v reflect.Value, opts encOpts) {
+	if v.IsNil() {
+		e.WriteString("null")
+		return
+	}
+	e.reflectValue(v.Elem(), opts)
+}
+
+func unsupportedTypeEncoder(e *encodeState, v reflect.Value, _ encOpts) {
+	e.error(&UnsupportedTypeError{v.Type()})
+}
+
+type structEncoder struct {
+	fields structFields
+}
+
+type structFields struct {
+	list         []field
+	byExactName  map[string]*field
+	byFoldedName map[string]*field
+}
+
+func (se structEncoder) encode(e *encodeState, v reflect.Value, opts encOpts) {
+	next := byte('{')
+FieldLoop:
+	for i := range se.fields.list {
+		f := &se.fields.list[i]
+
+		// Find the nested struct field by following f.index.
+		fv := v
+		for _, i := range f.index {
+			if fv.Kind() == reflect.Pointer {
+				if fv.IsNil() {
+					continue FieldLoop
+				}
+				fv = fv.Elem()
+			}
+			fv = fv.Field(i)
+		}
+
+		if (f.omitEmpty && isEmptyValue(fv)) ||
+			(f.omitZero && (f.isZero == nil && fv.IsZero() || (f.isZero != nil && f.isZero(fv)))) {
+			continue
+		}
+		e.WriteByte(next)
+		next = ','
+		if opts.escapeHTML {
+			e.WriteString(f.nameEscHTML)
+		} else {
+			e.WriteString(f.nameNonEsc)
+		}
+		opts.quoted = f.quoted
+		f.encoder(e, fv, opts)
+	}
+	if next == '{' {
+		e.WriteString("{}")
+	} else {
+		e.WriteByte('}')
+	}
+}
+
+func newStructEncoder(t reflect.Type) encoderFunc {
+	se := structEncoder{fields: cachedTypeFields(t)}
+	return se.encode
+}
+
+type mapEncoder struct {
+	elemEnc encoderFunc
+}
+
+func (me mapEncoder) encode(e *encodeState, v reflect.Value, opts encOpts) {
+	if v.IsNil() {
+		e.WriteString("null")
+		return
+	}
+	if e.ptrLevel++; e.ptrLevel > startDetectingCyclesAfter {
+		// We're a large number of nested ptrEncoder.encode calls deep;
+		// start checking if we've run into a pointer cycle.
+		ptr := v.UnsafePointer()
+		if _, ok := e.ptrSeen[ptr]; ok {
+			e.error(&UnsupportedValueError{v, fmt.Sprintf("encountered a cycle via %s", v.Type())})
+		}
+		e.ptrSeen[ptr] = struct{}{}
+		defer delete(e.ptrSeen, ptr)
+	}
+	e.WriteByte('{')
+
+	// Extract and sort the keys.
+	var (
+		sv  = make([]reflectWithString, v.Len())
+		mi  = v.MapRange()
+		err error
+	)
+	for i := 0; mi.Next(); i++ {
+		if sv[i].ks, err = resolveKeyName(mi.Key()); err != nil {
+			e.error(fmt.Errorf("json: encoding error for type %q: %q", v.Type().String(), err.Error()))
+		}
+		sv[i].v = mi.Value()
+	}
+	slices.SortFunc(sv, func(i, j reflectWithString) int {
+		return strings.Compare(i.ks, j.ks)
+	})
+
+	for i, kv := range sv {
+		if i > 0 {
+			e.WriteByte(',')
+		}
+		e.Write(appendString(e.AvailableBuffer(), kv.ks, opts.escapeHTML))
+		e.WriteByte(':')
+		me.elemEnc(e, kv.v, opts)
+	}
+	e.WriteByte('}')
+	e.ptrLevel--
+}
+
+func newMapEncoder(t reflect.Type) encoderFunc {
+	switch t.Key().Kind() {
+	case reflect.String,
+		reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
+		reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+	default:
+		if !t.Key().Implements(textMarshalerType) {
+			return unsupportedTypeEncoder
+		}
+	}
+	me := mapEncoder{typeEncoder(t.Elem())}
+	return me.encode
+}
+
+func encodeByteSlice(e *encodeState, v reflect.Value, _ encOpts) {
+	if v.IsNil() {
+		e.WriteString("null")
+		return
+	}
+
+	s := v.Bytes()
+	b := e.AvailableBuffer()
+	b = append(b, '"')
+	// SHIM(base64): base64.StdEncoding.AppendEncode([]byte, []byte) []byte
+	b = (shims.AppendableStdEncoding{Encoding: base64.StdEncoding}).AppendEncode(b, s)
+	b = append(b, '"')
+	e.Write(b)
+}
+
+// sliceEncoder just wraps an arrayEncoder, checking to make sure the value isn't nil.
+type sliceEncoder struct {
+	arrayEnc encoderFunc
+}
+
+func (se sliceEncoder) encode(e *encodeState, v reflect.Value, opts encOpts) {
+	if v.IsNil() {
+		e.WriteString("null")
+		return
+	}
+	if e.ptrLevel++; e.ptrLevel > startDetectingCyclesAfter {
+		// We're a large number of nested ptrEncoder.encode calls deep;
+		// start checking if we've run into a pointer cycle.
+		// Here we use a struct to remember the pointer to the first element of the slice
+		// and its length.
+		ptr := struct {
+			ptr any // always an unsafe.Pointer, but avoids a dependency on package unsafe
+			len int
+		}{v.UnsafePointer(), v.Len()}
+		if _, ok := e.ptrSeen[ptr]; ok {
+			e.error(&UnsupportedValueError{v, fmt.Sprintf("encountered a cycle via %s", v.Type())})
+		}
+		e.ptrSeen[ptr] = struct{}{}
+		defer delete(e.ptrSeen, ptr)
+	}
+	se.arrayEnc(e, v, opts)
+	e.ptrLevel--
+}
+
+func newSliceEncoder(t reflect.Type) encoderFunc {
+	// Byte slices get special treatment; arrays don't.
+	if t.Elem().Kind() == reflect.Uint8 {
+		p := reflect.PointerTo(t.Elem())
+		if !p.Implements(marshalerType) && !p.Implements(textMarshalerType) {
+			return encodeByteSlice
+		}
+	}
+	enc := sliceEncoder{newArrayEncoder(t)}
+	return enc.encode
+}
+
+type arrayEncoder struct {
+	elemEnc encoderFunc
+}
+
+func (ae arrayEncoder) encode(e *encodeState, v reflect.Value, opts encOpts) {
+	e.WriteByte('[')
+	n := v.Len()
+	for i := 0; i < n; i++ {
+		if i > 0 {
+			e.WriteByte(',')
+		}
+		ae.elemEnc(e, v.Index(i), opts)
+	}
+	e.WriteByte(']')
+}
+
+func newArrayEncoder(t reflect.Type) encoderFunc {
+	enc := arrayEncoder{typeEncoder(t.Elem())}
+	return enc.encode
+}
+
+type ptrEncoder struct {
+	elemEnc encoderFunc
+}
+
+func (pe ptrEncoder) encode(e *encodeState, v reflect.Value, opts encOpts) {
+	// EDIT(begin)
+	//
+	// if v.IsNil()  {
+	// 	e.WriteString("null")
+	// 	return
+	// }
+
+	if v.IsNil() || sentinel.IsValueNullPtr(v) || sentinel.IsValueNullSlice(v) {
+		e.WriteString("null")
+		return
+	}
+
+	// EDIT(end)
+	if e.ptrLevel++; e.ptrLevel > startDetectingCyclesAfter {
+		// We're a large number of nested ptrEncoder.encode calls deep;
+		// start checking if we've run into a pointer cycle.
+		ptr := v.Interface()
+		if _, ok := e.ptrSeen[ptr]; ok {
+			e.error(&UnsupportedValueError{v, fmt.Sprintf("encountered a cycle via %s", v.Type())})
+		}
+		e.ptrSeen[ptr] = struct{}{}
+		defer delete(e.ptrSeen, ptr)
+	}
+	pe.elemEnc(e, v.Elem(), opts)
+	e.ptrLevel--
+}
+
+func newPtrEncoder(t reflect.Type) encoderFunc {
+	enc := ptrEncoder{typeEncoder(t.Elem())}
+	return enc.encode
+}
+
+type condAddrEncoder struct {
+	canAddrEnc, elseEnc encoderFunc
+}
+
+func (ce condAddrEncoder) encode(e *encodeState, v reflect.Value, opts encOpts) {
+	if v.CanAddr() {
+		ce.canAddrEnc(e, v, opts)
+	} else {
+		ce.elseEnc(e, v, opts)
+	}
+}
+
+// newCondAddrEncoder returns an encoder that checks whether its value
+// CanAddr and delegates to canAddrEnc if so, else to elseEnc.
+func newCondAddrEncoder(canAddrEnc, elseEnc encoderFunc) encoderFunc {
+	enc := condAddrEncoder{canAddrEnc: canAddrEnc, elseEnc: elseEnc}
+	return enc.encode
+}
+
+func isValidTag(s string) bool {
+	if s == "" {
+		return false
+	}
+	for _, c := range s {
+		switch {
+		case strings.ContainsRune("!#$%&()*+-./:;<=>?@[]^_{|}~ ", c):
+			// Backslash and quote chars are reserved, but
+			// otherwise any punctuation chars are allowed
+			// in a tag name.
+		case !unicode.IsLetter(c) && !unicode.IsDigit(c):
+			return false
+		}
+	}
+	return true
+}
+
+func typeByIndex(t reflect.Type, index []int) reflect.Type {
+	for _, i := range index {
+		if t.Kind() == reflect.Pointer {
+			t = t.Elem()
+		}
+		t = t.Field(i).Type
+	}
+	return t
+}
+
+type reflectWithString struct {
+	v  reflect.Value
+	ks string
+}
+
+func resolveKeyName(k reflect.Value) (string, error) {
+	if k.Kind() == reflect.String {
+		return k.String(), nil
+	}
+	if tm, ok := k.Interface().(encoding.TextMarshaler); ok {
+		if k.Kind() == reflect.Pointer && k.IsNil() {
+			return "", nil
+		}
+		buf, err := tm.MarshalText()
+		return string(buf), err
+	}
+	switch k.Kind() {
+	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+		return strconv.FormatInt(k.Int(), 10), nil
+	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+		return strconv.FormatUint(k.Uint(), 10), nil
+	}
+	panic("unexpected map key type")
+}
+
+func appendString[Bytes []byte | string](dst []byte, src Bytes, escapeHTML bool) []byte {
+	dst = append(dst, '"')
+	start := 0
+	for i := 0; i < len(src); {
+		if b := src[i]; b < utf8.RuneSelf {
+			if htmlSafeSet[b] || (!escapeHTML && safeSet[b]) {
+				i++
+				continue
+			}
+			dst = append(dst, src[start:i]...)
+			switch b {
+			case '\\', '"':
+				dst = append(dst, '\\', b)
+			case '\b':
+				dst = append(dst, '\\', 'b')
+			case '\f':
+				dst = append(dst, '\\', 'f')
+			case '\n':
+				dst = append(dst, '\\', 'n')
+			case '\r':
+				dst = append(dst, '\\', 'r')
+			case '\t':
+				dst = append(dst, '\\', 't')
+			default:
+				// This encodes bytes < 0x20 except for \b, \f, \n, \r and \t.
+				// If escapeHTML is set, it also escapes <, >, and &
+				// because they can lead to security holes when
+				// user-controlled strings are rendered into JSON
+				// and served to some browsers.
+				dst = append(dst, '\\', 'u', '0', '0', hex[b>>4], hex[b&0xF])
+			}
+			i++
+			start = i
+			continue
+		}
+		// TODO(https://go.dev/issue/56948): Use generic utf8 functionality.
+		// For now, cast only a small portion of byte slices to a string
+		// so that it can be stack allocated. This slows down []byte slightly
+		// due to the extra copy, but keeps string performance roughly the same.
+		n := len(src) - i
+		if n > utf8.UTFMax {
+			n = utf8.UTFMax
+		}
+		c, size := utf8.DecodeRuneInString(string(src[i : i+n]))
+		if c == utf8.RuneError && size == 1 {
+			dst = append(dst, src[start:i]...)
+			dst = append(dst, `\ufffd`...)
+			i += size
+			start = i
+			continue
+		}
+		// U+2028 is LINE SEPARATOR.
+		// U+2029 is PARAGRAPH SEPARATOR.
+		// They are both technically valid characters in JSON strings,
+		// but don't work in JSONP, which has to be evaluated as JavaScript,
+		// and can lead to security holes there. It is valid JSON to
+		// escape them, so we do so unconditionally.
+		// See https://en.wikipedia.org/wiki/JSON#Safety.
+		if c == '\u2028' || c == '\u2029' {
+			dst = append(dst, src[start:i]...)
+			dst = append(dst, '\\', 'u', '2', '0', '2', hex[c&0xF])
+			i += size
+			start = i
+			continue
+		}
+		i += size
+	}
+	dst = append(dst, src[start:]...)
+	dst = append(dst, '"')
+	return dst
+}
+
+// A field represents a single field found in a struct.
+type field struct {
+	name      string
+	nameBytes []byte // []byte(name)
+
+	nameNonEsc  string // `"` + name + `":`
+	nameEscHTML string // `"` + HTMLEscape(name) + `":`
+
+	tag       bool
+	index     []int
+	typ       reflect.Type
+	omitEmpty bool
+	omitZero  bool
+	isZero    func(reflect.Value) bool
+	quoted    bool
+
+	encoder encoderFunc
+
+	// EDIT(begin): save the timefmt if present
+	timefmt string
+	// EDIT(end)
+}
+
+type isZeroer interface {
+	IsZero() bool
+}
+
+// SHIM(reflect): TypeFor[T]() reflect.Type
+var isZeroerType = shims.TypeFor[isZeroer]()
+
+// typeFields returns a list of fields that JSON should recognize for the given type.
+// The algorithm is breadth-first search over the set of structs to include - the top struct
+// and then any reachable anonymous structs.
+//
+// typeFields should be an internal detail,
+// but widely used packages access it using linkname.
+// Notable members of the hall of shame include:
+//   - github.com/bytedance/sonic
+//
+// Do not remove or change the type signature.
+// See go.dev/issue/67401.
+//
+//go:linkname typeFields
+func typeFields(t reflect.Type) structFields {
+	// Anonymous fields to explore at the current level and the next.
+	current := []field{}
+	next := []field{{typ: t}}
+
+	// Count of queued names for current level and the next.
+	var count, nextCount map[reflect.Type]int
+
+	// Types already visited at an earlier level.
+	visited := map[reflect.Type]bool{}
+
+	// Fields found.
+	var fields []field
+
+	// Buffer to run appendHTMLEscape on field names.
+	var nameEscBuf []byte
+
+	for len(next) > 0 {
+		current, next = next, current[:0]
+		count, nextCount = nextCount, map[reflect.Type]int{}
+
+		for _, f := range current {
+			if visited[f.typ] {
+				continue
+			}
+			visited[f.typ] = true
+
+			// Scan f.typ for fields to include.
+			for i := 0; i < f.typ.NumField(); i++ {
+				sf := f.typ.Field(i)
+				if sf.Anonymous {
+					t := sf.Type
+					if t.Kind() == reflect.Pointer {
+						t = t.Elem()
+					}
+					if !sf.IsExported() && t.Kind() != reflect.Struct {
+						// Ignore embedded fields of unexported non-struct types.
+						continue
+					}
+					// Do not ignore embedded fields of unexported struct types
+					// since they may have exported fields.
+				} else if !sf.IsExported() {
+					// Ignore unexported non-embedded fields.
+					continue
+				}
+				tag := sf.Tag.Get("json")
+				if tag == "-" {
+					continue
+				}
+				name, opts := parseTag(tag)
+				if !isValidTag(name) {
+					name = ""
+				}
+				index := make([]int, len(f.index)+1)
+				copy(index, f.index)
+				index[len(f.index)] = i
+
+				ft := sf.Type
+				if ft.Name() == "" && ft.Kind() == reflect.Pointer {
+					// Follow pointer.
+					ft = ft.Elem()
+				}
+
+				// Only strings, floats, integers, and booleans can be quoted.
+				quoted := false
+				if opts.Contains("string") {
+					switch ft.Kind() {
+					case reflect.Bool,
+						reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
+						reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr,
+						reflect.Float32, reflect.Float64,
+						reflect.String:
+						quoted = true
+					}
+				}
+
+				// Record found field and index sequence.
+				if name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
+					tagged := name != ""
+					if name == "" {
+						name = sf.Name
+					}
+					field := field{
+						name:      name,
+						tag:       tagged,
+						index:     index,
+						typ:       ft,
+						omitEmpty: opts.Contains("omitempty"),
+						omitZero:  opts.Contains("omitzero"),
+						quoted:    quoted,
+						// EDIT(begin): save the timefmt
+						timefmt: sf.Tag.Get("format"),
+						// EDIT(end)
+					}
+					field.nameBytes = []byte(field.name)
+
+					// Build nameEscHTML and nameNonEsc ahead of time.
+					nameEscBuf = appendHTMLEscape(nameEscBuf[:0], field.nameBytes)
+					field.nameEscHTML = `"` + string(nameEscBuf) + `":`
+					field.nameNonEsc = `"` + field.name + `":`
+
+					if field.omitZero {
+						t := sf.Type
+						// Provide a function that uses a type's IsZero method.
+						switch {
+						case t.Kind() == reflect.Interface && t.Implements(isZeroerType):
+							field.isZero = func(v reflect.Value) bool {
+								// Avoid panics calling IsZero on a nil interface or
+								// non-nil interface with nil pointer.
+								return v.IsNil() ||
+									(v.Elem().Kind() == reflect.Pointer && v.Elem().IsNil()) ||
+									v.Interface().(isZeroer).IsZero()
+							}
+						case t.Kind() == reflect.Pointer && t.Implements(isZeroerType):
+							field.isZero = func(v reflect.Value) bool {
+								// Avoid panics calling IsZero on nil pointer.
+								return v.IsNil() || v.Interface().(isZeroer).IsZero()
+							}
+						case t.Implements(isZeroerType):
+							field.isZero = func(v reflect.Value) bool {
+								return v.Interface().(isZeroer).IsZero()
+							}
+						case reflect.PointerTo(t).Implements(isZeroerType):
+							field.isZero = func(v reflect.Value) bool {
+								if !v.CanAddr() {
+									// Temporarily box v so we can take the address.
+									v2 := reflect.New(v.Type()).Elem()
+									v2.Set(v)
+									v = v2
+								}
+								return v.Addr().Interface().(isZeroer).IsZero()
+							}
+						}
+					}
+
+					fields = append(fields, field)
+					if count[f.typ] > 1 {
+						// If there were multiple instances, add a second,
+						// so that the annihilation code will see a duplicate.
+						// It only cares about the distinction between 1 and 2,
+						// so don't bother generating any more copies.
+						fields = append(fields, fields[len(fields)-1])
+					}
+					continue
+				}
+
+				// Record new anonymous struct to explore in next round.
+				nextCount[ft]++
+				if nextCount[ft] == 1 {
+					next = append(next, field{name: ft.Name(), index: index, typ: ft})
+				}
+			}
+		}
+	}
+
+	slices.SortFunc(fields, func(a, b field) int {
+		// sort field by name, breaking ties with depth, then
+		// breaking ties with "name came from json tag", then
+		// breaking ties with index sequence.
+		if c := strings.Compare(a.name, b.name); c != 0 {
+			return c
+		}
+		if c := cmp.Compare(len(a.index), len(b.index)); c != 0 {
+			return c
+		}
+		if a.tag != b.tag {
+			if a.tag {
+				return -1
+			}
+			return +1
+		}
+		return slices.Compare(a.index, b.index)
+	})
+
+	// Delete all fields that are hidden by the Go rules for embedded fields,
+	// except that fields with JSON tags are promoted.
+
+	// The fields are sorted in primary order of name, secondary order
+	// of field index length. Loop over names; for each name, delete
+	// hidden fields by choosing the one dominant field that survives.
+	out := fields[:0]
+	for advance, i := 0, 0; i < len(fields); i += advance {
+		// One iteration per name.
+		// Find the sequence of fields with the name of this first field.
+		fi := fields[i]
+		name := fi.name
+		for advance = 1; i+advance < len(fields); advance++ {
+			fj := fields[i+advance]
+			if fj.name != name {
+				break
+			}
+		}
+		if advance == 1 { // Only one field with this name
+			out = append(out, fi)
+			continue
+		}
+		dominant, ok := dominantField(fields[i : i+advance])
+		if ok {
+			out = append(out, dominant)
+		}
+	}
+
+	fields = out
+	slices.SortFunc(fields, func(i, j field) int {
+		return slices.Compare(i.index, j.index)
+	})
+
+	for i := range fields {
+		f := &fields[i]
+		f.encoder = typeEncoder(typeByIndex(t, f.index))
+
+		// EDIT(begin): add custom timefmt if necessary
+		if f.timefmt != "" {
+			f.encoder = continueWithTimeFmt(f.timefmt, f.encoder)
+		}
+		// EDIT(end)
+	}
+	exactNameIndex := make(map[string]*field, len(fields))
+	foldedNameIndex := make(map[string]*field, len(fields))
+	for i, field := range fields {
+		exactNameIndex[field.name] = &fields[i]
+		// For historical reasons, first folded match takes precedence.
+		if _, ok := foldedNameIndex[string(foldName(field.nameBytes))]; !ok {
+			foldedNameIndex[string(foldName(field.nameBytes))] = &fields[i]
+		}
+	}
+	return structFields{fields, exactNameIndex, foldedNameIndex}
+}
+
+// dominantField looks through the fields, all of which are known to
+// have the same name, to find the single field that dominates the
+// others using Go's embedding rules, modified by the presence of
+// JSON tags. If there are multiple top-level fields, the boolean
+// will be false: This condition is an error in Go and we skip all
+// the fields.
+func dominantField(fields []field) (field, bool) {
+	// The fields are sorted in increasing index-length order, then by presence of tag.
+	// That means that the first field is the dominant one. We need only check
+	// for error cases: two fields at top level, either both tagged or neither tagged.
+	if len(fields) > 1 && len(fields[0].index) == len(fields[1].index) && fields[0].tag == fields[1].tag {
+		return field{}, false
+	}
+	return fields[0], true
+}
+
+var fieldCache sync.Map // map[reflect.Type]structFields
+
+// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
+func cachedTypeFields(t reflect.Type) structFields {
+	if f, ok := fieldCache.Load(t); ok {
+		return f.(structFields)
+	}
+	f, _ := fieldCache.LoadOrStore(t, typeFields(t))
+	return f.(structFields)
+}
+
+func mayAppendQuote(b []byte, quoted bool) []byte {
+	if quoted {
+		b = append(b, '"')
+	}
+	return b
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/fold.go 🔗

@@ -0,0 +1,48 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package json
+
+import (
+	"unicode"
+	"unicode/utf8"
+)
+
+// foldName returns a folded string such that foldName(x) == foldName(y)
+// is identical to bytes.EqualFold(x, y).
+func foldName(in []byte) []byte {
+	// This is inlinable to take advantage of "function outlining".
+	var arr [32]byte // large enough for most JSON names
+	return appendFoldedName(arr[:0], in)
+}
+
+func appendFoldedName(out, in []byte) []byte {
+	for i := 0; i < len(in); {
+		// Handle single-byte ASCII.
+		if c := in[i]; c < utf8.RuneSelf {
+			if 'a' <= c && c <= 'z' {
+				c -= 'a' - 'A'
+			}
+			out = append(out, c)
+			i++
+			continue
+		}
+		// Handle multi-byte Unicode.
+		r, n := utf8.DecodeRune(in[i:])
+		out = utf8.AppendRune(out, foldRune(r))
+		i += n
+	}
+	return out
+}
+
+// foldRune returns the smallest rune for all runes in the same fold set.
+func foldRune(r rune) rune {
+	for {
+		r2 := unicode.SimpleFold(r)
+		if r2 <= r {
+			return r2
+		}
+		r = r2
+	}
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/indent.go 🔗

@@ -0,0 +1,182 @@
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package json
+
+import "bytes"
+
+// HTMLEscape appends to dst the JSON-encoded src with <, >, &, U+2028 and U+2029
+// characters inside string literals changed to \u003c, \u003e, \u0026, \u2028, \u2029
+// so that the JSON will be safe to embed inside HTML <script> tags.
+// For historical reasons, web browsers don't honor standard HTML
+// escaping within <script> tags, so an alternative JSON encoding must be used.
+func HTMLEscape(dst *bytes.Buffer, src []byte) {
+	dst.Grow(len(src))
+	dst.Write(appendHTMLEscape(dst.AvailableBuffer(), src))
+}
+
+func appendHTMLEscape(dst, src []byte) []byte {
+	// The characters can only appear in string literals,
+	// so just scan the string one byte at a time.
+	start := 0
+	for i, c := range src {
+		if c == '<' || c == '>' || c == '&' {
+			dst = append(dst, src[start:i]...)
+			dst = append(dst, '\\', 'u', '0', '0', hex[c>>4], hex[c&0xF])
+			start = i + 1
+		}
+		// Convert U+2028 and U+2029 (E2 80 A8 and E2 80 A9).
+		if c == 0xE2 && i+2 < len(src) && src[i+1] == 0x80 && src[i+2]&^1 == 0xA8 {
+			dst = append(dst, src[start:i]...)
+			dst = append(dst, '\\', 'u', '2', '0', '2', hex[src[i+2]&0xF])
+			start = i + len("\u2029")
+		}
+	}
+	return append(dst, src[start:]...)
+}
+
+// Compact appends to dst the JSON-encoded src with
+// insignificant space characters elided.
+func Compact(dst *bytes.Buffer, src []byte) error {
+	dst.Grow(len(src))
+	b := dst.AvailableBuffer()
+	b, err := appendCompact(b, src, false)
+	dst.Write(b)
+	return err
+}
+
+func appendCompact(dst, src []byte, escape bool) ([]byte, error) {
+	origLen := len(dst)
+	scan := newScanner()
+	defer freeScanner(scan)
+	start := 0
+	for i, c := range src {
+		if escape && (c == '<' || c == '>' || c == '&') {
+			if start < i {
+				dst = append(dst, src[start:i]...)
+			}
+			dst = append(dst, '\\', 'u', '0', '0', hex[c>>4], hex[c&0xF])
+			start = i + 1
+		}
+		// Convert U+2028 and U+2029 (E2 80 A8 and E2 80 A9).
+		if escape && c == 0xE2 && i+2 < len(src) && src[i+1] == 0x80 && src[i+2]&^1 == 0xA8 {
+			if start < i {
+				dst = append(dst, src[start:i]...)
+			}
+			dst = append(dst, '\\', 'u', '2', '0', '2', hex[src[i+2]&0xF])
+			start = i + 3
+		}
+		v := scan.step(scan, c)
+		if v >= scanSkipSpace {
+			if v == scanError {
+				break
+			}
+			if start < i {
+				dst = append(dst, src[start:i]...)
+			}
+			start = i + 1
+		}
+	}
+	if scan.eof() == scanError {
+		return dst[:origLen], scan.err
+	}
+	if start < len(src) {
+		dst = append(dst, src[start:]...)
+	}
+	return dst, nil
+}
+
+func appendNewline(dst []byte, prefix, indent string, depth int) []byte {
+	dst = append(dst, '\n')
+	dst = append(dst, prefix...)
+	for i := 0; i < depth; i++ {
+		dst = append(dst, indent...)
+	}
+	return dst
+}
+
+// indentGrowthFactor specifies the growth factor of indenting JSON input.
+// Empirically, the growth factor was measured to be between 1.4x to 1.8x
+// for some set of compacted JSON with the indent being a single tab.
+// Specify a growth factor slightly larger than what is observed
+// to reduce probability of allocation in appendIndent.
+// A factor no higher than 2 ensures that wasted space never exceeds 50%.
+const indentGrowthFactor = 2
+
+// Indent appends to dst an indented form of the JSON-encoded src.
+// Each element in a JSON object or array begins on a new,
+// indented line beginning with prefix followed by one or more
+// copies of indent according to the indentation nesting.
+// The data appended to dst does not begin with the prefix nor
+// any indentation, to make it easier to embed inside other formatted JSON data.
+// Although leading space characters (space, tab, carriage return, newline)
+// at the beginning of src are dropped, trailing space characters
+// at the end of src are preserved and copied to dst.
+// For example, if src has no trailing spaces, neither will dst;
+// if src ends in a trailing newline, so will dst.
+func Indent(dst *bytes.Buffer, src []byte, prefix, indent string) error {
+	dst.Grow(indentGrowthFactor * len(src))
+	b := dst.AvailableBuffer()
+	b, err := appendIndent(b, src, prefix, indent)
+	dst.Write(b)
+	return err
+}
+
+func appendIndent(dst, src []byte, prefix, indent string) ([]byte, error) {
+	origLen := len(dst)
+	scan := newScanner()
+	defer freeScanner(scan)
+	needIndent := false
+	depth := 0
+	for _, c := range src {
+		scan.bytes++
+		v := scan.step(scan, c)
+		if v == scanSkipSpace {
+			continue
+		}
+		if v == scanError {
+			break
+		}
+		if needIndent && v != scanEndObject && v != scanEndArray {
+			needIndent = false
+			depth++
+			dst = appendNewline(dst, prefix, indent, depth)
+		}
+
+		// Emit semantically uninteresting bytes
+		// (in particular, punctuation in strings) unmodified.
+		if v == scanContinue {
+			dst = append(dst, c)
+			continue
+		}
+
+		// Add spacing around real punctuation.
+		switch c {
+		case '{', '[':
+			// delay indent so that empty object and array are formatted as {} and [].
+			needIndent = true
+			dst = append(dst, c)
+		case ',':
+			dst = append(dst, c)
+			dst = appendNewline(dst, prefix, indent, depth)
+		case ':':
+			dst = append(dst, c, ' ')
+		case '}', ']':
+			if needIndent {
+				// suppress indent in empty object/array
+				needIndent = false
+			} else {
+				depth--
+				dst = appendNewline(dst, prefix, indent, depth)
+			}
+			dst = append(dst, c)
+		default:
+			dst = append(dst, c)
+		}
+	}
+	if scan.eof() == scanError {
+		return dst[:origLen], scan.err
+	}
+	return dst, nil
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/scanner.go

@@ -0,0 +1,610 @@
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package json
+
+// JSON value parser state machine.
+// Just about at the limit of what is reasonable to write by hand.
+// Some parts are a bit tedious, but overall it nicely factors out the
+// otherwise common code from the multiple scanning functions
+// in this package (Compact, Indent, checkValid, etc).
+//
+// This file starts with two simple examples using the scanner
+// before diving into the scanner itself.
+
+import (
+	"strconv"
+	"sync"
+)
+
+// Valid reports whether data is a valid JSON encoding.
+func Valid(data []byte) bool {
+	scan := newScanner()
+	defer freeScanner(scan)
+	return checkValid(data, scan) == nil
+}
+
+// checkValid verifies that data is valid JSON-encoded data.
+// scan is passed in for use by checkValid to avoid an allocation.
+// checkValid returns nil or a SyntaxError.
+func checkValid(data []byte, scan *scanner) error {
+	scan.reset()
+	for _, c := range data {
+		scan.bytes++
+		if scan.step(scan, c) == scanError {
+			return scan.err
+		}
+	}
+	if scan.eof() == scanError {
+		return scan.err
+	}
+	return nil
+}
+
+// A SyntaxError is a description of a JSON syntax error.
+// [Unmarshal] will return a SyntaxError if the JSON can't be parsed.
+type SyntaxError struct {
+	msg    string // description of error
+	Offset int64  // error occurred after reading Offset bytes
+}
+
+func (e *SyntaxError) Error() string { return e.msg }
+
+// A scanner is a JSON scanning state machine.
+// Callers call scan.reset and then pass bytes in one at a time
+// by calling scan.step(&scan, c) for each byte.
+// The return value, referred to as an opcode, tells the
+// caller about significant parsing events like beginning
+// and ending literals, objects, and arrays, so that the
+// caller can follow along if it wishes.
+// The return value scanEnd indicates that a single top-level
+// JSON value has been completed, *before* the byte that
+// just got passed in.  (The indication must be delayed in order
+// to recognize the end of numbers: is 123 a whole value or
+// the beginning of 12345e+6?).
+type scanner struct {
+	// The step is a func to be called to execute the next transition.
+	// Also tried using an integer constant and a single func
+	// with a switch, but using the func directly was 10% faster
+	// on a 64-bit Mac Mini, and it's nicer to read.
+	step func(*scanner, byte) int
+
+	// Reached end of top-level value.
+	endTop bool
+
+	// Stack of what we're in the middle of - array values, object keys, object values.
+	parseState []int
+
+	// Error that happened, if any.
+	err error
+
+	// total bytes consumed, updated by decoder.Decode (and deliberately
+	// not set to zero by scan.reset)
+	bytes int64
+}
+
+var scannerPool = sync.Pool{
+	New: func() any {
+		return &scanner{}
+	},
+}
+
+func newScanner() *scanner {
+	scan := scannerPool.Get().(*scanner)
+	// scan.reset by design doesn't set bytes to zero
+	scan.bytes = 0
+	scan.reset()
+	return scan
+}
+
+func freeScanner(scan *scanner) {
+	// Avoid hanging on to too much memory in extreme cases.
+	if len(scan.parseState) > 1024 {
+		scan.parseState = nil
+	}
+	scannerPool.Put(scan)
+}
+
+// These values are returned by the state transition functions
+// assigned to scanner.state and the method scanner.eof.
+// They give details about the current state of the scan that
+// callers might be interested to know about.
+// It is okay to ignore the return value of any particular
+// call to scanner.state: if one call returns scanError,
+// every subsequent call will return scanError too.
+const (
+	// Continue.
+	scanContinue     = iota // uninteresting byte
+	scanBeginLiteral        // end implied by next result != scanContinue
+	scanBeginObject         // begin object
+	scanObjectKey           // just finished object key (string)
+	scanObjectValue         // just finished non-last object value
+	scanEndObject           // end object (implies scanObjectValue if possible)
+	scanBeginArray          // begin array
+	scanArrayValue          // just finished array value
+	scanEndArray            // end array (implies scanArrayValue if possible)
+	scanSkipSpace           // space byte; can skip; known to be last "continue" result
+
+	// Stop.
+	scanEnd   // top-level value ended *before* this byte; known to be first "stop" result
+	scanError // hit an error, scanner.err.
+)
+
+// These values are stored in the parseState stack.
+// They give the current state of a composite value
+// being scanned. If the parser is inside a nested value
+// the parseState describes the nested state, outermost at entry 0.
+const (
+	parseObjectKey   = iota // parsing object key (before colon)
+	parseObjectValue        // parsing object value (after colon)
+	parseArrayValue         // parsing array value
+)
+
+// This limits the max nesting depth to prevent stack overflow.
+// This is permitted by https://tools.ietf.org/html/rfc7159#section-9
+const maxNestingDepth = 10000
+
+// reset prepares the scanner for use.
+// It must be called before calling s.step.
+func (s *scanner) reset() {
+	s.step = stateBeginValue
+	s.parseState = s.parseState[0:0]
+	s.err = nil
+	s.endTop = false
+}
+
+// eof tells the scanner that the end of input has been reached.
+// It returns a scan status just as s.step does.
+func (s *scanner) eof() int {
+	if s.err != nil {
+		return scanError
+	}
+	if s.endTop {
+		return scanEnd
+	}
+	s.step(s, ' ')
+	if s.endTop {
+		return scanEnd
+	}
+	if s.err == nil {
+		s.err = &SyntaxError{"unexpected end of JSON input", s.bytes}
+	}
+	return scanError
+}
+
+// pushParseState pushes a new parse state p onto the parse stack.
+// an error state is returned if maxNestingDepth was exceeded, otherwise successState is returned.
+func (s *scanner) pushParseState(c byte, newParseState int, successState int) int {
+	s.parseState = append(s.parseState, newParseState)
+	if len(s.parseState) <= maxNestingDepth {
+		return successState
+	}
+	return s.error(c, "exceeded max depth")
+}
+
+// popParseState pops a parse state (already obtained) off the stack
+// and updates s.step accordingly.
+func (s *scanner) popParseState() {
+	n := len(s.parseState) - 1
+	s.parseState = s.parseState[0:n]
+	if n == 0 {
+		s.step = stateEndTop
+		s.endTop = true
+	} else {
+		s.step = stateEndValue
+	}
+}
+
+func isSpace(c byte) bool {
+	return c <= ' ' && (c == ' ' || c == '\t' || c == '\r' || c == '\n')
+}
+
+// stateBeginValueOrEmpty is the state after reading `[`.
+func stateBeginValueOrEmpty(s *scanner, c byte) int {
+	if isSpace(c) {
+		return scanSkipSpace
+	}
+	if c == ']' {
+		return stateEndValue(s, c)
+	}
+	return stateBeginValue(s, c)
+}
+
+// stateBeginValue is the state at the beginning of the input.
+func stateBeginValue(s *scanner, c byte) int {
+	if isSpace(c) {
+		return scanSkipSpace
+	}
+	switch c {
+	case '{':
+		s.step = stateBeginStringOrEmpty
+		return s.pushParseState(c, parseObjectKey, scanBeginObject)
+	case '[':
+		s.step = stateBeginValueOrEmpty
+		return s.pushParseState(c, parseArrayValue, scanBeginArray)
+	case '"':
+		s.step = stateInString
+		return scanBeginLiteral
+	case '-':
+		s.step = stateNeg
+		return scanBeginLiteral
+	case '0': // beginning of 0.123
+		s.step = state0
+		return scanBeginLiteral
+	case 't': // beginning of true
+		s.step = stateT
+		return scanBeginLiteral
+	case 'f': // beginning of false
+		s.step = stateF
+		return scanBeginLiteral
+	case 'n': // beginning of null
+		s.step = stateN
+		return scanBeginLiteral
+	}
+	if '1' <= c && c <= '9' { // beginning of 1234.5
+		s.step = state1
+		return scanBeginLiteral
+	}
+	return s.error(c, "looking for beginning of value")
+}
+
+// stateBeginStringOrEmpty is the state after reading `{`.
+func stateBeginStringOrEmpty(s *scanner, c byte) int {
+	if isSpace(c) {
+		return scanSkipSpace
+	}
+	if c == '}' {
+		n := len(s.parseState)
+		s.parseState[n-1] = parseObjectValue
+		return stateEndValue(s, c)
+	}
+	return stateBeginString(s, c)
+}
+
+// stateBeginString is the state after reading `{"key": value,`.
+func stateBeginString(s *scanner, c byte) int {
+	if isSpace(c) {
+		return scanSkipSpace
+	}
+	if c == '"' {
+		s.step = stateInString
+		return scanBeginLiteral
+	}
+	return s.error(c, "looking for beginning of object key string")
+}
+
+// stateEndValue is the state after completing a value,
+// such as after reading `{}` or `true` or `["x"`.
+func stateEndValue(s *scanner, c byte) int {
+	n := len(s.parseState)
+	if n == 0 {
+		// Completed top-level before the current byte.
+		s.step = stateEndTop
+		s.endTop = true
+		return stateEndTop(s, c)
+	}
+	if isSpace(c) {
+		s.step = stateEndValue
+		return scanSkipSpace
+	}
+	ps := s.parseState[n-1]
+	switch ps {
+	case parseObjectKey:
+		if c == ':' {
+			s.parseState[n-1] = parseObjectValue
+			s.step = stateBeginValue
+			return scanObjectKey
+		}
+		return s.error(c, "after object key")
+	case parseObjectValue:
+		if c == ',' {
+			s.parseState[n-1] = parseObjectKey
+			s.step = stateBeginString
+			return scanObjectValue
+		}
+		if c == '}' {
+			s.popParseState()
+			return scanEndObject
+		}
+		return s.error(c, "after object key:value pair")
+	case parseArrayValue:
+		if c == ',' {
+			s.step = stateBeginValue
+			return scanArrayValue
+		}
+		if c == ']' {
+			s.popParseState()
+			return scanEndArray
+		}
+		return s.error(c, "after array element")
+	}
+	return s.error(c, "")
+}
+
+// stateEndTop is the state after finishing the top-level value,
+// such as after reading `{}` or `[1,2,3]`.
+// Only space characters should be seen now.
+func stateEndTop(s *scanner, c byte) int {
+	if !isSpace(c) {
+		// Complain about non-space byte on next call.
+		s.error(c, "after top-level value")
+	}
+	return scanEnd
+}
+
+// stateInString is the state after reading `"`.
+func stateInString(s *scanner, c byte) int {
+	if c == '"' {
+		s.step = stateEndValue
+		return scanContinue
+	}
+	if c == '\\' {
+		s.step = stateInStringEsc
+		return scanContinue
+	}
+	if c < 0x20 {
+		return s.error(c, "in string literal")
+	}
+	return scanContinue
+}
+
+// stateInStringEsc is the state after reading `"\` during a quoted string.
+func stateInStringEsc(s *scanner, c byte) int {
+	switch c {
+	case 'b', 'f', 'n', 'r', 't', '\\', '/', '"':
+		s.step = stateInString
+		return scanContinue
+	case 'u':
+		s.step = stateInStringEscU
+		return scanContinue
+	}
+	return s.error(c, "in string escape code")
+}
+
+// stateInStringEscU is the state after reading `"\u` during a quoted string.
+func stateInStringEscU(s *scanner, c byte) int {
+	if '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
+		s.step = stateInStringEscU1
+		return scanContinue
+	}
+	// numbers
+	return s.error(c, "in \\u hexadecimal character escape")
+}
+
+// stateInStringEscU1 is the state after reading `"\u1` during a quoted string.
+func stateInStringEscU1(s *scanner, c byte) int {
+	if '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
+		s.step = stateInStringEscU12
+		return scanContinue
+	}
+	// numbers
+	return s.error(c, "in \\u hexadecimal character escape")
+}
+
+// stateInStringEscU12 is the state after reading `"\u12` during a quoted string.
+func stateInStringEscU12(s *scanner, c byte) int {
+	if '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
+		s.step = stateInStringEscU123
+		return scanContinue
+	}
+	// numbers
+	return s.error(c, "in \\u hexadecimal character escape")
+}
+
+// stateInStringEscU123 is the state after reading `"\u123` during a quoted string.
+func stateInStringEscU123(s *scanner, c byte) int {
+	if '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F' {
+		s.step = stateInString
+		return scanContinue
+	}
+	// numbers
+	return s.error(c, "in \\u hexadecimal character escape")
+}
+
+// stateNeg is the state after reading `-` during a number.
+func stateNeg(s *scanner, c byte) int {
+	if c == '0' {
+		s.step = state0
+		return scanContinue
+	}
+	if '1' <= c && c <= '9' {
+		s.step = state1
+		return scanContinue
+	}
+	return s.error(c, "in numeric literal")
+}
+
+// state1 is the state after reading a non-zero integer during a number,
+// such as after reading `1` or `100` but not `0`.
+func state1(s *scanner, c byte) int {
+	if '0' <= c && c <= '9' {
+		s.step = state1
+		return scanContinue
+	}
+	return state0(s, c)
+}
+
+// state0 is the state after reading `0` during a number.
+func state0(s *scanner, c byte) int {
+	if c == '.' {
+		s.step = stateDot
+		return scanContinue
+	}
+	if c == 'e' || c == 'E' {
+		s.step = stateE
+		return scanContinue
+	}
+	return stateEndValue(s, c)
+}
+
+// stateDot is the state after reading the integer and decimal point in a number,
+// such as after reading `1.`.
+func stateDot(s *scanner, c byte) int {
+	if '0' <= c && c <= '9' {
+		s.step = stateDot0
+		return scanContinue
+	}
+	return s.error(c, "after decimal point in numeric literal")
+}
+
+// stateDot0 is the state after reading the integer, decimal point, and subsequent
+// digits of a number, such as after reading `3.14`.
+func stateDot0(s *scanner, c byte) int {
+	if '0' <= c && c <= '9' {
+		return scanContinue
+	}
+	if c == 'e' || c == 'E' {
+		s.step = stateE
+		return scanContinue
+	}
+	return stateEndValue(s, c)
+}
+
+// stateE is the state after reading the mantissa and e in a number,
+// such as after reading `314e` or `0.314e`.
+func stateE(s *scanner, c byte) int {
+	if c == '+' || c == '-' {
+		s.step = stateESign
+		return scanContinue
+	}
+	return stateESign(s, c)
+}
+
+// stateESign is the state after reading the mantissa, e, and sign in a number,
+// such as after reading `314e-` or `0.314e+`.
+func stateESign(s *scanner, c byte) int {
+	if '0' <= c && c <= '9' {
+		s.step = stateE0
+		return scanContinue
+	}
+	return s.error(c, "in exponent of numeric literal")
+}
+
+// stateE0 is the state after reading the mantissa, e, optional sign,
+// and at least one digit of the exponent in a number,
+// such as after reading `314e-2` or `0.314e+1` or `3.14e0`.
+func stateE0(s *scanner, c byte) int {
+	if '0' <= c && c <= '9' {
+		return scanContinue
+	}
+	return stateEndValue(s, c)
+}
+
+// stateT is the state after reading `t`.
+func stateT(s *scanner, c byte) int {
+	if c == 'r' {
+		s.step = stateTr
+		return scanContinue
+	}
+	return s.error(c, "in literal true (expecting 'r')")
+}
+
+// stateTr is the state after reading `tr`.
+func stateTr(s *scanner, c byte) int {
+	if c == 'u' {
+		s.step = stateTru
+		return scanContinue
+	}
+	return s.error(c, "in literal true (expecting 'u')")
+}
+
+// stateTru is the state after reading `tru`.
+func stateTru(s *scanner, c byte) int {
+	if c == 'e' {
+		s.step = stateEndValue
+		return scanContinue
+	}
+	return s.error(c, "in literal true (expecting 'e')")
+}
+
+// stateF is the state after reading `f`.
+func stateF(s *scanner, c byte) int {
+	if c == 'a' {
+		s.step = stateFa
+		return scanContinue
+	}
+	return s.error(c, "in literal false (expecting 'a')")
+}
+
+// stateFa is the state after reading `fa`.
+func stateFa(s *scanner, c byte) int {
+	if c == 'l' {
+		s.step = stateFal
+		return scanContinue
+	}
+	return s.error(c, "in literal false (expecting 'l')")
+}
+
+// stateFal is the state after reading `fal`.
+func stateFal(s *scanner, c byte) int {
+	if c == 's' {
+		s.step = stateFals
+		return scanContinue
+	}
+	return s.error(c, "in literal false (expecting 's')")
+}
+
+// stateFals is the state after reading `fals`.
+func stateFals(s *scanner, c byte) int {
+	if c == 'e' {
+		s.step = stateEndValue
+		return scanContinue
+	}
+	return s.error(c, "in literal false (expecting 'e')")
+}
+
+// stateN is the state after reading `n`.
+func stateN(s *scanner, c byte) int {
+	if c == 'u' {
+		s.step = stateNu
+		return scanContinue
+	}
+	return s.error(c, "in literal null (expecting 'u')")
+}
+
+// stateNu is the state after reading `nu`.
+func stateNu(s *scanner, c byte) int {
+	if c == 'l' {
+		s.step = stateNul
+		return scanContinue
+	}
+	return s.error(c, "in literal null (expecting 'l')")
+}
+
+// stateNul is the state after reading `nul`.
+func stateNul(s *scanner, c byte) int {
+	if c == 'l' {
+		s.step = stateEndValue
+		return scanContinue
+	}
+	return s.error(c, "in literal null (expecting 'l')")
+}
+
+// stateError is the state after reaching a syntax error,
+// such as after reading `[1}` or `5.1.2`.
+func stateError(s *scanner, c byte) int {
+	return scanError
+}
+
+// error records an error and switches to the error state.
+func (s *scanner) error(c byte, context string) int {
+	s.step = stateError
+	s.err = &SyntaxError{"invalid character " + quoteChar(c) + " " + context, s.bytes}
+	return scanError
+}
+
+// quoteChar formats c as a quoted character literal.
+func quoteChar(c byte) string {
+	// special cases - different from quoted strings
+	if c == '\'' {
+		return `'\''`
+	}
+	if c == '"' {
+		return `'"'`
+	}
+
+	// use quoted string with different quotation marks
+	s := strconv.Quote(string(c))
+	return "'" + s[1:len(s)-1] + "'"
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/sentinel/null.go

@@ -0,0 +1,57 @@
+package sentinel
+
+import (
+	"github.com/anthropics/anthropic-sdk-go/internal/encoding/json/shims"
+	"reflect"
+	"sync"
+)
+
+var nullPtrsCache sync.Map // map[reflect.Type]*T
+
+func NullPtr[T any]() *T {
+	t := shims.TypeFor[T]()
+	ptr, loaded := nullPtrsCache.Load(t) // avoid premature allocation
+	if !loaded {
+		ptr, _ = nullPtrsCache.LoadOrStore(t, new(T))
+	}
+	return (ptr.(*T))
+}
+
+var nullSlicesCache sync.Map // map[reflect.Type][]T
+
+func NullSlice[T any]() []T {
+	t := shims.TypeFor[T]()
+	slice, loaded := nullSlicesCache.Load(t) // avoid premature allocation
+	if !loaded {
+		slice, _ = nullSlicesCache.LoadOrStore(t, []T{})
+	}
+	return slice.([]T)
+}
+
+func IsNullPtr[T any](ptr *T) bool {
+	nullptr, ok := nullPtrsCache.Load(shims.TypeFor[T]())
+	return ok && ptr == nullptr.(*T)
+}
+
+func IsNullSlice[T any](slice []T) bool {
+	nullSlice, ok := nullSlicesCache.Load(shims.TypeFor[T]())
+	return ok && reflect.ValueOf(slice).Pointer() == reflect.ValueOf(nullSlice).Pointer()
+}
+
+// internal only
+func IsValueNullPtr(v reflect.Value) bool {
+	if v.Kind() != reflect.Ptr {
+		return false
+	}
+	nullptr, ok := nullPtrsCache.Load(v.Type().Elem())
+	return ok && v.Pointer() == reflect.ValueOf(nullptr).Pointer()
+}
+
+// internal only
+func IsValueNullSlice(v reflect.Value) bool {
+	if v.Kind() != reflect.Slice {
+		return false
+	}
+	nullSlice, ok := nullSlicesCache.Load(v.Type().Elem())
+	return ok && v.Pointer() == reflect.ValueOf(nullSlice).Pointer()
+}
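The sentinel package above caches one canonical pointer (or empty slice) per type, so that an "explicitly null" value can be told apart from any other non-nil value by pointer identity. A self-contained sketch of the same pattern — `nullPtr` and `isNullPtr` are hypothetical names mirroring `sentinel.NullPtr`/`sentinel.IsNullPtr`:

```go
package main

import (
	"fmt"
	"reflect"
	"sync"
)

// nullPtrs caches one canonical pointer per type; comparing against
// that pointer distinguishes "explicitly null" from any other value.
var nullPtrs sync.Map // reflect.Type -> pointer

func nullPtr[T any]() *T {
	t := reflect.TypeOf((*T)(nil)).Elem()
	if p, ok := nullPtrs.Load(t); ok { // avoid premature allocation
		return p.(*T)
	}
	p, _ := nullPtrs.LoadOrStore(t, new(T))
	return p.(*T)
}

func isNullPtr[T any](p *T) bool {
	return p == nullPtr[T]()
}

func main() {
	n := nullPtr[int]()
	other := new(int)
	fmt.Println(isNullPtr(n), isNullPtr(other)) // true false
}
```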

vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/shims/shims.go

@@ -0,0 +1,111 @@
+// This package provides shims over Go 1.22 and 1.23 APIs
+// which are missing from Go 1.21 and are used by the Go 1.24 encoding/json package.
+//
+// Inside the vendored package, all shim code has comments that look like
+// // SHIM(...): ...
+package shims
+
+import (
+	"encoding/base64"
+	"reflect"
+	"slices"
+)
+
+type OverflowableType struct{ reflect.Type }
+
+func (t OverflowableType) OverflowInt(x int64) bool {
+	k := t.Kind()
+	switch k {
+	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+		bitSize := t.Size() * 8
+		trunc := (x << (64 - bitSize)) >> (64 - bitSize)
+		return x != trunc
+	}
+	panic("reflect: OverflowInt of non-int type " + t.String())
+}
+
+func (t OverflowableType) OverflowUint(x uint64) bool {
+	k := t.Kind()
+	switch k {
+	case reflect.Uint, reflect.Uintptr, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+		bitSize := t.Size() * 8
+		trunc := (x << (64 - bitSize)) >> (64 - bitSize)
+		return x != trunc
+	}
+	panic("reflect: OverflowUint of non-uint type " + t.String())
+}
+
+// Original src code from Go 1.23 go/src/reflect/type.go (taken 1/9/25)
+/*
+
+func (t *rtype) OverflowInt(x int64) bool {
+	k := t.Kind()
+	switch k {
+	case Int, Int8, Int16, Int32, Int64:
+		bitSize := t.Size() * 8
+		trunc := (x << (64 - bitSize)) >> (64 - bitSize)
+		return x != trunc
+	}
+	panic("reflect: OverflowInt of non-int type " + t.String())
+}
+
+func (t *rtype) OverflowUint(x uint64) bool {
+	k := t.Kind()
+	switch k {
+	case Uint, Uintptr, Uint8, Uint16, Uint32, Uint64:
+		bitSize := t.Size() * 8
+		trunc := (x << (64 - bitSize)) >> (64 - bitSize)
+		return x != trunc
+	}
+	panic("reflect: OverflowUint of non-uint type " + t.String())
+}
+
+*/
+
+// TypeFor returns the [Type] that represents the type argument T.
+func TypeFor[T any]() reflect.Type {
+	var v T
+	if t := reflect.TypeOf(v); t != nil {
+		return t // optimize for T being a non-interface kind
+	}
+	return reflect.TypeOf((*T)(nil)).Elem() // only for an interface kind
+}
+
+// Original src code from Go 1.23 go/src/reflect/type.go (taken 1/9/25)
+/*
+
+// TypeFor returns the [Type] that represents the type argument T.
+func TypeFor[T any]() Type {
+	var v T
+	if t := TypeOf(v); t != nil {
+		return t // optimize for T being a non-interface kind
+	}
+	return TypeOf((*T)(nil)).Elem() // only for an interface kind
+}
+
+*/
+
+type AppendableStdEncoding struct{ *base64.Encoding }
+
+// AppendEncode appends the base64 encoded src to dst
+// and returns the extended buffer.
+func (enc AppendableStdEncoding) AppendEncode(dst, src []byte) []byte {
+	n := enc.EncodedLen(len(src))
+	dst = slices.Grow(dst, n)
+	enc.Encode(dst[len(dst):][:n], src)
+	return dst[:len(dst)+n]
+}
+
+// Original src code from Go 1.23.4 go/src/encoding/base64/base64.go (taken 1/9/25)
+/*
+
+// AppendEncode appends the base64 encoded src to dst
+// and returns the extended buffer.
+func (enc *Encoding) AppendEncode(dst, src []byte) []byte {
+	n := enc.EncodedLen(len(src))
+	dst = slices.Grow(dst, n)
+	enc.Encode(dst[len(dst):][:n], src)
+	return dst[:len(dst)+n]
+}
+
+*/

vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/stream.go

@@ -0,0 +1,512 @@
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package json
+
+import (
+	"bytes"
+	"errors"
+	"io"
+)
+
+// A Decoder reads and decodes JSON values from an input stream.
+type Decoder struct {
+	r       io.Reader
+	buf     []byte
+	d       decodeState
+	scanp   int   // start of unread data in buf
+	scanned int64 // amount of data already scanned
+	scan    scanner
+	err     error
+
+	tokenState int
+	tokenStack []int
+}
+
+// NewDecoder returns a new decoder that reads from r.
+//
+// The decoder introduces its own buffering and may
+// read data from r beyond the JSON values requested.
+func NewDecoder(r io.Reader) *Decoder {
+	return &Decoder{r: r}
+}
+
+// UseNumber causes the Decoder to unmarshal a number into an
+// interface value as a [Number] instead of as a float64.
+func (dec *Decoder) UseNumber() { dec.d.useNumber = true }
+
+// DisallowUnknownFields causes the Decoder to return an error when the destination
+// is a struct and the input contains object keys which do not match any
+// non-ignored, exported fields in the destination.
+func (dec *Decoder) DisallowUnknownFields() { dec.d.disallowUnknownFields = true }
+
+// Decode reads the next JSON-encoded value from its
+// input and stores it in the value pointed to by v.
+//
+// See the documentation for [Unmarshal] for details about
+// the conversion of JSON into a Go value.
+func (dec *Decoder) Decode(v any) error {
+	if dec.err != nil {
+		return dec.err
+	}
+
+	if err := dec.tokenPrepareForDecode(); err != nil {
+		return err
+	}
+
+	if !dec.tokenValueAllowed() {
+		return &SyntaxError{msg: "not at beginning of value", Offset: dec.InputOffset()}
+	}
+
+	// Read whole value into buffer.
+	n, err := dec.readValue()
+	if err != nil {
+		return err
+	}
+	dec.d.init(dec.buf[dec.scanp : dec.scanp+n])
+	dec.scanp += n
+
+	// Don't save err from unmarshal into dec.err:
+	// the connection is still usable since we read a complete JSON
+	// object from it before the error happened.
+	err = dec.d.unmarshal(v)
+
+	// fixup token streaming state
+	dec.tokenValueEnd()
+
+	return err
+}
+
+// Buffered returns a reader of the data remaining in the Decoder's
+// buffer. The reader is valid until the next call to [Decoder.Decode].
+func (dec *Decoder) Buffered() io.Reader {
+	return bytes.NewReader(dec.buf[dec.scanp:])
+}
+
+// readValue reads a JSON value into dec.buf.
+// It returns the length of the encoding.
+func (dec *Decoder) readValue() (int, error) {
+	dec.scan.reset()
+
+	scanp := dec.scanp
+	var err error
+Input:
+	// help the compiler see that scanp is never negative, so it can remove
+	// some bounds checks below.
+	for scanp >= 0 {
+
+		// Look in the buffer for a new value.
+		for ; scanp < len(dec.buf); scanp++ {
+			c := dec.buf[scanp]
+			dec.scan.bytes++
+			switch dec.scan.step(&dec.scan, c) {
+			case scanEnd:
+				// scanEnd is delayed one byte so we decrement
+				// the scanner bytes count by 1 to ensure that
+				// this value is correct in the next call of Decode.
+				dec.scan.bytes--
+				break Input
+			case scanEndObject, scanEndArray:
+				// scanEnd is delayed one byte.
+				// We might block trying to get that byte from src,
+				// so instead invent a space byte.
+				if stateEndValue(&dec.scan, ' ') == scanEnd {
+					scanp++
+					break Input
+				}
+			case scanError:
+				dec.err = dec.scan.err
+				return 0, dec.scan.err
+			}
+		}
+
+		// Did the last read have an error?
+		// Delayed until now to allow buffer scan.
+		if err != nil {
+			if err == io.EOF {
+				if dec.scan.step(&dec.scan, ' ') == scanEnd {
+					break Input
+				}
+				if nonSpace(dec.buf) {
+					err = io.ErrUnexpectedEOF
+				}
+			}
+			dec.err = err
+			return 0, err
+		}
+
+		n := scanp - dec.scanp
+		err = dec.refill()
+		scanp = dec.scanp + n
+	}
+	return scanp - dec.scanp, nil
+}
+
+func (dec *Decoder) refill() error {
+	// Make room to read more into the buffer.
+	// First slide down data already consumed.
+	if dec.scanp > 0 {
+		dec.scanned += int64(dec.scanp)
+		n := copy(dec.buf, dec.buf[dec.scanp:])
+		dec.buf = dec.buf[:n]
+		dec.scanp = 0
+	}
+
+	// Grow buffer if not large enough.
+	const minRead = 512
+	if cap(dec.buf)-len(dec.buf) < minRead {
+		newBuf := make([]byte, len(dec.buf), 2*cap(dec.buf)+minRead)
+		copy(newBuf, dec.buf)
+		dec.buf = newBuf
+	}
+
+	// Read. Delay error for next iteration (after scan).
+	n, err := dec.r.Read(dec.buf[len(dec.buf):cap(dec.buf)])
+	dec.buf = dec.buf[0 : len(dec.buf)+n]
+
+	return err
+}
+
+func nonSpace(b []byte) bool {
+	for _, c := range b {
+		if !isSpace(c) {
+			return true
+		}
+	}
+	return false
+}
+
+// An Encoder writes JSON values to an output stream.
+type Encoder struct {
+	w          io.Writer
+	err        error
+	escapeHTML bool
+
+	indentBuf    []byte
+	indentPrefix string
+	indentValue  string
+}
+
+// NewEncoder returns a new encoder that writes to w.
+func NewEncoder(w io.Writer) *Encoder {
+	return &Encoder{w: w, escapeHTML: true}
+}
+
+// Encode writes the JSON encoding of v to the stream,
+// with insignificant space characters elided,
+// followed by a newline character.
+//
+// See the documentation for [Marshal] for details about the
+// conversion of Go values to JSON.
+func (enc *Encoder) Encode(v any) error {
+	if enc.err != nil {
+		return enc.err
+	}
+
+	e := newEncodeState()
+	defer encodeStatePool.Put(e)
+
+	err := e.marshal(v, encOpts{escapeHTML: enc.escapeHTML})
+	if err != nil {
+		return err
+	}
+
+	// Terminate each value with a newline.
+	// This makes the output look a little nicer
+	// when debugging, and some kind of space
+	// is required if the encoded value was a number,
+	// so that the reader knows there aren't more
+	// digits coming.
+	e.WriteByte('\n')
+
+	b := e.Bytes()
+	if enc.indentPrefix != "" || enc.indentValue != "" {
+		enc.indentBuf, err = appendIndent(enc.indentBuf[:0], b, enc.indentPrefix, enc.indentValue)
+		if err != nil {
+			return err
+		}
+		b = enc.indentBuf
+	}
+	if _, err = enc.w.Write(b); err != nil {
+		enc.err = err
+	}
+	return err
+}
+
+// SetIndent instructs the encoder to format each subsequent encoded
+// value as if indented by the package-level function Indent(dst, src, prefix, indent).
+// Calling SetIndent("", "") disables indentation.
+func (enc *Encoder) SetIndent(prefix, indent string) {
+	enc.indentPrefix = prefix
+	enc.indentValue = indent
+}
+
+// SetEscapeHTML specifies whether problematic HTML characters
+// should be escaped inside JSON quoted strings.
+// The default behavior is to escape &, <, and > to \u0026, \u003c, and \u003e
+// to avoid certain safety problems that can arise when embedding JSON in HTML.
+//
+// In non-HTML settings where the escaping interferes with the readability
+// of the output, SetEscapeHTML(false) disables this behavior.
+func (enc *Encoder) SetEscapeHTML(on bool) {
+	enc.escapeHTML = on
+}
+
+// RawMessage is a raw encoded JSON value.
+// It implements [Marshaler] and [Unmarshaler] and can
+// be used to delay JSON decoding or precompute a JSON encoding.
+type RawMessage []byte
+
+// MarshalJSON returns m as the JSON encoding of m.
+func (m RawMessage) MarshalJSON() ([]byte, error) {
+	if m == nil {
+		return []byte("null"), nil
+	}
+	return m, nil
+}
+
+// UnmarshalJSON sets *m to a copy of data.
+func (m *RawMessage) UnmarshalJSON(data []byte) error {
+	if m == nil {
+		return errors.New("json.RawMessage: UnmarshalJSON on nil pointer")
+	}
+	*m = append((*m)[0:0], data...)
+	return nil
+}
+
+var _ Marshaler = (*RawMessage)(nil)
+var _ Unmarshaler = (*RawMessage)(nil)
+
+// A Token holds a value of one of these types:
+//
+//   - [Delim], for the four JSON delimiters [ ] { }
+//   - bool, for JSON booleans
+//   - float64, for JSON numbers
+//   - [Number], for JSON numbers
+//   - string, for JSON string literals
+//   - nil, for JSON null
+type Token any
+
+const (
+	tokenTopValue = iota
+	tokenArrayStart
+	tokenArrayValue
+	tokenArrayComma
+	tokenObjectStart
+	tokenObjectKey
+	tokenObjectColon
+	tokenObjectValue
+	tokenObjectComma
+)
+
+// advance tokenstate from a separator state to a value state
+func (dec *Decoder) tokenPrepareForDecode() error {
+	// Note: Not calling peek before switch, to avoid
+	// putting peek into the standard Decode path.
+	// peek is only called when using the Token API.
+	switch dec.tokenState {
+	case tokenArrayComma:
+		c, err := dec.peek()
+		if err != nil {
+			return err
+		}
+		if c != ',' {
+			return &SyntaxError{"expected comma after array element", dec.InputOffset()}
+		}
+		dec.scanp++
+		dec.tokenState = tokenArrayValue
+	case tokenObjectColon:
+		c, err := dec.peek()
+		if err != nil {
+			return err
+		}
+		if c != ':' {
+			return &SyntaxError{"expected colon after object key", dec.InputOffset()}
+		}
+		dec.scanp++
+		dec.tokenState = tokenObjectValue
+	}
+	return nil
+}
+
+func (dec *Decoder) tokenValueAllowed() bool {
+	switch dec.tokenState {
+	case tokenTopValue, tokenArrayStart, tokenArrayValue, tokenObjectValue:
+		return true
+	}
+	return false
+}
+
+func (dec *Decoder) tokenValueEnd() {
+	switch dec.tokenState {
+	case tokenArrayStart, tokenArrayValue:
+		dec.tokenState = tokenArrayComma
+	case tokenObjectValue:
+		dec.tokenState = tokenObjectComma
+	}
+}
+
+// A Delim is a JSON array or object delimiter, one of [ ] { or }.
+type Delim rune
+
+func (d Delim) String() string {
+	return string(d)
+}
+
+// Token returns the next JSON token in the input stream.
+// At the end of the input stream, Token returns nil, [io.EOF].
+//
+// Token guarantees that the delimiters [ ] { } it returns are
+// properly nested and matched: if Token encounters an unexpected
+// delimiter in the input, it will return an error.
+//
+// The input stream consists of basic JSON values—bool, string,
+// number, and null—along with delimiters [ ] { } of type [Delim]
+// to mark the start and end of arrays and objects.
+// Commas and colons are elided.
+func (dec *Decoder) Token() (Token, error) {
+	for {
+		c, err := dec.peek()
+		if err != nil {
+			return nil, err
+		}
+		switch c {
+		case '[':
+			if !dec.tokenValueAllowed() {
+				return dec.tokenError(c)
+			}
+			dec.scanp++
+			dec.tokenStack = append(dec.tokenStack, dec.tokenState)
+			dec.tokenState = tokenArrayStart
+			return Delim('['), nil
+
+		case ']':
+			if dec.tokenState != tokenArrayStart && dec.tokenState != tokenArrayComma {
+				return dec.tokenError(c)
+			}
+			dec.scanp++
+			dec.tokenState = dec.tokenStack[len(dec.tokenStack)-1]
+			dec.tokenStack = dec.tokenStack[:len(dec.tokenStack)-1]
+			dec.tokenValueEnd()
+			return Delim(']'), nil
+
+		case '{':
+			if !dec.tokenValueAllowed() {
+				return dec.tokenError(c)
+			}
+			dec.scanp++
+			dec.tokenStack = append(dec.tokenStack, dec.tokenState)
+			dec.tokenState = tokenObjectStart
+			return Delim('{'), nil
+
+		case '}':
+			if dec.tokenState != tokenObjectStart && dec.tokenState != tokenObjectComma {
+				return dec.tokenError(c)
+			}
+			dec.scanp++
+			dec.tokenState = dec.tokenStack[len(dec.tokenStack)-1]
+			dec.tokenStack = dec.tokenStack[:len(dec.tokenStack)-1]
+			dec.tokenValueEnd()
+			return Delim('}'), nil
+
+		case ':':
+			if dec.tokenState != tokenObjectColon {
+				return dec.tokenError(c)
+			}
+			dec.scanp++
+			dec.tokenState = tokenObjectValue
+			continue
+
+		case ',':
+			if dec.tokenState == tokenArrayComma {
+				dec.scanp++
+				dec.tokenState = tokenArrayValue
+				continue
+			}
+			if dec.tokenState == tokenObjectComma {
+				dec.scanp++
+				dec.tokenState = tokenObjectKey
+				continue
+			}
+			return dec.tokenError(c)
+
+		case '"':
+			if dec.tokenState == tokenObjectStart || dec.tokenState == tokenObjectKey {
+				var x string
+				old := dec.tokenState
+				dec.tokenState = tokenTopValue
+				err := dec.Decode(&x)
+				dec.tokenState = old
+				if err != nil {
+					return nil, err
+				}
+				dec.tokenState = tokenObjectColon
+				return x, nil
+			}
+			fallthrough
+
+		default:
+			if !dec.tokenValueAllowed() {
+				return dec.tokenError(c)
+			}
+			var x any
+			if err := dec.Decode(&x); err != nil {
+				return nil, err
+			}
+			return x, nil
+		}
+	}
+}
+
+func (dec *Decoder) tokenError(c byte) (Token, error) {
+	var context string
+	switch dec.tokenState {
+	case tokenTopValue:
+		context = " looking for beginning of value"
+	case tokenArrayStart, tokenArrayValue, tokenObjectValue:
+		context = " looking for beginning of value"
+	case tokenArrayComma:
+		context = " after array element"
+	case tokenObjectKey:
+		context = " looking for beginning of object key string"
+	case tokenObjectColon:
+		context = " after object key"
+	case tokenObjectComma:
+		context = " after object key:value pair"
+	}
+	return nil, &SyntaxError{"invalid character " + quoteChar(c) + context, dec.InputOffset()}
+}
+
+// More reports whether there is another element in the
+// current array or object being parsed.
+func (dec *Decoder) More() bool {
+	c, err := dec.peek()
+	return err == nil && c != ']' && c != '}'
+}
+
+func (dec *Decoder) peek() (byte, error) {
+	var err error
+	for {
+		for i := dec.scanp; i < len(dec.buf); i++ {
+			c := dec.buf[i]
+			if isSpace(c) {
+				continue
+			}
+			dec.scanp = i
+			return c, nil
+		}
+		// buffer has been scanned, now report any error
+		if err != nil {
+			return 0, err
+		}
+		err = dec.refill()
+	}
+}
+
+// InputOffset returns the input stream byte offset of the current decoder position.
+// The offset gives the location of the end of the most recently returned token
+// and the beginning of the next token.
+func (dec *Decoder) InputOffset() int64 {
+	return dec.scanned + int64(dec.scanp)
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/tables.go 🔗

@@ -0,0 +1,218 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package json
+
+import "unicode/utf8"
+
+// safeSet holds the value true if the ASCII character with the given array
+// position can be represented inside a JSON string without any further
+// escaping.
+//
+// All values are true except for the ASCII control characters (0-31), the
+// double quote ("), and the backslash character ("\").
+var safeSet = [utf8.RuneSelf]bool{
+	' ':      true,
+	'!':      true,
+	'"':      false,
+	'#':      true,
+	'$':      true,
+	'%':      true,
+	'&':      true,
+	'\'':     true,
+	'(':      true,
+	')':      true,
+	'*':      true,
+	'+':      true,
+	',':      true,
+	'-':      true,
+	'.':      true,
+	'/':      true,
+	'0':      true,
+	'1':      true,
+	'2':      true,
+	'3':      true,
+	'4':      true,
+	'5':      true,
+	'6':      true,
+	'7':      true,
+	'8':      true,
+	'9':      true,
+	':':      true,
+	';':      true,
+	'<':      true,
+	'=':      true,
+	'>':      true,
+	'?':      true,
+	'@':      true,
+	'A':      true,
+	'B':      true,
+	'C':      true,
+	'D':      true,
+	'E':      true,
+	'F':      true,
+	'G':      true,
+	'H':      true,
+	'I':      true,
+	'J':      true,
+	'K':      true,
+	'L':      true,
+	'M':      true,
+	'N':      true,
+	'O':      true,
+	'P':      true,
+	'Q':      true,
+	'R':      true,
+	'S':      true,
+	'T':      true,
+	'U':      true,
+	'V':      true,
+	'W':      true,
+	'X':      true,
+	'Y':      true,
+	'Z':      true,
+	'[':      true,
+	'\\':     false,
+	']':      true,
+	'^':      true,
+	'_':      true,
+	'`':      true,
+	'a':      true,
+	'b':      true,
+	'c':      true,
+	'd':      true,
+	'e':      true,
+	'f':      true,
+	'g':      true,
+	'h':      true,
+	'i':      true,
+	'j':      true,
+	'k':      true,
+	'l':      true,
+	'm':      true,
+	'n':      true,
+	'o':      true,
+	'p':      true,
+	'q':      true,
+	'r':      true,
+	's':      true,
+	't':      true,
+	'u':      true,
+	'v':      true,
+	'w':      true,
+	'x':      true,
+	'y':      true,
+	'z':      true,
+	'{':      true,
+	'|':      true,
+	'}':      true,
+	'~':      true,
+	'\u007f': true,
+}
+
+// htmlSafeSet holds the value true if the ASCII character with the given
+// array position can be safely represented inside a JSON string, embedded
+// inside of HTML <script> tags, without any additional escaping.
+//
+// All values are true except for the ASCII control characters (0-31), the
+// double quote ("), the backslash character ("\"), HTML opening and closing
+// tags ("<" and ">"), and the ampersand ("&").
+var htmlSafeSet = [utf8.RuneSelf]bool{
+	' ':      true,
+	'!':      true,
+	'"':      false,
+	'#':      true,
+	'$':      true,
+	'%':      true,
+	'&':      false,
+	'\'':     true,
+	'(':      true,
+	')':      true,
+	'*':      true,
+	'+':      true,
+	',':      true,
+	'-':      true,
+	'.':      true,
+	'/':      true,
+	'0':      true,
+	'1':      true,
+	'2':      true,
+	'3':      true,
+	'4':      true,
+	'5':      true,
+	'6':      true,
+	'7':      true,
+	'8':      true,
+	'9':      true,
+	':':      true,
+	';':      true,
+	'<':      false,
+	'=':      true,
+	'>':      false,
+	'?':      true,
+	'@':      true,
+	'A':      true,
+	'B':      true,
+	'C':      true,
+	'D':      true,
+	'E':      true,
+	'F':      true,
+	'G':      true,
+	'H':      true,
+	'I':      true,
+	'J':      true,
+	'K':      true,
+	'L':      true,
+	'M':      true,
+	'N':      true,
+	'O':      true,
+	'P':      true,
+	'Q':      true,
+	'R':      true,
+	'S':      true,
+	'T':      true,
+	'U':      true,
+	'V':      true,
+	'W':      true,
+	'X':      true,
+	'Y':      true,
+	'Z':      true,
+	'[':      true,
+	'\\':     false,
+	']':      true,
+	'^':      true,
+	'_':      true,
+	'`':      true,
+	'a':      true,
+	'b':      true,
+	'c':      true,
+	'd':      true,
+	'e':      true,
+	'f':      true,
+	'g':      true,
+	'h':      true,
+	'i':      true,
+	'j':      true,
+	'k':      true,
+	'l':      true,
+	'm':      true,
+	'n':      true,
+	'o':      true,
+	'p':      true,
+	'q':      true,
+	'r':      true,
+	's':      true,
+	't':      true,
+	'u':      true,
+	'v':      true,
+	'w':      true,
+	'x':      true,
+	'y':      true,
+	'z':      true,
+	'{':      true,
+	'|':      true,
+	'}':      true,
+	'~':      true,
+	'\u007f': true,
+}
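The `safeSet`/`htmlSafeSet` tables above drive the encoder's escaping decisions, and the behavior is observable through the standard library's identical API. A small sketch showing the effect of `SetEscapeHTML` (the characters marked `false` in `htmlSafeSet` are the ones that get `\uXXXX`-escaped by default):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// escape encodes v with or without HTML-safe escaping. With it on
// (the default), <, >, and & are written as \u003c, \u003e, and
// \u0026 so the output can be embedded inside <script> tags.
func escape(v any, htmlEscape bool) string {
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf)
	enc.SetEscapeHTML(htmlEscape)
	if err := enc.Encode(v); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(escape("a<b>&", true))  // "a\u003cb\u003e\u0026"
	fmt.Print(escape("a<b>&", false)) // "a<b>&"
}
```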

vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/tags.go 🔗

@@ -0,0 +1,38 @@
+// Copyright 2011 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package json
+
+import (
+	"strings"
+)
+
+// tagOptions is the string following a comma in a struct field's "json"
+// tag, or the empty string. It does not include the leading comma.
+type tagOptions string
+
+// parseTag splits a struct field's json tag into its name and
+// comma-separated options.
+func parseTag(tag string) (string, tagOptions) {
+	tag, opt, _ := strings.Cut(tag, ",")
+	return tag, tagOptions(opt)
+}
+
+// Contains reports whether the comma-separated list of options
+// contains the given option name. The name must appear as a
+// complete, comma-delimited item in the list.
+func (o tagOptions) Contains(optionName string) bool {
+	if len(o) == 0 {
+		return false
+	}
+	s := string(o)
+	for s != "" {
+		var name string
+		name, s, _ = strings.Cut(s, ",")
+		if name == optionName {
+			return true
+		}
+	}
+	return false
+}
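The `parseTag`/`Contains` pair above splits a `json` struct tag into a name and options. A standalone sketch of the same split (using only `strings.Cut`, as the vendored code does):

```go
package main

import (
	"fmt"
	"strings"
)

// splitTag mirrors parseTag: the name is everything before the
// first comma, and the remainder is the comma-separated options.
func splitTag(tag string) (name string, opts []string) {
	name, rest, _ := strings.Cut(tag, ",")
	for rest != "" {
		var o string
		o, rest, _ = strings.Cut(rest, ",")
		opts = append(opts, o)
	}
	return name, opts
}

func main() {
	name, opts := splitTag("count,omitempty,string")
	fmt.Println(name, opts) // count [omitempty string]
}
```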

vendor/github.com/anthropics/anthropic-sdk-go/internal/encoding/json/time.go 🔗

@@ -0,0 +1,61 @@
+// EDIT(begin): custom time marshaler
+package json
+
+import (
+	"github.com/anthropics/anthropic-sdk-go/internal/encoding/json/shims"
+	"reflect"
+	"time"
+)
+
+type TimeMarshaler interface {
+	MarshalJSONWithTimeLayout(string) []byte
+}
+
+func TimeLayout(fmt string) string {
+	switch fmt {
+	case "", "date-time":
+		return time.RFC3339
+	case "date":
+		return time.DateOnly
+	default:
+		return fmt
+	}
+}
+
+var timeType = shims.TypeFor[time.Time]()
+
+func newTimeEncoder() encoderFunc {
+	return func(e *encodeState, v reflect.Value, opts encOpts) {
+		t := v.Interface().(time.Time)
+		fmtted := t.Format(TimeLayout(opts.timefmt))
+		stringEncoder(e, reflect.ValueOf(fmtted), opts)
+	}
+}
+
+// continueWithTimeFmt wraps k in continuation-passing style,
+// setting the timefmt option before k runs.
+func continueWithTimeFmt(timefmt string, k encoderFunc) encoderFunc {
+	return func(e *encodeState, v reflect.Value, opts encOpts) {
+		opts.timefmt = timefmt
+		k(e, v, opts)
+	}
+}
+
+func timeMarshalEncoder(e *encodeState, v reflect.Value, opts encOpts) bool {
+	tm, ok := v.Interface().(TimeMarshaler)
+	if !ok {
+		return false
+	}
+
+	b := tm.MarshalJSONWithTimeLayout(opts.timefmt)
+	if b != nil {
+		e.Grow(len(b))
+		out := e.AvailableBuffer()
+		out, _ = appendCompact(out, b, opts.escapeHTML)
+		e.Buffer.Write(out)
+		return true
+	}
+
+	return false
+}
+
+// EDIT(end)

vendor/github.com/anthropics/anthropic-sdk-go/internal/paramutil/field.go 🔗

@@ -0,0 +1,30 @@
+package paramutil
+
+import (
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+)
+
+func AddrIfPresent[T comparable](v param.Opt[T]) *T {
+	if v.Valid() {
+		return &v.Value
+	}
+	return nil
+}
+
+func ToOpt[T comparable](v T, meta respjson.Field) param.Opt[T] {
+	if meta.Valid() {
+		return param.NewOpt(v)
+	} else if meta.Raw() == respjson.Null {
+		return param.Null[T]()
+	}
+	return param.Opt[T]{}
+}
+
+// Valid reports whether the value is neither omitted nor null.
+func Valid(v param.ParamStruct) bool {
+	if ovr, ok := v.Overrides(); ok {
+		return ovr != nil
+	}
+	return !param.IsNull(v) && !param.IsOmitted(v)
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/paramutil/sentinel.go 🔗

@@ -0,0 +1,31 @@
+package paramutil
+
+import (
+	"github.com/anthropics/anthropic-sdk-go/internal/encoding/json/sentinel"
+)
+
+// NullPtr returns a pointer to the zero value of the type T.
+// When used with [MarshalObject] or [MarshalUnion], it will be marshaled as null.
+//
+// It is unspecified behavior to mutate the value pointed to by the returned pointer.
+func NullPtr[T any]() *T {
+	return sentinel.NullPtr[T]()
+}
+
+// IsNullPtr returns true if the pointer was created by [NullPtr].
+func IsNullPtr[T any](ptr *T) bool {
+	return sentinel.IsNullPtr(ptr)
+}
+
+// NullSlice returns a non-nil slice with a length of 0.
+// When used with [MarshalObject] or [MarshalUnion], it will be marshaled as null.
+//
+// It is undefined behavior to mutate the slice returned by [NullSlice].
+func NullSlice[T any]() []T {
+	return sentinel.NullSlice[T]()
+}
+
+// IsNullSlice returns true if the slice was created by [NullSlice].
+func IsNullSlice[T any](slice []T) bool {
+	return sentinel.IsNullSlice(slice)
+}

vendor/github.com/anthropics/anthropic-sdk-go/internal/paramutil/union.go 🔗

@@ -0,0 +1,48 @@
+package paramutil
+
+import (
+	"fmt"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"reflect"
+)
+
+var paramUnionType = reflect.TypeOf(param.APIUnion{})
+
+// VariantFromUnion can be used to extract the present variant from a param union type.
+// A param union type is a struct with an embedded field of [APIUnion].
+func VariantFromUnion(u reflect.Value) (any, error) {
+	if u.Kind() == reflect.Ptr {
+		u = u.Elem()
+	}
+
+	if u.Kind() != reflect.Struct {
+		return nil, fmt.Errorf("param: cannot extract variant from non-struct union")
+	}
+
+	isUnion := false
+	nVariants := 0
+	variantIdx := -1
+	for i := 0; i < u.NumField(); i++ {
+		if !u.Field(i).IsZero() {
+			nVariants++
+			variantIdx = i
+		}
+		if u.Field(i).Type() == paramUnionType {
+			isUnion = u.Type().Field(i).Anonymous
+		}
+	}
+
+	if !isUnion {
+		return nil, fmt.Errorf("param: cannot extract variant from non-union")
+	}
+
+	if nVariants > 1 {
+		return nil, fmt.Errorf("param: cannot extract variant from union with multiple variants")
+	}
+
+	if nVariants == 0 {
+		return nil, fmt.Errorf("param: cannot extract variant from union with no variants")
+	}
+
+	return u.Field(variantIdx).Interface(), nil
+}
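`VariantFromUnion` above uses reflection to find the single populated field of a union struct. The sketch below reproduces just that scan (the `APIUnion` embed check is omitted, and `ContentUnion` is a made-up example type):

```go
package main

import (
	"errors"
	"fmt"
	"reflect"
)

// variant returns the single non-zero field of a struct, erroring
// when zero or more than one field is set, mirroring the counting
// loop in VariantFromUnion.
func variant(u any) (any, error) {
	v := reflect.ValueOf(u)
	if v.Kind() == reflect.Ptr {
		v = v.Elem()
	}
	if v.Kind() != reflect.Struct {
		return nil, errors.New("not a struct")
	}
	idx, n := -1, 0
	for i := 0; i < v.NumField(); i++ {
		if !v.Field(i).IsZero() {
			n++
			idx = i
		}
	}
	if n != 1 {
		return nil, fmt.Errorf("want exactly one variant, got %d", n)
	}
	return v.Field(idx).Interface(), nil
}

// ContentUnion is a hypothetical union: exactly one field is set.
type ContentUnion struct {
	Text  string
	Parts []string
}

func main() {
	got, err := variant(ContentUnion{Text: "hi"})
	fmt.Println(got, err) // hi <nil>
}
```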

vendor/github.com/anthropics/anthropic-sdk-go/internal/requestconfig/requestconfig.go 🔗

@@ -0,0 +1,629 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package requestconfig
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"fmt"
+	"io"
+	"math"
+	"math/rand"
+	"mime"
+	"net/http"
+	"net/url"
+	"runtime"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/anthropics/anthropic-sdk-go/internal"
+	"github.com/anthropics/anthropic-sdk-go/internal/apierror"
+	"github.com/anthropics/anthropic-sdk-go/internal/apiform"
+	"github.com/anthropics/anthropic-sdk-go/internal/apiquery"
+)
+
+func getDefaultHeaders() map[string]string {
+	return map[string]string{
+		"User-Agent": fmt.Sprintf("Anthropic/Go %s", internal.PackageVersion),
+	}
+}
+
+func getNormalizedOS() string {
+	switch runtime.GOOS {
+	case "ios":
+		return "iOS"
+	case "android":
+		return "Android"
+	case "darwin":
+		return "MacOS"
+	case "windows":
+		return "Windows"
+	case "freebsd":
+		return "FreeBSD"
+	case "openbsd":
+		return "OpenBSD"
+	case "linux":
+		return "Linux"
+	default:
+		return fmt.Sprintf("Other:%s", runtime.GOOS)
+	}
+}
+
+func getNormalizedArchitecture() string {
+	switch runtime.GOARCH {
+	case "386":
+		return "x32"
+	case "amd64":
+		return "x64"
+	case "arm":
+		return "arm"
+	case "arm64":
+		return "arm64"
+	default:
+		return fmt.Sprintf("other:%s", runtime.GOARCH)
+	}
+}
+
+func getPlatformProperties() map[string]string {
+	return map[string]string{
+		"X-Stainless-Lang":            "go",
+		"X-Stainless-Package-Version": internal.PackageVersion,
+		"X-Stainless-OS":              getNormalizedOS(),
+		"X-Stainless-Arch":            getNormalizedArchitecture(),
+		"X-Stainless-Runtime":         "go",
+		"X-Stainless-Runtime-Version": runtime.Version(),
+	}
+}
+
+type RequestOption interface {
+	Apply(*RequestConfig) error
+}
+
+type RequestOptionFunc func(*RequestConfig) error
+type PreRequestOptionFunc func(*RequestConfig) error
+
+func (s RequestOptionFunc) Apply(r *RequestConfig) error    { return s(r) }
+func (s PreRequestOptionFunc) Apply(r *RequestConfig) error { return s(r) }
+
+func NewRequestConfig(ctx context.Context, method string, u string, body any, dst any, opts ...RequestOption) (*RequestConfig, error) {
+	var reader io.Reader
+
+	contentType := "application/json"
+	hasSerializationFunc := false
+
+	if body, ok := body.(json.Marshaler); ok {
+		content, err := body.MarshalJSON()
+		if err != nil {
+			return nil, err
+		}
+		reader = bytes.NewBuffer(content)
+		hasSerializationFunc = true
+	}
+	if body, ok := body.(apiform.Marshaler); ok {
+		var (
+			content []byte
+			err     error
+		)
+		content, contentType, err = body.MarshalMultipart()
+		if err != nil {
+			return nil, err
+		}
+		reader = bytes.NewBuffer(content)
+		hasSerializationFunc = true
+	}
+	if body, ok := body.(apiquery.Queryer); ok {
+		hasSerializationFunc = true
+		q, err := body.URLQuery()
+		if err != nil {
+			return nil, err
+		}
+		params := q.Encode()
+		if params != "" {
+			u = u + "?" + params
+		}
+	}
+	if body, ok := body.([]byte); ok {
+		reader = bytes.NewBuffer(body)
+		hasSerializationFunc = true
+	}
+	if body, ok := body.(io.Reader); ok {
+		reader = body
+		hasSerializationFunc = true
+	}
+
+	// Fallback to json serialization if none of the serialization functions that we expect
+	// to see is present.
+	if body != nil && !hasSerializationFunc {
+		content, err := json.Marshal(body)
+		if err != nil {
+			return nil, err
+		}
+		reader = bytes.NewBuffer(content)
+	}
+
+	req, err := http.NewRequestWithContext(ctx, method, u, nil)
+	if err != nil {
+		return nil, err
+	}
+	if reader != nil {
+		req.Header.Set("Content-Type", contentType)
+	}
+
+	req.Header.Set("Accept", "application/json")
+	req.Header.Set("X-Stainless-Retry-Count", "0")
+	req.Header.Set("X-Stainless-Timeout", "0")
+	for k, v := range getDefaultHeaders() {
+		req.Header.Add(k, v)
+	}
+	req.Header.Set("anthropic-version", "2023-06-01")
+	for k, v := range getPlatformProperties() {
+		req.Header.Add(k, v)
+	}
+	cfg := RequestConfig{
+		MaxRetries: 2,
+		Context:    ctx,
+		Request:    req,
+		HTTPClient: http.DefaultClient,
+		Body:       reader,
+	}
+	cfg.ResponseBodyInto = dst
+	err = cfg.Apply(opts...)
+	if err != nil {
+		return nil, err
+	}
+
+	// This must run after `cfg.Apply(...)` above in case the request timeout gets modified. We also only
+	// apply our own logic for it if it's still "0" from above. If it's not, then it was deleted or modified
+	// by the user and we should respect that.
+	if req.Header.Get("X-Stainless-Timeout") == "0" {
+		if cfg.RequestTimeout == time.Duration(0) {
+			req.Header.Del("X-Stainless-Timeout")
+		} else {
+			req.Header.Set("X-Stainless-Timeout", strconv.Itoa(int(cfg.RequestTimeout.Seconds())))
+		}
+	}
+
+	return &cfg, nil
+}
+
+// This interface is primarily used to describe an [*http.Client], but also
+// supports custom HTTP implementations.
+type HTTPDoer interface {
+	Do(req *http.Request) (*http.Response, error)
+}
+
+// RequestConfig represents all the state related to one request.
+//
+// Editing the variables inside RequestConfig directly is an unstable API.
+// Prefer composing RequestOptions instead whenever possible.
+type RequestConfig struct {
+	MaxRetries     int
+	RequestTimeout time.Duration
+	Context        context.Context
+	Request        *http.Request
+	BaseURL        *url.URL
+	// DefaultBaseURL will be used if BaseURL is not explicitly overridden using
+	// WithBaseURL.
+	DefaultBaseURL *url.URL
+	CustomHTTPDoer HTTPDoer
+	HTTPClient     *http.Client
+	Middlewares    []middleware
+	APIKey         string
+	AuthToken      string
+	// If ResponseBodyInto not nil, then we will attempt to deserialize into
+	// ResponseBodyInto. If Destination is a []byte, then it will return the body as
+	// is.
+	ResponseBodyInto any
+	// ResponseInto copies the *http.Response of the corresponding request into the
+	// given address
+	ResponseInto **http.Response
+	Body         io.Reader
+}
+
+// middleware is exactly the same type as the Middleware type found in the [option] package,
+// but it is redeclared here for circular dependency issues.
+type middleware = func(*http.Request, middlewareNext) (*http.Response, error)
+
+// middlewareNext is exactly the same type as the MiddlewareNext type found in the [option] package,
+// but it is redeclared here for circular dependency issues.
+type middlewareNext = func(*http.Request) (*http.Response, error)
+
+func applyMiddleware(middleware middleware, next middlewareNext) middlewareNext {
+	return func(req *http.Request) (res *http.Response, err error) {
+		return middleware(req, next)
+	}
+}
+
+func shouldRetry(req *http.Request, res *http.Response) bool {
+	// If there is no way to recover the Body, then we shouldn't retry.
+	if req.Body != nil && req.GetBody == nil {
+		return false
+	}
+
+	// If there is no response, that indicates that there is a connection error
+	// so we retry the request.
+	if res == nil {
+		return true
+	}
+
+	// If the header explicitly wants a retry behavior, respect that over the
+	// http status code.
+	if res.Header.Get("x-should-retry") == "true" {
+		return true
+	}
+	if res.Header.Get("x-should-retry") == "false" {
+		return false
+	}
+
+	return res.StatusCode == http.StatusRequestTimeout ||
+		res.StatusCode == http.StatusConflict ||
+		res.StatusCode == http.StatusTooManyRequests ||
+		res.StatusCode >= http.StatusInternalServerError
+}
+
+func parseRetryAfterHeader(resp *http.Response) (time.Duration, bool) {
+	if resp == nil {
+		return 0, false
+	}
+
+	type retryData struct {
+		header string
+		units  time.Duration
+
+		// custom is used when the regular algorithm fails and is optional.
+		// the returned duration is used verbatim (units is not applied).
+		custom func(string) (time.Duration, bool)
+	}
+
+	nop := func(string) (time.Duration, bool) { return 0, false }
+
+	// the headers are listed in order of preference
+	retries := []retryData{
+		{
+			header: "Retry-After-Ms",
+			units:  time.Millisecond,
+			custom: nop,
+		},
+		{
+			header: "Retry-After",
+			units:  time.Second,
+
+			// retry-after values are expressed in either number of
+			// seconds or an HTTP-date indicating when to try again
+			custom: func(ra string) (time.Duration, bool) {
+				t, err := time.Parse(time.RFC1123, ra)
+				if err != nil {
+					return 0, false
+				}
+				return time.Until(t), true
+			},
+		},
+	}
+
+	for _, retry := range retries {
+		v := resp.Header.Get(retry.header)
+		if v == "" {
+			continue
+		}
+		if retryAfter, err := strconv.ParseFloat(v, 64); err == nil {
+			return time.Duration(retryAfter * float64(retry.units)), true
+		}
+		if d, ok := retry.custom(v); ok {
+			return d, true
+		}
+	}
+
+	return 0, false
+}
+
+// isBeforeContextDeadline reports whether the non-zero Time t is
+// before ctx's deadline. If ctx does not have a deadline, it
+// always reports true (the deadline is considered infinite).
+func isBeforeContextDeadline(t time.Time, ctx context.Context) bool {
+	d, ok := ctx.Deadline()
+	if !ok {
+		return true
+	}
+	return t.Before(d)
+}
+
+// bodyWithTimeout is an io.ReadCloser which can observe a context's cancel func
+// to handle timeouts etc. It wraps an existing io.ReadCloser.
+type bodyWithTimeout struct {
+	stop func() // stops the time.Timer waiting to cancel the request
+	rc   io.ReadCloser
+}
+
+func (b *bodyWithTimeout) Read(p []byte) (n int, err error) {
+	n, err = b.rc.Read(p)
+	if err == nil {
+		return n, nil
+	}
+	if err == io.EOF {
+		return n, err
+	}
+	return n, err
+}
+
+func (b *bodyWithTimeout) Close() error {
+	err := b.rc.Close()
+	b.stop()
+	return err
+}
+
+func retryDelay(res *http.Response, retryCount int) time.Duration {
+	// If the API asks us to wait a certain amount of time (and it's a reasonable amount),
+	// just do what it says.
+
+	if retryAfterDelay, ok := parseRetryAfterHeader(res); ok && 0 <= retryAfterDelay && retryAfterDelay < time.Minute {
+		return retryAfterDelay
+	}
+
+	maxDelay := 8 * time.Second
+	delay := time.Duration(0.5 * float64(time.Second) * math.Pow(2, float64(retryCount)))
+	if delay > maxDelay {
+		delay = maxDelay
+	}
+
+	jitter := rand.Int63n(int64(delay / 4))
+	delay -= time.Duration(jitter)
+	return delay
+}
+
+func (cfg *RequestConfig) Execute() (err error) {
+	if cfg.BaseURL == nil {
+		if cfg.DefaultBaseURL != nil {
+			cfg.BaseURL = cfg.DefaultBaseURL
+		} else {
+			return fmt.Errorf("requestconfig: base url is not set")
+		}
+	}
+
+	cfg.Request.URL, err = cfg.BaseURL.Parse(strings.TrimLeft(cfg.Request.URL.String(), "/"))
+	if err != nil {
+		return err
+	}
+
+	if cfg.Body != nil && cfg.Request.Body == nil {
+		switch body := cfg.Body.(type) {
+		case *bytes.Buffer:
+			b := body.Bytes()
+			cfg.Request.ContentLength = int64(body.Len())
+			cfg.Request.GetBody = func() (io.ReadCloser, error) { return io.NopCloser(bytes.NewReader(b)), nil }
+			cfg.Request.Body, _ = cfg.Request.GetBody()
+		case *bytes.Reader:
+			cfg.Request.ContentLength = int64(body.Len())
+			cfg.Request.GetBody = func() (io.ReadCloser, error) {
+				_, err := body.Seek(0, 0)
+				return io.NopCloser(body), err
+			}
+			cfg.Request.Body, _ = cfg.Request.GetBody()
+		default:
+			if rc, ok := body.(io.ReadCloser); ok {
+				cfg.Request.Body = rc
+			} else {
+				cfg.Request.Body = io.NopCloser(body)
+			}
+		}
+	}
+
+	handler := cfg.HTTPClient.Do
+	if cfg.CustomHTTPDoer != nil {
+		handler = cfg.CustomHTTPDoer.Do
+	}
+	for i := len(cfg.Middlewares) - 1; i >= 0; i -= 1 {
+		handler = applyMiddleware(cfg.Middlewares[i], handler)
+	}
+
+	// Don't send the current retry count in the headers if the caller modified the header defaults.
+	shouldSendRetryCount := cfg.Request.Header.Get("X-Stainless-Retry-Count") == "0"
+
+	var res *http.Response
+	var cancel context.CancelFunc
+	for retryCount := 0; retryCount <= cfg.MaxRetries; retryCount += 1 {
+		ctx := cfg.Request.Context()
+		if cfg.RequestTimeout != time.Duration(0) && isBeforeContextDeadline(time.Now().Add(cfg.RequestTimeout), ctx) {
+			ctx, cancel = context.WithTimeout(ctx, cfg.RequestTimeout)
+			defer func() {
+				// The cancel function is nil if it was handed off to be handled in a different scope.
+				if cancel != nil {
+					cancel()
+				}
+			}()
+		}
+
+		req := cfg.Request.Clone(ctx)
+		if shouldSendRetryCount {
+			req.Header.Set("X-Stainless-Retry-Count", strconv.Itoa(retryCount))
+		}
+
+		res, err = handler(req)
+		if ctx != nil && ctx.Err() != nil {
+			return ctx.Err()
+		}
+		if !shouldRetry(cfg.Request, res) || retryCount >= cfg.MaxRetries {
+			break
+		}
+
+		// Prepare next request and wait for the retry delay
+		if cfg.Request.GetBody != nil {
+			cfg.Request.Body, err = cfg.Request.GetBody()
+			if err != nil {
+				return err
+			}
+		}
+
+		// Can't actually refresh the body, so we don't attempt to retry here
+		if cfg.Request.GetBody == nil && cfg.Request.Body != nil {
+			break
+		}
+
+		time.Sleep(retryDelay(res, retryCount))
+	}
+
+	// Save the *http.Response if requested, even if there was an error making
+	// the request. This is useful in cases where you might want to debug by
+	// inspecting the response. Note that if err != nil, the response should
+	// generally be empty, but there are edge cases.
+	if cfg.ResponseInto != nil {
+		*cfg.ResponseInto = res
+	}
+	if responseBodyInto, ok := cfg.ResponseBodyInto.(**http.Response); ok {
+		*responseBodyInto = res
+	}
+
+	// If there was a connection error in the final request or any other transport error,
+	// return that early without trying to coerce into an APIError.
+	if err != nil {
+		return err
+	}
+
+	if res.StatusCode >= 400 {
+		contents, err := io.ReadAll(res.Body)
+		res.Body.Close()
+		if err != nil {
+			return err
+		}
+
+		// If there is an APIError, re-populate the response body so that debugging
+		// utilities can conveniently dump the response without issue.
+		res.Body = io.NopCloser(bytes.NewBuffer(contents))
+
+		// Load the contents into the error format if it is provided.
+		aerr := apierror.Error{Request: cfg.Request, Response: res, StatusCode: res.StatusCode}
+		err = aerr.UnmarshalJSON(contents)
+		if err != nil {
+			return err
+		}
+		return &aerr
+	}
+
+	_, intoCustomResponseBody := cfg.ResponseBodyInto.(**http.Response)
+	if cfg.ResponseBodyInto == nil || intoCustomResponseBody {
+		// We aren't reading the response body in this scope, but whoever is will need the
+		// cancel func from the context to observe request timeouts.
+		// Put the cancel function in the response body so it can be handled elsewhere.
+		if cancel != nil {
+			res.Body = &bodyWithTimeout{rc: res.Body, stop: cancel}
+			cancel = nil
+		}
+		return nil
+	}
+
+	contents, err := io.ReadAll(res.Body)
+	res.Body.Close()
+	if err != nil {
+		return fmt.Errorf("error reading response body: %w", err)
+	}
+
+	// If we are not json, return plaintext
+	contentType := res.Header.Get("content-type")
+	mediaType, _, _ := mime.ParseMediaType(contentType)
+	isJSON := strings.Contains(mediaType, "application/json") || strings.HasSuffix(mediaType, "+json")
+	if !isJSON {
+		switch dst := cfg.ResponseBodyInto.(type) {
+		case *string:
+			*dst = string(contents)
+		case **string:
+			tmp := string(contents)
+			*dst = &tmp
+		case *[]byte:
+			*dst = contents
+		default:
+			return fmt.Errorf("expected destination type of 'string' or '[]byte' for responses with content-type '%s' that is not 'application/json'", contentType)
+		}
+		return nil
+	}
+
+	// If the response happens to be a byte array, deserialize the body as-is.
+	switch dst := cfg.ResponseBodyInto.(type) {
+	case *[]byte:
+		*dst = contents
+	}
+
+	err = json.NewDecoder(bytes.NewReader(contents)).Decode(cfg.ResponseBodyInto)
+	if err != nil {
+		return fmt.Errorf("error parsing response json: %w", err)
+	}
+
+	return nil
+}
+
+func ExecuteNewRequest(ctx context.Context, method string, u string, body any, dst any, opts ...RequestOption) error {
+	cfg, err := NewRequestConfig(ctx, method, u, body, dst, opts...)
+	if err != nil {
+		return err
+	}
+	return cfg.Execute()
+}
+
+func (cfg *RequestConfig) Clone(ctx context.Context) *RequestConfig {
+	if cfg == nil {
+		return nil
+	}
+	req := cfg.Request.Clone(ctx)
+	var err error
+	if req.Body != nil {
+		req.Body, err = req.GetBody()
+	}
+	if err != nil {
+		return nil
+	}
+	new := &RequestConfig{
+		MaxRetries:     cfg.MaxRetries,
+		RequestTimeout: cfg.RequestTimeout,
+		Context:        ctx,
+		Request:        req,
+		BaseURL:        cfg.BaseURL,
+		HTTPClient:     cfg.HTTPClient,
+		Middlewares:    cfg.Middlewares,
+		APIKey:         cfg.APIKey,
+		AuthToken:      cfg.AuthToken,
+	}
+
+	return new
+}
+
+func (cfg *RequestConfig) Apply(opts ...RequestOption) error {
+	for _, opt := range opts {
+		err := opt.Apply(cfg)
+		if err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+// PreRequestOptions is used to collect all the options which need to be known before
+// a call to [RequestConfig.ExecuteNewRequest], such as path parameters
+// or global defaults.
+// PreRequestOptions will return a [RequestConfig] with the options applied.
+//
+// Only request option functions of type [PreRequestOptionFunc] are applied.
+func PreRequestOptions(opts ...RequestOption) (RequestConfig, error) {
+	cfg := RequestConfig{}
+	for _, opt := range opts {
+		if opt, ok := opt.(PreRequestOptionFunc); ok {
+			err := opt.Apply(&cfg)
+			if err != nil {
+				return cfg, err
+			}
+		}
+	}
+	return cfg, nil
+}
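PreRequestOptions filters by concrete option type: only values of type `PreRequestOptionFunc` are applied; everything else is deferred to the full request build. The mechanism is an ordinary type assertion in a loop, sketched here in miniature (all names below are stand-ins, not the SDK's):

```go
package main

import "fmt"

type config struct{ baseURL string }

// option is the general option interface.
type option interface{ apply(*config) }

// optionFunc is an ordinary option; preOption marks options that must
// run before the request is built, mirroring PreRequestOptionFunc.
type optionFunc func(*config)
type preOption func(*config)

func (f optionFunc) apply(c *config) { f(c) }
func (f preOption) apply(c *config)  { f(c) }

// applyPreOptions walks all options but applies only the preOption
// ones, the same type-assertion filter PreRequestOptions uses.
func applyPreOptions(opts ...option) config {
	var c config
	for _, o := range opts {
		if pre, ok := o.(preOption); ok {
			pre.apply(&c)
		}
	}
	return c
}

func main() {
	c := applyPreOptions(
		preOption(func(c *config) { c.baseURL = "https://example.invalid" }),
		optionFunc(func(c *config) { c.baseURL = "applied later, skipped here" }),
	)
	fmt.Println(c.baseURL)
}
```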
+
+// WithDefaultBaseURL returns a RequestOption that sets the client's default Base URL.
+// This is always overridden by setting a base URL with WithBaseURL.
+// WithBaseURL should be used instead of WithDefaultBaseURL except in internal code.
+func WithDefaultBaseURL(baseURL string) RequestOption {
+	u, err := url.Parse(baseURL)
+	return RequestOptionFunc(func(r *RequestConfig) error {
+		if err != nil {
+			return err
+		}
+		r.DefaultBaseURL = u
+		return nil
+	})
+}
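The JSON detection in the response handler above keys off the parsed media type, accepting both `application/json` and any `+json` structured-syntax suffix (e.g. `application/vnd.api+json`). A minimal stdlib sketch of that check; the function name is illustrative:

```go
package main

import (
	"fmt"
	"mime"
	"strings"
)

// isJSONContentType mirrors the check in the response handler: parse
// the media type, then accept application/json or any +json suffix.
func isJSONContentType(contentType string) bool {
	mediaType, _, _ := mime.ParseMediaType(contentType)
	return strings.Contains(mediaType, "application/json") ||
		strings.HasSuffix(mediaType, "+json")
}

func main() {
	for _, ct := range []string{
		"application/json; charset=utf-8", // parameters are stripped by ParseMediaType
		"application/vnd.api+json",        // structured-syntax suffix
		"text/plain",
	} {
		fmt.Printf("%-32s json=%v\n", ct, isJSONContentType(ct))
	}
}
```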

vendor/github.com/anthropics/anthropic-sdk-go/message.go 🔗

@@ -0,0 +1,4562 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package anthropic
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"net/http"
+	"time"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/apijson"
+	"github.com/anthropics/anthropic-sdk-go/internal/paramutil"
+	"github.com/anthropics/anthropic-sdk-go/internal/requestconfig"
+	"github.com/anthropics/anthropic-sdk-go/option"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+	"github.com/anthropics/anthropic-sdk-go/packages/ssestream"
+	"github.com/anthropics/anthropic-sdk-go/shared/constant"
+)
+
+// MessageService contains methods and other services that help with interacting
+// with the anthropic API.
+//
+// Note, unlike clients, this service does not read variables from the environment
+// automatically. You should not instantiate this service directly; use the
+// [NewMessageService] method instead.
+type MessageService struct {
+	Options []option.RequestOption
+	Batches MessageBatchService
+}
+
+// NewMessageService generates a new service that applies the given options to each
+// request. These options are applied after the parent client's options (if there
+// is one), and before any request-specific options.
+func NewMessageService(opts ...option.RequestOption) (r MessageService) {
+	r = MessageService{}
+	r.Options = opts
+	r.Batches = NewMessageBatchService(opts...)
+	return
+}
+
+// Send a structured list of input messages with text and/or image content, and the
+// model will generate the next message in the conversation.
+//
+// The Messages API can be used for either single queries or stateless multi-turn
+// conversations.
+//
+// Learn more about the Messages API in our [user guide](/en/docs/initial-setup)
+//
+// Note: If you choose to set a timeout for this request, we recommend 10 minutes.
+func (r *MessageService) New(ctx context.Context, body MessageNewParams, opts ...option.RequestOption) (res *Message, err error) {
+	opts = append(r.Options[:], opts...)
+
+	// For non-streaming requests, calculate the appropriate timeout based on maxTokens
+	// and check against model-specific limits
+	timeout, timeoutErr := CalculateNonStreamingTimeout(int(body.MaxTokens), body.Model, opts)
+	if timeoutErr != nil {
+		return nil, timeoutErr
+	}
+	opts = append(opts, option.WithRequestTimeout(timeout))
+
+	path := "v1/messages"
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, body, &res, opts...)
+	return
+}
+
+// Send a structured list of input messages with text and/or image content, and the
+// model will generate the next message in the conversation.
+//
+// The Messages API can be used for either single queries or stateless multi-turn
+// conversations.
+//
+// Learn more about the Messages API in our [user guide](/en/docs/initial-setup)
+//
+// Note: If you choose to set a timeout for this request, we recommend 10 minutes.
+func (r *MessageService) NewStreaming(ctx context.Context, body MessageNewParams, opts ...option.RequestOption) (stream *ssestream.Stream[MessageStreamEventUnion]) {
+	var (
+		raw *http.Response
+		err error
+	)
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithJSONSet("stream", true)}, opts...)
+	path := "v1/messages"
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, body, &raw, opts...)
+	return ssestream.NewStream[MessageStreamEventUnion](ssestream.NewDecoder(raw), err)
+}
+
+// Count the number of tokens in a Message.
+//
+// The Token Count API can be used to count the number of tokens in a Message,
+// including tools, images, and documents, without creating it.
+//
+// Learn more about token counting in our
+// [user guide](/en/docs/build-with-claude/token-counting)
+func (r *MessageService) CountTokens(ctx context.Context, body MessageCountTokensParams, opts ...option.RequestOption) (res *MessageTokensCount, err error) {
+	opts = append(r.Options[:], opts...)
+	path := "v1/messages/count_tokens"
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, body, &res, opts...)
+	return
+}
+
+// The properties Data, MediaType, Type are required.
+type Base64ImageSourceParam struct {
+	Data string `json:"data,required" format:"byte"`
+	// Any of "image/jpeg", "image/png", "image/gif", "image/webp".
+	MediaType Base64ImageSourceMediaType `json:"media_type,omitzero,required"`
+	// This field can be elided, and will marshal its zero value as "base64".
+	Type constant.Base64 `json:"type,required"`
+	paramObj
+}
+
+func (r Base64ImageSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow Base64ImageSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *Base64ImageSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type Base64ImageSourceMediaType string
+
+const (
+	Base64ImageSourceMediaTypeImageJPEG Base64ImageSourceMediaType = "image/jpeg"
+	Base64ImageSourceMediaTypeImagePNG  Base64ImageSourceMediaType = "image/png"
+	Base64ImageSourceMediaTypeImageGIF  Base64ImageSourceMediaType = "image/gif"
+	Base64ImageSourceMediaTypeImageWebP Base64ImageSourceMediaType = "image/webp"
+)
+
+// The properties Data, MediaType, Type are required.
+type Base64PDFSourceParam struct {
+	Data string `json:"data,required" format:"byte"`
+	// This field can be elided, and will marshal its zero value as "application/pdf".
+	MediaType constant.ApplicationPDF `json:"media_type,required"`
+	// This field can be elided, and will marshal its zero value as "base64".
+	Type constant.Base64 `json:"type,required"`
+	paramObj
+}
+
+func (r Base64PDFSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow Base64PDFSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *Base64PDFSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func NewCacheControlEphemeralParam() CacheControlEphemeralParam {
+	return CacheControlEphemeralParam{
+		Type: "ephemeral",
+	}
+}
+
+// This struct has a constant value; construct it with
+// [NewCacheControlEphemeralParam].
+type CacheControlEphemeralParam struct {
+	Type constant.Ephemeral `json:"type,required"`
+	paramObj
+}
+
+func (r CacheControlEphemeralParam) MarshalJSON() (data []byte, err error) {
+	type shadow CacheControlEphemeralParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *CacheControlEphemeralParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type CitationCharLocation struct {
+	CitedText      string                `json:"cited_text,required"`
+	DocumentIndex  int64                 `json:"document_index,required"`
+	DocumentTitle  string                `json:"document_title,required"`
+	EndCharIndex   int64                 `json:"end_char_index,required"`
+	StartCharIndex int64                 `json:"start_char_index,required"`
+	Type           constant.CharLocation `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CitedText      respjson.Field
+		DocumentIndex  respjson.Field
+		DocumentTitle  respjson.Field
+		EndCharIndex   respjson.Field
+		StartCharIndex respjson.Field
+		Type           respjson.Field
+		ExtraFields    map[string]respjson.Field
+		raw            string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r CitationCharLocation) RawJSON() string { return r.JSON.raw }
+func (r *CitationCharLocation) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties CitedText, DocumentIndex, DocumentTitle, EndCharIndex,
+// StartCharIndex, Type are required.
+type CitationCharLocationParam struct {
+	DocumentTitle  param.Opt[string] `json:"document_title,omitzero,required"`
+	CitedText      string            `json:"cited_text,required"`
+	DocumentIndex  int64             `json:"document_index,required"`
+	EndCharIndex   int64             `json:"end_char_index,required"`
+	StartCharIndex int64             `json:"start_char_index,required"`
+	// This field can be elided, and will marshal its zero value as "char_location".
+	Type constant.CharLocation `json:"type,required"`
+	paramObj
+}
+
+func (r CitationCharLocationParam) MarshalJSON() (data []byte, err error) {
+	type shadow CitationCharLocationParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *CitationCharLocationParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type CitationContentBlockLocation struct {
+	CitedText       string                        `json:"cited_text,required"`
+	DocumentIndex   int64                         `json:"document_index,required"`
+	DocumentTitle   string                        `json:"document_title,required"`
+	EndBlockIndex   int64                         `json:"end_block_index,required"`
+	StartBlockIndex int64                         `json:"start_block_index,required"`
+	Type            constant.ContentBlockLocation `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CitedText       respjson.Field
+		DocumentIndex   respjson.Field
+		DocumentTitle   respjson.Field
+		EndBlockIndex   respjson.Field
+		StartBlockIndex respjson.Field
+		Type            respjson.Field
+		ExtraFields     map[string]respjson.Field
+		raw             string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r CitationContentBlockLocation) RawJSON() string { return r.JSON.raw }
+func (r *CitationContentBlockLocation) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties CitedText, DocumentIndex, DocumentTitle, EndBlockIndex,
+// StartBlockIndex, Type are required.
+type CitationContentBlockLocationParam struct {
+	DocumentTitle   param.Opt[string] `json:"document_title,omitzero,required"`
+	CitedText       string            `json:"cited_text,required"`
+	DocumentIndex   int64             `json:"document_index,required"`
+	EndBlockIndex   int64             `json:"end_block_index,required"`
+	StartBlockIndex int64             `json:"start_block_index,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "content_block_location".
+	Type constant.ContentBlockLocation `json:"type,required"`
+	paramObj
+}
+
+func (r CitationContentBlockLocationParam) MarshalJSON() (data []byte, err error) {
+	type shadow CitationContentBlockLocationParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *CitationContentBlockLocationParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type CitationPageLocation struct {
+	CitedText       string                `json:"cited_text,required"`
+	DocumentIndex   int64                 `json:"document_index,required"`
+	DocumentTitle   string                `json:"document_title,required"`
+	EndPageNumber   int64                 `json:"end_page_number,required"`
+	StartPageNumber int64                 `json:"start_page_number,required"`
+	Type            constant.PageLocation `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CitedText       respjson.Field
+		DocumentIndex   respjson.Field
+		DocumentTitle   respjson.Field
+		EndPageNumber   respjson.Field
+		StartPageNumber respjson.Field
+		Type            respjson.Field
+		ExtraFields     map[string]respjson.Field
+		raw             string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r CitationPageLocation) RawJSON() string { return r.JSON.raw }
+func (r *CitationPageLocation) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties CitedText, DocumentIndex, DocumentTitle, EndPageNumber,
+// StartPageNumber, Type are required.
+type CitationPageLocationParam struct {
+	DocumentTitle   param.Opt[string] `json:"document_title,omitzero,required"`
+	CitedText       string            `json:"cited_text,required"`
+	DocumentIndex   int64             `json:"document_index,required"`
+	EndPageNumber   int64             `json:"end_page_number,required"`
+	StartPageNumber int64             `json:"start_page_number,required"`
+	// This field can be elided, and will marshal its zero value as "page_location".
+	Type constant.PageLocation `json:"type,required"`
+	paramObj
+}
+
+func (r CitationPageLocationParam) MarshalJSON() (data []byte, err error) {
+	type shadow CitationPageLocationParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *CitationPageLocationParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties CitedText, EncryptedIndex, Title, Type, URL are required.
+type CitationWebSearchResultLocationParam struct {
+	Title          param.Opt[string] `json:"title,omitzero,required"`
+	CitedText      string            `json:"cited_text,required"`
+	EncryptedIndex string            `json:"encrypted_index,required"`
+	URL            string            `json:"url,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "web_search_result_location".
+	Type constant.WebSearchResultLocation `json:"type,required"`
+	paramObj
+}
+
+func (r CitationWebSearchResultLocationParam) MarshalJSON() (data []byte, err error) {
+	type shadow CitationWebSearchResultLocationParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *CitationWebSearchResultLocationParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type CitationsConfigParam struct {
+	Enabled param.Opt[bool] `json:"enabled,omitzero"`
+	paramObj
+}
+
+func (r CitationsConfigParam) MarshalJSON() (data []byte, err error) {
+	type shadow CitationsConfigParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *CitationsConfigParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type CitationsDelta struct {
+	Citation CitationsDeltaCitationUnion `json:"citation,required"`
+	Type     constant.CitationsDelta     `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Citation    respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r CitationsDelta) RawJSON() string { return r.JSON.raw }
+func (r *CitationsDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// CitationsDeltaCitationUnion contains all possible properties and values from
+// [CitationCharLocation], [CitationPageLocation], [CitationContentBlockLocation],
+// [CitationsWebSearchResultLocation].
+//
+// Use the [CitationsDeltaCitationUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type CitationsDeltaCitationUnion struct {
+	CitedText     string `json:"cited_text"`
+	DocumentIndex int64  `json:"document_index"`
+	DocumentTitle string `json:"document_title"`
+	// This field is from variant [CitationCharLocation].
+	EndCharIndex int64 `json:"end_char_index"`
+	// This field is from variant [CitationCharLocation].
+	StartCharIndex int64 `json:"start_char_index"`
+	// Any of "char_location", "page_location", "content_block_location",
+	// "web_search_result_location".
+	Type string `json:"type"`
+	// This field is from variant [CitationPageLocation].
+	EndPageNumber int64 `json:"end_page_number"`
+	// This field is from variant [CitationPageLocation].
+	StartPageNumber int64 `json:"start_page_number"`
+	// This field is from variant [CitationContentBlockLocation].
+	EndBlockIndex int64 `json:"end_block_index"`
+	// This field is from variant [CitationContentBlockLocation].
+	StartBlockIndex int64 `json:"start_block_index"`
+	// This field is from variant [CitationsWebSearchResultLocation].
+	EncryptedIndex string `json:"encrypted_index"`
+	// This field is from variant [CitationsWebSearchResultLocation].
+	Title string `json:"title"`
+	// This field is from variant [CitationsWebSearchResultLocation].
+	URL  string `json:"url"`
+	JSON struct {
+		CitedText       respjson.Field
+		DocumentIndex   respjson.Field
+		DocumentTitle   respjson.Field
+		EndCharIndex    respjson.Field
+		StartCharIndex  respjson.Field
+		Type            respjson.Field
+		EndPageNumber   respjson.Field
+		StartPageNumber respjson.Field
+		EndBlockIndex   respjson.Field
+		StartBlockIndex respjson.Field
+		EncryptedIndex  respjson.Field
+		Title           respjson.Field
+		URL             respjson.Field
+		raw             string
+	} `json:"-"`
+}
+
+// anyCitationsDeltaCitation is implemented by each variant of
+// [CitationsDeltaCitationUnion] to add type safety for the return type of
+// [CitationsDeltaCitationUnion.AsAny]
+type anyCitationsDeltaCitation interface {
+	implCitationsDeltaCitationUnion()
+}
+
+func (CitationCharLocation) implCitationsDeltaCitationUnion()             {}
+func (CitationPageLocation) implCitationsDeltaCitationUnion()             {}
+func (CitationContentBlockLocation) implCitationsDeltaCitationUnion()     {}
+func (CitationsWebSearchResultLocation) implCitationsDeltaCitationUnion() {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := CitationsDeltaCitationUnion.AsAny().(type) {
+//	case anthropic.CitationCharLocation:
+//	case anthropic.CitationPageLocation:
+//	case anthropic.CitationContentBlockLocation:
+//	case anthropic.CitationsWebSearchResultLocation:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u CitationsDeltaCitationUnion) AsAny() anyCitationsDeltaCitation {
+	switch u.Type {
+	case "char_location":
+		return u.AsCharLocation()
+	case "page_location":
+		return u.AsPageLocation()
+	case "content_block_location":
+		return u.AsContentBlockLocation()
+	case "web_search_result_location":
+		return u.AsWebSearchResultLocation()
+	}
+	return nil
+}
+
+func (u CitationsDeltaCitationUnion) AsCharLocation() (v CitationCharLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u CitationsDeltaCitationUnion) AsPageLocation() (v CitationPageLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u CitationsDeltaCitationUnion) AsContentBlockLocation() (v CitationContentBlockLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u CitationsDeltaCitationUnion) AsWebSearchResultLocation() (v CitationsWebSearchResultLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u CitationsDeltaCitationUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *CitationsDeltaCitationUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type CitationsWebSearchResultLocation struct {
+	CitedText      string                           `json:"cited_text,required"`
+	EncryptedIndex string                           `json:"encrypted_index,required"`
+	Title          string                           `json:"title,required"`
+	Type           constant.WebSearchResultLocation `json:"type,required"`
+	URL            string                           `json:"url,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CitedText      respjson.Field
+		EncryptedIndex respjson.Field
+		Title          respjson.Field
+		Type           respjson.Field
+		URL            respjson.Field
+		ExtraFields    map[string]respjson.Field
+		raw            string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r CitationsWebSearchResultLocation) RawJSON() string { return r.JSON.raw }
+func (r *CitationsWebSearchResultLocation) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// ContentBlockUnion contains all possible properties and values from [TextBlock],
+// [ToolUseBlock], [ServerToolUseBlock], [WebSearchToolResultBlock],
+// [ThinkingBlock], [RedactedThinkingBlock].
+//
+// Use the [ContentBlockUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type ContentBlockUnion struct {
+	// This field is from variant [TextBlock].
+	Citations []TextCitationUnion `json:"citations"`
+	// This field is from variant [TextBlock].
+	Text string `json:"text"`
+	// Any of "text", "tool_use", "server_tool_use", "web_search_tool_result",
+	// "thinking", "redacted_thinking".
+	Type  string          `json:"type"`
+	ID    string          `json:"id"`
+	Input json.RawMessage `json:"input"`
+	Name  string          `json:"name"`
+	// This field is from variant [WebSearchToolResultBlock].
+	Content WebSearchToolResultBlockContentUnion `json:"content"`
+	// This field is from variant [WebSearchToolResultBlock].
+	ToolUseID string `json:"tool_use_id"`
+	// This field is from variant [ThinkingBlock].
+	Signature string `json:"signature"`
+	// This field is from variant [ThinkingBlock].
+	Thinking string `json:"thinking"`
+	// This field is from variant [RedactedThinkingBlock].
+	Data string `json:"data"`
+	JSON struct {
+		Citations respjson.Field
+		Text      respjson.Field
+		Type      respjson.Field
+		ID        respjson.Field
+		Input     respjson.Field
+		Name      respjson.Field
+		Content   respjson.Field
+		ToolUseID respjson.Field
+		Signature respjson.Field
+		Thinking  respjson.Field
+		Data      respjson.Field
+		raw       string
+	} `json:"-"`
+}
+
+func (r ContentBlockUnion) ToParam() ContentBlockParamUnion {
+	switch variant := r.AsAny().(type) {
+	case TextBlock:
+		p := variant.ToParam()
+		return ContentBlockParamUnion{OfText: &p}
+	case ToolUseBlock:
+		p := variant.ToParam()
+		return ContentBlockParamUnion{OfToolUse: &p}
+	case ThinkingBlock:
+		p := variant.ToParam()
+		return ContentBlockParamUnion{OfThinking: &p}
+	case RedactedThinkingBlock:
+		p := variant.ToParam()
+		return ContentBlockParamUnion{OfRedactedThinking: &p}
+	}
+	return ContentBlockParamUnion{}
+}
+
+// anyContentBlock is implemented by each variant of [ContentBlockUnion] to add
+// type safety for the return type of [ContentBlockUnion.AsAny]
+type anyContentBlock interface {
+	implContentBlockUnion()
+}
+
+func (TextBlock) implContentBlockUnion()                {}
+func (ToolUseBlock) implContentBlockUnion()             {}
+func (ServerToolUseBlock) implContentBlockUnion()       {}
+func (WebSearchToolResultBlock) implContentBlockUnion() {}
+func (ThinkingBlock) implContentBlockUnion()            {}
+func (RedactedThinkingBlock) implContentBlockUnion()    {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := ContentBlockUnion.AsAny().(type) {
+//	case anthropic.TextBlock:
+//	case anthropic.ToolUseBlock:
+//	case anthropic.ServerToolUseBlock:
+//	case anthropic.WebSearchToolResultBlock:
+//	case anthropic.ThinkingBlock:
+//	case anthropic.RedactedThinkingBlock:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u ContentBlockUnion) AsAny() anyContentBlock {
+	switch u.Type {
+	case "text":
+		return u.AsText()
+	case "tool_use":
+		return u.AsToolUse()
+	case "server_tool_use":
+		return u.AsServerToolUse()
+	case "web_search_tool_result":
+		return u.AsWebSearchToolResult()
+	case "thinking":
+		return u.AsThinking()
+	case "redacted_thinking":
+		return u.AsRedactedThinking()
+	}
+	return nil
+}
+
+func (u ContentBlockUnion) AsText() (v TextBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ContentBlockUnion) AsToolUse() (v ToolUseBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ContentBlockUnion) AsServerToolUse() (v ServerToolUseBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ContentBlockUnion) AsWebSearchToolResult() (v WebSearchToolResultBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ContentBlockUnion) AsThinking() (v ThinkingBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ContentBlockUnion) AsRedactedThinking() (v RedactedThinkingBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u ContentBlockUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *ContentBlockUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func NewServerToolUseBlock(id string, input any) ContentBlockParamUnion {
+	var serverToolUse ServerToolUseBlockParam
+	serverToolUse.ID = id
+	serverToolUse.Input = input
+	return ContentBlockParamUnion{OfServerToolUse: &serverToolUse}
+}
+
+func NewWebSearchToolResultBlock[
+	T []WebSearchResultBlockParam | WebSearchToolRequestErrorParam,
+](content T, toolUseID string) ContentBlockParamUnion {
+	var webSearchToolResult WebSearchToolResultBlockParam
+	switch v := any(content).(type) {
+	case []WebSearchResultBlockParam:
+		webSearchToolResult.Content.OfWebSearchToolResultBlockItem = v
+	case WebSearchToolRequestErrorParam:
+		webSearchToolResult.Content.OfRequestWebSearchToolResultError = &v
+	}
+	webSearchToolResult.ToolUseID = toolUseID
+	return ContentBlockParamUnion{OfWebSearchToolResult: &webSearchToolResult}
+}
+
+func NewTextBlock(text string) ContentBlockParamUnion {
+	var variant TextBlockParam
+	variant.Text = text
+	return ContentBlockParamUnion{OfText: &variant}
+}
+
+func NewImageBlock[T Base64ImageSourceParam | URLImageSourceParam](source T) ContentBlockParamUnion {
+	var image ImageBlockParam
+	switch v := any(source).(type) {
+	case Base64ImageSourceParam:
+		image.Source.OfBase64 = &v
+	case URLImageSourceParam:
+		image.Source.OfURL = &v
+	}
+	return ContentBlockParamUnion{OfImage: &image}
+}
+
+func NewImageBlockBase64(mediaType string, encodedData string) ContentBlockParamUnion {
+	return ContentBlockParamUnion{
+		OfImage: &ImageBlockParam{
+			Source: ImageBlockParamSourceUnion{
+				OfBase64: &Base64ImageSourceParam{
+					Data:      encodedData,
+					MediaType: Base64ImageSourceMediaType(mediaType),
+				},
+			},
+		},
+	}
+}
+
+func NewToolUseBlock(id string, input any, name string) ContentBlockParamUnion {
+	var toolUse ToolUseBlockParam
+	toolUse.ID = id
+	toolUse.Input = input
+	toolUse.Name = name
+	return ContentBlockParamUnion{OfToolUse: &toolUse}
+}
+
+func NewToolResultBlock(toolUseID string, content string, isError bool) ContentBlockParamUnion {
+	toolBlock := ToolResultBlockParam{
+		ToolUseID: toolUseID,
+		Content: []ToolResultBlockParamContentUnion{
+			{OfText: &TextBlockParam{Text: content}},
+		},
+		IsError: Bool(isError),
+	}
+	return ContentBlockParamUnion{OfToolResult: &toolBlock}
+}
+
+func NewDocumentBlock[
+	T Base64PDFSourceParam | PlainTextSourceParam | ContentBlockSourceParam | URLPDFSourceParam,
+](source T) ContentBlockParamUnion {
+	var document DocumentBlockParam
+	switch v := any(source).(type) {
+	case Base64PDFSourceParam:
+		document.Source.OfBase64 = &v
+	case PlainTextSourceParam:
+		document.Source.OfText = &v
+	case ContentBlockSourceParam:
+		document.Source.OfContent = &v
+	case URLPDFSourceParam:
+		document.Source.OfURL = &v
+	}
+	return ContentBlockParamUnion{OfDocument: &document}
+}
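Constructors like NewDocumentBlock and NewImageBlock combine a generic type set (`T A | B | ...`) for compile-time checking of the accepted source types with an `any` type switch at runtime to route the value into the right union field. In miniature (types and names are illustrative):

```go
package main

import "fmt"

type base64Source struct{ Data string }
type urlSource struct{ URL string }

// source is a tiny union: exactly one field is set per value.
type source struct {
	OfBase64 *base64Source
	OfURL    *urlSource
}

// newSource compiles only for the two listed types; the runtime switch
// then fills the matching union arm, as NewImageBlock does.
func newSource[T base64Source | urlSource](v T) source {
	var s source
	switch v := any(v).(type) {
	case base64Source:
		s.OfBase64 = &v
	case urlSource:
		s.OfURL = &v
	}
	return s
}

func main() {
	s := newSource(urlSource{URL: "https://example.invalid/doc.pdf"})
	fmt.Println(s.OfURL != nil, s.OfBase64 == nil)
	// newSource("plain string") would not compile: string is outside
	// the type set, which is the point of the generic constraint.
}
```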
+
+func NewThinkingBlock(signature string, thinking string) ContentBlockParamUnion {
+	var variant ThinkingBlockParam
+	variant.Signature = signature
+	variant.Thinking = thinking
+	return ContentBlockParamUnion{OfThinking: &variant}
+}
+
+func NewRedactedThinkingBlock(data string) ContentBlockParamUnion {
+	var redactedThinking RedactedThinkingBlockParam
+	redactedThinking.Data = data
+	return ContentBlockParamUnion{OfRedactedThinking: &redactedThinking}
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type ContentBlockParamUnion struct {
+	OfServerToolUse       *ServerToolUseBlockParam       `json:",omitzero,inline"`
+	OfWebSearchToolResult *WebSearchToolResultBlockParam `json:",omitzero,inline"`
+	OfText                *TextBlockParam                `json:",omitzero,inline"`
+	OfImage               *ImageBlockParam               `json:",omitzero,inline"`
+	OfToolUse             *ToolUseBlockParam             `json:",omitzero,inline"`
+	OfToolResult          *ToolResultBlockParam          `json:",omitzero,inline"`
+	OfDocument            *DocumentBlockParam            `json:",omitzero,inline"`
+	OfThinking            *ThinkingBlockParam            `json:",omitzero,inline"`
+	OfRedactedThinking    *RedactedThinkingBlockParam    `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u ContentBlockParamUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfServerToolUse,
+		u.OfWebSearchToolResult,
+		u.OfText,
+		u.OfImage,
+		u.OfToolUse,
+		u.OfToolResult,
+		u.OfDocument,
+		u.OfThinking,
+		u.OfRedactedThinking)
+}
+func (u *ContentBlockParamUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *ContentBlockParamUnion) asAny() any {
+	if !param.IsOmitted(u.OfServerToolUse) {
+		return u.OfServerToolUse
+	} else if !param.IsOmitted(u.OfWebSearchToolResult) {
+		return u.OfWebSearchToolResult
+	} else if !param.IsOmitted(u.OfText) {
+		return u.OfText
+	} else if !param.IsOmitted(u.OfImage) {
+		return u.OfImage
+	} else if !param.IsOmitted(u.OfToolUse) {
+		return u.OfToolUse
+	} else if !param.IsOmitted(u.OfToolResult) {
+		return u.OfToolResult
+	} else if !param.IsOmitted(u.OfDocument) {
+		return u.OfDocument
+	} else if !param.IsOmitted(u.OfThinking) {
+		return u.OfThinking
+	} else if !param.IsOmitted(u.OfRedactedThinking) {
+		return u.OfRedactedThinking
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ContentBlockParamUnion) GetText() *string {
+	if vt := u.OfText; vt != nil {
+		return &vt.Text
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ContentBlockParamUnion) GetIsError() *bool {
+	if vt := u.OfToolResult; vt != nil && vt.IsError.Valid() {
+		return &vt.IsError.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ContentBlockParamUnion) GetContext() *string {
+	if vt := u.OfDocument; vt != nil && vt.Context.Valid() {
+		return &vt.Context.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ContentBlockParamUnion) GetTitle() *string {
+	if vt := u.OfDocument; vt != nil && vt.Title.Valid() {
+		return &vt.Title.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ContentBlockParamUnion) GetSignature() *string {
+	if vt := u.OfThinking; vt != nil {
+		return &vt.Signature
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ContentBlockParamUnion) GetThinking() *string {
+	if vt := u.OfThinking; vt != nil {
+		return &vt.Thinking
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ContentBlockParamUnion) GetData() *string {
+	if vt := u.OfRedactedThinking; vt != nil {
+		return &vt.Data
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ContentBlockParamUnion) GetID() *string {
+	if vt := u.OfServerToolUse; vt != nil {
+		return (*string)(&vt.ID)
+	} else if vt := u.OfToolUse; vt != nil {
+		return (*string)(&vt.ID)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ContentBlockParamUnion) GetName() *string {
+	if vt := u.OfServerToolUse; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfToolUse; vt != nil {
+		return (*string)(&vt.Name)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ContentBlockParamUnion) GetType() *string {
+	if vt := u.OfServerToolUse; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfWebSearchToolResult; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfText; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfImage; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfToolUse; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfToolResult; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfDocument; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfThinking; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfRedactedThinking; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ContentBlockParamUnion) GetToolUseID() *string {
+	if vt := u.OfWebSearchToolResult; vt != nil {
+		return (*string)(&vt.ToolUseID)
+	} else if vt := u.OfToolResult; vt != nil {
+		return (*string)(&vt.ToolUseID)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's Input property, if present.
+func (u ContentBlockParamUnion) GetInput() *any {
+	if vt := u.OfServerToolUse; vt != nil {
+		return &vt.Input
+	} else if vt := u.OfToolUse; vt != nil {
+		return &vt.Input
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's CacheControl property, if present.
+func (u ContentBlockParamUnion) GetCacheControl() *CacheControlEphemeralParam {
+	if vt := u.OfServerToolUse; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfWebSearchToolResult; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfText; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfImage; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfToolUse; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfToolResult; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfDocument; vt != nil {
+		return &vt.CacheControl
+	}
+	return nil
+}
+
+// Returns a subunion which exports methods to access subproperties
+//
+// Or use AsAny() to get the underlying value
+func (u ContentBlockParamUnion) GetContent() (res contentBlockParamUnionContent) {
+	if vt := u.OfWebSearchToolResult; vt != nil {
+		res.any = vt.Content.asAny()
+	} else if vt := u.OfToolResult; vt != nil {
+		res.any = &vt.Content
+	}
+	return
+}
+
+// Can have the runtime types [*[]WebSearchResultBlockParam],
+// [*[]ToolResultBlockParamContentUnion]
+type contentBlockParamUnionContent struct{ any }
+
+// Use the following switch statement to get the type of the union:
+//
+//	switch u.AsAny().(type) {
+//	case *[]anthropic.WebSearchResultBlockParam:
+//	case *[]anthropic.ToolResultBlockParamContentUnion:
+//	default:
+//	    fmt.Errorf("not present")
+//	}
+func (u contentBlockParamUnionContent) AsAny() any { return u.any }
+
+// Returns a subunion which exports methods to access subproperties
+//
+// Or use AsAny() to get the underlying value
+func (u ContentBlockParamUnion) GetCitations() (res contentBlockParamUnionCitations) {
+	if vt := u.OfText; vt != nil {
+		res.any = &vt.Citations
+	} else if vt := u.OfDocument; vt != nil {
+		res.any = &vt.Citations
+	}
+	return
+}
+
+// Can have the runtime types [*[]TextCitationParamUnion], [*CitationsConfigParam]
+type contentBlockParamUnionCitations struct{ any }
+
+// Use the following switch statement to get the type of the union:
+//
+//	switch u.AsAny().(type) {
+//	case *[]anthropic.TextCitationParamUnion:
+//	case *anthropic.CitationsConfigParam:
+//	default:
+//	    fmt.Errorf("not present")
+//	}
+func (u contentBlockParamUnionCitations) AsAny() any { return u.any }
+
+// Returns a subunion which exports methods to access subproperties
+//
+// Or use AsAny() to get the underlying value
+func (u ContentBlockParamUnion) GetSource() (res contentBlockParamUnionSource) {
+	if vt := u.OfImage; vt != nil {
+		res.any = vt.Source.asAny()
+	} else if vt := u.OfDocument; vt != nil {
+		res.any = vt.Source.asAny()
+	}
+	return
+}
+
+// Can have the runtime types [*Base64ImageSourceParam], [*URLImageSourceParam],
+// [*Base64PDFSourceParam], [*PlainTextSourceParam], [*ContentBlockSourceParam],
+// [*URLPDFSourceParam]
+type contentBlockParamUnionSource struct{ any }
+
+// Use the following switch statement to get the type of the union:
+//
+//	switch u.AsAny().(type) {
+//	case *anthropic.Base64ImageSourceParam:
+//	case *anthropic.URLImageSourceParam:
+//	case *anthropic.Base64PDFSourceParam:
+//	case *anthropic.PlainTextSourceParam:
+//	case *anthropic.ContentBlockSourceParam:
+//	case *anthropic.URLPDFSourceParam:
+//	default:
+//	    fmt.Errorf("not present")
+//	}
+func (u contentBlockParamUnionSource) AsAny() any { return u.any }
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u contentBlockParamUnionSource) GetContent() *ContentBlockSourceContentUnionParam {
+	switch vt := u.any.(type) {
+	case *DocumentBlockParamSourceUnion:
+		return vt.GetContent()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u contentBlockParamUnionSource) GetData() *string {
+	switch vt := u.any.(type) {
+	case *ImageBlockParamSourceUnion:
+		return vt.GetData()
+	case *DocumentBlockParamSourceUnion:
+		return vt.GetData()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u contentBlockParamUnionSource) GetMediaType() *string {
+	switch vt := u.any.(type) {
+	case *ImageBlockParamSourceUnion:
+		return vt.GetMediaType()
+	case *DocumentBlockParamSourceUnion:
+		return vt.GetMediaType()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u contentBlockParamUnionSource) GetType() *string {
+	switch vt := u.any.(type) {
+	case *ImageBlockParamSourceUnion:
+		return vt.GetType()
+	case *DocumentBlockParamSourceUnion:
+		return vt.GetType()
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u contentBlockParamUnionSource) GetURL() *string {
+	switch vt := u.any.(type) {
+	case *ImageBlockParamSourceUnion:
+		return vt.GetURL()
+	case *DocumentBlockParamSourceUnion:
+		return vt.GetURL()
+	}
+	return nil
+}
+
+func init() {
+	apijson.RegisterUnion[ContentBlockParamUnion](
+		"type",
+		apijson.Discriminator[ServerToolUseBlockParam]("server_tool_use"),
+		apijson.Discriminator[WebSearchToolResultBlockParam]("web_search_tool_result"),
+		apijson.Discriminator[TextBlockParam]("text"),
+		apijson.Discriminator[ImageBlockParam]("image"),
+		apijson.Discriminator[ToolUseBlockParam]("tool_use"),
+		apijson.Discriminator[ToolResultBlockParam]("tool_result"),
+		apijson.Discriminator[DocumentBlockParam]("document"),
+		apijson.Discriminator[ThinkingBlockParam]("thinking"),
+		apijson.Discriminator[RedactedThinkingBlockParam]("redacted_thinking"),
+	)
+}
+
+func init() {
+	apijson.RegisterUnion[DocumentBlockParamSourceUnion](
+		"type",
+		apijson.Discriminator[Base64PDFSourceParam]("base64"),
+		apijson.Discriminator[PlainTextSourceParam]("text"),
+		apijson.Discriminator[ContentBlockSourceParam]("content"),
+		apijson.Discriminator[URLPDFSourceParam]("url"),
+	)
+}
+
+func init() {
+	apijson.RegisterUnion[ImageBlockParamSourceUnion](
+		"type",
+		apijson.Discriminator[Base64ImageSourceParam]("base64"),
+		apijson.Discriminator[URLImageSourceParam]("url"),
+	)
+}
+
+func init() {
+	apijson.RegisterUnion[TextCitationParamUnion](
+		"type",
+		apijson.Discriminator[CitationCharLocationParam]("char_location"),
+		apijson.Discriminator[CitationPageLocationParam]("page_location"),
+		apijson.Discriminator[CitationContentBlockLocationParam]("content_block_location"),
+		apijson.Discriminator[CitationWebSearchResultLocationParam]("web_search_result_location"),
+	)
+}
+
+func init() {
+	apijson.RegisterUnion[ThinkingConfigParamUnion](
+		"type",
+		apijson.Discriminator[ThinkingConfigEnabledParam]("enabled"),
+		apijson.Discriminator[ThinkingConfigDisabledParam]("disabled"),
+	)
+}
+
+func init() {
+	apijson.RegisterUnion[ToolChoiceUnionParam](
+		"type",
+		apijson.Discriminator[ToolChoiceAutoParam]("auto"),
+		apijson.Discriminator[ToolChoiceAnyParam]("any"),
+		apijson.Discriminator[ToolChoiceToolParam]("tool"),
+		apijson.Discriminator[ToolChoiceNoneParam]("none"),
+	)
+}
+
+func init() {
+	apijson.RegisterUnion[ToolResultBlockParamContentUnion](
+		"type",
+		apijson.Discriminator[TextBlockParam]("text"),
+		apijson.Discriminator[ImageBlockParam]("image"),
+	)
+}
+
+// The properties Content, Type are required.
+type ContentBlockSourceParam struct {
+	Content ContentBlockSourceContentUnionParam `json:"content,omitzero,required"`
+	// This field can be elided, and will marshal its zero value as "content".
+	Type constant.Content `json:"type,required"`
+	paramObj
+}
+
+func (r ContentBlockSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow ContentBlockSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ContentBlockSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type ContentBlockSourceContentUnionParam struct {
+	OfString                    param.Opt[string]                     `json:",omitzero,inline"`
+	OfContentBlockSourceContent []ContentBlockSourceContentUnionParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u ContentBlockSourceContentUnionParam) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfString, u.OfContentBlockSourceContent)
+}
+func (u *ContentBlockSourceContentUnionParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *ContentBlockSourceContentUnionParam) asAny() any {
+	if !param.IsOmitted(u.OfString) {
+		return &u.OfString.Value
+	} else if !param.IsOmitted(u.OfContentBlockSourceContent) {
+		return &u.OfContentBlockSourceContent
+	}
+	return nil
+}
+
+// The properties Source, Type are required.
+type DocumentBlockParam struct {
+	Source  DocumentBlockParamSourceUnion `json:"source,omitzero,required"`
+	Context param.Opt[string]             `json:"context,omitzero"`
+	Title   param.Opt[string]             `json:"title,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam `json:"cache_control,omitzero"`
+	Citations    CitationsConfigParam       `json:"citations,omitzero"`
+	// This field can be elided, and will marshal its zero value as "document".
+	Type constant.Document `json:"type,required"`
+	paramObj
+}
+
+func (r DocumentBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow DocumentBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *DocumentBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type DocumentBlockParamSourceUnion struct {
+	OfBase64  *Base64PDFSourceParam    `json:",omitzero,inline"`
+	OfText    *PlainTextSourceParam    `json:",omitzero,inline"`
+	OfContent *ContentBlockSourceParam `json:",omitzero,inline"`
+	OfURL     *URLPDFSourceParam       `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u DocumentBlockParamSourceUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfBase64, u.OfText, u.OfContent, u.OfURL)
+}
+func (u *DocumentBlockParamSourceUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *DocumentBlockParamSourceUnion) asAny() any {
+	if !param.IsOmitted(u.OfBase64) {
+		return u.OfBase64
+	} else if !param.IsOmitted(u.OfText) {
+		return u.OfText
+	} else if !param.IsOmitted(u.OfContent) {
+		return u.OfContent
+	} else if !param.IsOmitted(u.OfURL) {
+		return u.OfURL
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u DocumentBlockParamSourceUnion) GetContent() *ContentBlockSourceContentUnionParam {
+	if vt := u.OfContent; vt != nil {
+		return &vt.Content
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u DocumentBlockParamSourceUnion) GetURL() *string {
+	if vt := u.OfURL; vt != nil {
+		return &vt.URL
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u DocumentBlockParamSourceUnion) GetData() *string {
+	if vt := u.OfBase64; vt != nil {
+		return (*string)(&vt.Data)
+	} else if vt := u.OfText; vt != nil {
+		return (*string)(&vt.Data)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u DocumentBlockParamSourceUnion) GetMediaType() *string {
+	if vt := u.OfBase64; vt != nil {
+		return (*string)(&vt.MediaType)
+	} else if vt := u.OfText; vt != nil {
+		return (*string)(&vt.MediaType)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u DocumentBlockParamSourceUnion) GetType() *string {
+	if vt := u.OfBase64; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfText; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfContent; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfURL; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+// The properties Source, Type are required.
+type ImageBlockParam struct {
+	Source ImageBlockParamSourceUnion `json:"source,omitzero,required"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as "image".
+	Type constant.Image `json:"type,required"`
+	paramObj
+}
+
+func (r ImageBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow ImageBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ImageBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type ImageBlockParamSourceUnion struct {
+	OfBase64 *Base64ImageSourceParam `json:",omitzero,inline"`
+	OfURL    *URLImageSourceParam    `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u ImageBlockParamSourceUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfBase64, u.OfURL)
+}
+func (u *ImageBlockParamSourceUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *ImageBlockParamSourceUnion) asAny() any {
+	if !param.IsOmitted(u.OfBase64) {
+		return u.OfBase64
+	} else if !param.IsOmitted(u.OfURL) {
+		return u.OfURL
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ImageBlockParamSourceUnion) GetData() *string {
+	if vt := u.OfBase64; vt != nil {
+		return &vt.Data
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ImageBlockParamSourceUnion) GetMediaType() *string {
+	if vt := u.OfBase64; vt != nil {
+		return (*string)(&vt.MediaType)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ImageBlockParamSourceUnion) GetURL() *string {
+	if vt := u.OfURL; vt != nil {
+		return &vt.URL
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ImageBlockParamSourceUnion) GetType() *string {
+	if vt := u.OfBase64; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfURL; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+type InputJSONDelta struct {
+	PartialJSON string                  `json:"partial_json,required"`
+	Type        constant.InputJSONDelta `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		PartialJSON respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r InputJSONDelta) RawJSON() string { return r.JSON.raw }
+func (r *InputJSONDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type Message struct {
+	// Unique object identifier.
+	//
+	// The format and length of IDs may change over time.
+	ID string `json:"id,required"`
+	// Content generated by the model.
+	//
+	// This is an array of content blocks, each of which has a `type` that determines
+	// its shape.
+	//
+	// Example:
+	//
+	// ```json
+	// [{ "type": "text", "text": "Hi, I'm Claude." }]
+	// ```
+	//
+	// If the request input `messages` ended with an `assistant` turn, then the
+	// response `content` will continue directly from that last turn. You can use this
+	// to constrain the model's output.
+	//
+	// For example, if the input `messages` were:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
+	//	},
+	//	{ "role": "assistant", "content": "The best answer is (" }
+	//
+	// ]
+	// ```
+	//
+	// Then the response `content` might be:
+	//
+	// ```json
+	// [{ "type": "text", "text": "B)" }]
+	// ```
+	Content []ContentBlockUnion `json:"content,required"`
+	// The model that will complete your prompt.
+	//
+	// See [models](https://docs.anthropic.com/en/docs/models-overview) for
+	// additional details and options.
+	Model Model `json:"model,required"`
+	// Conversational role of the generated message.
+	//
+	// This will always be `"assistant"`.
+	Role constant.Assistant `json:"role,required"`
+	// The reason that we stopped.
+	//
+	// This may be one of the following values:
+	//
+	// - `"end_turn"`: the model reached a natural stopping point
+	// - `"max_tokens"`: we exceeded the requested `max_tokens` or the model's maximum
+	// - `"stop_sequence"`: one of your provided custom `stop_sequences` was generated
+	// - `"tool_use"`: the model invoked one or more tools
+	//
+	// In non-streaming mode this value is always non-null. In streaming mode, it is
+	// null in the `message_start` event and non-null otherwise.
+	//
+	// Any of "end_turn", "max_tokens", "stop_sequence", "tool_use", "pause_turn",
+	// "refusal".
+	StopReason StopReason `json:"stop_reason,required"`
+	// Which custom stop sequence was generated, if any.
+	//
+	// This value will be a non-null string if one of your custom stop sequences was
+	// generated.
+	StopSequence string `json:"stop_sequence,required"`
+	// Object type.
+	//
+	// For Messages, this is always `"message"`.
+	Type constant.Message `json:"type,required"`
+	// Billing and rate-limit usage.
+	//
+	// Anthropic's API bills and rate-limits by token counts, as tokens represent the
+	// underlying cost to our systems.
+	//
+	// Under the hood, the API transforms requests into a format suitable for the
+	// model. The model's output then goes through a parsing stage before becoming an
+	// API response. As a result, the token counts in `usage` will not match one-to-one
+	// with the exact visible content of an API request or response.
+	//
+	// For example, `output_tokens` will be non-zero, even for an empty string response
+	// from Claude.
+	//
+	// Total input tokens in a request is the summation of `input_tokens`,
+	// `cache_creation_input_tokens`, and `cache_read_input_tokens`.
+	Usage Usage `json:"usage,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID           respjson.Field
+		Content      respjson.Field
+		Model        respjson.Field
+		Role         respjson.Field
+		StopReason   respjson.Field
+		StopSequence respjson.Field
+		Type         respjson.Field
+		Usage        respjson.Field
+		ExtraFields  map[string]respjson.Field
+		raw          string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r Message) RawJSON() string { return r.JSON.raw }
+func (r *Message) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (r Message) ToParam() MessageParam {
+	var p MessageParam
+	p.Role = MessageParamRole(r.Role)
+	p.Content = make([]ContentBlockParamUnion, len(r.Content))
+	for i, c := range r.Content {
+		p.Content[i] = c.ToParam()
+	}
+	return p
+}
+
+// The reason that we stopped.
+//
+// This may be one of the following values:
+//
+// - `"end_turn"`: the model reached a natural stopping point
+// - `"max_tokens"`: we exceeded the requested `max_tokens` or the model's maximum
+// - `"stop_sequence"`: one of your provided custom `stop_sequences` was generated
+// - `"tool_use"`: the model invoked one or more tools
+//
+// In non-streaming mode this value is always non-null. In streaming mode, it is
+// null in the `message_start` event and non-null otherwise.
+type MessageStopReason string
+
+const (
+	MessageStopReasonEndTurn      MessageStopReason = "end_turn"
+	MessageStopReasonMaxTokens    MessageStopReason = "max_tokens"
+	MessageStopReasonStopSequence MessageStopReason = "stop_sequence"
+	MessageStopReasonToolUse      MessageStopReason = "tool_use"
+)
+
+func MessageCountTokensToolParamOfTool(inputSchema ToolInputSchemaParam, name string) MessageCountTokensToolUnionParam {
+	var variant ToolParam
+	variant.InputSchema = inputSchema
+	variant.Name = name
+	return MessageCountTokensToolUnionParam{OfTool: &variant}
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type MessageCountTokensToolUnionParam struct {
+	OfTool                  *ToolParam                                     `json:",omitzero,inline"`
+	OfBashTool20250124      *ToolBash20250124Param                         `json:",omitzero,inline"`
+	OfTextEditor20250124    *ToolTextEditor20250124Param                   `json:",omitzero,inline"`
+	OfTextEditor20250429    *MessageCountTokensToolTextEditor20250429Param `json:",omitzero,inline"`
+	OfWebSearchTool20250305 *WebSearchTool20250305Param                    `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u MessageCountTokensToolUnionParam) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfTool,
+		u.OfBashTool20250124,
+		u.OfTextEditor20250124,
+		u.OfTextEditor20250429,
+		u.OfWebSearchTool20250305)
+}
+func (u *MessageCountTokensToolUnionParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *MessageCountTokensToolUnionParam) asAny() any {
+	if !param.IsOmitted(u.OfTool) {
+		return u.OfTool
+	} else if !param.IsOmitted(u.OfBashTool20250124) {
+		return u.OfBashTool20250124
+	} else if !param.IsOmitted(u.OfTextEditor20250124) {
+		return u.OfTextEditor20250124
+	} else if !param.IsOmitted(u.OfTextEditor20250429) {
+		return u.OfTextEditor20250429
+	} else if !param.IsOmitted(u.OfWebSearchTool20250305) {
+		return u.OfWebSearchTool20250305
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u MessageCountTokensToolUnionParam) GetInputSchema() *ToolInputSchemaParam {
+	if vt := u.OfTool; vt != nil {
+		return &vt.InputSchema
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u MessageCountTokensToolUnionParam) GetDescription() *string {
+	if vt := u.OfTool; vt != nil && vt.Description.Valid() {
+		return &vt.Description.Value
+	}
+	return nil
+}
+
+// Returns the underlying variant's property, if present.
+func (u MessageCountTokensToolUnionParam) GetAllowedDomains() []string {
+	if vt := u.OfWebSearchTool20250305; vt != nil {
+		return vt.AllowedDomains
+	}
+	return nil
+}
+
+// Returns the underlying variant's property, if present.
+func (u MessageCountTokensToolUnionParam) GetBlockedDomains() []string {
+	if vt := u.OfWebSearchTool20250305; vt != nil {
+		return vt.BlockedDomains
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u MessageCountTokensToolUnionParam) GetMaxUses() *int64 {
+	if vt := u.OfWebSearchTool20250305; vt != nil && vt.MaxUses.Valid() {
+		return &vt.MaxUses.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u MessageCountTokensToolUnionParam) GetUserLocation() *WebSearchTool20250305UserLocationParam {
+	if vt := u.OfWebSearchTool20250305; vt != nil {
+		return &vt.UserLocation
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u MessageCountTokensToolUnionParam) GetName() *string {
+	if vt := u.OfTool; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfBashTool20250124; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfTextEditor20250124; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfTextEditor20250429; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfWebSearchTool20250305; vt != nil {
+		return (*string)(&vt.Name)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u MessageCountTokensToolUnionParam) GetType() *string {
+	if vt := u.OfTool; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfBashTool20250124; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfTextEditor20250124; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfTextEditor20250429; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfWebSearchTool20250305; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's CacheControl property, if present.
+func (u MessageCountTokensToolUnionParam) GetCacheControl() *CacheControlEphemeralParam {
+	if vt := u.OfTool; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfBashTool20250124; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfTextEditor20250124; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfTextEditor20250429; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfWebSearchTool20250305; vt != nil {
+		return &vt.CacheControl
+	}
+	return nil
+}
+
+// The properties Name, Type are required.
+type MessageCountTokensToolTextEditor20250429Param struct {
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as
+	// "str_replace_based_edit_tool".
+	Name constant.StrReplaceBasedEditTool `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "text_editor_20250429".
+	Type constant.TextEditor20250429 `json:"type,required"`
+	paramObj
+}
+
+func (r MessageCountTokensToolTextEditor20250429Param) MarshalJSON() (data []byte, err error) {
+	type shadow MessageCountTokensToolTextEditor20250429Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *MessageCountTokensToolTextEditor20250429Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MessageDeltaUsage struct {
+	// The cumulative number of input tokens used to create the cache entry.
+	CacheCreationInputTokens int64 `json:"cache_creation_input_tokens,required"`
+	// The cumulative number of input tokens read from the cache.
+	CacheReadInputTokens int64 `json:"cache_read_input_tokens,required"`
+	// The cumulative number of input tokens which were used.
+	InputTokens int64 `json:"input_tokens,required"`
+	// The cumulative number of output tokens which were used.
+	OutputTokens int64 `json:"output_tokens,required"`
+	// The number of server tool requests.
+	ServerToolUse ServerToolUsage `json:"server_tool_use,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CacheCreationInputTokens respjson.Field
+		CacheReadInputTokens     respjson.Field
+		InputTokens              respjson.Field
+		OutputTokens             respjson.Field
+		ServerToolUse            respjson.Field
+		ExtraFields              map[string]respjson.Field
+		raw                      string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageDeltaUsage) RawJSON() string { return r.JSON.raw }
+func (r *MessageDeltaUsage) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Content, Role are required.
+type MessageParam struct {
+	Content []ContentBlockParamUnion `json:"content,omitzero,required"`
+	// Any of "user", "assistant".
+	Role MessageParamRole `json:"role,omitzero,required"`
+	paramObj
+}
+
+func NewUserMessage(blocks ...ContentBlockParamUnion) MessageParam {
+	return MessageParam{
+		Role:    MessageParamRoleUser,
+		Content: blocks,
+	}
+}
+
+func NewAssistantMessage(blocks ...ContentBlockParamUnion) MessageParam {
+	return MessageParam{
+		Role:    MessageParamRoleAssistant,
+		Content: blocks,
+	}
+}
+
+func (r MessageParam) MarshalJSON() (data []byte, err error) {
+	type shadow MessageParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *MessageParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MessageParamRole string
+
+const (
+	MessageParamRoleUser      MessageParamRole = "user"
+	MessageParamRoleAssistant MessageParamRole = "assistant"
+)
+
+type MessageTokensCount struct {
+	// The total number of tokens across the provided list of messages, system prompt,
+	// and tools.
+	InputTokens int64 `json:"input_tokens,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		InputTokens respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageTokensCount) RawJSON() string { return r.JSON.raw }
+func (r *MessageTokensCount) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MetadataParam struct {
+	// An external identifier for the user who is associated with the request.
+	//
+	// This should be a uuid, hash value, or other opaque identifier. Anthropic may use
+	// this id to help detect abuse. Do not include any identifying information such as
+	// name, email address, or phone number.
+	UserID param.Opt[string] `json:"user_id,omitzero"`
+	paramObj
+}
+
+func (r MetadataParam) MarshalJSON() (data []byte, err error) {
+	type shadow MetadataParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *MetadataParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The model that will complete your prompt.
+//
+// See [models](https://docs.anthropic.com/en/docs/models-overview) for
+// additional details and options.
+type Model string
+
+const (
+	ModelClaude3_7SonnetLatest      Model = "claude-3-7-sonnet-latest"
+	ModelClaude3_7Sonnet20250219    Model = "claude-3-7-sonnet-20250219"
+	ModelClaude3_5HaikuLatest       Model = "claude-3-5-haiku-latest"
+	ModelClaude3_5Haiku20241022     Model = "claude-3-5-haiku-20241022"
+	ModelClaudeSonnet4_20250514     Model = "claude-sonnet-4-20250514"
+	ModelClaudeSonnet4_0            Model = "claude-sonnet-4-0"
+	ModelClaude4Sonnet20250514      Model = "claude-4-sonnet-20250514"
+	ModelClaude3_5SonnetLatest      Model = "claude-3-5-sonnet-latest"
+	ModelClaude3_5Sonnet20241022    Model = "claude-3-5-sonnet-20241022"
+	ModelClaude_3_5_Sonnet_20240620 Model = "claude-3-5-sonnet-20240620"
+	ModelClaudeOpus4_0              Model = "claude-opus-4-0"
+	ModelClaudeOpus4_20250514       Model = "claude-opus-4-20250514"
+	ModelClaude4Opus20250514        Model = "claude-4-opus-20250514"
+	ModelClaude3OpusLatest          Model = "claude-3-opus-latest"
+	ModelClaude_3_Opus_20240229     Model = "claude-3-opus-20240229"
+	// Deprecated: Will reach end-of-life on July 21st, 2025. Please migrate to a newer
+	// model. Visit https://docs.anthropic.com/en/docs/resources/model-deprecations for
+	// more information.
+	ModelClaude_3_Sonnet_20240229 Model = "claude-3-sonnet-20240229"
+	ModelClaude_3_Haiku_20240307  Model = "claude-3-haiku-20240307"
+	// Deprecated: Will reach end-of-life on July 21st, 2025. Please migrate to a newer
+	// model. Visit https://docs.anthropic.com/en/docs/resources/model-deprecations for
+	// more information.
+	ModelClaude_2_1 Model = "claude-2.1"
+	// Deprecated: Will reach end-of-life on July 21st, 2025. Please migrate to a newer
+	// model. Visit https://docs.anthropic.com/en/docs/resources/model-deprecations for
+	// more information.
+	ModelClaude_2_0 Model = "claude-2.0"
+)
+
+// The properties Data, MediaType, Type are required.
+type PlainTextSourceParam struct {
+	Data string `json:"data,required"`
+	// This field can be elided, and will marshal its zero value as "text/plain".
+	MediaType constant.TextPlain `json:"media_type,required"`
+	// This field can be elided, and will marshal its zero value as "text".
+	Type constant.Text `json:"type,required"`
+	paramObj
+}
+
+func (r PlainTextSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow PlainTextSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *PlainTextSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// RawContentBlockDeltaUnion contains all possible properties and values from
+// [TextDelta], [InputJSONDelta], [CitationsDelta], [ThinkingDelta],
+// [SignatureDelta].
+//
+// Use the [RawContentBlockDeltaUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type RawContentBlockDeltaUnion struct {
+	// This field is from variant [TextDelta].
+	Text string `json:"text"`
+	// Any of "text_delta", "input_json_delta", "citations_delta", "thinking_delta",
+	// "signature_delta".
+	Type string `json:"type"`
+	// This field is from variant [InputJSONDelta].
+	PartialJSON string `json:"partial_json"`
+	// This field is from variant [CitationsDelta].
+	Citation CitationsDeltaCitationUnion `json:"citation"`
+	// This field is from variant [ThinkingDelta].
+	Thinking string `json:"thinking"`
+	// This field is from variant [SignatureDelta].
+	Signature string `json:"signature"`
+	JSON      struct {
+		Text        respjson.Field
+		Type        respjson.Field
+		PartialJSON respjson.Field
+		Citation    respjson.Field
+		Thinking    respjson.Field
+		Signature   respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// anyRawContentBlockDelta is implemented by each variant of
+// [RawContentBlockDeltaUnion] to add type safety for the return type of
+// [RawContentBlockDeltaUnion.AsAny]
+type anyRawContentBlockDelta interface {
+	implRawContentBlockDeltaUnion()
+}
+
+func (TextDelta) implRawContentBlockDeltaUnion()      {}
+func (InputJSONDelta) implRawContentBlockDeltaUnion() {}
+func (CitationsDelta) implRawContentBlockDeltaUnion() {}
+func (ThinkingDelta) implRawContentBlockDeltaUnion()  {}
+func (SignatureDelta) implRawContentBlockDeltaUnion() {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := RawContentBlockDeltaUnion.AsAny().(type) {
+//	case anthropic.TextDelta:
+//	case anthropic.InputJSONDelta:
+//	case anthropic.CitationsDelta:
+//	case anthropic.ThinkingDelta:
+//	case anthropic.SignatureDelta:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u RawContentBlockDeltaUnion) AsAny() anyRawContentBlockDelta {
+	switch u.Type {
+	case "text_delta":
+		return u.AsTextDelta()
+	case "input_json_delta":
+		return u.AsInputJSONDelta()
+	case "citations_delta":
+		return u.AsCitationsDelta()
+	case "thinking_delta":
+		return u.AsThinkingDelta()
+	case "signature_delta":
+		return u.AsSignatureDelta()
+	}
+	return nil
+}
+
+func (u RawContentBlockDeltaUnion) AsTextDelta() (v TextDelta) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u RawContentBlockDeltaUnion) AsInputJSONDelta() (v InputJSONDelta) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u RawContentBlockDeltaUnion) AsCitationsDelta() (v CitationsDelta) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u RawContentBlockDeltaUnion) AsThinkingDelta() (v ThinkingDelta) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u RawContentBlockDeltaUnion) AsSignatureDelta() (v SignatureDelta) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u RawContentBlockDeltaUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *RawContentBlockDeltaUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type ContentBlockDeltaEvent struct {
+	Delta RawContentBlockDeltaUnion  `json:"delta,required"`
+	Index int64                      `json:"index,required"`
+	Type  constant.ContentBlockDelta `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Delta       respjson.Field
+		Index       respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r ContentBlockDeltaEvent) RawJSON() string { return r.JSON.raw }
+func (r *ContentBlockDeltaEvent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type ContentBlockStartEvent struct {
+	ContentBlock ContentBlockStartEventContentBlockUnion `json:"content_block,required"`
+	Index        int64                                   `json:"index,required"`
+	Type         constant.ContentBlockStart              `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ContentBlock respjson.Field
+		Index        respjson.Field
+		Type         respjson.Field
+		ExtraFields  map[string]respjson.Field
+		raw          string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r ContentBlockStartEvent) RawJSON() string { return r.JSON.raw }
+func (r *ContentBlockStartEvent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// ContentBlockStartEventContentBlockUnion contains all possible properties and
+// values from [TextBlock], [ToolUseBlock], [ServerToolUseBlock],
+// [WebSearchToolResultBlock], [ThinkingBlock], [RedactedThinkingBlock].
+//
+// Use the [ContentBlockStartEventContentBlockUnion.AsAny] method to switch on the
+// variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type ContentBlockStartEventContentBlockUnion struct {
+	// This field is from variant [TextBlock].
+	Citations []TextCitationUnion `json:"citations"`
+	// This field is from variant [TextBlock].
+	Text string `json:"text"`
+	// Any of "text", "tool_use", "server_tool_use", "web_search_tool_result",
+	// "thinking", "redacted_thinking".
+	Type  string `json:"type"`
+	ID    string `json:"id"`
+	Input any    `json:"input"`
+	Name  string `json:"name"`
+	// This field is from variant [WebSearchToolResultBlock].
+	Content WebSearchToolResultBlockContentUnion `json:"content"`
+	// This field is from variant [WebSearchToolResultBlock].
+	ToolUseID string `json:"tool_use_id"`
+	// This field is from variant [ThinkingBlock].
+	Signature string `json:"signature"`
+	// This field is from variant [ThinkingBlock].
+	Thinking string `json:"thinking"`
+	// This field is from variant [RedactedThinkingBlock].
+	Data string `json:"data"`
+	JSON struct {
+		Citations respjson.Field
+		Text      respjson.Field
+		Type      respjson.Field
+		ID        respjson.Field
+		Input     respjson.Field
+		Name      respjson.Field
+		Content   respjson.Field
+		ToolUseID respjson.Field
+		Signature respjson.Field
+		Thinking  respjson.Field
+		Data      respjson.Field
+		raw       string
+	} `json:"-"`
+}
+
+// anyContentBlockStartEventContentBlock is implemented by each variant of
+// [ContentBlockStartEventContentBlockUnion] to add type safety for the return type
+// of [ContentBlockStartEventContentBlockUnion.AsAny]
+type anyContentBlockStartEventContentBlock interface {
+	implContentBlockStartEventContentBlockUnion()
+}
+
+func (TextBlock) implContentBlockStartEventContentBlockUnion()                {}
+func (ToolUseBlock) implContentBlockStartEventContentBlockUnion()             {}
+func (ServerToolUseBlock) implContentBlockStartEventContentBlockUnion()       {}
+func (WebSearchToolResultBlock) implContentBlockStartEventContentBlockUnion() {}
+func (ThinkingBlock) implContentBlockStartEventContentBlockUnion()            {}
+func (RedactedThinkingBlock) implContentBlockStartEventContentBlockUnion()    {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := ContentBlockStartEventContentBlockUnion.AsAny().(type) {
+//	case anthropic.TextBlock:
+//	case anthropic.ToolUseBlock:
+//	case anthropic.ServerToolUseBlock:
+//	case anthropic.WebSearchToolResultBlock:
+//	case anthropic.ThinkingBlock:
+//	case anthropic.RedactedThinkingBlock:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u ContentBlockStartEventContentBlockUnion) AsAny() anyContentBlockStartEventContentBlock {
+	switch u.Type {
+	case "text":
+		return u.AsText()
+	case "tool_use":
+		return u.AsToolUse()
+	case "server_tool_use":
+		return u.AsServerToolUse()
+	case "web_search_tool_result":
+		return u.AsWebSearchToolResult()
+	case "thinking":
+		return u.AsThinking()
+	case "redacted_thinking":
+		return u.AsRedactedThinking()
+	}
+	return nil
+}
+
+func (u ContentBlockStartEventContentBlockUnion) AsText() (v TextBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ContentBlockStartEventContentBlockUnion) AsToolUse() (v ToolUseBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ContentBlockStartEventContentBlockUnion) AsServerToolUse() (v ServerToolUseBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ContentBlockStartEventContentBlockUnion) AsWebSearchToolResult() (v WebSearchToolResultBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ContentBlockStartEventContentBlockUnion) AsThinking() (v ThinkingBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ContentBlockStartEventContentBlockUnion) AsRedactedThinking() (v RedactedThinkingBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u ContentBlockStartEventContentBlockUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *ContentBlockStartEventContentBlockUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type ContentBlockStopEvent struct {
+	Index int64                     `json:"index,required"`
+	Type  constant.ContentBlockStop `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Index       respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r ContentBlockStopEvent) RawJSON() string { return r.JSON.raw }
+func (r *ContentBlockStopEvent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MessageDeltaEvent struct {
+	Delta MessageDeltaEventDelta `json:"delta,required"`
+	Type  constant.MessageDelta  `json:"type,required"`
+	// Billing and rate-limit usage.
+	//
+	// Anthropic's API bills and rate-limits by token counts, as tokens represent the
+	// underlying cost to our systems.
+	//
+	// Under the hood, the API transforms requests into a format suitable for the
+	// model. The model's output then goes through a parsing stage before becoming an
+	// API response. As a result, the token counts in `usage` will not match one-to-one
+	// with the exact visible content of an API request or response.
+	//
+	// For example, `output_tokens` will be non-zero, even for an empty string response
+	// from Claude.
+	//
+	// Total input tokens in a request is the summation of `input_tokens`,
+	// `cache_creation_input_tokens`, and `cache_read_input_tokens`.
+	Usage MessageDeltaUsage `json:"usage,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Delta       respjson.Field
+		Type        respjson.Field
+		Usage       respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageDeltaEvent) RawJSON() string { return r.JSON.raw }
+func (r *MessageDeltaEvent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MessageDeltaEventDelta struct {
+	// Any of "end_turn", "max_tokens", "stop_sequence", "tool_use", "pause_turn",
+	// "refusal".
+	StopReason   StopReason `json:"stop_reason,required"`
+	StopSequence string     `json:"stop_sequence,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		StopReason   respjson.Field
+		StopSequence respjson.Field
+		ExtraFields  map[string]respjson.Field
+		raw          string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageDeltaEventDelta) RawJSON() string { return r.JSON.raw }
+func (r *MessageDeltaEventDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MessageStartEvent struct {
+	Message Message               `json:"message,required"`
+	Type    constant.MessageStart `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageStartEvent) RawJSON() string { return r.JSON.raw }
+func (r *MessageStartEvent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MessageStopEvent struct {
+	Type constant.MessageStop `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageStopEvent) RawJSON() string { return r.JSON.raw }
+func (r *MessageStopEvent) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// MessageStreamEventUnion contains all possible properties and values from
+// [MessageStartEvent], [MessageDeltaEvent], [MessageStopEvent],
+// [ContentBlockStartEvent], [ContentBlockDeltaEvent], [ContentBlockStopEvent].
+//
+// Use the [MessageStreamEventUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type MessageStreamEventUnion struct {
+	// This field is from variant [MessageStartEvent].
+	Message Message `json:"message"`
+	// Any of "message_start", "message_delta", "message_stop", "content_block_start",
+	// "content_block_delta", "content_block_stop".
+	Type string `json:"type"`
+	// This field is a union of [MessageDeltaEventDelta], [RawContentBlockDeltaUnion]
+	Delta MessageStreamEventUnionDelta `json:"delta"`
+	// This field is from variant [MessageDeltaEvent].
+	Usage MessageDeltaUsage `json:"usage"`
+	// This field is from variant [ContentBlockStartEvent].
+	ContentBlock ContentBlockStartEventContentBlockUnion `json:"content_block"`
+	Index        int64                                   `json:"index"`
+	JSON         struct {
+		Message      respjson.Field
+		Type         respjson.Field
+		Delta        respjson.Field
+		Usage        respjson.Field
+		ContentBlock respjson.Field
+		Index        respjson.Field
+		raw          string
+	} `json:"-"`
+}
+
+// anyMessageStreamEvent is implemented by each variant of
+// [MessageStreamEventUnion] to add type safety for the return type of
+// [MessageStreamEventUnion.AsAny]
+type anyMessageStreamEvent interface {
+	implMessageStreamEventUnion()
+}
+
+func (MessageStartEvent) implMessageStreamEventUnion()      {}
+func (MessageDeltaEvent) implMessageStreamEventUnion()      {}
+func (MessageStopEvent) implMessageStreamEventUnion()       {}
+func (ContentBlockStartEvent) implMessageStreamEventUnion() {}
+func (ContentBlockDeltaEvent) implMessageStreamEventUnion() {}
+func (ContentBlockStopEvent) implMessageStreamEventUnion()  {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := MessageStreamEventUnion.AsAny().(type) {
+//	case anthropic.MessageStartEvent:
+//	case anthropic.MessageDeltaEvent:
+//	case anthropic.MessageStopEvent:
+//	case anthropic.ContentBlockStartEvent:
+//	case anthropic.ContentBlockDeltaEvent:
+//	case anthropic.ContentBlockStopEvent:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u MessageStreamEventUnion) AsAny() anyMessageStreamEvent {
+	switch u.Type {
+	case "message_start":
+		return u.AsMessageStart()
+	case "message_delta":
+		return u.AsMessageDelta()
+	case "message_stop":
+		return u.AsMessageStop()
+	case "content_block_start":
+		return u.AsContentBlockStart()
+	case "content_block_delta":
+		return u.AsContentBlockDelta()
+	case "content_block_stop":
+		return u.AsContentBlockStop()
+	}
+	return nil
+}
+
+func (u MessageStreamEventUnion) AsMessageStart() (v MessageStartEvent) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u MessageStreamEventUnion) AsMessageDelta() (v MessageDeltaEvent) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u MessageStreamEventUnion) AsMessageStop() (v MessageStopEvent) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u MessageStreamEventUnion) AsContentBlockStart() (v ContentBlockStartEvent) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u MessageStreamEventUnion) AsContentBlockDelta() (v ContentBlockDeltaEvent) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u MessageStreamEventUnion) AsContentBlockStop() (v ContentBlockStopEvent) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u MessageStreamEventUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *MessageStreamEventUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// MessageStreamEventUnionDelta is an implicit subunion of
+// [MessageStreamEventUnion]. MessageStreamEventUnionDelta provides convenient
+// access to the sub-properties of the union.
+//
+// For type safety it is recommended to directly use a variant of the
+// [MessageStreamEventUnion].
+type MessageStreamEventUnionDelta struct {
+	// This field is from variant [MessageDeltaEventDelta].
+	StopReason StopReason `json:"stop_reason"`
+	// This field is from variant [MessageDeltaEventDelta].
+	StopSequence string `json:"stop_sequence"`
+	// This field is from variant [RawContentBlockDeltaUnion].
+	Text string `json:"text"`
+	Type string `json:"type"`
+	// This field is from variant [RawContentBlockDeltaUnion].
+	PartialJSON string `json:"partial_json"`
+	// This field is from variant [RawContentBlockDeltaUnion].
+	Citation CitationsDeltaCitationUnion `json:"citation"`
+	// This field is from variant [RawContentBlockDeltaUnion].
+	Thinking string `json:"thinking"`
+	// This field is from variant [RawContentBlockDeltaUnion].
+	Signature string `json:"signature"`
+	JSON      struct {
+		StopReason   respjson.Field
+		StopSequence respjson.Field
+		Text         respjson.Field
+		Type         respjson.Field
+		PartialJSON  respjson.Field
+		Citation     respjson.Field
+		Thinking     respjson.Field
+		Signature    respjson.Field
+		raw          string
+	} `json:"-"`
+}
+
+func (r *MessageStreamEventUnionDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Accumulate builds up the Message incrementally from a MessageStreamEvent. The
+// resulting Message can be used like any other Message, with the caveat that its
+// Message.JSON field, which normally can be used to inspect the JSON sent over
+// the network, may not be fully populated.
+//
+//	message := anthropic.Message{}
+//	for stream.Next() {
+//		event := stream.Current()
+//		message.Accumulate(event)
+//	}
+func (acc *Message) Accumulate(event MessageStreamEventUnion) error {
+	if acc == nil {
+		return fmt.Errorf("accumulate: cannot accumulate into nil Message")
+	}
+
+	switch event := event.AsAny().(type) {
+	case MessageStartEvent:
+		*acc = event.Message
+	case MessageDeltaEvent:
+		acc.StopReason = event.Delta.StopReason
+		acc.StopSequence = event.Delta.StopSequence
+		acc.Usage.OutputTokens = event.Usage.OutputTokens
+	case MessageStopEvent:
+		accJson, err := json.Marshal(acc)
+		if err != nil {
+			return fmt.Errorf("error converting content block to JSON: %w", err)
+		}
+		acc.JSON.raw = string(accJson)
+	case ContentBlockStartEvent:
+		acc.Content = append(acc.Content, ContentBlockUnion{})
+		err := acc.Content[len(acc.Content)-1].UnmarshalJSON([]byte(event.ContentBlock.RawJSON()))
+		if err != nil {
+			return err
+		}
+	case ContentBlockDeltaEvent:
+		if len(acc.Content) == 0 {
+			return fmt.Errorf("received event of type %s but there was no content block", event.Type)
+		}
+		cb := &acc.Content[len(acc.Content)-1]
+		switch delta := event.Delta.AsAny().(type) {
+		case TextDelta:
+			cb.Text += delta.Text
+		case InputJSONDelta:
+			if len(delta.PartialJSON) != 0 {
+				if string(cb.Input) == "{}" {
+					cb.Input = []byte(delta.PartialJSON)
+				} else {
+					cb.Input = append(cb.Input, []byte(delta.PartialJSON)...)
+				}
+			}
+		case ThinkingDelta:
+			cb.Thinking += delta.Thinking
+		case SignatureDelta:
+			cb.Signature += delta.Signature
+		case CitationsDelta:
+			citation := TextCitationUnion{}
+			err := citation.UnmarshalJSON([]byte(delta.Citation.RawJSON()))
+			if err != nil {
+				return fmt.Errorf("could not unmarshal citation delta into citation type: %w", err)
+			}
+			cb.Citations = append(cb.Citations, citation)
+		}
+	case ContentBlockStopEvent:
+		if len(acc.Content) == 0 {
+			return fmt.Errorf("received event of type %s but there was no content block", event.Type)
+		}
+		contentBlock := &acc.Content[len(acc.Content)-1]
+		cbJson, err := json.Marshal(contentBlock)
+		if err != nil {
+			return fmt.Errorf("error converting content block to JSON: %w", err)
+		}
+		contentBlock.JSON.raw = string(cbJson)
+	}
+
+	return nil
+}
+
+type RedactedThinkingBlock struct {
+	Data string                    `json:"data,required"`
+	Type constant.RedactedThinking `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Data        respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r RedactedThinkingBlock) RawJSON() string { return r.JSON.raw }
+func (r *RedactedThinkingBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (r RedactedThinkingBlock) ToParam() RedactedThinkingBlockParam {
+	var p RedactedThinkingBlockParam
+	p.Type = r.Type
+	p.Data = r.Data
+	return p
+}
+
+// The properties Data, Type are required.
+type RedactedThinkingBlockParam struct {
+	Data string `json:"data,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "redacted_thinking".
+	Type constant.RedactedThinking `json:"type,required"`
+	paramObj
+}
+
+func (r RedactedThinkingBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow RedactedThinkingBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *RedactedThinkingBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type ServerToolUsage struct {
+	// The number of web search tool requests.
+	WebSearchRequests int64 `json:"web_search_requests,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		WebSearchRequests respjson.Field
+		ExtraFields       map[string]respjson.Field
+		raw               string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r ServerToolUsage) RawJSON() string { return r.JSON.raw }
+func (r *ServerToolUsage) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type ServerToolUseBlock struct {
+	ID    string                 `json:"id,required"`
+	Input any                    `json:"input,required"`
+	Name  constant.WebSearch     `json:"name,required"`
+	Type  constant.ServerToolUse `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID          respjson.Field
+		Input       respjson.Field
+		Name        respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r ServerToolUseBlock) RawJSON() string { return r.JSON.raw }
+func (r *ServerToolUseBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties ID, Input, Name, Type are required.
+type ServerToolUseBlockParam struct {
+	ID    string `json:"id,required"`
+	Input any    `json:"input,omitzero,required"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as "web_search".
+	Name constant.WebSearch `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as "server_tool_use".
+	Type constant.ServerToolUse `json:"type,required"`
+	paramObj
+}
+
+func (r ServerToolUseBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow ServerToolUseBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ServerToolUseBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type SignatureDelta struct {
+	Signature string                  `json:"signature,required"`
+	Type      constant.SignatureDelta `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Signature   respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r SignatureDelta) RawJSON() string { return r.JSON.raw }
+func (r *SignatureDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type StopReason string
+
+const (
+	StopReasonEndTurn      StopReason = "end_turn"
+	StopReasonMaxTokens    StopReason = "max_tokens"
+	StopReasonStopSequence StopReason = "stop_sequence"
+	StopReasonToolUse      StopReason = "tool_use"
+	StopReasonPauseTurn    StopReason = "pause_turn"
+	StopReasonRefusal      StopReason = "refusal"
+)
+
+type TextBlock struct {
+	// Citations supporting the text block.
+	//
+	// The type of citation returned will depend on the type of document being cited.
+	// Citing a PDF results in `page_location`, plain text results in `char_location`,
+	// and content document results in `content_block_location`.
+	Citations []TextCitationUnion `json:"citations,required"`
+	Text      string              `json:"text,required"`
+	Type      constant.Text       `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Citations   respjson.Field
+		Text        respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r TextBlock) RawJSON() string { return r.JSON.raw }
+func (r *TextBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (r TextBlock) ToParam() TextBlockParam {
+	var p TextBlockParam
+	p.Type = r.Type
+	p.Text = r.Text
+
+	// Distinguish between a nil and zero length slice, since some compatible
+	// APIs may not require citations.
+	if r.Citations != nil {
+		p.Citations = make([]TextCitationParamUnion, len(r.Citations))
+	}
+
+	for i, citation := range r.Citations {
+		switch citationVariant := citation.AsAny().(type) {
+		case CitationCharLocation:
+			var citationParam CitationCharLocationParam
+			citationParam.Type = citationVariant.Type
+			citationParam.DocumentTitle = paramutil.ToOpt(citationVariant.DocumentTitle, citationVariant.JSON.DocumentTitle)
+			citationParam.CitedText = citationVariant.CitedText
+			citationParam.DocumentIndex = citationVariant.DocumentIndex
+			citationParam.EndCharIndex = citationVariant.EndCharIndex
+			citationParam.StartCharIndex = citationVariant.StartCharIndex
+			p.Citations[i] = TextCitationParamUnion{OfCharLocation: &citationParam}
+		case CitationPageLocation:
+			var citationParam CitationPageLocationParam
+			citationParam.Type = citationVariant.Type
+			citationParam.DocumentTitle = paramutil.ToOpt(citationVariant.DocumentTitle, citationVariant.JSON.DocumentTitle)
+			citationParam.DocumentIndex = citationVariant.DocumentIndex
+			citationParam.EndPageNumber = citationVariant.EndPageNumber
+			citationParam.StartPageNumber = citationVariant.StartPageNumber
+			p.Citations[i] = TextCitationParamUnion{OfPageLocation: &citationParam}
+		case CitationContentBlockLocation:
+			var citationParam CitationContentBlockLocationParam
+			citationParam.Type = citationVariant.Type
+			citationParam.DocumentTitle = paramutil.ToOpt(citationVariant.DocumentTitle, citationVariant.JSON.DocumentTitle)
+			citationParam.CitedText = citationVariant.CitedText
+			citationParam.DocumentIndex = citationVariant.DocumentIndex
+			citationParam.EndBlockIndex = citationVariant.EndBlockIndex
+			citationParam.StartBlockIndex = citationVariant.StartBlockIndex
+			p.Citations[i] = TextCitationParamUnion{OfContentBlockLocation: &citationParam}
+		}
+	}
+	return p
+}
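The nil check above preserves the distinction between "citations absent" and "citations present but empty", because `encoding/json` encodes a nil slice as `null` and a non-nil empty slice as `[]`. A minimal stdlib sketch of that behavior (the `block` type is illustrative, not an SDK type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type block struct {
	Citations []string `json:"citations"`
}

// marshalCitations shows how encoding/json distinguishes a nil slice
// (encoded as null) from an empty, non-nil slice (encoded as []).
func marshalCitations(c []string) string {
	out, _ := json.Marshal(block{Citations: c})
	return string(out)
}

func main() {
	fmt.Println(marshalCitations(nil))        // {"citations":null}
	fmt.Println(marshalCitations([]string{})) // {"citations":[]}
}
```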
+
+// The properties Text, Type are required.
+type TextBlockParam struct {
+	Text      string                   `json:"text,required"`
+	Citations []TextCitationParamUnion `json:"citations,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as "text".
+	Type constant.Text `json:"type,required"`
+	paramObj
+}
+
+func (r TextBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow TextBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *TextBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// TextCitationUnion contains all possible properties and values from
+// [CitationCharLocation], [CitationPageLocation], [CitationContentBlockLocation],
+// [CitationsWebSearchResultLocation].
+//
+// Use the [TextCitationUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type TextCitationUnion struct {
+	CitedText     string `json:"cited_text"`
+	DocumentIndex int64  `json:"document_index"`
+	DocumentTitle string `json:"document_title"`
+	// This field is from variant [CitationCharLocation].
+	EndCharIndex int64 `json:"end_char_index"`
+	// This field is from variant [CitationCharLocation].
+	StartCharIndex int64 `json:"start_char_index"`
+	// Any of "char_location", "page_location", "content_block_location",
+	// "web_search_result_location".
+	Type string `json:"type"`
+	// This field is from variant [CitationPageLocation].
+	EndPageNumber int64 `json:"end_page_number"`
+	// This field is from variant [CitationPageLocation].
+	StartPageNumber int64 `json:"start_page_number"`
+	// This field is from variant [CitationContentBlockLocation].
+	EndBlockIndex int64 `json:"end_block_index"`
+	// This field is from variant [CitationContentBlockLocation].
+	StartBlockIndex int64 `json:"start_block_index"`
+	// This field is from variant [CitationsWebSearchResultLocation].
+	EncryptedIndex string `json:"encrypted_index"`
+	// This field is from variant [CitationsWebSearchResultLocation].
+	Title string `json:"title"`
+	// This field is from variant [CitationsWebSearchResultLocation].
+	URL  string `json:"url"`
+	JSON struct {
+		CitedText       respjson.Field
+		DocumentIndex   respjson.Field
+		DocumentTitle   respjson.Field
+		EndCharIndex    respjson.Field
+		StartCharIndex  respjson.Field
+		Type            respjson.Field
+		EndPageNumber   respjson.Field
+		StartPageNumber respjson.Field
+		EndBlockIndex   respjson.Field
+		StartBlockIndex respjson.Field
+		EncryptedIndex  respjson.Field
+		Title           respjson.Field
+		URL             respjson.Field
+		raw             string
+	} `json:"-"`
+}
+
+// anyTextCitation is implemented by each variant of [TextCitationUnion] to add
+// type safety for the return type of [TextCitationUnion.AsAny]
+type anyTextCitation interface {
+	implTextCitationUnion()
+}
+
+func (CitationCharLocation) implTextCitationUnion()             {}
+func (CitationPageLocation) implTextCitationUnion()             {}
+func (CitationContentBlockLocation) implTextCitationUnion()     {}
+func (CitationsWebSearchResultLocation) implTextCitationUnion() {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := TextCitationUnion.AsAny().(type) {
+//	case anthropic.CitationCharLocation:
+//	case anthropic.CitationPageLocation:
+//	case anthropic.CitationContentBlockLocation:
+//	case anthropic.CitationsWebSearchResultLocation:
+//	default:
+//	  fmt.Println("no variant present")
+//	}
+func (u TextCitationUnion) AsAny() anyTextCitation {
+	switch u.Type {
+	case "char_location":
+		return u.AsCharLocation()
+	case "page_location":
+		return u.AsPageLocation()
+	case "content_block_location":
+		return u.AsContentBlockLocation()
+	case "web_search_result_location":
+		return u.AsWebSearchResultLocation()
+	}
+	return nil
+}
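`AsAny` implements the usual tagged-union decode: read the `type` discriminator once, then re-decode the raw JSON into the matching concrete variant. A stdlib-only sketch of the same technique (the `citation` struct and `describe` helper are hypothetical simplifications, not SDK APIs):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// citation carries the discriminator plus a field from each variant,
// mirroring how the union struct overlays variant fields.
type citation struct {
	Type            string `json:"type"`
	StartCharIndex  int64  `json:"start_char_index"`
	StartPageNumber int64  `json:"start_page_number"`
}

// describe decodes once, then switches on the discriminator to pick
// the variant, just as callers switch on the result of AsAny.
func describe(raw []byte) (string, error) {
	var c citation
	if err := json.Unmarshal(raw, &c); err != nil {
		return "", err
	}
	switch c.Type {
	case "char_location":
		return fmt.Sprintf("chars from %d", c.StartCharIndex), nil
	case "page_location":
		return fmt.Sprintf("pages from %d", c.StartPageNumber), nil
	default:
		return "", fmt.Errorf("unknown variant %q", c.Type)
	}
}

func main() {
	s, _ := describe([]byte(`{"type":"char_location","start_char_index":3}`))
	fmt.Println(s) // chars from 3
}
```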
+
+func (u TextCitationUnion) AsCharLocation() (v CitationCharLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u TextCitationUnion) AsPageLocation() (v CitationPageLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u TextCitationUnion) AsContentBlockLocation() (v CitationContentBlockLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u TextCitationUnion) AsWebSearchResultLocation() (v CitationsWebSearchResultLocation) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u TextCitationUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *TextCitationUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type TextCitationParamUnion struct {
+	OfCharLocation            *CitationCharLocationParam            `json:",omitzero,inline"`
+	OfPageLocation            *CitationPageLocationParam            `json:",omitzero,inline"`
+	OfContentBlockLocation    *CitationContentBlockLocationParam    `json:",omitzero,inline"`
+	OfWebSearchResultLocation *CitationWebSearchResultLocationParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u TextCitationParamUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfCharLocation, u.OfPageLocation, u.OfContentBlockLocation, u.OfWebSearchResultLocation)
+}
+func (u *TextCitationParamUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *TextCitationParamUnion) asAny() any {
+	if !param.IsOmitted(u.OfCharLocation) {
+		return u.OfCharLocation
+	} else if !param.IsOmitted(u.OfPageLocation) {
+		return u.OfPageLocation
+	} else if !param.IsOmitted(u.OfContentBlockLocation) {
+		return u.OfContentBlockLocation
+	} else if !param.IsOmitted(u.OfWebSearchResultLocation) {
+		return u.OfWebSearchResultLocation
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetEndCharIndex() *int64 {
+	if vt := u.OfCharLocation; vt != nil {
+		return &vt.EndCharIndex
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetStartCharIndex() *int64 {
+	if vt := u.OfCharLocation; vt != nil {
+		return &vt.StartCharIndex
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetEndPageNumber() *int64 {
+	if vt := u.OfPageLocation; vt != nil {
+		return &vt.EndPageNumber
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetStartPageNumber() *int64 {
+	if vt := u.OfPageLocation; vt != nil {
+		return &vt.StartPageNumber
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetEndBlockIndex() *int64 {
+	if vt := u.OfContentBlockLocation; vt != nil {
+		return &vt.EndBlockIndex
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetStartBlockIndex() *int64 {
+	if vt := u.OfContentBlockLocation; vt != nil {
+		return &vt.StartBlockIndex
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetEncryptedIndex() *string {
+	if vt := u.OfWebSearchResultLocation; vt != nil {
+		return &vt.EncryptedIndex
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetTitle() *string {
+	if vt := u.OfWebSearchResultLocation; vt != nil && vt.Title.Valid() {
+		return &vt.Title.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetURL() *string {
+	if vt := u.OfWebSearchResultLocation; vt != nil {
+		return &vt.URL
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetCitedText() *string {
+	if vt := u.OfCharLocation; vt != nil {
+		return (*string)(&vt.CitedText)
+	} else if vt := u.OfPageLocation; vt != nil {
+		return (*string)(&vt.CitedText)
+	} else if vt := u.OfContentBlockLocation; vt != nil {
+		return (*string)(&vt.CitedText)
+	} else if vt := u.OfWebSearchResultLocation; vt != nil {
+		return (*string)(&vt.CitedText)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetDocumentIndex() *int64 {
+	if vt := u.OfCharLocation; vt != nil {
+		return (*int64)(&vt.DocumentIndex)
+	} else if vt := u.OfPageLocation; vt != nil {
+		return (*int64)(&vt.DocumentIndex)
+	} else if vt := u.OfContentBlockLocation; vt != nil {
+		return (*int64)(&vt.DocumentIndex)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetDocumentTitle() *string {
+	if vt := u.OfCharLocation; vt != nil && vt.DocumentTitle.Valid() {
+		return &vt.DocumentTitle.Value
+	} else if vt := u.OfPageLocation; vt != nil && vt.DocumentTitle.Valid() {
+		return &vt.DocumentTitle.Value
+	} else if vt := u.OfContentBlockLocation; vt != nil && vt.DocumentTitle.Valid() {
+		return &vt.DocumentTitle.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u TextCitationParamUnion) GetType() *string {
+	if vt := u.OfCharLocation; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfPageLocation; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfContentBlockLocation; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfWebSearchResultLocation; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
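The getters above return `(*string)(&vt.Type)` rather than copying: Go allows converting a pointer between two types whose pointed-to types share the same underlying type, so a `*constant.Text` can be viewed as a `*string` in place. A small sketch under that assumption, using an illustrative `kind` type in place of the SDK's `constant.*` types:

```go
package main

import "fmt"

// kind stands in for the SDK's constant.* string types.
type kind string

// asStringPtr converts *kind to *string without copying: both pointed-to
// types have the underlying type string, so the conversion is legal.
func asStringPtr(k *kind) *string {
	return (*string)(k)
}

func main() {
	k := kind("char_location")
	p := asStringPtr(&k)
	fmt.Println(*p) // char_location
	k = "page_location"
	fmt.Println(*p) // page_location: p aliases k, no copy was made
}
```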
+
+type TextDelta struct {
+	Text string             `json:"text,required"`
+	Type constant.TextDelta `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Text        respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r TextDelta) RawJSON() string { return r.JSON.raw }
+func (r *TextDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type ThinkingBlock struct {
+	Signature string            `json:"signature,required"`
+	Thinking  string            `json:"thinking,required"`
+	Type      constant.Thinking `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Signature   respjson.Field
+		Thinking    respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r ThinkingBlock) RawJSON() string { return r.JSON.raw }
+func (r *ThinkingBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (r ThinkingBlock) ToParam() ThinkingBlockParam {
+	var p ThinkingBlockParam
+	p.Type = r.Type
+	p.Signature = r.Signature
+	p.Thinking = r.Thinking
+	return p
+}
+
+// The properties Signature, Thinking, Type are required.
+type ThinkingBlockParam struct {
+	Signature string `json:"signature,required"`
+	Thinking  string `json:"thinking,required"`
+	// This field can be elided, and will marshal its zero value as "thinking".
+	Type constant.Thinking `json:"type,required"`
+	paramObj
+}
+
+func (r ThinkingBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow ThinkingBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ThinkingBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func NewThinkingConfigDisabledParam() ThinkingConfigDisabledParam {
+	return ThinkingConfigDisabledParam{
+		Type: "disabled",
+	}
+}
+
+// This struct has a constant value, construct it with
+// [NewThinkingConfigDisabledParam].
+type ThinkingConfigDisabledParam struct {
+	Type constant.Disabled `json:"type,required"`
+	paramObj
+}
+
+func (r ThinkingConfigDisabledParam) MarshalJSON() (data []byte, err error) {
+	type shadow ThinkingConfigDisabledParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ThinkingConfigDisabledParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties BudgetTokens, Type are required.
+type ThinkingConfigEnabledParam struct {
+	// Determines how many tokens Claude can use for its internal reasoning process.
+	// Larger budgets can enable more thorough analysis for complex problems, improving
+	// response quality.
+	//
+	// Must be ≥1024 and less than `max_tokens`.
+	//
+	// See
+	// [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking)
+	// for details.
+	BudgetTokens int64 `json:"budget_tokens,required"`
+	// This field can be elided, and will marshal its zero value as "enabled".
+	Type constant.Enabled `json:"type,required"`
+	paramObj
+}
+
+func (r ThinkingConfigEnabledParam) MarshalJSON() (data []byte, err error) {
+	type shadow ThinkingConfigEnabledParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ThinkingConfigEnabledParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func ThinkingConfigParamOfEnabled(budgetTokens int64) ThinkingConfigParamUnion {
+	var enabled ThinkingConfigEnabledParam
+	enabled.BudgetTokens = budgetTokens
+	return ThinkingConfigParamUnion{OfEnabled: &enabled}
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type ThinkingConfigParamUnion struct {
+	OfEnabled  *ThinkingConfigEnabledParam  `json:",omitzero,inline"`
+	OfDisabled *ThinkingConfigDisabledParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u ThinkingConfigParamUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfEnabled, u.OfDisabled)
+}
+func (u *ThinkingConfigParamUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *ThinkingConfigParamUnion) asAny() any {
+	if !param.IsOmitted(u.OfEnabled) {
+		return u.OfEnabled
+	} else if !param.IsOmitted(u.OfDisabled) {
+		return u.OfDisabled
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ThinkingConfigParamUnion) GetBudgetTokens() *int64 {
+	if vt := u.OfEnabled; vt != nil {
+		return &vt.BudgetTokens
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ThinkingConfigParamUnion) GetType() *string {
+	if vt := u.OfEnabled; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfDisabled; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+type ThinkingDelta struct {
+	Thinking string                 `json:"thinking,required"`
+	Type     constant.ThinkingDelta `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Thinking    respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r ThinkingDelta) RawJSON() string { return r.JSON.raw }
+func (r *ThinkingDelta) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties InputSchema, Name are required.
+type ToolParam struct {
+	// [JSON schema](https://json-schema.org/draft/2020-12) for this tool's input.
+	//
+	// This defines the shape of the `input` that your tool accepts and that the model
+	// will produce.
+	InputSchema ToolInputSchemaParam `json:"input_schema,omitzero,required"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	Name string `json:"name,required"`
+	// Description of what this tool does.
+	//
+	// Tool descriptions should be as detailed as possible. The more information that
+	// the model has about what the tool is and how to use it, the better it will
+	// perform. You can use natural language descriptions to reinforce important
+	// aspects of the tool input JSON schema.
+	Description param.Opt[string] `json:"description,omitzero"`
+	// Any of "custom".
+	Type ToolType `json:"type,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam `json:"cache_control,omitzero"`
+	paramObj
+}
+
+func (r ToolParam) MarshalJSON() (data []byte, err error) {
+	type shadow ToolParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ToolParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// [JSON schema](https://json-schema.org/draft/2020-12) for this tool's input.
+//
+// This defines the shape of the `input` that your tool accepts and that the model
+// will produce.
+//
+// The property Type is required.
+type ToolInputSchemaParam struct {
+	Properties any      `json:"properties,omitzero"`
+	Required   []string `json:"required,omitzero"`
+	// This field can be elided, and will marshal its zero value as "object".
+	Type        constant.Object `json:"type,required"`
+	ExtraFields map[string]any  `json:"-"`
+	paramObj
+}
+
+func (r ToolInputSchemaParam) MarshalJSON() (data []byte, err error) {
+	type shadow ToolInputSchemaParam
+	return param.MarshalWithExtras(r, (*shadow)(&r), r.ExtraFields)
+}
+func (r *ToolInputSchemaParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type ToolType string
+
+const (
+	ToolTypeCustom ToolType = "custom"
+)
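`ToolInputSchemaParam` serializes to a plain JSON Schema object with `type: "object"`, optional `properties`, and `required`. A hedged sketch of the document shape using only the standard library (field names like `location` are illustrative, not prescribed by the API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// weatherSchema builds a small JSON Schema of the shape that
// ToolInputSchemaParam serializes to.
func weatherSchema() ([]byte, error) {
	return json.Marshal(map[string]any{
		"type": "object",
		"properties": map[string]any{
			"location": map[string]any{"type": "string"},
		},
		"required": []string{"location"},
	})
}

func main() {
	out, _ := weatherSchema()
	// encoding/json emits map keys in sorted order.
	fmt.Println(string(out))
}
```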
+
+// The properties Name, Type are required.
+type ToolBash20250124Param struct {
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as "bash".
+	Name constant.Bash `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as "bash_20250124".
+	Type constant.Bash20250124 `json:"type,required"`
+	paramObj
+}
+
+func (r ToolBash20250124Param) MarshalJSON() (data []byte, err error) {
+	type shadow ToolBash20250124Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ToolBash20250124Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func ToolChoiceParamOfTool(name string) ToolChoiceUnionParam {
+	var tool ToolChoiceToolParam
+	tool.Name = name
+	return ToolChoiceUnionParam{OfTool: &tool}
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type ToolChoiceUnionParam struct {
+	OfAuto *ToolChoiceAutoParam `json:",omitzero,inline"`
+	OfAny  *ToolChoiceAnyParam  `json:",omitzero,inline"`
+	OfTool *ToolChoiceToolParam `json:",omitzero,inline"`
+	OfNone *ToolChoiceNoneParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u ToolChoiceUnionParam) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfAuto, u.OfAny, u.OfTool, u.OfNone)
+}
+func (u *ToolChoiceUnionParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *ToolChoiceUnionParam) asAny() any {
+	if !param.IsOmitted(u.OfAuto) {
+		return u.OfAuto
+	} else if !param.IsOmitted(u.OfAny) {
+		return u.OfAny
+	} else if !param.IsOmitted(u.OfTool) {
+		return u.OfTool
+	} else if !param.IsOmitted(u.OfNone) {
+		return u.OfNone
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolChoiceUnionParam) GetName() *string {
+	if vt := u.OfTool; vt != nil {
+		return &vt.Name
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolChoiceUnionParam) GetType() *string {
+	if vt := u.OfAuto; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfAny; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfTool; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfNone; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolChoiceUnionParam) GetDisableParallelToolUse() *bool {
+	if vt := u.OfAuto; vt != nil && vt.DisableParallelToolUse.Valid() {
+		return &vt.DisableParallelToolUse.Value
+	} else if vt := u.OfAny; vt != nil && vt.DisableParallelToolUse.Valid() {
+		return &vt.DisableParallelToolUse.Value
+	} else if vt := u.OfTool; vt != nil && vt.DisableParallelToolUse.Valid() {
+		return &vt.DisableParallelToolUse.Value
+	}
+	return nil
+}
+
+// The model will use any available tools.
+//
+// The property Type is required.
+type ToolChoiceAnyParam struct {
+	// Whether to disable parallel tool use.
+	//
+	// Defaults to `false`. If set to `true`, the model will output exactly one tool
+	// use.
+	DisableParallelToolUse param.Opt[bool] `json:"disable_parallel_tool_use,omitzero"`
+	// This field can be elided, and will marshal its zero value as "any".
+	Type constant.Any `json:"type,required"`
+	paramObj
+}
+
+func (r ToolChoiceAnyParam) MarshalJSON() (data []byte, err error) {
+	type shadow ToolChoiceAnyParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ToolChoiceAnyParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The model will automatically decide whether to use tools.
+//
+// The property Type is required.
+type ToolChoiceAutoParam struct {
+	// Whether to disable parallel tool use.
+	//
+	// Defaults to `false`. If set to `true`, the model will output at most one tool
+	// use.
+	DisableParallelToolUse param.Opt[bool] `json:"disable_parallel_tool_use,omitzero"`
+	// This field can be elided, and will marshal its zero value as "auto".
+	Type constant.Auto `json:"type,required"`
+	paramObj
+}
+
+func (r ToolChoiceAutoParam) MarshalJSON() (data []byte, err error) {
+	type shadow ToolChoiceAutoParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ToolChoiceAutoParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func NewToolChoiceNoneParam() ToolChoiceNoneParam {
+	return ToolChoiceNoneParam{
+		Type: "none",
+	}
+}
+
+// The model will not be allowed to use tools.
+//
+// This struct has a constant value, construct it with [NewToolChoiceNoneParam].
+type ToolChoiceNoneParam struct {
+	Type constant.None `json:"type,required"`
+	paramObj
+}
+
+func (r ToolChoiceNoneParam) MarshalJSON() (data []byte, err error) {
+	type shadow ToolChoiceNoneParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ToolChoiceNoneParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The model will use the specified tool with `tool_choice.name`.
+//
+// The properties Name, Type are required.
+type ToolChoiceToolParam struct {
+	// The name of the tool to use.
+	Name string `json:"name,required"`
+	// Whether to disable parallel tool use.
+	//
+	// Defaults to `false`. If set to `true`, the model will output exactly one tool
+	// use.
+	DisableParallelToolUse param.Opt[bool] `json:"disable_parallel_tool_use,omitzero"`
+	// This field can be elided, and will marshal its zero value as "tool".
+	Type constant.Tool `json:"type,required"`
+	paramObj
+}
+
+func (r ToolChoiceToolParam) MarshalJSON() (data []byte, err error) {
+	type shadow ToolChoiceToolParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ToolChoiceToolParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties ToolUseID, Type are required.
+type ToolResultBlockParam struct {
+	ToolUseID string          `json:"tool_use_id,required"`
+	IsError   param.Opt[bool] `json:"is_error,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam         `json:"cache_control,omitzero"`
+	Content      []ToolResultBlockParamContentUnion `json:"content,omitzero"`
+	// This field can be elided, and will marshal its zero value as "tool_result".
+	Type constant.ToolResult `json:"type,required"`
+	paramObj
+}
+
+func (r ToolResultBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow ToolResultBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ToolResultBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type ToolResultBlockParamContentUnion struct {
+	OfText  *TextBlockParam  `json:",omitzero,inline"`
+	OfImage *ImageBlockParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u ToolResultBlockParamContentUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfText, u.OfImage)
+}
+func (u *ToolResultBlockParamContentUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *ToolResultBlockParamContentUnion) asAny() any {
+	if !param.IsOmitted(u.OfText) {
+		return u.OfText
+	} else if !param.IsOmitted(u.OfImage) {
+		return u.OfImage
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolResultBlockParamContentUnion) GetText() *string {
+	if vt := u.OfText; vt != nil {
+		return &vt.Text
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolResultBlockParamContentUnion) GetCitations() []TextCitationParamUnion {
+	if vt := u.OfText; vt != nil {
+		return vt.Citations
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolResultBlockParamContentUnion) GetSource() *ImageBlockParamSourceUnion {
+	if vt := u.OfImage; vt != nil {
+		return &vt.Source
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolResultBlockParamContentUnion) GetType() *string {
+	if vt := u.OfText; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfImage; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's CacheControl property, if present.
+func (u ToolResultBlockParamContentUnion) GetCacheControl() *CacheControlEphemeralParam {
+	if vt := u.OfText; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfImage; vt != nil {
+		return &vt.CacheControl
+	}
+	return nil
+}
+
+// The properties Name, Type are required.
+type ToolTextEditor20250124Param struct {
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as
+	// "str_replace_editor".
+	Name constant.StrReplaceEditor `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "text_editor_20250124".
+	Type constant.TextEditor20250124 `json:"type,required"`
+	paramObj
+}
+
+func (r ToolTextEditor20250124Param) MarshalJSON() (data []byte, err error) {
+	type shadow ToolTextEditor20250124Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ToolTextEditor20250124Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func ToolUnionParamOfTool(inputSchema ToolInputSchemaParam, name string) ToolUnionParam {
+	var variant ToolParam
+	variant.InputSchema = inputSchema
+	variant.Name = name
+	return ToolUnionParam{OfTool: &variant}
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type ToolUnionParam struct {
+	OfTool                  *ToolParam                        `json:",omitzero,inline"`
+	OfBashTool20250124      *ToolBash20250124Param            `json:",omitzero,inline"`
+	OfTextEditor20250124    *ToolTextEditor20250124Param      `json:",omitzero,inline"`
+	OfTextEditor20250429    *ToolUnionTextEditor20250429Param `json:",omitzero,inline"`
+	OfWebSearchTool20250305 *WebSearchTool20250305Param       `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u ToolUnionParam) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfTool,
+		u.OfBashTool20250124,
+		u.OfTextEditor20250124,
+		u.OfTextEditor20250429,
+		u.OfWebSearchTool20250305)
+}
+func (u *ToolUnionParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *ToolUnionParam) asAny() any {
+	if !param.IsOmitted(u.OfTool) {
+		return u.OfTool
+	} else if !param.IsOmitted(u.OfBashTool20250124) {
+		return u.OfBashTool20250124
+	} else if !param.IsOmitted(u.OfTextEditor20250124) {
+		return u.OfTextEditor20250124
+	} else if !param.IsOmitted(u.OfTextEditor20250429) {
+		return u.OfTextEditor20250429
+	} else if !param.IsOmitted(u.OfWebSearchTool20250305) {
+		return u.OfWebSearchTool20250305
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolUnionParam) GetInputSchema() *ToolInputSchemaParam {
+	if vt := u.OfTool; vt != nil {
+		return &vt.InputSchema
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolUnionParam) GetDescription() *string {
+	if vt := u.OfTool; vt != nil && vt.Description.Valid() {
+		return &vt.Description.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolUnionParam) GetAllowedDomains() []string {
+	if vt := u.OfWebSearchTool20250305; vt != nil {
+		return vt.AllowedDomains
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolUnionParam) GetBlockedDomains() []string {
+	if vt := u.OfWebSearchTool20250305; vt != nil {
+		return vt.BlockedDomains
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolUnionParam) GetMaxUses() *int64 {
+	if vt := u.OfWebSearchTool20250305; vt != nil && vt.MaxUses.Valid() {
+		return &vt.MaxUses.Value
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolUnionParam) GetUserLocation() *WebSearchTool20250305UserLocationParam {
+	if vt := u.OfWebSearchTool20250305; vt != nil {
+		return &vt.UserLocation
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolUnionParam) GetName() *string {
+	if vt := u.OfTool; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfBashTool20250124; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfTextEditor20250124; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfTextEditor20250429; vt != nil {
+		return (*string)(&vt.Name)
+	} else if vt := u.OfWebSearchTool20250305; vt != nil {
+		return (*string)(&vt.Name)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's property, if present.
+func (u ToolUnionParam) GetType() *string {
+	if vt := u.OfTool; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfBashTool20250124; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfTextEditor20250124; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfTextEditor20250429; vt != nil {
+		return (*string)(&vt.Type)
+	} else if vt := u.OfWebSearchTool20250305; vt != nil {
+		return (*string)(&vt.Type)
+	}
+	return nil
+}
+
+// Returns a pointer to the underlying variant's CacheControl property, if present.
+func (u ToolUnionParam) GetCacheControl() *CacheControlEphemeralParam {
+	if vt := u.OfTool; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfBashTool20250124; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfTextEditor20250124; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfTextEditor20250429; vt != nil {
+		return &vt.CacheControl
+	} else if vt := u.OfWebSearchTool20250305; vt != nil {
+		return &vt.CacheControl
+	}
+	return nil
+}
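The accessor methods above all follow the same shape: probe each variant pointer in turn and return the shared field from the first non-nil one. A minimal, self-contained sketch of that pattern (the `bashTool` and `editorTool` types here are hypothetical stand-ins, not SDK types):

```go
package main

import "fmt"

// Two hypothetical tool variants that share a Name field.
type bashTool struct{ Name string }
type editorTool struct{ Name string }

// toolUnion mirrors the ToolUnionParam shape: at most one
// variant pointer is non-nil at a time.
type toolUnion struct {
	OfBash   *bashTool
	OfEditor *editorTool
}

// GetName probes each variant and returns a pointer to the first
// present Name, or nil when no variant is set.
func (u toolUnion) GetName() *string {
	if vt := u.OfBash; vt != nil {
		return &vt.Name
	} else if vt := u.OfEditor; vt != nil {
		return &vt.Name
	}
	return nil
}

func main() {
	u := toolUnion{OfEditor: &editorTool{Name: "str_replace_editor"}}
	if n := u.GetName(); n != nil {
		fmt.Println(*n)
	}
	// An empty union yields nil rather than a zero value.
	fmt.Println(toolUnion{}.GetName() == nil)
}
```

Returning a pointer (rather than a zero value) lets callers distinguish "field absent" from "field set to its zero value", which is why the union getters above are pointer-valued.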
+
+// The properties Name, Type are required.
+type ToolUnionTextEditor20250429Param struct {
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as
+	// "str_replace_based_edit_tool".
+	Name constant.StrReplaceBasedEditTool `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "text_editor_20250429".
+	Type constant.TextEditor20250429 `json:"type,required"`
+	paramObj
+}
+
+func (r ToolUnionTextEditor20250429Param) MarshalJSON() (data []byte, err error) {
+	type shadow ToolUnionTextEditor20250429Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ToolUnionTextEditor20250429Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type ToolUseBlock struct {
+	ID    string           `json:"id,required"`
+	Input json.RawMessage  `json:"input,required"`
+	Name  string           `json:"name,required"`
+	Type  constant.ToolUse `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID          respjson.Field
+		Input       respjson.Field
+		Name        respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r ToolUseBlock) RawJSON() string { return r.JSON.raw }
+func (r *ToolUseBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (r ToolUseBlock) ToParam() ToolUseBlockParam {
+	var toolUse ToolUseBlockParam
+	toolUse.Type = r.Type
+	toolUse.ID = r.ID
+	toolUse.Input = r.Input
+	toolUse.Name = r.Name
+	return toolUse
+}
+
+// The properties ID, Input, Name, Type are required.
+type ToolUseBlockParam struct {
+	ID    string `json:"id,required"`
+	Input any    `json:"input,omitzero,required"`
+	Name  string `json:"name,required"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as "tool_use".
+	Type constant.ToolUse `json:"type,required"`
+	paramObj
+}
+
+func (r ToolUseBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow ToolUseBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *ToolUseBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Type, URL are required.
+type URLImageSourceParam struct {
+	URL string `json:"url,required"`
+	// This field can be elided, and will marshal its zero value as "url".
+	Type constant.URL `json:"type,required"`
+	paramObj
+}
+
+func (r URLImageSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow URLImageSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *URLImageSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Type, URL are required.
+type URLPDFSourceParam struct {
+	URL string `json:"url,required"`
+	// This field can be elided, and will marshal its zero value as "url".
+	Type constant.URL `json:"type,required"`
+	paramObj
+}
+
+func (r URLPDFSourceParam) MarshalJSON() (data []byte, err error) {
+	type shadow URLPDFSourceParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *URLPDFSourceParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type Usage struct {
+	// The number of input tokens used to create the cache entry.
+	CacheCreationInputTokens int64 `json:"cache_creation_input_tokens,required"`
+	// The number of input tokens read from the cache.
+	CacheReadInputTokens int64 `json:"cache_read_input_tokens,required"`
+	// The number of input tokens which were used.
+	InputTokens int64 `json:"input_tokens,required"`
+	// The number of output tokens which were used.
+	OutputTokens int64 `json:"output_tokens,required"`
+	// The number of server tool requests.
+	ServerToolUse ServerToolUsage `json:"server_tool_use,required"`
+	// If the request used the priority, standard, or batch tier.
+	//
+	// Any of "standard", "priority", "batch".
+	ServiceTier UsageServiceTier `json:"service_tier,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CacheCreationInputTokens respjson.Field
+		CacheReadInputTokens     respjson.Field
+		InputTokens              respjson.Field
+		OutputTokens             respjson.Field
+		ServerToolUse            respjson.Field
+		ServiceTier              respjson.Field
+		ExtraFields              map[string]respjson.Field
+		raw                      string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r Usage) RawJSON() string { return r.JSON.raw }
+func (r *Usage) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// If the request used the priority, standard, or batch tier.
+type UsageServiceTier string
+
+const (
+	UsageServiceTierStandard UsageServiceTier = "standard"
+	UsageServiceTierPriority UsageServiceTier = "priority"
+	UsageServiceTierBatch    UsageServiceTier = "batch"
+)
+
+type WebSearchResultBlock struct {
+	EncryptedContent string                   `json:"encrypted_content,required"`
+	PageAge          string                   `json:"page_age,required"`
+	Title            string                   `json:"title,required"`
+	Type             constant.WebSearchResult `json:"type,required"`
+	URL              string                   `json:"url,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		EncryptedContent respjson.Field
+		PageAge          respjson.Field
+		Title            respjson.Field
+		Type             respjson.Field
+		URL              respjson.Field
+		ExtraFields      map[string]respjson.Field
+		raw              string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r WebSearchResultBlock) RawJSON() string { return r.JSON.raw }
+func (r *WebSearchResultBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties EncryptedContent, Title, Type, URL are required.
+type WebSearchResultBlockParam struct {
+	EncryptedContent string            `json:"encrypted_content,required"`
+	Title            string            `json:"title,required"`
+	URL              string            `json:"url,required"`
+	PageAge          param.Opt[string] `json:"page_age,omitzero"`
+	// This field can be elided, and will marshal its zero value as
+	// "web_search_result".
+	Type constant.WebSearchResult `json:"type,required"`
+	paramObj
+}
+
+func (r WebSearchResultBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow WebSearchResultBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *WebSearchResultBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Name, Type are required.
+type WebSearchTool20250305Param struct {
+	// Maximum number of times the tool can be used in the API request.
+	MaxUses param.Opt[int64] `json:"max_uses,omitzero"`
+	// If provided, only these domains will be included in results. Cannot be used
+	// alongside `blocked_domains`.
+	AllowedDomains []string `json:"allowed_domains,omitzero"`
+	// If provided, these domains will never appear in results. Cannot be used
+	// alongside `allowed_domains`.
+	BlockedDomains []string `json:"blocked_domains,omitzero"`
+	// Parameters for the user's location. Used to provide more relevant search
+	// results.
+	UserLocation WebSearchTool20250305UserLocationParam `json:"user_location,omitzero"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// Name of the tool.
+	//
+	// This is how the tool will be called by the model and in `tool_use` blocks.
+	//
+	// This field can be elided, and will marshal its zero value as "web_search".
+	Name constant.WebSearch `json:"name,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "web_search_20250305".
+	Type constant.WebSearch20250305 `json:"type,required"`
+	paramObj
+}
+
+func (r WebSearchTool20250305Param) MarshalJSON() (data []byte, err error) {
+	type shadow WebSearchTool20250305Param
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *WebSearchTool20250305Param) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Parameters for the user's location. Used to provide more relevant search
+// results.
+//
+// The property Type is required.
+type WebSearchTool20250305UserLocationParam struct {
+	// The city of the user.
+	City param.Opt[string] `json:"city,omitzero"`
+	// The two letter
+	// [ISO country code](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2) of the
+	// user.
+	Country param.Opt[string] `json:"country,omitzero"`
+	// The region of the user.
+	Region param.Opt[string] `json:"region,omitzero"`
+	// The [IANA timezone](https://nodatime.org/TimeZones) of the user.
+	Timezone param.Opt[string] `json:"timezone,omitzero"`
+	// This field can be elided, and will marshal its zero value as "approximate".
+	Type constant.Approximate `json:"type,required"`
+	paramObj
+}
+
+func (r WebSearchTool20250305UserLocationParam) MarshalJSON() (data []byte, err error) {
+	type shadow WebSearchTool20250305UserLocationParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *WebSearchTool20250305UserLocationParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties ErrorCode, Type are required.
+type WebSearchToolRequestErrorParam struct {
+	// Any of "invalid_tool_input", "unavailable", "max_uses_exceeded",
+	// "too_many_requests", "query_too_long".
+	ErrorCode WebSearchToolRequestErrorErrorCode `json:"error_code,omitzero,required"`
+	// This field can be elided, and will marshal its zero value as
+	// "web_search_tool_result_error".
+	Type constant.WebSearchToolResultError `json:"type,required"`
+	paramObj
+}
+
+func (r WebSearchToolRequestErrorParam) MarshalJSON() (data []byte, err error) {
+	type shadow WebSearchToolRequestErrorParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *WebSearchToolRequestErrorParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type WebSearchToolRequestErrorErrorCode string
+
+const (
+	WebSearchToolRequestErrorErrorCodeInvalidToolInput WebSearchToolRequestErrorErrorCode = "invalid_tool_input"
+	WebSearchToolRequestErrorErrorCodeUnavailable      WebSearchToolRequestErrorErrorCode = "unavailable"
+	WebSearchToolRequestErrorErrorCodeMaxUsesExceeded  WebSearchToolRequestErrorErrorCode = "max_uses_exceeded"
+	WebSearchToolRequestErrorErrorCodeTooManyRequests  WebSearchToolRequestErrorErrorCode = "too_many_requests"
+	WebSearchToolRequestErrorErrorCodeQueryTooLong     WebSearchToolRequestErrorErrorCode = "query_too_long"
+)
+
+type WebSearchToolResultBlock struct {
+	Content   WebSearchToolResultBlockContentUnion `json:"content,required"`
+	ToolUseID string                               `json:"tool_use_id,required"`
+	Type      constant.WebSearchToolResult         `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Content     respjson.Field
+		ToolUseID   respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r WebSearchToolResultBlock) RawJSON() string { return r.JSON.raw }
+func (r *WebSearchToolResultBlock) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// WebSearchToolResultBlockContentUnion contains all possible properties and values
+// from [WebSearchToolResultError], [[]WebSearchResultBlock].
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+//
+// If the underlying value is not a json object, one of the following properties
+// will be valid: [OfWebSearchResultBlockArray]
+type WebSearchToolResultBlockContentUnion struct {
+	// This field will be present if the value is a [[]WebSearchResultBlock] instead of
+	// an object.
+	OfWebSearchResultBlockArray []WebSearchResultBlock `json:",inline"`
+	// This field is from variant [WebSearchToolResultError].
+	ErrorCode WebSearchToolResultErrorErrorCode `json:"error_code"`
+	// This field is from variant [WebSearchToolResultError].
+	Type constant.WebSearchToolResultError `json:"type"`
+	JSON struct {
+		OfWebSearchResultBlockArray respjson.Field
+		ErrorCode                   respjson.Field
+		Type                        respjson.Field
+		raw                         string
+	} `json:"-"`
+}
+
+func (u WebSearchToolResultBlockContentUnion) AsResponseWebSearchToolResultError() (v WebSearchToolResultError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u WebSearchToolResultBlockContentUnion) AsWebSearchResultBlockArray() (v []WebSearchResultBlock) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u WebSearchToolResultBlockContentUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *WebSearchToolResultBlockContentUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties Content, ToolUseID, Type are required.
+type WebSearchToolResultBlockParam struct {
+	Content   WebSearchToolResultBlockParamContentUnion `json:"content,omitzero,required"`
+	ToolUseID string                                    `json:"tool_use_id,required"`
+	// Create a cache control breakpoint at this content block.
+	CacheControl CacheControlEphemeralParam `json:"cache_control,omitzero"`
+	// This field can be elided, and will marshal its zero value as
+	// "web_search_tool_result".
+	Type constant.WebSearchToolResult `json:"type,required"`
+	paramObj
+}
+
+func (r WebSearchToolResultBlockParam) MarshalJSON() (data []byte, err error) {
+	type shadow WebSearchToolResultBlockParam
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *WebSearchToolResultBlockParam) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func NewWebSearchToolRequestError(errorCode WebSearchToolRequestErrorErrorCode) WebSearchToolResultBlockParamContentUnion {
+	var variant WebSearchToolRequestErrorParam
+	variant.ErrorCode = errorCode
+	return WebSearchToolResultBlockParamContentUnion{OfRequestWebSearchToolResultError: &variant}
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type WebSearchToolResultBlockParamContentUnion struct {
+	OfWebSearchToolResultBlockItem    []WebSearchResultBlockParam     `json:",omitzero,inline"`
+	OfRequestWebSearchToolResultError *WebSearchToolRequestErrorParam `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u WebSearchToolResultBlockParamContentUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfWebSearchToolResultBlockItem, u.OfRequestWebSearchToolResultError)
+}
+func (u *WebSearchToolResultBlockParamContentUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *WebSearchToolResultBlockParamContentUnion) asAny() any {
+	if !param.IsOmitted(u.OfWebSearchToolResultBlockItem) {
+		return &u.OfWebSearchToolResultBlockItem
+	} else if !param.IsOmitted(u.OfRequestWebSearchToolResultError) {
+		return u.OfRequestWebSearchToolResultError
+	}
+	return nil
+}
+
+type WebSearchToolResultError struct {
+	// Any of "invalid_tool_input", "unavailable", "max_uses_exceeded",
+	// "too_many_requests", "query_too_long".
+	ErrorCode WebSearchToolResultErrorErrorCode `json:"error_code,required"`
+	Type      constant.WebSearchToolResultError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ErrorCode   respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r WebSearchToolResultError) RawJSON() string { return r.JSON.raw }
+func (r *WebSearchToolResultError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type WebSearchToolResultErrorErrorCode string
+
+const (
+	WebSearchToolResultErrorErrorCodeInvalidToolInput WebSearchToolResultErrorErrorCode = "invalid_tool_input"
+	WebSearchToolResultErrorErrorCodeUnavailable      WebSearchToolResultErrorErrorCode = "unavailable"
+	WebSearchToolResultErrorErrorCodeMaxUsesExceeded  WebSearchToolResultErrorErrorCode = "max_uses_exceeded"
+	WebSearchToolResultErrorErrorCodeTooManyRequests  WebSearchToolResultErrorErrorCode = "too_many_requests"
+	WebSearchToolResultErrorErrorCodeQueryTooLong     WebSearchToolResultErrorErrorCode = "query_too_long"
+)
+
+type MessageNewParams struct {
+	// The maximum number of tokens to generate before stopping.
+	//
+	// Note that our models may stop _before_ reaching this maximum. This parameter
+	// only specifies the absolute maximum number of tokens to generate.
+	//
+	// Different models have different maximum values for this parameter. See
+	// [models](https://docs.anthropic.com/en/docs/models-overview) for details.
+	MaxTokens int64 `json:"max_tokens,required"`
+	// Input messages.
+	//
+	// Our models are trained to operate on alternating `user` and `assistant`
+	// conversational turns. When creating a new `Message`, you specify the prior
+	// conversational turns with the `messages` parameter, and the model then generates
+	// the next `Message` in the conversation. Consecutive `user` or `assistant` turns
+	// in your request will be combined into a single turn.
+	//
+	// Each input message must be an object with a `role` and `content`. You can
+	// specify a single `user`-role message, or you can include multiple `user` and
+	// `assistant` messages.
+	//
+	// If the final message uses the `assistant` role, the response content will
+	// continue immediately from the content in that message. This can be used to
+	// constrain part of the model's response.
+	//
+	// Example with a single `user` message:
+	//
+	// ```json
+	// [{ "role": "user", "content": "Hello, Claude" }]
+	// ```
+	//
+	// Example with multiple conversational turns:
+	//
+	// ```json
+	// [
+	//
+	//	{ "role": "user", "content": "Hello there." },
+	//	{ "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
+	//	{ "role": "user", "content": "Can you explain LLMs in plain English?" }
+	//
+	// ]
+	// ```
+	//
+	// Example with a partially-filled response from Claude:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
+	//	},
+	//	{ "role": "assistant", "content": "The best answer is (" }
+	//
+	// ]
+	// ```
+	//
+	// Each input message `content` may be either a single `string` or an array of
+	// content blocks, where each block has a specific `type`. Using a `string` for
+	// `content` is shorthand for an array of one content block of type `"text"`. The
+	// following input messages are equivalent:
+	//
+	// ```json
+	// { "role": "user", "content": "Hello, Claude" }
+	// ```
+	//
+	// ```json
+	// { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
+	// ```
+	//
+	// Starting with Claude 3 models, you can also send image content blocks:
+	//
+	// ```json
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": [
+	//	    {
+	//	      "type": "image",
+	//	      "source": {
+	//	        "type": "base64",
+	//	        "media_type": "image/jpeg",
+	//	        "data": "/9j/4AAQSkZJRg..."
+	//	      }
+	//	    },
+	//	    { "type": "text", "text": "What is in this image?" }
+	//	  ]
+	//	}
+	//
+	// ```
+	//
+	// We currently support the `base64` source type for images, and the `image/jpeg`,
+	// `image/png`, `image/gif`, and `image/webp` media types.
+	//
+	// See [examples](https://docs.anthropic.com/en/api/messages-examples#vision) for
+	// more input examples.
+	//
+	// Note that if you want to include a
+	// [system prompt](https://docs.anthropic.com/en/docs/system-prompts), you can use
+	// the top-level `system` parameter — there is no `"system"` role for input
+	// messages in the Messages API.
+	//
+	// There is a limit of 100000 messages in a single request.
+	Messages []MessageParam `json:"messages,omitzero,required"`
+	// The model that will complete your prompt.
+	//
+	// See [models](https://docs.anthropic.com/en/docs/models-overview) for
+	// additional details and options.
+	Model Model `json:"model,omitzero,required"`
+	// Amount of randomness injected into the response.
+	//
+	// Defaults to `1.0`. Ranges from `0.0` to `1.0`. Use `temperature` closer to `0.0`
+	// for analytical / multiple choice, and closer to `1.0` for creative and
+	// generative tasks.
+	//
+	// Note that even with `temperature` of `0.0`, the results will not be fully
+	// deterministic.
+	Temperature param.Opt[float64] `json:"temperature,omitzero"`
+	// Only sample from the top K options for each subsequent token.
+	//
+	// Used to remove "long tail" low probability responses.
+	// [Learn more technical details here](https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277).
+	//
+	// Recommended for advanced use cases only. You usually only need to use
+	// `temperature`.
+	TopK param.Opt[int64] `json:"top_k,omitzero"`
+	// Use nucleus sampling.
+	//
+	// In nucleus sampling, we compute the cumulative distribution over all the options
+	// for each subsequent token in decreasing probability order and cut it off once it
+	// reaches a particular probability specified by `top_p`. You should either alter
+	// `temperature` or `top_p`, but not both.
+	//
+	// Recommended for advanced use cases only. You usually only need to use
+	// `temperature`.
+	TopP param.Opt[float64] `json:"top_p,omitzero"`
+	// An object describing metadata about the request.
+	Metadata MetadataParam `json:"metadata,omitzero"`
+	// Determines whether to use priority capacity (if available) or standard capacity
+	// for this request.
+	//
+	// Anthropic offers different levels of service for your API requests. See
+	// [service-tiers](https://docs.anthropic.com/en/api/service-tiers) for details.
+	//
+	// Any of "auto", "standard_only".
+	ServiceTier MessageNewParamsServiceTier `json:"service_tier,omitzero"`
+	// Custom text sequences that will cause the model to stop generating.
+	//
+	// Our models will normally stop when they have naturally completed their turn,
+	// which will result in a response `stop_reason` of `"end_turn"`.
+	//
+	// If you want the model to stop generating when it encounters custom strings of
+	// text, you can use the `stop_sequences` parameter. If the model encounters one of
+	// the custom sequences, the response `stop_reason` value will be `"stop_sequence"`
+	// and the response `stop_sequence` value will contain the matched stop sequence.
+	StopSequences []string `json:"stop_sequences,omitzero"`
+	// System prompt.
+	//
+	// A system prompt is a way of providing context and instructions to Claude, such
+	// as specifying a particular goal or role. See our
+	// [guide to system prompts](https://docs.anthropic.com/en/docs/system-prompts).
+	System []TextBlockParam `json:"system,omitzero"`
+	// Configuration for enabling Claude's extended thinking.
+	//
+	// When enabled, responses include `thinking` content blocks showing Claude's
+	// thinking process before the final answer. Requires a minimum budget of 1,024
+	// tokens and counts towards your `max_tokens` limit.
+	//
+	// See
+	// [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking)
+	// for details.
+	Thinking ThinkingConfigParamUnion `json:"thinking,omitzero"`
+	// How the model should use the provided tools. The model can use a specific tool,
+	// any available tool, decide by itself, or not use tools at all.
+	ToolChoice ToolChoiceUnionParam `json:"tool_choice,omitzero"`
+	// Definitions of tools that the model may use.
+	//
+	// If you include `tools` in your API request, the model may return `tool_use`
+	// content blocks that represent the model's use of those tools. You can then run
+	// those tools using the tool input generated by the model and then optionally
+	// return results back to the model using `tool_result` content blocks.
+	//
+	// Each tool definition includes:
+	//
+	//   - `name`: Name of the tool.
+	//   - `description`: Optional, but strongly-recommended description of the tool.
+	//   - `input_schema`: [JSON schema](https://json-schema.org/draft/2020-12) for the
+	//     tool `input` shape that the model will produce in `tool_use` output content
+	//     blocks.
+	//
+	// For example, if you defined `tools` as:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "name": "get_stock_price",
+	//	  "description": "Get the current stock price for a given ticker symbol.",
+	//	  "input_schema": {
+	//	    "type": "object",
+	//	    "properties": {
+	//	      "ticker": {
+	//	        "type": "string",
+	//	        "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
+	//	      }
+	//	    },
+	//	    "required": ["ticker"]
+	//	  }
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// And then asked the model "What's the S&P 500 at today?", the model might produce
+	// `tool_use` content blocks in the response like this:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "type": "tool_use",
+	//	  "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
+	//	  "name": "get_stock_price",
+	//	  "input": { "ticker": "^GSPC" }
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// You might then run your `get_stock_price` tool with `{"ticker": "^GSPC"}` as an
+	// input, and return the following back to the model in a subsequent `user`
+	// message:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "type": "tool_result",
+	//	  "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
+	//	  "content": "259.75 USD"
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// Tools can be used for workflows that include running client-side tools and
+	// functions, or more generally whenever you want the model to produce a particular
+	// JSON structure of output.
+	//
+	// See our [guide](https://docs.anthropic.com/en/docs/tool-use) for more details.
+	Tools []ToolUnionParam `json:"tools,omitzero"`
+	paramObj
+}
+
+func (r MessageNewParams) MarshalJSON() (data []byte, err error) {
+	type shadow MessageNewParams
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *MessageNewParams) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Determines whether to use priority capacity (if available) or standard capacity
+// for this request.
+//
+// Anthropic offers different levels of service for your API requests. See
+// [service-tiers](https://docs.anthropic.com/en/api/service-tiers) for details.
+type MessageNewParamsServiceTier string
+
+const (
+	MessageNewParamsServiceTierAuto         MessageNewParamsServiceTier = "auto"
+	MessageNewParamsServiceTierStandardOnly MessageNewParamsServiceTier = "standard_only"
+)
+
+type MessageCountTokensParams struct {
+	// Input messages.
+	//
+	// Our models are trained to operate on alternating `user` and `assistant`
+	// conversational turns. When creating a new `Message`, you specify the prior
+	// conversational turns with the `messages` parameter, and the model then generates
+	// the next `Message` in the conversation. Consecutive `user` or `assistant` turns
+	// in your request will be combined into a single turn.
+	//
+	// Each input message must be an object with a `role` and `content`. You can
+	// specify a single `user`-role message, or you can include multiple `user` and
+	// `assistant` messages.
+	//
+	// If the final message uses the `assistant` role, the response content will
+	// continue immediately from the content in that message. This can be used to
+	// constrain part of the model's response.
+	//
+	// Example with a single `user` message:
+	//
+	// ```json
+	// [{ "role": "user", "content": "Hello, Claude" }]
+	// ```
+	//
+	// Example with multiple conversational turns:
+	//
+	// ```json
+	// [
+	//
+	//	{ "role": "user", "content": "Hello there." },
+	//	{ "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
+	//	{ "role": "user", "content": "Can you explain LLMs in plain English?" }
+	//
+	// ]
+	// ```
+	//
+	// Example with a partially-filled response from Claude:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
+	//	},
+	//	{ "role": "assistant", "content": "The best answer is (" }
+	//
+	// ]
+	// ```
+	//
+	// Each input message `content` may be either a single `string` or an array of
+	// content blocks, where each block has a specific `type`. Using a `string` for
+	// `content` is shorthand for an array of one content block of type `"text"`. The
+	// following input messages are equivalent:
+	//
+	// ```json
+	// { "role": "user", "content": "Hello, Claude" }
+	// ```
+	//
+	// ```json
+	// { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
+	// ```
+	//
+	// Starting with Claude 3 models, you can also send image content blocks:
+	//
+	// ```json
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": [
+	//	    {
+	//	      "type": "image",
+	//	      "source": {
+	//	        "type": "base64",
+	//	        "media_type": "image/jpeg",
+	//	        "data": "/9j/4AAQSkZJRg..."
+	//	      }
+	//	    },
+	//	    { "type": "text", "text": "What is in this image?" }
+	//	  ]
+	//	}
+	//
+	// ```
+	//
+	// We currently support the `base64` source type for images, and the `image/jpeg`,
+	// `image/png`, `image/gif`, and `image/webp` media types.
+	//
+	// See [examples](https://docs.anthropic.com/en/api/messages-examples#vision) for
+	// more input examples.
+	//
+	// Note that if you want to include a
+	// [system prompt](https://docs.anthropic.com/en/docs/system-prompts), you can use
+	// the top-level `system` parameter — there is no `"system"` role for input
+	// messages in the Messages API.
+	//
+	// There is a limit of 100000 messages in a single request.
+	Messages []MessageParam `json:"messages,omitzero,required"`
+	// The model that will complete your prompt.
+	//
+	// See [models](https://docs.anthropic.com/en/docs/models-overview) for
+	// additional details and options.
+	Model Model `json:"model,omitzero,required"`
+	// System prompt.
+	//
+	// A system prompt is a way of providing context and instructions to Claude, such
+	// as specifying a particular goal or role. See our
+	// [guide to system prompts](https://docs.anthropic.com/en/docs/system-prompts).
+	System MessageCountTokensParamsSystemUnion `json:"system,omitzero"`
+	// Configuration for enabling Claude's extended thinking.
+	//
+	// When enabled, responses include `thinking` content blocks showing Claude's
+	// thinking process before the final answer. Requires a minimum budget of 1,024
+	// tokens and counts towards your `max_tokens` limit.
+	//
+	// See
+	// [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking)
+	// for details.
+	Thinking ThinkingConfigParamUnion `json:"thinking,omitzero"`
+	// How the model should use the provided tools. The model can use a specific tool,
+	// any available tool, decide by itself, or not use tools at all.
+	ToolChoice ToolChoiceUnionParam `json:"tool_choice,omitzero"`
+	// Definitions of tools that the model may use.
+	//
+	// If you include `tools` in your API request, the model may return `tool_use`
+	// content blocks that represent the model's use of those tools. You can then run
+	// those tools using the tool input generated by the model and then optionally
+	// return results back to the model using `tool_result` content blocks.
+	//
+	// Each tool definition includes:
+	//
+	//   - `name`: Name of the tool.
+	//   - `description`: Optional, but strongly-recommended description of the tool.
+	//   - `input_schema`: [JSON schema](https://json-schema.org/draft/2020-12) for the
+	//     tool `input` shape that the model will produce in `tool_use` output content
+	//     blocks.
+	//
+	// For example, if you defined `tools` as:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "name": "get_stock_price",
+	//	  "description": "Get the current stock price for a given ticker symbol.",
+	//	  "input_schema": {
+	//	    "type": "object",
+	//	    "properties": {
+	//	      "ticker": {
+	//	        "type": "string",
+	//	        "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
+	//	      }
+	//	    },
+	//	    "required": ["ticker"]
+	//	  }
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// And then asked the model "What's the S&P 500 at today?", the model might produce
+	// `tool_use` content blocks in the response like this:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "type": "tool_use",
+	//	  "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
+	//	  "name": "get_stock_price",
+	//	  "input": { "ticker": "^GSPC" }
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// You might then run your `get_stock_price` tool with `{"ticker": "^GSPC"}` as an
+	// input, and return the following back to the model in a subsequent `user`
+	// message:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "type": "tool_result",
+	//	  "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
+	//	  "content": "259.75 USD"
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// Tools can be used for workflows that include running client-side tools and
+	// functions, or more generally whenever you want the model to produce a particular
+	// JSON structure of output.
+	//
+	// See our [guide](https://docs.anthropic.com/en/docs/tool-use) for more details.
+	Tools []MessageCountTokensToolUnionParam `json:"tools,omitzero"`
+	paramObj
+}
+
+func (r MessageCountTokensParams) MarshalJSON() (data []byte, err error) {
+	type shadow MessageCountTokensParams
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *MessageCountTokensParams) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Only one field can be non-zero.
+//
+// Use [param.IsOmitted] to confirm if a field is set.
+type MessageCountTokensParamsSystemUnion struct {
+	OfString         param.Opt[string] `json:",omitzero,inline"`
+	OfTextBlockArray []TextBlockParam  `json:",omitzero,inline"`
+	paramUnion
+}
+
+func (u MessageCountTokensParamsSystemUnion) MarshalJSON() ([]byte, error) {
+	return param.MarshalUnion(u, u.OfString, u.OfTextBlockArray)
+}
+func (u *MessageCountTokensParamsSystemUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, u)
+}
+
+func (u *MessageCountTokensParamsSystemUnion) asAny() any {
+	if !param.IsOmitted(u.OfString) {
+		return &u.OfString.Value
+	} else if !param.IsOmitted(u.OfTextBlockArray) {
+		return &u.OfTextBlockArray
+	}
+	return nil
+}
+
+// CalculateNonStreamingTimeout calculates the appropriate timeout for a non-streaming request
+// based on the maximum number of tokens and the model's non-streaming token limit
+func CalculateNonStreamingTimeout(maxTokens int, model Model, opts []option.RequestOption) (time.Duration, error) {
+	preCfg, err := requestconfig.PreRequestOptions(opts...)
+	if err != nil {
+		return 0, fmt.Errorf("error applying request options: %w", err)
+	}
+	// if the user has set a specific request timeout, use that
+	if preCfg.RequestTimeout != 0 {
+		return preCfg.RequestTimeout, nil
+	}
+
+	maximumTime := 60 * 60 * time.Second
+	defaultTime := 60 * 10 * time.Second
+
+	expectedTime := time.Duration(float64(maximumTime) * float64(maxTokens) / 128000.0)
+
+	// If the model has a non-streaming token limit and max_tokens exceeds it,
+	// or if the expected time exceeds default time, recommend streaming
+	maxNonStreamingTokens, hasLimit := constant.ModelNonStreamingTokens[string(model)]
+	if expectedTime > defaultTime || (hasLimit && maxTokens > maxNonStreamingTokens) {
+		return 0, fmt.Errorf("streaming is strongly recommended for operations that may take longer than 10 minutes")
+	}
+
+	return expectedTime, nil
+}

vendor/github.com/anthropics/anthropic-sdk-go/messagebatch.go 🔗

@@ -0,0 +1,825 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package anthropic
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+	"time"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/apijson"
+	"github.com/anthropics/anthropic-sdk-go/internal/apiquery"
+	"github.com/anthropics/anthropic-sdk-go/internal/requestconfig"
+	"github.com/anthropics/anthropic-sdk-go/option"
+	"github.com/anthropics/anthropic-sdk-go/packages/jsonl"
+	"github.com/anthropics/anthropic-sdk-go/packages/pagination"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+	"github.com/anthropics/anthropic-sdk-go/shared"
+	"github.com/anthropics/anthropic-sdk-go/shared/constant"
+)
+
+// MessageBatchService contains methods and other services that help with
+// interacting with the anthropic API.
+//
+// Note, unlike clients, this service does not read variables from the environment
+// automatically. You should not instantiate this service directly; use the
+// [NewMessageBatchService] method instead.
+type MessageBatchService struct {
+	Options []option.RequestOption
+}
+
+// NewMessageBatchService generates a new service that applies the given options to
+// each request. These options are applied after the parent client's options (if
+// there is one), and before any request-specific options.
+func NewMessageBatchService(opts ...option.RequestOption) (r MessageBatchService) {
+	r = MessageBatchService{}
+	r.Options = opts
+	return
+}
+
+// Send a batch of Message creation requests.
+//
+// The Message Batches API can be used to process multiple Messages API requests at
+// once. Once a Message Batch is created, it begins processing immediately. Batches
+// can take up to 24 hours to complete.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *MessageBatchService) New(ctx context.Context, body MessageBatchNewParams, opts ...option.RequestOption) (res *MessageBatch, err error) {
+	opts = append(r.Options[:], opts...)
+	path := "v1/messages/batches"
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, body, &res, opts...)
+	return
+}
+
+// This endpoint is idempotent and can be used to poll for Message Batch
+// completion. To access the results of a Message Batch, make a request to the
+// `results_url` field in the response.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *MessageBatchService) Get(ctx context.Context, messageBatchID string, opts ...option.RequestOption) (res *MessageBatch, err error) {
+	opts = append(r.Options[:], opts...)
+	if messageBatchID == "" {
+		err = errors.New("missing required message_batch_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/messages/batches/%s", messageBatchID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodGet, path, nil, &res, opts...)
+	return
+}
+
+// List all Message Batches within a Workspace. Most recently created batches are
+// returned first.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *MessageBatchService) List(ctx context.Context, query MessageBatchListParams, opts ...option.RequestOption) (res *pagination.Page[MessageBatch], err error) {
+	var raw *http.Response
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithResponseInto(&raw)}, opts...)
+	path := "v1/messages/batches"
+	cfg, err := requestconfig.NewRequestConfig(ctx, http.MethodGet, path, query, &res, opts...)
+	if err != nil {
+		return nil, err
+	}
+	err = cfg.Execute()
+	if err != nil {
+		return nil, err
+	}
+	res.SetPageConfig(cfg, raw)
+	return res, nil
+}
+
+// List all Message Batches within a Workspace. Most recently created batches are
+// returned first.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *MessageBatchService) ListAutoPaging(ctx context.Context, query MessageBatchListParams, opts ...option.RequestOption) *pagination.PageAutoPager[MessageBatch] {
+	return pagination.NewPageAutoPager(r.List(ctx, query, opts...))
+}
+
+// Delete a Message Batch.
+//
+// Message Batches can only be deleted once they've finished processing. If you'd
+// like to delete an in-progress batch, you must first cancel it.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *MessageBatchService) Delete(ctx context.Context, messageBatchID string, opts ...option.RequestOption) (res *DeletedMessageBatch, err error) {
+	opts = append(r.Options[:], opts...)
+	if messageBatchID == "" {
+		err = errors.New("missing required message_batch_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/messages/batches/%s", messageBatchID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodDelete, path, nil, &res, opts...)
+	return
+}
+
+// Batches may be canceled any time before processing ends. Once cancellation is
+// initiated, the batch enters a `canceling` state, at which time the system may
+// complete any in-progress, non-interruptible requests before finalizing
+// cancellation.
+//
+// The number of canceled requests is specified in `request_counts`. To determine
+// which requests were canceled, check the individual results within the batch.
+// Note that cancellation may not result in any canceled requests if they were
+// non-interruptible.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *MessageBatchService) Cancel(ctx context.Context, messageBatchID string, opts ...option.RequestOption) (res *MessageBatch, err error) {
+	opts = append(r.Options[:], opts...)
+	if messageBatchID == "" {
+		err = errors.New("missing required message_batch_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/messages/batches/%s/cancel", messageBatchID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodPost, path, nil, &res, opts...)
+	return
+}
+
+// Streams the results of a Message Batch as a `.jsonl` file.
+//
+// Each line in the file is a JSON object containing the result of a single request
+// in the Message Batch. Results are not guaranteed to be in the same order as
+// requests. Use the `custom_id` field to match results to requests.
+//
+// Learn more about the Message Batches API in our
+// [user guide](/en/docs/build-with-claude/batch-processing)
+func (r *MessageBatchService) ResultsStreaming(ctx context.Context, messageBatchID string, opts ...option.RequestOption) (stream *jsonl.Stream[MessageBatchIndividualResponse]) {
+	var (
+		raw *http.Response
+		err error
+	)
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithHeader("Accept", "application/x-jsonl")}, opts...)
+	if messageBatchID == "" {
+		err = errors.New("missing required message_batch_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/messages/batches/%s/results", messageBatchID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodGet, path, nil, &raw, opts...)
+	return jsonl.NewStream[MessageBatchIndividualResponse](raw, err)
+}
+
+type DeletedMessageBatch struct {
+	// ID of the Message Batch.
+	ID string `json:"id,required"`
+	// Deleted object type.
+	//
+	// For Message Batches, this is always `"message_batch_deleted"`.
+	Type constant.MessageBatchDeleted `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID          respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r DeletedMessageBatch) RawJSON() string { return r.JSON.raw }
+func (r *DeletedMessageBatch) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MessageBatch struct {
+	// Unique object identifier.
+	//
+	// The format and length of IDs may change over time.
+	ID string `json:"id,required"`
+	// RFC 3339 datetime string representing the time at which the Message Batch was
+	// archived and its results became unavailable.
+	ArchivedAt time.Time `json:"archived_at,required" format:"date-time"`
+	// RFC 3339 datetime string representing the time at which cancellation was
+	// initiated for the Message Batch. Specified only if cancellation was initiated.
+	CancelInitiatedAt time.Time `json:"cancel_initiated_at,required" format:"date-time"`
+	// RFC 3339 datetime string representing the time at which the Message Batch was
+	// created.
+	CreatedAt time.Time `json:"created_at,required" format:"date-time"`
+	// RFC 3339 datetime string representing the time at which processing for the
+	// Message Batch ended. Specified only once processing ends.
+	//
+	// Processing ends when every request in a Message Batch has either succeeded,
+	// errored, been canceled, or expired.
+	EndedAt time.Time `json:"ended_at,required" format:"date-time"`
+	// RFC 3339 datetime string representing the time at which the Message Batch will
+	// expire and end processing, which is 24 hours after creation.
+	ExpiresAt time.Time `json:"expires_at,required" format:"date-time"`
+	// Processing status of the Message Batch.
+	//
+	// Any of "in_progress", "canceling", "ended".
+	ProcessingStatus MessageBatchProcessingStatus `json:"processing_status,required"`
+	// Tallies requests within the Message Batch, categorized by their status.
+	//
+	// Requests start as `processing` and move to one of the other statuses only once
+	// processing of the entire batch ends. The sum of all values always matches the
+	// total number of requests in the batch.
+	RequestCounts MessageBatchRequestCounts `json:"request_counts,required"`
+	// URL to a `.jsonl` file containing the results of the Message Batch requests.
+	// Specified only once processing ends.
+	//
+	// Results in the file are not guaranteed to be in the same order as requests. Use
+	// the `custom_id` field to match results to requests.
+	ResultsURL string `json:"results_url,required"`
+	// Object type.
+	//
+	// For Message Batches, this is always `"message_batch"`.
+	Type constant.MessageBatch `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID                respjson.Field
+		ArchivedAt        respjson.Field
+		CancelInitiatedAt respjson.Field
+		CreatedAt         respjson.Field
+		EndedAt           respjson.Field
+		ExpiresAt         respjson.Field
+		ProcessingStatus  respjson.Field
+		RequestCounts     respjson.Field
+		ResultsURL        respjson.Field
+		Type              respjson.Field
+		ExtraFields       map[string]respjson.Field
+		raw               string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageBatch) RawJSON() string { return r.JSON.raw }
+func (r *MessageBatch) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Processing status of the Message Batch.
+type MessageBatchProcessingStatus string
+
+const (
+	MessageBatchProcessingStatusInProgress MessageBatchProcessingStatus = "in_progress"
+	MessageBatchProcessingStatusCanceling  MessageBatchProcessingStatus = "canceling"
+	MessageBatchProcessingStatusEnded      MessageBatchProcessingStatus = "ended"
+)
+
+type MessageBatchCanceledResult struct {
+	Type constant.Canceled `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageBatchCanceledResult) RawJSON() string { return r.JSON.raw }
+func (r *MessageBatchCanceledResult) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MessageBatchErroredResult struct {
+	Error shared.ErrorResponse `json:"error,required"`
+	Type  constant.Errored     `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Error       respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageBatchErroredResult) RawJSON() string { return r.JSON.raw }
+func (r *MessageBatchErroredResult) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MessageBatchExpiredResult struct {
+	Type constant.Expired `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageBatchExpiredResult) RawJSON() string { return r.JSON.raw }
+func (r *MessageBatchExpiredResult) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// This is a single line in the response `.jsonl` file and does not represent the
+// response as a whole.
+type MessageBatchIndividualResponse struct {
+	// Developer-provided ID created for each request in a Message Batch. Useful for
+	// matching results to requests, as results may be given out of request order.
+	//
+	// Must be unique for each request within the Message Batch.
+	CustomID string `json:"custom_id,required"`
+	// Processing result for this request.
+	//
+	// Contains a Message output if processing was successful, an error response if
+	// processing failed, or the reason why processing was not attempted, such as
+	// cancellation or expiration.
+	Result MessageBatchResultUnion `json:"result,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		CustomID    respjson.Field
+		Result      respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageBatchIndividualResponse) RawJSON() string { return r.JSON.raw }
+func (r *MessageBatchIndividualResponse) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MessageBatchRequestCounts struct {
+	// Number of requests in the Message Batch that have been canceled.
+	//
+	// This is zero until processing of the entire Message Batch has ended.
+	Canceled int64 `json:"canceled,required"`
+	// Number of requests in the Message Batch that encountered an error.
+	//
+	// This is zero until processing of the entire Message Batch has ended.
+	Errored int64 `json:"errored,required"`
+	// Number of requests in the Message Batch that have expired.
+	//
+	// This is zero until processing of the entire Message Batch has ended.
+	Expired int64 `json:"expired,required"`
+	// Number of requests in the Message Batch that are processing.
+	Processing int64 `json:"processing,required"`
+	// Number of requests in the Message Batch that have completed successfully.
+	//
+	// This is zero until processing of the entire Message Batch has ended.
+	Succeeded int64 `json:"succeeded,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Canceled    respjson.Field
+		Errored     respjson.Field
+		Expired     respjson.Field
+		Processing  respjson.Field
+		Succeeded   respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageBatchRequestCounts) RawJSON() string { return r.JSON.raw }
+func (r *MessageBatchRequestCounts) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// MessageBatchResultUnion contains all possible properties and values from
+// [MessageBatchSucceededResult], [MessageBatchErroredResult],
+// [MessageBatchCanceledResult], [MessageBatchExpiredResult].
+//
+// Use the [MessageBatchResultUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type MessageBatchResultUnion struct {
+	// This field is from variant [MessageBatchSucceededResult].
+	Message Message `json:"message"`
+	// Any of "succeeded", "errored", "canceled", "expired".
+	Type string `json:"type"`
+	// This field is from variant [MessageBatchErroredResult].
+	Error shared.ErrorResponse `json:"error"`
+	JSON  struct {
+		Message respjson.Field
+		Type    respjson.Field
+		Error   respjson.Field
+		raw     string
+	} `json:"-"`
+}
+
+// anyMessageBatchResult is implemented by each variant of
+// [MessageBatchResultUnion] to add type safety for the return type of
+// [MessageBatchResultUnion.AsAny]
+type anyMessageBatchResult interface {
+	implMessageBatchResultUnion()
+}
+
+func (MessageBatchSucceededResult) implMessageBatchResultUnion() {}
+func (MessageBatchErroredResult) implMessageBatchResultUnion()   {}
+func (MessageBatchCanceledResult) implMessageBatchResultUnion()  {}
+func (MessageBatchExpiredResult) implMessageBatchResultUnion()   {}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := MessageBatchResultUnion.AsAny().(type) {
+//	case anthropic.MessageBatchSucceededResult:
+//	case anthropic.MessageBatchErroredResult:
+//	case anthropic.MessageBatchCanceledResult:
+//	case anthropic.MessageBatchExpiredResult:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u MessageBatchResultUnion) AsAny() anyMessageBatchResult {
+	switch u.Type {
+	case "succeeded":
+		return u.AsSucceeded()
+	case "errored":
+		return u.AsErrored()
+	case "canceled":
+		return u.AsCanceled()
+	case "expired":
+		return u.AsExpired()
+	}
+	return nil
+}
+
+func (u MessageBatchResultUnion) AsSucceeded() (v MessageBatchSucceededResult) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u MessageBatchResultUnion) AsErrored() (v MessageBatchErroredResult) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u MessageBatchResultUnion) AsCanceled() (v MessageBatchCanceledResult) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u MessageBatchResultUnion) AsExpired() (v MessageBatchExpiredResult) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u MessageBatchResultUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *MessageBatchResultUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MessageBatchSucceededResult struct {
+	Message Message            `json:"message,required"`
+	Type    constant.Succeeded `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r MessageBatchSucceededResult) RawJSON() string { return r.JSON.raw }
+func (r *MessageBatchSucceededResult) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type MessageBatchNewParams struct {
+	// List of requests for prompt completion. Each is an individual request to create
+	// a Message.
+	Requests []MessageBatchNewParamsRequest `json:"requests,omitzero,required"`
+	paramObj
+}
+
+func (r MessageBatchNewParams) MarshalJSON() (data []byte, err error) {
+	type shadow MessageBatchNewParams
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *MessageBatchNewParams) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// The properties CustomID, Params are required.
+type MessageBatchNewParamsRequest struct {
+	// Developer-provided ID created for each request in a Message Batch. Useful for
+	// matching results to requests, as results may be given out of request order.
+	//
+	// Must be unique for each request within the Message Batch.
+	CustomID string `json:"custom_id,required"`
+	// Messages API creation parameters for the individual request.
+	//
+	// See the [Messages API reference](/en/api/messages) for full documentation on
+	// available parameters.
+	Params MessageBatchNewParamsRequestParams `json:"params,omitzero,required"`
+	paramObj
+}
+
+func (r MessageBatchNewParamsRequest) MarshalJSON() (data []byte, err error) {
+	type shadow MessageBatchNewParamsRequest
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *MessageBatchNewParamsRequest) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// Messages API creation parameters for the individual request.
+//
+// See the [Messages API reference](/en/api/messages) for full documentation on
+// available parameters.
+//
+// The properties MaxTokens, Messages, Model are required.
+type MessageBatchNewParamsRequestParams struct {
+	// The maximum number of tokens to generate before stopping.
+	//
+	// Note that our models may stop _before_ reaching this maximum. This parameter
+	// only specifies the absolute maximum number of tokens to generate.
+	//
+	// Different models have different maximum values for this parameter. See
+	// [models](https://docs.anthropic.com/en/docs/models-overview) for details.
+	MaxTokens int64 `json:"max_tokens,required"`
+	// Input messages.
+	//
+	// Our models are trained to operate on alternating `user` and `assistant`
+	// conversational turns. When creating a new `Message`, you specify the prior
+	// conversational turns with the `messages` parameter, and the model then generates
+	// the next `Message` in the conversation. Consecutive `user` or `assistant` turns
+	// in your request will be combined into a single turn.
+	//
+	// Each input message must be an object with a `role` and `content`. You can
+	// specify a single `user`-role message, or you can include multiple `user` and
+	// `assistant` messages.
+	//
+	// If the final message uses the `assistant` role, the response content will
+	// continue immediately from the content in that message. This can be used to
+	// constrain part of the model's response.
+	//
+	// Example with a single `user` message:
+	//
+	// ```json
+	// [{ "role": "user", "content": "Hello, Claude" }]
+	// ```
+	//
+	// Example with multiple conversational turns:
+	//
+	// ```json
+	// [
+	//
+	//	{ "role": "user", "content": "Hello there." },
+	//	{ "role": "assistant", "content": "Hi, I'm Claude. How can I help you?" },
+	//	{ "role": "user", "content": "Can you explain LLMs in plain English?" }
+	//
+	// ]
+	// ```
+	//
+	// Example with a partially-filled response from Claude:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
+	//	},
+	//	{ "role": "assistant", "content": "The best answer is (" }
+	//
+	// ]
+	// ```
+	//
+	// Each input message `content` may be either a single `string` or an array of
+	// content blocks, where each block has a specific `type`. Using a `string` for
+	// `content` is shorthand for an array of one content block of type `"text"`. The
+	// following input messages are equivalent:
+	//
+	// ```json
+	// { "role": "user", "content": "Hello, Claude" }
+	// ```
+	//
+	// ```json
+	// { "role": "user", "content": [{ "type": "text", "text": "Hello, Claude" }] }
+	// ```
+	//
+	// Starting with Claude 3 models, you can also send image content blocks:
+	//
+	// ```json
+	//
+	//	{
+	//	  "role": "user",
+	//	  "content": [
+	//	    {
+	//	      "type": "image",
+	//	      "source": {
+	//	        "type": "base64",
+	//	        "media_type": "image/jpeg",
+	//	        "data": "/9j/4AAQSkZJRg..."
+	//	      }
+	//	    },
+	//	    { "type": "text", "text": "What is in this image?" }
+	//	  ]
+	//	}
+	//
+	// ```
+	//
+	// We currently support the `base64` source type for images, and the `image/jpeg`,
+	// `image/png`, `image/gif`, and `image/webp` media types.
+	//
+	// See [examples](https://docs.anthropic.com/en/api/messages-examples#vision) for
+	// more input examples.
+	//
+	// Note that if you want to include a
+	// [system prompt](https://docs.anthropic.com/en/docs/system-prompts), you can use
+	// the top-level `system` parameter — there is no `"system"` role for input
+	// messages in the Messages API.
+	//
+	// There is a limit of 100,000 messages in a single request.
+	Messages []MessageParam `json:"messages,omitzero,required"`
+	// The model that will complete your prompt.
+	//
+	// See [models](https://docs.anthropic.com/en/docs/models-overview) for additional
+	// details and options.
+	Model Model `json:"model,omitzero,required"`
+	// Whether to incrementally stream the response using server-sent events.
+	//
+	// See [streaming](https://docs.anthropic.com/en/api/messages-streaming) for
+	// details.
+	Stream param.Opt[bool] `json:"stream,omitzero"`
+	// Amount of randomness injected into the response.
+	//
+	// Defaults to `1.0`. Ranges from `0.0` to `1.0`. Use `temperature` closer to `0.0`
+	// for analytical / multiple choice, and closer to `1.0` for creative and
+	// generative tasks.
+	//
+	// Note that even with `temperature` of `0.0`, the results will not be fully
+	// deterministic.
+	Temperature param.Opt[float64] `json:"temperature,omitzero"`
+	// Only sample from the top K options for each subsequent token.
+	//
+	// Used to remove "long tail" low probability responses.
+	// [Learn more technical details here](https://towardsdatascience.com/how-to-sample-from-language-models-682bceb97277).
+	//
+	// Recommended for advanced use cases only. You usually only need to use
+	// `temperature`.
+	TopK param.Opt[int64] `json:"top_k,omitzero"`
+	// Use nucleus sampling.
+	//
+	// In nucleus sampling, we compute the cumulative distribution over all the options
+	// for each subsequent token in decreasing probability order and cut it off once it
+	// reaches a particular probability specified by `top_p`. You should either alter
+	// `temperature` or `top_p`, but not both.
+	//
+	// Recommended for advanced use cases only. You usually only need to use
+	// `temperature`.
+	TopP param.Opt[float64] `json:"top_p,omitzero"`
+	// An object describing metadata about the request.
+	Metadata MetadataParam `json:"metadata,omitzero"`
+	// Determines whether to use priority capacity (if available) or standard capacity
+	// for this request.
+	//
+	// Anthropic offers different levels of service for your API requests. See
+	// [service-tiers](https://docs.anthropic.com/en/api/service-tiers) for details.
+	//
+	// Any of "auto", "standard_only".
+	ServiceTier string `json:"service_tier,omitzero"`
+	// Custom text sequences that will cause the model to stop generating.
+	//
+	// Our models will normally stop when they have naturally completed their turn,
+	// which will result in a response `stop_reason` of `"end_turn"`.
+	//
+	// If you want the model to stop generating when it encounters custom strings of
+	// text, you can use the `stop_sequences` parameter. If the model encounters one of
+	// the custom sequences, the response `stop_reason` value will be `"stop_sequence"`
+	// and the response `stop_sequence` value will contain the matched stop sequence.
+	StopSequences []string `json:"stop_sequences,omitzero"`
+	// System prompt.
+	//
+	// A system prompt is a way of providing context and instructions to Claude, such
+	// as specifying a particular goal or role. See our
+	// [guide to system prompts](https://docs.anthropic.com/en/docs/system-prompts).
+	System []TextBlockParam `json:"system,omitzero"`
+	// Configuration for enabling Claude's extended thinking.
+	//
+	// When enabled, responses include `thinking` content blocks showing Claude's
+	// thinking process before the final answer. Requires a minimum budget of 1,024
+	// tokens and counts towards your `max_tokens` limit.
+	//
+	// See
+	// [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking)
+	// for details.
+	Thinking ThinkingConfigParamUnion `json:"thinking,omitzero"`
+	// How the model should use the provided tools. The model can use a specific tool,
+	// any available tool, decide by itself, or not use tools at all.
+	ToolChoice ToolChoiceUnionParam `json:"tool_choice,omitzero"`
+	// Definitions of tools that the model may use.
+	//
+	// If you include `tools` in your API request, the model may return `tool_use`
+	// content blocks that represent the model's use of those tools. You can then run
+	// those tools using the tool input generated by the model and then optionally
+	// return results back to the model using `tool_result` content blocks.
+	//
+	// Each tool definition includes:
+	//
+	//   - `name`: Name of the tool.
+	//   - `description`: Optional, but strongly-recommended description of the tool.
+	//   - `input_schema`: [JSON schema](https://json-schema.org/draft/2020-12) for the
+	//     tool `input` shape that the model will produce in `tool_use` output content
+	//     blocks.
+	//
+	// For example, if you defined `tools` as:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "name": "get_stock_price",
+	//	  "description": "Get the current stock price for a given ticker symbol.",
+	//	  "input_schema": {
+	//	    "type": "object",
+	//	    "properties": {
+	//	      "ticker": {
+	//	        "type": "string",
+	//	        "description": "The stock ticker symbol, e.g. AAPL for Apple Inc."
+	//	      }
+	//	    },
+	//	    "required": ["ticker"]
+	//	  }
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// And then asked the model "What's the S&P 500 at today?", the model might produce
+	// `tool_use` content blocks in the response like this:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "type": "tool_use",
+	//	  "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
+	//	  "name": "get_stock_price",
+	//	  "input": { "ticker": "^GSPC" }
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// You might then run your `get_stock_price` tool with `{"ticker": "^GSPC"}` as an
+	// input, and return the following back to the model in a subsequent `user`
+	// message:
+	//
+	// ```json
+	// [
+	//
+	//	{
+	//	  "type": "tool_result",
+	//	  "tool_use_id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
+	//	  "content": "259.75 USD"
+	//	}
+	//
+	// ]
+	// ```
+	//
+	// Tools can be used for workflows that include running client-side tools and
+	// functions, or more generally whenever you want the model to produce a particular
+	// JSON structure of output.
+	//
+	// See our [guide](https://docs.anthropic.com/en/docs/tool-use) for more details.
+	Tools []ToolUnionParam `json:"tools,omitzero"`
+	paramObj
+}
+
+func (r MessageBatchNewParamsRequestParams) MarshalJSON() (data []byte, err error) {
+	type shadow MessageBatchNewParamsRequestParams
+	return param.MarshalObject(r, (*shadow)(&r))
+}
+func (r *MessageBatchNewParamsRequestParams) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func init() {
+	apijson.RegisterFieldValidator[MessageBatchNewParamsRequestParams](
+		"service_tier", "auto", "standard_only",
+	)
+}
+
+type MessageBatchListParams struct {
+	// ID of the object to use as a cursor for pagination. When provided, returns the
+	// page of results immediately after this object.
+	AfterID param.Opt[string] `query:"after_id,omitzero" json:"-"`
+	// ID of the object to use as a cursor for pagination. When provided, returns the
+	// page of results immediately before this object.
+	BeforeID param.Opt[string] `query:"before_id,omitzero" json:"-"`
+	// Number of items to return per page.
+	//
+	// Defaults to `20`. Ranges from `1` to `1000`.
+	Limit param.Opt[int64] `query:"limit,omitzero" json:"-"`
+	paramObj
+}
+
+// URLQuery serializes [MessageBatchListParams]'s query parameters as `url.Values`.
+func (r MessageBatchListParams) URLQuery() (v url.Values, err error) {
+	return apiquery.MarshalWithSettings(r, apiquery.QuerySettings{
+		ArrayFormat:  apiquery.ArrayQueryFormatComma,
+		NestedFormat: apiquery.NestedQueryFormatBrackets,
+	})
+}

vendor/github.com/anthropics/anthropic-sdk-go/model.go

@@ -0,0 +1,149 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package anthropic
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+	"time"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/apijson"
+	"github.com/anthropics/anthropic-sdk-go/internal/apiquery"
+	"github.com/anthropics/anthropic-sdk-go/internal/requestconfig"
+	"github.com/anthropics/anthropic-sdk-go/option"
+	"github.com/anthropics/anthropic-sdk-go/packages/pagination"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+	"github.com/anthropics/anthropic-sdk-go/shared/constant"
+)
+
+// ModelService contains methods and other services that help with interacting with
+// the anthropic API.
+//
+// Note, unlike clients, this service does not read variables from the environment
+// automatically. You should not instantiate this service directly; instead, use
+// the [NewModelService] method.
+type ModelService struct {
+	Options []option.RequestOption
+}
+
+// NewModelService generates a new service that applies the given options to each
+// request. These options are applied after the parent client's options (if there
+// is one), and before any request-specific options.
+func NewModelService(opts ...option.RequestOption) (r ModelService) {
+	r = ModelService{}
+	r.Options = opts
+	return
+}
+
+// Get a specific model.
+//
+// The Models API response can be used to determine information about a specific
+// model or resolve a model alias to a model ID.
+func (r *ModelService) Get(ctx context.Context, modelID string, query ModelGetParams, opts ...option.RequestOption) (res *ModelInfo, err error) {
+	for _, v := range query.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	if modelID == "" {
+		err = errors.New("missing required model_id parameter")
+		return
+	}
+	path := fmt.Sprintf("v1/models/%s", modelID)
+	err = requestconfig.ExecuteNewRequest(ctx, http.MethodGet, path, nil, &res, opts...)
+	return
+}
+
+// List available models.
+//
+// The Models API response can be used to determine which models are available for
+// use in the API. More recently released models are listed first.
+func (r *ModelService) List(ctx context.Context, params ModelListParams, opts ...option.RequestOption) (res *pagination.Page[ModelInfo], err error) {
+	var raw *http.Response
+	for _, v := range params.Betas {
+		opts = append(opts, option.WithHeaderAdd("anthropic-beta", fmt.Sprintf("%s", v)))
+	}
+	opts = append(r.Options[:], opts...)
+	opts = append([]option.RequestOption{option.WithResponseInto(&raw)}, opts...)
+	path := "v1/models"
+	cfg, err := requestconfig.NewRequestConfig(ctx, http.MethodGet, path, params, &res, opts...)
+	if err != nil {
+		return nil, err
+	}
+	err = cfg.Execute()
+	if err != nil {
+		return nil, err
+	}
+	res.SetPageConfig(cfg, raw)
+	return res, nil
+}
+
+// List available models.
+//
+// The Models API response can be used to determine which models are available for
+// use in the API. More recently released models are listed first.
+func (r *ModelService) ListAutoPaging(ctx context.Context, params ModelListParams, opts ...option.RequestOption) *pagination.PageAutoPager[ModelInfo] {
+	return pagination.NewPageAutoPager(r.List(ctx, params, opts...))
+}
+
+type ModelInfo struct {
+	// Unique model identifier.
+	ID string `json:"id,required"`
+	// RFC 3339 datetime string representing the time at which the model was released.
+	// May be set to an epoch value if the release date is unknown.
+	CreatedAt time.Time `json:"created_at,required" format:"date-time"`
+	// A human-readable name for the model.
+	DisplayName string `json:"display_name,required"`
+	// Object type.
+	//
+	// For Models, this is always `"model"`.
+	Type constant.Model `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		ID          respjson.Field
+		CreatedAt   respjson.Field
+		DisplayName respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r ModelInfo) RawJSON() string { return r.JSON.raw }
+func (r *ModelInfo) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type ModelGetParams struct {
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+type ModelListParams struct {
+	// ID of the object to use as a cursor for pagination. When provided, returns the
+	// page of results immediately after this object.
+	AfterID param.Opt[string] `query:"after_id,omitzero" json:"-"`
+	// ID of the object to use as a cursor for pagination. When provided, returns the
+	// page of results immediately before this object.
+	BeforeID param.Opt[string] `query:"before_id,omitzero" json:"-"`
+	// Number of items to return per page.
+	//
+	// Defaults to `20`. Ranges from `1` to `1000`.
+	Limit param.Opt[int64] `query:"limit,omitzero" json:"-"`
+	// Optional header to specify the beta version(s) you want to use.
+	Betas []AnthropicBeta `header:"anthropic-beta,omitzero" json:"-"`
+	paramObj
+}
+
+// URLQuery serializes [ModelListParams]'s query parameters as `url.Values`.
+func (r ModelListParams) URLQuery() (v url.Values, err error) {
+	return apiquery.MarshalWithSettings(r, apiquery.QuerySettings{
+		ArrayFormat:  apiquery.ArrayQueryFormatComma,
+		NestedFormat: apiquery.NestedQueryFormatBrackets,
+	})
+}

vendor/github.com/anthropics/anthropic-sdk-go/option/requestoption.go

@@ -0,0 +1,284 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package option
+
+import (
+	"bytes"
+	"fmt"
+	"io"
+	"net/http"
+	"net/url"
+	"strings"
+	"time"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/requestconfig"
+	"github.com/tidwall/sjson"
+)
+
+// RequestOption is an option for the requests made by the anthropic API Client
+// which can be supplied to clients, services, and methods. You can read more about this functional
+// options pattern in our [README].
+//
+// [README]: https://pkg.go.dev/github.com/anthropics/anthropic-sdk-go#readme-requestoptions
+type RequestOption = requestconfig.RequestOption
+
+// WithBaseURL returns a RequestOption that sets the BaseURL for the client.
+//
+// For security reasons, ensure that the base URL is trusted.
+func WithBaseURL(base string) RequestOption {
+	u, err := url.Parse(base)
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		if err != nil {
+			return fmt.Errorf("requestoption: WithBaseURL failed to parse url %s", err)
+		}
+
+		if u.Path != "" && !strings.HasSuffix(u.Path, "/") {
+			u.Path += "/"
+		}
+		r.BaseURL = u
+		return nil
+	})
+}
+
+// HTTPClient is primarily used to describe an [*http.Client], but also
+// supports custom implementations.
+//
+// For bespoke implementations, prefer using an [*http.Client] with a
+// custom transport. See [http.RoundTripper] for further information.
+type HTTPClient interface {
+	Do(*http.Request) (*http.Response, error)
+}
+
+// WithHTTPClient returns a RequestOption that changes the underlying http client used to make this
+// request, which by default is [http.DefaultClient].
+//
+// For custom use cases, it is recommended to provide an [*http.Client] with a custom
+// [http.RoundTripper] as its transport, rather than directly implementing [HTTPClient].
+func WithHTTPClient(client HTTPClient) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		if client == nil {
+			return fmt.Errorf("requestoption: custom http client cannot be nil")
+		}
+
+		if c, ok := client.(*http.Client); ok {
+			// Prefer the native client if possible.
+			r.HTTPClient = c
+			r.CustomHTTPDoer = nil
+		} else {
+			r.CustomHTTPDoer = client
+		}
+
+		return nil
+	})
+}
+
+// MiddlewareNext is a function which is called by a middleware to pass an HTTP request
+// to the next stage in the middleware chain.
+type MiddlewareNext = func(*http.Request) (*http.Response, error)
+
+// Middleware is a function which intercepts HTTP requests, processing or modifying
+// them, and then passing the request to the next middleware or handler
+// in the chain by calling the provided MiddlewareNext function.
+type Middleware = func(*http.Request, MiddlewareNext) (*http.Response, error)
+
+// WithMiddleware returns a RequestOption that applies the given middleware
+// to the requests made. Each middleware will execute in the order they were given.
+func WithMiddleware(middlewares ...Middleware) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		r.Middlewares = append(r.Middlewares, middlewares...)
+		return nil
+	})
+}
+
+// WithMaxRetries returns a RequestOption that sets the maximum number of retries that the client
+// attempts to make. When given 0, the client only makes one request. By
+// default, the client retries two times.
+//
+// WithMaxRetries panics when retries is negative.
+func WithMaxRetries(retries int) RequestOption {
+	if retries < 0 {
+		panic("option: cannot have fewer than 0 retries")
+	}
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		r.MaxRetries = retries
+		return nil
+	})
+}
+
+// WithHeader returns a RequestOption that sets the header value to the associated key. It overwrites
+// any value if there was one already present.
+func WithHeader(key, value string) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		r.Request.Header.Set(key, value)
+		return nil
+	})
+}
+
+// WithHeaderAdd returns a RequestOption that adds the header value to the associated key. It appends
+// onto any existing values.
+func WithHeaderAdd(key, value string) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		r.Request.Header.Add(key, value)
+		return nil
+	})
+}
+
+// WithHeaderDel returns a RequestOption that deletes the header value(s) associated with the given key.
+func WithHeaderDel(key string) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		r.Request.Header.Del(key)
+		return nil
+	})
+}
+
+// WithQuery returns a RequestOption that sets the query value to the associated key. It overwrites
+// any value if there was one already present.
+func WithQuery(key, value string) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		query := r.Request.URL.Query()
+		query.Set(key, value)
+		r.Request.URL.RawQuery = query.Encode()
+		return nil
+	})
+}
+
+// WithQueryAdd returns a RequestOption that adds the query value to the associated key. It appends
+// onto any existing values.
+func WithQueryAdd(key, value string) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		query := r.Request.URL.Query()
+		query.Add(key, value)
+		r.Request.URL.RawQuery = query.Encode()
+		return nil
+	})
+}
+
+// WithQueryDel returns a RequestOption that deletes the query value(s) associated with the key.
+func WithQueryDel(key string) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		query := r.Request.URL.Query()
+		query.Del(key)
+		r.Request.URL.RawQuery = query.Encode()
+		return nil
+	})
+}
+
+// WithJSONSet returns a RequestOption that sets the body's JSON value associated with the key.
+// The key accepts a string as defined by the [sjson format].
+//
+// [sjson format]: https://github.com/tidwall/sjson
+func WithJSONSet(key string, value any) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) (err error) {
+		var b []byte
+
+		if r.Body == nil {
+			b, err = sjson.SetBytes(nil, key, value)
+			if err != nil {
+				return err
+			}
+		} else if buffer, ok := r.Body.(*bytes.Buffer); ok {
+			b = buffer.Bytes()
+			b, err = sjson.SetBytes(b, key, value)
+			if err != nil {
+				return err
+			}
+		} else {
+			return fmt.Errorf("cannot use WithJSONSet on a body that is not serialized as *bytes.Buffer")
+		}
+
+		r.Body = bytes.NewBuffer(b)
+		return nil
+	})
+}
+
+// WithJSONDel returns a RequestOption that deletes the body's JSON value associated with the key.
+// The key accepts a string as defined by the [sjson format].
+//
+// [sjson format]: https://github.com/tidwall/sjson
+func WithJSONDel(key string) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) (err error) {
+		if buffer, ok := r.Body.(*bytes.Buffer); ok {
+			b := buffer.Bytes()
+			b, err = sjson.DeleteBytes(b, key)
+			if err != nil {
+				return err
+			}
+			r.Body = bytes.NewBuffer(b)
+			return nil
+		}
+
+		return fmt.Errorf("cannot use WithJSONDel on a body that is not serialized as *bytes.Buffer")
+	})
+}
+
+// WithResponseBodyInto returns a RequestOption that overwrites the deserialization target with
+// the given destination. If provided, we don't deserialize into the default struct.
+func WithResponseBodyInto(dst any) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		r.ResponseBodyInto = dst
+		return nil
+	})
+}
+
+// WithResponseInto returns a RequestOption that copies the [*http.Response] into the given address.
+func WithResponseInto(dst **http.Response) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		r.ResponseInto = dst
+		return nil
+	})
+}
+
+// WithRequestBody returns a RequestOption that provides a custom serialized body with the given
+// content type.
+//
+// body accepts an io.Reader or raw []bytes.
+func WithRequestBody(contentType string, body any) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		if reader, ok := body.(io.Reader); ok {
+			r.Body = reader
+			return r.Apply(WithHeader("Content-Type", contentType))
+		}
+
+		if b, ok := body.([]byte); ok {
+			r.Body = bytes.NewBuffer(b)
+			return r.Apply(WithHeader("Content-Type", contentType))
+		}
+
+		return fmt.Errorf("body must be a byte slice or implement io.Reader")
+	})
+}
+
+// WithRequestTimeout returns a RequestOption that sets the timeout for
+// each request attempt. This should be smaller than the timeout defined in
+// the context, which spans all retries.
+func WithRequestTimeout(dur time.Duration) RequestOption {
+	// we need this to be a PreRequestOptionFunc so that it can be applied at the endpoint level
+	// see: CalculateNonStreamingTimeout
+	return requestconfig.PreRequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		r.RequestTimeout = dur
+		return nil
+	})
+}
+
+// WithEnvironmentProduction returns a RequestOption that sets the current
+// environment to be the "production" environment. An environment specifies which base URL
+// to use by default.
+func WithEnvironmentProduction() RequestOption {
+	return requestconfig.WithDefaultBaseURL("https://api.anthropic.com/")
+}
+
+// WithAPIKey returns a RequestOption that sets the client setting "api_key".
+func WithAPIKey(value string) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		r.APIKey = value
+		return r.Apply(WithHeader("X-Api-Key", r.APIKey))
+	})
+}
+
+// WithAuthToken returns a RequestOption that sets the client setting "auth_token".
+func WithAuthToken(value string) RequestOption {
+	return requestconfig.RequestOptionFunc(func(r *requestconfig.RequestConfig) error {
+		r.AuthToken = value
+		return r.Apply(WithHeader("authorization", fmt.Sprintf("Bearer %s", r.AuthToken)))
+	})
+}
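
The `RequestOption` values in this file follow the functional-options pattern: each option is a function that mutates a request config and may report an error. A minimal stdlib-only sketch of that pattern (the `config`, `withHeader`, and `withMaxRetries` names are hypothetical stand-ins, not SDK API):

```go
package main

import "fmt"

// config stands in for the SDK's RequestConfig; only headers and
// retries are modeled in this sketch.
type config struct {
	headers    map[string]string
	maxRetries int
}

// option mirrors the functional-options shape of RequestOption.
type option func(*config) error

// withHeader overwrites the header value for a key, like WithHeader.
func withHeader(key, value string) option {
	return func(c *config) error {
		c.headers[key] = value
		return nil
	}
}

// withMaxRetries rejects negative counts, like WithMaxRetries
// (which panics instead of returning an error).
func withMaxRetries(n int) option {
	return func(c *config) error {
		if n < 0 {
			return fmt.Errorf("cannot have fewer than 0 retries")
		}
		c.maxRetries = n
		return nil
	}
}

// newConfig applies options in order, so later options win; this is
// how client-level options are layered before request-level ones.
func newConfig(opts ...option) (*config, error) {
	c := &config{headers: map[string]string{}, maxRetries: 2}
	for _, o := range opts {
		if err := o(c); err != nil {
			return nil, err
		}
	}
	return c, nil
}

func main() {
	c, err := newConfig(withHeader("X-Api-Key", "sk-test"), withMaxRetries(5))
	if err != nil {
		panic(err)
	}
	fmt.Println(c.headers["X-Api-Key"], c.maxRetries)
}
```

Because options are plain values, they compose freely: a slice of defaults can be appended with per-call overrides before application, which is exactly what `opts = append(r.Options[:], opts...)` does in the services above.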

vendor/github.com/anthropics/anthropic-sdk-go/packages/jsonl/jsonl.go

@@ -0,0 +1,57 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package jsonl
+
+import (
+	"bufio"
+	"encoding/json"
+	"io"
+	"net/http"
+)
+
+type Stream[T any] struct {
+	rc  io.ReadCloser
+	scn *bufio.Scanner
+	cur T
+	err error
+}
+
+func NewStream[T any](res *http.Response, err error) *Stream[T] {
+	if res == nil || res.Body == nil {
+		return nil
+	}
+
+	return &Stream[T]{
+		rc:  res.Body,
+		scn: bufio.NewScanner(res.Body),
+		err: err,
+	}
+}
+
+func (s *Stream[T]) Next() bool {
+	if s.err != nil {
+		return false
+	}
+
+	if !s.scn.Scan() {
+		return false
+	}
+
+	line := s.scn.Bytes()
+	var nxt T
+	s.err = json.Unmarshal(line, &nxt)
+	s.cur = nxt
+	return s.err == nil
+}
+
+func (s *Stream[T]) Current() T {
+	return s.cur
+}
+
+func (s *Stream[T]) Err() error {
+	return s.err
+}
+
+func (s *Stream[T]) Close() error {
+	return s.rc.Close()
+}

vendor/github.com/anthropics/anthropic-sdk-go/packages/pagination/pagination.go

@@ -0,0 +1,134 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package pagination
+
+import (
+	"net/http"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/apijson"
+	"github.com/anthropics/anthropic-sdk-go/internal/requestconfig"
+	"github.com/anthropics/anthropic-sdk-go/option"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+)
+
+// aliased to make [param.APIUnion] private when embedding
+type paramUnion = param.APIUnion
+
+// aliased to make [param.APIObject] private when embedding
+type paramObj = param.APIObject
+
+type Page[T any] struct {
+	Data    []T    `json:"data"`
+	HasMore bool   `json:"has_more"`
+	FirstID string `json:"first_id,nullable"`
+	LastID  string `json:"last_id,nullable"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Data        respjson.Field
+		HasMore     respjson.Field
+		FirstID     respjson.Field
+		LastID      respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+	cfg *requestconfig.RequestConfig
+	res *http.Response
+}
+
+// Returns the unmodified JSON received from the API
+func (r Page[T]) RawJSON() string { return r.JSON.raw }
+func (r *Page[T]) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+// GetNextPage returns the next page as defined by this pagination style. When
+// there is no next page, this function will return a 'nil' for the page value, but
+// will not return an error
+func (r *Page[T]) GetNextPage() (res *Page[T], err error) {
+	if r.JSON.HasMore.Valid() && r.HasMore == false {
+		return nil, nil
+	}
+	cfg := r.cfg.Clone(r.cfg.Context)
+	if r.cfg.Request.URL.Query().Has("before_id") {
+		next := r.FirstID
+		if next == "" {
+			return nil, nil
+		}
+		err = cfg.Apply(option.WithQuery("before_id", next))
+		if err != nil {
+			return nil, err
+		}
+	} else {
+		next := r.LastID
+		if next == "" {
+			return nil, nil
+		}
+		err = cfg.Apply(option.WithQuery("after_id", next))
+		if err != nil {
+			return nil, err
+		}
+	}
+	var raw *http.Response
+	cfg.ResponseInto = &raw
+	cfg.ResponseBodyInto = &res
+	err = cfg.Execute()
+	if err != nil {
+		return nil, err
+	}
+	res.SetPageConfig(cfg, raw)
+	return res, nil
+}
+
+func (r *Page[T]) SetPageConfig(cfg *requestconfig.RequestConfig, res *http.Response) {
+	if r == nil {
+		r = &Page[T]{}
+	}
+	r.cfg = cfg
+	r.res = res
+}
+
+type PageAutoPager[T any] struct {
+	page *Page[T]
+	cur  T
+	idx  int
+	run  int
+	err  error
+	paramObj
+}
+
+func NewPageAutoPager[T any](page *Page[T], err error) *PageAutoPager[T] {
+	return &PageAutoPager[T]{
+		page: page,
+		err:  err,
+	}
+}
+
+func (r *PageAutoPager[T]) Next() bool {
+	if r.page == nil || len(r.page.Data) == 0 {
+		return false
+	}
+	if r.idx >= len(r.page.Data) {
+		r.idx = 0
+		r.page, r.err = r.page.GetNextPage()
+		if r.err != nil || r.page == nil || len(r.page.Data) == 0 {
+			return false
+		}
+	}
+	r.cur = r.page.Data[r.idx]
+	r.run += 1
+	r.idx += 1
+	return true
+}
+
+func (r *PageAutoPager[T]) Current() T {
+	return r.cur
+}
+
+func (r *PageAutoPager[T]) Err() error {
+	return r.err
+}
+
+func (r *PageAutoPager[T]) Index() int {
+	return r.run
+}
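
`GetNextPage` implements cursor pagination: follow `last_id` (or `first_id` when paging backwards with `before_id`) until `has_more` is false or the cursor is empty. A self-contained sketch of the forward direction, with a fake in-memory fetcher standing in for the API call:

```go
package main

import "fmt"

// page mirrors the wire shape of Page[T]: a data slice plus cursor
// metadata.
type page struct {
	Data    []string
	HasMore bool
	LastID  string
}

// fetchPage is a stand-in for an API call: it returns up to limit
// items that come after the cursor afterID.
func fetchPage(all []string, afterID string, limit int) page {
	start := 0
	if afterID != "" {
		for i, id := range all {
			if id == afterID {
				start = i + 1
				break
			}
		}
	}
	end := start + limit
	if end > len(all) {
		end = len(all)
	}
	p := page{Data: all[start:end], HasMore: end < len(all)}
	if len(p.Data) > 0 {
		p.LastID = p.Data[len(p.Data)-1]
	}
	return p
}

// listAll walks pages the way GetNextPage does: stop when has_more is
// false or the cursor is empty, otherwise request the page after last_id.
func listAll(all []string, limit int) []string {
	var out []string
	cursor := ""
	for {
		p := fetchPage(all, cursor, limit)
		out = append(out, p.Data...)
		if !p.HasMore || p.LastID == "" {
			return out
		}
		cursor = p.LastID
	}
}

func main() {
	ids := []string{"mb_1", "mb_2", "mb_3", "mb_4", "mb_5"}
	fmt.Println(listAll(ids, 2))
}
```

`PageAutoPager` above layers the element-by-element Next/Current iteration on top of exactly this page walk, fetching the next page lazily when the index runs off the current one.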

vendor/github.com/anthropics/anthropic-sdk-go/packages/param/encoder.go

@@ -0,0 +1,101 @@
+package param
+
+import (
+	"encoding/json"
+	"fmt"
+	"reflect"
+	"time"
+
+	shimjson "github.com/anthropics/anthropic-sdk-go/internal/encoding/json"
+
+	"github.com/tidwall/sjson"
+)
+
+// EncodedAsDate is not stable and shouldn't be relied upon.
+type EncodedAsDate Opt[time.Time]
+
+type forceOmit int
+
+func (m EncodedAsDate) MarshalJSON() ([]byte, error) {
+	underlying := Opt[time.Time](m)
+	bytes := underlying.MarshalJSONWithTimeLayout("2006-01-02")
+	if len(bytes) > 0 {
+		return bytes, nil
+	}
+	return underlying.MarshalJSON()
+}
+
+// MarshalObject uses a shimmed 'encoding/json' from Go 1.24, to support the 'omitzero' tag
+//
+// Stability for the API of MarshalObject is not guaranteed.
+func MarshalObject[T ParamStruct](f T, underlying any) ([]byte, error) {
+	return MarshalWithExtras(f, underlying, f.extraFields())
+}
+
+// MarshalWithExtras is used to marshal a struct with additional properties.
+//
+// Stability for the API of MarshalWithExtras is not guaranteed.
+func MarshalWithExtras[T ParamStruct, R any](f T, underlying any, extras map[string]R) ([]byte, error) {
+	if f.null() {
+		return []byte("null"), nil
+	} else if len(extras) > 0 {
+		bytes, err := shimjson.Marshal(underlying)
+		if err != nil {
+			return nil, err
+		}
+		for k, v := range extras {
+			var a any = v
+			if a == Omit {
+				// Errors when deleting a force-omitted field are ignored.
+				if b, e := sjson.DeleteBytes(bytes, k); e == nil {
+					bytes = b
+				}
+				continue
+			}
+			bytes, err = sjson.SetBytes(bytes, k, v)
+			if err != nil {
+				return nil, err
+			}
+		}
+		return bytes, nil
+	} else if ovr, ok := f.Overrides(); ok {
+		return shimjson.Marshal(ovr)
+	} else {
+		return shimjson.Marshal(underlying)
+	}
+}
+
+// MarshalUnion uses a shimmed 'encoding/json' from Go 1.24, to support the 'omitzero' tag
+//
+// Stability for the API of MarshalUnion is not guaranteed.
+func MarshalUnion[T ParamStruct](metadata T, variants ...any) ([]byte, error) {
+	nPresent := 0
+	presentIdx := -1
+	for i, variant := range variants {
+		if !IsOmitted(variant) {
+			nPresent++
+			presentIdx = i
+		}
+	}
+	if nPresent == 0 || presentIdx == -1 {
+		if ovr, ok := metadata.Overrides(); ok {
+			return shimjson.Marshal(ovr)
+		}
+		return []byte(`null`), nil
+	} else if nPresent > 1 {
+		return nil, &json.MarshalerError{
+			Type: typeFor[T](),
+			Err:  fmt.Errorf("expected union to have only one present variant, got %d", nPresent),
+		}
+	}
+	return shimjson.Marshal(variants[presentIdx])
+}
+
+// typeFor is shimmed from Go 1.23 "reflect" package
+func typeFor[T any]() reflect.Type {
+	var v T
+	if t := reflect.TypeOf(v); t != nil {
+		return t // optimize for T being a non-interface kind
+	}
+	return reflect.TypeOf((*T)(nil)).Elem() // only for an interface kind
+}

vendor/github.com/anthropics/anthropic-sdk-go/packages/param/option.go

@@ -0,0 +1,121 @@
+package param
+
+import (
+	"encoding/json"
+	"fmt"
+	shimjson "github.com/anthropics/anthropic-sdk-go/internal/encoding/json"
+	"time"
+)
+
+func NewOpt[T comparable](v T) Opt[T] {
+	return Opt[T]{Value: v, status: included}
+}
+
+// Null creates optional field with the JSON value "null".
+//
+// To set a struct to null, use [NullStruct].
+func Null[T comparable]() Opt[T] { return Opt[T]{status: null} }
+
+type status int8
+
+const (
+	omitted status = iota
+	null
+	included
+)
+
+// Opt represents an optional parameter of type T. Use the [Opt.Valid] method
+// to confirm that a value is present.
+type Opt[T comparable] struct {
+	Value T
+	// indicates whether the field should be omitted, null, or valid
+	status status
+	opt
+}
+
+// Valid returns true if the value is not "null" or omitted.
+//
+// To check if explicitly null, use [Opt.Null].
+func (o Opt[T]) Valid() bool {
+	var empty Opt[T]
+	return o.status == included || o != empty && o.status != null
+}
+
+func (o Opt[T]) Or(v T) T {
+	if o.Valid() {
+		return o.Value
+	}
+	return v
+}
+
+func (o Opt[T]) String() string {
+	if o.null() {
+		return "null"
+	}
+	if s, ok := any(o.Value).(fmt.Stringer); ok {
+		return s.String()
+	}
+	return fmt.Sprintf("%v", o.Value)
+}
+
+func (o Opt[T]) MarshalJSON() ([]byte, error) {
+	if !o.Valid() {
+		return []byte("null"), nil
+	}
+	return json.Marshal(o.Value)
+}
+
+func (o *Opt[T]) UnmarshalJSON(data []byte) error {
+	if string(data) == "null" {
+		o.status = null
+		return nil
+	}
+
+	var value *T
+	if err := json.Unmarshal(data, &value); err != nil {
+		return err
+	}
+
+	if value == nil {
+		o.status = omitted
+		return nil
+	}
+
+	o.status = included
+	o.Value = *value
+	return nil
+}
+
+// MarshalJSONWithTimeLayout is necessary to bypass the internal caching performed
+// by [json.Marshal]. Prefer to use [Opt.MarshalJSON] instead.
+//
+// This function returns nil unless the generic type parameter of [Opt] is [time.Time].
+func (o Opt[T]) MarshalJSONWithTimeLayout(format string) []byte {
+	t, ok := any(o.Value).(time.Time)
+	if !ok || o.null() {
+		return nil
+	}
+
+	b, err := json.Marshal(t.Format(shimjson.TimeLayout(format)))
+	if err != nil {
+		return nil
+	}
+	return b
+}
+
+func (o Opt[T]) null() bool   { return o.status == null }
+func (o Opt[T]) isZero() bool { return o == Opt[T]{} }
+
+// opt helps limit the [Optional] interface to only types in this package
+type opt struct{}
+
+func (opt) implOpt() {}
+
+// This interface is useful for internal purposes.
+type Optional interface {
+	Valid() bool
+	null() bool
+
+	isZero() bool
+	implOpt()
+}

vendor/github.com/anthropics/anthropic-sdk-go/packages/param/param.go

@@ -0,0 +1,158 @@
+package param
+
+import (
+	"encoding/json"
+	"reflect"
+)
+
+// NullStruct is used to set a struct to the JSON value null.
+// Check for null structs with [IsNull].
+//
+// Only the first type parameter should be provided,
+// the type PtrT will be inferred.
+//
+//	json.Marshal(param.NullStruct[MyStruct]()) -> 'null'
+//
+// To send null to an [Opt] field use [Null].
+func NullStruct[T ParamStruct, PtrT InferPtr[T]]() T {
+	var t T
+	pt := PtrT(&t)
+	pt.setMetadata(nil)
+	return *pt
+}
+
+// Override replaces the value of a struct with any type.
+//
+// Only the first type parameter should be provided;
+// the type PtrT will be inferred.
+//
+// It's often useful for providing raw JSON
+//
+//	param.Override[MyStruct](json.RawMessage(`{"foo": "bar"}`))
+//
+// The public fields of the returned struct T will be unset.
+//
+// To override a specific field in a struct, use its [SetExtraFields] method.
+func Override[T ParamStruct, PtrT InferPtr[T]](v any) T {
+	var t T
+	pt := PtrT(&t)
+	pt.setMetadata(v)
+	return *pt
+}
+
+// IsOmitted returns true if v is the zero value of its type.
+//
+// If IsOmitted is true, and the field uses a `json:"...,omitzero"` tag,
+// the field will be omitted from the request.
+//
+// If v is set explicitly to the JSON value "null", IsOmitted returns false.
+func IsOmitted(v any) bool {
+	if v == nil {
+		return false
+	}
+	if o, ok := v.(Optional); ok {
+		return o.isZero()
+	}
+	return reflect.ValueOf(v).IsZero()
+}
+
+// IsNull returns true if v was set to the JSON value null.
+//
+// To set a param to null use [NullStruct] or [Null]
+// depending on the type of v.
+//
+// IsNull returns false if the value is omitted.
+func IsNull(v ParamNullable) bool {
+	return v.null()
+}
+
+// ParamNullable encapsulates all structs and all [Opt] types
+// used in parameters.
+type ParamNullable interface {
+	null() bool
+}
+
+// ParamStruct represents the set of all structs that are
+// used in API parameters, by convention these usually end in
+// "Params" or "Param".
+type ParamStruct interface {
+	Overrides() (any, bool)
+	null() bool
+	extraFields() map[string]any
+}
+
+// InferPtr is an implementation detail and should never be provided explicitly.
+type InferPtr[T ParamStruct] interface {
+	setMetadata(any)
+	*T
+}
+
+// APIObject should be embedded in API object fields, preferably using a type alias to keep it private.
+type APIObject struct{ metadata }
+
+// APIUnion should be embedded in all API union fields, preferably using a type alias to keep it private.
+type APIUnion struct{ metadata }
+
+// Overrides returns the value of the struct when it is created with
+// [Override], the second argument helps differentiate an explicit null.
+func (m metadata) Overrides() (any, bool) {
+	if _, ok := m.any.(metadataExtraFields); ok {
+		return nil, false
+	}
+	return m.any, m.any != nil
+}
+
+// ExtraFields returns the extra fields added to the JSON object.
+func (m metadata) ExtraFields() map[string]any {
+	if extras, ok := m.any.(metadataExtraFields); ok {
+		return extras
+	}
+	return nil
+}
+
+// Omit can be used with [metadata.SetExtraFields] to ensure that a
+// required field is omitted. This is useful as an escape hatch for
+// when a required field is unwanted for some unexpected reason.
+const Omit forceOmit = -1
+
+// SetExtraFields adds extra fields to the JSON object.
+//
+// SetExtraFields will override any existing fields with the same key.
+// For security reasons, ensure this is only used with trusted input data.
+//
+// To intentionally omit a required field, use [Omit].
+//
+//	foo.SetExtraFields(map[string]any{"bar": Omit})
+//
+// If the struct already contains the field ExtraFields, then this
+// method will have no effect.
+func (m *metadata) SetExtraFields(extraFields map[string]any) {
+	m.any = metadataExtraFields(extraFields)
+}
+
+// extraFields aliases [metadata.ExtraFields] to avoid name collisions.
+func (m metadata) extraFields() map[string]any { return m.ExtraFields() }
+
+func (m metadata) null() bool {
+	if _, ok := m.any.(metadataNull); ok {
+		return true
+	}
+
+	if msg, ok := m.any.(json.RawMessage); ok {
+		return string(msg) == "null"
+	}
+
+	return false
+}
+
+type metadata struct{ any }
+type metadataNull struct{}
+type metadataExtraFields map[string]any
+
+func (m *metadata) setMetadata(override any) {
+	if override == nil {
+		m.any = metadataNull{}
+		return
+	}
+	m.any = override
+}
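
The metadata machinery above can be condensed into a small standalone sketch: an embedded struct carries either an override value or a null marker, and generated `...Params` structs embed it. `FooParams` below is a hypothetical stand-in for a generated struct, not a real SDK type:

```go
// Trimmed-down sketch of the param metadata pattern: setMetadata(nil)
// marks a struct as JSON null; any other value acts as an override.
package main

import (
	"encoding/json"
	"fmt"
)

type metadataNull struct{}

type metadata struct{ any }

func (m *metadata) setMetadata(v any) {
	if v == nil {
		m.any = metadataNull{}
		return
	}
	m.any = v
}

func (m metadata) null() bool {
	if _, ok := m.any.(metadataNull); ok {
		return true
	}
	if msg, ok := m.any.(json.RawMessage); ok {
		return string(msg) == "null"
	}
	return false
}

// FooParams stands in for a generated "...Params" struct.
type FooParams struct {
	Name string `json:"name,omitzero"`
	metadata
}

func main() {
	var p FooParams
	p.setMetadata(nil) // roughly what NullStruct[FooParams]() does internally
	fmt.Println(p.null()) // true

	var q FooParams
	q.setMetadata(json.RawMessage(`{"name":"bar"}`)) // roughly what Override does
	fmt.Println(q.null()) // false
}
```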

vendor/github.com/anthropics/anthropic-sdk-go/packages/respjson/respjson.go 🔗

@@ -0,0 +1,88 @@
+package respjson
+
+// A Field provides metadata to indicate the presence of a value.
+//
+// Use [Field.Valid] to check if an optional value was null or omitted.
+//
+// A Field will always occur in the following structure, where it
+// mirrors the original field in its parent struct:
+//
+//	type ExampleObject struct {
+//		Foo bool	`json:"foo"`
+//		Bar int		`json:"bar"`
+//		// ...
+//
+//		// JSON provides metadata about the object.
+//		JSON struct {
+//			Foo Field
+//			Bar Field
+//			// ...
+//		} `json:"-"`
+//	}
+//
+// To differentiate a "nullish" value from the zero value,
+// use the [Field.Valid] method.
+//
+//	if !example.JSON.Foo.Valid() {
+//		println("Foo is null or omitted")
+//	}
+//
+//	if example.Foo {
+//		println("Foo is true")
+//	} else {
+//		println("Foo is false")
+//	}
+//
+// To differentiate if a field was omitted or the JSON value "null",
+// use the [Field.Raw] method.
+//
+//	if example.JSON.Foo.Raw() == "null" {
+//		println("Foo is null")
+//	}
+//
+//	if example.JSON.Foo.Raw() == "" {
+//		println("Foo was omitted")
+//	}
+//
+// Otherwise, if the field was invalid and couldn't be unmarshalled successfully,
+// [Field.Valid] will be false and [Field.Raw] will not be empty.
+type Field struct {
+	status
+	raw string
+}
+
+const (
+	omitted status = iota
+	null
+	invalid
+	valid
+)
+
+type status int8
+
+// Valid returns true if the parent field was set.
+// Valid returns false if the value doesn't exist, is JSON null, or
+// is an unexpected type.
+func (j Field) Valid() bool { return j.status > invalid }
+
+const Null string = "null"
+const Omitted string = ""
+
+// Raw returns the raw JSON value of the field.
+func (j Field) Raw() string {
+	if j.status == omitted {
+		return ""
+	}
+	return j.raw
+}
+
+func NewField(raw string) Field {
+	if raw == "null" {
+		return Field{status: null, raw: Null}
+	}
+	return Field{status: valid, raw: raw}
+}
+
+func NewInvalidField(raw string) Field {
+	return Field{status: invalid, raw: raw}
+}
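
A short, self-contained sketch of how callers consume this `Field` metadata; `describe` is a hypothetical helper, and the type definitions simply restate the ones above:

```go
// Demonstrates distinguishing omitted vs explicit-null vs present fields.
package main

import "fmt"

type status int8

const (
	omitted status = iota
	null
	invalid
	valid
)

type Field struct {
	status
	raw string
}

// Valid reports whether the field was set to a usable value.
func (f Field) Valid() bool { return f.status > invalid }

// Raw returns the raw JSON, or "" when the field was omitted.
func (f Field) Raw() string {
	if f.status == omitted {
		return ""
	}
	return f.raw
}

// NewField mirrors the vendored constructor.
func NewField(raw string) Field {
	if raw == "null" {
		return Field{status: null, raw: "null"}
	}
	return Field{status: valid, raw: raw}
}

// describe is a hypothetical helper showing the three-way check.
func describe(f Field) string {
	switch {
	case f.Valid():
		return "present: " + f.Raw()
	case f.Raw() == "null":
		return "explicit null"
	default:
		return "omitted"
	}
}

func main() {
	fmt.Println(describe(NewField(`"hello"`))) // present: "hello"
	fmt.Println(describe(NewField("null")))    // explicit null
	fmt.Println(describe(Field{}))             // omitted
}
```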

vendor/github.com/anthropics/anthropic-sdk-go/packages/ssestream/ssestream.go 🔗

@@ -0,0 +1,198 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package ssestream
+
+import (
+	"bufio"
+	"bytes"
+	"encoding/json"
+	"fmt"
+	"io"
+	"net/http"
+	"strings"
+)
+
+type Decoder interface {
+	Event() Event
+	Next() bool
+	Close() error
+	Err() error
+}
+
+func NewDecoder(res *http.Response) Decoder {
+	if res == nil || res.Body == nil {
+		return nil
+	}
+
+	var decoder Decoder
+	contentType := res.Header.Get("content-type")
+	if t, ok := decoderTypes[contentType]; ok {
+		decoder = t(res.Body)
+	} else {
+		scn := bufio.NewScanner(res.Body)
+		scn.Buffer(nil, bufio.MaxScanTokenSize<<4)
+		decoder = &eventStreamDecoder{rc: res.Body, scn: scn}
+	}
+	return decoder
+}
+
+var decoderTypes = map[string](func(io.ReadCloser) Decoder){}
+
+func RegisterDecoder(contentType string, decoder func(io.ReadCloser) Decoder) {
+	decoderTypes[strings.ToLower(contentType)] = decoder
+}
+
+type Event struct {
+	Type string
+	Data []byte
+}
+
+// eventStreamDecoder is a base implementation of [Decoder] for text/event-stream.
+type eventStreamDecoder struct {
+	evt Event
+	rc  io.ReadCloser
+	scn *bufio.Scanner
+	err error
+}
+
+func (s *eventStreamDecoder) Next() bool {
+	if s.err != nil {
+		return false
+	}
+
+	event := ""
+	data := bytes.NewBuffer(nil)
+
+	for s.scn.Scan() {
+		txt := s.scn.Bytes()
+
+		// Dispatch event on an empty line
+		if len(txt) == 0 {
+			s.evt = Event{
+				Type: event,
+				Data: data.Bytes(),
+			}
+			return true
+		}
+
+		// Split a string like "event: bar" into name="event" and value=" bar".
+		name, value, _ := bytes.Cut(txt, []byte(":"))
+
+		// Consume an optional space after the colon if it exists.
+		if len(value) > 0 && value[0] == ' ' {
+			value = value[1:]
+		}
+
+		switch string(name) {
+		case "":
+			// A line in the form ": something" is a comment and should be ignored.
+			continue
+		case "event":
+			event = string(value)
+		case "data":
+			_, s.err = data.Write(value)
+			if s.err != nil {
+				break
+			}
+			_, s.err = data.WriteRune('\n')
+			if s.err != nil {
+				break
+			}
+		}
+	}
+
+	if s.scn.Err() != nil {
+		s.err = s.scn.Err()
+	}
+
+	return false
+}
+
+func (s *eventStreamDecoder) Event() Event {
+	return s.evt
+}
+
+func (s *eventStreamDecoder) Close() error {
+	return s.rc.Close()
+}
+
+func (s *eventStreamDecoder) Err() error {
+	return s.err
+}
+
+type Stream[T any] struct {
+	decoder Decoder
+	cur     T
+	err     error
+}
+
+func NewStream[T any](decoder Decoder, err error) *Stream[T] {
+	return &Stream[T]{
+		decoder: decoder,
+		err:     err,
+	}
+}
+
+// Next returns false if the stream has ended or an error occurred.
+// Call Stream.Current() to get the current value.
+// Call Stream.Err() to get the error.
+//
+//	for stream.Next() {
+//		data := stream.Current()
+//	}
+//
+//	if stream.Err() != nil {
+//		...
+//	}
+func (s *Stream[T]) Next() bool {
+	if s.err != nil {
+		return false
+	}
+
+	for s.decoder.Next() {
+		switch s.decoder.Event().Type {
+		case "completion",
+			"message_start", "message_delta", "message_stop",
+			"content_block_start", "content_block_delta", "content_block_stop":
+			var nxt T
+			s.err = json.Unmarshal(s.decoder.Event().Data, &nxt)
+			if s.err != nil {
+				return false
+			}
+			s.cur = nxt
+			return true
+		case "ping":
+			continue
+		case "error":
+			s.err = fmt.Errorf("received error while streaming: %s", string(s.decoder.Event().Data))
+			return false
+		}
+	}
+
+	// decoder.Next() may be false because of an error
+	s.err = s.decoder.Err()
+
+	return false
+}
+
+func (s *Stream[T]) Current() T {
+	return s.cur
+}
+
+func (s *Stream[T]) Err() error {
+	return s.err
+}
+
+func (s *Stream[T]) Close() error {
+	if s.decoder == nil {
+		// already closed
+		return nil
+	}
+	return s.decoder.Close()
+}
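
The parsing loop in `eventStreamDecoder.Next` can be summarized in a minimal standalone sketch: scan lines, accumulate `event:` and `data:` fields, and dispatch on a blank line. `parseSSE` is a hypothetical helper written for illustration, not an SDK function:

```go
// Minimal text/event-stream parser mirroring the decoder's line handling.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

type Event struct {
	Type string
	Data string
}

func parseSSE(stream string) []Event {
	var events []Event
	var typ string
	var data strings.Builder

	scn := bufio.NewScanner(strings.NewReader(stream))
	for scn.Scan() {
		line := scn.Text()
		if line == "" { // blank line: dispatch the accumulated event
			events = append(events, Event{Type: typ, Data: data.String()})
			typ = ""
			data.Reset()
			continue
		}
		name, value, _ := strings.Cut(line, ":")
		value = strings.TrimPrefix(value, " ") // optional space after colon
		switch name {
		case "": // ": something" lines are comments and are ignored
		case "event":
			typ = value
		case "data":
			data.WriteString(value)
			data.WriteString("\n")
		}
	}
	return events
}

func main() {
	stream := "event: ping\ndata: {}\n\nevent: message_delta\ndata: {\"x\":1}\n\n"
	for _, e := range parseSSE(stream) {
		fmt.Printf("%s -> %s", e.Type, e.Data)
	}
}
```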

vendor/github.com/anthropics/anthropic-sdk-go/release-please-config.json 🔗

@@ -0,0 +1,67 @@
+{
+  "packages": {
+    ".": {}
+  },
+  "$schema": "https://raw.githubusercontent.com/stainless-api/release-please/main/schemas/config.json",
+  "include-v-in-tag": true,
+  "include-component-in-tag": false,
+  "versioning": "prerelease",
+  "prerelease": true,
+  "bump-minor-pre-major": true,
+  "bump-patch-for-minor-pre-major": false,
+  "pull-request-header": "Automated Release PR",
+  "pull-request-title-pattern": "release: ${version}",
+  "changelog-sections": [
+    {
+      "type": "feat",
+      "section": "Features"
+    },
+    {
+      "type": "fix",
+      "section": "Bug Fixes"
+    },
+    {
+      "type": "perf",
+      "section": "Performance Improvements"
+    },
+    {
+      "type": "revert",
+      "section": "Reverts"
+    },
+    {
+      "type": "chore",
+      "section": "Chores"
+    },
+    {
+      "type": "docs",
+      "section": "Documentation"
+    },
+    {
+      "type": "style",
+      "section": "Styles"
+    },
+    {
+      "type": "refactor",
+      "section": "Refactors"
+    },
+    {
+      "type": "test",
+      "section": "Tests",
+      "hidden": true
+    },
+    {
+      "type": "build",
+      "section": "Build System"
+    },
+    {
+      "type": "ci",
+      "section": "Continuous Integration",
+      "hidden": true
+    }
+  ],
+  "release-type": "go",
+  "extra-files": [
+    "internal/version.go",
+    "README.md"
+  ]
+}

vendor/github.com/anthropics/anthropic-sdk-go/shared/constant/constants.go 🔗

@@ -0,0 +1,304 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package constant
+
+import (
+	"encoding/json"
+)
+
+// ModelNonStreamingTokens defines the maximum tokens for models that should limit
+// non-streaming requests.
+var ModelNonStreamingTokens = map[string]int{
+	"claude-opus-4-20250514":                8192,
+	"claude-4-opus-20250514":                8192,
+	"claude-opus-4-0":                       8192,
+	"anthropic.claude-opus-4-20250514-v1:0": 8192,
+	"claude-opus-4@20250514":                8192,
+}
+
+type Constant[T any] interface {
+	Default() T
+}
+
+// ValueOf gives the default value of a constant from its type. It's helpful
+// when constructing constants as variants in a one-of. Note that empty structs
+// are marshalled by default.
+//
+// Usage:
+//
+//	constant.ValueOf[constant.Foo]()
+func ValueOf[T Constant[T]]() T {
+	var t T
+	return t.Default()
+}
+
+type Any string                          // Always "any"
+type APIError string                     // Always "api_error"
+type ApplicationPDF string               // Always "application/pdf"
+type Approximate string                  // Always "approximate"
+type Assistant string                    // Always "assistant"
+type AuthenticationError string          // Always "authentication_error"
+type Auto string                         // Always "auto"
+type Base64 string                       // Always "base64"
+type Bash string                         // Always "bash"
+type Bash20241022 string                 // Always "bash_20241022"
+type Bash20250124 string                 // Always "bash_20250124"
+type BillingError string                 // Always "billing_error"
+type Canceled string                     // Always "canceled"
+type CharLocation string                 // Always "char_location"
+type CitationsDelta string               // Always "citations_delta"
+type CodeExecution string                // Always "code_execution"
+type CodeExecution20250522 string        // Always "code_execution_20250522"
+type CodeExecutionOutput string          // Always "code_execution_output"
+type CodeExecutionResult string          // Always "code_execution_result"
+type CodeExecutionToolResult string      // Always "code_execution_tool_result"
+type CodeExecutionToolResultError string // Always "code_execution_tool_result_error"
+type Completion string                   // Always "completion"
+type Computer string                     // Always "computer"
+type Computer20241022 string             // Always "computer_20241022"
+type Computer20250124 string             // Always "computer_20250124"
+type ContainerUpload string              // Always "container_upload"
+type Content string                      // Always "content"
+type ContentBlockDelta string            // Always "content_block_delta"
+type ContentBlockLocation string         // Always "content_block_location"
+type ContentBlockStart string            // Always "content_block_start"
+type ContentBlockStop string             // Always "content_block_stop"
+type Disabled string                     // Always "disabled"
+type Document string                     // Always "document"
+type Enabled string                      // Always "enabled"
+type Ephemeral string                    // Always "ephemeral"
+type Error string                        // Always "error"
+type Errored string                      // Always "errored"
+type Expired string                      // Always "expired"
+type File string                         // Always "file"
+type Image string                        // Always "image"
+type InputJSONDelta string               // Always "input_json_delta"
+type InvalidRequestError string          // Always "invalid_request_error"
+type MCPToolResult string                // Always "mcp_tool_result"
+type MCPToolUse string                   // Always "mcp_tool_use"
+type Message string                      // Always "message"
+type MessageBatch string                 // Always "message_batch"
+type MessageBatchDeleted string          // Always "message_batch_deleted"
+type MessageDelta string                 // Always "message_delta"
+type MessageStart string                 // Always "message_start"
+type MessageStop string                  // Always "message_stop"
+type Model string                        // Always "model"
+type None string                         // Always "none"
+type NotFoundError string                // Always "not_found_error"
+type Object string                       // Always "object"
+type OverloadedError string              // Always "overloaded_error"
+type PageLocation string                 // Always "page_location"
+type PermissionError string              // Always "permission_error"
+type RateLimitError string               // Always "rate_limit_error"
+type RedactedThinking string             // Always "redacted_thinking"
+type ServerToolUse string                // Always "server_tool_use"
+type SignatureDelta string               // Always "signature_delta"
+type StrReplaceBasedEditTool string      // Always "str_replace_based_edit_tool"
+type StrReplaceEditor string             // Always "str_replace_editor"
+type Succeeded string                    // Always "succeeded"
+type Text string                         // Always "text"
+type TextDelta string                    // Always "text_delta"
+type TextEditor20241022 string           // Always "text_editor_20241022"
+type TextEditor20250124 string           // Always "text_editor_20250124"
+type TextEditor20250429 string           // Always "text_editor_20250429"
+type TextPlain string                    // Always "text/plain"
+type Thinking string                     // Always "thinking"
+type ThinkingDelta string                // Always "thinking_delta"
+type TimeoutError string                 // Always "timeout_error"
+type Tool string                         // Always "tool"
+type ToolResult string                   // Always "tool_result"
+type ToolUse string                      // Always "tool_use"
+type URL string                          // Always "url"
+type WebSearch string                    // Always "web_search"
+type WebSearch20250305 string            // Always "web_search_20250305"
+type WebSearchResult string              // Always "web_search_result"
+type WebSearchResultLocation string      // Always "web_search_result_location"
+type WebSearchToolResult string          // Always "web_search_tool_result"
+type WebSearchToolResultError string     // Always "web_search_tool_result_error"
+
+func (c Any) Default() Any                                     { return "any" }
+func (c APIError) Default() APIError                           { return "api_error" }
+func (c ApplicationPDF) Default() ApplicationPDF               { return "application/pdf" }
+func (c Approximate) Default() Approximate                     { return "approximate" }
+func (c Assistant) Default() Assistant                         { return "assistant" }
+func (c AuthenticationError) Default() AuthenticationError     { return "authentication_error" }
+func (c Auto) Default() Auto                                   { return "auto" }
+func (c Base64) Default() Base64                               { return "base64" }
+func (c Bash) Default() Bash                                   { return "bash" }
+func (c Bash20241022) Default() Bash20241022                   { return "bash_20241022" }
+func (c Bash20250124) Default() Bash20250124                   { return "bash_20250124" }
+func (c BillingError) Default() BillingError                   { return "billing_error" }
+func (c Canceled) Default() Canceled                           { return "canceled" }
+func (c CharLocation) Default() CharLocation                   { return "char_location" }
+func (c CitationsDelta) Default() CitationsDelta               { return "citations_delta" }
+func (c CodeExecution) Default() CodeExecution                 { return "code_execution" }
+func (c CodeExecution20250522) Default() CodeExecution20250522 { return "code_execution_20250522" }
+func (c CodeExecutionOutput) Default() CodeExecutionOutput     { return "code_execution_output" }
+func (c CodeExecutionResult) Default() CodeExecutionResult     { return "code_execution_result" }
+func (c CodeExecutionToolResult) Default() CodeExecutionToolResult {
+	return "code_execution_tool_result"
+}
+func (c CodeExecutionToolResultError) Default() CodeExecutionToolResultError {
+	return "code_execution_tool_result_error"
+}
+func (c Completion) Default() Completion                     { return "completion" }
+func (c Computer) Default() Computer                         { return "computer" }
+func (c Computer20241022) Default() Computer20241022         { return "computer_20241022" }
+func (c Computer20250124) Default() Computer20250124         { return "computer_20250124" }
+func (c ContainerUpload) Default() ContainerUpload           { return "container_upload" }
+func (c Content) Default() Content                           { return "content" }
+func (c ContentBlockDelta) Default() ContentBlockDelta       { return "content_block_delta" }
+func (c ContentBlockLocation) Default() ContentBlockLocation { return "content_block_location" }
+func (c ContentBlockStart) Default() ContentBlockStart       { return "content_block_start" }
+func (c ContentBlockStop) Default() ContentBlockStop         { return "content_block_stop" }
+func (c Disabled) Default() Disabled                         { return "disabled" }
+func (c Document) Default() Document                         { return "document" }
+func (c Enabled) Default() Enabled                           { return "enabled" }
+func (c Ephemeral) Default() Ephemeral                       { return "ephemeral" }
+func (c Error) Default() Error                               { return "error" }
+func (c Errored) Default() Errored                           { return "errored" }
+func (c Expired) Default() Expired                           { return "expired" }
+func (c File) Default() File                                 { return "file" }
+func (c Image) Default() Image                               { return "image" }
+func (c InputJSONDelta) Default() InputJSONDelta             { return "input_json_delta" }
+func (c InvalidRequestError) Default() InvalidRequestError   { return "invalid_request_error" }
+func (c MCPToolResult) Default() MCPToolResult               { return "mcp_tool_result" }
+func (c MCPToolUse) Default() MCPToolUse                     { return "mcp_tool_use" }
+func (c Message) Default() Message                           { return "message" }
+func (c MessageBatch) Default() MessageBatch                 { return "message_batch" }
+func (c MessageBatchDeleted) Default() MessageBatchDeleted   { return "message_batch_deleted" }
+func (c MessageDelta) Default() MessageDelta                 { return "message_delta" }
+func (c MessageStart) Default() MessageStart                 { return "message_start" }
+func (c MessageStop) Default() MessageStop                   { return "message_stop" }
+func (c Model) Default() Model                               { return "model" }
+func (c None) Default() None                                 { return "none" }
+func (c NotFoundError) Default() NotFoundError               { return "not_found_error" }
+func (c Object) Default() Object                             { return "object" }
+func (c OverloadedError) Default() OverloadedError           { return "overloaded_error" }
+func (c PageLocation) Default() PageLocation                 { return "page_location" }
+func (c PermissionError) Default() PermissionError           { return "permission_error" }
+func (c RateLimitError) Default() RateLimitError             { return "rate_limit_error" }
+func (c RedactedThinking) Default() RedactedThinking         { return "redacted_thinking" }
+func (c ServerToolUse) Default() ServerToolUse               { return "server_tool_use" }
+func (c SignatureDelta) Default() SignatureDelta             { return "signature_delta" }
+func (c StrReplaceBasedEditTool) Default() StrReplaceBasedEditTool {
+	return "str_replace_based_edit_tool"
+}
+func (c StrReplaceEditor) Default() StrReplaceEditor     { return "str_replace_editor" }
+func (c Succeeded) Default() Succeeded                   { return "succeeded" }
+func (c Text) Default() Text                             { return "text" }
+func (c TextDelta) Default() TextDelta                   { return "text_delta" }
+func (c TextEditor20241022) Default() TextEditor20241022 { return "text_editor_20241022" }
+func (c TextEditor20250124) Default() TextEditor20250124 { return "text_editor_20250124" }
+func (c TextEditor20250429) Default() TextEditor20250429 { return "text_editor_20250429" }
+func (c TextPlain) Default() TextPlain                   { return "text/plain" }
+func (c Thinking) Default() Thinking                     { return "thinking" }
+func (c ThinkingDelta) Default() ThinkingDelta           { return "thinking_delta" }
+func (c TimeoutError) Default() TimeoutError             { return "timeout_error" }
+func (c Tool) Default() Tool                             { return "tool" }
+func (c ToolResult) Default() ToolResult                 { return "tool_result" }
+func (c ToolUse) Default() ToolUse                       { return "tool_use" }
+func (c URL) Default() URL                               { return "url" }
+func (c WebSearch) Default() WebSearch                   { return "web_search" }
+func (c WebSearch20250305) Default() WebSearch20250305   { return "web_search_20250305" }
+func (c WebSearchResult) Default() WebSearchResult       { return "web_search_result" }
+func (c WebSearchResultLocation) Default() WebSearchResultLocation {
+	return "web_search_result_location"
+}
+func (c WebSearchToolResult) Default() WebSearchToolResult { return "web_search_tool_result" }
+func (c WebSearchToolResultError) Default() WebSearchToolResultError {
+	return "web_search_tool_result_error"
+}
+
+func (c Any) MarshalJSON() ([]byte, error)                          { return marshalString(c) }
+func (c APIError) MarshalJSON() ([]byte, error)                     { return marshalString(c) }
+func (c ApplicationPDF) MarshalJSON() ([]byte, error)               { return marshalString(c) }
+func (c Approximate) MarshalJSON() ([]byte, error)                  { return marshalString(c) }
+func (c Assistant) MarshalJSON() ([]byte, error)                    { return marshalString(c) }
+func (c AuthenticationError) MarshalJSON() ([]byte, error)          { return marshalString(c) }
+func (c Auto) MarshalJSON() ([]byte, error)                         { return marshalString(c) }
+func (c Base64) MarshalJSON() ([]byte, error)                       { return marshalString(c) }
+func (c Bash) MarshalJSON() ([]byte, error)                         { return marshalString(c) }
+func (c Bash20241022) MarshalJSON() ([]byte, error)                 { return marshalString(c) }
+func (c Bash20250124) MarshalJSON() ([]byte, error)                 { return marshalString(c) }
+func (c BillingError) MarshalJSON() ([]byte, error)                 { return marshalString(c) }
+func (c Canceled) MarshalJSON() ([]byte, error)                     { return marshalString(c) }
+func (c CharLocation) MarshalJSON() ([]byte, error)                 { return marshalString(c) }
+func (c CitationsDelta) MarshalJSON() ([]byte, error)               { return marshalString(c) }
+func (c CodeExecution) MarshalJSON() ([]byte, error)                { return marshalString(c) }
+func (c CodeExecution20250522) MarshalJSON() ([]byte, error)        { return marshalString(c) }
+func (c CodeExecutionOutput) MarshalJSON() ([]byte, error)          { return marshalString(c) }
+func (c CodeExecutionResult) MarshalJSON() ([]byte, error)          { return marshalString(c) }
+func (c CodeExecutionToolResult) MarshalJSON() ([]byte, error)      { return marshalString(c) }
+func (c CodeExecutionToolResultError) MarshalJSON() ([]byte, error) { return marshalString(c) }
+func (c Completion) MarshalJSON() ([]byte, error)                   { return marshalString(c) }
+func (c Computer) MarshalJSON() ([]byte, error)                     { return marshalString(c) }
+func (c Computer20241022) MarshalJSON() ([]byte, error)             { return marshalString(c) }
+func (c Computer20250124) MarshalJSON() ([]byte, error)             { return marshalString(c) }
+func (c ContainerUpload) MarshalJSON() ([]byte, error)              { return marshalString(c) }
+func (c Content) MarshalJSON() ([]byte, error)                      { return marshalString(c) }
+func (c ContentBlockDelta) MarshalJSON() ([]byte, error)            { return marshalString(c) }
+func (c ContentBlockLocation) MarshalJSON() ([]byte, error)         { return marshalString(c) }
+func (c ContentBlockStart) MarshalJSON() ([]byte, error)            { return marshalString(c) }
+func (c ContentBlockStop) MarshalJSON() ([]byte, error)             { return marshalString(c) }
+func (c Disabled) MarshalJSON() ([]byte, error)                     { return marshalString(c) }
+func (c Document) MarshalJSON() ([]byte, error)                     { return marshalString(c) }
+func (c Enabled) MarshalJSON() ([]byte, error)                      { return marshalString(c) }
+func (c Ephemeral) MarshalJSON() ([]byte, error)                    { return marshalString(c) }
+func (c Error) MarshalJSON() ([]byte, error)                        { return marshalString(c) }
+func (c Errored) MarshalJSON() ([]byte, error)                      { return marshalString(c) }
+func (c Expired) MarshalJSON() ([]byte, error)                      { return marshalString(c) }
+func (c File) MarshalJSON() ([]byte, error)                         { return marshalString(c) }
+func (c Image) MarshalJSON() ([]byte, error)                        { return marshalString(c) }
+func (c InputJSONDelta) MarshalJSON() ([]byte, error)               { return marshalString(c) }
+func (c InvalidRequestError) MarshalJSON() ([]byte, error)          { return marshalString(c) }
+func (c MCPToolResult) MarshalJSON() ([]byte, error)                { return marshalString(c) }
+func (c MCPToolUse) MarshalJSON() ([]byte, error)                   { return marshalString(c) }
+func (c Message) MarshalJSON() ([]byte, error)                      { return marshalString(c) }
+func (c MessageBatch) MarshalJSON() ([]byte, error)                 { return marshalString(c) }
+func (c MessageBatchDeleted) MarshalJSON() ([]byte, error)          { return marshalString(c) }
+func (c MessageDelta) MarshalJSON() ([]byte, error)                 { return marshalString(c) }
+func (c MessageStart) MarshalJSON() ([]byte, error)                 { return marshalString(c) }
+func (c MessageStop) MarshalJSON() ([]byte, error)                  { return marshalString(c) }
+func (c Model) MarshalJSON() ([]byte, error)                        { return marshalString(c) }
+func (c None) MarshalJSON() ([]byte, error)                         { return marshalString(c) }
+func (c NotFoundError) MarshalJSON() ([]byte, error)                { return marshalString(c) }
+func (c Object) MarshalJSON() ([]byte, error)                       { return marshalString(c) }
+func (c OverloadedError) MarshalJSON() ([]byte, error)              { return marshalString(c) }
+func (c PageLocation) MarshalJSON() ([]byte, error)                 { return marshalString(c) }
+func (c PermissionError) MarshalJSON() ([]byte, error)              { return marshalString(c) }
+func (c RateLimitError) MarshalJSON() ([]byte, error)               { return marshalString(c) }
+func (c RedactedThinking) MarshalJSON() ([]byte, error)             { return marshalString(c) }
+func (c ServerToolUse) MarshalJSON() ([]byte, error)                { return marshalString(c) }
+func (c SignatureDelta) MarshalJSON() ([]byte, error)               { return marshalString(c) }
+func (c StrReplaceBasedEditTool) MarshalJSON() ([]byte, error)      { return marshalString(c) }
+func (c StrReplaceEditor) MarshalJSON() ([]byte, error)             { return marshalString(c) }
+func (c Succeeded) MarshalJSON() ([]byte, error)                    { return marshalString(c) }
+func (c Text) MarshalJSON() ([]byte, error)                         { return marshalString(c) }
+func (c TextDelta) MarshalJSON() ([]byte, error)                    { return marshalString(c) }
+func (c TextEditor20241022) MarshalJSON() ([]byte, error)           { return marshalString(c) }
+func (c TextEditor20250124) MarshalJSON() ([]byte, error)           { return marshalString(c) }
+func (c TextEditor20250429) MarshalJSON() ([]byte, error)           { return marshalString(c) }
+func (c TextPlain) MarshalJSON() ([]byte, error)                    { return marshalString(c) }
+func (c Thinking) MarshalJSON() ([]byte, error)                     { return marshalString(c) }
+func (c ThinkingDelta) MarshalJSON() ([]byte, error)                { return marshalString(c) }
+func (c TimeoutError) MarshalJSON() ([]byte, error)                 { return marshalString(c) }
+func (c Tool) MarshalJSON() ([]byte, error)                         { return marshalString(c) }
+func (c ToolResult) MarshalJSON() ([]byte, error)                   { return marshalString(c) }
+func (c ToolUse) MarshalJSON() ([]byte, error)                      { return marshalString(c) }
+func (c URL) MarshalJSON() ([]byte, error)                          { return marshalString(c) }
+func (c WebSearch) MarshalJSON() ([]byte, error)                    { return marshalString(c) }
+func (c WebSearch20250305) MarshalJSON() ([]byte, error)            { return marshalString(c) }
+func (c WebSearchResult) MarshalJSON() ([]byte, error)              { return marshalString(c) }
+func (c WebSearchResultLocation) MarshalJSON() ([]byte, error)      { return marshalString(c) }
+func (c WebSearchToolResult) MarshalJSON() ([]byte, error)          { return marshalString(c) }
+func (c WebSearchToolResultError) MarshalJSON() ([]byte, error)     { return marshalString(c) }
+
+type constant[T any] interface {
+	Constant[T]
+	*T
+}
+
+func marshalString[T ~string, PT constant[T]](v T) ([]byte, error) {
+	var zero T
+	if v == zero {
+		v = PT(&v).Default()
+	}
+	return json.Marshal(string(v))
+}
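The `marshalString` generic above is the interesting part of this generated file: every constant type marshals to its canonical wire value even when left as the zero string. A standalone miniature of the pattern (the `MessageStop` type here is a local stand-in, not the SDK's own package) looks like this:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Constant mirrors the SDK's interface: a string type that knows its
// canonical ("default") wire value.
type Constant[T any] interface{ Default() T }

// constant is the pointer-receiver constraint used by marshalString so
// that Default can be reached through *T without knowing the concrete type.
type constant[T any] interface {
	Constant[T]
	*T
}

// MessageStop models one generated constant type.
type MessageStop string

func (MessageStop) Default() MessageStop { return "message_stop" }

// marshalString substitutes the default when the value is the zero
// string, then marshals it as a plain JSON string.
func marshalString[T ~string, PT constant[T]](v T) ([]byte, error) {
	var zero T
	if v == zero {
		v = PT(&v).Default()
	}
	return json.Marshal(string(v))
}

func main() {
	b, _ := marshalString(MessageStop("")) // zero value: default is substituted
	fmt.Println(string(b))
	b, _ = marshalString(MessageStop("custom")) // non-zero: passed through
	fmt.Println(string(b))
}
```

The two-interface split exists so the constraint can require both the `Default` method and the pointer identity `*T`, letting Go infer `PT` from `T` at each call site.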

vendor/github.com/anthropics/anthropic-sdk-go/shared/shared.go 🔗

@@ -0,0 +1,334 @@
+// File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
+
+package shared
+
+import (
+	"encoding/json"
+
+	"github.com/anthropics/anthropic-sdk-go/internal/apijson"
+	"github.com/anthropics/anthropic-sdk-go/packages/param"
+	"github.com/anthropics/anthropic-sdk-go/packages/respjson"
+	"github.com/anthropics/anthropic-sdk-go/shared/constant"
+)
+
+// aliased to make [param.APIUnion] private when embedding
+type paramUnion = param.APIUnion
+
+// aliased to make [param.APIObject] private when embedding
+type paramObj = param.APIObject
+
+type APIErrorObject struct {
+	Message string            `json:"message,required"`
+	Type    constant.APIError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r APIErrorObject) RawJSON() string { return r.JSON.raw }
+func (r *APIErrorObject) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (APIErrorObject) ImplErrorObjectUnion() {}
+
+type AuthenticationError struct {
+	Message string                       `json:"message,required"`
+	Type    constant.AuthenticationError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r AuthenticationError) RawJSON() string { return r.JSON.raw }
+func (r *AuthenticationError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (AuthenticationError) ImplErrorObjectUnion() {}
+
+type BillingError struct {
+	Message string                `json:"message,required"`
+	Type    constant.BillingError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r BillingError) RawJSON() string { return r.JSON.raw }
+func (r *BillingError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (BillingError) ImplErrorObjectUnion() {}
+
+// ErrorObjectUnion contains all possible properties and values from
+// [InvalidRequestError], [AuthenticationError], [BillingError], [PermissionError],
+// [NotFoundError], [RateLimitError], [GatewayTimeoutError], [APIErrorObject],
+// [OverloadedError].
+//
+// Use the [ErrorObjectUnion.AsAny] method to switch on the variant.
+//
+// Use the methods beginning with 'As' to cast the union to one of its variants.
+type ErrorObjectUnion struct {
+	Message string `json:"message"`
+	// Any of "invalid_request_error", "authentication_error", "billing_error",
+	// "permission_error", "not_found_error", "rate_limit_error", "timeout_error",
+	// "api_error", "overloaded_error".
+	Type string `json:"type"`
+	JSON struct {
+		Message respjson.Field
+		Type    respjson.Field
+		raw     string
+	} `json:"-"`
+}
+
+// anyErrorObject is implemented by each variant of [ErrorObjectUnion] to add type
+// safety for the return type of [ErrorObjectUnion.AsAny]
+type anyErrorObject interface {
+	ImplErrorObjectUnion()
+}
+
+// Use the following switch statement to find the correct variant
+//
+//	switch variant := ErrorObjectUnion.AsAny().(type) {
+//	case shared.InvalidRequestError:
+//	case shared.AuthenticationError:
+//	case shared.BillingError:
+//	case shared.PermissionError:
+//	case shared.NotFoundError:
+//	case shared.RateLimitError:
+//	case shared.GatewayTimeoutError:
+//	case shared.APIErrorObject:
+//	case shared.OverloadedError:
+//	default:
+//	  fmt.Errorf("no variant present")
+//	}
+func (u ErrorObjectUnion) AsAny() anyErrorObject {
+	switch u.Type {
+	case "invalid_request_error":
+		return u.AsInvalidRequestError()
+	case "authentication_error":
+		return u.AsAuthenticationError()
+	case "billing_error":
+		return u.AsBillingError()
+	case "permission_error":
+		return u.AsPermissionError()
+	case "not_found_error":
+		return u.AsNotFoundError()
+	case "rate_limit_error":
+		return u.AsRateLimitError()
+	case "timeout_error":
+		return u.AsTimeoutError()
+	case "api_error":
+		return u.AsAPIError()
+	case "overloaded_error":
+		return u.AsOverloadedError()
+	}
+	return nil
+}
+
+func (u ErrorObjectUnion) AsInvalidRequestError() (v InvalidRequestError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ErrorObjectUnion) AsAuthenticationError() (v AuthenticationError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ErrorObjectUnion) AsBillingError() (v BillingError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ErrorObjectUnion) AsPermissionError() (v PermissionError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ErrorObjectUnion) AsNotFoundError() (v NotFoundError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ErrorObjectUnion) AsRateLimitError() (v RateLimitError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ErrorObjectUnion) AsTimeoutError() (v GatewayTimeoutError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ErrorObjectUnion) AsAPIError() (v APIErrorObject) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+func (u ErrorObjectUnion) AsOverloadedError() (v OverloadedError) {
+	apijson.UnmarshalRoot(json.RawMessage(u.JSON.raw), &v)
+	return
+}
+
+// Returns the unmodified JSON received from the API
+func (u ErrorObjectUnion) RawJSON() string { return u.JSON.raw }
+
+func (r *ErrorObjectUnion) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type ErrorResponse struct {
+	Error ErrorObjectUnion `json:"error,required"`
+	Type  constant.Error   `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Error       respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r ErrorResponse) RawJSON() string { return r.JSON.raw }
+func (r *ErrorResponse) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+type GatewayTimeoutError struct {
+	Message string                `json:"message,required"`
+	Type    constant.TimeoutError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r GatewayTimeoutError) RawJSON() string { return r.JSON.raw }
+func (r *GatewayTimeoutError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (GatewayTimeoutError) ImplErrorObjectUnion() {}
+
+type InvalidRequestError struct {
+	Message string                       `json:"message,required"`
+	Type    constant.InvalidRequestError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r InvalidRequestError) RawJSON() string { return r.JSON.raw }
+func (r *InvalidRequestError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (InvalidRequestError) ImplErrorObjectUnion() {}
+
+type NotFoundError struct {
+	Message string                 `json:"message,required"`
+	Type    constant.NotFoundError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r NotFoundError) RawJSON() string { return r.JSON.raw }
+func (r *NotFoundError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (NotFoundError) ImplErrorObjectUnion() {}
+
+type OverloadedError struct {
+	Message string                   `json:"message,required"`
+	Type    constant.OverloadedError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r OverloadedError) RawJSON() string { return r.JSON.raw }
+func (r *OverloadedError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (OverloadedError) ImplErrorObjectUnion() {}
+
+type PermissionError struct {
+	Message string                   `json:"message,required"`
+	Type    constant.PermissionError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r PermissionError) RawJSON() string { return r.JSON.raw }
+func (r *PermissionError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (PermissionError) ImplErrorObjectUnion() {}
+
+type RateLimitError struct {
+	Message string                  `json:"message,required"`
+	Type    constant.RateLimitError `json:"type,required"`
+	// JSON contains metadata for fields, check presence with [respjson.Field.Valid].
+	JSON struct {
+		Message     respjson.Field
+		Type        respjson.Field
+		ExtraFields map[string]respjson.Field
+		raw         string
+	} `json:"-"`
+}
+
+// Returns the unmodified JSON received from the API
+func (r RateLimitError) RawJSON() string { return r.JSON.raw }
+func (r *RateLimitError) UnmarshalJSON(data []byte) error {
+	return apijson.UnmarshalRoot(data, r)
+}
+
+func (RateLimitError) ImplErrorObjectUnion() {}

vendor/github.com/atotto/clipboard/.travis.yml 🔗

@@ -0,0 +1,22 @@
+language: go
+
+os:
+ - linux
+ - osx
+ - windows
+
+go:
+ - go1.13.x
+ - go1.x
+
+services:
+ - xvfb
+
+before_install:
+ - export DISPLAY=:99.0
+
+script:
+ - if [ "$TRAVIS_OS_NAME" = "linux" ]; then sudo apt-get install xsel; fi
+ - go test -v .
+ - if [ "$TRAVIS_OS_NAME" = "linux" ]; then sudo apt-get install xclip; fi
+ - go test -v .

vendor/github.com/atotto/clipboard/LICENSE 🔗

@@ -0,0 +1,27 @@
+Copyright (c) 2013 Ato Araki. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+   * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+   * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+   * Neither the name of @atotto. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/atotto/clipboard/README.md 🔗

@@ -0,0 +1,48 @@
+[![Build Status](https://travis-ci.org/atotto/clipboard.svg?branch=master)](https://travis-ci.org/atotto/clipboard)
+
+[![GoDoc](https://godoc.org/github.com/atotto/clipboard?status.svg)](http://godoc.org/github.com/atotto/clipboard)
+
+# Clipboard for Go
+
+Provide copying and pasting to the Clipboard for Go.
+
+Build:
+
+    $ go get github.com/atotto/clipboard
+
+Platforms:
+
+* OSX
+* Windows 7 (probably work on other Windows)
+* Linux, Unix (requires 'xclip' or 'xsel' command to be installed)
+
+
+Document: 
+
+* http://godoc.org/github.com/atotto/clipboard
+
+Notes:
+
+* Text string only
+* UTF-8 text encoding only (no conversion)
+
+TODO:
+
+* Clipboard watcher(?)
+
+## Commands:
+
+paste shell command:
+
+    $ go get github.com/atotto/clipboard/cmd/gopaste
+    $ # example:
+    $ gopaste > document.txt
+
+copy shell command:
+
+    $ go get github.com/atotto/clipboard/cmd/gocopy
+    $ # example:
+    $ cat document.txt | gocopy
+
+
+

vendor/github.com/atotto/clipboard/clipboard.go 🔗

@@ -0,0 +1,20 @@
+// Copyright 2013 @atotto. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package clipboard read/write on clipboard
+package clipboard
+
+// ReadAll read string from clipboard
+func ReadAll() (string, error) {
+	return readAll()
+}
+
+// WriteAll write string to clipboard
+func WriteAll(text string) error {
+	return writeAll(text)
+}
+
+// Unsupported might be set true during clipboard init, to help callers decide
+// whether or not to offer clipboard options.
+var Unsupported bool

vendor/github.com/atotto/clipboard/clipboard_darwin.go 🔗

@@ -0,0 +1,52 @@
+// Copyright 2013 @atotto. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build darwin
+
+package clipboard
+
+import (
+	"os/exec"
+)
+
+var (
+	pasteCmdArgs = "pbpaste"
+	copyCmdArgs  = "pbcopy"
+)
+
+func getPasteCommand() *exec.Cmd {
+	return exec.Command(pasteCmdArgs)
+}
+
+func getCopyCommand() *exec.Cmd {
+	return exec.Command(copyCmdArgs)
+}
+
+func readAll() (string, error) {
+	pasteCmd := getPasteCommand()
+	out, err := pasteCmd.Output()
+	if err != nil {
+		return "", err
+	}
+	return string(out), nil
+}
+
+func writeAll(text string) error {
+	copyCmd := getCopyCommand()
+	in, err := copyCmd.StdinPipe()
+	if err != nil {
+		return err
+	}
+
+	if err := copyCmd.Start(); err != nil {
+		return err
+	}
+	if _, err := in.Write([]byte(text)); err != nil {
+		return err
+	}
+	if err := in.Close(); err != nil {
+		return err
+	}
+	return copyCmd.Wait()
+}
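The `writeAll` sequence above (StdinPipe → Start → Write → Close → Wait) is the standard way to feed text to an external command. The same four steps, piped through `cat` instead of `pbcopy` so it runs anywhere POSIX (the `pipeTo` helper is illustrative, not part of the vendored package):

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// pipeTo runs name, writes text to its stdin in the same order the
// vendored writeAll uses, and returns whatever the command printed.
func pipeTo(name, text string) (string, error) {
	cmd := exec.Command(name)
	var out bytes.Buffer
	cmd.Stdout = &out

	in, err := cmd.StdinPipe()
	if err != nil {
		return "", err
	}
	if err := cmd.Start(); err != nil {
		return "", err
	}
	if _, err := in.Write([]byte(text)); err != nil {
		return "", err
	}
	// Closing stdin signals EOF; without it, Wait would block forever.
	if err := in.Close(); err != nil {
		return "", err
	}
	if err := cmd.Wait(); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	got, err := pipeTo("cat", "hello clipboard")
	fmt.Println(got, err)
}
```

The explicit `Close` before `Wait` matters: `pbcopy` (like `cat`) reads until EOF, so the pipe must be closed for the child process to exit.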

vendor/github.com/atotto/clipboard/clipboard_plan9.go 🔗

@@ -0,0 +1,42 @@
+// Copyright 2013 @atotto. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build plan9
+
+package clipboard
+
+import (
+	"os"
+	"io/ioutil"
+)
+
+func readAll() (string, error) {
+	f, err := os.Open("/dev/snarf")
+	if err != nil {
+		return "", err
+	}
+	defer f.Close()
+
+	str, err := ioutil.ReadAll(f)
+	if err != nil {
+		return "", err
+	}
+	
+	return string(str), nil
+}
+
+func writeAll(text string) error {
+	f, err := os.OpenFile("/dev/snarf", os.O_WRONLY, 0666)
+	if err != nil {
+		return err
+	}
+	defer f.Close()
+	
+	_, err = f.Write([]byte(text))
+	if err != nil {
+		return err
+	}
+	
+	return nil
+}

vendor/github.com/atotto/clipboard/clipboard_unix.go 🔗

@@ -0,0 +1,149 @@
+// Copyright 2013 @atotto. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build freebsd linux netbsd openbsd solaris dragonfly
+
+package clipboard
+
+import (
+	"errors"
+	"os"
+	"os/exec"
+)
+
+const (
+	xsel               = "xsel"
+	xclip              = "xclip"
+	powershellExe      = "powershell.exe"
+	clipExe            = "clip.exe"
+	wlcopy             = "wl-copy"
+	wlpaste            = "wl-paste"
+	termuxClipboardGet = "termux-clipboard-get"
+	termuxClipboardSet = "termux-clipboard-set"
+)
+
+var (
+	Primary bool
+	trimDos bool
+
+	pasteCmdArgs []string
+	copyCmdArgs  []string
+
+	xselPasteArgs = []string{xsel, "--output", "--clipboard"}
+	xselCopyArgs  = []string{xsel, "--input", "--clipboard"}
+
+	xclipPasteArgs = []string{xclip, "-out", "-selection", "clipboard"}
+	xclipCopyArgs  = []string{xclip, "-in", "-selection", "clipboard"}
+
+	powershellExePasteArgs = []string{powershellExe, "Get-Clipboard"}
+	clipExeCopyArgs        = []string{clipExe}
+
+	wlpasteArgs = []string{wlpaste, "--no-newline"}
+	wlcopyArgs  = []string{wlcopy}
+
+	termuxPasteArgs = []string{termuxClipboardGet}
+	termuxCopyArgs  = []string{termuxClipboardSet}
+
+	missingCommands = errors.New("No clipboard utilities available. Please install xsel, xclip, wl-clipboard or Termux:API add-on for termux-clipboard-get/set.")
+)
+
+func init() {
+	if os.Getenv("WAYLAND_DISPLAY") != "" {
+		pasteCmdArgs = wlpasteArgs
+		copyCmdArgs = wlcopyArgs
+
+		if _, err := exec.LookPath(wlcopy); err == nil {
+			if _, err := exec.LookPath(wlpaste); err == nil {
+				return
+			}
+		}
+	}
+
+	pasteCmdArgs = xclipPasteArgs
+	copyCmdArgs = xclipCopyArgs
+
+	if _, err := exec.LookPath(xclip); err == nil {
+		return
+	}
+
+	pasteCmdArgs = xselPasteArgs
+	copyCmdArgs = xselCopyArgs
+
+	if _, err := exec.LookPath(xsel); err == nil {
+		return
+	}
+
+	pasteCmdArgs = termuxPasteArgs
+	copyCmdArgs = termuxCopyArgs
+
+	if _, err := exec.LookPath(termuxClipboardSet); err == nil {
+		if _, err := exec.LookPath(termuxClipboardGet); err == nil {
+			return
+		}
+	}
+
+	pasteCmdArgs = powershellExePasteArgs
+	copyCmdArgs = clipExeCopyArgs
+	trimDos = true
+
+	if _, err := exec.LookPath(clipExe); err == nil {
+		if _, err := exec.LookPath(powershellExe); err == nil {
+			return
+		}
+	}
+
+	Unsupported = true
+}
+
+func getPasteCommand() *exec.Cmd {
+	if Primary {
+		pasteCmdArgs = pasteCmdArgs[:1]
+	}
+	return exec.Command(pasteCmdArgs[0], pasteCmdArgs[1:]...)
+}
+
+func getCopyCommand() *exec.Cmd {
+	if Primary {
+		copyCmdArgs = copyCmdArgs[:1]
+	}
+	return exec.Command(copyCmdArgs[0], copyCmdArgs[1:]...)
+}
+
+func readAll() (string, error) {
+	if Unsupported {
+		return "", missingCommands
+	}
+	pasteCmd := getPasteCommand()
+	out, err := pasteCmd.Output()
+	if err != nil {
+		return "", err
+	}
+	result := string(out)
+	if trimDos && len(result) > 1 {
+		result = result[:len(result)-2]
+	}
+	return result, nil
+}
+
+func writeAll(text string) error {
+	if Unsupported {
+		return missingCommands
+	}
+	copyCmd := getCopyCommand()
+	in, err := copyCmd.StdinPipe()
+	if err != nil {
+		return err
+	}
+
+	if err := copyCmd.Start(); err != nil {
+		return err
+	}
+	if _, err := in.Write([]byte(text)); err != nil {
+		return err
+	}
+	if err := in.Close(); err != nil {
+		return err
+	}
+	return copyCmd.Wait()
+}
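The `init` function above encodes a fallback chain: prefer wl-clipboard under Wayland, then xclip, then xsel, then the Termux tools, and finally the WSL `powershell.exe`/`clip.exe` pair, setting `Unsupported` when nothing resolves. The detection primitive is just `exec.LookPath` in order, which can be isolated as a small sketch (the `pickTool` helper is illustrative, not part of the vendored file):

```go
package main

import (
	"fmt"
	"os/exec"
)

// pickTool returns the first candidate found on PATH, mirroring the
// probe order used by clipboard_unix.go's init.
func pickTool(candidates []string) (string, bool) {
	for _, name := range candidates {
		if _, err := exec.LookPath(name); err == nil {
			return name, true
		}
	}
	return "", false
}

func main() {
	// Order mirrors the vendored init: Wayland first, then the X11
	// tools, then Termux, then the WSL clip.exe fallback.
	tool, ok := pickTool([]string{"wl-copy", "xclip", "xsel", "termux-clipboard-set", "clip.exe"})
	if !ok {
		fmt.Println("no clipboard utility found; Unsupported would be set")
		return
	}
	fmt.Println("using", tool)
}
```

Note that the vendored `init` sets `pasteCmdArgs`/`copyCmdArgs` *before* probing each pair, so the last-probed pair's arguments remain configured even when `Unsupported` ends up true; `readAll`/`writeAll` guard on `Unsupported` first, which is what keeps that harmless.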

vendor/github.com/atotto/clipboard/clipboard_windows.go 🔗

@@ -0,0 +1,157 @@
+// Copyright 2013 @atotto. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build windows
+
+package clipboard
+
+import (
+	"runtime"
+	"syscall"
+	"time"
+	"unsafe"
+)
+
+const (
+	cfUnicodetext = 13
+	gmemMoveable  = 0x0002
+)
+
+var (
+	user32                     = syscall.MustLoadDLL("user32")
+	isClipboardFormatAvailable = user32.MustFindProc("IsClipboardFormatAvailable")
+	openClipboard              = user32.MustFindProc("OpenClipboard")
+	closeClipboard             = user32.MustFindProc("CloseClipboard")
+	emptyClipboard             = user32.MustFindProc("EmptyClipboard")
+	getClipboardData           = user32.MustFindProc("GetClipboardData")
+	setClipboardData           = user32.MustFindProc("SetClipboardData")
+
+	kernel32     = syscall.NewLazyDLL("kernel32")
+	globalAlloc  = kernel32.NewProc("GlobalAlloc")
+	globalFree   = kernel32.NewProc("GlobalFree")
+	globalLock   = kernel32.NewProc("GlobalLock")
+	globalUnlock = kernel32.NewProc("GlobalUnlock")
+	lstrcpy      = kernel32.NewProc("lstrcpyW")
+)
+
+// waitOpenClipboard opens the clipboard, waiting for up to a second to do so.
+func waitOpenClipboard() error {
+	started := time.Now()
+	limit := started.Add(time.Second)
+	var r uintptr
+	var err error
+	for time.Now().Before(limit) {
+		r, _, err = openClipboard.Call(0)
+		if r != 0 {
+			return nil
+		}
+		time.Sleep(time.Millisecond)
+	}
+	return err
+}
+
+func readAll() (string, error) {
+	// LockOSThread ensure that the whole method will keep executing on the same thread from begin to end (it actually locks the goroutine thread attribution).
+	// Otherwise if the goroutine switch thread during execution (which is a common practice), the OpenClipboard and CloseClipboard will happen on two different threads, and it will result in a clipboard deadlock.
+	runtime.LockOSThread()
+	defer runtime.UnlockOSThread()
+	if formatAvailable, _, err := isClipboardFormatAvailable.Call(cfUnicodetext); formatAvailable == 0 {
+		return "", err
+	}
+	err := waitOpenClipboard()
+	if err != nil {
+		return "", err
+	}
+
+	h, _, err := getClipboardData.Call(cfUnicodetext)
+	if h == 0 {
+		_, _, _ = closeClipboard.Call()
+		return "", err
+	}
+
+	l, _, err := globalLock.Call(h)
+	if l == 0 {
+		_, _, _ = closeClipboard.Call()
+		return "", err
+	}
+
+	text := syscall.UTF16ToString((*[1 << 20]uint16)(unsafe.Pointer(l))[:])
+
+	r, _, err := globalUnlock.Call(h)
+	if r == 0 {
+		_, _, _ = closeClipboard.Call()
+		return "", err
+	}
+
+	closed, _, err := closeClipboard.Call()
+	if closed == 0 {
+		return "", err
+	}
+	return text, nil
+}
+
+func writeAll(text string) error {
+	// LockOSThread ensure that the whole method will keep executing on the same thread from begin to end (it actually locks the goroutine thread attribution).
+	// Otherwise if the goroutine switch thread during execution (which is a common practice), the OpenClipboard and CloseClipboard will happen on two different threads, and it will result in a clipboard deadlock.
+	runtime.LockOSThread()
+	defer runtime.UnlockOSThread()
+
+	err := waitOpenClipboard()
+	if err != nil {
+		return err
+	}
+
+	r, _, err := emptyClipboard.Call(0)
+	if r == 0 {
+		_, _, _ = closeClipboard.Call()
+		return err
+	}
+
+	data := syscall.StringToUTF16(text)
+
+	// "If the hMem parameter identifies a memory object, the object must have
+	// been allocated using the function with the GMEM_MOVEABLE flag."
+	h, _, err := globalAlloc.Call(gmemMoveable, uintptr(len(data)*int(unsafe.Sizeof(data[0]))))
+	if h == 0 {
+		_, _, _ = closeClipboard.Call()
+		return err
+	}
+	defer func() {
+		if h != 0 {
+			globalFree.Call(h)
+		}
+	}()
+
+	l, _, err := globalLock.Call(h)
+	if l == 0 {
+		_, _, _ = closeClipboard.Call()
+		return err
+	}
+
+	r, _, err = lstrcpy.Call(l, uintptr(unsafe.Pointer(&data[0])))
+	if r == 0 {
+		_, _, _ = closeClipboard.Call()
+		return err
+	}
+
+	r, _, err = globalUnlock.Call(h)
+	if r == 0 {
+		if err.(syscall.Errno) != 0 {
+			_, _, _ = closeClipboard.Call()
+			return err
+		}
+	}
+
+	r, _, err = setClipboardData.Call(cfUnicodetext, h)
+	if r == 0 {
+		_, _, _ = closeClipboard.Call()
+		return err
+	}
+	h = 0 // suppress deferred cleanup
+	closed, _, err := closeClipboard.Call()
+	if closed == 0 {
+		return err
+	}
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/LICENSE.txt 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/aws/accountid_endpoint_mode.go

@@ -0,0 +1,18 @@
+package aws
+
+// AccountIDEndpointMode controls how a resolved AWS account ID is handled for endpoint routing.
+type AccountIDEndpointMode string
+
+const (
+	// AccountIDEndpointModeUnset indicates the AWS account ID will not be used for endpoint routing
+	AccountIDEndpointModeUnset AccountIDEndpointMode = ""
+
+	// AccountIDEndpointModePreferred indicates the AWS account ID will be used for endpoint routing if present
+	AccountIDEndpointModePreferred = "preferred"
+
+	// AccountIDEndpointModeRequired indicates an error will be returned if the AWS account ID is not resolved from identity
+	AccountIDEndpointModeRequired = "required"
+
+	// AccountIDEndpointModeDisabled indicates the AWS account ID will be ignored during endpoint routing
+	AccountIDEndpointModeDisabled = "disabled"
+)

vendor/github.com/aws/aws-sdk-go-v2/aws/config.go

@@ -0,0 +1,211 @@
+package aws
+
+import (
+	"net/http"
+
+	smithybearer "github.com/aws/smithy-go/auth/bearer"
+	"github.com/aws/smithy-go/logging"
+	"github.com/aws/smithy-go/middleware"
+)
+
+// HTTPClient provides the interface to provide custom HTTPClients. Generally
+// *http.Client is sufficient for most use cases. The HTTPClient should not
+// follow 301 or 302 redirects.
+type HTTPClient interface {
+	Do(*http.Request) (*http.Response, error)
+}
+
+// A Config provides service configuration for service clients.
+type Config struct {
+	// The region to send requests to. This parameter is required and must
+	// be configured globally or on a per-client basis unless otherwise
+	// noted. A full list of regions is found in the "Regions and Endpoints"
+	// document.
+	//
+	// See http://docs.aws.amazon.com/general/latest/gr/rande.html for
+	// information on AWS regions.
+	Region string
+
+	// The credentials object to use when signing requests.
+	// Use the LoadDefaultConfig to load configuration from all the SDK's supported
+	// sources, and resolve credentials using the SDK's default credential chain.
+	Credentials CredentialsProvider
+
+	// The Bearer Authentication token provider to use for authenticating API
+	// operation calls with a Bearer Authentication token. The API client and
+	// operation must support the Bearer Authentication scheme in order for the
+	// token provider to be used. API clients created with NewFromConfig will
+	// automatically be configured with this option, if the API client supports
+	// Bearer Authentication.
+	//
+	// The SDK's config.LoadDefaultConfig can automatically populate this
+	// option for external configuration options such as SSO session.
+	// https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html
+	BearerAuthTokenProvider smithybearer.TokenProvider
+
+	// The HTTP Client the SDK's API clients will use to invoke HTTP requests.
+	// The SDK defaults to a BuildableClient allowing API clients to create
+	// copies of the HTTP Client for service specific customizations.
+	//
+	// Use a (*http.Client) for custom behavior. Using a custom http.Client
+	// will prevent the SDK from modifying the HTTP client.
+	HTTPClient HTTPClient
+
+	// An endpoint resolver that can be used to provide or override an endpoint
+	// for the given service and region.
+	//
+	// See the `aws.EndpointResolver` documentation for additional usage
+	// information.
+	//
+	// Deprecated: See Config.EndpointResolverWithOptions
+	EndpointResolver EndpointResolver
+
+	// An endpoint resolver that can be used to provide or override an endpoint
+	// for the given service and region.
+	//
+	// When EndpointResolverWithOptions is specified, it will be used by a
+	// service client rather than using EndpointResolver if also specified.
+	//
+	// See the `aws.EndpointResolverWithOptions` documentation for additional
+	// usage information.
+	//
+	// Deprecated: with the release of endpoint resolution v2 in API clients,
+	// EndpointResolver and EndpointResolverWithOptions are deprecated.
+	// Providing a value for this field will likely prevent you from using
+	// newer endpoint-related service features. See API client options
+	// EndpointResolverV2 and BaseEndpoint.
+	EndpointResolverWithOptions EndpointResolverWithOptions
+
+	// RetryMaxAttempts specifies the maximum number attempts an API client
+	// will call an operation that fails with a retryable error.
+	//
+	// API Clients will only use this value to construct a retryer if the
+	// Config.Retryer member is not nil. This value will be ignored if
+	// Retryer is not nil.
+	RetryMaxAttempts int
+
+	// RetryMode specifies the retry model the API client will be created with.
+	//
+	// API Clients will only use this value to construct a retryer if the
+	// Config.Retryer member is not nil. This value will be ignored if
+	// Retryer is not nil.
+	RetryMode RetryMode
+
+	// Retryer is a function that provides a Retryer implementation. A Retryer
+	// guides how HTTP requests should be retried in case of recoverable
+	// failures. When nil the API client will use a default retryer.
+	//
+	// In general, the provider function should return a new instance of a
+	// Retryer if you are attempting to provide a consistent Retryer
+	// configuration across all clients. This will ensure that each client will
+	// be provided a new instance of the Retryer implementation, and will avoid
+	// issues such as sharing the same retry token bucket across services.
+	//
+	// If not nil, RetryMaxAttempts, and RetryMode will be ignored by API
+	// clients.
+	Retryer func() Retryer
+
+	// ConfigSources are the sources that were used to construct the Config.
+	// Allows for additional configuration to be loaded by clients.
+	ConfigSources []interface{}
+
+	// APIOptions provides the set of middleware mutations that modify how the API
+	// client requests will be handled. This is useful for adding additional
+	// tracing data to a request, or changing behavior of the SDK's client.
+	APIOptions []func(*middleware.Stack) error
+
+	// The logger writer interface to write logging messages to. Defaults to
+	// standard error.
+	Logger logging.Logger
+
+	// Configures the events that will be sent to the configured logger. This
+	// can be used to configure the logging of signing, retries, request, and
+	// responses of the SDK clients.
+	//
+	// See the ClientLogMode type documentation for the complete set of logging
+	// modes and available configuration.
+	ClientLogMode ClientLogMode
+
+	// The configured DefaultsMode. If not specified, service clients will
+	// default to legacy.
+	//
+	// Supported modes are: auto, cross-region, in-region, legacy, mobile,
+	// standard
+	DefaultsMode DefaultsMode
+
+	// The RuntimeEnvironment configuration, only populated if the DefaultsMode
+	// is set to DefaultsModeAuto and is initialized by
+	// `config.LoadDefaultConfig`. You should not populate this structure
+	// programmatically, or rely on the values here within your applications.
+	RuntimeEnvironment RuntimeEnvironment
+
+	// AppId is an optional application specific identifier that can be set.
+	// When set it will be appended to the User-Agent header of every request
+	// in the form of App/{AppId}. This variable is sourced from environment
+	// variable AWS_SDK_UA_APP_ID or the shared config profile attribute sdk_ua_app_id.
+	// See https://docs.aws.amazon.com/sdkref/latest/guide/settings-reference.html for
+	// more information on environment variables and shared config settings.
+	AppID string
+
+	// BaseEndpoint is an intermediary transfer location to a service specific
+	// BaseEndpoint on a service's Options.
+	BaseEndpoint *string
+
+	// DisableRequestCompression toggles if an operation request could be
+	// compressed or not. Will be set to false by default. This variable is sourced from
+	// environment variable AWS_DISABLE_REQUEST_COMPRESSION or the shared config profile attribute
+	// disable_request_compression
+	DisableRequestCompression bool
+
+	// RequestMinCompressSizeBytes sets the inclusive min bytes of a request body that could be
+	// compressed. Will be set to 10240 by default and must be within 0 and 10485760 bytes inclusively.
+	// This variable is sourced from environment variable AWS_REQUEST_MIN_COMPRESSION_SIZE_BYTES or
+	// the shared config profile attribute request_min_compression_size_bytes
+	RequestMinCompressSizeBytes int64
+
+	// Controls how a resolved AWS account ID is handled for endpoint routing.
+	AccountIDEndpointMode AccountIDEndpointMode
+}
+
+// NewConfig returns a new Config pointer that can be chained with builder
+// methods to set multiple configuration values inline without using pointers.
+func NewConfig() *Config {
+	return &Config{}
+}
+
+// Copy will return a shallow copy of the Config object.
+func (c Config) Copy() Config {
+	cp := c
+	return cp
+}
+
+// EndpointDiscoveryEnableState indicates if endpoint discovery is
+// enabled, disabled, auto or unset state.
+//
+// Default behavior (Auto or Unset) indicates operations that require endpoint
+// discovery will use Endpoint Discovery by default. Operations that
+// optionally use Endpoint Discovery will not use Endpoint Discovery
+// unless EndpointDiscovery is explicitly enabled.
+type EndpointDiscoveryEnableState uint
+
+// Enumeration values for EndpointDiscoveryEnableState
+const (
+	// EndpointDiscoveryUnset represents EndpointDiscoveryEnableState is unset.
+	// Users do not need to use this value explicitly. The behavior for unset
+	// is the same as for EndpointDiscoveryAuto.
+	EndpointDiscoveryUnset EndpointDiscoveryEnableState = iota
+
+	// EndpointDiscoveryAuto represents an AUTO state that allows endpoint
+	// discovery only when required by the api. This is the default
+	// configuration resolved by the client if endpoint discovery is neither
+	// enabled nor disabled.
+	EndpointDiscoveryAuto // default state
+
+	// EndpointDiscoveryDisabled indicates client MUST not perform endpoint
+	// discovery even when required.
+	EndpointDiscoveryDisabled
+
+	// EndpointDiscoveryEnabled indicates client MUST always perform endpoint
+	// discovery if supported for the operation.
+	EndpointDiscoveryEnabled
+)
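
Config.Copy above is a plain struct copy, which in Go is shallow: value fields become independent, but pointer fields such as BaseEndpoint keep referencing the same underlying data in the original and the copy. A standalone sketch of that semantics (miniConfig is an illustrative stand-in, not an SDK type):

```go
package main

import "fmt"

// miniConfig mirrors the shape of aws.Config's Copy: returning the receiver
// by value produces a shallow copy, so pointer fields are shared between the
// original and the copy.
type miniConfig struct {
	Region       string
	BaseEndpoint *string // pointer field: shared by shallow copies
}

// Copy returns a shallow copy of the config, as aws.Config.Copy does.
func (c miniConfig) Copy() miniConfig { return c }

func main() {
	ep := "https://example.test"
	a := miniConfig{Region: "us-east-1", BaseEndpoint: &ep}
	b := a.Copy()

	b.Region = "eu-west-1"                 // value field: independent
	*b.BaseEndpoint = "https://other.test" // pointer target: shared

	fmt.Println(a.Region)        // us-east-1
	fmt.Println(*a.BaseEndpoint) // https://other.test
}
```

This is why mutating data reachable through pointers on a copied Config still affects the original.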

vendor/github.com/aws/aws-sdk-go-v2/aws/context.go

@@ -0,0 +1,22 @@
+package aws
+
+import (
+	"context"
+	"time"
+)
+
+type suppressedContext struct {
+	context.Context
+}
+
+func (s *suppressedContext) Deadline() (deadline time.Time, ok bool) {
+	return time.Time{}, false
+}
+
+func (s *suppressedContext) Done() <-chan struct{} {
+	return nil
+}
+
+func (s *suppressedContext) Err() error {
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/credential_cache.go

@@ -0,0 +1,224 @@
+package aws
+
+import (
+	"context"
+	"fmt"
+	"sync/atomic"
+	"time"
+
+	sdkrand "github.com/aws/aws-sdk-go-v2/internal/rand"
+	"github.com/aws/aws-sdk-go-v2/internal/sync/singleflight"
+)
+
+// CredentialsCacheOptions are the options for configuring a CredentialsCache.
+type CredentialsCacheOptions struct {
+
+	// ExpiryWindow will allow the credentials to trigger refreshing prior to
+	// the credentials actually expiring. This is beneficial so race conditions
+	// with expiring credentials do not cause requests to fail unexpectedly
+	// due to ExpiredTokenException exceptions.
+	//
+	// An ExpiryWindow of 10s would cause calls to IsExpired() to return true
+	// 10 seconds before the credentials are actually expired. This can cause an
+	// increased number of requests to refresh the credentials to occur.
+	//
+	// If ExpiryWindow is 0 or less it will be ignored.
+	ExpiryWindow time.Duration
+
+	// ExpiryWindowJitterFrac provides a mechanism for randomizing the
+	// expiration of credentials within the configured ExpiryWindow by a random
+	// percentage. Valid values are between 0.0 and 1.0.
+	//
+	// As an example if ExpiryWindow is 60 seconds and ExpiryWindowJitterFrac
+	// is 0.5 then credentials will be set to expire between 30 to 60 seconds
+	// prior to their actual expiration time.
+	//
+	// If ExpiryWindow is 0 or less then ExpiryWindowJitterFrac is ignored.
+	// If ExpiryWindowJitterFrac is 0 then no randomization will be applied to the window.
+	// If ExpiryWindowJitterFrac < 0 the value will be treated as 0.
+	// If ExpiryWindowJitterFrac > 1 the value will be treated as 1.
+	ExpiryWindowJitterFrac float64
+}
+
+// CredentialsCache provides caching and concurrency safe credentials retrieval
+// via the provider's retrieve method.
+//
+// CredentialsCache will look for optional interfaces on the Provider to adjust
+// how the credential cache handles credentials caching.
+//
+//   - HandleFailRefreshCredentialsCacheStrategy - Allows provider to handle
+//     credential refresh failures. This could return an updated Credentials
+//     value, or attempt another means of retrieving credentials.
+//
+//   - AdjustExpiresByCredentialsCacheStrategy - Allows provider to adjust how
+//     credentials Expires is modified. This could modify how the Credentials
+//     Expires is adjusted based on the CredentialsCache ExpiryWindow option.
+//     Such as providing a floor not to reduce the Expires below.
+type CredentialsCache struct {
+	provider CredentialsProvider
+
+	options CredentialsCacheOptions
+	creds   atomic.Value
+	sf      singleflight.Group
+}
+
+// NewCredentialsCache returns a CredentialsCache that wraps provider. Provider
+// is expected to not be nil. A variadic list of one or more functions can be
+// provided to modify the CredentialsCache configuration. This allows for
+// configuration of credential expiry window and jitter.
+func NewCredentialsCache(provider CredentialsProvider, optFns ...func(options *CredentialsCacheOptions)) *CredentialsCache {
+	options := CredentialsCacheOptions{}
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	if options.ExpiryWindow < 0 {
+		options.ExpiryWindow = 0
+	}
+
+	if options.ExpiryWindowJitterFrac < 0 {
+		options.ExpiryWindowJitterFrac = 0
+	} else if options.ExpiryWindowJitterFrac > 1 {
+		options.ExpiryWindowJitterFrac = 1
+	}
+
+	return &CredentialsCache{
+		provider: provider,
+		options:  options,
+	}
+}
+
+// Retrieve returns the credentials. If the credentials have already been
+// retrieved and are not expired, the cached credentials will be returned. If
+// the credentials have not been retrieved yet, or have expired, the provider's
+// Retrieve method will be called.
+//
+// Returns an error if the provider's retrieve method returns an error.
+func (p *CredentialsCache) Retrieve(ctx context.Context) (Credentials, error) {
+	if creds, ok := p.getCreds(); ok && !creds.Expired() {
+		return creds, nil
+	}
+
+	resCh := p.sf.DoChan("", func() (interface{}, error) {
+		return p.singleRetrieve(&suppressedContext{ctx})
+	})
+	select {
+	case res := <-resCh:
+		return res.Val.(Credentials), res.Err
+	case <-ctx.Done():
+		return Credentials{}, &RequestCanceledError{Err: ctx.Err()}
+	}
+}
+
+func (p *CredentialsCache) singleRetrieve(ctx context.Context) (interface{}, error) {
+	currCreds, ok := p.getCreds()
+	if ok && !currCreds.Expired() {
+		return currCreds, nil
+	}
+
+	newCreds, err := p.provider.Retrieve(ctx)
+	if err != nil {
+		handleFailToRefresh := defaultHandleFailToRefresh
+		if cs, ok := p.provider.(HandleFailRefreshCredentialsCacheStrategy); ok {
+			handleFailToRefresh = cs.HandleFailToRefresh
+		}
+		newCreds, err = handleFailToRefresh(ctx, currCreds, err)
+		if err != nil {
+			return Credentials{}, fmt.Errorf("failed to refresh cached credentials, %w", err)
+		}
+	}
+
+	if newCreds.CanExpire && p.options.ExpiryWindow > 0 {
+		adjustExpiresBy := defaultAdjustExpiresBy
+		if cs, ok := p.provider.(AdjustExpiresByCredentialsCacheStrategy); ok {
+			adjustExpiresBy = cs.AdjustExpiresBy
+		}
+
+		randFloat64, err := sdkrand.CryptoRandFloat64()
+		if err != nil {
+			return Credentials{}, fmt.Errorf("failed to get random provider, %w", err)
+		}
+
+		var jitter time.Duration
+		if p.options.ExpiryWindowJitterFrac > 0 {
+			jitter = time.Duration(randFloat64 *
+				p.options.ExpiryWindowJitterFrac * float64(p.options.ExpiryWindow))
+		}
+
+		newCreds, err = adjustExpiresBy(newCreds, -(p.options.ExpiryWindow - jitter))
+		if err != nil {
+			return Credentials{}, fmt.Errorf("failed to adjust credentials expires, %w", err)
+		}
+	}
+
+	p.creds.Store(&newCreds)
+	return newCreds, nil
+}
+
+// getCreds returns the currently stored credentials and true, or false if no
+// credentials were stored.
+func (p *CredentialsCache) getCreds() (Credentials, bool) {
+	v := p.creds.Load()
+	if v == nil {
+		return Credentials{}, false
+	}
+
+	c := v.(*Credentials)
+	if c == nil || !c.HasKeys() {
+		return Credentials{}, false
+	}
+
+	return *c, true
+}
+
+// Invalidate will invalidate the cached credentials. The next call to Retrieve
+// will cause the provider's Retrieve method to be called.
+func (p *CredentialsCache) Invalidate() {
+	p.creds.Store((*Credentials)(nil))
+}
+
+// IsCredentialsProvider returns whether credential provider wrapped by CredentialsCache
+// matches the target provider type.
+func (p *CredentialsCache) IsCredentialsProvider(target CredentialsProvider) bool {
+	return IsCredentialsProvider(p.provider, target)
+}
+
+// HandleFailRefreshCredentialsCacheStrategy is an interface for
+// CredentialsCache to allow a CredentialsProvider to customize how a failed
+// credential refresh is handled.
+type HandleFailRefreshCredentialsCacheStrategy interface {
+	// Given the previously cached Credentials, if any, and refresh error, may
+	// return a new or modified set of Credentials, or an error.
+	//
+	// Credential caches may use default implementation if nil.
+	HandleFailToRefresh(context.Context, Credentials, error) (Credentials, error)
+}
+
+// defaultHandleFailToRefresh returns the passed in error.
+func defaultHandleFailToRefresh(ctx context.Context, _ Credentials, err error) (Credentials, error) {
+	return Credentials{}, err
+}
+
+// AdjustExpiresByCredentialsCacheStrategy is an interface for CredentialCache
+// to allow CredentialsProvider to intercept adjustments to Credentials expiry
+// based on expectations and use cases of CredentialsProvider.
+//
+// Credential caches may use default implementation if nil.
+type AdjustExpiresByCredentialsCacheStrategy interface {
+	// Given a Credentials as input, applying any mutations and
+	// returning the potentially updated Credentials, or error.
+	AdjustExpiresBy(Credentials, time.Duration) (Credentials, error)
+}
+
+// defaultAdjustExpiresBy adds the duration to the passed in credentials Expires,
+// and returns the updated credentials value. If Credentials value's CanExpire
+// is false, the passed in credentials are returned unchanged.
+func defaultAdjustExpiresBy(creds Credentials, dur time.Duration) (Credentials, error) {
+	if !creds.CanExpire {
+		return creds, nil
+	}
+
+	creds.Expires = creds.Expires.Add(dur)
+	return creds, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/credentials.go

@@ -0,0 +1,173 @@
+package aws
+
+import (
+	"context"
+	"fmt"
+	"reflect"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+)
+
+// AnonymousCredentials provides a sentinel CredentialsProvider that should be
+// used to instruct the SDK's signing middleware to not sign the request.
+//
+// Using `nil` credentials when configuring an API client will achieve the same
+// result. The AnonymousCredentials type allows you to configure the SDK's
+// external config loading to not attempt to source credentials from the shared
+// config or environment.
+//
+// For example you can use this CredentialsProvider with an API client's
+// Options to instruct the client not to sign a request for accessing public
+// S3 bucket objects.
+//
+// The following example demonstrates using AnonymousCredentials to prevent
+// the SDK's external config loading from attempting to resolve credentials.
+//
+//	cfg, err := config.LoadDefaultConfig(context.TODO(),
+//	     config.WithCredentialsProvider(aws.AnonymousCredentials{}),
+//	)
+//	if err != nil {
+//	     log.Fatalf("failed to load config, %v", err)
+//	}
+//
+//	client := s3.NewFromConfig(cfg)
+//
+// Alternatively you can leave the API client Option's `Credential` member to
+// nil. If using the `NewFromConfig` constructor you'll need to explicitly set
+// the `Credentials` member to nil, if the external config resolved a
+// credential provider.
+//
+//	client := s3.New(s3.Options{
+//	     // Credentials defaults to a nil value.
+//	})
+//
+// This can also be configured for specific operation calls.
+//
+//	cfg, err := config.LoadDefaultConfig(context.TODO())
+//	if err != nil {
+//	     log.Fatalf("failed to load config, %v", err)
+//	}
+//
+//	client := s3.NewFromConfig(cfg)
+//
+//	result, err := client.GetObject(context.TODO(), s3.GetObject{
+//	     Bucket: aws.String("example-bucket"),
+//	     Key: aws.String("example-key"),
+//	}, func(o *s3.Options) {
+//	     o.Credentials = nil
+//	     // Or
+//	     o.Credentials = aws.AnonymousCredentials{}
+//	})
+type AnonymousCredentials struct{}
+
+// Retrieve implements the CredentialsProvider interface, but will always
+// return an error, and cannot be used to sign a request. The AnonymousCredentials
+// type is used as a sentinel type instructing the AWS request signing
+// middleware to not sign a request.
+func (AnonymousCredentials) Retrieve(context.Context) (Credentials, error) {
+	return Credentials{Source: "AnonymousCredentials"},
+		fmt.Errorf("the AnonymousCredentials is not a valid credential provider, and cannot be used to sign AWS requests with")
+}
+
+// A Credentials is the AWS credentials value for individual credential fields.
+type Credentials struct {
+	// AWS Access key ID
+	AccessKeyID string
+
+	// AWS Secret Access Key
+	SecretAccessKey string
+
+	// AWS Session Token
+	SessionToken string
+
+	// Source of the credentials
+	Source string
+
+	// States if the credentials can expire or not.
+	CanExpire bool
+
+	// The time the credentials will expire at. Should be ignored if CanExpire
+	// is false.
+	Expires time.Time
+
+	// The ID of the account for the credentials.
+	AccountID string
+}
+
+// Expired returns if the credentials have expired.
+func (v Credentials) Expired() bool {
+	if v.CanExpire {
+		// Calling Round(0) on the current time will truncate the monotonic
+		// reading only. Ensures credential expiry time is always based on
+		// reported wall-clock time.
+		return !v.Expires.After(sdk.NowTime().Round(0))
+	}
+
+	return false
+}
+
+// HasKeys returns if the credentials keys are set.
+func (v Credentials) HasKeys() bool {
+	return len(v.AccessKeyID) > 0 && len(v.SecretAccessKey) > 0
+}
+
+// A CredentialsProvider is the interface for any component which will provide
+// Credentials. A CredentialsProvider is required to manage its own Expired
+// state, and what being expired means.
+//
+// A credentials provider implementation can be wrapped with a CredentialCache
+// to cache the credential value retrieved. Without the cache the SDK will
+// attempt to retrieve the credentials for every request.
+type CredentialsProvider interface {
+	// Retrieve returns nil if it successfully retrieved the value.
+	// An error is returned if the value was not obtainable, or was empty.
+	Retrieve(ctx context.Context) (Credentials, error)
+}
+
+// CredentialsProviderFunc provides a helper wrapping a function value to
+// satisfy the CredentialsProvider interface.
+type CredentialsProviderFunc func(context.Context) (Credentials, error)
+
+// Retrieve delegates to the function value the CredentialsProviderFunc wraps.
+func (fn CredentialsProviderFunc) Retrieve(ctx context.Context) (Credentials, error) {
+	return fn(ctx)
+}
+
+type isCredentialsProvider interface {
+	IsCredentialsProvider(CredentialsProvider) bool
+}
+
+// IsCredentialsProvider returns whether the target CredentialProvider is the same type as provider when comparing the
+// implementation type.
+//
+// If provider has a method IsCredentialsProvider(CredentialsProvider) bool it will be responsible for validating
+// whether target matches the credential provider type.
+//
+// When comparing the CredentialProvider implementations provider and target for equality, the following rules are used:
+//
+//	If provider is of type T and target is of type V, true if type *T is the same as type *V, otherwise false
+//	If provider is of type *T and target is of type V, true if type *T is the same as type *V, otherwise false
+//	If provider is of type T and target is of type *V, true if type *T is the same as type *V, otherwise false
+//	If provider is of type *T and target is of type *V, true if type *T is the same as type *V, otherwise false
+func IsCredentialsProvider(provider, target CredentialsProvider) bool {
+	if target == nil || provider == nil {
+		return provider == target
+	}
+
+	if x, ok := provider.(isCredentialsProvider); ok {
+		return x.IsCredentialsProvider(target)
+	}
+
+	targetType := reflect.TypeOf(target)
+	if targetType.Kind() != reflect.Ptr {
+		targetType = reflect.PtrTo(targetType)
+	}
+
+	providerType := reflect.TypeOf(provider)
+	if providerType.Kind() != reflect.Ptr {
+		providerType = reflect.PtrTo(providerType)
+	}
+
+	return targetType.AssignableTo(providerType)
+}
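
The comparison rules above boil down to normalizing both sides to pointer types before comparing, so a value and a pointer to the same implementation type match. A standalone sketch of that normalization (sameImplType and the provider types are illustrative, not SDK names):

```go
package main

import (
	"fmt"
	"reflect"
)

// sameImplType mirrors IsCredentialsProvider's fallback comparison: both
// dynamic types are promoted to their pointer types, so T vs *T, *T vs T,
// and T vs T all compare equal, while distinct types do not.
func sameImplType(a, b interface{}) bool {
	if a == nil || b == nil {
		return a == b
	}
	ta, tb := reflect.TypeOf(a), reflect.TypeOf(b)
	if ta.Kind() != reflect.Ptr {
		ta = reflect.PtrTo(ta)
	}
	if tb.Kind() != reflect.Ptr {
		tb = reflect.PtrTo(tb)
	}
	return tb.AssignableTo(ta)
}

type staticProvider struct{}
type envProvider struct{}

func main() {
	fmt.Println(sameImplType(staticProvider{}, &staticProvider{})) // true
	fmt.Println(sameImplType(staticProvider{}, envProvider{}))     // false
}
```

The isCredentialsProvider escape hatch lets wrapper types like CredentialsCache delegate this check to the provider they wrap instead of comparing the wrapper's own type.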

vendor/github.com/aws/aws-sdk-go-v2/aws/defaults/auto.go

@@ -0,0 +1,38 @@
+package defaults
+
+import (
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"runtime"
+	"strings"
+)
+
+var getGOOS = func() string {
+	return runtime.GOOS
+}
+
+// ResolveDefaultsModeAuto is used to determine the effective aws.DefaultsMode when the mode
+// is set to aws.DefaultsModeAuto.
+func ResolveDefaultsModeAuto(region string, environment aws.RuntimeEnvironment) aws.DefaultsMode {
+	goos := getGOOS()
+	if goos == "android" || goos == "ios" {
+		return aws.DefaultsModeMobile
+	}
+
+	var currentRegion string
+	if len(environment.EnvironmentIdentifier) > 0 {
+		currentRegion = environment.Region
+	}
+
+	if len(currentRegion) == 0 && len(environment.EC2InstanceMetadataRegion) > 0 {
+		currentRegion = environment.EC2InstanceMetadataRegion
+	}
+
+	if len(region) > 0 && len(currentRegion) > 0 {
+		if strings.EqualFold(region, currentRegion) {
+			return aws.DefaultsModeInRegion
+		}
+		return aws.DefaultsModeCrossRegion
+	}
+
+	return aws.DefaultsModeStandard
+}
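
ResolveDefaultsModeAuto's decision order is: mobile platforms first, then in-region vs cross-region when both a configured and a detected region are known, otherwise standard. That ordering can be sketched standalone; resolveMode is an illustrative simplification (it collapses the two detection sources into one detectedRegion parameter) and the strings are the SDK's lowercase defaults-mode names:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveMode mirrors ResolveDefaultsModeAuto's branch order: mobile GOOS
// values win outright; otherwise a case-insensitive region comparison picks
// in-region or cross-region; with either region unknown, standard applies.
func resolveMode(goos, configuredRegion, detectedRegion string) string {
	if goos == "android" || goos == "ios" {
		return "mobile"
	}
	if configuredRegion != "" && detectedRegion != "" {
		if strings.EqualFold(configuredRegion, detectedRegion) {
			return "in-region"
		}
		return "cross-region"
	}
	return "standard"
}

func main() {
	fmt.Println(resolveMode("linux", "us-east-1", "US-EAST-1")) // in-region
	fmt.Println(resolveMode("linux", "us-east-1", "eu-west-1")) // cross-region
	fmt.Println(resolveMode("linux", "us-east-1", ""))          // standard
}
```

Note the EqualFold comparison: region casing differences between configuration sources do not force a client into cross-region defaults.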

vendor/github.com/aws/aws-sdk-go-v2/aws/defaults/configuration.go

@@ -0,0 +1,43 @@
+package defaults
+
+import (
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+)
+
+// Configuration is the set of SDK configuration options that are determined based
+// on the configured DefaultsMode.
+type Configuration struct {
+	// RetryMode is the configuration's default retry mode API clients should
+	// use for constructing a Retryer.
+	RetryMode aws.RetryMode
+
+	// ConnectTimeout is the maximum amount of time a dial will wait for
+	// a connect to complete.
+	//
+	// See https://pkg.go.dev/net#Dialer.Timeout
+	ConnectTimeout *time.Duration
+
+	// TLSNegotiationTimeout specifies the maximum amount of time to wait
+	// for a TLS handshake.
+	//
+	// See https://pkg.go.dev/net/http#Transport.TLSHandshakeTimeout
+	TLSNegotiationTimeout *time.Duration
+}
+
+// GetConnectTimeout returns the ConnectTimeout value; the second return is false if the value is not set.
+func (c *Configuration) GetConnectTimeout() (time.Duration, bool) {
+	if c.ConnectTimeout == nil {
+		return 0, false
+	}
+	return *c.ConnectTimeout, true
+}
+
+// GetTLSNegotiationTimeout returns the TLSNegotiationTimeout value; the second return is false if the value is not set.
+func (c *Configuration) GetTLSNegotiationTimeout() (time.Duration, bool) {
+	if c.TLSNegotiationTimeout == nil {
+		return 0, false
+	}
+	return *c.TLSNegotiationTimeout, true
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/defaults/defaults.go

@@ -0,0 +1,50 @@
+// Code generated by github.com/aws/aws-sdk-go-v2/internal/codegen/cmd/defaultsconfig. DO NOT EDIT.
+
+package defaults
+
+import (
+	"fmt"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"time"
+)
+
+// GetModeConfiguration returns the default Configuration descriptor for the given mode.
+//
+// Supports the following modes: cross-region, in-region, mobile, standard
+func GetModeConfiguration(mode aws.DefaultsMode) (Configuration, error) {
+	var mv aws.DefaultsMode
+	mv.SetFromString(string(mode))
+
+	switch mv {
+	case aws.DefaultsModeCrossRegion:
+		settings := Configuration{
+			ConnectTimeout:        aws.Duration(3100 * time.Millisecond),
+			RetryMode:             aws.RetryMode("standard"),
+			TLSNegotiationTimeout: aws.Duration(3100 * time.Millisecond),
+		}
+		return settings, nil
+	case aws.DefaultsModeInRegion:
+		settings := Configuration{
+			ConnectTimeout:        aws.Duration(1100 * time.Millisecond),
+			RetryMode:             aws.RetryMode("standard"),
+			TLSNegotiationTimeout: aws.Duration(1100 * time.Millisecond),
+		}
+		return settings, nil
+	case aws.DefaultsModeMobile:
+		settings := Configuration{
+			ConnectTimeout:        aws.Duration(30000 * time.Millisecond),
+			RetryMode:             aws.RetryMode("standard"),
+			TLSNegotiationTimeout: aws.Duration(30000 * time.Millisecond),
+		}
+		return settings, nil
+	case aws.DefaultsModeStandard:
+		settings := Configuration{
+			ConnectTimeout:        aws.Duration(3100 * time.Millisecond),
+			RetryMode:             aws.RetryMode("standard"),
+			TLSNegotiationTimeout: aws.Duration(3100 * time.Millisecond),
+		}
+		return settings, nil
+	default:
+		return Configuration{}, fmt.Errorf("unsupported defaults mode: %v", mode)
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/defaultsmode.go

@@ -0,0 +1,95 @@
+// Code generated by github.com/aws/aws-sdk-go-v2/internal/codegen/cmd/defaultsmode. DO NOT EDIT.
+
+package aws
+
+import (
+	"strings"
+)
+
+// DefaultsMode is the SDK defaults mode setting.
+type DefaultsMode string
+
+// The DefaultsMode constants.
+const (
+	// DefaultsModeAuto is an experimental mode that builds on the standard mode.
+	// The SDK will attempt to discover the execution environment to determine the
+	// appropriate settings automatically.
+	//
+	// Note that the auto detection is heuristics-based and does not guarantee 100%
+	// accuracy. STANDARD mode will be used if the execution environment cannot
+	// be determined. The auto detection might query EC2 Instance Metadata service
+	// (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html),
+	// which might introduce latency. Therefore we recommend choosing an explicit
+	// defaults_mode instead if startup latency is critical to your application
+	DefaultsModeAuto DefaultsMode = "auto"
+
+	// DefaultsModeCrossRegion builds on the standard mode and includes optimization
+	// tailored for applications which call AWS services in a different region
+	//
+	// Note that the default values vended from this mode might change as best practices
+	// may evolve. As a result, it is encouraged to perform tests when upgrading
+	// the SDK
+	DefaultsModeCrossRegion DefaultsMode = "cross-region"
+
+	// DefaultsModeInRegion builds on the standard mode and includes optimization
+	// tailored for applications which call AWS services from within the same AWS
+	// region
+	//
+	// Note that the default values vended from this mode might change as best practices
+	// may evolve. As a result, it is encouraged to perform tests when upgrading
+	// the SDK
+	DefaultsModeInRegion DefaultsMode = "in-region"
+
+	// DefaultsModeLegacy provides default settings that vary per SDK and were used
+	// prior to establishment of defaults_mode
+	DefaultsModeLegacy DefaultsMode = "legacy"
+
+	// DefaultsModeMobile builds on the standard mode and includes optimization
+	// tailored for mobile applications
+	//
+	// Note that the default values vended from this mode might change as best practices
+	// may evolve. As a result, it is encouraged to perform tests when upgrading
+	// the SDK
+	DefaultsModeMobile DefaultsMode = "mobile"
+
+	// DefaultsModeStandard provides the latest recommended default values that
+	// should be safe to run in most scenarios
+	//
+	// Note that the default values vended from this mode might change as best practices
+	// may evolve. As a result, it is encouraged to perform tests when upgrading
+	// the SDK
+	DefaultsModeStandard DefaultsMode = "standard"
+)
+
+// SetFromString sets the DefaultsMode value to one of the pre-defined constants that matches
+// the provided string when compared using EqualFold. If the value does not match a known
+// constant it will be set as-is and the function will return false. As a special case, if the
+// provided value is a zero-length string, the mode will be set to DefaultsModeLegacy.
+func (d *DefaultsMode) SetFromString(v string) (ok bool) {
+	switch {
+	case strings.EqualFold(v, string(DefaultsModeAuto)):
+		*d = DefaultsModeAuto
+		ok = true
+	case strings.EqualFold(v, string(DefaultsModeCrossRegion)):
+		*d = DefaultsModeCrossRegion
+		ok = true
+	case strings.EqualFold(v, string(DefaultsModeInRegion)):
+		*d = DefaultsModeInRegion
+		ok = true
+	case strings.EqualFold(v, string(DefaultsModeLegacy)):
+		*d = DefaultsModeLegacy
+		ok = true
+	case strings.EqualFold(v, string(DefaultsModeMobile)):
+		*d = DefaultsModeMobile
+		ok = true
+	case strings.EqualFold(v, string(DefaultsModeStandard)):
+		*d = DefaultsModeStandard
+		ok = true
+	case len(v) == 0:
+		*d = DefaultsModeLegacy
+		ok = true
+	default:
+		*d = DefaultsMode(v)
+	}
+	return ok
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/doc.go

@@ -0,0 +1,62 @@
+// Package aws provides the core SDK's utilities and shared types. Use this package's
+// utilities to simplify setting and reading API operations parameters.
+//
+// # Value and Pointer Conversion Utilities
+//
+// This package includes a helper conversion utility for each scalar type the SDK's
+// API use. These utilities make getting a pointer of the scalar, and dereferencing
+// a pointer easier.
+//
+// Each conversion utility comes in two forms. Value to Pointer and Pointer to Value.
+// The Pointer to Value form will safely dereference the pointer and return its value.
+// If the pointer was nil, the scalar's zero value will be returned.
+//
+// The value to pointer functions will be named after the scalar type. So to get
+// a *string from a string value use the "String" function. This makes it easy
+// to get a pointer to a literal string value, because getting the address of a
+// literal requires assigning the value to a variable first.
+//
+//	var strPtr *string
+//
+//	// Without the SDK's conversion functions
+//	str := "my string"
+//	strPtr = &str
+//
+//	// With the SDK's conversion functions
+//	strPtr = aws.String("my string")
+//
+//	// Convert *string to string value
+//	str = aws.ToString(strPtr)
+//
+// In addition to scalars, the aws package also includes conversion utilities
+// for maps and slices of types commonly used in API parameters. The map and
+// slice conversion functions use a similar naming pattern to the scalar
+// conversion functions.
+//
+//	var strPtrs []*string
+//	var strs []string = []string{"Go", "Gophers", "Go"}
+//
+//	// Convert []string to []*string
+//	strPtrs = aws.StringSlice(strs)
+//
+//	// Convert []*string to []string
+//	strs = aws.ToStringSlice(strPtrs)
+//
+// # SDK Default HTTP Client
+//
+// The SDK will use the http.DefaultClient if an HTTP client is not provided to
+// the SDK's Session, or service client constructor. This means that if the
+// http.DefaultClient is modified by other components of your application the
+// modifications will be picked up by the SDK as well.
+//
+// In some cases this might be intended, but it is a better practice to create
+// a custom HTTP Client to share explicitly through your application. You can
+// configure the SDK to use the custom HTTP Client by setting the HTTPClient
+// value of the SDK's Config type when creating a Session or service client.
+package aws
+
+// generate.go uses a build tag of "ignore", go run doesn't need to specify
+// this because go run ignores all build flags when running a go file directly.
+//go:generate go run -tags codegen generate.go
+//go:generate go run -tags codegen logging_generate.go
+//go:generate gofmt -w -s .

vendor/github.com/aws/aws-sdk-go-v2/aws/endpoints.go

@@ -0,0 +1,247 @@
+package aws
+
+import (
+	"fmt"
+)
+
+// DualStackEndpointState is a constant to describe the dual-stack endpoint resolution behavior.
+type DualStackEndpointState uint
+
+const (
+	// DualStackEndpointStateUnset is the default value behavior for dual-stack endpoint resolution.
+	DualStackEndpointStateUnset DualStackEndpointState = iota
+
+	// DualStackEndpointStateEnabled enables dual-stack endpoint resolution for service endpoints.
+	DualStackEndpointStateEnabled
+
+	// DualStackEndpointStateDisabled disables dual-stack endpoint resolution for endpoints.
+	DualStackEndpointStateDisabled
+)
+
+// GetUseDualStackEndpoint takes a service's EndpointResolverOptions and returns the UseDualStackEndpoint value.
+// Returns boolean false if the provided options do not have a method to retrieve the DualStackEndpointState.
+func GetUseDualStackEndpoint(options ...interface{}) (value DualStackEndpointState, found bool) {
+	type iface interface {
+		GetUseDualStackEndpoint() DualStackEndpointState
+	}
+	for _, option := range options {
+		if i, ok := option.(iface); ok {
+			value = i.GetUseDualStackEndpoint()
+			found = true
+			break
+		}
+	}
+	return value, found
+}
+
+// FIPSEndpointState is a constant to describe the FIPS endpoint resolution behavior.
+type FIPSEndpointState uint
+
+const (
+	// FIPSEndpointStateUnset is the default value behavior for FIPS endpoint resolution.
+	FIPSEndpointStateUnset FIPSEndpointState = iota
+
+	// FIPSEndpointStateEnabled enables FIPS endpoint resolution for service endpoints.
+	FIPSEndpointStateEnabled
+
+	// FIPSEndpointStateDisabled disables FIPS endpoint resolution for endpoints.
+	FIPSEndpointStateDisabled
+)
+
+// GetUseFIPSEndpoint takes a service's EndpointResolverOptions and returns the UseFIPSEndpoint value.
+// Returns boolean false if the provided options do not have a method to retrieve the FIPSEndpointState.
+func GetUseFIPSEndpoint(options ...interface{}) (value FIPSEndpointState, found bool) {
+	type iface interface {
+		GetUseFIPSEndpoint() FIPSEndpointState
+	}
+	for _, option := range options {
+		if i, ok := option.(iface); ok {
+			value = i.GetUseFIPSEndpoint()
+			found = true
+			break
+		}
+	}
+	return value, found
+}
+
+// Endpoint represents the endpoint a service client should make API operation
+// calls to.
+//
+// The SDK will automatically resolve these endpoints per API client using an
+// internal endpoint resolver. If you'd like to provide custom endpoint
+// resolving behavior you can implement the EndpointResolver interface.
+//
+// Deprecated: This structure was used with the global [EndpointResolver]
+// interface, which has been deprecated in favor of service-specific endpoint
+// resolution. See the deprecation docs on that interface for more information.
+type Endpoint struct {
+	// The base URL endpoint the SDK API clients will use to make API calls to.
+	// The SDK will suffix URI path and query elements to this endpoint.
+	URL string
+
+	// Specifies if the endpoint's hostname can be modified by the SDK's API
+	// client.
+	//
+	// If the hostname is mutable the SDK API clients may modify any part of
+	// the hostname based on the requirements of the API (e.g. adding or
+	// removing content from the hostname), such as the Amazon S3 API client
+	// prefixing "bucketname" to the hostname, or changing the
+	// hostname service name component from "s3." to "s3-accesspoint.dualstack."
+	// for the dualstack endpoint of an S3 Accesspoint resource.
+	//
+	// Care should be taken when providing a custom endpoint for an API. If the
+	// endpoint hostname is mutable, and the client cannot modify the endpoint
+	// correctly, the operation call will most likely fail, or have undefined
+	// behavior.
+	//
+	// If hostname is immutable, the SDK API clients will not modify the
+	// hostname of the URL. This may cause the API client not to function
+	// correctly if the API requires the operation specific hostname values
+	// to be used by the client.
+	//
+	// This flag does not modify the API client's behavior if this endpoint
+	// will be used instead of Endpoint Discovery, or if the endpoint will be
+	// used to perform Endpoint Discovery. That behavior is configured via the
+	// API Client's Options.
+	HostnameImmutable bool
+
+	// The AWS partition the endpoint belongs to.
+	PartitionID string
+
+	// The service name that should be used for signing the requests to the
+	// endpoint.
+	SigningName string
+
+	// The region that should be used for signing the request to the endpoint.
+	SigningRegion string
+
+	// The signing method that should be used for signing the requests to the
+	// endpoint.
+	SigningMethod string
+
+	// The source of the Endpoint. By default, this will be EndpointSourceServiceMetadata.
+	// When providing a custom endpoint, you should set the source as EndpointSourceCustom.
+	// If source is not provided when providing a custom endpoint, the SDK may not
+	// perform required host mutations correctly. Source should be used along with
+	// HostnameImmutable property as per the usage requirement.
+	Source EndpointSource
+}
+
+// EndpointSource is the endpoint source type.
+//
+// Deprecated: The global [Endpoint] structure is deprecated.
+type EndpointSource int
+
+const (
+	// EndpointSourceServiceMetadata denotes service modeled endpoint metadata is used as Endpoint Source.
+	EndpointSourceServiceMetadata EndpointSource = iota
+
+	// EndpointSourceCustom denotes the endpoint is a custom endpoint. This source should be used when
+	// the user provides a custom endpoint to be used by the SDK.
+	EndpointSourceCustom
+)
+
+// EndpointNotFoundError is a sentinel error to indicate that the
+// EndpointResolver implementation was unable to resolve an endpoint for the
+// given service and region. Resolvers should use this to indicate that an API
+// client should fall back and attempt to use its internal default resolver to
+// resolve the endpoint.
+type EndpointNotFoundError struct {
+	Err error
+}
+
+// Error is the error message.
+func (e *EndpointNotFoundError) Error() string {
+	return fmt.Sprintf("endpoint not found, %v", e.Err)
+}
+
+// Unwrap returns the underlying error.
+func (e *EndpointNotFoundError) Unwrap() error {
+	return e.Err
+}
+
+// EndpointResolver is an endpoint resolver that can be used to provide or
+// override an endpoint for the given service and region. API clients will
+// attempt to use the EndpointResolver first to resolve an endpoint if
+// available. If the EndpointResolver returns an EndpointNotFoundError error,
+// API clients will fall back to attempting to resolve the endpoint using its
+// internal default endpoint resolver.
+//
+// Deprecated: The global endpoint resolution interface is deprecated. The API
+// for endpoint resolution is now unique to each service and is set via the
+// EndpointResolverV2 field on service client options. Setting a value for
+// EndpointResolver on aws.Config or service client options will prevent you
+// from using any endpoint-related service features released after the
+// introduction of EndpointResolverV2. You may also encounter broken or
+// unexpected behavior when using the old global interface with services that
+// use many endpoint-related customizations such as S3.
+type EndpointResolver interface {
+	ResolveEndpoint(service, region string) (Endpoint, error)
+}
+
+// EndpointResolverFunc wraps a function to satisfy the EndpointResolver interface.
+//
+// Deprecated: The global endpoint resolution interface is deprecated. See
+// deprecation docs on [EndpointResolver].
+type EndpointResolverFunc func(service, region string) (Endpoint, error)
+
+// ResolveEndpoint calls the wrapped function and returns the results.
+func (e EndpointResolverFunc) ResolveEndpoint(service, region string) (Endpoint, error) {
+	return e(service, region)
+}
+
+// EndpointResolverWithOptions is an endpoint resolver that can be used to provide or
+// override an endpoint for the given service, region, and the service client's EndpointOptions. API clients will
+// attempt to use the EndpointResolverWithOptions first to resolve an endpoint if
+// available. If the EndpointResolverWithOptions returns an EndpointNotFoundError error,
+// API clients will fall back to attempting to resolve the endpoint using its
+// internal default endpoint resolver.
+//
+// Deprecated: The global endpoint resolution interface is deprecated. See
+// deprecation docs on [EndpointResolver].
+type EndpointResolverWithOptions interface {
+	ResolveEndpoint(service, region string, options ...interface{}) (Endpoint, error)
+}
+
+// EndpointResolverWithOptionsFunc wraps a function to satisfy the EndpointResolverWithOptions interface.
+//
+// Deprecated: The global endpoint resolution interface is deprecated. See
+// deprecation docs on [EndpointResolver].
+type EndpointResolverWithOptionsFunc func(service, region string, options ...interface{}) (Endpoint, error)
+
+// ResolveEndpoint calls the wrapped function and returns the results.
+func (e EndpointResolverWithOptionsFunc) ResolveEndpoint(service, region string, options ...interface{}) (Endpoint, error) {
+	return e(service, region, options...)
+}
+
+// GetDisableHTTPS takes a service's EndpointResolverOptions and returns the DisableHTTPS value.
+// Returns boolean false if the provided options do not have a method to retrieve the DisableHTTPS.
+func GetDisableHTTPS(options ...interface{}) (value bool, found bool) {
+	type iface interface {
+		GetDisableHTTPS() bool
+	}
+	for _, option := range options {
+		if i, ok := option.(iface); ok {
+			value = i.GetDisableHTTPS()
+			found = true
+			break
+		}
+	}
+	return value, found
+}
+
+// GetResolvedRegion takes a service's EndpointResolverOptions and returns the ResolvedRegion value.
+// Returns boolean false if the provided options do not have a method to retrieve the ResolvedRegion.
+func GetResolvedRegion(options ...interface{}) (value string, found bool) {
+	type iface interface {
+		GetResolvedRegion() string
+	}
+	for _, option := range options {
+		if i, ok := option.(iface); ok {
+			value = i.GetResolvedRegion()
+			found = true
+			break
+		}
+	}
+	return value, found
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/errors.go

@@ -0,0 +1,9 @@
+package aws
+
+// MissingRegionError is an error that is returned if region configuration
+// value was not found.
+type MissingRegionError struct{}
+
+func (*MissingRegionError) Error() string {
+	return "an AWS region is required, but was not found"
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/from_ptr.go

@@ -0,0 +1,365 @@
+// Code generated by aws/generate.go DO NOT EDIT.
+
+package aws
+
+import (
+	"github.com/aws/smithy-go/ptr"
+	"time"
+)
+
+// ToBool returns bool value dereferenced if the passed
+// in pointer was not nil. Returns a bool zero value if the
+// pointer was nil.
+func ToBool(p *bool) (v bool) {
+	return ptr.ToBool(p)
+}
+
+// ToBoolSlice returns a slice of bool values, that are
+// dereferenced if the passed in pointer was not nil. Returns a bool
+// zero value if the pointer was nil.
+func ToBoolSlice(vs []*bool) []bool {
+	return ptr.ToBoolSlice(vs)
+}
+
+// ToBoolMap returns a map of bool values, that are
+// dereferenced if the passed in pointer was not nil. The bool
+// zero value is used if the pointer was nil.
+func ToBoolMap(vs map[string]*bool) map[string]bool {
+	return ptr.ToBoolMap(vs)
+}
+
+// ToByte returns byte value dereferenced if the passed
+// in pointer was not nil. Returns a byte zero value if the
+// pointer was nil.
+func ToByte(p *byte) (v byte) {
+	return ptr.ToByte(p)
+}
+
+// ToByteSlice returns a slice of byte values, that are
+// dereferenced if the passed in pointer was not nil. Returns a byte
+// zero value if the pointer was nil.
+func ToByteSlice(vs []*byte) []byte {
+	return ptr.ToByteSlice(vs)
+}
+
+// ToByteMap returns a map of byte values, that are
+// dereferenced if the passed in pointer was not nil. The byte
+// zero value is used if the pointer was nil.
+func ToByteMap(vs map[string]*byte) map[string]byte {
+	return ptr.ToByteMap(vs)
+}
+
+// ToString returns string value dereferenced if the passed
+// in pointer was not nil. Returns a string zero value if the
+// pointer was nil.
+func ToString(p *string) (v string) {
+	return ptr.ToString(p)
+}
+
+// ToStringSlice returns a slice of string values, that are
+// dereferenced if the passed in pointer was not nil. Returns a string
+// zero value if the pointer was nil.
+func ToStringSlice(vs []*string) []string {
+	return ptr.ToStringSlice(vs)
+}
+
+// ToStringMap returns a map of string values, that are
+// dereferenced if the passed in pointer was not nil. The string
+// zero value is used if the pointer was nil.
+func ToStringMap(vs map[string]*string) map[string]string {
+	return ptr.ToStringMap(vs)
+}
+
+// ToInt returns int value dereferenced if the passed
+// in pointer was not nil. Returns a int zero value if the
+// pointer was nil.
+func ToInt(p *int) (v int) {
+	return ptr.ToInt(p)
+}
+
+// ToIntSlice returns a slice of int values, that are
+// dereferenced if the passed in pointer was not nil. Returns a int
+// zero value if the pointer was nil.
+func ToIntSlice(vs []*int) []int {
+	return ptr.ToIntSlice(vs)
+}
+
+// ToIntMap returns a map of int values, that are
+// dereferenced if the passed in pointer was not nil. The int
+// zero value is used if the pointer was nil.
+func ToIntMap(vs map[string]*int) map[string]int {
+	return ptr.ToIntMap(vs)
+}
+
+// ToInt8 returns int8 value dereferenced if the passed
+// in pointer was not nil. Returns a int8 zero value if the
+// pointer was nil.
+func ToInt8(p *int8) (v int8) {
+	return ptr.ToInt8(p)
+}
+
+// ToInt8Slice returns a slice of int8 values, that are
+// dereferenced if the passed in pointer was not nil. Returns a int8
+// zero value if the pointer was nil.
+func ToInt8Slice(vs []*int8) []int8 {
+	return ptr.ToInt8Slice(vs)
+}
+
+// ToInt8Map returns a map of int8 values, that are
+// dereferenced if the passed in pointer was not nil. The int8
+// zero value is used if the pointer was nil.
+func ToInt8Map(vs map[string]*int8) map[string]int8 {
+	return ptr.ToInt8Map(vs)
+}
+
+// ToInt16 returns int16 value dereferenced if the passed
+// in pointer was not nil. Returns a int16 zero value if the
+// pointer was nil.
+func ToInt16(p *int16) (v int16) {
+	return ptr.ToInt16(p)
+}
+
+// ToInt16Slice returns a slice of int16 values, that are
+// dereferenced if the passed in pointer was not nil. Returns a int16
+// zero value if the pointer was nil.
+func ToInt16Slice(vs []*int16) []int16 {
+	return ptr.ToInt16Slice(vs)
+}
+
+// ToInt16Map returns a map of int16 values, that are
+// dereferenced if the passed in pointer was not nil. The int16
+// zero value is used if the pointer was nil.
+func ToInt16Map(vs map[string]*int16) map[string]int16 {
+	return ptr.ToInt16Map(vs)
+}
+
+// ToInt32 returns int32 value dereferenced if the passed
+// in pointer was not nil. Returns a int32 zero value if the
+// pointer was nil.
+func ToInt32(p *int32) (v int32) {
+	return ptr.ToInt32(p)
+}
+
+// ToInt32Slice returns a slice of int32 values, that are
+// dereferenced if the passed in pointer was not nil. Returns a int32
+// zero value if the pointer was nil.
+func ToInt32Slice(vs []*int32) []int32 {
+	return ptr.ToInt32Slice(vs)
+}
+
+// ToInt32Map returns a map of int32 values, that are
+// dereferenced if the passed in pointer was not nil. The int32
+// zero value is used if the pointer was nil.
+func ToInt32Map(vs map[string]*int32) map[string]int32 {
+	return ptr.ToInt32Map(vs)
+}
+
+// ToInt64 returns int64 value dereferenced if the passed
+// in pointer was not nil. Returns a int64 zero value if the
+// pointer was nil.
+func ToInt64(p *int64) (v int64) {
+	return ptr.ToInt64(p)
+}
+
+// ToInt64Slice returns a slice of int64 values, that are
+// dereferenced if the passed in pointer was not nil. Returns a int64
+// zero value if the pointer was nil.
+func ToInt64Slice(vs []*int64) []int64 {
+	return ptr.ToInt64Slice(vs)
+}
+
+// ToInt64Map returns a map of int64 values, that are
+// dereferenced if the passed in pointer was not nil. The int64
+// zero value is used if the pointer was nil.
+func ToInt64Map(vs map[string]*int64) map[string]int64 {
+	return ptr.ToInt64Map(vs)
+}
+
+// ToUint returns uint value dereferenced if the passed
+// in pointer was not nil. Returns a uint zero value if the
+// pointer was nil.
+func ToUint(p *uint) (v uint) {
+	return ptr.ToUint(p)
+}
+
+// ToUintSlice returns a slice of uint values, that are
+// dereferenced if the passed in pointer was not nil. Returns a uint
+// zero value if the pointer was nil.
+func ToUintSlice(vs []*uint) []uint {
+	return ptr.ToUintSlice(vs)
+}
+
+// ToUintMap returns a map of uint values, that are
+// dereferenced if the passed in pointer was not nil. The uint
+// zero value is used if the pointer was nil.
+func ToUintMap(vs map[string]*uint) map[string]uint {
+	return ptr.ToUintMap(vs)
+}
+
+// ToUint8 returns uint8 value dereferenced if the passed
+// in pointer was not nil. Returns a uint8 zero value if the
+// pointer was nil.
+func ToUint8(p *uint8) (v uint8) {
+	return ptr.ToUint8(p)
+}
+
+// ToUint8Slice returns a slice of uint8 values, that are
+// dereferenced if the passed in pointer was not nil. Returns a uint8
+// zero value if the pointer was nil.
+func ToUint8Slice(vs []*uint8) []uint8 {
+	return ptr.ToUint8Slice(vs)
+}
+
+// ToUint8Map returns a map of uint8 values, that are
+// dereferenced if the passed in pointer was not nil. The uint8
+// zero value is used if the pointer was nil.
+func ToUint8Map(vs map[string]*uint8) map[string]uint8 {
+	return ptr.ToUint8Map(vs)
+}
+
+// ToUint16 returns uint16 value dereferenced if the passed
+// in pointer was not nil. Returns a uint16 zero value if the
+// pointer was nil.
+func ToUint16(p *uint16) (v uint16) {
+	return ptr.ToUint16(p)
+}
+
+// ToUint16Slice returns a slice of uint16 values, that are
+// dereferenced if the passed in pointer was not nil. Returns a uint16
+// zero value if the pointer was nil.
+func ToUint16Slice(vs []*uint16) []uint16 {
+	return ptr.ToUint16Slice(vs)
+}
+
+// ToUint16Map returns a map of uint16 values, that are
+// dereferenced if the passed in pointer was not nil. The uint16
+// zero value is used if the pointer was nil.
+func ToUint16Map(vs map[string]*uint16) map[string]uint16 {
+	return ptr.ToUint16Map(vs)
+}
+
+// ToUint32 returns uint32 value dereferenced if the passed
+// in pointer was not nil. Returns a uint32 zero value if the
+// pointer was nil.
+func ToUint32(p *uint32) (v uint32) {
+	return ptr.ToUint32(p)
+}
+
+// ToUint32Slice returns a slice of uint32 values, that are
+// dereferenced if the passed in pointer was not nil. Returns a uint32
+// zero value if the pointer was nil.
+func ToUint32Slice(vs []*uint32) []uint32 {
+	return ptr.ToUint32Slice(vs)
+}
+
+// ToUint32Map returns a map of uint32 values, that are
+// dereferenced if the passed in pointer was not nil. The uint32
+// zero value is used if the pointer was nil.
+func ToUint32Map(vs map[string]*uint32) map[string]uint32 {
+	return ptr.ToUint32Map(vs)
+}
+
+// ToUint64 returns uint64 value dereferenced if the passed
+// in pointer was not nil. Returns a uint64 zero value if the
+// pointer was nil.
+func ToUint64(p *uint64) (v uint64) {
+	return ptr.ToUint64(p)
+}
+
+// ToUint64Slice returns a slice of uint64 values, that are
+// dereferenced if the passed in pointer was not nil. Returns a uint64
+// zero value if the pointer was nil.
+func ToUint64Slice(vs []*uint64) []uint64 {
+	return ptr.ToUint64Slice(vs)
+}
+
+// ToUint64Map returns a map of uint64 values, that are
+// dereferenced if the passed in pointer was not nil. The uint64
+// zero value is used if the pointer was nil.
+func ToUint64Map(vs map[string]*uint64) map[string]uint64 {
+	return ptr.ToUint64Map(vs)
+}
+
+// ToFloat32 returns float32 value dereferenced if the passed
+// in pointer was not nil. Returns a float32 zero value if the
+// pointer was nil.
+func ToFloat32(p *float32) (v float32) {
+	return ptr.ToFloat32(p)
+}
+
+// ToFloat32Slice returns a slice of float32 values, that are
+// dereferenced if the passed in pointer was not nil. Returns a float32
+// zero value if the pointer was nil.
+func ToFloat32Slice(vs []*float32) []float32 {
+	return ptr.ToFloat32Slice(vs)
+}
+
+// ToFloat32Map returns a map of float32 values, that are
+// dereferenced if the passed in pointer was not nil. The float32
+// zero value is used if the pointer was nil.
+func ToFloat32Map(vs map[string]*float32) map[string]float32 {
+	return ptr.ToFloat32Map(vs)
+}
+
+// ToFloat64 returns float64 value dereferenced if the passed
+// in pointer was not nil. Returns a float64 zero value if the
+// pointer was nil.
+func ToFloat64(p *float64) (v float64) {
+	return ptr.ToFloat64(p)
+}
+
+// ToFloat64Slice returns a slice of float64 values, that are
+// dereferenced if the passed in pointer was not nil. Returns a float64
+// zero value if the pointer was nil.
+func ToFloat64Slice(vs []*float64) []float64 {
+	return ptr.ToFloat64Slice(vs)
+}
+
+// ToFloat64Map returns a map of float64 values, that are
+// dereferenced if the passed in pointer was not nil. The float64
+// zero value is used if the pointer was nil.
+func ToFloat64Map(vs map[string]*float64) map[string]float64 {
+	return ptr.ToFloat64Map(vs)
+}
+
+// ToTime returns time.Time value dereferenced if the passed
+// in pointer was not nil. Returns a time.Time zero value if the
+// pointer was nil.
+func ToTime(p *time.Time) (v time.Time) {
+	return ptr.ToTime(p)
+}
+
+// ToTimeSlice returns a slice of time.Time values, that are
+// dereferenced if the passed in pointer was not nil. Returns a time.Time
+// zero value if the pointer was nil.
+func ToTimeSlice(vs []*time.Time) []time.Time {
+	return ptr.ToTimeSlice(vs)
+}
+
+// ToTimeMap returns a map of time.Time values, that are
+// dereferenced if the passed in pointer was not nil. The time.Time
+// zero value is used if the pointer was nil.
+func ToTimeMap(vs map[string]*time.Time) map[string]time.Time {
+	return ptr.ToTimeMap(vs)
+}
+
+// ToDuration returns the time.Duration value the passed-in pointer
+// references, or the time.Duration zero value if the pointer was nil.
+func ToDuration(p *time.Duration) (v time.Duration) {
+	return ptr.ToDuration(p)
+}
+
+// ToDurationSlice returns a slice of time.Duration values. Each element
+// is dereferenced if its pointer was not nil, or set to the time.Duration
+// zero value if it was.
+func ToDurationSlice(vs []*time.Duration) []time.Duration {
+	return ptr.ToDurationSlice(vs)
+}
+
+// ToDurationMap returns a map of time.Duration values. Each value is
+// dereferenced if its pointer was not nil, or set to the time.Duration
+// zero value if it was.
+func ToDurationMap(vs map[string]*time.Duration) map[string]time.Duration {
+	return ptr.ToDurationMap(vs)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/logging.go 🔗

@@ -0,0 +1,119 @@
+// Code generated by aws/logging_generate.go DO NOT EDIT.
+
+package aws
+
+// ClientLogMode represents the logging mode of SDK clients. The client logging mode is a bit-field where
+// each bit is a flag that describes the logging behavior for one or more client components.
+// The entire 64-bit group is reserved for later expansion by the SDK.
+//
+// Example: Setting ClientLogMode to enable logging of retries and requests
+//
+//	clientLogMode := aws.LogRetries | aws.LogRequest
+//
+// Example: Adding an additional log mode to an existing ClientLogMode value
+//
+//	clientLogMode |= aws.LogResponse
+type ClientLogMode uint64
+
+// Supported ClientLogMode bits that can be configured to toggle logging of specific SDK events.
+const (
+	LogSigning ClientLogMode = 1 << (64 - 1 - iota)
+	LogRetries
+	LogRequest
+	LogRequestWithBody
+	LogResponse
+	LogResponseWithBody
+	LogDeprecatedUsage
+	LogRequestEventMessage
+	LogResponseEventMessage
+)
+
+// IsSigning returns whether the Signing logging mode bit is set
+func (m ClientLogMode) IsSigning() bool {
+	return m&LogSigning != 0
+}
+
+// IsRetries returns whether the Retries logging mode bit is set
+func (m ClientLogMode) IsRetries() bool {
+	return m&LogRetries != 0
+}
+
+// IsRequest returns whether the Request logging mode bit is set
+func (m ClientLogMode) IsRequest() bool {
+	return m&LogRequest != 0
+}
+
+// IsRequestWithBody returns whether the RequestWithBody logging mode bit is set
+func (m ClientLogMode) IsRequestWithBody() bool {
+	return m&LogRequestWithBody != 0
+}
+
+// IsResponse returns whether the Response logging mode bit is set
+func (m ClientLogMode) IsResponse() bool {
+	return m&LogResponse != 0
+}
+
+// IsResponseWithBody returns whether the ResponseWithBody logging mode bit is set
+func (m ClientLogMode) IsResponseWithBody() bool {
+	return m&LogResponseWithBody != 0
+}
+
+// IsDeprecatedUsage returns whether the DeprecatedUsage logging mode bit is set
+func (m ClientLogMode) IsDeprecatedUsage() bool {
+	return m&LogDeprecatedUsage != 0
+}
+
+// IsRequestEventMessage returns whether the RequestEventMessage logging mode bit is set
+func (m ClientLogMode) IsRequestEventMessage() bool {
+	return m&LogRequestEventMessage != 0
+}
+
+// IsResponseEventMessage returns whether the ResponseEventMessage logging mode bit is set
+func (m ClientLogMode) IsResponseEventMessage() bool {
+	return m&LogResponseEventMessage != 0
+}
+
+// ClearSigning clears the Signing logging mode bit
+func (m *ClientLogMode) ClearSigning() {
+	*m &^= LogSigning
+}
+
+// ClearRetries clears the Retries logging mode bit
+func (m *ClientLogMode) ClearRetries() {
+	*m &^= LogRetries
+}
+
+// ClearRequest clears the Request logging mode bit
+func (m *ClientLogMode) ClearRequest() {
+	*m &^= LogRequest
+}
+
+// ClearRequestWithBody clears the RequestWithBody logging mode bit
+func (m *ClientLogMode) ClearRequestWithBody() {
+	*m &^= LogRequestWithBody
+}
+
+// ClearResponse clears the Response logging mode bit
+func (m *ClientLogMode) ClearResponse() {
+	*m &^= LogResponse
+}
+
+// ClearResponseWithBody clears the ResponseWithBody logging mode bit
+func (m *ClientLogMode) ClearResponseWithBody() {
+	*m &^= LogResponseWithBody
+}
+
+// ClearDeprecatedUsage clears the DeprecatedUsage logging mode bit
+func (m *ClientLogMode) ClearDeprecatedUsage() {
+	*m &^= LogDeprecatedUsage
+}
+
+// ClearRequestEventMessage clears the RequestEventMessage logging mode bit
+func (m *ClientLogMode) ClearRequestEventMessage() {
+	*m &^= LogRequestEventMessage
+}
+
+// ClearResponseEventMessage clears the ResponseEventMessage logging mode bit
+func (m *ClientLogMode) ClearResponseEventMessage() {
+	*m &^= LogResponseEventMessage
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/logging_generate.go 🔗

@@ -0,0 +1,95 @@
+//go:build clientlogmode
+// +build clientlogmode
+
+package main
+
+import (
+	"fmt"
+	"log"
+	"os"
+	"strings"
+	"text/template"
+)
+
+var config = struct {
+	ModeBits []string
+}{
+	// Items should be appended only to keep bit-flag positions stable
+	ModeBits: []string{
+		"Signing",
+		"Retries",
+		"Request",
+		"RequestWithBody",
+		"Response",
+		"ResponseWithBody",
+		"DeprecatedUsage",
+		"RequestEventMessage",
+		"ResponseEventMessage",
+	},
+}
+
+func bitName(name string) string {
+	return strings.ToUpper(name[:1]) + name[1:]
+}
+
+var tmpl = template.Must(template.New("ClientLogMode").Funcs(map[string]interface{}{
+	"symbolName": func(name string) string {
+		return "Log" + bitName(name)
+	},
+	"bitName": bitName,
+}).Parse(`// Code generated by aws/logging_generate.go DO NOT EDIT.
+
+package aws
+
+// ClientLogMode represents the logging mode of SDK clients. The client logging mode is a bit-field where
+// each bit is a flag that describes the logging behavior for one or more client components.
+// The entire 64-bit group is reserved for later expansion by the SDK.
+//
+// Example: Setting ClientLogMode to enable logging of retries and requests
+//  clientLogMode := aws.LogRetries | aws.LogRequest
+//
+// Example: Adding an additional log mode to an existing ClientLogMode value
+//  clientLogMode |= aws.LogResponse
+type ClientLogMode uint64
+
+// Supported ClientLogMode bits that can be configured to toggle logging of specific SDK events.
+const (
+{{- range $index, $field := .ModeBits }}
+	{{ (symbolName $field) }}{{- if (eq 0 $index) }} ClientLogMode = 1 << (64 - 1 - iota){{- end }}
+{{- end }}
+)
+{{ range $_, $field := .ModeBits }}
+// Is{{- bitName $field }} returns whether the {{ bitName $field }} logging mode bit is set
+func (m ClientLogMode) Is{{- bitName $field }}() bool {
+	return m&{{- (symbolName $field) }} != 0
+}
+{{ end }}
+{{- range $_, $field := .ModeBits }}
+// Clear{{- bitName $field }} clears the {{ bitName $field }} logging mode bit
+func (m *ClientLogMode) Clear{{- bitName $field }}() {
+	*m &^= {{ (symbolName $field) }}
+}
+{{ end -}}
+`))
+
+func main() {
+	uniqueBitFields := make(map[string]struct{})
+
+	for _, bitName := range config.ModeBits {
+		if _, ok := uniqueBitFields[strings.ToLower(bitName)]; ok {
+			panic(fmt.Sprintf("duplicate bit field: %s", bitName))
+		}
+		uniqueBitFields[strings.ToLower(bitName)] = struct{}{}
+	}
+
+	file, err := os.Create("logging.go")
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer file.Close()
+
+	err = tmpl.Execute(file, config)
+	if err != nil {
+		log.Fatal(err)
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/metadata.go 🔗

@@ -0,0 +1,213 @@
+package middleware
+
+import (
+	"context"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+
+	"github.com/aws/smithy-go/middleware"
+)
+
+// RegisterServiceMetadata registers metadata about the service and operation into the middleware context
+// so that it is available at runtime for other middleware to introspect.
+type RegisterServiceMetadata struct {
+	ServiceID     string
+	SigningName   string
+	Region        string
+	OperationName string
+}
+
+// ID returns the middleware identifier.
+func (s *RegisterServiceMetadata) ID() string {
+	return "RegisterServiceMetadata"
+}
+
+// HandleInitialize registers service metadata information into the middleware context, allowing for introspection.
+func (s RegisterServiceMetadata) HandleInitialize(
+	ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler,
+) (out middleware.InitializeOutput, metadata middleware.Metadata, err error) {
+	if len(s.ServiceID) > 0 {
+		ctx = SetServiceID(ctx, s.ServiceID)
+	}
+	if len(s.SigningName) > 0 {
+		ctx = SetSigningName(ctx, s.SigningName)
+	}
+	if len(s.Region) > 0 {
+		ctx = setRegion(ctx, s.Region)
+	}
+	if len(s.OperationName) > 0 {
+		ctx = setOperationName(ctx, s.OperationName)
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+// service metadata keys for storing and lookup of runtime stack information.
+type (
+	serviceIDKey               struct{}
+	signingNameKey             struct{}
+	signingRegionKey           struct{}
+	regionKey                  struct{}
+	operationNameKey           struct{}
+	partitionIDKey             struct{}
+	requiresLegacyEndpointsKey struct{}
+)
+
+// GetServiceID retrieves the service id from the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func GetServiceID(ctx context.Context) (v string) {
+	v, _ = middleware.GetStackValue(ctx, serviceIDKey{}).(string)
+	return v
+}
+
+// GetSigningName retrieves the service signing name from the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+//
+// Deprecated: This value is unstable. The resolved signing name is available
+// in the signer properties object passed to the signer.
+func GetSigningName(ctx context.Context) (v string) {
+	v, _ = middleware.GetStackValue(ctx, signingNameKey{}).(string)
+	return v
+}
+
+// GetSigningRegion retrieves the region from the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+//
+// Deprecated: This value is unstable. The resolved signing region is available
+// in the signer properties object passed to the signer.
+func GetSigningRegion(ctx context.Context) (v string) {
+	v, _ = middleware.GetStackValue(ctx, signingRegionKey{}).(string)
+	return v
+}
+
+// GetRegion retrieves the endpoint region from the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func GetRegion(ctx context.Context) (v string) {
+	v, _ = middleware.GetStackValue(ctx, regionKey{}).(string)
+	return v
+}
+
+// GetOperationName retrieves the service operation metadata from the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func GetOperationName(ctx context.Context) (v string) {
+	v, _ = middleware.GetStackValue(ctx, operationNameKey{}).(string)
+	return v
+}
+
+// GetPartitionID retrieves the endpoint partition id from the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func GetPartitionID(ctx context.Context) string {
+	v, _ := middleware.GetStackValue(ctx, partitionIDKey{}).(string)
+	return v
+}
+
+// GetRequiresLegacyEndpoints returns the flag used to indicate whether
+// legacy endpoint customizations need to be executed.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func GetRequiresLegacyEndpoints(ctx context.Context) bool {
+	v, _ := middleware.GetStackValue(ctx, requiresLegacyEndpointsKey{}).(bool)
+	return v
+}
+
+// SetRequiresLegacyEndpoints sets or modifies the flag indicating that
+// legacy endpoint customizations are needed.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func SetRequiresLegacyEndpoints(ctx context.Context, value bool) context.Context {
+	return middleware.WithStackValue(ctx, requiresLegacyEndpointsKey{}, value)
+}
+
+// SetSigningName sets or modifies the sigv4 or sigv4a signing name on the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+//
+// Deprecated: This value is unstable. Use WithSigV4SigningName client option
+// funcs instead.
+func SetSigningName(ctx context.Context, value string) context.Context {
+	return middleware.WithStackValue(ctx, signingNameKey{}, value)
+}
+
+// SetSigningRegion sets or modifies the region on the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+//
+// Deprecated: This value is unstable. Use WithSigV4SigningRegion client option
+// funcs instead.
+func SetSigningRegion(ctx context.Context, value string) context.Context {
+	return middleware.WithStackValue(ctx, signingRegionKey{}, value)
+}
+
+// SetServiceID sets the service id on the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func SetServiceID(ctx context.Context, value string) context.Context {
+	return middleware.WithStackValue(ctx, serviceIDKey{}, value)
+}
+
+// setRegion sets the endpoint region on the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func setRegion(ctx context.Context, value string) context.Context {
+	return middleware.WithStackValue(ctx, regionKey{}, value)
+}
+
+// setOperationName sets the service operation on the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func setOperationName(ctx context.Context, value string) context.Context {
+	return middleware.WithStackValue(ctx, operationNameKey{}, value)
+}
+
+// SetPartitionID sets the partition id of a resolved region on the context
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func SetPartitionID(ctx context.Context, value string) context.Context {
+	return middleware.WithStackValue(ctx, partitionIDKey{}, value)
+}
+
+// EndpointSource key
+type endpointSourceKey struct{}
+
+// GetEndpointSource returns an endpoint source if set on context
+func GetEndpointSource(ctx context.Context) (v aws.EndpointSource) {
+	v, _ = middleware.GetStackValue(ctx, endpointSourceKey{}).(aws.EndpointSource)
+	return v
+}
+
+// SetEndpointSource sets endpoint source on context
+func SetEndpointSource(ctx context.Context, value aws.EndpointSource) context.Context {
+	return middleware.WithStackValue(ctx, endpointSourceKey{}, value)
+}
+
+type signingCredentialsKey struct{}
+
+// GetSigningCredentials returns the credentials that were used for signing if set on context.
+func GetSigningCredentials(ctx context.Context) (v aws.Credentials) {
+	v, _ = middleware.GetStackValue(ctx, signingCredentialsKey{}).(aws.Credentials)
+	return v
+}
+
+// SetSigningCredentials sets the credentials used for signing on the context.
+func SetSigningCredentials(ctx context.Context, value aws.Credentials) context.Context {
+	return middleware.WithStackValue(ctx, signingCredentialsKey{}, value)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/middleware.go 🔗

@@ -0,0 +1,168 @@
+package middleware
+
+import (
+	"context"
+	"fmt"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/internal/rand"
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+	"github.com/aws/smithy-go/logging"
+	"github.com/aws/smithy-go/middleware"
+	smithyrand "github.com/aws/smithy-go/rand"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// ClientRequestID is a Smithy BuildMiddleware that generates a unique ID for each
+// logical API operation invocation.
+type ClientRequestID struct{}
+
+// ID returns the identifier for the ClientRequestID middleware.
+func (r *ClientRequestID) ID() string {
+	return "ClientRequestID"
+}
+
+// HandleBuild attaches a unique operation invocation id for the operation to the request
+func (r ClientRequestID) HandleBuild(ctx context.Context, in middleware.BuildInput, next middleware.BuildHandler) (
+	out middleware.BuildOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	invocationID, err := smithyrand.NewUUID(rand.Reader).GetUUID()
+	if err != nil {
+		return out, metadata, err
+	}
+
+	const invocationIDHeader = "Amz-Sdk-Invocation-Id"
+	req.Header[invocationIDHeader] = append(req.Header[invocationIDHeader][:0], invocationID)
+
+	return next.HandleBuild(ctx, in)
+}
+
+// RecordResponseTiming records the response timing for the SDK client requests.
+type RecordResponseTiming struct{}
+
+// ID is the middleware identifier
+func (a *RecordResponseTiming) ID() string {
+	return "RecordResponseTiming"
+}
+
+// HandleDeserialize calculates response metadata and clock skew
+func (a RecordResponseTiming) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	responseAt := sdk.NowTime()
+	setResponseAt(&metadata, responseAt)
+
+	var serverTime time.Time
+
+	switch resp := out.RawResponse.(type) {
+	case *smithyhttp.Response:
+		respDateHeader := resp.Header.Get("Date")
+		if len(respDateHeader) == 0 {
+			break
+		}
+		var parseErr error
+		serverTime, parseErr = smithyhttp.ParseTime(respDateHeader)
+		if parseErr != nil {
+			logger := middleware.GetLogger(ctx)
+			logger.Logf(logging.Warn, "failed to parse response Date header value, got %v",
+				parseErr.Error())
+			break
+		}
+		setServerTime(&metadata, serverTime)
+	}
+
+	if !serverTime.IsZero() {
+		attemptSkew := serverTime.Sub(responseAt)
+		setAttemptSkew(&metadata, attemptSkew)
+	}
+
+	return out, metadata, err
+}
+
+type responseAtKey struct{}
+
+// GetResponseAt returns the time response was received at.
+func GetResponseAt(metadata middleware.Metadata) (v time.Time, ok bool) {
+	v, ok = metadata.Get(responseAtKey{}).(time.Time)
+	return v, ok
+}
+
+// setResponseAt sets the response time on the metadata.
+func setResponseAt(metadata *middleware.Metadata, v time.Time) {
+	metadata.Set(responseAtKey{}, v)
+}
+
+type serverTimeKey struct{}
+
+// GetServerTime returns the server time for response.
+func GetServerTime(metadata middleware.Metadata) (v time.Time, ok bool) {
+	v, ok = metadata.Get(serverTimeKey{}).(time.Time)
+	return v, ok
+}
+
+// setServerTime sets the server time on the metadata.
+func setServerTime(metadata *middleware.Metadata, v time.Time) {
+	metadata.Set(serverTimeKey{}, v)
+}
+
+type attemptSkewKey struct{}
+
+// GetAttemptSkew returns Attempt clock skew for response from metadata.
+func GetAttemptSkew(metadata middleware.Metadata) (v time.Duration, ok bool) {
+	v, ok = metadata.Get(attemptSkewKey{}).(time.Duration)
+	return v, ok
+}
+
+// setAttemptSkew sets the attempt clock skew on the metadata.
+func setAttemptSkew(metadata *middleware.Metadata, v time.Duration) {
+	metadata.Set(attemptSkewKey{}, v)
+}
+
+// AddClientRequestIDMiddleware adds ClientRequestID to the middleware stack
+func AddClientRequestIDMiddleware(stack *middleware.Stack) error {
+	return stack.Build.Add(&ClientRequestID{}, middleware.After)
+}
+
+// AddRecordResponseTiming adds RecordResponseTiming middleware to the
+// middleware stack.
+func AddRecordResponseTiming(stack *middleware.Stack) error {
+	return stack.Deserialize.Add(&RecordResponseTiming{}, middleware.After)
+}
+
+// rawResponseKey is the accessor key used to store and access the
+// raw response within the response metadata.
+type rawResponseKey struct{}
+
+// AddRawResponse middleware adds raw response on to the metadata
+type AddRawResponse struct{}
+
+// ID returns the identifier for the AddRawResponse middleware.
+func (m *AddRawResponse) ID() string {
+	return "AddRawResponseToMetadata"
+}
+
+// HandleDeserialize adds raw response on the middleware metadata
+func (m AddRawResponse) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	metadata.Set(rawResponseKey{}, out.RawResponse)
+	return out, metadata, err
+}
+
+// AddRawResponseToMetadata adds middleware to the middleware stack that
+// stores the raw response on the metadata.
+func AddRawResponseToMetadata(stack *middleware.Stack) error {
+	return stack.Deserialize.Add(&AddRawResponse{}, middleware.Before)
+}
+
+// GetRawResponse returns raw response set on metadata
+func GetRawResponse(metadata middleware.Metadata) interface{} {
+	return metadata.Get(rawResponseKey{})
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/osname.go 🔗

@@ -0,0 +1,24 @@
+//go:build go1.16
+// +build go1.16
+
+package middleware
+
+import "runtime"
+
+func getNormalizedOSName() (os string) {
+	switch runtime.GOOS {
+	case "android":
+		os = "android"
+	case "linux":
+		os = "linux"
+	case "windows":
+		os = "windows"
+	case "darwin":
+		os = "macos"
+	case "ios":
+		os = "ios"
+	default:
+		os = "other"
+	}
+	return os
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/osname_go115.go 🔗

@@ -0,0 +1,24 @@
+//go:build !go1.16
+// +build !go1.16
+
+package middleware
+
+import "runtime"
+
+func getNormalizedOSName() (os string) {
+	switch runtime.GOOS {
+	case "android":
+		os = "android"
+	case "linux":
+		os = "linux"
+	case "windows":
+		os = "windows"
+	case "darwin":
+		// Due to Apple M1 we can't distinguish between macOS and iOS when GOOS/GOARCH is darwin/amd64
+		// For now declare this as "other" until we have a better detection mechanism.
+		fallthrough
+	default:
+		os = "other"
+	}
+	return os
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/private/metrics/metrics.go 🔗

@@ -0,0 +1,320 @@
+// Package metrics implements metrics gathering for SDK development purposes.
+//
+// This package is designated as private and is intended for use only by the
+// AWS client runtime. The exported API therein is not considered stable and
+// is subject to breaking changes without notice.
+package metrics
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"sync"
+	"time"
+
+	"github.com/aws/smithy-go/middleware"
+)
+
+const (
+	// ServiceIDKey is the key for the service ID metric.
+	ServiceIDKey = "ServiceId"
+	// OperationNameKey is the key for the operation name metric.
+	OperationNameKey = "OperationName"
+	// ClientRequestIDKey is the key for the client request ID metric.
+	ClientRequestIDKey = "ClientRequestId"
+	// APICallDurationKey is the key for the API call duration metric.
+	APICallDurationKey = "ApiCallDuration"
+	// APICallSuccessfulKey is the key for the API call successful metric.
+	APICallSuccessfulKey = "ApiCallSuccessful"
+	// MarshallingDurationKey is the key for the marshalling duration metric.
+	MarshallingDurationKey = "MarshallingDuration"
+	// InThroughputKey is the key for the input throughput metric.
+	InThroughputKey = "InThroughput"
+	// OutThroughputKey is the key for the output throughput metric.
+	OutThroughputKey = "OutThroughput"
+	// RetryCountKey is the key for the retry count metric.
+	RetryCountKey = "RetryCount"
+	// HTTPStatusCodeKey is the key for the HTTP status code metric.
+	HTTPStatusCodeKey = "HttpStatusCode"
+	// AWSExtendedRequestIDKey is the key for the AWS extended request ID metric.
+	AWSExtendedRequestIDKey = "AwsExtendedRequestId"
+	// AWSRequestIDKey is the key for the AWS request ID metric.
+	AWSRequestIDKey = "AwsRequestId"
+	// BackoffDelayDurationKey is the key for the backoff delay duration metric.
+	BackoffDelayDurationKey = "BackoffDelayDuration"
+	// StreamThroughputKey is the key for the stream throughput metric.
+	StreamThroughputKey = "Throughput"
+	// ConcurrencyAcquireDurationKey is the key for the concurrency acquire duration metric.
+	ConcurrencyAcquireDurationKey = "ConcurrencyAcquireDuration"
+	// PendingConcurrencyAcquiresKey is the key for the pending concurrency acquires metric.
+	PendingConcurrencyAcquiresKey = "PendingConcurrencyAcquires"
+	// SigningDurationKey is the key for the signing duration metric.
+	SigningDurationKey = "SigningDuration"
+	// UnmarshallingDurationKey is the key for the unmarshalling duration metric.
+	UnmarshallingDurationKey = "UnmarshallingDuration"
+	// TimeToFirstByteKey is the key for the time to first byte metric.
+	TimeToFirstByteKey = "TimeToFirstByte"
+	// ServiceCallDurationKey is the key for the service call duration metric.
+	ServiceCallDurationKey = "ServiceCallDuration"
+	// EndpointResolutionDurationKey is the key for the endpoint resolution duration metric.
+	EndpointResolutionDurationKey = "EndpointResolutionDuration"
+	// AttemptNumberKey is the key for the attempt number metric.
+	AttemptNumberKey = "AttemptNumber"
+	// MaxConcurrencyKey is the key for the max concurrency metric.
+	MaxConcurrencyKey = "MaxConcurrency"
+	// AvailableConcurrencyKey is the key for the available concurrency metric.
+	AvailableConcurrencyKey = "AvailableConcurrency"
+)
+
+// MetricPublisher provides the interface to provide custom MetricPublishers.
+// PostRequestMetrics will be invoked by the MetricCollection middleware to post request.
+// PostStreamMetrics will be invoked by ReadCloserWithMetrics to post stream metrics.
+type MetricPublisher interface {
+	PostRequestMetrics(*MetricData) error
+	PostStreamMetrics(*MetricData) error
+}
+
+// Serializer provides the interface to provide custom Serializers.
+// Serialize will transform any input object in its corresponding string representation.
+type Serializer interface {
+	Serialize(obj interface{}) (string, error)
+}
+
+// DefaultSerializer is an implementation of the Serializer interface.
+type DefaultSerializer struct{}
+
+// Serialize uses the default JSON serializer to obtain the string representation of an object.
+func (DefaultSerializer) Serialize(obj interface{}) (string, error) {
+	bytes, err := json.Marshal(obj)
+	if err != nil {
+		return "", err
+	}
+	return string(bytes), nil
+}
+
+type metricContextKey struct{}
+
+// MetricContext contains fields to store metric-related information.
+type MetricContext struct {
+	connectionCounter *SharedConnectionCounter
+	publisher         MetricPublisher
+	data              *MetricData
+}
+
+// MetricData stores the collected metric data.
+type MetricData struct {
+	RequestStartTime           time.Time
+	RequestEndTime             time.Time
+	APICallDuration            time.Duration
+	SerializeStartTime         time.Time
+	SerializeEndTime           time.Time
+	MarshallingDuration        time.Duration
+	ResolveEndpointStartTime   time.Time
+	ResolveEndpointEndTime     time.Time
+	EndpointResolutionDuration time.Duration
+	GetIdentityStartTime       time.Time
+	GetIdentityEndTime         time.Time
+	InThroughput               float64
+	OutThroughput              float64
+	RetryCount                 int
+	Success                    uint8
+	StatusCode                 int
+	ClientRequestID            string
+	ServiceID                  string
+	OperationName              string
+	PartitionID                string
+	Region                     string
+	UserAgent                  string
+	RequestContentLength       int64
+	Stream                     StreamMetrics
+	Attempts                   []AttemptMetrics
+}
+
+// StreamMetrics stores metrics related to streaming data.
+type StreamMetrics struct {
+	ReadDuration time.Duration
+	ReadBytes    int64
+	Throughput   float64
+}
+
+// AttemptMetrics stores metrics related to individual attempts.
+type AttemptMetrics struct {
+	ServiceCallStart           time.Time
+	ServiceCallEnd             time.Time
+	ServiceCallDuration        time.Duration
+	FirstByteTime              time.Time
+	TimeToFirstByte            time.Duration
+	ConnRequestedTime          time.Time
+	ConnObtainedTime           time.Time
+	ConcurrencyAcquireDuration time.Duration
+	SignStartTime              time.Time
+	SignEndTime                time.Time
+	SigningDuration            time.Duration
+	DeserializeStartTime       time.Time
+	DeserializeEndTime         time.Time
+	UnMarshallingDuration      time.Duration
+	RetryDelay                 time.Duration
+	ResponseContentLength      int64
+	StatusCode                 int
+	RequestID                  string
+	ExtendedRequestID          string
+	HTTPClient                 string
+	MaxConcurrency             int
+	PendingConnectionAcquires  int
+	AvailableConcurrency       int
+	ActiveRequests             int
+	ReusedConnection           bool
+}
+
+// Data returns the MetricData associated with the MetricContext.
+func (mc *MetricContext) Data() *MetricData {
+	return mc.data
+}
+
+// ConnectionCounter returns the SharedConnectionCounter associated with the MetricContext.
+func (mc *MetricContext) ConnectionCounter() *SharedConnectionCounter {
+	return mc.connectionCounter
+}
+
+// Publisher returns the MetricPublisher associated with the MetricContext.
+func (mc *MetricContext) Publisher() MetricPublisher {
+	return mc.publisher
+}
+
+// ComputeRequestMetrics calculates and populates derived metrics based on the collected data.
+func (md *MetricData) ComputeRequestMetrics() {
+
+	for idx := range md.Attempts {
+		attempt := &md.Attempts[idx]
+		attempt.ConcurrencyAcquireDuration = attempt.ConnObtainedTime.Sub(attempt.ConnRequestedTime)
+		attempt.SigningDuration = attempt.SignEndTime.Sub(attempt.SignStartTime)
+		attempt.UnMarshallingDuration = attempt.DeserializeEndTime.Sub(attempt.DeserializeStartTime)
+		attempt.TimeToFirstByte = attempt.FirstByteTime.Sub(attempt.ServiceCallStart)
+		attempt.ServiceCallDuration = attempt.ServiceCallEnd.Sub(attempt.ServiceCallStart)
+	}
+
+	md.APICallDuration = md.RequestEndTime.Sub(md.RequestStartTime)
+	md.MarshallingDuration = md.SerializeEndTime.Sub(md.SerializeStartTime)
+	md.EndpointResolutionDuration = md.ResolveEndpointEndTime.Sub(md.ResolveEndpointStartTime)
+
+	md.RetryCount = len(md.Attempts) - 1
+
+	latestAttempt, err := md.LatestAttempt()
+
+	if err != nil {
+		fmt.Printf("error retrieving attempts data due to: %s. Skipping throughput metrics\n", err.Error())
+	} else {
+
+		md.StatusCode = latestAttempt.StatusCode
+
+		if md.Success == 1 {
+			if latestAttempt.ResponseContentLength > 0 && latestAttempt.ServiceCallDuration > 0 {
+				md.InThroughput = float64(latestAttempt.ResponseContentLength) / latestAttempt.ServiceCallDuration.Seconds()
+			}
+			if md.RequestContentLength > 0 && latestAttempt.ServiceCallDuration > 0 {
+				md.OutThroughput = float64(md.RequestContentLength) / latestAttempt.ServiceCallDuration.Seconds()
+			}
+		}
+	}
+}
+
+// LatestAttempt returns the latest attempt metrics.
+// It returns an error if no attempts are initialized.
+func (md *MetricData) LatestAttempt() (*AttemptMetrics, error) {
+	if len(md.Attempts) == 0 {
+		return nil, fmt.Errorf("no attempts initialized. NewAttempt() should be called first")
+	}
+	return &md.Attempts[len(md.Attempts)-1], nil
+}
+
+// NewAttempt initializes new attempt metrics.
+func (md *MetricData) NewAttempt() {
+	// append handles a nil slice, so no explicit initialization is needed.
+	md.Attempts = append(md.Attempts, AttemptMetrics{})
+}
+
+// SharedConnectionCounter is a counter shared across API calls.
+type SharedConnectionCounter struct {
+	mu sync.Mutex
+
+	activeRequests           int
+	pendingConnectionAcquire int
+}
+
+// ActiveRequests returns the count of active requests.
+func (cc *SharedConnectionCounter) ActiveRequests() int {
+	cc.mu.Lock()
+	defer cc.mu.Unlock()
+
+	return cc.activeRequests
+}
+
+// PendingConnectionAcquire returns the count of pending connection acquires.
+func (cc *SharedConnectionCounter) PendingConnectionAcquire() int {
+	cc.mu.Lock()
+	defer cc.mu.Unlock()
+
+	return cc.pendingConnectionAcquire
+}
+
+// AddActiveRequest increments the count of active requests.
+func (cc *SharedConnectionCounter) AddActiveRequest() {
+	cc.mu.Lock()
+	defer cc.mu.Unlock()
+
+	cc.activeRequests++
+}
+
+// RemoveActiveRequest decrements the count of active requests.
+func (cc *SharedConnectionCounter) RemoveActiveRequest() {
+	cc.mu.Lock()
+	defer cc.mu.Unlock()
+
+	cc.activeRequests--
+}
+
+// AddPendingConnectionAcquire increments the count of pending connection acquires.
+func (cc *SharedConnectionCounter) AddPendingConnectionAcquire() {
+	cc.mu.Lock()
+	defer cc.mu.Unlock()
+
+	cc.pendingConnectionAcquire++
+}
+
+// RemovePendingConnectionAcquire decrements the count of pending connection acquires.
+func (cc *SharedConnectionCounter) RemovePendingConnectionAcquire() {
+	cc.mu.Lock()
+	defer cc.mu.Unlock()
+
+	cc.pendingConnectionAcquire--
+}
+
+// InitMetricContext initializes the metric context with the provided counter and publisher.
+// It returns the updated context.
+func InitMetricContext(
+	ctx context.Context, counter *SharedConnectionCounter, publisher MetricPublisher,
+) context.Context {
+	if middleware.GetStackValue(ctx, metricContextKey{}) == nil {
+		ctx = middleware.WithStackValue(ctx, metricContextKey{}, &MetricContext{
+			connectionCounter: counter,
+			publisher:         publisher,
+			data: &MetricData{
+				Attempts: []AttemptMetrics{},
+				Stream:   StreamMetrics{},
+			},
+		})
+	}
+	return ctx
+}
+
+// Context returns the metric context from the given context.
+// It returns nil if the metric context is not found.
+func Context(ctx context.Context) *MetricContext {
+	mctx := middleware.GetStackValue(ctx, metricContextKey{})
+	if mctx == nil {
+		return nil
+	}
+	return mctx.(*MetricContext)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/recursion_detection.go 🔗

@@ -0,0 +1,94 @@
+package middleware
+
+import (
+	"context"
+	"fmt"
+	"os"
+
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+const envAwsLambdaFunctionName = "AWS_LAMBDA_FUNCTION_NAME"
+const envAmznTraceID = "_X_AMZN_TRACE_ID"
+const amznTraceIDHeader = "X-Amzn-Trace-Id"
+
+// AddRecursionDetection adds recursionDetection to the middleware stack
+func AddRecursionDetection(stack *middleware.Stack) error {
+	return stack.Build.Add(&RecursionDetection{}, middleware.After)
+}
+
+// RecursionDetection detects the Lambda environment and, if absent, copies its X-Ray
+// trace ID into the request header to avoid recursive invocation in Lambda
+type RecursionDetection struct{}
+
+// ID returns the middleware identifier
+func (m *RecursionDetection) ID() string {
+	return "RecursionDetection"
+}
+
+// HandleBuild detects Lambda environment and adds its trace ID to request header if absent
+func (m *RecursionDetection) HandleBuild(
+	ctx context.Context, in middleware.BuildInput, next middleware.BuildHandler,
+) (
+	out middleware.BuildOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown request type %T", in.Request)
+	}
+
+	_, hasLambdaEnv := os.LookupEnv(envAwsLambdaFunctionName)
+	xAmznTraceID, hasTraceID := os.LookupEnv(envAmznTraceID)
+	value := req.Header.Get(amznTraceIDHeader)
+	// only set the X-Amzn-Trace-Id header when it is not set initially, the
+	// current environment is Lambda and the _X_AMZN_TRACE_ID env variable exists
+	if value != "" || !hasLambdaEnv || !hasTraceID {
+		return next.HandleBuild(ctx, in)
+	}
+
+	req.Header.Set(amznTraceIDHeader, percentEncode(xAmznTraceID))
+	return next.HandleBuild(ctx, in)
+}
+
+func percentEncode(s string) string {
+	upperhex := "0123456789ABCDEF"
+	hexCount := 0
+	for i := 0; i < len(s); i++ {
+		c := s[i]
+		if shouldEncode(c) {
+			hexCount++
+		}
+	}
+
+	if hexCount == 0 {
+		return s
+	}
+
+	required := len(s) + 2*hexCount
+	t := make([]byte, required)
+	j := 0
+	for i := 0; i < len(s); i++ {
+		if c := s[i]; shouldEncode(c) {
+			t[j] = '%'
+			t[j+1] = upperhex[c>>4]
+			t[j+2] = upperhex[c&15]
+			j += 3
+		} else {
+			t[j] = c
+			j++
+		}
+	}
+	return string(t)
+}
+
+func shouldEncode(c byte) bool {
+	if 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || '0' <= c && c <= '9' {
+		return false
+	}
+	switch c {
+	case '-', '=', ';', ':', '+', '&', '[', ']', '{', '}', '"', '\'', ',':
+		return false
+	default:
+		return true
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/request_id.go 🔗

@@ -0,0 +1,27 @@
+package middleware
+
+import (
+	"github.com/aws/smithy-go/middleware"
+)
+
+// requestIDKey is used to retrieve request id from response metadata
+type requestIDKey struct{}
+
+// SetRequestIDMetadata sets the provided request id over middleware metadata
+func SetRequestIDMetadata(metadata *middleware.Metadata, id string) {
+	metadata.Set(requestIDKey{}, id)
+}
+
+// GetRequestIDMetadata retrieves the request id from middleware metadata.
+// It returns the request id string and a bool indicating whether the request id was set.
+func GetRequestIDMetadata(metadata middleware.Metadata) (string, bool) {
+	if !metadata.Has(requestIDKey{}) {
+		return "", false
+	}
+
+	v, ok := metadata.Get(requestIDKey{}).(string)
+	if !ok {
+		return "", true
+	}
+	return v, true
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/request_id_retriever.go 🔗

@@ -0,0 +1,53 @@
+package middleware
+
+import (
+	"context"
+
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// AddRequestIDRetrieverMiddleware adds request id retriever middleware
+func AddRequestIDRetrieverMiddleware(stack *middleware.Stack) error {
+	// add error wrapper middleware before operation deserializers so that it can wrap the error response
+	// returned by operation deserializers
+	return stack.Deserialize.Insert(&RequestIDRetriever{}, "OperationDeserializer", middleware.Before)
+}
+
+// RequestIDRetriever middleware captures the AWS service request ID from the
+// raw response.
+type RequestIDRetriever struct{}
+
+// ID returns the middleware identifier
+func (m *RequestIDRetriever) ID() string {
+	return "RequestIDRetriever"
+}
+
+// HandleDeserialize pulls the AWS request ID from the response, storing it in
+// operation metadata.
+func (m *RequestIDRetriever) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+
+	resp, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		// No raw response to wrap with.
+		return out, metadata, err
+	}
+
+	// Different headers which can map to the request id
+	requestIDHeaderList := []string{"X-Amzn-Requestid", "X-Amz-RequestId"}
+
+	for _, h := range requestIDHeaderList {
+		// check for headers known to contain Request id
+		if v := resp.Header.Get(h); len(v) != 0 {
+			// set reqID on metadata for successful responses.
+			SetRequestIDMetadata(&metadata, v)
+			break
+		}
+	}
+
+	return out, metadata, err
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/middleware/user_agent.go 🔗

@@ -0,0 +1,305 @@
+package middleware
+
+import (
+	"context"
+	"fmt"
+	"os"
+	"runtime"
+	"sort"
+	"strings"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+var languageVersion = strings.TrimPrefix(runtime.Version(), "go")
+
+// SDKAgentKeyType is the metadata type to add to the SDK agent string
+type SDKAgentKeyType int
+
+// The set of valid SDKAgentKeyType constants. If an unknown value is assigned for SDKAgentKeyType it will
+// be mapped to AdditionalMetadata.
+const (
+	_ SDKAgentKeyType = iota
+	APIMetadata
+	OperatingSystemMetadata
+	LanguageMetadata
+	EnvironmentMetadata
+	FeatureMetadata
+	ConfigMetadata
+	FrameworkMetadata
+	AdditionalMetadata
+	ApplicationIdentifier
+	FeatureMetadata2
+)
+
+func (k SDKAgentKeyType) string() string {
+	switch k {
+	case APIMetadata:
+		return "api"
+	case OperatingSystemMetadata:
+		return "os"
+	case LanguageMetadata:
+		return "lang"
+	case EnvironmentMetadata:
+		return "exec-env"
+	case FeatureMetadata:
+		return "ft"
+	case ConfigMetadata:
+		return "cfg"
+	case FrameworkMetadata:
+		return "lib"
+	case ApplicationIdentifier:
+		return "app"
+	case FeatureMetadata2:
+		return "m"
+	case AdditionalMetadata:
+		fallthrough
+	default:
+		return "md"
+	}
+}
+
+const execEnvVar = `AWS_EXECUTION_ENV`
+
+var validChars = map[rune]bool{
+	'!': true, '#': true, '$': true, '%': true, '&': true, '\'': true, '*': true, '+': true,
+	'-': true, '.': true, '^': true, '_': true, '`': true, '|': true, '~': true,
+}
+
+// UserAgentFeature enumerates tracked SDK features.
+type UserAgentFeature string
+
+// Enumerates UserAgentFeature.
+const (
+	UserAgentFeatureResourceModel          UserAgentFeature = "A" // n/a (we don't generate separate resource types)
+	UserAgentFeatureWaiter                                  = "B"
+	UserAgentFeaturePaginator                               = "C"
+	UserAgentFeatureRetryModeLegacy                         = "D" // n/a (equivalent to standard)
+	UserAgentFeatureRetryModeStandard                       = "E"
+	UserAgentFeatureRetryModeAdaptive                       = "F"
+	UserAgentFeatureS3Transfer                              = "G"
+	UserAgentFeatureS3CryptoV1N                             = "H" // n/a (crypto client is external)
+	UserAgentFeatureS3CryptoV2                              = "I" // n/a
+	UserAgentFeatureS3ExpressBucket                         = "J"
+	UserAgentFeatureS3AccessGrants                          = "K" // not yet implemented
+	UserAgentFeatureGZIPRequestCompression                  = "L"
+)
+
+// RequestUserAgent is a build middleware that sets the User-Agent for the request.
+type RequestUserAgent struct {
+	sdkAgent, userAgent *smithyhttp.UserAgentBuilder
+	features            map[UserAgentFeature]struct{}
+}
+
+// NewRequestUserAgent returns a new requestUserAgent which will set the User-Agent and X-Amz-User-Agent for the
+// request.
+//
+// User-Agent example:
+//
+//	aws-sdk-go-v2/1.2.3
+//
+// X-Amz-User-Agent example:
+//
+//	aws-sdk-go-v2/1.2.3 md/GOOS/linux md/GOARCH/amd64 lang/go/1.15
+func NewRequestUserAgent() *RequestUserAgent {
+	userAgent, sdkAgent := smithyhttp.NewUserAgentBuilder(), smithyhttp.NewUserAgentBuilder()
+	addProductName(userAgent)
+	addProductName(sdkAgent)
+
+	r := &RequestUserAgent{
+		sdkAgent:  sdkAgent,
+		userAgent: userAgent,
+		features:  map[UserAgentFeature]struct{}{},
+	}
+
+	addSDKMetadata(r)
+
+	return r
+}
+
+func addSDKMetadata(r *RequestUserAgent) {
+	r.AddSDKAgentKey(OperatingSystemMetadata, getNormalizedOSName())
+	r.AddSDKAgentKeyValue(LanguageMetadata, "go", languageVersion)
+	r.AddSDKAgentKeyValue(AdditionalMetadata, "GOOS", runtime.GOOS)
+	r.AddSDKAgentKeyValue(AdditionalMetadata, "GOARCH", runtime.GOARCH)
+	if ev := os.Getenv(execEnvVar); len(ev) > 0 {
+		r.AddSDKAgentKey(EnvironmentMetadata, ev)
+	}
+}
+
+func addProductName(builder *smithyhttp.UserAgentBuilder) {
+	builder.AddKeyValue(aws.SDKName, aws.SDKVersion)
+}
+
+// AddUserAgentKey retrieves a requestUserAgent from the provided stack, or initializes one.
+func AddUserAgentKey(key string) func(*middleware.Stack) error {
+	return func(stack *middleware.Stack) error {
+		requestUserAgent, err := getOrAddRequestUserAgent(stack)
+		if err != nil {
+			return err
+		}
+		requestUserAgent.AddUserAgentKey(key)
+		return nil
+	}
+}
+
+// AddUserAgentKeyValue retrieves a requestUserAgent from the provided stack, or initializes one.
+func AddUserAgentKeyValue(key, value string) func(*middleware.Stack) error {
+	return func(stack *middleware.Stack) error {
+		requestUserAgent, err := getOrAddRequestUserAgent(stack)
+		if err != nil {
+			return err
+		}
+		requestUserAgent.AddUserAgentKeyValue(key, value)
+		return nil
+	}
+}
+
+// AddSDKAgentKey retrieves a requestUserAgent from the provided stack, or initializes one.
+func AddSDKAgentKey(keyType SDKAgentKeyType, key string) func(*middleware.Stack) error {
+	return func(stack *middleware.Stack) error {
+		requestUserAgent, err := getOrAddRequestUserAgent(stack)
+		if err != nil {
+			return err
+		}
+		requestUserAgent.AddSDKAgentKey(keyType, key)
+		return nil
+	}
+}
+
+// AddSDKAgentKeyValue retrieves a requestUserAgent from the provided stack, or initializes one.
+func AddSDKAgentKeyValue(keyType SDKAgentKeyType, key, value string) func(*middleware.Stack) error {
+	return func(stack *middleware.Stack) error {
+		requestUserAgent, err := getOrAddRequestUserAgent(stack)
+		if err != nil {
+			return err
+		}
+		requestUserAgent.AddSDKAgentKeyValue(keyType, key, value)
+		return nil
+	}
+}
+
+// AddRequestUserAgentMiddleware registers a requestUserAgent middleware on the stack if not present.
+func AddRequestUserAgentMiddleware(stack *middleware.Stack) error {
+	_, err := getOrAddRequestUserAgent(stack)
+	return err
+}
+
+func getOrAddRequestUserAgent(stack *middleware.Stack) (*RequestUserAgent, error) {
+	id := (*RequestUserAgent)(nil).ID()
+	bm, ok := stack.Build.Get(id)
+	if !ok {
+		bm = NewRequestUserAgent()
+		err := stack.Build.Add(bm, middleware.After)
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	requestUserAgent, ok := bm.(*RequestUserAgent)
+	if !ok {
+		return nil, fmt.Errorf("%T for %s middleware did not match expected type", bm, id)
+	}
+
+	return requestUserAgent, nil
+}
+
+// AddUserAgentKey adds the component identified by name to the User-Agent string.
+func (u *RequestUserAgent) AddUserAgentKey(key string) {
+	u.userAgent.AddKey(strings.Map(rules, key))
+}
+
+// AddUserAgentKeyValue adds the key identified by the given name and value to the User-Agent string.
+func (u *RequestUserAgent) AddUserAgentKeyValue(key, value string) {
+	u.userAgent.AddKeyValue(strings.Map(rules, key), strings.Map(rules, value))
+}
+
+// AddUserAgentFeature adds the feature ID to the tracking list to be emitted
+// in the final User-Agent string.
+func (u *RequestUserAgent) AddUserAgentFeature(feature UserAgentFeature) {
+	u.features[feature] = struct{}{}
+}
+
+// AddSDKAgentKey adds the component identified by name to the User-Agent string.
+func (u *RequestUserAgent) AddSDKAgentKey(keyType SDKAgentKeyType, key string) {
+	// TODO: should target sdkAgent
+	u.userAgent.AddKey(keyType.string() + "/" + strings.Map(rules, key))
+}
+
+// AddSDKAgentKeyValue adds the key identified by the given name and value to the User-Agent string.
+func (u *RequestUserAgent) AddSDKAgentKeyValue(keyType SDKAgentKeyType, key, value string) {
+	// TODO: should target sdkAgent
+	u.userAgent.AddKeyValue(keyType.string(), strings.Map(rules, key)+"#"+strings.Map(rules, value))
+}
+
+// ID the name of the middleware.
+func (u *RequestUserAgent) ID() string {
+	return "UserAgent"
+}
+
+// HandleBuild adds or appends the constructed user agent to the request.
+func (u *RequestUserAgent) HandleBuild(ctx context.Context, in middleware.BuildInput, next middleware.BuildHandler) (
+	out middleware.BuildOutput, metadata middleware.Metadata, err error,
+) {
+	switch req := in.Request.(type) {
+	case *smithyhttp.Request:
+		u.addHTTPUserAgent(req)
+		// TODO: To be re-enabled
+		// u.addHTTPSDKAgent(req)
+	default:
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	return next.HandleBuild(ctx, in)
+}
+
+func (u *RequestUserAgent) addHTTPUserAgent(request *smithyhttp.Request) {
+	const userAgent = "User-Agent"
+	updateHTTPHeader(request, userAgent, u.userAgent.Build())
+	if len(u.features) > 0 {
+		updateHTTPHeader(request, userAgent, buildFeatureMetrics(u.features))
+	}
+}
+
+func (u *RequestUserAgent) addHTTPSDKAgent(request *smithyhttp.Request) {
+	const sdkAgent = "X-Amz-User-Agent"
+	updateHTTPHeader(request, sdkAgent, u.sdkAgent.Build())
+}
+
+func updateHTTPHeader(request *smithyhttp.Request, header string, value string) {
+	var current string
+	if v := request.Header[header]; len(v) > 0 {
+		current = v[0]
+	}
+	if len(current) > 0 {
+		current = value + " " + current
+	} else {
+		current = value
+	}
+	request.Header[header] = append(request.Header[header][:0], current)
+}
+
+func rules(r rune) rune {
+	switch {
+	case r >= '0' && r <= '9':
+		return r
+	case r >= 'A' && r <= 'Z' || r >= 'a' && r <= 'z':
+		return r
+	case validChars[r]:
+		return r
+	default:
+		return '-'
+	}
+}
+
+func buildFeatureMetrics(features map[UserAgentFeature]struct{}) string {
+	fs := make([]string, 0, len(features))
+	for f := range features {
+		fs = append(fs, string(f))
+	}
+
+	sort.Strings(fs)
+	return fmt.Sprintf("%s/%s", FeatureMetadata2.string(), strings.Join(fs, ","))
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/CHANGELOG.md 🔗

@@ -0,0 +1,114 @@
+# v1.6.3 (2024-06-28)
+
+* No change notes available for this release.
+
+# v1.6.2 (2024-03-29)
+
+* No change notes available for this release.
+
+# v1.6.1 (2024-02-21)
+
+* No change notes available for this release.
+
+# v1.6.0 (2024-02-13)
+
+* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+
+# v1.5.4 (2023-12-07)
+
+* No change notes available for this release.
+
+# v1.5.3 (2023-11-30)
+
+* No change notes available for this release.
+
+# v1.5.2 (2023-11-29)
+
+* No change notes available for this release.
+
+# v1.5.1 (2023-11-15)
+
+* No change notes available for this release.
+
+# v1.5.0 (2023-10-31)
+
+* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
+
+# v1.4.14 (2023-10-06)
+
+* No change notes available for this release.
+
+# v1.4.13 (2023-08-18)
+
+* No change notes available for this release.
+
+# v1.4.12 (2023-08-07)
+
+* No change notes available for this release.
+
+# v1.4.11 (2023-07-31)
+
+* No change notes available for this release.
+
+# v1.4.10 (2022-12-02)
+
+* No change notes available for this release.
+
+# v1.4.9 (2022-10-24)
+
+* No change notes available for this release.
+
+# v1.4.8 (2022-09-14)
+
+* No change notes available for this release.
+
+# v1.4.7 (2022-09-02)
+
+* No change notes available for this release.
+
+# v1.4.6 (2022-08-31)
+
+* No change notes available for this release.
+
+# v1.4.5 (2022-08-29)
+
+* No change notes available for this release.
+
+# v1.4.4 (2022-08-09)
+
+* No change notes available for this release.
+
+# v1.4.3 (2022-06-29)
+
+* No change notes available for this release.
+
+# v1.4.2 (2022-06-07)
+
+* No change notes available for this release.
+
+# v1.4.1 (2022-03-24)
+
+* No change notes available for this release.
+
+# v1.4.0 (2022-03-08)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+
+# v1.3.0 (2022-02-24)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+
+# v1.2.0 (2022-01-14)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+
+# v1.1.0 (2022-01-07)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+
+# v1.0.0 (2021-11-06)
+
+* **Announcement**: Support has been added for AWS EventStream APIs for Kinesis, S3, and Transcribe Streaming. Support for the Lex Runtime V2 EventStream API will be added in a future release.
+* **Release**: Protocol support has been added for AWS event stream.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/LICENSE.txt 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/debug.go 🔗

@@ -0,0 +1,144 @@
+package eventstream
+
+import (
+	"bytes"
+	"encoding/base64"
+	"encoding/json"
+	"fmt"
+	"strconv"
+)
+
+type decodedMessage struct {
+	rawMessage
+	Headers decodedHeaders `json:"headers"`
+}
+type jsonMessage struct {
+	Length     json.Number    `json:"total_length"`
+	HeadersLen json.Number    `json:"headers_length"`
+	PreludeCRC json.Number    `json:"prelude_crc"`
+	Headers    decodedHeaders `json:"headers"`
+	Payload    []byte         `json:"payload"`
+	CRC        json.Number    `json:"message_crc"`
+}
+
+func (d *decodedMessage) UnmarshalJSON(b []byte) (err error) {
+	var jsonMsg jsonMessage
+	if err = json.Unmarshal(b, &jsonMsg); err != nil {
+		return err
+	}
+
+	d.Length, err = numAsUint32(jsonMsg.Length)
+	if err != nil {
+		return err
+	}
+	d.HeadersLen, err = numAsUint32(jsonMsg.HeadersLen)
+	if err != nil {
+		return err
+	}
+	d.PreludeCRC, err = numAsUint32(jsonMsg.PreludeCRC)
+	if err != nil {
+		return err
+	}
+	d.Headers = jsonMsg.Headers
+	d.Payload = jsonMsg.Payload
+	d.CRC, err = numAsUint32(jsonMsg.CRC)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func (d *decodedMessage) MarshalJSON() ([]byte, error) {
+	jsonMsg := jsonMessage{
+		Length:     json.Number(strconv.Itoa(int(d.Length))),
+		HeadersLen: json.Number(strconv.Itoa(int(d.HeadersLen))),
+		PreludeCRC: json.Number(strconv.Itoa(int(d.PreludeCRC))),
+		Headers:    d.Headers,
+		Payload:    d.Payload,
+		CRC:        json.Number(strconv.Itoa(int(d.CRC))),
+	}
+
+	return json.Marshal(jsonMsg)
+}
+
+func numAsUint32(n json.Number) (uint32, error) {
+	v, err := n.Int64()
+	if err != nil {
+		return 0, fmt.Errorf("failed to get int64 json number, %v", err)
+	}
+
+	return uint32(v), nil
+}
+
+func (d decodedMessage) Message() Message {
+	return Message{
+		Headers: Headers(d.Headers),
+		Payload: d.Payload,
+	}
+}
+
+type decodedHeaders Headers
+
+func (hs *decodedHeaders) UnmarshalJSON(b []byte) error {
+	var jsonHeaders []struct {
+		Name  string      `json:"name"`
+		Type  valueType   `json:"type"`
+		Value interface{} `json:"value"`
+	}
+
+	decoder := json.NewDecoder(bytes.NewReader(b))
+	decoder.UseNumber()
+	if err := decoder.Decode(&jsonHeaders); err != nil {
+		return err
+	}
+
+	var headers Headers
+	for _, h := range jsonHeaders {
+		value, err := valueFromType(h.Type, h.Value)
+		if err != nil {
+			return err
+		}
+		headers.Set(h.Name, value)
+	}
+	*hs = decodedHeaders(headers)
+
+	return nil
+}
+
+func valueFromType(typ valueType, val interface{}) (Value, error) {
+	switch typ {
+	case trueValueType:
+		return BoolValue(true), nil
+	case falseValueType:
+		return BoolValue(false), nil
+	case int8ValueType:
+		v, err := val.(json.Number).Int64()
+		return Int8Value(int8(v)), err
+	case int16ValueType:
+		v, err := val.(json.Number).Int64()
+		return Int16Value(int16(v)), err
+	case int32ValueType:
+		v, err := val.(json.Number).Int64()
+		return Int32Value(int32(v)), err
+	case int64ValueType:
+		v, err := val.(json.Number).Int64()
+		return Int64Value(v), err
+	case bytesValueType:
+		v, err := base64.StdEncoding.DecodeString(val.(string))
+		return BytesValue(v), err
+	case stringValueType:
+		v, err := base64.StdEncoding.DecodeString(val.(string))
+		return StringValue(string(v)), err
+	case timestampValueType:
+		v, err := val.(json.Number).Int64()
+		return TimestampValue(timeFromEpochMilli(v)), err
+	case uuidValueType:
+		v, err := base64.StdEncoding.DecodeString(val.(string))
+		var tv UUIDValue
+		copy(tv[:], v)
+		return tv, err
+	default:
+		panic(fmt.Sprintf("unknown type, %s, %T", typ.String(), val))
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/decode.go 🔗

@@ -0,0 +1,218 @@
+package eventstream
+
+import (
+	"bytes"
+	"encoding/binary"
+	"encoding/hex"
+	"encoding/json"
+	"fmt"
+	"github.com/aws/smithy-go/logging"
+	"hash"
+	"hash/crc32"
+	"io"
+)
+
+// DecoderOptions is the Decoder configuration options.
+type DecoderOptions struct {
+	Logger      logging.Logger
+	LogMessages bool
+}
+
+// Decoder provides decoding of Event Stream messages.
+type Decoder struct {
+	options DecoderOptions
+}
+
+// NewDecoder initializes and returns a Decoder for decoding event
+// stream messages from the reader provided.
+func NewDecoder(optFns ...func(*DecoderOptions)) *Decoder {
+	options := DecoderOptions{}
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	return &Decoder{
+		options: options,
+	}
+}
+
+// Decode attempts to decode a single message from the event stream reader.
+// Will return the event stream message, or error if decodeMessage fails to read
+// the message from the stream.
+//
+// payloadBuf is a byte slice that will be used in the returned Message.Payload. Callers
+// must ensure that the Message.Payload from a previous decode has been consumed before passing in the same underlying
+// payloadBuf byte slice.
+func (d *Decoder) Decode(reader io.Reader, payloadBuf []byte) (m Message, err error) {
+	if d.options.Logger != nil && d.options.LogMessages {
+		debugMsgBuf := bytes.NewBuffer(nil)
+		reader = io.TeeReader(reader, debugMsgBuf)
+		defer func() {
+			logMessageDecode(d.options.Logger, debugMsgBuf, m, err)
+		}()
+	}
+
+	m, err = decodeMessage(reader, payloadBuf)
+
+	return m, err
+}
+
+// decodeMessage attempts to decode a single message from the event stream reader.
+// Will return the event stream message, or error if decodeMessage fails to read
+// the message from the reader.
+func decodeMessage(reader io.Reader, payloadBuf []byte) (m Message, err error) {
+	crc := crc32.New(crc32IEEETable)
+	hashReader := io.TeeReader(reader, crc)
+
+	prelude, err := decodePrelude(hashReader, crc)
+	if err != nil {
+		return Message{}, err
+	}
+
+	if prelude.HeadersLen > 0 {
+		lr := io.LimitReader(hashReader, int64(prelude.HeadersLen))
+		m.Headers, err = decodeHeaders(lr)
+		if err != nil {
+			return Message{}, err
+		}
+	}
+
+	if payloadLen := prelude.PayloadLen(); payloadLen > 0 {
+		buf, err := decodePayload(payloadBuf, io.LimitReader(hashReader, int64(payloadLen)))
+		if err != nil {
+			return Message{}, err
+		}
+		m.Payload = buf
+	}
+
+	msgCRC := crc.Sum32()
+	if err := validateCRC(reader, msgCRC); err != nil {
+		return Message{}, err
+	}
+
+	return m, nil
+}
+
+func logMessageDecode(logger logging.Logger, msgBuf *bytes.Buffer, msg Message, decodeErr error) {
+	w := bytes.NewBuffer(nil)
+	defer func() { logger.Logf(logging.Debug, w.String()) }()
+
+	fmt.Fprintf(w, "Raw message:\n%s\n",
+		hex.Dump(msgBuf.Bytes()))
+
+	if decodeErr != nil {
+		fmt.Fprintf(w, "decodeMessage error: %v\n", decodeErr)
+		return
+	}
+
+	rawMsg, err := msg.rawMessage()
+	if err != nil {
+		fmt.Fprintf(w, "failed to create raw message, %v\n", err)
+		return
+	}
+
+	decodedMsg := decodedMessage{
+		rawMessage: rawMsg,
+		Headers:    decodedHeaders(msg.Headers),
+	}
+
+	fmt.Fprintf(w, "Decoded message:\n")
+	encoder := json.NewEncoder(w)
+	if err := encoder.Encode(decodedMsg); err != nil {
+		fmt.Fprintf(w, "failed to generate decoded message, %v\n", err)
+	}
+}
+
+func decodePrelude(r io.Reader, crc hash.Hash32) (messagePrelude, error) {
+	var p messagePrelude
+
+	var err error
+	p.Length, err = decodeUint32(r)
+	if err != nil {
+		return messagePrelude{}, err
+	}
+
+	p.HeadersLen, err = decodeUint32(r)
+	if err != nil {
+		return messagePrelude{}, err
+	}
+
+	if err := p.ValidateLens(); err != nil {
+		return messagePrelude{}, err
+	}
+
+	preludeCRC := crc.Sum32()
+	if err := validateCRC(r, preludeCRC); err != nil {
+		return messagePrelude{}, err
+	}
+
+	p.PreludeCRC = preludeCRC
+
+	return p, nil
+}
+
+func decodePayload(buf []byte, r io.Reader) ([]byte, error) {
+	w := bytes.NewBuffer(buf[0:0])
+
+	_, err := io.Copy(w, r)
+	return w.Bytes(), err
+}
+
+func decodeUint8(r io.Reader) (uint8, error) {
+	type byteReader interface {
+		ReadByte() (byte, error)
+	}
+
+	if br, ok := r.(byteReader); ok {
+		v, err := br.ReadByte()
+		return v, err
+	}
+
+	var b [1]byte
+	_, err := io.ReadFull(r, b[:])
+	return b[0], err
+}
+
+func decodeUint16(r io.Reader) (uint16, error) {
+	var b [2]byte
+	bs := b[:]
+	_, err := io.ReadFull(r, bs)
+	if err != nil {
+		return 0, err
+	}
+	return binary.BigEndian.Uint16(bs), nil
+}
+
+func decodeUint32(r io.Reader) (uint32, error) {
+	var b [4]byte
+	bs := b[:]
+	_, err := io.ReadFull(r, bs)
+	if err != nil {
+		return 0, err
+	}
+	return binary.BigEndian.Uint32(bs), nil
+}
+
+func decodeUint64(r io.Reader) (uint64, error) {
+	var b [8]byte
+	bs := b[:]
+	_, err := io.ReadFull(r, bs)
+	if err != nil {
+		return 0, err
+	}
+	return binary.BigEndian.Uint64(bs), nil
+}
+
+func validateCRC(r io.Reader, expect uint32) error {
+	msgCRC, err := decodeUint32(r)
+	if err != nil {
+		return err
+	}
+
+	if msgCRC != expect {
+		return ChecksumError{}
+	}
+
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/encode.go 🔗

@@ -0,0 +1,167 @@
+package eventstream
+
+import (
+	"bytes"
+	"encoding/binary"
+	"encoding/hex"
+	"encoding/json"
+	"fmt"
+	"github.com/aws/smithy-go/logging"
+	"hash"
+	"hash/crc32"
+	"io"
+)
+
+// EncoderOptions is the configuration options for Encoder.
+type EncoderOptions struct {
+	Logger      logging.Logger
+	LogMessages bool
+}
+
+// Encoder provides EventStream message encoding.
+type Encoder struct {
+	options EncoderOptions
+
+	headersBuf *bytes.Buffer
+	messageBuf *bytes.Buffer
+}
+
+// NewEncoder initializes and returns an Encoder to encode Event Stream
+// messages.
+func NewEncoder(optFns ...func(*EncoderOptions)) *Encoder {
+	o := EncoderOptions{}
+
+	for _, fn := range optFns {
+		fn(&o)
+	}
+
+	return &Encoder{
+		options:    o,
+		headersBuf: bytes.NewBuffer(nil),
+		messageBuf: bytes.NewBuffer(nil),
+	}
+}
+
+// Encode encodes a single EventStream message to the io.Writer the Encoder
+// was created with. An error is returned if writing the message fails.
+func (e *Encoder) Encode(w io.Writer, msg Message) (err error) {
+	e.headersBuf.Reset()
+	e.messageBuf.Reset()
+
+	var writer io.Writer = e.messageBuf
+	if e.options.Logger != nil && e.options.LogMessages {
+		encodeMsgBuf := bytes.NewBuffer(nil)
+		writer = io.MultiWriter(writer, encodeMsgBuf)
+		defer func() {
+			logMessageEncode(e.options.Logger, encodeMsgBuf, msg, err)
+		}()
+	}
+
+	if err = EncodeHeaders(e.headersBuf, msg.Headers); err != nil {
+		return err
+	}
+
+	crc := crc32.New(crc32IEEETable)
+	hashWriter := io.MultiWriter(writer, crc)
+
+	headersLen := uint32(e.headersBuf.Len())
+	payloadLen := uint32(len(msg.Payload))
+
+	if err = encodePrelude(hashWriter, crc, headersLen, payloadLen); err != nil {
+		return err
+	}
+
+	if headersLen > 0 {
+		if _, err = io.Copy(hashWriter, e.headersBuf); err != nil {
+			return err
+		}
+	}
+
+	if payloadLen > 0 {
+		if _, err = hashWriter.Write(msg.Payload); err != nil {
+			return err
+		}
+	}
+
+	msgCRC := crc.Sum32()
+	if err := binary.Write(writer, binary.BigEndian, msgCRC); err != nil {
+		return err
+	}
+
+	_, err = io.Copy(w, e.messageBuf)
+
+	return err
+}
+
+func logMessageEncode(logger logging.Logger, msgBuf *bytes.Buffer, msg Message, encodeErr error) {
+	w := bytes.NewBuffer(nil)
+	defer func() { logger.Logf(logging.Debug, w.String()) }()
+
+	fmt.Fprintf(w, "Message to encode:\n")
+	encoder := json.NewEncoder(w)
+	if err := encoder.Encode(msg); err != nil {
+		fmt.Fprintf(w, "Failed to get encoded message, %v\n", err)
+	}
+
+	if encodeErr != nil {
+		fmt.Fprintf(w, "Encode error: %v\n", encodeErr)
+		return
+	}
+
+	fmt.Fprintf(w, "Raw message:\n%s\n", hex.Dump(msgBuf.Bytes()))
+}
+
+func encodePrelude(w io.Writer, crc hash.Hash32, headersLen, payloadLen uint32) error {
+	p := messagePrelude{
+		Length:     minMsgLen + headersLen + payloadLen,
+		HeadersLen: headersLen,
+	}
+	if err := p.ValidateLens(); err != nil {
+		return err
+	}
+
+	err := binaryWriteFields(w, binary.BigEndian,
+		p.Length,
+		p.HeadersLen,
+	)
+	if err != nil {
+		return err
+	}
+
+	p.PreludeCRC = crc.Sum32()
+	err = binary.Write(w, binary.BigEndian, p.PreludeCRC)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+// EncodeHeaders writes the header values to the writer encoded in the event
+// stream format. Returns an error if a header fails to encode.
+func EncodeHeaders(w io.Writer, headers Headers) error {
+	for _, h := range headers {
+		hn := headerName{
+			Len: uint8(len(h.Name)),
+		}
+		copy(hn.Name[:hn.Len], h.Name)
+		if err := hn.encode(w); err != nil {
+			return err
+		}
+
+		if err := h.Value.encode(w); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func binaryWriteFields(w io.Writer, order binary.ByteOrder, vs ...interface{}) error {
+	for _, v := range vs {
+		if err := binary.Write(w, order, v); err != nil {
+			return err
+		}
+	}
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/error.go 🔗

@@ -0,0 +1,23 @@
+package eventstream
+
+import "fmt"
+
+// LengthError provides the error for items being larger than a maximum length.
+type LengthError struct {
+	Part  string
+	Want  int
+	Have  int
+	Value interface{}
+}
+
+func (e LengthError) Error() string {
+	return fmt.Sprintf("%s length invalid, %d/%d, %v",
+		e.Part, e.Want, e.Have, e.Value)
+}
+
+// ChecksumError provides the error for message checksum invalidation errors.
+type ChecksumError struct{}
+
+func (e ChecksumError) Error() string {
+	return "message checksum mismatch"
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/eventstreamapi/headers.go 🔗

@@ -0,0 +1,24 @@
+package eventstreamapi
+
+// EventStream headers with specific meaning to async API functionality.
+const (
+	ChunkSignatureHeader = `:chunk-signature` // chunk signature for message
+	DateHeader           = `:date`            // Date header for signature
+	ContentTypeHeader    = ":content-type"    // message payload content-type
+
+	// Message header and values
+	MessageTypeHeader    = `:message-type` // Identifies type of message.
+	EventMessageType     = `event`
+	ErrorMessageType     = `error`
+	ExceptionMessageType = `exception`
+
+	// Message Events
+	EventTypeHeader = `:event-type` // Identifies message event type e.g. "Stats".
+
+	// Message Error
+	ErrorCodeHeader    = `:error-code`
+	ErrorMessageHeader = `:error-message`
+
+	// Message Exception
+	ExceptionTypeHeader = `:exception-type`
+)

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/eventstreamapi/middleware.go 🔗

@@ -0,0 +1,71 @@
+package eventstreamapi
+
+import (
+	"context"
+	"fmt"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"io"
+)
+
+type eventStreamWriterKey struct{}
+
+// GetInputStreamWriter returns the io.PipeWriter used for the operation's input event stream.
+func GetInputStreamWriter(ctx context.Context) io.WriteCloser {
+	writeCloser, _ := middleware.GetStackValue(ctx, eventStreamWriterKey{}).(io.WriteCloser)
+	return writeCloser
+}
+
+func setInputStreamWriter(ctx context.Context, writeCloser io.WriteCloser) context.Context {
+	return middleware.WithStackValue(ctx, eventStreamWriterKey{}, writeCloser)
+}
+
+// InitializeStreamWriter is a Finalize middleware that initializes an in-memory pipe for sending event stream messages
+// via the HTTP request body.
+type InitializeStreamWriter struct{}
+
+// AddInitializeStreamWriter adds the InitializeStreamWriter middleware to the provided stack.
+func AddInitializeStreamWriter(stack *middleware.Stack) error {
+	return stack.Finalize.Add(&InitializeStreamWriter{}, middleware.After)
+}
+
+// ID returns the identifier for the middleware.
+func (i *InitializeStreamWriter) ID() string {
+	return "InitializeStreamWriter"
+}
+
+// HandleFinalize is the middleware implementation.
+func (i *InitializeStreamWriter) HandleFinalize(
+	ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler,
+) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type: %T", in.Request)
+	}
+
+	inputReader, inputWriter := io.Pipe()
+	defer func() {
+		if err == nil {
+			return
+		}
+		_ = inputReader.Close()
+		_ = inputWriter.Close()
+	}()
+
+	request, err = request.SetStream(inputReader)
+	if err != nil {
+		return out, metadata, err
+	}
+	in.Request = request
+
+	ctx = setInputStreamWriter(ctx, inputWriter)
+
+	out, metadata, err = next.HandleFinalize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	return out, metadata, err
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/eventstreamapi/transport.go 🔗

@@ -0,0 +1,13 @@
+//go:build go1.18
+// +build go1.18
+
+package eventstreamapi
+
+import smithyhttp "github.com/aws/smithy-go/transport/http"
+
+// ApplyHTTPTransportFixes applies fixes to the HTTP request for proper event stream functionality.
+//
+// This operation is a no-op for Go 1.18 and above.
+func ApplyHTTPTransportFixes(r *smithyhttp.Request) error {
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/header.go 🔗

@@ -0,0 +1,175 @@
+package eventstream
+
+import (
+	"encoding/binary"
+	"fmt"
+	"io"
+)
+
+// Headers are a collection of EventStream header values.
+type Headers []Header
+
+// Header is a single EventStream Key Value header pair.
+type Header struct {
+	Name  string
+	Value Value
+}
+
+// Set associates the name with a value. If the header name already exists in
+// the Headers the value will be replaced with the new one.
+func (hs *Headers) Set(name string, value Value) {
+	var i int
+	for ; i < len(*hs); i++ {
+		if (*hs)[i].Name == name {
+			(*hs)[i].Value = value
+			return
+		}
+	}
+
+	*hs = append(*hs, Header{
+		Name: name, Value: value,
+	})
+}
+
+// Get returns the Value associated with the header. Nil is returned if the
+// value does not exist.
+func (hs Headers) Get(name string) Value {
+	for i := 0; i < len(hs); i++ {
+		if h := hs[i]; h.Name == name {
+			return h.Value
+		}
+	}
+	return nil
+}
+
+// Del deletes the value in the Headers if it exists.
+func (hs *Headers) Del(name string) {
+	for i := 0; i < len(*hs); i++ {
+		if (*hs)[i].Name == name {
+			copy((*hs)[i:], (*hs)[i+1:])
+			(*hs) = (*hs)[:len(*hs)-1]
+		}
+	}
+}
+
+// Clone returns a deep copy of the headers
+func (hs Headers) Clone() Headers {
+	o := make(Headers, 0, len(hs))
+	for _, h := range hs {
+		o.Set(h.Name, h.Value)
+	}
+	return o
+}
+
+func decodeHeaders(r io.Reader) (Headers, error) {
+	hs := Headers{}
+
+	for {
+		name, err := decodeHeaderName(r)
+		if err != nil {
+			if err == io.EOF {
+				// EOF while getting header name means no more headers
+				break
+			}
+			return nil, err
+		}
+
+		value, err := decodeHeaderValue(r)
+		if err != nil {
+			return nil, err
+		}
+
+		hs.Set(name, value)
+	}
+
+	return hs, nil
+}
+
+func decodeHeaderName(r io.Reader) (string, error) {
+	var n headerName
+
+	var err error
+	n.Len, err = decodeUint8(r)
+	if err != nil {
+		return "", err
+	}
+
+	name := n.Name[:n.Len]
+	if _, err := io.ReadFull(r, name); err != nil {
+		return "", err
+	}
+
+	return string(name), nil
+}
+
+func decodeHeaderValue(r io.Reader) (Value, error) {
+	var raw rawValue
+
+	typ, err := decodeUint8(r)
+	if err != nil {
+		return nil, err
+	}
+	raw.Type = valueType(typ)
+
+	var v Value
+
+	switch raw.Type {
+	case trueValueType:
+		v = BoolValue(true)
+	case falseValueType:
+		v = BoolValue(false)
+	case int8ValueType:
+		var tv Int8Value
+		err = tv.decode(r)
+		v = tv
+	case int16ValueType:
+		var tv Int16Value
+		err = tv.decode(r)
+		v = tv
+	case int32ValueType:
+		var tv Int32Value
+		err = tv.decode(r)
+		v = tv
+	case int64ValueType:
+		var tv Int64Value
+		err = tv.decode(r)
+		v = tv
+	case bytesValueType:
+		var tv BytesValue
+		err = tv.decode(r)
+		v = tv
+	case stringValueType:
+		var tv StringValue
+		err = tv.decode(r)
+		v = tv
+	case timestampValueType:
+		var tv TimestampValue
+		err = tv.decode(r)
+		v = tv
+	case uuidValueType:
+		var tv UUIDValue
+		err = tv.decode(r)
+		v = tv
+	default:
+		panic(fmt.Sprintf("unknown value type %d", raw.Type))
+	}
+
+	// Error could be EOF, let caller deal with it
+	return v, err
+}
+
+const maxHeaderNameLen = 255
+
+type headerName struct {
+	Len  uint8
+	Name [maxHeaderNameLen]byte
+}
+
+func (v headerName) encode(w io.Writer) error {
+	if err := binary.Write(w, binary.BigEndian, v.Len); err != nil {
+		return err
+	}
+
+	_, err := w.Write(v.Name[:v.Len])
+	return err
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/header_value.go 🔗

@@ -0,0 +1,521 @@
+package eventstream
+
+import (
+	"encoding/base64"
+	"encoding/binary"
+	"encoding/hex"
+	"fmt"
+	"io"
+	"strconv"
+	"time"
+)
+
+const maxHeaderValueLen = 1<<15 - 1 // 2^15-1 or 32KB - 1
+
+// valueType is the EventStream header value type.
+type valueType uint8
+
+// Header value types
+const (
+	trueValueType valueType = iota
+	falseValueType
+	int8ValueType  // Byte
+	int16ValueType // Short
+	int32ValueType // Integer
+	int64ValueType // Long
+	bytesValueType
+	stringValueType
+	timestampValueType
+	uuidValueType
+)
+
+func (t valueType) String() string {
+	switch t {
+	case trueValueType:
+		return "bool"
+	case falseValueType:
+		return "bool"
+	case int8ValueType:
+		return "int8"
+	case int16ValueType:
+		return "int16"
+	case int32ValueType:
+		return "int32"
+	case int64ValueType:
+		return "int64"
+	case bytesValueType:
+		return "byte_array"
+	case stringValueType:
+		return "string"
+	case timestampValueType:
+		return "timestamp"
+	case uuidValueType:
+		return "uuid"
+	default:
+		return fmt.Sprintf("unknown value type %d", uint8(t))
+	}
+}
+
+type rawValue struct {
+	Type  valueType
+	Len   uint16 // Only set for variable length slices
+	Value []byte // byte representation of value, BigEndian encoding.
+}
+
+func (r rawValue) encodeScalar(w io.Writer, v interface{}) error {
+	return binaryWriteFields(w, binary.BigEndian,
+		r.Type,
+		v,
+	)
+}
+
+func (r rawValue) encodeFixedSlice(w io.Writer, v []byte) error {
+	binary.Write(w, binary.BigEndian, r.Type)
+
+	_, err := w.Write(v)
+	return err
+}
+
+func (r rawValue) encodeBytes(w io.Writer, v []byte) error {
+	if len(v) > maxHeaderValueLen {
+		return LengthError{
+			Part: "header value",
+			Want: maxHeaderValueLen, Have: len(v),
+			Value: v,
+		}
+	}
+	r.Len = uint16(len(v))
+
+	err := binaryWriteFields(w, binary.BigEndian,
+		r.Type,
+		r.Len,
+	)
+	if err != nil {
+		return err
+	}
+
+	_, err = w.Write(v)
+	return err
+}
+
+func (r rawValue) encodeString(w io.Writer, v string) error {
+	if len(v) > maxHeaderValueLen {
+		return LengthError{
+			Part: "header value",
+			Want: maxHeaderValueLen, Have: len(v),
+			Value: v,
+		}
+	}
+	r.Len = uint16(len(v))
+
+	type stringWriter interface {
+		WriteString(string) (int, error)
+	}
+
+	err := binaryWriteFields(w, binary.BigEndian,
+		r.Type,
+		r.Len,
+	)
+	if err != nil {
+		return err
+	}
+
+	if sw, ok := w.(stringWriter); ok {
+		_, err = sw.WriteString(v)
+	} else {
+		_, err = w.Write([]byte(v))
+	}
+
+	return err
+}
+
+func decodeFixedBytesValue(r io.Reader, buf []byte) error {
+	_, err := io.ReadFull(r, buf)
+	return err
+}
+
+func decodeBytesValue(r io.Reader) ([]byte, error) {
+	var raw rawValue
+	var err error
+	raw.Len, err = decodeUint16(r)
+	if err != nil {
+		return nil, err
+	}
+
+	buf := make([]byte, raw.Len)
+	_, err = io.ReadFull(r, buf)
+	if err != nil {
+		return nil, err
+	}
+
+	return buf, nil
+}
+
+func decodeStringValue(r io.Reader) (string, error) {
+	v, err := decodeBytesValue(r)
+	return string(v), err
+}
+
+// Value represents the abstract header value.
+type Value interface {
+	Get() interface{}
+	String() string
+	valueType() valueType
+	encode(io.Writer) error
+}
+
+// A BoolValue provides eventstream encoding, and representation
+// of a Go bool value.
+type BoolValue bool
+
+// Get returns the underlying type
+func (v BoolValue) Get() interface{} {
+	return bool(v)
+}
+
+// valueType returns the EventStream header value type value.
+func (v BoolValue) valueType() valueType {
+	if v {
+		return trueValueType
+	}
+	return falseValueType
+}
+
+func (v BoolValue) String() string {
+	return strconv.FormatBool(bool(v))
+}
+
+// encode encodes the BoolValue into an eventstream binary value
+// representation.
+func (v BoolValue) encode(w io.Writer) error {
+	return binary.Write(w, binary.BigEndian, v.valueType())
+}
+
+// An Int8Value provides eventstream encoding, and representation of a Go
+// int8 value.
+type Int8Value int8
+
+// Get returns the underlying value.
+func (v Int8Value) Get() interface{} {
+	return int8(v)
+}
+
+// valueType returns the EventStream header value type value.
+func (Int8Value) valueType() valueType {
+	return int8ValueType
+}
+
+func (v Int8Value) String() string {
+	return fmt.Sprintf("0x%02x", int8(v))
+}
+
+// encode encodes the Int8Value into an eventstream binary value
+// representation.
+func (v Int8Value) encode(w io.Writer) error {
+	raw := rawValue{
+		Type: v.valueType(),
+	}
+
+	return raw.encodeScalar(w, v)
+}
+
+func (v *Int8Value) decode(r io.Reader) error {
+	n, err := decodeUint8(r)
+	if err != nil {
+		return err
+	}
+
+	*v = Int8Value(n)
+	return nil
+}
+
+// An Int16Value provides eventstream encoding, and representation of a Go
+// int16 value.
+type Int16Value int16
+
+// Get returns the underlying value.
+func (v Int16Value) Get() interface{} {
+	return int16(v)
+}
+
+// valueType returns the EventStream header value type value.
+func (Int16Value) valueType() valueType {
+	return int16ValueType
+}
+
+func (v Int16Value) String() string {
+	return fmt.Sprintf("0x%04x", int16(v))
+}
+
+// encode encodes the Int16Value into an eventstream binary value
+// representation.
+func (v Int16Value) encode(w io.Writer) error {
+	raw := rawValue{
+		Type: v.valueType(),
+	}
+	return raw.encodeScalar(w, v)
+}
+
+func (v *Int16Value) decode(r io.Reader) error {
+	n, err := decodeUint16(r)
+	if err != nil {
+		return err
+	}
+
+	*v = Int16Value(n)
+	return nil
+}
+
+// An Int32Value provides eventstream encoding, and representation of a Go
+// int32 value.
+type Int32Value int32
+
+// Get returns the underlying value.
+func (v Int32Value) Get() interface{} {
+	return int32(v)
+}
+
+// valueType returns the EventStream header value type value.
+func (Int32Value) valueType() valueType {
+	return int32ValueType
+}
+
+func (v Int32Value) String() string {
+	return fmt.Sprintf("0x%08x", int32(v))
+}
+
+// encode encodes the Int32Value into an eventstream binary value
+// representation.
+func (v Int32Value) encode(w io.Writer) error {
+	raw := rawValue{
+		Type: v.valueType(),
+	}
+	return raw.encodeScalar(w, v)
+}
+
+func (v *Int32Value) decode(r io.Reader) error {
+	n, err := decodeUint32(r)
+	if err != nil {
+		return err
+	}
+
+	*v = Int32Value(n)
+	return nil
+}
+
+// An Int64Value provides eventstream encoding, and representation of a Go
+// int64 value.
+type Int64Value int64
+
+// Get returns the underlying value.
+func (v Int64Value) Get() interface{} {
+	return int64(v)
+}
+
+// valueType returns the EventStream header value type value.
+func (Int64Value) valueType() valueType {
+	return int64ValueType
+}
+
+func (v Int64Value) String() string {
+	return fmt.Sprintf("0x%016x", int64(v))
+}
+
+// encode encodes the Int64Value into an eventstream binary value
+// representation.
+func (v Int64Value) encode(w io.Writer) error {
+	raw := rawValue{
+		Type: v.valueType(),
+	}
+	return raw.encodeScalar(w, v)
+}
+
+func (v *Int64Value) decode(r io.Reader) error {
+	n, err := decodeUint64(r)
+	if err != nil {
+		return err
+	}
+
+	*v = Int64Value(n)
+	return nil
+}
+
+// A BytesValue provides eventstream encoding, and representation of a Go
+// byte slice.
+type BytesValue []byte
+
+// Get returns the underlying value.
+func (v BytesValue) Get() interface{} {
+	return []byte(v)
+}
+
+// valueType returns the EventStream header value type value.
+func (BytesValue) valueType() valueType {
+	return bytesValueType
+}
+
+func (v BytesValue) String() string {
+	return base64.StdEncoding.EncodeToString([]byte(v))
+}
+
+// encode encodes the BytesValue into an eventstream binary value
+// representation.
+func (v BytesValue) encode(w io.Writer) error {
+	raw := rawValue{
+		Type: v.valueType(),
+	}
+
+	return raw.encodeBytes(w, []byte(v))
+}
+
+func (v *BytesValue) decode(r io.Reader) error {
+	buf, err := decodeBytesValue(r)
+	if err != nil {
+		return err
+	}
+
+	*v = BytesValue(buf)
+	return nil
+}
+
+// A StringValue provides eventstream encoding, and representation of a Go
+// string.
+type StringValue string
+
+// Get returns the underlying value.
+func (v StringValue) Get() interface{} {
+	return string(v)
+}
+
+// valueType returns the EventStream header value type value.
+func (StringValue) valueType() valueType {
+	return stringValueType
+}
+
+func (v StringValue) String() string {
+	return string(v)
+}
+
+// encode encodes the StringValue into an eventstream binary value
+// representation.
+func (v StringValue) encode(w io.Writer) error {
+	raw := rawValue{
+		Type: v.valueType(),
+	}
+
+	return raw.encodeString(w, string(v))
+}
+
+func (v *StringValue) decode(r io.Reader) error {
+	s, err := decodeStringValue(r)
+	if err != nil {
+		return err
+	}
+
+	*v = StringValue(s)
+	return nil
+}
+
+// A TimestampValue provides eventstream encoding, and representation of a Go
+// timestamp.
+type TimestampValue time.Time
+
+// Get returns the underlying value.
+func (v TimestampValue) Get() interface{} {
+	return time.Time(v)
+}
+
+// valueType returns the EventStream header value type value.
+func (TimestampValue) valueType() valueType {
+	return timestampValueType
+}
+
+func (v TimestampValue) epochMilli() int64 {
+	nano := time.Time(v).UnixNano()
+	msec := nano / int64(time.Millisecond)
+	return msec
+}
+
+func (v TimestampValue) String() string {
+	msec := v.epochMilli()
+	return strconv.FormatInt(msec, 10)
+}
+
+// encode encodes the TimestampValue into an eventstream binary value
+// representation.
+func (v TimestampValue) encode(w io.Writer) error {
+	raw := rawValue{
+		Type: v.valueType(),
+	}
+
+	msec := v.epochMilli()
+	return raw.encodeScalar(w, msec)
+}
+
+func (v *TimestampValue) decode(r io.Reader) error {
+	n, err := decodeUint64(r)
+	if err != nil {
+		return err
+	}
+
+	*v = TimestampValue(timeFromEpochMilli(int64(n)))
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaler interface
+func (v TimestampValue) MarshalJSON() ([]byte, error) {
+	return []byte(v.String()), nil
+}
+
+func timeFromEpochMilli(t int64) time.Time {
+	secs := t / 1e3
+	msec := t % 1e3
+	return time.Unix(secs, msec*int64(time.Millisecond)).UTC()
+}
+
+// A UUIDValue provides eventstream encoding, and representation of a UUID
+// value.
+type UUIDValue [16]byte
+
+// Get returns the underlying value.
+func (v UUIDValue) Get() interface{} {
+	return v[:]
+}
+
+// valueType returns the EventStream header value type value.
+func (UUIDValue) valueType() valueType {
+	return uuidValueType
+}
+
+func (v UUIDValue) String() string {
+	var scratch [36]byte
+
+	const dash = '-'
+
+	hex.Encode(scratch[:8], v[0:4])
+	scratch[8] = dash
+	hex.Encode(scratch[9:13], v[4:6])
+	scratch[13] = dash
+	hex.Encode(scratch[14:18], v[6:8])
+	scratch[18] = dash
+	hex.Encode(scratch[19:23], v[8:10])
+	scratch[23] = dash
+	hex.Encode(scratch[24:], v[10:])
+
+	return string(scratch[:])
+}
+
+// encode encodes the UUIDValue into an eventstream binary value
+// representation.
+func (v UUIDValue) encode(w io.Writer) error {
+	raw := rawValue{
+		Type: v.valueType(),
+	}
+
+	return raw.encodeFixedSlice(w, v[:])
+}
+
+func (v *UUIDValue) decode(r io.Reader) error {
+	tv := (*v)[:]
+	return decodeFixedBytesValue(r, tv)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream/message.go 🔗

@@ -0,0 +1,117 @@
+package eventstream
+
+import (
+	"bytes"
+	"encoding/binary"
+	"hash/crc32"
+)
+
+const preludeLen = 8
+const preludeCRCLen = 4
+const msgCRCLen = 4
+const minMsgLen = preludeLen + preludeCRCLen + msgCRCLen
+const maxPayloadLen = 1024 * 1024 * 16 // 16MB
+const maxHeadersLen = 1024 * 128       // 128KB
+const maxMsgLen = minMsgLen + maxHeadersLen + maxPayloadLen
+
+var crc32IEEETable = crc32.MakeTable(crc32.IEEE)
+
+// A Message provides the eventstream message representation.
+type Message struct {
+	Headers Headers
+	Payload []byte
+}
+
+func (m *Message) rawMessage() (rawMessage, error) {
+	var raw rawMessage
+
+	if len(m.Headers) > 0 {
+		var headers bytes.Buffer
+		if err := EncodeHeaders(&headers, m.Headers); err != nil {
+			return rawMessage{}, err
+		}
+		raw.Headers = headers.Bytes()
+		raw.HeadersLen = uint32(len(raw.Headers))
+	}
+
+	raw.Length = raw.HeadersLen + uint32(len(m.Payload)) + minMsgLen
+
+	hash := crc32.New(crc32IEEETable)
+	binaryWriteFields(hash, binary.BigEndian, raw.Length, raw.HeadersLen)
+	raw.PreludeCRC = hash.Sum32()
+
+	binaryWriteFields(hash, binary.BigEndian, raw.PreludeCRC)
+
+	if raw.HeadersLen > 0 {
+		hash.Write(raw.Headers)
+	}
+
+	// Read payload bytes and update hash for it as well.
+	if len(m.Payload) > 0 {
+		raw.Payload = m.Payload
+		hash.Write(raw.Payload)
+	}
+
+	raw.CRC = hash.Sum32()
+
+	return raw, nil
+}
+
+// Clone returns a deep copy of the message.
+func (m Message) Clone() Message {
+	var payload []byte
+	if m.Payload != nil {
+		payload = make([]byte, len(m.Payload))
+		copy(payload, m.Payload)
+	}
+
+	return Message{
+		Headers: m.Headers.Clone(),
+		Payload: payload,
+	}
+}
+
+type messagePrelude struct {
+	Length     uint32
+	HeadersLen uint32
+	PreludeCRC uint32
+}
+
+func (p messagePrelude) PayloadLen() uint32 {
+	return p.Length - p.HeadersLen - minMsgLen
+}
+
+func (p messagePrelude) ValidateLens() error {
+	if p.Length == 0 || p.Length > maxMsgLen {
+		return LengthError{
+			Part: "message prelude",
+			Want: maxMsgLen,
+			Have: int(p.Length),
+		}
+	}
+	if p.HeadersLen > maxHeadersLen {
+		return LengthError{
+			Part: "message headers",
+			Want: maxHeadersLen,
+			Have: int(p.HeadersLen),
+		}
+	}
+	if payloadLen := p.PayloadLen(); payloadLen > maxPayloadLen {
+		return LengthError{
+			Part: "message payload",
+			Want: maxPayloadLen,
+			Have: int(payloadLen),
+		}
+	}
+
+	return nil
+}
+
+type rawMessage struct {
+	messagePrelude
+
+	Headers []byte
+	Payload []byte
+
+	CRC uint32
+}
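The prelude checksum computed in `rawMessage` above is just a CRC32 (IEEE) over the big-endian total length and headers length. A minimal sketch, with the hypothetical helper `preludeCRC` standing in for the `binaryWriteFields` call:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

// preludeCRC computes the 4-byte prelude checksum: CRC32 (IEEE) over the
// big-endian encodings of the total message length and the headers length.
func preludeCRC(totalLen, headersLen uint32) uint32 {
	h := crc32.New(crc32.MakeTable(crc32.IEEE))
	binary.Write(h, binary.BigEndian, totalLen)
	binary.Write(h, binary.BigEndian, headersLen)
	return h.Sum32()
}

func main() {
	// A minimal message: no headers, no payload, so the total length is just
	// the 16-byte envelope (prelude + prelude CRC + message CRC).
	fmt.Printf("%#08x\n", preludeCRC(16, 0))
}
```

The decoder recomputes this value over the received prelude bytes and rejects the message on mismatch, before it trusts the length fields.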

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/query/array.go 🔗

@@ -0,0 +1,72 @@
+package query
+
+import (
+	"fmt"
+	"net/url"
+)
+
+// Array represents the encoding of Query lists and sets. A Query array is a
+// representation of a list of values of a fixed type. A serialized array might
+// look like the following:
+//
+//	ListName.member.1=foo
+//	&ListName.member.2=bar
+//	&ListName.member.3=baz
+type Array struct {
+	// The query values to add the array to.
+	values url.Values
+	// The array's prefix, which includes the names of all parent structures
+	// and ends with the name of the list. For example, the prefix might be
+	// "ParentStructure.ListName". This prefix will be used to form the full
+	// keys for each element in the list. For example, an entry might have the
+	// key "ParentStructure.ListName.member.MemberName.1".
+	//
+	// While this is currently represented as a string that gets added to, it
+	// could also be represented as a stack that only gets condensed into a
+	// string when a finalized key is created. This could potentially reduce
+	// allocations.
+	prefix string
+	// Whether the list is flat or not. A list that is not flat will produce the
+	// following entry to the url.Values for a given entry:
+	//     ListName.MemberName.1=value
+	// A list that is flat will produce the following:
+	//     ListName.1=value
+	flat bool
+	// The location name of the member. In most cases this should be "member".
+	memberName string
+	// Elements are stored in values, so we keep track of the list size here.
+	size int32
+	// Empty lists are encoded as "<prefix>=", if we add a value later we will
+	// remove this encoding
+	emptyValue Value
+}
+
+func newArray(values url.Values, prefix string, flat bool, memberName string) *Array {
+	emptyValue := newValue(values, prefix, flat)
+	emptyValue.String("")
+
+	return &Array{
+		values:     values,
+		prefix:     prefix,
+		flat:       flat,
+		memberName: memberName,
+		emptyValue: emptyValue,
+	}
+}
+
+// Value adds a new element to the Query Array. Returns a Value type used to
+// encode the array element.
+func (a *Array) Value() Value {
+	if a.size == 0 {
+		delete(a.values, a.emptyValue.key)
+	}
+
+	// Query lists start at 1, so adjust the size first
+	a.size++
+	prefix := a.prefix
+	if !a.flat {
+		prefix = fmt.Sprintf("%s.%s", prefix, a.memberName)
+	}
+	// Lists can't have flat members
+	return newValue(a.values, fmt.Sprintf("%s.%d", prefix, a.size), false)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/query/encoder.go 🔗

@@ -0,0 +1,80 @@
+package query
+
+import (
+	"io"
+	"net/url"
+	"sort"
+)
+
+// Encoder is a Query encoder that supports construction of Query body
+// values using methods.
+type Encoder struct {
+	// The query values that will be built up to manage encoding.
+	values url.Values
+	// The writer that the encoded body will be written to.
+	writer io.Writer
+	Value
+}
+
+// NewEncoder returns a new Query body encoder
+func NewEncoder(writer io.Writer) *Encoder {
+	values := url.Values{}
+	return &Encoder{
+		values: values,
+		writer: writer,
+		Value:  newBaseValue(values),
+	}
+}
+
+// Encode writes the encoded Query values to the Encoder's writer, with keys
+// sorted to produce a stable output.
+func (e Encoder) Encode() error {
+	ws, ok := e.writer.(interface{ WriteString(string) (int, error) })
+	if !ok {
+		// Fall back to less optimal byte slice casting if WriteString isn't available.
+		ws = &wrapWriteString{writer: e.writer}
+	}
+
+	// Get the keys and sort them to have a stable output
+	keys := make([]string, 0, len(e.values))
+	for k := range e.values {
+		keys = append(keys, k)
+	}
+	sort.Strings(keys)
+	isFirstEntry := true
+	for _, key := range keys {
+		queryValues := e.values[key]
+		escapedKey := url.QueryEscape(key)
+		for _, value := range queryValues {
+			if !isFirstEntry {
+				if _, err := ws.WriteString(`&`); err != nil {
+					return err
+				}
+			} else {
+				isFirstEntry = false
+			}
+			if _, err := ws.WriteString(escapedKey); err != nil {
+				return err
+			}
+			if _, err := ws.WriteString(`=`); err != nil {
+				return err
+			}
+			if _, err := ws.WriteString(url.QueryEscape(value)); err != nil {
+				return err
+			}
+		}
+	}
+	return nil
+}
+
+// wrapWriteString wraps an io.Writer to provide a WriteString method
+// where one is not available.
+type wrapWriteString struct {
+	writer io.Writer
+}
+
+// WriteString writes a string to the wrapped writer by converting it to
+// a byte slice first.
+func (w wrapWriteString) WriteString(v string) (int, error) {
+	return w.writer.Write([]byte(v))
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/query/map.go 🔗

@@ -0,0 +1,78 @@
+package query
+
+import (
+	"fmt"
+	"net/url"
+)
+
+// Map represents the encoding of Query maps. A Query map is a representation
+// of a mapping of arbitrary string keys to arbitrary values of a fixed type.
+// A Map differs from an Object in that the set of keys is not fixed, in that
+// the values must all be of the same type, and that map entries are ordered.
+// A serialized map might look like the following:
+//
+//	MapName.entry.1.key=Foo
+//	&MapName.entry.1.value=spam
+//	&MapName.entry.2.key=Bar
+//	&MapName.entry.2.value=eggs
+type Map struct {
+	// The query values to add the map to.
+	values url.Values
+	// The map's prefix, which includes the names of all parent structures
+	// and ends with the name of the object. For example, the prefix might be
+	// "ParentStructure.MapName". This prefix will be used to form the full
+	// keys for each key-value pair of the map. For example, a value might have
+	// the key "ParentStructure.MapName.1.value".
+	//
+	// While this is currently represented as a string that gets added to, it
+	// could also be represented as a stack that only gets condensed into a
+	// string when a finalized key is created. This could potentially reduce
+	// allocations.
+	prefix string
+	// Whether the map is flat or not. A map that is not flat will produce the
+	// following entries to the url.Values for a given key-value pair:
+	//     MapName.entry.1.KeyLocationName=mykey
+	//     MapName.entry.1.ValueLocationName=myvalue
+	// A map that is flat will produce the following:
+	//     MapName.1.KeyLocationName=mykey
+	//     MapName.1.ValueLocationName=myvalue
+	flat bool
+	// The location name of the key. In most cases this should be "key".
+	keyLocationName string
+	// The location name of the value. In most cases this should be "value".
+	valueLocationName string
+	// Elements are stored in values, so we keep track of the list size here.
+	size int32
+}
+
+func newMap(values url.Values, prefix string, flat bool, keyLocationName string, valueLocationName string) *Map {
+	return &Map{
+		values:            values,
+		prefix:            prefix,
+		flat:              flat,
+		keyLocationName:   keyLocationName,
+		valueLocationName: valueLocationName,
+	}
+}
+
+// Key adds the given named key to the Query map.
+// Returns a Value encoder that should be used to encode a Query value type.
+func (m *Map) Key(name string) Value {
+	// Query lists start at 1, so adjust the size first
+	m.size++
+	var key string
+	var value string
+	if m.flat {
+		key = fmt.Sprintf("%s.%d.%s", m.prefix, m.size, m.keyLocationName)
+		value = fmt.Sprintf("%s.%d.%s", m.prefix, m.size, m.valueLocationName)
+	} else {
+		key = fmt.Sprintf("%s.entry.%d.%s", m.prefix, m.size, m.keyLocationName)
+		value = fmt.Sprintf("%s.entry.%d.%s", m.prefix, m.size, m.valueLocationName)
+	}
+
+	// The key can only be a string, so we just go ahead and set it here
+	newValue(m.values, key, false).String(name)
+
+	// Maps can't have flat members
+	return newValue(m.values, value, false)
+}
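The key construction inside `Map.Key` reduces to two format strings. A sketch (the helper name `entryKeys` and the literal `key`/`value` location names are assumptions for illustration):

```go
package main

import "fmt"

// entryKeys mirrors Map.Key's naming: non-flat maps insert an "entry"
// segment between the prefix and the 1-based index; flat maps omit it.
// "key" and "value" stand in for the key/value location names.
func entryKeys(prefix string, flat bool, i int) (keyName, valueName string) {
	if flat {
		return fmt.Sprintf("%s.%d.key", prefix, i),
			fmt.Sprintf("%s.%d.value", prefix, i)
	}
	return fmt.Sprintf("%s.entry.%d.key", prefix, i),
		fmt.Sprintf("%s.entry.%d.value", prefix, i)
}

func main() {
	k, v := entryKeys("MapName", false, 1)
	fmt.Println(k, v) // MapName.entry.1.key MapName.entry.1.value
}
```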

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/query/middleware.go 🔗

@@ -0,0 +1,62 @@
+package query
+
+import (
+	"context"
+	"fmt"
+	"io/ioutil"
+
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// AddAsGetRequestMiddleware adds a middleware to the Serialize stack after the
+// operation serializer that will convert the query request body to a GET
+// operation with the query message in the HTTP request querystring.
+func AddAsGetRequestMiddleware(stack *middleware.Stack) error {
+	return stack.Serialize.Insert(&asGetRequest{}, "OperationSerializer", middleware.After)
+}
+
+type asGetRequest struct{}
+
+func (*asGetRequest) ID() string { return "Query:AsGetRequest" }
+
+func (m *asGetRequest) HandleSerialize(
+	ctx context.Context, input middleware.SerializeInput, next middleware.SerializeHandler,
+) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := input.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("expect smithy HTTP Request, got %T", input.Request)
+	}
+
+	req.Method = "GET"
+
+	// If the stream is not set, nothing else to do.
+	stream := req.GetStream()
+	if stream == nil {
+		return next.HandleSerialize(ctx, input)
+	}
+
+	// Clear the stream since there will not be any body.
+	req.Header.Del("Content-Type")
+	req, err = req.SetStream(nil)
+	if err != nil {
+		return out, metadata, fmt.Errorf("unable to update request body, %w", err)
+	}
+	input.Request = req
+
+	// Update request query with the body's query string value.
+	delim := ""
+	if len(req.URL.RawQuery) != 0 {
+		delim = "&"
+	}
+
+	b, err := ioutil.ReadAll(stream)
+	if err != nil {
+		return out, metadata, fmt.Errorf("unable to get request body %w", err)
+	}
+	req.URL.RawQuery += delim + string(b)
+
+	return next.HandleSerialize(ctx, input)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/query/object.go 🔗

@@ -0,0 +1,69 @@
+package query
+
+import (
+	"fmt"
+	"net/url"
+)
+
+// Object represents the encoding of Query structures and unions. A Query
+// object is a representation of a mapping of string keys to arbitrary
+// values where there is a fixed set of keys whose values each have their
+// own known type. A serialized object might look like the following:
+//
+//	ObjectName.Foo=value
+//	&ObjectName.Bar=5
+type Object struct {
+	// The query values to add the object to.
+	values url.Values
+	// The object's prefix, which includes the names of all parent structures
+	// and ends with the name of the object. For example, the prefix might be
+	// "ParentStructure.ObjectName". This prefix will be used to form the full
+	// keys for each member of the object. For example, a member might have the
+	// key "ParentStructure.ObjectName.MemberName".
+	//
+	// While this is currently represented as a string that gets added to, it
+	// could also be represented as a stack that only gets condensed into a
+	// string when a finalized key is created. This could potentially reduce
+	// allocations.
+	prefix string
+}
+
+func newObject(values url.Values, prefix string) *Object {
+	return &Object{
+		values: values,
+		prefix: prefix,
+	}
+}
+
+// Key adds the given named key to the Query object.
+// Returns a Value encoder that should be used to encode a Query value type.
+func (o *Object) Key(name string) Value {
+	return o.key(name, false)
+}
+
+// KeyWithValues adds the given named key to the Query object.
+// Returns a Value encoder that should be used to encode a Query list of values.
+func (o *Object) KeyWithValues(name string) Value {
+	return o.keyWithValues(name, false)
+}
+
+// FlatKey adds the given named key to the Query object.
+// Returns a Value encoder that should be used to encode a Query value type. The
+// value will be flattened if it is a map or array.
+func (o *Object) FlatKey(name string) Value {
+	return o.key(name, true)
+}
+
+func (o *Object) key(name string, flatValue bool) Value {
+	if o.prefix != "" {
+		return newValue(o.values, fmt.Sprintf("%s.%s", o.prefix, name), flatValue)
+	}
+	return newValue(o.values, name, flatValue)
+}
+
+func (o *Object) keyWithValues(name string, flatValue bool) Value {
+	if o.prefix != "" {
+		return newAppendValue(o.values, fmt.Sprintf("%s.%s", o.prefix, name), flatValue)
+	}
+	return newAppendValue(o.values, name, flatValue)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/query/value.go 🔗

@@ -0,0 +1,115 @@
+package query
+
+import (
+	"math/big"
+	"net/url"
+
+	"github.com/aws/smithy-go/encoding/httpbinding"
+)
+
+// Value represents a Query Value type.
+type Value struct {
+	// The query values to add the value to.
+	values url.Values
+	// The value's key, which will form the prefix for complex types.
+	key string
+	// Whether the value should be flattened or not if it's a flattenable type.
+	flat       bool
+	queryValue httpbinding.QueryValue
+}
+
+func newValue(values url.Values, key string, flat bool) Value {
+	return Value{
+		values:     values,
+		key:        key,
+		flat:       flat,
+		queryValue: httpbinding.NewQueryValue(values, key, false),
+	}
+}
+
+func newAppendValue(values url.Values, key string, flat bool) Value {
+	return Value{
+		values:     values,
+		key:        key,
+		flat:       flat,
+		queryValue: httpbinding.NewQueryValue(values, key, true),
+	}
+}
+
+func newBaseValue(values url.Values) Value {
+	return Value{
+		values:     values,
+		queryValue: httpbinding.NewQueryValue(nil, "", false),
+	}
+}
+
+// Array returns a new Array encoder.
+func (qv Value) Array(locationName string) *Array {
+	return newArray(qv.values, qv.key, qv.flat, locationName)
+}
+
+// Object returns a new Object encoder.
+func (qv Value) Object() *Object {
+	return newObject(qv.values, qv.key)
+}
+
+// Map returns a new Map encoder.
+func (qv Value) Map(keyLocationName string, valueLocationName string) *Map {
+	return newMap(qv.values, qv.key, qv.flat, keyLocationName, valueLocationName)
+}
+
+// Base64EncodeBytes encodes v as a base64 query string value.
+// This is intended to enable compatibility with the JSON encoder.
+func (qv Value) Base64EncodeBytes(v []byte) {
+	qv.queryValue.Blob(v)
+}
+
+// Boolean encodes v as a query string value
+func (qv Value) Boolean(v bool) {
+	qv.queryValue.Boolean(v)
+}
+
+// String encodes v as a query string value
+func (qv Value) String(v string) {
+	qv.queryValue.String(v)
+}
+
+// Byte encodes v as a query string value
+func (qv Value) Byte(v int8) {
+	qv.queryValue.Byte(v)
+}
+
+// Short encodes v as a query string value
+func (qv Value) Short(v int16) {
+	qv.queryValue.Short(v)
+}
+
+// Integer encodes v as a query string value
+func (qv Value) Integer(v int32) {
+	qv.queryValue.Integer(v)
+}
+
+// Long encodes v as a query string value
+func (qv Value) Long(v int64) {
+	qv.queryValue.Long(v)
+}
+
+// Float encodes v as a query string value
+func (qv Value) Float(v float32) {
+	qv.queryValue.Float(v)
+}
+
+// Double encodes v as a query string value
+func (qv Value) Double(v float64) {
+	qv.queryValue.Double(v)
+}
+
+// BigInteger encodes v as a query string value
+func (qv Value) BigInteger(v *big.Int) {
+	qv.queryValue.BigInteger(v)
+}
+
+// BigDecimal encodes v as a query string value
+func (qv Value) BigDecimal(v *big.Float) {
+	qv.queryValue.BigDecimal(v)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/restjson/decoder_util.go 🔗

@@ -0,0 +1,85 @@
+package restjson
+
+import (
+	"encoding/json"
+	"io"
+	"strings"
+
+	"github.com/aws/smithy-go"
+)
+
+// GetErrorInfo looks for code, __type, and message members in the json
+// body. These members are optional, and the function returns the value of
+// each member that is present. This function is useful for identifying the
+// error code and message in a REST JSON error response.
+func GetErrorInfo(decoder *json.Decoder) (errorType string, message string, err error) {
+	var errInfo struct {
+		Code    string
+		Type    string `json:"__type"`
+		Message string
+	}
+
+	err = decoder.Decode(&errInfo)
+	if err != nil {
+		if err == io.EOF {
+			return errorType, message, nil
+		}
+		return errorType, message, err
+	}
+
+	// assign error type
+	if len(errInfo.Code) != 0 {
+		errorType = errInfo.Code
+	} else if len(errInfo.Type) != 0 {
+		errorType = errInfo.Type
+	}
+
+	// assign error message
+	if len(errInfo.Message) != 0 {
+		message = errInfo.Message
+	}
+
+	// sanitize error
+	if len(errorType) != 0 {
+		errorType = SanitizeErrorCode(errorType)
+	}
+
+	return errorType, message, nil
+}
+
+// SanitizeErrorCode sanitizes the errorCode string.
+// The rule for sanitizing is if a `:` character is present, then take only the
+// contents before the first : character in the value.
+// If a # character is present, then take only the contents after the
+// first # character in the value.
+func SanitizeErrorCode(errorCode string) string {
+	if strings.ContainsAny(errorCode, ":") {
+		errorCode = strings.SplitN(errorCode, ":", 2)[0]
+	}
+
+	if strings.ContainsAny(errorCode, "#") {
+		errorCode = strings.SplitN(errorCode, "#", 2)[1]
+	}
+
+	return errorCode
+}
+
+// GetSmithyGenericAPIError returns a smithy generic API error and an error
+// interface. Takes a json decoder and an error code string as arguments. The
+// function retrieves the error message and error code from the decoder body.
+// If an errorCode of length greater than 0 is passed in as an argument, it
+// is used instead.
+func GetSmithyGenericAPIError(decoder *json.Decoder, errorCode string) (*smithy.GenericAPIError, error) {
+	errorType, message, err := GetErrorInfo(decoder)
+	if err != nil {
+		return nil, err
+	}
+
+	if len(errorCode) == 0 {
+		errorCode = errorType
+	}
+
+	return &smithy.GenericAPIError{
+		Code:    errorCode,
+		Message: message,
+	}, nil
+}
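The two sanitization rules above (truncate at the first `:`, then keep what follows the first `#`) handle Smithy-style codes such as `namespace#ErrorName: detail`. A self-contained sketch of the same logic:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeErrorCode reimplements SanitizeErrorCode's two rules: drop
// everything after the first ':', then keep everything after the first '#'.
func sanitizeErrorCode(code string) string {
	if strings.Contains(code, ":") {
		code = strings.SplitN(code, ":", 2)[0]
	}
	if strings.Contains(code, "#") {
		code = strings.SplitN(code, "#", 2)[1]
	}
	return code
}

func main() {
	fmt.Println(sanitizeErrorCode("com.amazonaws#ThrottlingException: Rate exceeded"))
	// ThrottlingException
}
```

Order matters: stripping at `:` first means a detail suffix never survives into the `#` split.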

vendor/github.com/aws/aws-sdk-go-v2/aws/protocol/xml/error_utils.go 🔗

@@ -0,0 +1,48 @@
+package xml
+
+import (
+	"encoding/xml"
+	"fmt"
+	"io"
+)
+
+// ErrorComponents represents the error response fields
+// that will be deserialized from an xml error response body
+type ErrorComponents struct {
+	Code      string
+	Message   string
+	RequestID string
+}
+
+// GetErrorResponseComponents returns the error fields from an xml error response body
+func GetErrorResponseComponents(r io.Reader, noErrorWrapping bool) (ErrorComponents, error) {
+	if noErrorWrapping {
+		var errResponse noWrappedErrorResponse
+		if err := xml.NewDecoder(r).Decode(&errResponse); err != nil && err != io.EOF {
+			return ErrorComponents{}, fmt.Errorf("error while deserializing xml error response: %w", err)
+		}
+		return ErrorComponents(errResponse), nil
+	}
+
+	var errResponse wrappedErrorResponse
+	if err := xml.NewDecoder(r).Decode(&errResponse); err != nil && err != io.EOF {
+		return ErrorComponents{}, fmt.Errorf("error while deserializing xml error response: %w", err)
+	}
+	return ErrorComponents(errResponse), nil
+}
+
+// noWrappedErrorResponse represents the error response body with
+// no internal Error wrapping
+type noWrappedErrorResponse struct {
+	Code      string `xml:"Code"`
+	Message   string `xml:"Message"`
+	RequestID string `xml:"RequestId"`
+}
+
+// wrappedErrorResponse represents the error response body
+// wrapped within Error
+type wrappedErrorResponse struct {
+	Code      string `xml:"Error>Code"`
+	Message   string `xml:"Error>Message"`
+	RequestID string `xml:"RequestId"`
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/ratelimit/none.go 🔗

@@ -0,0 +1,20 @@
+package ratelimit
+
+import "context"
+
+// None implements a no-op rate limiter which effectively disables client-side
+// rate limiting (also known as "retry quotas").
+//
+// GetToken does nothing and always returns a nil error. The returned
+// token-release function does nothing, and always returns a nil error.
+//
+// AddTokens does nothing and always returns a nil error.
+var None = &none{}
+
+type none struct{}
+
+func (*none) GetToken(ctx context.Context, cost uint) (func() error, error) {
+	return func() error { return nil }, nil
+}
+
+func (*none) AddTokens(v uint) error { return nil }

vendor/github.com/aws/aws-sdk-go-v2/aws/ratelimit/token_bucket.go 🔗

@@ -0,0 +1,96 @@
+package ratelimit
+
+import (
+	"sync"
+)
+
+// TokenBucket provides a concurrency safe utility for adding and removing
+// tokens from the available token bucket.
+type TokenBucket struct {
+	remainingTokens uint
+	maxCapacity     uint
+	minCapacity     uint
+	mu              sync.Mutex
+}
+
+// NewTokenBucket returns an initialized TokenBucket with the capacity
+// specified.
+func NewTokenBucket(i uint) *TokenBucket {
+	return &TokenBucket{
+		remainingTokens: i,
+		maxCapacity:     i,
+		minCapacity:     1,
+	}
+}
+
+// Retrieve attempts to reduce the available tokens by the amount requested.
+// If the amount requested is larger than the remaining tokens, false is
+// returned along with the remaining token count and the bucket is left
+// unchanged. Otherwise the token count is reduced by the amount, and the new
+// remaining count is returned along with true.
+func (t *TokenBucket) Retrieve(amount uint) (available uint, retrieved bool) {
+	t.mu.Lock()
+	defer t.mu.Unlock()
+
+	if amount > t.remainingTokens {
+		return t.remainingTokens, false
+	}
+
+	t.remainingTokens -= amount
+	return t.remainingTokens, true
+}
+
+// Refund returns the amount of tokens back to the available token bucket, up
+// to the initial capacity.
+func (t *TokenBucket) Refund(amount uint) {
+	t.mu.Lock()
+	defer t.mu.Unlock()
+
+	// Capacity cannot exceed max capacity.
+	t.remainingTokens = uintMin(t.remainingTokens+amount, t.maxCapacity)
+}
+
+// Capacity returns the maximum capacity of tokens that the bucket could
+// contain.
+func (t *TokenBucket) Capacity() uint {
+	t.mu.Lock()
+	defer t.mu.Unlock()
+
+	return t.maxCapacity
+}
+
+// Remaining returns the number of tokens remaining in the bucket.
+func (t *TokenBucket) Remaining() uint {
+	t.mu.Lock()
+	defer t.mu.Unlock()
+
+	return t.remainingTokens
+}
+
+// Resize adjusts the size of the token bucket. Returns the capacity remaining.
+func (t *TokenBucket) Resize(size uint) uint {
+	t.mu.Lock()
+	defer t.mu.Unlock()
+
+	t.maxCapacity = uintMax(size, t.minCapacity)
+
+	// Capacity needs to be capped at max capacity, if max size reduced.
+	t.remainingTokens = uintMin(t.remainingTokens, t.maxCapacity)
+
+	return t.remainingTokens
+}
+
+func uintMin(a, b uint) uint {
+	if a < b {
+		return a
+	}
+	return b
+}
+
+func uintMax(a, b uint) uint {
+	if a > b {
+		return a
+	}
+	return b
+}
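The bucket's two invariants (retrieval is all-or-nothing, refunds are capped at max capacity) are easy to demonstrate with a stripped-down version. `miniBucket` is a simplified stand-in for `TokenBucket`, not its actual API:

```go
package main

import (
	"fmt"
	"sync"
)

// miniBucket is a stripped-down TokenBucket: Retrieve fails without
// partially draining, and Refund never pushes the count above max.
type miniBucket struct {
	mu        sync.Mutex
	remaining uint
	max       uint
}

func (b *miniBucket) Retrieve(amount uint) (uint, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if amount > b.remaining {
		return b.remaining, false // all-or-nothing: bucket unchanged
	}
	b.remaining -= amount
	return b.remaining, true
}

func (b *miniBucket) Refund(amount uint) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.remaining += amount
	if b.remaining > b.max {
		b.remaining = b.max // cap at capacity
	}
}

func main() {
	b := &miniBucket{remaining: 5, max: 5}
	left, ok := b.Retrieve(3)
	fmt.Println(left, ok) // 2 true
	b.Refund(10)          // over-refund is capped at max
	left, _ = b.Retrieve(0)
	fmt.Println(left) // 5
}
```

The capping on refund is what keeps retry quotas from growing without bound when successful attempts repeatedly return tokens.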

vendor/github.com/aws/aws-sdk-go-v2/aws/ratelimit/token_rate_limit.go 🔗

@@ -0,0 +1,83 @@
+package ratelimit
+
+import (
+	"context"
+	"fmt"
+)
+
+type rateToken struct {
+	tokenCost uint
+	bucket    *TokenBucket
+}
+
+func (t rateToken) release() error {
+	t.bucket.Refund(t.tokenCost)
+	return nil
+}
+
+// TokenRateLimit provides a Token Bucket RateLimiter implementation
+// that limits the overall number of retry attempts that can be made across
+// operation invocations.
+type TokenRateLimit struct {
+	bucket *TokenBucket
+}
+
+// NewTokenRateLimit returns a TokenRateLimit with default values.
+// Functional options can configure the retry rate limiter.
+func NewTokenRateLimit(tokens uint) *TokenRateLimit {
+	return &TokenRateLimit{
+		bucket: NewTokenBucket(tokens),
+	}
+}
+
+type canceledError struct {
+	Err error
+}
+
+func (c canceledError) CanceledError() bool { return true }
+func (c canceledError) Unwrap() error       { return c.Err }
+func (c canceledError) Error() string {
+	return fmt.Sprintf("canceled, %v", c.Err)
+}
+
+// GetToken may cause an available pool of retry quota to be decremented.
+// Returns an error if the requested cost cannot be deducted from the
+// retry quota.
+func (l *TokenRateLimit) GetToken(ctx context.Context, cost uint) (func() error, error) {
+	select {
+	case <-ctx.Done():
+		return nil, canceledError{Err: ctx.Err()}
+	default:
+	}
+	if avail, ok := l.bucket.Retrieve(cost); !ok {
+		return nil, QuotaExceededError{Available: avail, Requested: cost}
+	}
+
+	return rateToken{
+		tokenCost: cost,
+		bucket:    l.bucket,
+	}.release, nil
+}
+
+// AddTokens increments the token bucket by a fixed amount.
+func (l *TokenRateLimit) AddTokens(v uint) error {
+	l.bucket.Refund(v)
+	return nil
+}
+
+// Remaining returns the number of remaining tokens in the bucket.
+func (l *TokenRateLimit) Remaining() uint {
+	return l.bucket.Remaining()
+}
+
+// QuotaExceededError provides the SDK error when the retries for a given
+// token bucket have been exhausted.
+type QuotaExceededError struct {
+	Available uint
+	Requested uint
+}
+
+func (e QuotaExceededError) Error() string {
+	return fmt.Sprintf("retry quota exceeded, %d available, %d requested",
+		e.Available, e.Requested)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/request.go 🔗

@@ -0,0 +1,25 @@
+package aws
+
+import (
+	"fmt"
+)
+
+// TODO remove replace with smithy.CanceledError
+
+// RequestCanceledError is the error that will be returned by an API request
+// that was canceled. Requests given a Context may return this error when
+// canceled.
+type RequestCanceledError struct {
+	Err error
+}
+
+// CanceledError returns true to satisfy interfaces checking for canceled errors.
+func (*RequestCanceledError) CanceledError() bool { return true }
+
+// Unwrap returns the underlying error, if there was one.
+func (e *RequestCanceledError) Unwrap() error {
+	return e.Err
+}
+func (e *RequestCanceledError) Error() string {
+	return fmt.Sprintf("request canceled, %v", e.Err)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/adaptive.go 🔗

@@ -0,0 +1,156 @@
+package retry
+
+import (
+	"context"
+	"fmt"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+)
+
+const (
+	// DefaultRequestCost is the cost of a single request from the adaptive
+	// rate limited token bucket.
+	DefaultRequestCost uint = 1
+)
+
+// DefaultThrottles provides the set of errors considered throttle errors that
+// are checked by default.
+var DefaultThrottles = []IsErrorThrottle{
+	ThrottleErrorCode{
+		Codes: DefaultThrottleErrorCodes,
+	},
+}
+
+// AdaptiveModeOptions provides the functional options for configuring the
+// adaptive retry mode, and delay behavior.
+type AdaptiveModeOptions struct {
+	// If the adaptive token bucket is empty, when an attempt will be made
+	// AdaptiveMode will sleep until a token is available. This can occur when
+	// attempts fail with throttle errors. Use this option to disable sleeping
+	// until a token is available, and instead return an error immediately.
+	FailOnNoAttemptTokens bool
+
+	// The cost of an attempt from the AdaptiveMode's adaptive token bucket.
+	RequestCost uint
+
+	// Set of strategies to determine if the attempt failed due to a throttle
+	// error.
+	//
+	// It is safe to append to this list in NewAdaptiveMode's functional options.
+	Throttles []IsErrorThrottle
+
+	// Set of options for standard retry mode that AdaptiveMode is built on top
+	// of. AdaptiveMode may apply its own defaults to Standard retry mode that
+	// are different than the defaults of NewStandard. Use these options to
+	// override the default options.
+	StandardOptions []func(*StandardOptions)
+}
+
+// AdaptiveMode provides an experimental retry strategy that expands on the
+// Standard retry strategy, adding client attempt rate limits. The attempt rate
+// limit is initially unrestricted, but becomes restricted when the attempt
+// fails with a throttle error. When restricted, AdaptiveMode may need to
+// sleep before an attempt is made, if too many throttles have been received.
+// AdaptiveMode's sleep can be canceled with context cancel. Set
+// AdaptiveModeOptions FailOnNoAttemptTokens to change the behavior from sleep,
+// to fail fast.
+//
+// Eventually the unrestricted attempt rate limit will be restored once
+// attempts are no longer failing due to throttle errors.
+type AdaptiveMode struct {
+	options   AdaptiveModeOptions
+	throttles IsErrorThrottles
+
+	retryer   aws.RetryerV2
+	rateLimit *adaptiveRateLimit
+}
+
+// NewAdaptiveMode returns an initialized AdaptiveMode retry strategy.
+func NewAdaptiveMode(optFns ...func(*AdaptiveModeOptions)) *AdaptiveMode {
+	o := AdaptiveModeOptions{
+		RequestCost: DefaultRequestCost,
+		Throttles:   append([]IsErrorThrottle{}, DefaultThrottles...),
+	}
+	for _, fn := range optFns {
+		fn(&o)
+	}
+
+	return &AdaptiveMode{
+		options:   o,
+		throttles: IsErrorThrottles(o.Throttles),
+		retryer:   NewStandard(o.StandardOptions...),
+		rateLimit: newAdaptiveRateLimit(),
+	}
+}
+
+// IsErrorRetryable returns if the failed attempt is retryable. This check
+// should determine if the error can be retried, or if the error is
+// terminal.
+func (a *AdaptiveMode) IsErrorRetryable(err error) bool {
+	return a.retryer.IsErrorRetryable(err)
+}
+
+// MaxAttempts returns the maximum number of attempts that can be made for
+// an operation before failing. A value of 0 implies that the attempt should
+// be retried until it succeeds if the errors are retryable.
+func (a *AdaptiveMode) MaxAttempts() int {
+	return a.retryer.MaxAttempts()
+}
+
+// RetryDelay returns the delay that should be used before retrying the
+// attempt. Will return an error if the delay could not be determined.
+func (a *AdaptiveMode) RetryDelay(attempt int, opErr error) (
+	time.Duration, error,
+) {
+	return a.retryer.RetryDelay(attempt, opErr)
+}
+
+// GetRetryToken attempts to deduct the retry cost from the retry token pool.
+// Returning the token release function, or error.
+func (a *AdaptiveMode) GetRetryToken(ctx context.Context, opErr error) (
+	releaseToken func(error) error, err error,
+) {
+	return a.retryer.GetRetryToken(ctx, opErr)
+}
+
+// GetInitialToken returns the initial attempt token that can increment the
+// retry token pool if the attempt is successful.
+//
+// Deprecated: This method does not provide a way to block using Context,
+// nor can it return an error. Use RetryerV2, and GetAttemptToken instead. Only
+// present to implement Retryer interface.
+func (a *AdaptiveMode) GetInitialToken() (releaseToken func(error) error) {
+	return nopRelease
+}
+
+// GetAttemptToken returns the attempt token that can be used to rate limit
+// attempt calls. Will be used by the SDK's retry package's Attempt
+// middleware to get an attempt token prior to making the attempt, and to
+// release the attempt token after the attempt has been made.
+func (a *AdaptiveMode) GetAttemptToken(ctx context.Context) (func(error) error, error) {
+	for {
+		acquiredToken, waitTryAgain := a.rateLimit.AcquireToken(a.options.RequestCost)
+		if acquiredToken {
+			break
+		}
+		if a.options.FailOnNoAttemptTokens {
+			return nil, fmt.Errorf(
+				"unable to get attempt token, and FailOnNoAttemptTokens is enabled")
+		}
+
+		if err := sdk.SleepWithContext(ctx, waitTryAgain); err != nil {
+			return nil, fmt.Errorf("failed to wait for token to be available, %w", err)
+		}
+	}
+
+	return a.handleResponse, nil
+}
+
+func (a *AdaptiveMode) handleResponse(opErr error) error {
+	throttled := a.throttles.IsErrorThrottle(opErr).Bool()
+
+	a.rateLimit.Update(throttled)
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/adaptive_ratelimit.go 🔗

@@ -0,0 +1,158 @@
+package retry
+
+import (
+	"math"
+	"sync"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+)
+
+type adaptiveRateLimit struct {
+	tokenBucketEnabled bool
+
+	smooth        float64
+	beta          float64
+	scaleConstant float64
+	minFillRate   float64
+
+	fillRate         float64
+	calculatedRate   float64
+	lastRefilled     time.Time
+	measuredTxRate   float64
+	lastTxRateBucket float64
+	requestCount     int64
+	lastMaxRate      float64
+	lastThrottleTime time.Time
+	timeWindow       float64
+
+	tokenBucket *adaptiveTokenBucket
+
+	mu sync.Mutex
+}
+
+func newAdaptiveRateLimit() *adaptiveRateLimit {
+	now := sdk.NowTime()
+	return &adaptiveRateLimit{
+		smooth:        0.8,
+		beta:          0.7,
+		scaleConstant: 0.4,
+
+		minFillRate: 0.5,
+
+		lastTxRateBucket: math.Floor(timeFloat64Seconds(now)),
+		lastThrottleTime: now,
+
+		tokenBucket: newAdaptiveTokenBucket(0),
+	}
+}
+
+func (a *adaptiveRateLimit) Enable(v bool) {
+	a.mu.Lock()
+	defer a.mu.Unlock()
+
+	a.tokenBucketEnabled = v
+}
+
+func (a *adaptiveRateLimit) AcquireToken(amount uint) (
+	tokenAcquired bool, waitTryAgain time.Duration,
+) {
+	a.mu.Lock()
+	defer a.mu.Unlock()
+
+	if !a.tokenBucketEnabled {
+		return true, 0
+	}
+
+	a.tokenBucketRefill()
+
+	available, ok := a.tokenBucket.Retrieve(float64(amount))
+	if !ok {
+		waitDur := float64Seconds((float64(amount) - available) / a.fillRate)
+		return false, waitDur
+	}
+
+	return true, 0
+}
+
+func (a *adaptiveRateLimit) Update(throttled bool) {
+	a.mu.Lock()
+	defer a.mu.Unlock()
+
+	a.updateMeasuredRate()
+
+	if throttled {
+		rateToUse := a.measuredTxRate
+		if a.tokenBucketEnabled {
+			rateToUse = math.Min(a.measuredTxRate, a.fillRate)
+		}
+
+		a.lastMaxRate = rateToUse
+		a.calculateTimeWindow()
+		a.lastThrottleTime = sdk.NowTime()
+		a.calculatedRate = a.cubicThrottle(rateToUse)
+		a.tokenBucketEnabled = true
+	} else {
+		a.calculateTimeWindow()
+		a.calculatedRate = a.cubicSuccess(sdk.NowTime())
+	}
+
+	newRate := math.Min(a.calculatedRate, 2*a.measuredTxRate)
+	a.tokenBucketUpdateRate(newRate)
+}
+
+func (a *adaptiveRateLimit) cubicSuccess(t time.Time) float64 {
+	dt := secondsFloat64(t.Sub(a.lastThrottleTime))
+	return (a.scaleConstant * math.Pow(dt-a.timeWindow, 3)) + a.lastMaxRate
+}
+
+func (a *adaptiveRateLimit) cubicThrottle(rateToUse float64) float64 {
+	return rateToUse * a.beta
+}
+
+func (a *adaptiveRateLimit) calculateTimeWindow() {
+	a.timeWindow = math.Pow((a.lastMaxRate*(1.-a.beta))/a.scaleConstant, 1./3.)
+}
+
+func (a *adaptiveRateLimit) tokenBucketUpdateRate(newRPS float64) {
+	a.tokenBucketRefill()
+	a.fillRate = math.Max(newRPS, a.minFillRate)
+	a.tokenBucket.Resize(newRPS)
+}
+
+func (a *adaptiveRateLimit) updateMeasuredRate() {
+	now := sdk.NowTime()
+	timeBucket := math.Floor(timeFloat64Seconds(now)*2.) / 2.
+	a.requestCount++
+
+	if timeBucket > a.lastTxRateBucket {
+		currentRate := float64(a.requestCount) / (timeBucket - a.lastTxRateBucket)
+		a.measuredTxRate = (currentRate * a.smooth) + (a.measuredTxRate * (1. - a.smooth))
+		a.requestCount = 0
+		a.lastTxRateBucket = timeBucket
+	}
+}
+
+func (a *adaptiveRateLimit) tokenBucketRefill() {
+	now := sdk.NowTime()
+	if a.lastRefilled.IsZero() {
+		a.lastRefilled = now
+		return
+	}
+
+	fillAmount := secondsFloat64(now.Sub(a.lastRefilled)) * a.fillRate
+	a.tokenBucket.Refund(fillAmount)
+	a.lastRefilled = now
+}
+
+func float64Seconds(v float64) time.Duration {
+	return time.Duration(v * float64(time.Second))
+}
+
+func secondsFloat64(v time.Duration) float64 {
+	return float64(v) / float64(time.Second)
+}
+
+func timeFloat64Seconds(v time.Time) float64 {
+	return float64(v.UnixNano()) / float64(time.Second)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/adaptive_token_bucket.go 🔗

@@ -0,0 +1,83 @@
+package retry
+
+import (
+	"math"
+	"sync"
+)
+
+// adaptiveTokenBucket provides a concurrency safe utility for adding and
+// removing tokens from the available token bucket.
+type adaptiveTokenBucket struct {
+	remainingTokens float64
+	maxCapacity     float64
+	minCapacity     float64
+	mu              sync.Mutex
+}
+
+// newAdaptiveTokenBucket returns an initialized adaptiveTokenBucket with the
+// capacity specified.
+func newAdaptiveTokenBucket(i float64) *adaptiveTokenBucket {
+	return &adaptiveTokenBucket{
+		remainingTokens: i,
+		maxCapacity:     i,
+		minCapacity:     1,
+	}
+}
+
+// Retrieve attempts to reduce the available tokens by the amount requested.
+// If the amount requested is larger than the available capacity, false is
+// returned along with the available capacity. Otherwise the capacity is
+// reduced by that amount, and the remaining capacity and true are returned.
+func (t *adaptiveTokenBucket) Retrieve(amount float64) (available float64, retrieved bool) {
+	t.mu.Lock()
+	defer t.mu.Unlock()
+
+	if amount > t.remainingTokens {
+		return t.remainingTokens, false
+	}
+
+	t.remainingTokens -= amount
+	return t.remainingTokens, true
+}
+
+// Refund returns the amount of tokens back to the available token bucket, up
+// to the initial capacity.
+func (t *adaptiveTokenBucket) Refund(amount float64) {
+	t.mu.Lock()
+	defer t.mu.Unlock()
+
+	// Capacity cannot exceed max capacity.
+	t.remainingTokens = math.Min(t.remainingTokens+amount, t.maxCapacity)
+}
+
+// Capacity returns the maximum capacity of tokens that the bucket could
+// contain.
+func (t *adaptiveTokenBucket) Capacity() float64 {
+	t.mu.Lock()
+	defer t.mu.Unlock()
+
+	return t.maxCapacity
+}
+
+// Remaining returns the number of tokens remaining in the bucket.
+func (t *adaptiveTokenBucket) Remaining() float64 {
+	t.mu.Lock()
+	defer t.mu.Unlock()
+
+	return t.remainingTokens
+}
+
+// Resize adjusts the size of the token bucket. Returns the remaining tokens.
+func (t *adaptiveTokenBucket) Resize(size float64) float64 {
+	t.mu.Lock()
+	defer t.mu.Unlock()
+
+	t.maxCapacity = math.Max(size, t.minCapacity)
+
+	// Capacity needs to be capped at max capacity, if max size is reduced.
+	t.remainingTokens = math.Min(t.remainingTokens, t.maxCapacity)
+
+	return t.remainingTokens
+}
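The bucket above is unexported, but its semantics are small enough to sketch standalone: Retrieve fails without mutating the balance when the request exceeds it, Refund is capped at max capacity, and Resize re-caps the balance. A hypothetical minimal version (locking omitted for brevity; names are not the vendored ones):

```go
package main

import (
	"fmt"
	"math"
)

// bucket mirrors adaptiveTokenBucket's arithmetic (mutex omitted).
type bucket struct {
	remaining, maxCapacity, minCapacity float64
}

func newBucket(capacity float64) *bucket {
	return &bucket{remaining: capacity, maxCapacity: capacity, minCapacity: 1}
}

// retrieve deducts amount if available; otherwise it reports the balance
// unchanged, like adaptiveTokenBucket.Retrieve.
func (b *bucket) retrieve(amount float64) (available float64, ok bool) {
	if amount > b.remaining {
		return b.remaining, false
	}
	b.remaining -= amount
	return b.remaining, true
}

// refund returns tokens, never exceeding max capacity.
func (b *bucket) refund(amount float64) {
	b.remaining = math.Min(b.remaining+amount, b.maxCapacity)
}

// resize grows or shrinks the bucket, clamping the balance to the new cap.
func (b *bucket) resize(size float64) float64 {
	b.maxCapacity = math.Max(size, b.minCapacity)
	b.remaining = math.Min(b.remaining, b.maxCapacity)
	return b.remaining
}

func main() {
	b := newBucket(5)
	fmt.Println(b.retrieve(3)) // 2 true
	fmt.Println(b.retrieve(3)) // 2 false: not enough tokens, balance untouched
	b.refund(10)
	fmt.Println(b.remaining) // 5: refund capped at max capacity
	fmt.Println(b.resize(2)) // 2: shrinking re-caps the balance
}
```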

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/doc.go 🔗

@@ -0,0 +1,80 @@
+// Package retry provides interfaces and implementations for SDK request retry behavior.
+//
+// # Retryer Interface and Implementations
+//
+// This package defines Retryer interface that is used to either implement custom retry behavior
+// or to extend the existing retry implementations provided by the SDK. This package provides a single
+// retry implementation: Standard.
+//
+// # Standard
+//
+// Standard is the default retryer implementation used by service clients. The standard retryer is a rate limited
+// retryer that has a configurable max attempts to limit the number of retry attempts when a retryable error occurs.
+// In addition, the retryer uses a configurable token bucket to rate limit the retry attempts across the client,
+// and uses an additional delay policy to limit the time between a request's subsequent attempts.
+//
+// By default the standard retryer uses the DefaultRetryables slice of IsErrorRetryable types to determine whether
+// a given error is retryable. By default this list of retryables includes the following:
+//   - Retrying errors that implement the RetryableError method, and return true.
+//   - Connection Errors
+//   - Errors that implement a ConnectionError, Temporary, or Timeout method that return true.
+//   - Connection Reset Errors.
+//   - net.OpErr types that are dialing errors or are temporary.
+//   - HTTP Status Codes: 500, 502, 503, and 504.
+//   - API Error Codes
+//   - RequestTimeout, RequestTimeoutException
+//   - Throttling, ThrottlingException, ThrottledException, RequestThrottledException, TooManyRequestsException,
+//     RequestThrottled, SlowDown, EC2ThrottledException
+//   - ProvisionedThroughputExceededException, RequestLimitExceeded, BandwidthLimitExceeded, LimitExceededException
+//   - TransactionInProgressException, PriorRequestNotComplete
+//
+// The standard retryer will not retry a request in the event the context associated with the request
+// has been canceled. Applications must handle this case explicitly if they wish to retry with a different context
+// value.
+//
+// You can configure the standard retryer implementation to fit your applications by constructing a standard retryer
+// using the NewStandard function, and providing one or more functional arguments that mutate the StandardOptions
+// structure. StandardOptions provides the ability to modify the token bucket rate limiter, retryable error conditions,
+// and the retry delay policy.
+//
+// For example to modify the default retry attempts for the standard retryer:
+//
+//	// configure the custom retryer
+//	customRetry := retry.NewStandard(func(o *retry.StandardOptions) {
+//	    o.MaxAttempts = 5
+//	})
+//
+//	// create a service client with the retryer
+//	s3.NewFromConfig(cfg, func(o *s3.Options) {
+//	    o.Retryer = customRetry
+//	})
+//
+// # Utilities
+//
+// A number of package functions have been provided to easily wrap retryer implementations in an implementation agnostic
+// way. These are:
+//
+//	AddWithErrorCodes      - Provides the ability to add additional API error codes that should be considered retryable
+//	                         in addition to those considered retryable by the provided retryer.
+//
+//	AddWithMaxAttempts     - Provides the ability to set the max number of attempts for retrying a request by wrapping
+//	                         a retryer implementation.
+//
+//	AddWithMaxBackoffDelay - Provides the ability to set the max back off delay that can occur before retrying a
+//	                         request by wrapping a retryer implementation.
+//
+// The following package functions have been provided to easily satisfy different retry interfaces to further customize
+// a given retryer's behavior:
+//
+//	BackoffDelayerFunc   - Can be used to wrap a function to satisfy the BackoffDelayer interface. For example,
+//	                       you can use this method to easily create custom back off policies to be used with the
+//	                       standard retryer.
+//
+//	IsErrorRetryableFunc - Can be used to wrap a function to satisfy the IsErrorRetryable interface. For example,
+//	                       this can be used to extend the standard retryer to add additional logic to determine if an
+//	                       error should be retried.
+//
+//	IsErrorTimeoutFunc   - Can be used to wrap a function to satisfy the IsErrorTimeout interface. For example,
+//	                       this can be used to extend the standard retryer to add additional logic to determine if an
+//	                       error should be considered a timeout.
+package retry

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/errors.go 🔗

@@ -0,0 +1,20 @@
+package retry
+
+import "fmt"
+
+// MaxAttemptsError provides the error when the maximum number of attempts have
+// been exceeded.
+type MaxAttemptsError struct {
+	Attempt int
+	Err     error
+}
+
+func (e *MaxAttemptsError) Error() string {
+	return fmt.Sprintf("exceeded maximum number of attempts, %d, %v", e.Attempt, e.Err)
+}
+
+// Unwrap returns the nested error causing the max attempts error. Provides the
+// implementation for errors.Is and errors.As to unwrap nested errors.
+func (e *MaxAttemptsError) Unwrap() error {
+	return e.Err
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/jitter_backoff.go 🔗

@@ -0,0 +1,49 @@
+package retry
+
+import (
+	"math"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/internal/rand"
+	"github.com/aws/aws-sdk-go-v2/internal/timeconv"
+)
+
+// ExponentialJitterBackoff provides backoff delays with jitter based on the
+// number of attempts.
+type ExponentialJitterBackoff struct {
+	maxBackoff time.Duration
+	// precomputed number of attempts needed to reach max backoff.
+	maxBackoffAttempts float64
+
+	randFloat64 func() (float64, error)
+}
+
+// NewExponentialJitterBackoff returns an ExponentialJitterBackoff configured
+// for the max backoff.
+func NewExponentialJitterBackoff(maxBackoff time.Duration) *ExponentialJitterBackoff {
+	return &ExponentialJitterBackoff{
+		maxBackoff: maxBackoff,
+		maxBackoffAttempts: math.Log2(
+			float64(maxBackoff) / float64(time.Second)),
+		randFloat64: rand.CryptoRandFloat64,
+	}
+}
+
+// BackoffDelay returns the duration to wait before the next attempt should be
+// made. Returns an error if unable to get a duration.
+func (j *ExponentialJitterBackoff) BackoffDelay(attempt int, err error) (time.Duration, error) {
+	if attempt > int(j.maxBackoffAttempts) {
+		return j.maxBackoff, nil
+	}
+
+	b, err := j.randFloat64()
+	if err != nil {
+		return 0, err
+	}
+
+	// [0.0, 1.0) * 2 ^ attempts
+	ri := int64(1 << uint64(attempt))
+	delaySeconds := b * float64(ri)
+
+	return timeconv.FloatSecondsDur(delaySeconds), nil
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/metadata.go 🔗

@@ -0,0 +1,52 @@
+package retry
+
+import (
+	awsmiddle "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/smithy-go/middleware"
+)
+
+// attemptResultsKey is a metadata accessor key to retrieve metadata
+// for all request attempts.
+type attemptResultsKey struct {
+}
+
+// GetAttemptResults retrieves attempts results from middleware metadata.
+func GetAttemptResults(metadata middleware.Metadata) (AttemptResults, bool) {
+	m, ok := metadata.Get(attemptResultsKey{}).(AttemptResults)
+	return m, ok
+}
+
+// AttemptResults represents struct containing metadata returned by all request attempts.
+type AttemptResults struct {
+
+	// Results is a slice consisting of attempt results from all request attempts.
+	// Results are stored in the order the request attempts were made.
+	Results []AttemptResult
+}
+
+// AttemptResult represents attempt result returned by a single request attempt.
+type AttemptResult struct {
+
+	// Err is the error if received for the request attempt.
+	Err error
+
+	// Retryable denotes if request may be retried. This states if an
+	// error is considered retryable.
+	Retryable bool
+
+	// Retried indicates if this request was retried.
+	Retried bool
+
+	// ResponseMetadata is any existing metadata passed via the response middlewares.
+	ResponseMetadata middleware.Metadata
+}
+
+// addAttemptResults adds attempt results to middleware metadata
+func addAttemptResults(metadata *middleware.Metadata, v AttemptResults) {
+	metadata.Set(attemptResultsKey{}, v)
+}
+
+// GetRawResponse returns the raw response recorded for the attempt result
+func (a AttemptResult) GetRawResponse() interface{} {
+	return awsmiddle.GetRawResponse(a.ResponseMetadata)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/middleware.go 🔗

@@ -0,0 +1,383 @@
+package retry
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws/middleware/private/metrics"
+	internalcontext "github.com/aws/aws-sdk-go-v2/internal/context"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	awsmiddle "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+	"github.com/aws/smithy-go/logging"
+	smithymiddle "github.com/aws/smithy-go/middleware"
+	"github.com/aws/smithy-go/transport/http"
+)
+
+// RequestCloner is a function that can take an input request type and clone
+// the request for use in a subsequent retry attempt.
+type RequestCloner func(interface{}) interface{}
+
+type retryMetadata struct {
+	AttemptNum       int
+	AttemptTime      time.Time
+	MaxAttempts      int
+	AttemptClockSkew time.Duration
+}
+
+// Attempt is a Smithy Finalize middleware that handles retry attempts using
+// the provided Retryer implementation.
+type Attempt struct {
+	// Enable the logging of retry attempts performed by the SDK. This will
+	// include logging retry attempts, unretryable errors, and when max
+	// attempts are reached.
+	LogAttempts bool
+
+	retryer       aws.RetryerV2
+	requestCloner RequestCloner
+}
+
+// define the threshold at which we will consider certain kinds of errors to
+// be probably caused by clock skew
+const skewThreshold = 4 * time.Minute
+
+// NewAttemptMiddleware returns a new Attempt retry middleware.
+func NewAttemptMiddleware(retryer aws.Retryer, requestCloner RequestCloner, optFns ...func(*Attempt)) *Attempt {
+	m := &Attempt{
+		retryer:       wrapAsRetryerV2(retryer),
+		requestCloner: requestCloner,
+	}
+	for _, fn := range optFns {
+		fn(m)
+	}
+	return m
+}
+
+// ID returns the middleware identifier
+func (r *Attempt) ID() string { return "Retry" }
+
+func (r Attempt) logf(logger logging.Logger, classification logging.Classification, format string, v ...interface{}) {
+	if !r.LogAttempts {
+		return
+	}
+	logger.Logf(classification, format, v...)
+}
+
+// HandleFinalize utilizes the provided Retryer implementation to attempt
+// retries over the next handler
+func (r *Attempt) HandleFinalize(ctx context.Context, in smithymiddle.FinalizeInput, next smithymiddle.FinalizeHandler) (
+	out smithymiddle.FinalizeOutput, metadata smithymiddle.Metadata, err error,
+) {
+	var attemptNum int
+	var attemptClockSkew time.Duration
+	var attemptResults AttemptResults
+
+	maxAttempts := r.retryer.MaxAttempts()
+	releaseRetryToken := nopRelease
+
+	for {
+		attemptNum++
+		attemptInput := in
+		attemptInput.Request = r.requestCloner(attemptInput.Request)
+
+		// Record the metadata for the attempt being started.
+		attemptCtx := setRetryMetadata(ctx, retryMetadata{
+			AttemptNum:       attemptNum,
+			AttemptTime:      sdk.NowTime().UTC(),
+			MaxAttempts:      maxAttempts,
+			AttemptClockSkew: attemptClockSkew,
+		})
+
+		// Setting clock skew to be used on other context (like signing)
+		ctx = internalcontext.SetAttemptSkewContext(ctx, attemptClockSkew)
+
+		var attemptResult AttemptResult
+		out, attemptResult, releaseRetryToken, err = r.handleAttempt(attemptCtx, attemptInput, releaseRetryToken, next)
+		attemptClockSkew, _ = awsmiddle.GetAttemptSkew(attemptResult.ResponseMetadata)
+
+		// AttemptResult Retried states that the attempt was not successful, and
+		// should be retried.
+		shouldRetry := attemptResult.Retried
+
+		// Add attempt metadata to list of all attempt metadata
+		attemptResults.Results = append(attemptResults.Results, attemptResult)
+
+		if !shouldRetry {
+			// Ensure the last response's metadata is used as the basis for result
+			// metadata returned by the stack. The Slice of attempt results
+			// will be added to this cloned metadata.
+			metadata = attemptResult.ResponseMetadata.Clone()
+
+			break
+		}
+	}
+
+	addAttemptResults(&metadata, attemptResults)
+	return out, metadata, err
+}
+
+// handleAttempt handles an individual request attempt.
+func (r *Attempt) handleAttempt(
+	ctx context.Context, in smithymiddle.FinalizeInput, releaseRetryToken func(error) error, next smithymiddle.FinalizeHandler,
+) (
+	out smithymiddle.FinalizeOutput, attemptResult AttemptResult, _ func(error) error, err error,
+) {
+	defer func() {
+		attemptResult.Err = err
+	}()
+
+	// Short circuit if this attempt never can succeed because the context is
+	// canceled. This reduces the chance of token pools being modified for
+	// attempts that will not be made
+	select {
+	case <-ctx.Done():
+		return out, attemptResult, nopRelease, ctx.Err()
+	default:
+	}
+
+	//------------------------------
+	// Get Attempt Token
+	//------------------------------
+	releaseAttemptToken, err := r.retryer.GetAttemptToken(ctx)
+	if err != nil {
+		return out, attemptResult, nopRelease, fmt.Errorf(
+			"failed to get retry Send token, %w", err)
+	}
+
+	//------------------------------
+	// Send Attempt
+	//------------------------------
+	logger := smithymiddle.GetLogger(ctx)
+	service, operation := awsmiddle.GetServiceID(ctx), awsmiddle.GetOperationName(ctx)
+	retryMetadata, _ := getRetryMetadata(ctx)
+	attemptNum := retryMetadata.AttemptNum
+	maxAttempts := retryMetadata.MaxAttempts
+
+	// Following attempts must ensure the request payload stream starts in a
+	// rewound state.
+	if attemptNum > 1 {
+		if rewindable, ok := in.Request.(interface{ RewindStream() error }); ok {
+			if rewindErr := rewindable.RewindStream(); rewindErr != nil {
+				return out, attemptResult, nopRelease, fmt.Errorf(
+					"failed to rewind transport stream for retry, %w", rewindErr)
+			}
+		}
+
+		r.logf(logger, logging.Debug, "retrying request %s/%s, attempt %d",
+			service, operation, attemptNum)
+	}
+
+	var metadata smithymiddle.Metadata
+	out, metadata, err = next.HandleFinalize(ctx, in)
+	attemptResult.ResponseMetadata = metadata
+
+	//------------------------------
+	// Bookkeeping
+	//------------------------------
+	// Release the retry token based on the state of the attempt's error (if any).
+	if releaseError := releaseRetryToken(err); releaseError != nil && err != nil {
+		return out, attemptResult, nopRelease, fmt.Errorf(
+			"failed to release retry token after request error, %w", err)
+	}
+	// Release the attempt token based on the state of the attempt's error (if any).
+	if releaseError := releaseAttemptToken(err); releaseError != nil && err != nil {
+		return out, attemptResult, nopRelease, fmt.Errorf(
+			"failed to release initial token after request error, %w", err)
+	}
+	// If there was no error making the attempt, nothing further to do. There
+	// will be nothing to retry.
+	if err == nil {
+		return out, attemptResult, nopRelease, err
+	}
+
+	err = wrapAsClockSkew(ctx, err)
+
+	//------------------------------
+	// Is Retryable and Should Retry
+	//------------------------------
+	// If the attempt failed with an unretryable error, nothing further to do
+	// but return, and inform the caller about the terminal failure.
+	retryable := r.retryer.IsErrorRetryable(err)
+	if !retryable {
+		r.logf(logger, logging.Debug, "request failed with unretryable error %v", err)
+		return out, attemptResult, nopRelease, err
+	}
+
+	// set retryable to true
+	attemptResult.Retryable = true
+
+	// Once the maximum number of attempts have been exhausted there is nothing
+	// further to do other than inform the caller about the terminal failure.
+	if maxAttempts > 0 && attemptNum >= maxAttempts {
+		r.logf(logger, logging.Debug, "max retry attempts exhausted, max %d", maxAttempts)
+		err = &MaxAttemptsError{
+			Attempt: attemptNum,
+			Err:     err,
+		}
+		return out, attemptResult, nopRelease, err
+	}
+
+	//------------------------------
+	// Get Retry (aka Retry Quota) Token
+	//------------------------------
+	// Get a retry token that will be released after the attempt is made.
+	releaseRetryToken, retryTokenErr := r.retryer.GetRetryToken(ctx, err)
+	if retryTokenErr != nil {
+		return out, attemptResult, nopRelease, retryTokenErr
+	}
+
+	//------------------------------
+	// Retry Delay and Sleep
+	//------------------------------
+	// Get the retry delay before another attempt can be made, and sleep for
+	// that time. Potentially exit early if the sleep is canceled via the
+	// context.
+	retryDelay, reqErr := r.retryer.RetryDelay(attemptNum, err)
+	mctx := metrics.Context(ctx)
+	if mctx != nil {
+		attempt, err := mctx.Data().LatestAttempt()
+		if err == nil {
+			attempt.RetryDelay = retryDelay
+		}
+	}
+	if reqErr != nil {
+		return out, attemptResult, releaseRetryToken, reqErr
+	}
+	if reqErr = sdk.SleepWithContext(ctx, retryDelay); reqErr != nil {
+		err = &aws.RequestCanceledError{Err: reqErr}
+		return out, attemptResult, releaseRetryToken, err
+	}
+
+	// The request should be re-attempted.
+	attemptResult.Retried = true
+
+	return out, attemptResult, releaseRetryToken, err
+}
+
+// errors that, if detected when we know there's a clock skew,
+// can be retried and have a high chance of success
+var possibleSkewCodes = map[string]struct{}{
+	"InvalidSignatureException": {},
+	"SignatureDoesNotMatch":     {},
+	"AuthFailure":               {},
+}
+
+var definiteSkewCodes = map[string]struct{}{
+	"RequestExpired":       {},
+	"RequestInTheFuture":   {},
+	"RequestTimeTooSkewed": {},
+}
+
+// wrapAsClockSkew checks if this error could be related to a clock skew
+// error and if so, wrap the error.
+func wrapAsClockSkew(ctx context.Context, err error) error {
+	var v interface{ ErrorCode() string }
+	if !errors.As(err, &v) {
+		return err
+	}
+	if _, ok := definiteSkewCodes[v.ErrorCode()]; ok {
+		return &retryableClockSkewError{Err: err}
+	}
+	_, isPossibleSkewCode := possibleSkewCodes[v.ErrorCode()]
+	if skew := internalcontext.GetAttemptSkewContext(ctx); skew > skewThreshold && isPossibleSkewCode {
+		return &retryableClockSkewError{Err: err}
+	}
+	return err
+}
+
+// MetricsHeader attaches SDK request metric header for retries to the transport
+type MetricsHeader struct{}
+
+// ID returns the middleware identifier
+func (r *MetricsHeader) ID() string {
+	return "RetryMetricsHeader"
+}
+
+// HandleFinalize attaches the SDK request metric header to the transport layer
+func (r MetricsHeader) HandleFinalize(ctx context.Context, in smithymiddle.FinalizeInput, next smithymiddle.FinalizeHandler) (
+	out smithymiddle.FinalizeOutput, metadata smithymiddle.Metadata, err error,
+) {
+	retryMetadata, _ := getRetryMetadata(ctx)
+
+	const retryMetricHeader = "Amz-Sdk-Request"
+	var parts []string
+
+	parts = append(parts, "attempt="+strconv.Itoa(retryMetadata.AttemptNum))
+	if retryMetadata.MaxAttempts != 0 {
+		parts = append(parts, "max="+strconv.Itoa(retryMetadata.MaxAttempts))
+	}
+
+	var ttl time.Time
+	if deadline, ok := ctx.Deadline(); ok {
+		ttl = deadline
+	}
+
+	// Only append the TTL if it can be determined.
+	if !ttl.IsZero() && retryMetadata.AttemptClockSkew > 0 {
+		const unixTimeFormat = "20060102T150405Z"
+		ttl = ttl.Add(retryMetadata.AttemptClockSkew)
+		parts = append(parts, "ttl="+ttl.Format(unixTimeFormat))
+	}
+
+	switch req := in.Request.(type) {
+	case *http.Request:
+		req.Header[retryMetricHeader] = append(req.Header[retryMetricHeader][:0], strings.Join(parts, "; "))
+	default:
+		return out, metadata, fmt.Errorf("unknown transport type %T", req)
+	}
+
+	return next.HandleFinalize(ctx, in)
+}
+
+type retryMetadataKey struct{}
+
+// getRetryMetadata retrieves retryMetadata from the context and a bool
+// indicating if it was set.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func getRetryMetadata(ctx context.Context) (metadata retryMetadata, ok bool) {
+	metadata, ok = smithymiddle.GetStackValue(ctx, retryMetadataKey{}).(retryMetadata)
+	return metadata, ok
+}
+
+// setRetryMetadata sets the retryMetadata on the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func setRetryMetadata(ctx context.Context, metadata retryMetadata) context.Context {
+	return smithymiddle.WithStackValue(ctx, retryMetadataKey{}, metadata)
+}
+
+// AddRetryMiddlewaresOptions is the set of options that can be passed to
+// AddRetryMiddlewares for configuring retry associated middleware.
+type AddRetryMiddlewaresOptions struct {
+	Retryer aws.Retryer
+
+	// Enable the logging of retry attempts performed by the SDK. This will
+	// include logging retry attempts, unretryable errors, and when max
+	// attempts are reached.
+	LogRetryAttempts bool
+}
+
+// AddRetryMiddlewares adds retry middleware to operation middleware stack
+func AddRetryMiddlewares(stack *smithymiddle.Stack, options AddRetryMiddlewaresOptions) error {
+	attempt := NewAttemptMiddleware(options.Retryer, http.RequestCloner, func(middleware *Attempt) {
+		middleware.LogAttempts = options.LogRetryAttempts
+	})
+
+	// insert retry before signing, if signing exists
+	if err := stack.Finalize.Insert(attempt, "Signing", smithymiddle.Before); err != nil {
+		return err
+	}
+
+	if err := stack.Finalize.Insert(&MetricsHeader{}, attempt.ID(), smithymiddle.After); err != nil {
+		return err
+	}
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/retry.go 🔗

@@ -0,0 +1,90 @@
+package retry
+
+import (
+	"context"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+)
+
+// AddWithErrorCodes returns a Retryer with additional error codes considered
+// for determining if the error should be retried.
+func AddWithErrorCodes(r aws.Retryer, codes ...string) aws.Retryer {
+	retryable := &RetryableErrorCode{
+		Codes: map[string]struct{}{},
+	}
+	for _, c := range codes {
+		retryable.Codes[c] = struct{}{}
+	}
+
+	return &withIsErrorRetryable{
+		RetryerV2: wrapAsRetryerV2(r),
+		Retryable: retryable,
+	}
+}
+
+type withIsErrorRetryable struct {
+	aws.RetryerV2
+	Retryable IsErrorRetryable
+}
+
+func (r *withIsErrorRetryable) IsErrorRetryable(err error) bool {
+	if v := r.Retryable.IsErrorRetryable(err); v != aws.UnknownTernary {
+		return v.Bool()
+	}
+	return r.RetryerV2.IsErrorRetryable(err)
+}
+
+// AddWithMaxAttempts returns a Retryer with MaxAttempts set to the value
+// specified.
+func AddWithMaxAttempts(r aws.Retryer, max int) aws.Retryer {
+	return &withMaxAttempts{
+		RetryerV2: wrapAsRetryerV2(r),
+		Max:       max,
+	}
+}
+
+type withMaxAttempts struct {
+	aws.RetryerV2
+	Max int
+}
+
+func (w *withMaxAttempts) MaxAttempts() int {
+	return w.Max
+}
+
+// AddWithMaxBackoffDelay returns a retryer wrapping the passed in retryer
+// overriding the RetryDelay behavior for an alternate minimum initial backoff
+// delay.
+func AddWithMaxBackoffDelay(r aws.Retryer, delay time.Duration) aws.Retryer {
+	return &withMaxBackoffDelay{
+		RetryerV2: wrapAsRetryerV2(r),
+		backoff:   NewExponentialJitterBackoff(delay),
+	}
+}
+
+type withMaxBackoffDelay struct {
+	aws.RetryerV2
+	backoff *ExponentialJitterBackoff
+}
+
+func (r *withMaxBackoffDelay) RetryDelay(attempt int, err error) (time.Duration, error) {
+	return r.backoff.BackoffDelay(attempt, err)
+}
+
+type wrappedAsRetryerV2 struct {
+	aws.Retryer
+}
+
+func wrapAsRetryerV2(r aws.Retryer) aws.RetryerV2 {
+	v, ok := r.(aws.RetryerV2)
+	if !ok {
+		v = wrappedAsRetryerV2{Retryer: r}
+	}
+
+	return v
+}
+
+func (w wrappedAsRetryerV2) GetAttemptToken(context.Context) (func(error) error, error) {
+	return w.Retryer.GetInitialToken(), nil
+}
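The wrappers in this file all use the same decorator pattern: embed the wrapped RetryerV2 so every method passes through, then shadow just the one being overridden. A self-contained sketch with a trimmed-down interface (only MaxAttempts, since the full aws.Retryer interface is larger):

```go
package main

import "fmt"

// retryer is a trimmed-down stand-in for aws.Retryer: just the one method
// this sketch overrides.
type retryer interface {
	MaxAttempts() int
}

// standard stands in for the SDK's default retryer (3 attempts).
type standard struct{}

func (standard) MaxAttempts() int { return 3 }

// withMaxAttempts mirrors retry.AddWithMaxAttempts: embed the wrapped
// retryer so other methods pass through, and shadow MaxAttempts.
type withMaxAttempts struct {
	retryer
	max int
}

func (w withMaxAttempts) MaxAttempts() int { return w.max }

func addWithMaxAttempts(r retryer, max int) retryer {
	return withMaxAttempts{retryer: r, max: max}
}

func main() {
	r := addWithMaxAttempts(standard{}, 10)
	fmt.Println(r.MaxAttempts()) // 10
}
```

Embedding keeps the wrappers composable: AddWithErrorCodes, AddWithMaxAttempts, and AddWithMaxBackoffDelay can be stacked in any order since each forwards everything it does not override.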

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/retryable_error.go 🔗

@@ -0,0 +1,222 @@
+package retry
+
+import (
+	"errors"
+	"fmt"
+	"net"
+	"net/url"
+	"strings"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+)
+
+// IsErrorRetryable provides the interface of an implementation to determine if
+// an error as the result of an operation is retryable.
+type IsErrorRetryable interface {
+	IsErrorRetryable(error) aws.Ternary
+}
+
+// IsErrorRetryables is a collection of checks to determine if the error is
+// retryable. Iterates through the checks and returns the state of retryable
+// if any check returns something other than unknown.
+type IsErrorRetryables []IsErrorRetryable
+
+// IsErrorRetryable returns if the error is retryable if any of the checks in
+// the list return a value other than unknown.
+func (r IsErrorRetryables) IsErrorRetryable(err error) aws.Ternary {
+	for _, re := range r {
+		if v := re.IsErrorRetryable(err); v != aws.UnknownTernary {
+			return v
+		}
+	}
+	return aws.UnknownTernary
+}
+
+// IsErrorRetryableFunc wraps a function with the IsErrorRetryable interface.
+type IsErrorRetryableFunc func(error) aws.Ternary
+
+// IsErrorRetryable returns if the error is retryable.
+func (fn IsErrorRetryableFunc) IsErrorRetryable(err error) aws.Ternary {
+	return fn(err)
+}
+
+// RetryableError is an IsErrorRetryable implementation which uses the
+// optional interface Retryable on the error value to determine if the error is
+// retryable.
+type RetryableError struct{}
+
+// IsErrorRetryable returns whether the error is retryable if it satisfies
+// the optional Retryable interface.
+func (RetryableError) IsErrorRetryable(err error) aws.Ternary {
+	var v interface{ RetryableError() bool }
+
+	if !errors.As(err, &v) {
+		return aws.UnknownTernary
+	}
+
+	return aws.BoolTernary(v.RetryableError())
+}
+
+// NoRetryCanceledError detects if the error was a request canceled error and
+// returns if so.
+type NoRetryCanceledError struct{}
+
+// IsErrorRetryable returns that the error is not retryable if the request
+// was canceled.
+func (NoRetryCanceledError) IsErrorRetryable(err error) aws.Ternary {
+	var v interface{ CanceledError() bool }
+
+	if !errors.As(err, &v) {
+		return aws.UnknownTernary
+	}
+
+	if v.CanceledError() {
+		return aws.FalseTernary
+	}
+	return aws.UnknownTernary
+}
+
+// RetryableConnectionError determines if the underlying error is an HTTP
+// connection error and returns if it should be retried.
+//
+// Includes errors such as connection reset, connection refused, net dial,
+// temporary, and timeout errors.
+type RetryableConnectionError struct{}
+
+// IsErrorRetryable returns if the error is caused by an HTTP connection
+// error, and should be retried.
+func (r RetryableConnectionError) IsErrorRetryable(err error) aws.Ternary {
+	if err == nil {
+		return aws.UnknownTernary
+	}
+	var retryable bool
+
+	var conErr interface{ ConnectionError() bool }
+	var tempErr interface{ Temporary() bool }
+	var timeoutErr interface{ Timeout() bool }
+	var urlErr *url.Error
+	var netOpErr *net.OpError
+	var dnsError *net.DNSError
+
+	if errors.As(err, &dnsError) {
+		// NXDOMAIN errors should not be retried
+		if dnsError.IsNotFound {
+			return aws.BoolTernary(false)
+		}
+
+		// if !dnsError.Temporary(), error may or may not be temporary,
+		// (i.e. !Temporary() =/=> !retryable) so we should fall through to
+		// remaining checks
+		if dnsError.Temporary() {
+			return aws.BoolTernary(true)
+		}
+	}
+
+	switch {
+	case errors.As(err, &conErr) && conErr.ConnectionError():
+		retryable = true
+
+	case strings.Contains(err.Error(), "connection reset"):
+		retryable = true
+
+	case errors.As(err, &urlErr):
+		// Refused connections should be retried as the service may not yet be
+		// running on the port. Go TCP dial considers refused connections as
+		// not temporary.
+		if strings.Contains(urlErr.Error(), "connection refused") {
+			retryable = true
+		} else {
+			return r.IsErrorRetryable(errors.Unwrap(urlErr))
+		}
+
+	case errors.As(err, &netOpErr):
+		// Network dial, or temporary network errors are always retryable.
+		if strings.EqualFold(netOpErr.Op, "dial") || netOpErr.Temporary() {
+			retryable = true
+		} else {
+			return r.IsErrorRetryable(errors.Unwrap(netOpErr))
+		}
+
+	case errors.As(err, &tempErr) && tempErr.Temporary():
+		// Fallback to the generic temporary check, with temporary errors
+		// retryable.
+		retryable = true
+
+	case errors.As(err, &timeoutErr) && timeoutErr.Timeout():
+		// Fallback to the generic timeout check, with timeout errors
+		// retryable.
+		retryable = true
+
+	default:
+		return aws.UnknownTernary
+	}
+
+	return aws.BoolTernary(retryable)
+
+}
+
+// RetryableHTTPStatusCode provides a IsErrorRetryable based on HTTP status
+// codes.
+type RetryableHTTPStatusCode struct {
+	Codes map[int]struct{}
+}
+
+// IsErrorRetryable returns if the passed in error is retryable based on the
+// HTTP status code.
+func (r RetryableHTTPStatusCode) IsErrorRetryable(err error) aws.Ternary {
+	var v interface{ HTTPStatusCode() int }
+
+	if !errors.As(err, &v) {
+		return aws.UnknownTernary
+	}
+
+	_, ok := r.Codes[v.HTTPStatusCode()]
+	if !ok {
+		return aws.UnknownTernary
+	}
+
+	return aws.TrueTernary
+}
+
+// RetryableErrorCode determines if an attempt should be retried based on the
+// API error code.
+type RetryableErrorCode struct {
+	Codes map[string]struct{}
+}
+
+// IsErrorRetryable returns if the error is retryable based on the error codes.
+// Returns unknown if the error doesn't have a code or it is unknown.
+func (r RetryableErrorCode) IsErrorRetryable(err error) aws.Ternary {
+	var v interface{ ErrorCode() string }
+
+	if !errors.As(err, &v) {
+		return aws.UnknownTernary
+	}
+
+	_, ok := r.Codes[v.ErrorCode()]
+	if !ok {
+		return aws.UnknownTernary
+	}
+
+	return aws.TrueTernary
+}
+
+// retryableClockSkewError marks errors that can be caused by clock skew
+// (difference between server time and client time).
+// This is returned when there's certain confidence that adjusting the client time
+// could allow a retry to succeed
+type retryableClockSkewError struct{ Err error }
+
+func (e *retryableClockSkewError) Error() string {
+	return fmt.Sprintf("Probable clock skew error: %v", e.Err)
+}
+
+// Unwrap returns the wrapped error.
+func (e *retryableClockSkewError) Unwrap() error {
+	return e.Err
+}
+
+// RetryableError allows the retryer to retry this request
+func (e *retryableClockSkewError) RetryableError() bool {
+	return true
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/standard.go 🔗

@@ -0,0 +1,269 @@
+package retry
+
+import (
+	"context"
+	"fmt"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws/ratelimit"
+)
+
+// BackoffDelayer provides the interface for determining the delay before
+// retrying a request attempt that previously failed.
+type BackoffDelayer interface {
+	BackoffDelay(attempt int, err error) (time.Duration, error)
+}
+
+// BackoffDelayerFunc provides a wrapper around a function to determine the
+// backoff delay of an attempt retry.
+type BackoffDelayerFunc func(int, error) (time.Duration, error)
+
+// BackoffDelay returns the delay before attempting to retry a request.
+func (fn BackoffDelayerFunc) BackoffDelay(attempt int, err error) (time.Duration, error) {
+	return fn(attempt, err)
+}
+
+const (
+	// DefaultMaxAttempts is the maximum number of attempts for an API request
+	DefaultMaxAttempts int = 3
+
+	// DefaultMaxBackoff is the maximum back off delay between attempts
+	DefaultMaxBackoff time.Duration = 20 * time.Second
+)
+
+// Default retry token quota values.
+const (
+	DefaultRetryRateTokens  uint = 500
+	DefaultRetryCost        uint = 5
+	DefaultRetryTimeoutCost uint = 10
+	DefaultNoRetryIncrement uint = 1
+)
+
+// DefaultRetryableHTTPStatusCodes is the default set of HTTP status codes the SDK
+// should consider as retryable errors.
+var DefaultRetryableHTTPStatusCodes = map[int]struct{}{
+	500: {},
+	502: {},
+	503: {},
+	504: {},
+}
+
+// DefaultRetryableErrorCodes provides the set of API error codes that should
+// be retried.
+var DefaultRetryableErrorCodes = map[string]struct{}{
+	"RequestTimeout":          {},
+	"RequestTimeoutException": {},
+}
+
+// DefaultThrottleErrorCodes provides the set of API error codes that are
+// considered throttle errors.
+var DefaultThrottleErrorCodes = map[string]struct{}{
+	"Throttling":                             {},
+	"ThrottlingException":                    {},
+	"ThrottledException":                     {},
+	"RequestThrottledException":              {},
+	"TooManyRequestsException":               {},
+	"ProvisionedThroughputExceededException": {},
+	"TransactionInProgressException":         {},
+	"RequestLimitExceeded":                   {},
+	"BandwidthLimitExceeded":                 {},
+	"LimitExceededException":                 {},
+	"RequestThrottled":                       {},
+	"SlowDown":                               {},
+	"PriorRequestNotComplete":                {},
+	"EC2ThrottledException":                  {},
+}
+
+// DefaultRetryables provides the set of retryable checks that are used by
+// default.
+var DefaultRetryables = []IsErrorRetryable{
+	NoRetryCanceledError{},
+	RetryableError{},
+	RetryableConnectionError{},
+	RetryableHTTPStatusCode{
+		Codes: DefaultRetryableHTTPStatusCodes,
+	},
+	RetryableErrorCode{
+		Codes: DefaultRetryableErrorCodes,
+	},
+	RetryableErrorCode{
+		Codes: DefaultThrottleErrorCodes,
+	},
+}
+
+// DefaultTimeouts provides the set of timeout checks that are used by default.
+var DefaultTimeouts = []IsErrorTimeout{
+	TimeouterError{},
+}
+
+// StandardOptions provides the functional options for configuring the
+// standard retry and delay behavior.
+type StandardOptions struct {
+	// Maximum number of attempts that should be made.
+	MaxAttempts int
+
+	// MaxBackoff duration between retried attempts.
+	MaxBackoff time.Duration
+
+	// Provides the backoff strategy the retryer will use to determine the
+	// delay between retry attempts.
+	Backoff BackoffDelayer
+
+	// Set of strategies to determine if the attempt should be retried based on
+	// the error response received.
+	//
+	// It is safe to append to this list in NewStandard's functional options.
+	Retryables []IsErrorRetryable
+
+	// Set of strategies to determine if the attempt failed due to a timeout
+	// error.
+	//
+	// It is safe to append to this list in NewStandard's functional options.
+	Timeouts []IsErrorTimeout
+
+	// Provides the rate limiting strategy for rate limiting attempt retries
+	// across all attempts the retryer is being used with.
+	//
+	// A RateLimiter operates as a token bucket with a set capacity, where
+	// attempt failure events consume tokens. A retry attempt that attempts to
+	// consume more tokens than what's available results in operation failure.
+	// The default implementation is parameterized as follows:
+	//   - a capacity of 500 (DefaultRetryRateTokens)
+	//   - a retry caused by a timeout costs 10 tokens (DefaultRetryTimeoutCost)
+	//   - a retry caused by other errors costs 5 tokens (DefaultRetryCost)
+	//   - an operation that succeeds on the 1st attempt adds 1 token (DefaultNoRetryIncrement)
+	//
+	// You can disable rate limiting by setting this field to ratelimit.None.
+	RateLimiter RateLimiter
+
+	// The cost to deduct from the RateLimiter's token bucket per retry.
+	RetryCost uint
+
+	// The cost to deduct from the RateLimiter's token bucket per retry caused
+	// by timeout error.
+	RetryTimeoutCost uint
+
+	// The cost to payback to the RateLimiter's token bucket for successful
+	// attempts.
+	NoRetryIncrement uint
+}
+
+// RateLimiter provides the interface for limiting the rate of attempt retries
+// allowed by the retryer.
+type RateLimiter interface {
+	GetToken(ctx context.Context, cost uint) (releaseToken func() error, err error)
+	AddTokens(uint) error
+}
+
+// Standard is the standard retry pattern for the SDK. It uses a set of
+// retryable checks to determine if the failed attempt should be retried, and
+// what retry delay should be used.
+type Standard struct {
+	options StandardOptions
+
+	timeout   IsErrorTimeout
+	retryable IsErrorRetryable
+	backoff   BackoffDelayer
+}
+
+// NewStandard initializes a standard retry behavior with defaults that can be
+// overridden via functional options.
+func NewStandard(fnOpts ...func(*StandardOptions)) *Standard {
+	o := StandardOptions{
+		MaxAttempts: DefaultMaxAttempts,
+		MaxBackoff:  DefaultMaxBackoff,
+		Retryables:  append([]IsErrorRetryable{}, DefaultRetryables...),
+		Timeouts:    append([]IsErrorTimeout{}, DefaultTimeouts...),
+
+		RateLimiter:      ratelimit.NewTokenRateLimit(DefaultRetryRateTokens),
+		RetryCost:        DefaultRetryCost,
+		RetryTimeoutCost: DefaultRetryTimeoutCost,
+		NoRetryIncrement: DefaultNoRetryIncrement,
+	}
+	for _, fn := range fnOpts {
+		fn(&o)
+	}
+	if o.MaxAttempts <= 0 {
+		o.MaxAttempts = DefaultMaxAttempts
+	}
+
+	backoff := o.Backoff
+	if backoff == nil {
+		backoff = NewExponentialJitterBackoff(o.MaxBackoff)
+	}
+
+	return &Standard{
+		options:   o,
+		backoff:   backoff,
+		retryable: IsErrorRetryables(o.Retryables),
+		timeout:   IsErrorTimeouts(o.Timeouts),
+	}
+}
+
+// MaxAttempts returns the maximum number of attempts that can be made for a
+// request before failing.
+func (s *Standard) MaxAttempts() int {
+	return s.options.MaxAttempts
+}
+
+// IsErrorRetryable returns whether the error can be retried. Should not
+// consider the number of attempts made.
+func (s *Standard) IsErrorRetryable(err error) bool {
+	return s.retryable.IsErrorRetryable(err).Bool()
+}
+
+// RetryDelay returns the delay to use before another request attempt is made.
+func (s *Standard) RetryDelay(attempt int, err error) (time.Duration, error) {
+	return s.backoff.BackoffDelay(attempt, err)
+}
+
+// GetAttemptToken returns the token to be released after the attempt completes.
+// The release token will add NoRetryIncrement to the RateLimiter token pool if
+// the attempt was successful. If the attempt failed, nothing will be done.
+func (s *Standard) GetAttemptToken(context.Context) (func(error) error, error) {
+	return s.GetInitialToken(), nil
+}
+
+// GetInitialToken returns a token for adding the NoRetryIncrement to the
+// RateLimiter token if the attempt completed successfully without error.
+//
+// InitialToken applies to the result of each attempt, including the first,
+// whereas the RetryToken applies to the result of subsequent attempts.
+//
+// Deprecated: use GetAttemptToken instead.
+func (s *Standard) GetInitialToken() func(error) error {
+	return releaseToken(s.noRetryIncrement).release
+}
+
+func (s *Standard) noRetryIncrement() error {
+	return s.options.RateLimiter.AddTokens(s.options.NoRetryIncrement)
+}
+
+// GetRetryToken attempts to deduct the retry cost from the retry token pool,
+// returning the token release function, or error.
+func (s *Standard) GetRetryToken(ctx context.Context, opErr error) (func(error) error, error) {
+	cost := s.options.RetryCost
+
+	if s.timeout.IsErrorTimeout(opErr).Bool() {
+		cost = s.options.RetryTimeoutCost
+	}
+
+	fn, err := s.options.RateLimiter.GetToken(ctx, cost)
+	if err != nil {
+		return nil, fmt.Errorf("failed to get rate limit token, %w", err)
+	}
+
+	return releaseToken(fn).release, nil
+}
+
+func nopRelease(error) error { return nil }
+
+type releaseToken func() error
+
+func (f releaseToken) release(err error) error {
+	if err != nil {
+		return nil
+	}
+
+	return f()
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/throttle_error.go 🔗

@@ -0,0 +1,60 @@
+package retry
+
+import (
+	"errors"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+)
+
+// IsErrorThrottle provides the interface of an implementation to determine
+// if an error response from an operation is a throttling error.
+type IsErrorThrottle interface {
+	IsErrorThrottle(error) aws.Ternary
+}
+
+// IsErrorThrottles is a collection of checks to determine if the error is a
+// throttle error. Iterates through the checks and returns the state of
+// throttle if any check returns something other than unknown.
+type IsErrorThrottles []IsErrorThrottle
+
+// IsErrorThrottle returns if the error is a throttle error if any of the
+// checks in the list return a value other than unknown.
+func (r IsErrorThrottles) IsErrorThrottle(err error) aws.Ternary {
+	for _, re := range r {
+		if v := re.IsErrorThrottle(err); v != aws.UnknownTernary {
+			return v
+		}
+	}
+	return aws.UnknownTernary
+}
+
+// IsErrorThrottleFunc wraps a function with the IsErrorThrottle interface.
+type IsErrorThrottleFunc func(error) aws.Ternary
+
+// IsErrorThrottle returns if the error is a throttle error.
+func (fn IsErrorThrottleFunc) IsErrorThrottle(err error) aws.Ternary {
+	return fn(err)
+}
+
+// ThrottleErrorCode determines if an attempt should be retried based on the
+// API error code.
+type ThrottleErrorCode struct {
+	Codes map[string]struct{}
+}
+
+// IsErrorThrottle returns if the error is a throttle error based on the error
+// codes. Returns unknown if the error doesn't have a code or it is unknown.
+func (r ThrottleErrorCode) IsErrorThrottle(err error) aws.Ternary {
+	var v interface{ ErrorCode() string }
+
+	if !errors.As(err, &v) {
+		return aws.UnknownTernary
+	}
+
+	_, ok := r.Codes[v.ErrorCode()]
+	if !ok {
+		return aws.UnknownTernary
+	}
+
+	return aws.TrueTernary
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/retry/timeout_error.go 🔗

@@ -0,0 +1,52 @@
+package retry
+
+import (
+	"errors"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+)
+
+// IsErrorTimeout provides the interface of an implementation to determine
+// if an error is a timeout error.
+type IsErrorTimeout interface {
+	IsErrorTimeout(err error) aws.Ternary
+}
+
+// IsErrorTimeouts is a collection of checks to determine if the error is a
+// timeout error. Iterates through the checks and returns the state of timeout
+// if any check returns something other than unknown.
+type IsErrorTimeouts []IsErrorTimeout
+
+// IsErrorTimeout returns if the error is a timeout if any of the checks in
+// the list return a value other than unknown.
+func (ts IsErrorTimeouts) IsErrorTimeout(err error) aws.Ternary {
+	for _, t := range ts {
+		if v := t.IsErrorTimeout(err); v != aws.UnknownTernary {
+			return v
+		}
+	}
+	return aws.UnknownTernary
+}
+
+// IsErrorTimeoutFunc wraps a function with the IsErrorTimeout interface.
+type IsErrorTimeoutFunc func(error) aws.Ternary
+
+// IsErrorTimeout returns if the error is a timeout.
+func (fn IsErrorTimeoutFunc) IsErrorTimeout(err error) aws.Ternary {
+	return fn(err)
+}
+
+// TimeouterError provides the IsErrorTimeout implementation for determining
+// if an error is a timeout based on whether it implements the Timeout method.
+type TimeouterError struct{}
+
+// IsErrorTimeout returns if the error is a timeout error.
+func (t TimeouterError) IsErrorTimeout(err error) aws.Ternary {
+	var v interface{ Timeout() bool }
+
+	if !errors.As(err, &v) {
+		return aws.UnknownTernary
+	}
+
+	return aws.BoolTernary(v.Timeout())
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/retryer.go 🔗

@@ -0,0 +1,127 @@
+package aws
+
+import (
+	"context"
+	"fmt"
+	"time"
+)
+
+// RetryMode provides the mode the API client will use to create a retryer
+// based on.
+type RetryMode string
+
+const (
+	// RetryModeStandard model provides rate limited retry attempts with
+	// exponential backoff delay.
+	RetryModeStandard RetryMode = "standard"
+
+	// RetryModeAdaptive model provides attempt send rate limiting on throttle
+	// responses in addition to standard mode's retry rate limiting.
+	//
+	// Adaptive retry mode is experimental and is subject to change in the
+	// future.
+	RetryModeAdaptive RetryMode = "adaptive"
+)
+
+// ParseRetryMode attempts to parse a RetryMode from the given string,
+// returning an error if the value is not a known RetryMode.
+func ParseRetryMode(v string) (mode RetryMode, err error) {
+	switch v {
+	case "standard":
+		return RetryModeStandard, nil
+	case "adaptive":
+		return RetryModeAdaptive, nil
+	default:
+		return mode, fmt.Errorf("unknown RetryMode, %v", v)
+	}
+}
+
+func (m RetryMode) String() string { return string(m) }
+
+// Retryer is an interface to determine if a given error from an
+// attempt should be retried, and if so what backoff delay to apply. The
+// default implementation used by most services is the retry package's
+// Standard type, which contains basic retry logic using exponential backoff.
+type Retryer interface {
+	// IsErrorRetryable returns if the failed attempt is retryable. This check
+	// should determine if the error can be retried, or if the error is
+	// terminal.
+	IsErrorRetryable(error) bool
+
+	// MaxAttempts returns the maximum number of attempts that can be made for
+	// an attempt before failing. A value of 0 implies that the attempt should
+	// be retried until it succeeds if the errors are retryable.
+	MaxAttempts() int
+
+	// RetryDelay returns the delay that should be used before retrying the
+	// attempt. Will return error if the delay could not be determined.
+	RetryDelay(attempt int, opErr error) (time.Duration, error)
+
+	// GetRetryToken attempts to deduct the retry cost from the retry token
+	// pool, returning the token release function, or error.
+	GetRetryToken(ctx context.Context, opErr error) (releaseToken func(error) error, err error)
+
+	// GetInitialToken returns the initial attempt token that can increment the
+	// retry token pool if the attempt is successful.
+	GetInitialToken() (releaseToken func(error) error)
+}
+
+// RetryerV2 is an interface to determine if a given error from an attempt
+// should be retried, and if so what backoff delay to apply. The default
+// implementation used by most services is the retry package's Standard type,
+// which contains basic retry logic using exponential backoff.
+//
+// RetryerV2 replaces the Retryer interface, deprecating the GetInitialToken
+// method in favor of GetAttemptToken which takes a context, and can return an error.
+//
+// The SDK's retry package's Attempt middleware and utilities will always
+// wrap a Retryer as a RetryerV2, delegating to GetInitialToken only if
+// GetAttemptToken is not implemented.
+type RetryerV2 interface {
+	Retryer
+
+	// GetInitialToken returns the initial attempt token that can increment the
+	// retry token pool if the attempt is successful.
+	//
+	// Deprecated: This method does not provide a way to block using Context,
+	// nor can it return an error. Use RetryerV2, and GetAttemptToken instead.
+	GetInitialToken() (releaseToken func(error) error)
+
+	// GetAttemptToken returns the send token that can be used to rate limit
+	// attempt calls. Will be used by the SDK's retry package's Attempt
+	// middleware to get a send token prior to making the attempt, and
+	// releasing the send token after the attempt has been made.
+	GetAttemptToken(context.Context) (func(error) error, error)
+}
+
+// NopRetryer provides a Retryer implementation that will flag
+// all attempt errors as not retryable, with a max attempts of 1.
+type NopRetryer struct{}
+
+// IsErrorRetryable returns false for all error values.
+func (NopRetryer) IsErrorRetryable(error) bool { return false }
+
+// MaxAttempts always returns 1 for the original attempt.
+func (NopRetryer) MaxAttempts() int { return 1 }
+
+// RetryDelay is not valid for the NopRetryer. Will always return error.
+func (NopRetryer) RetryDelay(int, error) (time.Duration, error) {
+	return 0, fmt.Errorf("not retrying any attempt errors")
+}
+
+// GetRetryToken returns a stub function that does nothing.
+func (NopRetryer) GetRetryToken(context.Context, error) (func(error) error, error) {
+	return nopReleaseToken, nil
+}
+
+// GetInitialToken returns a stub function that does nothing.
+func (NopRetryer) GetInitialToken() func(error) error {
+	return nopReleaseToken
+}
+
+// GetAttemptToken returns a stub function that does nothing.
+func (NopRetryer) GetAttemptToken(context.Context) (func(error) error, error) {
+	return nopReleaseToken, nil
+}
+
+func nopReleaseToken(error) error { return nil }

vendor/github.com/aws/aws-sdk-go-v2/aws/runtime.go 🔗

@@ -0,0 +1,14 @@
+package aws
+
+// ExecutionEnvironmentID is the AWS execution environment runtime identifier.
+type ExecutionEnvironmentID string
+
+// RuntimeEnvironment is a collection of values that are determined at runtime
+// based on the environment that the SDK is executing in. Some of these values
+// may or may not be present based on the executing environment and certain SDK
+// configuration properties that drive whether these values are populated.
+type RuntimeEnvironment struct {
+	EnvironmentIdentifier     ExecutionEnvironmentID
+	Region                    string
+	EC2InstanceMetadataRegion string
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/cache.go 🔗

@@ -0,0 +1,115 @@
+package v4
+
+import (
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+)
+
+func lookupKey(service, region string) string {
+	var s strings.Builder
+	s.Grow(len(region) + len(service) + 3)
+	s.WriteString(region)
+	s.WriteRune('/')
+	s.WriteString(service)
+	return s.String()
+}
+
+type derivedKey struct {
+	AccessKey  string
+	Date       time.Time
+	Credential []byte
+}
+
+type derivedKeyCache struct {
+	values map[string]derivedKey
+	mutex  sync.RWMutex
+}
+
+func newDerivedKeyCache() derivedKeyCache {
+	return derivedKeyCache{
+		values: make(map[string]derivedKey),
+	}
+}
+
+func (s *derivedKeyCache) Get(credentials aws.Credentials, service, region string, signingTime SigningTime) []byte {
+	key := lookupKey(service, region)
+	s.mutex.RLock()
+	if cred, ok := s.get(key, credentials, signingTime.Time); ok {
+		s.mutex.RUnlock()
+		return cred
+	}
+	s.mutex.RUnlock()
+
+	s.mutex.Lock()
+	if cred, ok := s.get(key, credentials, signingTime.Time); ok {
+		s.mutex.Unlock()
+		return cred
+	}
+	cred := deriveKey(credentials.SecretAccessKey, service, region, signingTime)
+	entry := derivedKey{
+		AccessKey:  credentials.AccessKeyID,
+		Date:       signingTime.Time,
+		Credential: cred,
+	}
+	s.values[key] = entry
+	s.mutex.Unlock()
+
+	return cred
+}
+
+func (s *derivedKeyCache) get(key string, credentials aws.Credentials, signingTime time.Time) ([]byte, bool) {
+	cacheEntry, ok := s.retrieveFromCache(key)
+	if ok && cacheEntry.AccessKey == credentials.AccessKeyID && isSameDay(signingTime, cacheEntry.Date) {
+		return cacheEntry.Credential, true
+	}
+	return nil, false
+}
+
+func (s *derivedKeyCache) retrieveFromCache(key string) (derivedKey, bool) {
+	if v, ok := s.values[key]; ok {
+		return v, true
+	}
+	return derivedKey{}, false
+}
+
+// SigningKeyDeriver derives a signing key from a set of credentials
+type SigningKeyDeriver struct {
+	cache derivedKeyCache
+}
+
+// NewSigningKeyDeriver returns a new SigningKeyDeriver
+func NewSigningKeyDeriver() *SigningKeyDeriver {
+	return &SigningKeyDeriver{
+		cache: newDerivedKeyCache(),
+	}
+}
+
+// DeriveKey returns a derived signing key from the given credentials to be used with SigV4 signing.
+func (k *SigningKeyDeriver) DeriveKey(credential aws.Credentials, service, region string, signingTime SigningTime) []byte {
+	return k.cache.Get(credential, service, region, signingTime)
+}
+
+func deriveKey(secret, service, region string, t SigningTime) []byte {
+	hmacDate := HMACSHA256([]byte("AWS4"+secret), []byte(t.ShortTimeFormat()))
+	hmacRegion := HMACSHA256(hmacDate, []byte(region))
+	hmacService := HMACSHA256(hmacRegion, []byte(service))
+	return HMACSHA256(hmacService, []byte("aws4_request"))
+}
+
+func isSameDay(x, y time.Time) bool {
+	xYear, xMonth, xDay := x.Date()
+	yYear, yMonth, yDay := y.Date()
+
+	if xYear != yYear {
+		return false
+	}
+
+	if xMonth != yMonth {
+		return false
+	}
+
+	return xDay == yDay
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/const.go 🔗

@@ -0,0 +1,40 @@
+package v4
+
+// Signature Version 4 (SigV4) Constants
+const (
+	// EmptyStringSHA256 is the hex encoded sha256 value of an empty string
+	EmptyStringSHA256 = `e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855`
+
+	// UnsignedPayload indicates that the request payload body is unsigned
+	UnsignedPayload = "UNSIGNED-PAYLOAD"
+
+	// AmzAlgorithmKey indicates the signing algorithm
+	AmzAlgorithmKey = "X-Amz-Algorithm"
+
+	// AmzSecurityTokenKey indicates the security token to be used with temporary credentials
+	AmzSecurityTokenKey = "X-Amz-Security-Token"
+
+	// AmzDateKey is the UTC timestamp for the request in the format YYYYMMDD'T'HHMMSS'Z'
+	AmzDateKey = "X-Amz-Date"
+
+	// AmzCredentialKey is the access key ID and credential scope
+	AmzCredentialKey = "X-Amz-Credential"
+
+	// AmzSignedHeadersKey is the set of headers signed for the request
+	AmzSignedHeadersKey = "X-Amz-SignedHeaders"
+
+	// AmzSignatureKey is the query parameter to store the SigV4 signature
+	AmzSignatureKey = "X-Amz-Signature"
+
+	// TimeFormat is the time format to be used in the X-Amz-Date header or query parameter
+	TimeFormat = "20060102T150405Z"
+
+	// ShortTimeFormat is the shortened time format used in the credential scope
+	ShortTimeFormat = "20060102"
+
+	// ContentSHAKey is the SHA256 of the request body
+	ContentSHAKey = "X-Amz-Content-Sha256"
+
+	// StreamingEventsPayload indicates that the request payload body is a signed event stream.
+	StreamingEventsPayload = "STREAMING-AWS4-HMAC-SHA256-EVENTS"
+)

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/header_rules.go 🔗

@@ -0,0 +1,82 @@
+package v4
+
+import (
+	sdkstrings "github.com/aws/aws-sdk-go-v2/internal/strings"
+)
+
+// Rules houses a set of Rule needed for validation of a
+// string value
+type Rules []Rule
+
+// Rule interface allows for more flexible rules and simply checks
+// whether or not a value adheres to that Rule
+type Rule interface {
+	IsValid(value string) bool
+}
+
+// IsValid iterates through all rules and returns true if any rule applies
+// to the value; nested rules are supported
+func (r Rules) IsValid(value string) bool {
+	for _, rule := range r {
+		if rule.IsValid(value) {
+			return true
+		}
+	}
+	return false
+}
+
+// MapRule generic Rule for maps
+type MapRule map[string]struct{}
+
+// IsValid for the map Rule returns whether the value exists in the map
+func (m MapRule) IsValid(value string) bool {
+	_, ok := m[value]
+	return ok
+}
+
+// AllowList is a generic Rule for include listing
+type AllowList struct {
+	Rule
+}
+
+// IsValid for AllowList checks if the value is within the AllowList
+func (w AllowList) IsValid(value string) bool {
+	return w.Rule.IsValid(value)
+}
+
+// ExcludeList is a generic Rule for exclude listing
+type ExcludeList struct {
+	Rule
+}
+
+// IsValid for ExcludeList checks that the value is not within the underlying Rule
+func (b ExcludeList) IsValid(value string) bool {
+	return !b.Rule.IsValid(value)
+}
+
+// Patterns is a list of strings to match against
+type Patterns []string
+
+// IsValid for Patterns checks each pattern and returns if a match has
+// been found
+func (p Patterns) IsValid(value string) bool {
+	for _, pattern := range p {
+		if sdkstrings.HasPrefixFold(value, pattern) {
+			return true
+		}
+	}
+	return false
+}
+
+// InclusiveRules allows rules to depend on one another
+type InclusiveRules []Rule
+
+// IsValid will return true if all rules are true
+func (r InclusiveRules) IsValid(value string) bool {
+	for _, rule := range r {
+		if !rule.IsValid(value) {
+			return false
+		}
+	}
+	return true
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/headers.go 🔗

@@ -0,0 +1,70 @@
+package v4
+
+// IgnoredHeaders is a list of headers that are ignored during signing
+var IgnoredHeaders = Rules{
+	ExcludeList{
+		MapRule{
+			"Authorization":   struct{}{},
+			"User-Agent":      struct{}{},
+			"X-Amzn-Trace-Id": struct{}{},
+			"Expect":          struct{}{},
+		},
+	},
+}
+
+// RequiredSignedHeaders is an allow list for Build canonical headers.
+var RequiredSignedHeaders = Rules{
+	AllowList{
+		MapRule{
+			"Cache-Control":                         struct{}{},
+			"Content-Disposition":                   struct{}{},
+			"Content-Encoding":                      struct{}{},
+			"Content-Language":                      struct{}{},
+			"Content-Md5":                           struct{}{},
+			"Content-Type":                          struct{}{},
+			"Expires":                               struct{}{},
+			"If-Match":                              struct{}{},
+			"If-Modified-Since":                     struct{}{},
+			"If-None-Match":                         struct{}{},
+			"If-Unmodified-Since":                   struct{}{},
+			"Range":                                 struct{}{},
+			"X-Amz-Acl":                             struct{}{},
+			"X-Amz-Copy-Source":                     struct{}{},
+			"X-Amz-Copy-Source-If-Match":            struct{}{},
+			"X-Amz-Copy-Source-If-Modified-Since":   struct{}{},
+			"X-Amz-Copy-Source-If-None-Match":       struct{}{},
+			"X-Amz-Copy-Source-If-Unmodified-Since": struct{}{},
+			"X-Amz-Copy-Source-Range":               struct{}{},
+			"X-Amz-Copy-Source-Server-Side-Encryption-Customer-Algorithm": struct{}{},
+			"X-Amz-Copy-Source-Server-Side-Encryption-Customer-Key":       struct{}{},
+			"X-Amz-Copy-Source-Server-Side-Encryption-Customer-Key-Md5":   struct{}{},
+			"X-Amz-Grant-Full-control":                                    struct{}{},
+			"X-Amz-Grant-Read":                                            struct{}{},
+			"X-Amz-Grant-Read-Acp":                                        struct{}{},
+			"X-Amz-Grant-Write":                                           struct{}{},
+			"X-Amz-Grant-Write-Acp":                                       struct{}{},
+			"X-Amz-Metadata-Directive":                                    struct{}{},
+			"X-Amz-Mfa":                                                   struct{}{},
+			"X-Amz-Request-Payer":                                         struct{}{},
+			"X-Amz-Server-Side-Encryption":                                struct{}{},
+			"X-Amz-Server-Side-Encryption-Aws-Kms-Key-Id":                 struct{}{},
+			"X-Amz-Server-Side-Encryption-Context":                        struct{}{},
+			"X-Amz-Server-Side-Encryption-Customer-Algorithm":             struct{}{},
+			"X-Amz-Server-Side-Encryption-Customer-Key":                   struct{}{},
+			"X-Amz-Server-Side-Encryption-Customer-Key-Md5":               struct{}{},
+			"X-Amz-Storage-Class":                                         struct{}{},
+			"X-Amz-Website-Redirect-Location":                             struct{}{},
+			"X-Amz-Content-Sha256":                                        struct{}{},
+			"X-Amz-Tagging":                                               struct{}{},
+		},
+	},
+	Patterns{"X-Amz-Object-Lock-"},
+	Patterns{"X-Amz-Meta-"},
+}
+
+// AllowedQueryHoisting is an allow list of headers that may be hoisted into
+// the Build query string.
+var AllowedQueryHoisting = InclusiveRules{
+	ExcludeList{RequiredSignedHeaders},
+	Patterns{"X-Amz-"},
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/hmac.go 🔗

@@ -0,0 +1,13 @@
+package v4
+
+import (
+	"crypto/hmac"
+	"crypto/sha256"
+)
+
+// HMACSHA256 computes an HMAC-SHA256 of data given the provided key.
+func HMACSHA256(key []byte, data []byte) []byte {
+	hash := hmac.New(sha256.New, key)
+	hash.Write(data)
+	return hash.Sum(nil)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/host.go 🔗

@@ -0,0 +1,75 @@
+package v4
+
+import (
+	"net/http"
+	"strings"
+)
+
+// SanitizeHostForHeader removes the default port from the host and updates request.Host
+func SanitizeHostForHeader(r *http.Request) {
+	host := getHost(r)
+	port := portOnly(host)
+	if port != "" && isDefaultPort(r.URL.Scheme, port) {
+		r.Host = stripPort(host)
+	}
+}
+
+// Returns host from request
+func getHost(r *http.Request) string {
+	if r.Host != "" {
+		return r.Host
+	}
+
+	return r.URL.Host
+}
+
+// Hostname returns u.Host, without any port number.
+//
+// If Host is an IPv6 literal with a port number, Hostname returns the
+// IPv6 literal without the square brackets. IPv6 literals may include
+// a zone identifier.
+//
+// Copied from the Go 1.8 standard library (net/url)
+func stripPort(hostport string) string {
+	colon := strings.IndexByte(hostport, ':')
+	if colon == -1 {
+		return hostport
+	}
+	if i := strings.IndexByte(hostport, ']'); i != -1 {
+		return strings.TrimPrefix(hostport[:i], "[")
+	}
+	return hostport[:colon]
+}
+
+// Port returns the port part of u.Host, without the leading colon.
+// If u.Host doesn't contain a port, Port returns an empty string.
+//
+// Copied from the Go 1.8 standard library (net/url)
+func portOnly(hostport string) string {
+	colon := strings.IndexByte(hostport, ':')
+	if colon == -1 {
+		return ""
+	}
+	if i := strings.Index(hostport, "]:"); i != -1 {
+		return hostport[i+len("]:"):]
+	}
+	if strings.Contains(hostport, "]") {
+		return ""
+	}
+	return hostport[colon+len(":"):]
+}
+
+// Returns true if the specified URI is using the standard port
+// (i.e. port 80 for HTTP URIs or 443 for HTTPS URIs)
+func isDefaultPort(scheme, port string) bool {
+	if port == "" {
+		return true
+	}
+
+	lowerCaseScheme := strings.ToLower(scheme)
+	if (lowerCaseScheme == "http" && port == "80") || (lowerCaseScheme == "https" && port == "443") {
+		return true
+	}
+
+	return false
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/scope.go 🔗

@@ -0,0 +1,13 @@
+package v4
+
+import "strings"
+
+// BuildCredentialScope builds the Signature Version 4 (SigV4) signing scope
+func BuildCredentialScope(signingTime SigningTime, region, service string) string {
+	return strings.Join([]string{
+		signingTime.ShortTimeFormat(),
+		region,
+		service,
+		"aws4_request",
+	}, "/")
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/time.go 🔗

@@ -0,0 +1,36 @@
+package v4
+
+import "time"
+
+// SigningTime provides a wrapper around a time.Time which provides cached values for SigV4 signing.
+type SigningTime struct {
+	time.Time
+	timeFormat      string
+	shortTimeFormat string
+}
+
+// NewSigningTime creates a new SigningTime given a time.Time
+func NewSigningTime(t time.Time) SigningTime {
+	return SigningTime{
+		Time: t,
+	}
+}
+
+// TimeFormat provides a time formatted in the X-Amz-Date format.
+func (m *SigningTime) TimeFormat() string {
+	return m.format(&m.timeFormat, TimeFormat)
+}
+
+// ShortTimeFormat provides a time formatted as 20060102.
+func (m *SigningTime) ShortTimeFormat() string {
+	return m.format(&m.shortTimeFormat, ShortTimeFormat)
+}
+
+func (m *SigningTime) format(target *string, format string) string {
+	if len(*target) > 0 {
+		return *target
+	}
+	v := m.Time.Format(format)
+	*target = v
+	return v
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4/util.go 🔗

@@ -0,0 +1,80 @@
+package v4
+
+import (
+	"net/url"
+	"strings"
+)
+
+const doubleSpace = "  "
+
+// StripExcessSpaces trims leading and trailing spaces from the passed-in
+// string and collapses any run of multiple side-by-side spaces into one.
+func StripExcessSpaces(str string) string {
+	var j, k, l, m, spaces int
+	// Trim trailing spaces
+	for j = len(str) - 1; j >= 0 && str[j] == ' '; j-- {
+	}
+
+	// Trim leading spaces
+	for k = 0; k < j && str[k] == ' '; k++ {
+	}
+	str = str[k : j+1]
+
+	// Strip multiple spaces.
+	j = strings.Index(str, doubleSpace)
+	if j < 0 {
+		return str
+	}
+
+	buf := []byte(str)
+	for k, m, l = j, j, len(buf); k < l; k++ {
+		if buf[k] == ' ' {
+			if spaces == 0 {
+				// First space.
+				buf[m] = buf[k]
+				m++
+			}
+			spaces++
+		} else {
+			// End of multiple spaces.
+			spaces = 0
+			buf[m] = buf[k]
+			m++
+		}
+	}
+
+	return string(buf[:m])
+}
+
+// GetURIPath returns the escaped URI component from the provided URL.
+func GetURIPath(u *url.URL) string {
+	var uriPath string
+
+	if len(u.Opaque) > 0 {
+		const schemeSep, pathSep, queryStart = "//", "/", "?"
+
+		opaque := u.Opaque
+		// Cut off the query string if present.
+		if idx := strings.Index(opaque, queryStart); idx >= 0 {
+			opaque = opaque[:idx]
+		}
+
+		// Cut out the scheme separator if present.
+		if strings.Index(opaque, schemeSep) == 0 {
+			opaque = opaque[len(schemeSep):]
+		}
+
+		// capture URI path starting with first path separator.
+		if idx := strings.Index(opaque, pathSep); idx >= 0 {
+			uriPath = opaque[idx:]
+		}
+	} else {
+		uriPath = u.EscapedPath()
+	}
+
+	if len(uriPath) == 0 {
+		uriPath = "/"
+	}
+
+	return uriPath
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/v4/middleware.go 🔗

@@ -0,0 +1,414 @@
+package v4
+
+import (
+	"context"
+	"crypto/sha256"
+	"encoding/hex"
+	"fmt"
+	"io"
+	"net/http"
+	"strings"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	v4Internal "github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4"
+	internalauth "github.com/aws/aws-sdk-go-v2/internal/auth"
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+const computePayloadHashMiddlewareID = "ComputePayloadHash"
+
+// HashComputationError indicates an error occurred while computing the signing hash
+type HashComputationError struct {
+	Err error
+}
+
+// Error is the error message
+func (e *HashComputationError) Error() string {
+	return fmt.Sprintf("failed to compute payload hash: %v", e.Err)
+}
+
+// Unwrap returns the underlying error if one is set
+func (e *HashComputationError) Unwrap() error {
+	return e.Err
+}
+
+// SigningError indicates an error condition occurred while performing SigV4 signing
+type SigningError struct {
+	Err error
+}
+
+func (e *SigningError) Error() string {
+	return fmt.Sprintf("failed to sign request: %v", e.Err)
+}
+
+// Unwrap returns the underlying error cause
+func (e *SigningError) Unwrap() error {
+	return e.Err
+}
+
+// UseDynamicPayloadSigningMiddleware swaps the compute payload SHA256 middleware with a resolver
+// middleware that switches between unsigned and signed payload based on the TLS state of the request.
+// This middleware should not be used for AWS APIs that do not support unsigned payload signing auth.
+// By default, the SDK uses this middleware for known AWS APIs that support such TLS-based auth selection.
+//
+// Usage example: the S3 PutObject API allows unsigned payload auth when TLS is enabled, and uses
+// this middleware to dynamically switch between unsigned and signed payload based on the TLS state
+// of the request.
+func UseDynamicPayloadSigningMiddleware(stack *middleware.Stack) error {
+	_, err := stack.Finalize.Swap(computePayloadHashMiddlewareID, &dynamicPayloadSigningMiddleware{})
+	return err
+}
+
+// dynamicPayloadSigningMiddleware dynamically resolves the middleware that computes and sets the payload SHA256.
+type dynamicPayloadSigningMiddleware struct {
+}
+
+// ID returns the resolver identifier
+func (m *dynamicPayloadSigningMiddleware) ID() string {
+	return computePayloadHashMiddlewareID
+}
+
+// HandleFinalize delegates SHA256 computation according to whether the request
+// is TLS-enabled.
+func (m *dynamicPayloadSigningMiddleware) HandleFinalize(
+	ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler,
+) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	if req.IsHTTPS() {
+		return (&UnsignedPayload{}).HandleFinalize(ctx, in, next)
+	}
+	return (&ComputePayloadSHA256{}).HandleFinalize(ctx, in, next)
+}
+
+// UnsignedPayload sets the SigV4 request payload hash to unsigned.
+//
+// Will not set the Unsigned Payload magic SHA value if a SHA has already been
+// stored in the context (e.g. the application pre-computed the SHA256 before
+// making the API call).
+//
+// This middleware does not check the X-Amz-Content-Sha256 header; if that
+// header is serialized, a middleware must translate it into the context.
+type UnsignedPayload struct{}
+
+// AddUnsignedPayloadMiddleware adds unsignedPayload to the operation
+// middleware stack
+func AddUnsignedPayloadMiddleware(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&UnsignedPayload{}, "ResolveEndpointV2", middleware.After)
+}
+
+// ID returns the unsignedPayload identifier
+func (m *UnsignedPayload) ID() string {
+	return computePayloadHashMiddlewareID
+}
+
+// HandleFinalize sets the payload hash magic value to the unsigned sentinel.
+func (m *UnsignedPayload) HandleFinalize(
+	ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler,
+) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	if GetPayloadHash(ctx) == "" {
+		ctx = SetPayloadHash(ctx, v4Internal.UnsignedPayload)
+	}
+	return next.HandleFinalize(ctx, in)
+}
+
+// ComputePayloadSHA256 computes SHA256 payload hash to sign.
+//
+// Will not set the Unsigned Payload magic SHA value if a SHA has already been
+// stored in the context (e.g. the application pre-computed the SHA256 before
+// making the API call).
+//
+// This middleware does not check the X-Amz-Content-Sha256 header; if that
+// header is serialized, a middleware must translate it into the context.
+type ComputePayloadSHA256 struct{}
+
+// AddComputePayloadSHA256Middleware adds computePayloadSHA256 to the
+// operation middleware stack
+func AddComputePayloadSHA256Middleware(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&ComputePayloadSHA256{}, "ResolveEndpointV2", middleware.After)
+}
+
+// RemoveComputePayloadSHA256Middleware removes computePayloadSHA256 from the
+// operation middleware stack
+func RemoveComputePayloadSHA256Middleware(stack *middleware.Stack) error {
+	_, err := stack.Finalize.Remove(computePayloadHashMiddlewareID)
+	return err
+}
+
+// ID is the middleware name
+func (m *ComputePayloadSHA256) ID() string {
+	return computePayloadHashMiddlewareID
+}
+
+// HandleFinalize computes the payload hash for the request, storing it to the
+// context. This is a no-op if a caller has previously set that value.
+func (m *ComputePayloadSHA256) HandleFinalize(
+	ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler,
+) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	if GetPayloadHash(ctx) != "" {
+		return next.HandleFinalize(ctx, in)
+	}
+
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &HashComputationError{
+			Err: fmt.Errorf("unexpected request middleware type %T", in.Request),
+		}
+	}
+
+	hash := sha256.New()
+	if stream := req.GetStream(); stream != nil {
+		_, err = io.Copy(hash, stream)
+		if err != nil {
+			return out, metadata, &HashComputationError{
+				Err: fmt.Errorf("failed to compute payload hash, %w", err),
+			}
+		}
+
+		if err := req.RewindStream(); err != nil {
+			return out, metadata, &HashComputationError{
+				Err: fmt.Errorf("failed to seek body to start, %w", err),
+			}
+		}
+	}
+
+	ctx = SetPayloadHash(ctx, hex.EncodeToString(hash.Sum(nil)))
+
+	return next.HandleFinalize(ctx, in)
+}
+
+// SwapComputePayloadSHA256ForUnsignedPayloadMiddleware replaces the
+// ComputePayloadSHA256 middleware with the UnsignedPayload middleware.
+//
+// Use this to disable computing the Payload SHA256 checksum and instead use
+// UNSIGNED-PAYLOAD for the SHA256 value.
+func SwapComputePayloadSHA256ForUnsignedPayloadMiddleware(stack *middleware.Stack) error {
+	_, err := stack.Finalize.Swap(computePayloadHashMiddlewareID, &UnsignedPayload{})
+	return err
+}
+
+// ContentSHA256Header sets the X-Amz-Content-Sha256 header value to
+// the Payload hash stored in the context.
+type ContentSHA256Header struct{}
+
+// AddContentSHA256HeaderMiddleware adds ContentSHA256Header to the
+// operation middleware stack
+func AddContentSHA256HeaderMiddleware(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&ContentSHA256Header{}, computePayloadHashMiddlewareID, middleware.After)
+}
+
+// RemoveContentSHA256HeaderMiddleware removes contentSHA256Header middleware
+// from the operation middleware stack
+func RemoveContentSHA256HeaderMiddleware(stack *middleware.Stack) error {
+	_, err := stack.Finalize.Remove((*ContentSHA256Header)(nil).ID())
+	return err
+}
+
+// ID returns the ContentSHA256HeaderMiddleware identifier
+func (m *ContentSHA256Header) ID() string {
+	return "SigV4ContentSHA256Header"
+}
+
+// HandleFinalize sets the X-Amz-Content-Sha256 header value to the Payload hash
+// stored in the context.
+func (m *ContentSHA256Header) HandleFinalize(
+	ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler,
+) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &HashComputationError{Err: fmt.Errorf("unexpected request middleware type %T", in.Request)}
+	}
+
+	req.Header.Set(v4Internal.ContentSHAKey, GetPayloadHash(ctx))
+	return next.HandleFinalize(ctx, in)
+}
+
+// SignHTTPRequestMiddlewareOptions is the configuration options for
+// [SignHTTPRequestMiddleware].
+//
+// Deprecated: [SignHTTPRequestMiddleware] is deprecated.
+type SignHTTPRequestMiddlewareOptions struct {
+	CredentialsProvider aws.CredentialsProvider
+	Signer              HTTPSigner
+	LogSigning          bool
+}
+
+// SignHTTPRequestMiddleware is a `FinalizeMiddleware` implementation for SigV4
+// HTTP Signing.
+//
+// Deprecated: AWS service clients no longer use this middleware. Signing as an
+// SDK operation is now performed through an internal per-service middleware
+// which opaquely selects and uses the signer from the resolved auth scheme.
+type SignHTTPRequestMiddleware struct {
+	credentialsProvider aws.CredentialsProvider
+	signer              HTTPSigner
+	logSigning          bool
+}
+
+// NewSignHTTPRequestMiddleware constructs a [SignHTTPRequestMiddleware] using
+// the given [Signer] for signing requests.
+//
+// Deprecated: SignHTTPRequestMiddleware is deprecated.
+func NewSignHTTPRequestMiddleware(options SignHTTPRequestMiddlewareOptions) *SignHTTPRequestMiddleware {
+	return &SignHTTPRequestMiddleware{
+		credentialsProvider: options.CredentialsProvider,
+		signer:              options.Signer,
+		logSigning:          options.LogSigning,
+	}
+}
+
+// ID is the SignHTTPRequestMiddleware identifier.
+//
+// Deprecated: SignHTTPRequestMiddleware is deprecated.
+func (s *SignHTTPRequestMiddleware) ID() string {
+	return "Signing"
+}
+
+// HandleFinalize will take the provided input and sign the request using the
+// SigV4 authentication scheme.
+//
+// Deprecated: SignHTTPRequestMiddleware is deprecated.
+func (s *SignHTTPRequestMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	if !haveCredentialProvider(s.credentialsProvider) {
+		return next.HandleFinalize(ctx, in)
+	}
+
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &SigningError{Err: fmt.Errorf("unexpected request middleware type %T", in.Request)}
+	}
+
+	signingName, signingRegion := awsmiddleware.GetSigningName(ctx), awsmiddleware.GetSigningRegion(ctx)
+	payloadHash := GetPayloadHash(ctx)
+	if len(payloadHash) == 0 {
+		return out, metadata, &SigningError{Err: fmt.Errorf("computed payload hash missing from context")}
+	}
+
+	credentials, err := s.credentialsProvider.Retrieve(ctx)
+	if err != nil {
+		return out, metadata, &SigningError{Err: fmt.Errorf("failed to retrieve credentials: %w", err)}
+	}
+
+	signerOptions := []func(o *SignerOptions){
+		func(o *SignerOptions) {
+			o.Logger = middleware.GetLogger(ctx)
+			o.LogSigning = s.logSigning
+		},
+	}
+
+	// existing DisableURIPathEscaping is equivalent in purpose
+	// to authentication scheme property DisableDoubleEncoding
+	disableDoubleEncoding, overridden := internalauth.GetDisableDoubleEncoding(ctx)
+	if overridden {
+		signerOptions = append(signerOptions, func(o *SignerOptions) {
+			o.DisableURIPathEscaping = disableDoubleEncoding
+		})
+	}
+
+	err = s.signer.SignHTTP(ctx, credentials, req.Request, payloadHash, signingName, signingRegion, sdk.NowTime(), signerOptions...)
+	if err != nil {
+		return out, metadata, &SigningError{Err: fmt.Errorf("failed to sign http request, %w", err)}
+	}
+
+	ctx = awsmiddleware.SetSigningCredentials(ctx, credentials)
+
+	return next.HandleFinalize(ctx, in)
+}
+
+// StreamingEventsPayload signs input event stream messages.
+type StreamingEventsPayload struct{}
+
+// AddStreamingEventsPayload adds the streamingEventsPayload middleware to the stack.
+func AddStreamingEventsPayload(stack *middleware.Stack) error {
+	return stack.Finalize.Add(&StreamingEventsPayload{}, middleware.Before)
+}
+
+// ID identifies the middleware.
+func (s *StreamingEventsPayload) ID() string {
+	return computePayloadHashMiddlewareID
+}
+
+// HandleFinalize marks the input stream to be signed with SigV4.
+func (s *StreamingEventsPayload) HandleFinalize(
+	ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler,
+) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	contentSHA := GetPayloadHash(ctx)
+	if len(contentSHA) == 0 {
+		contentSHA = v4Internal.StreamingEventsPayload
+	}
+
+	ctx = SetPayloadHash(ctx, contentSHA)
+
+	return next.HandleFinalize(ctx, in)
+}
+
+// GetSignedRequestSignature attempts to extract the signature of the request.
+// It returns an error if the request is unsigned or the signature cannot be
+// extracted.
+func GetSignedRequestSignature(r *http.Request) ([]byte, error) {
+	const authHeaderSignatureElem = "Signature="
+
+	if auth := r.Header.Get(authorizationHeader); len(auth) != 0 {
+		ps := strings.Split(auth, ", ")
+		for _, p := range ps {
+			if idx := strings.Index(p, authHeaderSignatureElem); idx >= 0 {
+				sig := p[len(authHeaderSignatureElem):]
+				if len(sig) == 0 {
+					return nil, fmt.Errorf("invalid request signature authorization header")
+				}
+				return hex.DecodeString(sig)
+			}
+		}
+	}
+
+	if sig := r.URL.Query().Get("X-Amz-Signature"); len(sig) != 0 {
+		return hex.DecodeString(sig)
+	}
+
+	return nil, fmt.Errorf("request not signed")
+}
+
+func haveCredentialProvider(p aws.CredentialsProvider) bool {
+	if p == nil {
+		return false
+	}
+
+	return !aws.IsCredentialsProvider(p, (*aws.AnonymousCredentials)(nil))
+}
+
+type payloadHashKey struct{}
+
+// GetPayloadHash retrieves the payload hash to use for signing
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func GetPayloadHash(ctx context.Context) (v string) {
+	v, _ = middleware.GetStackValue(ctx, payloadHashKey{}).(string)
+	return v
+}
+
+// SetPayloadHash sets the payload hash to be used for signing the request
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func SetPayloadHash(ctx context.Context, hash string) context.Context {
+	return middleware.WithStackValue(ctx, payloadHashKey{}, hash)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/v4/presign_middleware.go 🔗

@@ -0,0 +1,127 @@
+package v4
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+	"github.com/aws/smithy-go/middleware"
+	smithyHTTP "github.com/aws/smithy-go/transport/http"
+)
+
+// HTTPPresigner is an interface to a SigV4 signer that can create a presigned
+// URL for an HTTP request.
+type HTTPPresigner interface {
+	PresignHTTP(
+		ctx context.Context, credentials aws.Credentials, r *http.Request,
+		payloadHash string, service string, region string, signingTime time.Time,
+		optFns ...func(*SignerOptions),
+	) (url string, signedHeader http.Header, err error)
+}
+
+// PresignedHTTPRequest provides the URL and signed headers that are included
+// in the presigned URL.
+type PresignedHTTPRequest struct {
+	URL          string
+	Method       string
+	SignedHeader http.Header
+}
+
+// PresignHTTPRequestMiddlewareOptions is the options for the PresignHTTPRequestMiddleware middleware.
+type PresignHTTPRequestMiddlewareOptions struct {
+	CredentialsProvider aws.CredentialsProvider
+	Presigner           HTTPPresigner
+	LogSigning          bool
+}
+
+// PresignHTTPRequestMiddleware provides the Finalize middleware for creating a
+// presigned URL for an HTTP request.
+//
+// Will short circuit the middleware stack and not forward onto the next
+// Finalize handler.
+type PresignHTTPRequestMiddleware struct {
+	credentialsProvider aws.CredentialsProvider
+	presigner           HTTPPresigner
+	logSigning          bool
+}
+
+// NewPresignHTTPRequestMiddleware returns a new PresignHTTPRequestMiddleware
+// initialized with the presigner.
+func NewPresignHTTPRequestMiddleware(options PresignHTTPRequestMiddlewareOptions) *PresignHTTPRequestMiddleware {
+	return &PresignHTTPRequestMiddleware{
+		credentialsProvider: options.CredentialsProvider,
+		presigner:           options.Presigner,
+		logSigning:          options.LogSigning,
+	}
+}
+
+// ID provides the middleware ID.
+func (*PresignHTTPRequestMiddleware) ID() string { return "PresignHTTPRequest" }
+
+// HandleFinalize will take the provided input and create a presigned URL for
+// the HTTP request using the SigV4 presign authentication scheme.
+//
+// Since the signed request is not a valid HTTP request, this middleware short
+// circuits the stack and does not forward to the next Finalize handler.
+func (s *PresignHTTPRequestMiddleware) HandleFinalize(
+	ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler,
+) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyHTTP.Request)
+	if !ok {
+		return out, metadata, &SigningError{
+			Err: fmt.Errorf("unexpected request middleware type %T", in.Request),
+		}
+	}
+
+	httpReq := req.Build(ctx)
+	if !haveCredentialProvider(s.credentialsProvider) {
+		out.Result = &PresignedHTTPRequest{
+			URL:          httpReq.URL.String(),
+			Method:       httpReq.Method,
+			SignedHeader: http.Header{},
+		}
+
+		return out, metadata, nil
+	}
+
+	signingName := awsmiddleware.GetSigningName(ctx)
+	signingRegion := awsmiddleware.GetSigningRegion(ctx)
+	payloadHash := GetPayloadHash(ctx)
+	if len(payloadHash) == 0 {
+		return out, metadata, &SigningError{
+			Err: fmt.Errorf("computed payload hash missing from context"),
+		}
+	}
+
+	credentials, err := s.credentialsProvider.Retrieve(ctx)
+	if err != nil {
+		return out, metadata, &SigningError{
+			Err: fmt.Errorf("failed to retrieve credentials: %w", err),
+		}
+	}
+
+	u, h, err := s.presigner.PresignHTTP(ctx, credentials,
+		httpReq, payloadHash, signingName, signingRegion, sdk.NowTime(),
+		func(o *SignerOptions) {
+			o.Logger = middleware.GetLogger(ctx)
+			o.LogSigning = s.logSigning
+		})
+	if err != nil {
+		return out, metadata, &SigningError{
+			Err: fmt.Errorf("failed to sign http request, %w", err),
+		}
+	}
+
+	out.Result = &PresignedHTTPRequest{
+		URL:          u,
+		Method:       httpReq.Method,
+		SignedHeader: h,
+	}
+
+	return out, metadata, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/v4/stream.go 🔗

@@ -0,0 +1,86 @@
+package v4
+
+import (
+	"context"
+	"crypto/sha256"
+	"encoding/hex"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	v4Internal "github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4"
+	"strings"
+	"time"
+)
+
+// EventStreamSigner is an AWS EventStream protocol signer.
+type EventStreamSigner interface {
+	GetSignature(ctx context.Context, headers, payload []byte, signingTime time.Time, optFns ...func(*StreamSignerOptions)) ([]byte, error)
+}
+
+// StreamSignerOptions is the configuration options for StreamSigner.
+type StreamSignerOptions struct{}
+
+// StreamSigner implements Signature Version 4 (SigV4) signing of event stream encoded payloads.
+type StreamSigner struct {
+	options StreamSignerOptions
+
+	credentials aws.Credentials
+	service     string
+	region      string
+
+	prevSignature []byte
+
+	signingKeyDeriver *v4Internal.SigningKeyDeriver
+}
+
+// NewStreamSigner returns a new AWS EventStream protocol signer.
+func NewStreamSigner(credentials aws.Credentials, service, region string, seedSignature []byte, optFns ...func(*StreamSignerOptions)) *StreamSigner {
+	o := StreamSignerOptions{}
+
+	for _, fn := range optFns {
+		fn(&o)
+	}
+
+	return &StreamSigner{
+		options:           o,
+		credentials:       credentials,
+		service:           service,
+		region:            region,
+		signingKeyDeriver: v4Internal.NewSigningKeyDeriver(),
+		prevSignature:     seedSignature,
+	}
+}
+
+// GetSignature signs the provided header and payload bytes.
+func (s *StreamSigner) GetSignature(ctx context.Context, headers, payload []byte, signingTime time.Time, optFns ...func(*StreamSignerOptions)) ([]byte, error) {
+	options := s.options
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	prevSignature := s.prevSignature
+
+	st := v4Internal.NewSigningTime(signingTime)
+
+	sigKey := s.signingKeyDeriver.DeriveKey(s.credentials, s.service, s.region, st)
+
+	scope := v4Internal.BuildCredentialScope(st, s.region, s.service)
+
+	stringToSign := s.buildEventStreamStringToSign(headers, payload, prevSignature, scope, &st)
+
+	signature := v4Internal.HMACSHA256(sigKey, []byte(stringToSign))
+	s.prevSignature = signature
+
+	return signature, nil
+}
+
+func (s *StreamSigner) buildEventStreamStringToSign(headers, payload, previousSignature []byte, credentialScope string, signingTime *v4Internal.SigningTime) string {
+	hash := sha256.New()
+	return strings.Join([]string{
+		"AWS4-HMAC-SHA256-PAYLOAD",
+		signingTime.TimeFormat(),
+		credentialScope,
+		hex.EncodeToString(previousSignature),
+		hex.EncodeToString(makeHash(hash, headers)),
+		hex.EncodeToString(makeHash(hash, payload)),
+	}, "\n")
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/signer/v4/v4.go 🔗

@@ -0,0 +1,559 @@
+// Package v4 implements the AWS signature version 4 algorithm (commonly known
+// as SigV4).
+//
+// For more information about SigV4, see [Signing AWS API requests] in the IAM
+// user guide.
+//
+// While this implementation CAN work in an external context, it is developed
+// primarily for SDK use and you may encounter fringe behaviors around header
+// canonicalization.
+//
+// # Pre-escaping a request URI
+//
+// AWS v4 signature validation requires that the canonical string's URI path
+// component must be the escaped form of the HTTP request's path.
+//
+// The Go HTTP client will perform escaping automatically on the HTTP request.
+// This may cause signature validation errors because the request differs from
+// the URI path or query from which the signature was generated.
+//
+// Because of this, we recommend that you explicitly escape the request when
+// using this signer outside of the SDK to prevent possible signature mismatch.
+// This can be done by setting URL.Opaque on the request. The signer will
+// prefer that value, falling back to the return of URL.EscapedPath if unset.
+//
+// When setting URL.Opaque you must do so in the form of:
+//
+//	"//<hostname>/<path>"
+//
+//	// e.g.
+//	"//example.com/some/path"
+//
+// The leading "//" and hostname are required or the escaping will not work
+// correctly.
+//
+// The TestStandaloneSign unit test provides a complete example of using the
+// signer outside of the SDK and pre-escaping the URI path.
+//
+// [Signing AWS API requests]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-signing.html
+package v4
+
+import (
+	"context"
+	"crypto/sha256"
+	"encoding/hex"
+	"fmt"
+	"hash"
+	"net/http"
+	"net/textproto"
+	"net/url"
+	"sort"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	v4Internal "github.com/aws/aws-sdk-go-v2/aws/signer/internal/v4"
+	"github.com/aws/smithy-go/encoding/httpbinding"
+	"github.com/aws/smithy-go/logging"
+)
+
+const (
+	signingAlgorithm    = "AWS4-HMAC-SHA256"
+	authorizationHeader = "Authorization"
+
+	// Version of signing v4
+	Version = "SigV4"
+)
+
+// HTTPSigner is an interface to a SigV4 signer that can sign HTTP requests
+type HTTPSigner interface {
+	SignHTTP(ctx context.Context, credentials aws.Credentials, r *http.Request, payloadHash string, service string, region string, signingTime time.Time, optFns ...func(*SignerOptions)) error
+}
+
+type keyDerivator interface {
+	DeriveKey(credential aws.Credentials, service, region string, signingTime v4Internal.SigningTime) []byte
+}
+
+// SignerOptions is the SigV4 Signer options.
+type SignerOptions struct {
+	// Disables the Signer's moving HTTP header key/value pairs from the HTTP
+	// request header to the request's query string. This is most commonly used
+	// with pre-signed requests preventing headers from being added to the
+	// request's query string.
+	DisableHeaderHoisting bool
+
+	// Disables the automatic escaping of the URI path of the request for the
+	// signature's canonical string's path. For services that do not need
+	// additional escaping, use this to disable the signer escaping the path.
+	//
+	// S3 is an example of a service that does not need additional escaping.
+	//
+	// http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html
+	DisableURIPathEscaping bool
+
+	// The logger to send log messages to.
+	Logger logging.Logger
+
+	// Enable logging of signed requests.
+	// This will enable logging of the canonical request, the string to sign, and for presigning the subsequent
+	// presigned URL.
+	LogSigning bool
+
+	// Disables setting the session token on the request as part of signing
+	// through X-Amz-Security-Token. This is needed for variations of v4 that
+	// present the token elsewhere.
+	DisableSessionToken bool
+}
+
+// Signer applies AWS v4 signing to a given request. Use this to sign requests
+// that need to be signed with AWS V4 Signatures.
+type Signer struct {
+	options      SignerOptions
+	keyDerivator keyDerivator
+}
+
+// NewSigner returns a new SigV4 Signer
+func NewSigner(optFns ...func(signer *SignerOptions)) *Signer {
+	options := SignerOptions{}
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	return &Signer{options: options, keyDerivator: v4Internal.NewSigningKeyDeriver()}
+}
+
+type httpSigner struct {
+	Request      *http.Request
+	ServiceName  string
+	Region       string
+	Time         v4Internal.SigningTime
+	Credentials  aws.Credentials
+	KeyDerivator keyDerivator
+	IsPreSign    bool
+
+	PayloadHash string
+
+	DisableHeaderHoisting  bool
+	DisableURIPathEscaping bool
+	DisableSessionToken    bool
+}
+
+func (s *httpSigner) Build() (signedRequest, error) {
+	req := s.Request
+
+	query := req.URL.Query()
+	headers := req.Header
+
+	s.setRequiredSigningFields(headers, query)
+
+	// Sort Each Query Key's Values
+	for key := range query {
+		sort.Strings(query[key])
+	}
+
+	v4Internal.SanitizeHostForHeader(req)
+
+	credentialScope := s.buildCredentialScope()
+	credentialStr := s.Credentials.AccessKeyID + "/" + credentialScope
+	if s.IsPreSign {
+		query.Set(v4Internal.AmzCredentialKey, credentialStr)
+	}
+
+	unsignedHeaders := headers
+	if s.IsPreSign && !s.DisableHeaderHoisting {
+		var urlValues url.Values
+		urlValues, unsignedHeaders = buildQuery(v4Internal.AllowedQueryHoisting, headers)
+		for k := range urlValues {
+			query[k] = urlValues[k]
+		}
+	}
+
+	host := req.URL.Host
+	if len(req.Host) > 0 {
+		host = req.Host
+	}
+
+	signedHeaders, signedHeadersStr, canonicalHeaderStr := s.buildCanonicalHeaders(host, v4Internal.IgnoredHeaders, unsignedHeaders, s.Request.ContentLength)
+
+	if s.IsPreSign {
+		query.Set(v4Internal.AmzSignedHeadersKey, signedHeadersStr)
+	}
+
+	var rawQuery strings.Builder
+	rawQuery.WriteString(strings.Replace(query.Encode(), "+", "%20", -1))
+
+	canonicalURI := v4Internal.GetURIPath(req.URL)
+	if !s.DisableURIPathEscaping {
+		canonicalURI = httpbinding.EscapePath(canonicalURI, false)
+	}
+
+	canonicalString := s.buildCanonicalString(
+		req.Method,
+		canonicalURI,
+		rawQuery.String(),
+		signedHeadersStr,
+		canonicalHeaderStr,
+	)
+
+	strToSign := s.buildStringToSign(credentialScope, canonicalString)
+	signingSignature, err := s.buildSignature(strToSign)
+	if err != nil {
+		return signedRequest{}, err
+	}
+
+	if s.IsPreSign {
+		rawQuery.WriteString("&X-Amz-Signature=")
+		rawQuery.WriteString(signingSignature)
+	} else {
+		headers[authorizationHeader] = append(headers[authorizationHeader][:0], buildAuthorizationHeader(credentialStr, signedHeadersStr, signingSignature))
+	}
+
+	req.URL.RawQuery = rawQuery.String()
+
+	return signedRequest{
+		Request:         req,
+		SignedHeaders:   signedHeaders,
+		CanonicalString: canonicalString,
+		StringToSign:    strToSign,
+		PreSigned:       s.IsPreSign,
+	}, nil
+}
+
+func buildAuthorizationHeader(credentialStr, signedHeadersStr, signingSignature string) string {
+	const credential = "Credential="
+	const signedHeaders = "SignedHeaders="
+	const signature = "Signature="
+	const commaSpace = ", "
+
+	var parts strings.Builder
+	parts.Grow(len(signingAlgorithm) + 1 +
+		len(credential) + len(credentialStr) + 2 +
+		len(signedHeaders) + len(signedHeadersStr) + 2 +
+		len(signature) + len(signingSignature),
+	)
+	parts.WriteString(signingAlgorithm)
+	parts.WriteRune(' ')
+	parts.WriteString(credential)
+	parts.WriteString(credentialStr)
+	parts.WriteString(commaSpace)
+	parts.WriteString(signedHeaders)
+	parts.WriteString(signedHeadersStr)
+	parts.WriteString(commaSpace)
+	parts.WriteString(signature)
+	parts.WriteString(signingSignature)
+	return parts.String()
+}
+
+// SignHTTP signs AWS v4 requests with the provided payload hash, service
+// name, region the request is made to, and time the request is signed at.
+// The signingTime allows you to specify that a request is signed for the
+// future, and cannot be used until then.
+//
+// The payloadHash is the hex encoded SHA-256 hash of the request payload, and
+// must be provided, even if the request has no payload (aka body). If the
+// request has no payload, use the hex encoded SHA-256 of an empty string as
+// the payloadHash value.
+//
+//	"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
+//
+// Some services such as Amazon S3 accept alternative values for the payload
+// hash, such as "UNSIGNED-PAYLOAD" for requests where the body will not be
+// included in the request signature.
+//
+// https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html
+//
+// SignHTTP differs from PresignHTTP in that it will sign the request using
+// HTTP header values. This type of signing is intended for http.Request
+// values that will not be shared, or are shared in a way that the header
+// values on the request will not be lost.
+//
+// The passed in request will be modified in place.
+func (s Signer) SignHTTP(ctx context.Context, credentials aws.Credentials, r *http.Request, payloadHash string, service string, region string, signingTime time.Time, optFns ...func(options *SignerOptions)) error {
+	options := s.options
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	signer := &httpSigner{
+		Request:                r,
+		PayloadHash:            payloadHash,
+		ServiceName:            service,
+		Region:                 region,
+		Credentials:            credentials,
+		Time:                   v4Internal.NewSigningTime(signingTime.UTC()),
+		DisableHeaderHoisting:  options.DisableHeaderHoisting,
+		DisableURIPathEscaping: options.DisableURIPathEscaping,
+		DisableSessionToken:    options.DisableSessionToken,
+		KeyDerivator:           s.keyDerivator,
+	}
+
+	signedRequest, err := signer.Build()
+	if err != nil {
+		return err
+	}
+
+	logSigningInfo(ctx, options, &signedRequest, false)
+
+	return nil
+}
+
+// PresignHTTP signs AWS v4 requests with the payload hash, service name, region
+// the request is made to, and time the request is signed at. The signingTime
+// allows you to specify that a request is signed for the future, and cannot
+// be used until then.
+//
+// Returns the signed URL and the map of HTTP headers that were included in the
+// signature or an error if signing the request failed. For presigned requests
+// these headers and their values must be included on the HTTP request when it
+// is made. This is helpful to know what header values need to be shared with
+// the party the presigned request will be distributed to.
+//
+// The payloadHash is the hex encoded SHA-256 hash of the request payload, and
+// must be provided, even if the request has no payload (aka body). If the
+// request has no payload, use the hex encoded SHA-256 of an empty string as
+// the payloadHash value.
+//
+//	"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
+//
+// Some services such as Amazon S3 accept alternative values for the payload
+// hash, such as "UNSIGNED-PAYLOAD" for requests where the body will not be
+// included in the request signature.
+//
+// https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html
+//
+// PresignHTTP differs from SignHTTP in that it will sign the request using
+// query string instead of header values. This allows you to share the
+// Presigned Request's URL with third parties, or distribute it throughout your
+// system with minimal dependencies.
+//
+// PresignHTTP will not set the expires time of the presigned request
+// automatically. To specify the expire duration for a request add the
+// "X-Amz-Expires" query parameter on the request with the value as the
+// duration in seconds the presigned URL should be considered valid for. This
+// parameter is not used by all AWS services, and is most notably used by
+// Amazon S3 APIs.
+//
+//	expires := 20 * time.Minute
+//	query := req.URL.Query()
+//	query.Set("X-Amz-Expires", strconv.FormatInt(int64(expires/time.Second), 10))
+//	req.URL.RawQuery = query.Encode()
+//
+// This method does not modify the provided request.
+func (s *Signer) PresignHTTP(
+	ctx context.Context, credentials aws.Credentials, r *http.Request,
+	payloadHash string, service string, region string, signingTime time.Time,
+	optFns ...func(*SignerOptions),
+) (signedURI string, signedHeaders http.Header, err error) {
+	options := s.options
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	signer := &httpSigner{
+		Request:                r.Clone(r.Context()),
+		PayloadHash:            payloadHash,
+		ServiceName:            service,
+		Region:                 region,
+		Credentials:            credentials,
+		Time:                   v4Internal.NewSigningTime(signingTime.UTC()),
+		IsPreSign:              true,
+		DisableHeaderHoisting:  options.DisableHeaderHoisting,
+		DisableURIPathEscaping: options.DisableURIPathEscaping,
+		DisableSessionToken:    options.DisableSessionToken,
+		KeyDerivator:           s.keyDerivator,
+	}
+
+	signedRequest, err := signer.Build()
+	if err != nil {
+		return "", nil, err
+	}
+
+	logSigningInfo(ctx, options, &signedRequest, true)
+
+	signedHeaders = make(http.Header)
+
+	// For the signed headers we canonicalize the header keys in the returned
+	// map. This avoids situations where the standard library can double
+	// headers like the Host header; for example, the standard library will
+	// set the Host header even if it is already present in lower-case form.
+	for k, v := range signedRequest.SignedHeaders {
+		key := textproto.CanonicalMIMEHeaderKey(k)
+		signedHeaders[key] = append(signedHeaders[key], v...)
+	}
+
+	return signedRequest.Request.URL.String(), signedHeaders, nil
+}
+
+func (s *httpSigner) buildCredentialScope() string {
+	return v4Internal.BuildCredentialScope(s.Time, s.Region, s.ServiceName)
+}
+
+func buildQuery(r v4Internal.Rule, header http.Header) (url.Values, http.Header) {
+	query := url.Values{}
+	unsignedHeaders := http.Header{}
+	for k, h := range header {
+		// This is the only header with this lower-casing constraint when
+		// hoisted to the query string; see #2508.
+		if k == "X-Amz-Expected-Bucket-Owner" {
+			k = "x-amz-expected-bucket-owner"
+		}
+
+		if r.IsValid(k) {
+			query[k] = h
+		} else {
+			unsignedHeaders[k] = h
+		}
+	}
+
+	return query, unsignedHeaders
+}
+
+func (s *httpSigner) buildCanonicalHeaders(host string, rule v4Internal.Rule, header http.Header, length int64) (signed http.Header, signedHeaders, canonicalHeadersStr string) {
+	signed = make(http.Header)
+
+	var headers []string
+	const hostHeader = "host"
+	headers = append(headers, hostHeader)
+	signed[hostHeader] = append(signed[hostHeader], host)
+
+	const contentLengthHeader = "content-length"
+	if length > 0 {
+		headers = append(headers, contentLengthHeader)
+		signed[contentLengthHeader] = append(signed[contentLengthHeader], strconv.FormatInt(length, 10))
+	}
+
+	for k, v := range header {
+		if !rule.IsValid(k) {
+			continue // ignored header
+		}
+		if strings.EqualFold(k, contentLengthHeader) {
+			// prevent signing already handled content-length header.
+			continue
+		}
+
+		lowerCaseKey := strings.ToLower(k)
+		if _, ok := signed[lowerCaseKey]; ok {
+			// include additional values
+			signed[lowerCaseKey] = append(signed[lowerCaseKey], v...)
+			continue
+		}
+
+		headers = append(headers, lowerCaseKey)
+		signed[lowerCaseKey] = v
+	}
+	sort.Strings(headers)
+
+	signedHeaders = strings.Join(headers, ";")
+
+	var canonicalHeaders strings.Builder
+	n := len(headers)
+	const colon = ':'
+	for i := 0; i < n; i++ {
+		if headers[i] == hostHeader {
+			canonicalHeaders.WriteString(hostHeader)
+			canonicalHeaders.WriteRune(colon)
+			canonicalHeaders.WriteString(v4Internal.StripExcessSpaces(host))
+		} else {
+			canonicalHeaders.WriteString(headers[i])
+			canonicalHeaders.WriteRune(colon)
+			// Trim out leading, trailing, and dedup inner spaces from signed header values.
+			values := signed[headers[i]]
+			for j, v := range values {
+				cleanedValue := strings.TrimSpace(v4Internal.StripExcessSpaces(v))
+				canonicalHeaders.WriteString(cleanedValue)
+				if j < len(values)-1 {
+					canonicalHeaders.WriteRune(',')
+				}
+			}
+		}
+		canonicalHeaders.WriteRune('\n')
+	}
+	canonicalHeadersStr = canonicalHeaders.String()
+
+	return signed, signedHeaders, canonicalHeadersStr
+}
+
+func (s *httpSigner) buildCanonicalString(method, uri, query, signedHeaders, canonicalHeaders string) string {
+	return strings.Join([]string{
+		method,
+		uri,
+		query,
+		canonicalHeaders,
+		signedHeaders,
+		s.PayloadHash,
+	}, "\n")
+}
+
+func (s *httpSigner) buildStringToSign(credentialScope, canonicalRequestString string) string {
+	return strings.Join([]string{
+		signingAlgorithm,
+		s.Time.TimeFormat(),
+		credentialScope,
+		hex.EncodeToString(makeHash(sha256.New(), []byte(canonicalRequestString))),
+	}, "\n")
+}
+
+func makeHash(hash hash.Hash, b []byte) []byte {
+	hash.Reset()
+	hash.Write(b)
+	return hash.Sum(nil)
+}
+
+func (s *httpSigner) buildSignature(strToSign string) (string, error) {
+	key := s.KeyDerivator.DeriveKey(s.Credentials, s.ServiceName, s.Region, s.Time)
+	return hex.EncodeToString(v4Internal.HMACSHA256(key, []byte(strToSign))), nil
+}
+
+func (s *httpSigner) setRequiredSigningFields(headers http.Header, query url.Values) {
+	amzDate := s.Time.TimeFormat()
+
+	if s.IsPreSign {
+		query.Set(v4Internal.AmzAlgorithmKey, signingAlgorithm)
+		sessionToken := s.Credentials.SessionToken
+		if !s.DisableSessionToken && len(sessionToken) > 0 {
+			query.Set("X-Amz-Security-Token", sessionToken)
+		}
+
+		query.Set(v4Internal.AmzDateKey, amzDate)
+		return
+	}
+
+	headers[v4Internal.AmzDateKey] = append(headers[v4Internal.AmzDateKey][:0], amzDate)
+
+	if !s.DisableSessionToken && len(s.Credentials.SessionToken) > 0 {
+		headers[v4Internal.AmzSecurityTokenKey] = append(headers[v4Internal.AmzSecurityTokenKey][:0], s.Credentials.SessionToken)
+	}
+}
+
+func logSigningInfo(ctx context.Context, options SignerOptions, request *signedRequest, isPresign bool) {
+	if !options.LogSigning {
+		return
+	}
+	signedURLMsg := ""
+	if isPresign {
+		signedURLMsg = fmt.Sprintf(logSignedURLMsg, request.Request.URL.String())
+	}
+	logger := logging.WithContext(ctx, options.Logger)
+	logger.Logf(logging.Debug, logSignInfoMsg, request.CanonicalString, request.StringToSign, signedURLMsg)
+}
+
+type signedRequest struct {
+	Request         *http.Request
+	SignedHeaders   http.Header
+	CanonicalString string
+	StringToSign    string
+	PreSigned       bool
+}
+
+const logSignInfoMsg = `Request Signature:
+---[ CANONICAL STRING  ]-----------------------------
+%s
+---[ STRING TO SIGN ]--------------------------------
+%s%s
+-----------------------------------------------------`
+const logSignedURLMsg = `
+---[ SIGNED URL ]------------------------------------
+%s`
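The SignHTTP and PresignHTTP entry points above both require the hex-encoded SHA-256 payload hash described in their doc comments. A minimal, stdlib-only sketch (outside the vendored SDK, with a hypothetical helper name) of computing that hash for an empty body — the constant the comments quote:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// emptyPayloadHash computes the hex-encoded SHA-256 of an empty body,
// the payloadHash value the SignHTTP/PresignHTTP doc comments call for
// when a request has no payload.
func emptyPayloadHash() string {
	sum := sha256.Sum256(nil)
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(emptyPayloadHash())
	// e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
}
```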

vendor/github.com/aws/aws-sdk-go-v2/aws/to_ptr.go 🔗

@@ -0,0 +1,297 @@
+// Code generated by aws/generate.go DO NOT EDIT.
+
+package aws
+
+import (
+	"github.com/aws/smithy-go/ptr"
+	"time"
+)
+
+// Bool returns a pointer value for the bool value passed in.
+func Bool(v bool) *bool {
+	return ptr.Bool(v)
+}
+
+// BoolSlice returns a slice of bool pointers from the values
+// passed in.
+func BoolSlice(vs []bool) []*bool {
+	return ptr.BoolSlice(vs)
+}
+
+// BoolMap returns a map of bool pointers from the values
+// passed in.
+func BoolMap(vs map[string]bool) map[string]*bool {
+	return ptr.BoolMap(vs)
+}
+
+// Byte returns a pointer value for the byte value passed in.
+func Byte(v byte) *byte {
+	return ptr.Byte(v)
+}
+
+// ByteSlice returns a slice of byte pointers from the values
+// passed in.
+func ByteSlice(vs []byte) []*byte {
+	return ptr.ByteSlice(vs)
+}
+
+// ByteMap returns a map of byte pointers from the values
+// passed in.
+func ByteMap(vs map[string]byte) map[string]*byte {
+	return ptr.ByteMap(vs)
+}
+
+// String returns a pointer value for the string value passed in.
+func String(v string) *string {
+	return ptr.String(v)
+}
+
+// StringSlice returns a slice of string pointers from the values
+// passed in.
+func StringSlice(vs []string) []*string {
+	return ptr.StringSlice(vs)
+}
+
+// StringMap returns a map of string pointers from the values
+// passed in.
+func StringMap(vs map[string]string) map[string]*string {
+	return ptr.StringMap(vs)
+}
+
+// Int returns a pointer value for the int value passed in.
+func Int(v int) *int {
+	return ptr.Int(v)
+}
+
+// IntSlice returns a slice of int pointers from the values
+// passed in.
+func IntSlice(vs []int) []*int {
+	return ptr.IntSlice(vs)
+}
+
+// IntMap returns a map of int pointers from the values
+// passed in.
+func IntMap(vs map[string]int) map[string]*int {
+	return ptr.IntMap(vs)
+}
+
+// Int8 returns a pointer value for the int8 value passed in.
+func Int8(v int8) *int8 {
+	return ptr.Int8(v)
+}
+
+// Int8Slice returns a slice of int8 pointers from the values
+// passed in.
+func Int8Slice(vs []int8) []*int8 {
+	return ptr.Int8Slice(vs)
+}
+
+// Int8Map returns a map of int8 pointers from the values
+// passed in.
+func Int8Map(vs map[string]int8) map[string]*int8 {
+	return ptr.Int8Map(vs)
+}
+
+// Int16 returns a pointer value for the int16 value passed in.
+func Int16(v int16) *int16 {
+	return ptr.Int16(v)
+}
+
+// Int16Slice returns a slice of int16 pointers from the values
+// passed in.
+func Int16Slice(vs []int16) []*int16 {
+	return ptr.Int16Slice(vs)
+}
+
+// Int16Map returns a map of int16 pointers from the values
+// passed in.
+func Int16Map(vs map[string]int16) map[string]*int16 {
+	return ptr.Int16Map(vs)
+}
+
+// Int32 returns a pointer value for the int32 value passed in.
+func Int32(v int32) *int32 {
+	return ptr.Int32(v)
+}
+
+// Int32Slice returns a slice of int32 pointers from the values
+// passed in.
+func Int32Slice(vs []int32) []*int32 {
+	return ptr.Int32Slice(vs)
+}
+
+// Int32Map returns a map of int32 pointers from the values
+// passed in.
+func Int32Map(vs map[string]int32) map[string]*int32 {
+	return ptr.Int32Map(vs)
+}
+
+// Int64 returns a pointer value for the int64 value passed in.
+func Int64(v int64) *int64 {
+	return ptr.Int64(v)
+}
+
+// Int64Slice returns a slice of int64 pointers from the values
+// passed in.
+func Int64Slice(vs []int64) []*int64 {
+	return ptr.Int64Slice(vs)
+}
+
+// Int64Map returns a map of int64 pointers from the values
+// passed in.
+func Int64Map(vs map[string]int64) map[string]*int64 {
+	return ptr.Int64Map(vs)
+}
+
+// Uint returns a pointer value for the uint value passed in.
+func Uint(v uint) *uint {
+	return ptr.Uint(v)
+}
+
+// UintSlice returns a slice of uint pointers from the values
+// passed in.
+func UintSlice(vs []uint) []*uint {
+	return ptr.UintSlice(vs)
+}
+
+// UintMap returns a map of uint pointers from the values
+// passed in.
+func UintMap(vs map[string]uint) map[string]*uint {
+	return ptr.UintMap(vs)
+}
+
+// Uint8 returns a pointer value for the uint8 value passed in.
+func Uint8(v uint8) *uint8 {
+	return ptr.Uint8(v)
+}
+
+// Uint8Slice returns a slice of uint8 pointers from the values
+// passed in.
+func Uint8Slice(vs []uint8) []*uint8 {
+	return ptr.Uint8Slice(vs)
+}
+
+// Uint8Map returns a map of uint8 pointers from the values
+// passed in.
+func Uint8Map(vs map[string]uint8) map[string]*uint8 {
+	return ptr.Uint8Map(vs)
+}
+
+// Uint16 returns a pointer value for the uint16 value passed in.
+func Uint16(v uint16) *uint16 {
+	return ptr.Uint16(v)
+}
+
+// Uint16Slice returns a slice of uint16 pointers from the values
+// passed in.
+func Uint16Slice(vs []uint16) []*uint16 {
+	return ptr.Uint16Slice(vs)
+}
+
+// Uint16Map returns a map of uint16 pointers from the values
+// passed in.
+func Uint16Map(vs map[string]uint16) map[string]*uint16 {
+	return ptr.Uint16Map(vs)
+}
+
+// Uint32 returns a pointer value for the uint32 value passed in.
+func Uint32(v uint32) *uint32 {
+	return ptr.Uint32(v)
+}
+
+// Uint32Slice returns a slice of uint32 pointers from the values
+// passed in.
+func Uint32Slice(vs []uint32) []*uint32 {
+	return ptr.Uint32Slice(vs)
+}
+
+// Uint32Map returns a map of uint32 pointers from the values
+// passed in.
+func Uint32Map(vs map[string]uint32) map[string]*uint32 {
+	return ptr.Uint32Map(vs)
+}
+
+// Uint64 returns a pointer value for the uint64 value passed in.
+func Uint64(v uint64) *uint64 {
+	return ptr.Uint64(v)
+}
+
+// Uint64Slice returns a slice of uint64 pointers from the values
+// passed in.
+func Uint64Slice(vs []uint64) []*uint64 {
+	return ptr.Uint64Slice(vs)
+}
+
+// Uint64Map returns a map of uint64 pointers from the values
+// passed in.
+func Uint64Map(vs map[string]uint64) map[string]*uint64 {
+	return ptr.Uint64Map(vs)
+}
+
+// Float32 returns a pointer value for the float32 value passed in.
+func Float32(v float32) *float32 {
+	return ptr.Float32(v)
+}
+
+// Float32Slice returns a slice of float32 pointers from the values
+// passed in.
+func Float32Slice(vs []float32) []*float32 {
+	return ptr.Float32Slice(vs)
+}
+
+// Float32Map returns a map of float32 pointers from the values
+// passed in.
+func Float32Map(vs map[string]float32) map[string]*float32 {
+	return ptr.Float32Map(vs)
+}
+
+// Float64 returns a pointer value for the float64 value passed in.
+func Float64(v float64) *float64 {
+	return ptr.Float64(v)
+}
+
+// Float64Slice returns a slice of float64 pointers from the values
+// passed in.
+func Float64Slice(vs []float64) []*float64 {
+	return ptr.Float64Slice(vs)
+}
+
+// Float64Map returns a map of float64 pointers from the values
+// passed in.
+func Float64Map(vs map[string]float64) map[string]*float64 {
+	return ptr.Float64Map(vs)
+}
+
+// Time returns a pointer value for the time.Time value passed in.
+func Time(v time.Time) *time.Time {
+	return ptr.Time(v)
+}
+
+// TimeSlice returns a slice of time.Time pointers from the values
+// passed in.
+func TimeSlice(vs []time.Time) []*time.Time {
+	return ptr.TimeSlice(vs)
+}
+
+// TimeMap returns a map of time.Time pointers from the values
+// passed in.
+func TimeMap(vs map[string]time.Time) map[string]*time.Time {
+	return ptr.TimeMap(vs)
+}
+
+// Duration returns a pointer value for the time.Duration value passed in.
+func Duration(v time.Duration) *time.Duration {
+	return ptr.Duration(v)
+}
+
+// DurationSlice returns a slice of time.Duration pointers from the values
+// passed in.
+func DurationSlice(vs []time.Duration) []*time.Duration {
+	return ptr.DurationSlice(vs)
+}
+
+// DurationMap returns a map of time.Duration pointers from the values
+// passed in.
+func DurationMap(vs map[string]time.Duration) map[string]*time.Duration {
+	return ptr.DurationMap(vs)
+}
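These generated helpers all delegate to smithy-go's ptr package; the underlying pattern is simply taking the address of a by-value parameter. A stand-in sketch of the same pattern, using hypothetical local definitions rather than the SDK's:

```go
package main

import "fmt"

// String mirrors aws.String: because the parameter v is a copy, &v is a
// pointer to a fresh value that is safe to retain after the call returns.
func String(v string) *string { return &v }

// Int mirrors aws.Int in the same way.
func Int(v int) *int { return &v }

func main() {
	name, count := String("example-bucket"), Int(3)
	fmt.Println(*name, *count) // example-bucket 3
}
```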

vendor/github.com/aws/aws-sdk-go-v2/aws/transport/http/client.go 🔗

@@ -0,0 +1,310 @@
+package http
+
+import (
+	"crypto/tls"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"net"
+	"net/http"
+	"reflect"
+	"sync"
+	"time"
+)
+
+// Defaults for the HTTPTransportBuilder.
+var (
+	// Default connection pool options
+	DefaultHTTPTransportMaxIdleConns        = 100
+	DefaultHTTPTransportMaxIdleConnsPerHost = 10
+
+	// Default connection timeouts
+	DefaultHTTPTransportIdleConnTimeout       = 90 * time.Second
+	DefaultHTTPTransportTLSHandleshakeTimeout = 10 * time.Second
+	DefaultHTTPTransportExpectContinueTimeout = 1 * time.Second
+
+	// Default to TLS 1.2 for all HTTPS requests.
+	DefaultHTTPTransportTLSMinVersion uint16 = tls.VersionTLS12
+)
+
+// Timeouts for net.Dialer's network connection.
+var (
+	DefaultDialConnectTimeout   = 30 * time.Second
+	DefaultDialKeepAliveTimeout = 30 * time.Second
+)
+
+// BuildableClient provides an HTTPClient implementation with options to
+// create copies of the HTTPClient when additional configuration is provided.
+//
+// The client's methods will not share the http.Transport value between copies
+// of the BuildableClient. Only exported member values of the Transport and
+// optional Dialer will be copied between copies of BuildableClient.
+type BuildableClient struct {
+	transport *http.Transport
+	dialer    *net.Dialer
+
+	initOnce sync.Once
+
+	clientTimeout time.Duration
+	client        *http.Client
+}
+
+// NewBuildableClient returns an initialized client for invoking HTTP
+// requests.
+func NewBuildableClient() *BuildableClient {
+	return &BuildableClient{}
+}
+
+// Do implements the HTTPClient interface's Do method to invoke an HTTP request,
+// and receive the response. Uses the BuildableClient's current
+// configuration to invoke the http.Request.
+//
+// If connection pooling is enabled (aka HTTP KeepAlive) the client will only
+// share pooled connections with its own instance. Copies of the
+// BuildableClient will have their own connection pools.
+//
+// Redirect (3xx) responses will not be followed; the HTTP response received
+// will be returned instead.
+func (b *BuildableClient) Do(req *http.Request) (*http.Response, error) {
+	b.initOnce.Do(b.build)
+
+	return b.client.Do(req)
+}
+
+// Freeze returns a frozen aws.HTTPClient implementation that is no longer a BuildableClient.
+// Use this to prevent the SDK from applying DefaultMode configuration values to a buildable client.
+func (b *BuildableClient) Freeze() aws.HTTPClient {
+	cpy := b.clone()
+	cpy.build()
+	return cpy.client
+}
+
+func (b *BuildableClient) build() {
+	b.client = wrapWithLimitedRedirect(&http.Client{
+		Timeout:   b.clientTimeout,
+		Transport: b.GetTransport(),
+	})
+}
+
+func (b *BuildableClient) clone() *BuildableClient {
+	cpy := NewBuildableClient()
+	cpy.transport = b.GetTransport()
+	cpy.dialer = b.GetDialer()
+	cpy.clientTimeout = b.clientTimeout
+
+	return cpy
+}
+
+// WithTransportOptions copies the BuildableClient and returns it with the
+// http.Transport options applied.
+//
+// If a non (*http.Transport) was set as the round tripper, the round tripper
+// will be replaced with a default Transport value before invoking the option
+// functions.
+func (b *BuildableClient) WithTransportOptions(opts ...func(*http.Transport)) *BuildableClient {
+	cpy := b.clone()
+
+	tr := cpy.GetTransport()
+	for _, opt := range opts {
+		opt(tr)
+	}
+	cpy.transport = tr
+
+	return cpy
+}
+
+// WithDialerOptions copies the BuildableClient and returns it with the
+// net.Dialer options applied. Will set the client's http.Transport DialContext
+// member.
+func (b *BuildableClient) WithDialerOptions(opts ...func(*net.Dialer)) *BuildableClient {
+	cpy := b.clone()
+
+	dialer := cpy.GetDialer()
+	for _, opt := range opts {
+		opt(dialer)
+	}
+	cpy.dialer = dialer
+
+	tr := cpy.GetTransport()
+	tr.DialContext = cpy.dialer.DialContext
+	cpy.transport = tr
+
+	return cpy
+}
+
+// WithTimeout Sets the timeout used by the client for all requests.
+func (b *BuildableClient) WithTimeout(timeout time.Duration) *BuildableClient {
+	cpy := b.clone()
+	cpy.clientTimeout = timeout
+	return cpy
+}
+
+// GetTransport returns a copy of the client's HTTP Transport.
+func (b *BuildableClient) GetTransport() *http.Transport {
+	var tr *http.Transport
+	if b.transport != nil {
+		tr = b.transport.Clone()
+	} else {
+		tr = defaultHTTPTransport()
+	}
+
+	return tr
+}
+
+// GetDialer returns a copy of the client's network dialer.
+func (b *BuildableClient) GetDialer() *net.Dialer {
+	var dialer *net.Dialer
+	if b.dialer != nil {
+		dialer = shallowCopyStruct(b.dialer).(*net.Dialer)
+	} else {
+		dialer = defaultDialer()
+	}
+
+	return dialer
+}
+
+// GetTimeout returns a copy of the client's timeout to cancel requests with.
+func (b *BuildableClient) GetTimeout() time.Duration {
+	return b.clientTimeout
+}
+
+func defaultDialer() *net.Dialer {
+	return &net.Dialer{
+		Timeout:   DefaultDialConnectTimeout,
+		KeepAlive: DefaultDialKeepAliveTimeout,
+		DualStack: true,
+	}
+}
+
+func defaultHTTPTransport() *http.Transport {
+	dialer := defaultDialer()
+
+	tr := &http.Transport{
+		Proxy:                 http.ProxyFromEnvironment,
+		DialContext:           dialer.DialContext,
+		TLSHandshakeTimeout:   DefaultHTTPTransportTLSHandleshakeTimeout,
+		MaxIdleConns:          DefaultHTTPTransportMaxIdleConns,
+		MaxIdleConnsPerHost:   DefaultHTTPTransportMaxIdleConnsPerHost,
+		IdleConnTimeout:       DefaultHTTPTransportIdleConnTimeout,
+		ExpectContinueTimeout: DefaultHTTPTransportExpectContinueTimeout,
+		ForceAttemptHTTP2:     true,
+		TLSClientConfig: &tls.Config{
+			MinVersion: DefaultHTTPTransportTLSMinVersion,
+		},
+	}
+
+	return tr
+}
+
+// shallowCopyStruct creates a shallow copy of the passed in source struct, and
+// returns that copy of the same struct type.
+func shallowCopyStruct(src interface{}) interface{} {
+	srcVal := reflect.ValueOf(src)
+	srcValType := srcVal.Type()
+
+	var returnAsPtr bool
+	if srcValType.Kind() == reflect.Ptr {
+		srcVal = srcVal.Elem()
+		srcValType = srcValType.Elem()
+		returnAsPtr = true
+	}
+	dstVal := reflect.New(srcValType).Elem()
+
+	for i := 0; i < srcValType.NumField(); i++ {
+		ft := srcValType.Field(i)
+		if len(ft.PkgPath) != 0 {
+			// unexported fields have a PkgPath
+			continue
+		}
+
+		dstVal.Field(i).Set(srcVal.Field(i))
+	}
+
+	if returnAsPtr {
+		dstVal = dstVal.Addr()
+	}
+
+	return dstVal.Interface()
+}
+
+// wrapWithLimitedRedirect updates the Client's Transport and CheckRedirect to
+// not follow any redirect other than 307 and 308. No other redirect will be
+// followed.
+//
+// If the client does not have a Transport defined, a new SDK default
+// http.Transport configuration will be used.
+func wrapWithLimitedRedirect(c *http.Client) *http.Client {
+	tr := c.Transport
+	if tr == nil {
+		tr = defaultHTTPTransport()
+	}
+
+	cc := *c
+	cc.CheckRedirect = limitedRedirect
+	cc.Transport = suppressBadHTTPRedirectTransport{
+		tr: tr,
+	}
+
+	return &cc
+}
+
+// limitedRedirect is a CheckRedirect that prevents the client from following
+// any non 307/308 HTTP status code redirects.
+//
+// The 307 and 308 redirects are allowed because the client must use the
+// original HTTP method for the redirected to location. Whereas 301 and 302
+// allow the client to switch to GET for the redirect.
+//
+// Suppresses all redirect requests with a URL of badHTTPRedirectLocation.
+func limitedRedirect(r *http.Request, via []*http.Request) error {
+	// Request.Response, in CheckRedirect is the response that is triggering
+	// the redirect.
+	resp := r.Response
+	if r.URL.String() == badHTTPRedirectLocation {
+		resp.Header.Del(badHTTPRedirectLocation)
+		return http.ErrUseLastResponse
+	}
+
+	switch resp.StatusCode {
+	case 307, 308:
+		// Only allow 307 and 308 redirects as they preserve the method.
+		return nil
+	}
+
+	return http.ErrUseLastResponse
+}
+
+// suppressBadHTTPRedirectTransport provides an http.RoundTripper
+// implementation that wraps another http.RoundTripper to prevent the HTTP
+// client from failing on 301 and 302 redirect responses that lack the
+// required Location header.
+//
+// Clients using this utility must have a CheckRedirect, e.g. limitedRedirect,
+// that checks for responses with a URL of badHTTPRedirectLocation and
+// suppresses the redirect.
+type suppressBadHTTPRedirectTransport struct {
+	tr http.RoundTripper
+}
+
+const badHTTPRedirectLocation = `https://amazonaws.com/badhttpredirectlocation`
+
+// RoundTrip backfills a stub location when a 301/302 response is received
+// without a location. This stub location is used by limitedRedirect to
+// prevent the HTTP client from failing when attempting to follow a redirect
+// without a location value.
+func (t suppressBadHTTPRedirectTransport) RoundTrip(r *http.Request) (*http.Response, error) {
+	resp, err := t.tr.RoundTrip(r)
+	if err != nil {
+		return resp, err
+	}
+
+	// S3 is the only known service to return 301 without location header.
+	// The Go standard library HTTP client will return an opaque error if it
+	// tries to follow a 301/302 response missing the location header.
+	switch resp.StatusCode {
+	case 301, 302:
+		if v := resp.Header.Get("Location"); len(v) == 0 {
+			resp.Header.Set("Location", badHTTPRedirectLocation)
+		}
+	}
+
+	return resp, err
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/transport/http/content_type.go 🔗

@@ -0,0 +1,42 @@
+package http
+
+import (
+	"context"
+	"fmt"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// removeContentTypeHeader is a build middleware that removes the
+// content-type header if the content-length header is unset or set to zero.
+type removeContentTypeHeader struct {
+}
+
+// ID returns the name of the middleware.
+func (m *removeContentTypeHeader) ID() string {
+	return "RemoveContentTypeHeader"
+}
+
+// HandleBuild removes the content-type header from the request when the
+// request's content length is zero.
+func (m *removeContentTypeHeader) HandleBuild(ctx context.Context, in middleware.BuildInput, next middleware.BuildHandler) (
+	out middleware.BuildOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in)
+	}
+
+	// remove contentTypeHeader when content-length is zero
+	if req.ContentLength == 0 {
+		req.Header.Del("content-type")
+	}
+
+	return next.HandleBuild(ctx, in)
+}
+
+// RemoveContentTypeHeader removes content-type header if
+// content length is unset or equal to zero.
+func RemoveContentTypeHeader(stack *middleware.Stack) error {
+	return stack.Build.Add(&removeContentTypeHeader{}, middleware.After)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/transport/http/response_error.go 🔗

@@ -0,0 +1,33 @@
+package http
+
+import (
+	"errors"
+	"fmt"
+
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// ResponseError provides the HTTP centric error type wrapping the underlying error
+// with the HTTP response value and the deserialized RequestID.
+type ResponseError struct {
+	*smithyhttp.ResponseError
+
+	// RequestID associated with response error
+	RequestID string
+}
+
+// ServiceRequestID returns the request ID associated with the ResponseError.
+func (e *ResponseError) ServiceRequestID() string { return e.RequestID }
+
+// Error returns the formatted error
+func (e *ResponseError) Error() string {
+	return fmt.Sprintf(
+		"https response error StatusCode: %d, RequestID: %s, %v",
+		e.Response.StatusCode, e.RequestID, e.Err)
+}
+
+// As populates target and returns true if the type of target is an error
+// type that the ResponseError embeds (e.g. the smithy HTTP ResponseError).
+func (e *ResponseError) As(target interface{}) bool {
+	return errors.As(e.ResponseError, target)
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/transport/http/response_error_middleware.go 🔗

@@ -0,0 +1,56 @@
+package http
+
+import (
+	"context"
+
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// AddResponseErrorMiddleware adds response error wrapper middleware
+func AddResponseErrorMiddleware(stack *middleware.Stack) error {
+	// add error wrapper middleware before request id retriever middleware so that it can wrap the error response
+	// returned by operation deserializers
+	return stack.Deserialize.Insert(&ResponseErrorWrapper{}, "RequestIDRetriever", middleware.Before)
+}
+
+// ResponseErrorWrapper wraps operation errors with ResponseError.
+type ResponseErrorWrapper struct {
+}
+
+// ID returns the middleware identifier
+func (m *ResponseErrorWrapper) ID() string {
+	return "ResponseErrorWrapper"
+}
+
+// HandleDeserialize wraps the stack error with smithyhttp.ResponseError.
+func (m *ResponseErrorWrapper) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err == nil {
+		// Nothing to do when there is no error.
+		return out, metadata, err
+	}
+
+	resp, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		// No raw response to wrap with.
+		return out, metadata, err
+	}
+
+	// look for request id in metadata
+	reqID, _ := awsmiddleware.GetRequestIDMetadata(metadata)
+
+	// Wrap the returned smithy error with the request id retrieved from the metadata
+	err = &ResponseError{
+		ResponseError: &smithyhttp.ResponseError{
+			Response: resp,
+			Err:      err,
+		},
+		RequestID: reqID,
+	}
+
+	return out, metadata, err
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/transport/http/timeout_read_closer.go 🔗

@@ -0,0 +1,104 @@
+package http
+
+import (
+	"context"
+	"fmt"
+	"io"
+	"time"
+
+	"github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+type readResult struct {
+	n   int
+	err error
+}
+
+// ResponseTimeoutError is returned when reads from the response body are
+// delayed longer than the configured timeout.
+type ResponseTimeoutError struct {
+	TimeoutDur time.Duration
+}
+
+// Timeout reports that the error was caused by a timeout and can be
+// retried.
+func (*ResponseTimeoutError) Timeout() bool { return true }
+
+func (e *ResponseTimeoutError) Error() string {
+	return fmt.Sprintf("read on body reach timeout limit, %v", e.TimeoutDur)
+}
+
+// timeoutReadCloser handles body reads that take too long, returning a
+// ResponseTimeoutError when a read exceeds the configured duration.
+type timeoutReadCloser struct {
+	reader   io.ReadCloser
+	duration time.Duration
+}
+
+// Read spins off a goroutine to call the reader's Read method, then selects
+// on the timer's channel or the read's channel; whichever completes first is
+// returned.
+func (r *timeoutReadCloser) Read(b []byte) (int, error) {
+	timer := time.NewTimer(r.duration)
+	c := make(chan readResult, 1)
+
+	go func() {
+		n, err := r.reader.Read(b)
+		timer.Stop()
+		c <- readResult{n: n, err: err}
+	}()
+
+	select {
+	case data := <-c:
+		return data.n, data.err
+	case <-timer.C:
+		return 0, &ResponseTimeoutError{TimeoutDur: r.duration}
+	}
+}
+
+func (r *timeoutReadCloser) Close() error {
+	return r.reader.Close()
+}
+
+// AddResponseReadTimeoutMiddleware adds a middleware to the stack that wraps the
+// response body so that a read that takes too long will return an error.
+func AddResponseReadTimeoutMiddleware(stack *middleware.Stack, duration time.Duration) error {
+	return stack.Deserialize.Add(&readTimeout{duration: duration}, middleware.After)
+}
+
+// readTimeout wraps the response body with a timeoutReadCloser
+type readTimeout struct {
+	duration time.Duration
+}
+
+// ID returns the id of the middleware
+func (*readTimeout) ID() string {
+	return "ReadResponseTimeout"
+}
+
+// HandleDeserialize implements the DeserializeMiddleware interface
+func (m *readTimeout) HandleDeserialize(
+	ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler,
+) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	response.Body = &timeoutReadCloser{
+		reader:   response.Body,
+		duration: m.duration,
+	}
+	out.RawResponse = response
+
+	return out, metadata, err
+}

vendor/github.com/aws/aws-sdk-go-v2/aws/types.go 🔗

@@ -0,0 +1,42 @@
+package aws
+
+import (
+	"fmt"
+)
+
+// Ternary is an enum allowing an unknown or none state in addition to a bool's
+// true and false.
+type Ternary int
+
+func (t Ternary) String() string {
+	switch t {
+	case UnknownTernary:
+		return "unknown"
+	case FalseTernary:
+		return "false"
+	case TrueTernary:
+		return "true"
+	default:
+		return fmt.Sprintf("unknown value, %d", int(t))
+	}
+}
+
+// Bool returns true if the value is TrueTernary, false otherwise.
+func (t Ternary) Bool() bool {
+	return t == TrueTernary
+}
+
+// Enumerations for the values of the Ternary type.
+const (
+	UnknownTernary Ternary = iota
+	FalseTernary
+	TrueTernary
+)
+
+// BoolTernary returns a true or false Ternary value for the bool provided.
+func BoolTernary(v bool) Ternary {
+	if v {
+		return TrueTernary
+	}
+	return FalseTernary
+}
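
Ternary exists so configuration resolution can distinguish "explicitly false" from "not set", which a plain bool cannot express. A minimal sketch of that use (the `resolve` helper is hypothetical, not part of the SDK):

```go
package main

import "fmt"

// Ternary mirrors the SDK type: a three-state flag whose zero value means
// "not set", so an explicit false can be told apart from an absent setting.
type Ternary int

const (
	UnknownTernary Ternary = iota
	FalseTernary
	TrueTernary
)

// Bool reports whether the value is explicitly true.
func (t Ternary) Bool() bool { return t == TrueTernary }

// resolve walks sources in priority order (e.g. environment, then shared
// config) and returns the first explicitly-set value, else the default.
func resolve(sources []Ternary, def bool) bool {
	for _, v := range sources {
		if v != UnknownTernary {
			return v == TrueTernary
		}
	}
	return def
}

func main() {
	// Environment unset, shared config explicitly false: the explicit
	// false wins over the default of true.
	fmt.Println(resolve([]Ternary{UnknownTernary, FalseTernary}, true))
}
```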

vendor/github.com/aws/aws-sdk-go-v2/aws/version.go 🔗

@@ -0,0 +1,8 @@
+// Package aws provides core functionality for making requests to AWS services.
+package aws
+
+// SDKName is the name of this AWS SDK
+const SDKName = "aws-sdk-go-v2"
+
+// SDKVersion is the version of this SDK
+const SDKVersion = goModuleVersion

vendor/github.com/aws/aws-sdk-go-v2/config/CHANGELOG.md 🔗

@@ -0,0 +1,678 @@
+# v1.27.27 (2024-07-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.26 (2024-07-10.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.25 (2024-07-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.24 (2024-07-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.23 (2024-06-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.22 (2024-06-26)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.21 (2024-06-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.20 (2024-06-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.19 (2024-06-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.18 (2024-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.17 (2024-06-03)
+
+* **Documentation**: Add deprecation docs to global endpoint resolution interfaces. These APIs were previously deprecated with the introduction of service-specific endpoint resolution (EndpointResolverV2 and BaseEndpoint on service client options).
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.16 (2024-05-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.15 (2024-05-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.14 (2024-05-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.13 (2024-05-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.12 (2024-05-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.11 (2024-04-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.10 (2024-03-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.9 (2024-03-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.8 (2024-03-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.7 (2024-03-07)
+
+* **Bug Fix**: Remove dependency on go-cmp.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.6 (2024-03-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.5 (2024-03-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.4 (2024-02-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.3 (2024-02-22)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.2 (2024-02-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.1 (2024-02-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.0 (2024-02-13)
+
+* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.6 (2024-01-22)
+
+* **Bug Fix**: Remove invalid escaping of shared config values. All values in the shared config file will now be interpreted literally, save for fully-quoted strings which are unwrapped for legacy reasons.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.5 (2024-01-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.4 (2024-01-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.3 (2024-01-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.2 (2023-12-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.1 (2023-12-08)
+
+* **Bug Fix**: Correct loading of [services *] sections into shared config.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.0 (2023-12-07)
+
+* **Feature**: Support modeled request compression. The only algorithm supported at this time is `gzip`.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.12 (2023-12-06)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.11 (2023-12-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.10 (2023-11-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.9 (2023-11-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.8 (2023-11-28.3)
+
+* **Bug Fix**: Correct resolution of S3Express auth disable toggle.
+
+# v1.25.7 (2023-11-28.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.6 (2023-11-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.5 (2023-11-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.4 (2023-11-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.3 (2023-11-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.2 (2023-11-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.1 (2023-11-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.0 (2023-11-14)
+
+* **Feature**: Add support for dynamic auth token from file and EKS container host in absolute/relative URIs in the HTTP credential provider.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.24.0 (2023-11-13)
+
+* **Feature**: Replace the legacy config parser with a modern, less-strict implementation. Parsing failures within a section will now simply ignore the invalid line rather than silently drop the entire section.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.23.0 (2023-11-09.2)
+
+* **Feature**: BREAKFIX: In order to support subproperty parsing, invalid property definitions must not be ignored
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.22.3 (2023-11-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.22.2 (2023-11-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.22.1 (2023-11-06)
+
+* No change notes available for this release.
+
+# v1.22.0 (2023-11-02)
+
+* **Feature**: Add env and shared config settings for disabling IMDSv1 fallback.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.21.0 (2023-11-01)
+
+* **Feature**: Adds support for configured endpoints via environment variables and the AWS shared configuration file.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.0 (2023-10-31)
+
+* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.19.1 (2023-10-24)
+
+* No change notes available for this release.
+
+# v1.19.0 (2023-10-16)
+
+* **Feature**: Modify logic of retrieving user agent appID from env config
+
+# v1.18.45 (2023-10-12)
+
+* **Bug Fix**: Fail to load config if an explicitly provided profile doesn't exist.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.44 (2023-10-06)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.43 (2023-10-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.42 (2023-09-22)
+
+* **Bug Fix**: Fixed a bug where merging `max_attempts` or `duration_seconds` fields across shared config files with invalid values would silently default them to 0.
+* **Bug Fix**: Move type assertion of config values out of the parsing stage, which resolves an issue where the contents of a profile would silently be dropped with certain numeric formats.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.41 (2023-09-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.40 (2023-09-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.39 (2023-09-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.38 (2023-08-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.37 (2023-08-23)
+
+* No change notes available for this release.
+
+# v1.18.36 (2023-08-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.35 (2023-08-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.34 (2023-08-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.33 (2023-08-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.32 (2023-08-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.31 (2023-07-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.30 (2023-07-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.29 (2023-07-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.28 (2023-07-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.27 (2023-06-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.26 (2023-06-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.25 (2023-05-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.24 (2023-05-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.23 (2023-05-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.22 (2023-04-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.21 (2023-04-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.20 (2023-04-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.19 (2023-03-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.18 (2023-03-16)
+
+* **Bug Fix**: Allow RoleARN to be set as functional option on STS WebIdentityRoleOptions. Fixes aws/aws-sdk-go-v2#2015.
+
+# v1.18.17 (2023-03-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.16 (2023-03-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.15 (2023-02-22)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.14 (2023-02-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.13 (2023-02-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.12 (2023-02-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.11 (2023-02-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.10 (2023-01-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.9 (2023-01-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.8 (2023-01-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.7 (2022-12-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.6 (2022-12-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.5 (2022-12-15)
+
+* **Bug Fix**: Unify logic between shared config and in finding home directory
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.4 (2022-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.3 (2022-11-22)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.2 (2022-11-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.1 (2022-11-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.0 (2022-11-11)
+
+* **Announcement**: When using the SSOTokenProvider, a previous implementation incorrectly compensated for invalid SSOTokenProvider configurations in the shared profile. This has been fixed via PR #1903 and tracked in issue #1846
+* **Feature**: Adds token refresh support (via SSOTokenProvider) when using the SSOCredentialProvider
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.11 (2022-11-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.10 (2022-10-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.9 (2022-10-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.8 (2022-09-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.7 (2022-09-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.6 (2022-09-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.5 (2022-09-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.4 (2022-08-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.3 (2022-08-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.2 (2022-08-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.1 (2022-08-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.0 (2022-08-14)
+
+* **Feature**: Add alternative mechanism for determining the user's `$HOME` or `%USERPROFILE%` location when the environment variables are not present.
+
+# v1.16.1 (2022-08-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.0 (2022-08-10)
+
+* **Feature**: Adds support for the following settings in the `~/.aws/credentials` file: `sso_account_id`, `sso_region`, `sso_role_name`, `sso_start_url`, and `ca_bundle`.
+
+# v1.15.17 (2022-08-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.16 (2022-08-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.15 (2022-08-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.14 (2022-07-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.13 (2022-07-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.12 (2022-06-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.11 (2022-06-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.10 (2022-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.9 (2022-05-26)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.8 (2022-05-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.7 (2022-05-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.6 (2022-05-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.5 (2022-05-09)
+
+* **Bug Fix**: Fixes a bug in LoadDefaultConfig to correctly assign ConfigSources so all config resolvers have access to the config sources. This fixes the feature/ec2/imds client not having configuration applied via config.LoadOptions such as EC2IMDSClientEnableState. PR [#1682](https://github.com/aws/aws-sdk-go-v2/pull/1682)
+
+# v1.15.4 (2022-04-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.3 (2022-03-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.2 (2022-03-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.1 (2022-03-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.0 (2022-03-08)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.0 (2022-02-24)
+
+* **Feature**: Adds support for loading RetryMaxAttempts and RetryMode from the environment and shared configuration files. These parameters drive how the SDK's API client will initialize its default retryer, if a custom retryer has not been specified. See the [config](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/config) module and [aws.Config](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws#Config) for more information about how to use these new options.
+* **Feature**: Adds support for the `ca_bundle` parameter in shared config and credentials files. The usage of the file is the same as environment variable, `AWS_CA_BUNDLE`, but sourced from shared config. Fixes [#1589](https://github.com/aws/aws-sdk-go-v2/issues/1589)
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.1 (2022-01-28)
+
+* **Bug Fix**: Fixes LoadDefaultConfig handling of errors returned by passed in functional options. Previously errors returned from the LoadOptions passed into LoadDefaultConfig were incorrectly ignored. [#1562](https://github.com/aws/aws-sdk-go-v2/pull/1562). Thanks to [Pinglei Guo](https://github.com/pingleig) for submitting this PR.
+* **Bug Fix**: Fixes the SDK's handling of `duration_seconds` in the shared credentials file or specified in multiple shared config and shared credentials files under the same profile. [#1568](https://github.com/aws/aws-sdk-go-v2/pull/1568). Thanks to [Amir Szekely](https://github.com/kichik) for helping reproduce this bug.
+* **Bug Fix**: Updates `config` module to use os.UserHomeDir instead of hard coded environment variable for OS. [#1563](https://github.com/aws/aws-sdk-go-v2/pull/1563)
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.0 (2022-01-14)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.0 (2022-01-07)
+
+* **Feature**: Add load option for CredentialCache. Adds a new member to the LoadOptions struct, CredentialsCacheOptions. This member allows specifying a function that will be used to configure the CredentialsCache. The CredentialsCacheOptions will only be used if the configuration loader will wrap the underlying credential provider in the CredentialsCache.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.1 (2021-12-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.0 (2021-12-02)
+
+* **Feature**: Add support for specifying `EndpointResolverWithOptions` on `LoadOptions`, and associated `WithEndpointResolverWithOptions`.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.3 (2021-11-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.2 (2021-11-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.1 (2021-11-12)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.0 (2021-11-06)
+
+* **Feature**: The SDK now supports configuration of FIPS and DualStack endpoints using environment variables, shared configuration, or programmatically.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.0 (2021-10-21)
+
+* **Feature**: Updated to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.3 (2021-10-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.2 (2021-09-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.1 (2021-09-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.0 (2021-09-02)
+
+* **Feature**: Add support for S3 Multi-Region Access Point ARNs.
+
+# v1.7.0 (2021-08-27)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.1 (2021-08-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.0 (2021-08-04)
+
+* **Feature**: Adds error handling for deferred close calls
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.5.0 (2021-07-15)
+
+* **Feature**: Support has been added for EC2 IPv6-enabled Instance Metadata Service Endpoints.
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.1 (2021-07-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.0 (2021-06-25)
+
+* **Feature**: Adds configuration setting for enabling endpoint discovery.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.0 (2021-05-20)
+
+* **Feature**: SSO credentials can now be defined alongside other credential providers within the same configuration profile.
+* **Bug Fix**: Profile names were incorrectly normalized to lower-case, which could result in unexpected profile configurations.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.0 (2021-05-14)
+
+* **Feature**: Constant has been added to modules to enable runtime version inspection for reporting.
+* **Dependency Update**: Updated to the latest SDK module versions
+

vendor/github.com/aws/aws-sdk-go-v2/config/LICENSE.txt 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/config/config.go 🔗

@@ -0,0 +1,222 @@
+package config
+
+import (
+	"context"
+	"os"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+)
+
+// defaultAWSConfigResolvers is a slice of functions that will resolve external
+// configuration values into AWS configuration values.
+//
+// This will set up configuration values such as the AWS configuration's Region.
+var defaultAWSConfigResolvers = []awsConfigResolver{
+	// Resolves the default configuration the SDK's aws.Config will be
+	// initialized with.
+	resolveDefaultAWSConfig,
+
+	// Sets the logger to be used. Could be user provided logger, and client
+	// logging mode.
+	resolveLogger,
+	resolveClientLogMode,
+
+	// Sets the HTTP client and configuration to use for making requests using
+	// the HTTP transport.
+	resolveHTTPClient,
+	resolveCustomCABundle,
+
+	// Sets the endpoint resolving behavior the API Clients will use for making
+	// requests. Clients default to their own endpoint resolution; this allows
+	// overrides to be specified. The resolveEndpointResolver option is
+	// deprecated, but we still need to set it for backwards compatibility on
+	// config construction.
+	resolveEndpointResolver,
+	resolveEndpointResolverWithOptions,
+
+	// Sets the retry behavior API clients will use within their retry attempt
+	// middleware. Defaults to unset, allowing API clients to define their own
+	// retry behavior.
+	resolveRetryer,
+
+	// Sets the region the API Clients should use for making requests to.
+	resolveRegion,
+	resolveEC2IMDSRegion,
+	resolveDefaultRegion,
+
+	// Sets the additional set of middleware stack mutators that will customize
+	// the API client request pipeline middleware.
+	resolveAPIOptions,
+
+	// Resolves the DefaultsMode that should be used by SDK clients. If this
+	// mode is set to DefaultsModeAuto, it is resolved from the runtime
+	// environment.
+	//
+	// Comes after HTTPClient and CustomCABundle to ensure the HTTP client is
+	// configured if provided before invoking IMDS if mode is auto. Comes
+	// before resolving credentials so that those subsequent clients use the
+	// configured auto mode.
+	resolveDefaultsModeOptions,
+
+	// Sets the resolved credentials the API clients will use for
+	// authentication. Provides the SDK's default credential chain.
+	//
+	// Should probably be the last step in the resolve chain to ensure that all
+	// other configurations are resolved first in case downstream credentials
+	// implementations depend on or can be configured with earlier resolved
+	// configuration options.
+	resolveCredentials,
+
+	// Sets the resolved bearer authentication token API clients will use for
+	// httpBearerAuth authentication scheme.
+	resolveBearerAuthToken,
+
+	// Sets the sdk app ID if present in env var or shared config profile
+	resolveAppID,
+
+	resolveBaseEndpoint,
+
+	// Sets the DisableRequestCompression if present in env var or shared config profile
+	resolveDisableRequestCompression,
+
+	// Sets the RequestMinCompressSizeBytes if present in env var or shared config profile
+	resolveRequestMinCompressSizeBytes,
+
+	// Sets the AccountIDEndpointMode if present in env var or shared config profile
+	resolveAccountIDEndpointMode,
+}
+
+// A Config represents a generic configuration value or set of values. This type
+// will be used by the AWSConfigResolvers to extract configuration values.
+//
+// Generally, the Config type will use type assertions against the Provider
+// interfaces to extract specific data from the Config.
+type Config interface{}
+
+// A loader is used to load external configuration data and returns it as
+// a generic Config type.
+//
+// The loader should return an error if it fails to load the external
+// configuration, the configuration data is malformed, or required components
+// are missing.
+type loader func(context.Context, configs) (Config, error)
+
+// An awsConfigResolver will extract configuration data from the configs slice
+// using the provider interfaces to extract specific functionality. The extracted
+// configuration values will be written to the AWS Config value.
+//
+// The resolver should return an error if it fails to extract the data, or if
+// the data is malformed or incomplete.
+type awsConfigResolver func(ctx context.Context, cfg *aws.Config, configs configs) error
+
+// configs is a slice of Config values. These values will be used by the
+// AWSConfigResolvers to extract external configuration values to populate the
+// AWS Config type.
+//
+// Use AppendFromLoaders to add additional external Config values that are
+// loaded from external sources.
+//
+// Use ResolveAWSConfig after external Config values have been added or loaded
+// to extract the loaded configuration values into the AWS Config.
+type configs []Config
+
+// AppendFromLoaders iterates over the slice of loaders passed in calling each
+// loader function in order. The external config value returned by the loader
+// will be added to the returned configs slice.
+//
+// If a loader returns an error this method will stop iterating and return
+// that error.
+func (cs configs) AppendFromLoaders(ctx context.Context, loaders []loader) (configs, error) {
+	for _, fn := range loaders {
+		cfg, err := fn(ctx, cs)
+		if err != nil {
+			return nil, err
+		}
+
+		cs = append(cs, cfg)
+	}
+
+	return cs, nil
+}
+
+// ResolveAWSConfig returns an AWS configuration populated with values by calling
+// the resolvers slice passed in. Each resolver is called in order. Any resolver
+// may overwrite the AWS Configuration value of a previous resolver.
+//
+// If a resolver returns an error, this method will return that error and stop
+// iterating over the resolvers.
+func (cs configs) ResolveAWSConfig(ctx context.Context, resolvers []awsConfigResolver) (aws.Config, error) {
+	var cfg aws.Config
+
+	for _, fn := range resolvers {
+		if err := fn(ctx, &cfg, cs); err != nil {
+			return aws.Config{}, err
+		}
+	}
+
+	return cfg, nil
+}
+
+// ResolveConfig calls the provided function, passing the slice of configuration sources.
+// This implements the aws.ConfigResolver interface.
+func (cs configs) ResolveConfig(f func(configs []interface{}) error) error {
+	var cfgs []interface{}
+	for i := range cs {
+		cfgs = append(cfgs, cs[i])
+	}
+	return f(cfgs)
+}
+
+// LoadDefaultConfig reads the SDK's default external configurations, and
+// populates an AWS Config with the values from the external configurations.
+//
+// An optional variadic set of additional Config values can be provided as input
+// that will be prepended to the configs slice. Use this to add custom configuration.
+// The custom configurations must satisfy the respective providers for their data
+// or the custom data will be ignored by the resolvers and config loaders.
+//
+//	cfg, err := config.LoadDefaultConfig( context.TODO(),
+//	   config.WithSharedConfigProfile("test-profile"),
+//	)
+//	if err != nil {
+//	   panic(fmt.Sprintf("failed loading config, %v", err))
+//	}
+//
+// The default configuration sources are:
+// * Environment Variables
+// * Shared Configuration and Shared Credentials files.
+func LoadDefaultConfig(ctx context.Context, optFns ...func(*LoadOptions) error) (cfg aws.Config, err error) {
+	var options LoadOptions
+	for _, optFn := range optFns {
+		if err := optFn(&options); err != nil {
+			return aws.Config{}, err
+		}
+	}
+
+	// assign Load Options to configs
+	var cfgCpy = configs{options}
+
+	cfgCpy, err = cfgCpy.AppendFromLoaders(ctx, resolveConfigLoaders(&options))
+	if err != nil {
+		return aws.Config{}, err
+	}
+
+	cfg, err = cfgCpy.ResolveAWSConfig(ctx, defaultAWSConfigResolvers)
+	if err != nil {
+		return aws.Config{}, err
+	}
+
+	return cfg, nil
+}
+
+func resolveConfigLoaders(options *LoadOptions) []loader {
+	loaders := make([]loader, 2)
+	loaders[0] = loadEnvConfig
+
+	// specification of a profile should cause a load failure if it doesn't exist
+	if os.Getenv(awsProfileEnvVar) != "" || options.SharedConfigProfile != "" {
+		loaders[1] = loadSharedConfig
+	} else {
+		loaders[1] = loadSharedConfigIgnoreNotExist
+	}
+
+	return loaders
+}

vendor/github.com/aws/aws-sdk-go-v2/config/defaultsmode.go 🔗

@@ -0,0 +1,47 @@
+package config
+
+import (
+	"context"
+	"os"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/feature/ec2/imds"
+)
+
+const execEnvVar = "AWS_EXECUTION_ENV"
+
+// DefaultsModeOptions is the set of options used to configure the resolution
+// of the SDK's defaults mode.
+type DefaultsModeOptions struct {
+	// The SDK configuration defaults mode. Defaults to legacy if not specified.
+	//
+	// Supported modes are: auto, cross-region, in-region, legacy, mobile, standard
+	Mode aws.DefaultsMode
+
+	// The EC2 Instance Metadata Client that should be used when performing environment
+	// discovery when aws.DefaultsModeAuto is set.
+	//
+	// If not specified the SDK will construct a client if the instance metadata service has not been disabled by
+	// the AWS_EC2_METADATA_DISABLED environment variable.
+	IMDSClient *imds.Client
+}
+
+func resolveDefaultsModeRuntimeEnvironment(ctx context.Context, envConfig *EnvConfig, client *imds.Client) (aws.RuntimeEnvironment, error) {
+	getRegionOutput, err := client.GetRegion(ctx, &imds.GetRegionInput{})
+	// honor context timeouts, but if we couldn't talk to IMDS don't fail runtime environment introspection.
+	select {
+	case <-ctx.Done():
+		return aws.RuntimeEnvironment{}, err
+	default:
+	}
+
+	var imdsRegion string
+	if err == nil {
+		imdsRegion = getRegionOutput.Region
+	}
+
+	return aws.RuntimeEnvironment{
+		EnvironmentIdentifier:     aws.ExecutionEnvironmentID(os.Getenv(execEnvVar)),
+		Region:                    envConfig.Region,
+		EC2InstanceMetadataRegion: imdsRegion,
+	}, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/config/doc.go 🔗

@@ -0,0 +1,20 @@
+// Package config provides utilities for loading configuration from multiple
+// sources that can be used to configure the SDK's API clients and utilities.
+//
+// The config package will load configuration from environment variables, AWS
+// shared configuration file (~/.aws/config), and AWS shared credentials file
+// (~/.aws/credentials).
+//
+// Use the LoadDefaultConfig to load configuration from all the SDK's supported
+// sources, and resolve credentials using the SDK's default credential chain.
+//
+// LoadDefaultConfig allows for a variadic list of additional Config sources that can
+// provide one or more configuration values which can be used to programmatically control the resolution
+// of a specific value, or allow for broader range of additional configuration sources not supported by the SDK.
+// A Config source implements one or more provider interfaces defined in this package. Config sources passed in will
+// take precedence over the default environment and shared config sources used by the SDK. If one or more Config sources
+// implement the same provider interface, priority will be handled by the order in which the sources were passed in.
+//
+// A number of helpers (prefixed by "With") are provided in this package that
+// implement their respective provider interface. These helpers should be used
+// for overriding configuration programmatically at runtime.
+package config

vendor/github.com/aws/aws-sdk-go-v2/config/env_config.go 🔗

@@ -0,0 +1,856 @@
+package config
+
+import (
+	"bytes"
+	"context"
+	"fmt"
+	"io"
+	"io/ioutil"
+	"os"
+	"strconv"
+	"strings"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/feature/ec2/imds"
+	smithyrequestcompression "github.com/aws/smithy-go/private/requestcompression"
+)
+
+// CredentialsSourceName provides a name of the provider when config is
+// loaded from environment.
+const CredentialsSourceName = "EnvConfigCredentials"
+
+// Environment variables that will be read for configuration values.
+const (
+	awsAccessKeyIDEnvVar = "AWS_ACCESS_KEY_ID"
+	awsAccessKeyEnvVar   = "AWS_ACCESS_KEY"
+
+	awsSecretAccessKeyEnvVar = "AWS_SECRET_ACCESS_KEY"
+	awsSecretKeyEnvVar       = "AWS_SECRET_KEY"
+
+	awsSessionTokenEnvVar = "AWS_SESSION_TOKEN"
+
+	awsContainerCredentialsEndpointEnvVar     = "AWS_CONTAINER_CREDENTIALS_FULL_URI"
+	awsContainerCredentialsRelativePathEnvVar = "AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
+	awsContainerPProviderAuthorizationEnvVar  = "AWS_CONTAINER_AUTHORIZATION_TOKEN"
+
+	awsRegionEnvVar        = "AWS_REGION"
+	awsDefaultRegionEnvVar = "AWS_DEFAULT_REGION"
+
+	awsProfileEnvVar        = "AWS_PROFILE"
+	awsDefaultProfileEnvVar = "AWS_DEFAULT_PROFILE"
+
+	awsSharedCredentialsFileEnvVar = "AWS_SHARED_CREDENTIALS_FILE"
+
+	awsConfigFileEnvVar = "AWS_CONFIG_FILE"
+
+	awsCustomCABundleEnvVar = "AWS_CA_BUNDLE"
+
+	awsWebIdentityTokenFilePathEnvVar = "AWS_WEB_IDENTITY_TOKEN_FILE"
+
+	awsRoleARNEnvVar         = "AWS_ROLE_ARN"
+	awsRoleSessionNameEnvVar = "AWS_ROLE_SESSION_NAME"
+
+	awsEnableEndpointDiscoveryEnvVar = "AWS_ENABLE_ENDPOINT_DISCOVERY"
+
+	awsS3UseARNRegionEnvVar = "AWS_S3_USE_ARN_REGION"
+
+	awsEc2MetadataServiceEndpointModeEnvVar = "AWS_EC2_METADATA_SERVICE_ENDPOINT_MODE"
+
+	awsEc2MetadataServiceEndpointEnvVar = "AWS_EC2_METADATA_SERVICE_ENDPOINT"
+
+	awsEc2MetadataDisabled         = "AWS_EC2_METADATA_DISABLED"
+	awsEc2MetadataV1DisabledEnvVar = "AWS_EC2_METADATA_V1_DISABLED"
+
+	awsS3DisableMultiRegionAccessPointEnvVar = "AWS_S3_DISABLE_MULTIREGION_ACCESS_POINTS"
+
+	awsUseDualStackEndpoint = "AWS_USE_DUALSTACK_ENDPOINT"
+
+	awsUseFIPSEndpoint = "AWS_USE_FIPS_ENDPOINT"
+
+	awsDefaultMode = "AWS_DEFAULTS_MODE"
+
+	awsRetryMaxAttempts = "AWS_MAX_ATTEMPTS"
+	awsRetryMode        = "AWS_RETRY_MODE"
+	awsSdkAppID         = "AWS_SDK_UA_APP_ID"
+
+	awsIgnoreConfiguredEndpoints = "AWS_IGNORE_CONFIGURED_ENDPOINT_URLS"
+	awsEndpointURL               = "AWS_ENDPOINT_URL"
+
+	awsDisableRequestCompression      = "AWS_DISABLE_REQUEST_COMPRESSION"
+	awsRequestMinCompressionSizeBytes = "AWS_REQUEST_MIN_COMPRESSION_SIZE_BYTES"
+
+	awsS3DisableExpressSessionAuthEnv = "AWS_S3_DISABLE_EXPRESS_SESSION_AUTH"
+
+	awsAccountIDEnv             = "AWS_ACCOUNT_ID"
+	awsAccountIDEndpointModeEnv = "AWS_ACCOUNT_ID_ENDPOINT_MODE"
+)
+
+var (
+	credAccessEnvKeys = []string{
+		awsAccessKeyIDEnvVar,
+		awsAccessKeyEnvVar,
+	}
+	credSecretEnvKeys = []string{
+		awsSecretAccessKeyEnvVar,
+		awsSecretKeyEnvVar,
+	}
+	regionEnvKeys = []string{
+		awsRegionEnvVar,
+		awsDefaultRegionEnvVar,
+	}
+	profileEnvKeys = []string{
+		awsProfileEnvVar,
+		awsDefaultProfileEnvVar,
+	}
+)
+
+// EnvConfig is a collection of environment values the SDK will read
+// configuration from. All environment values are optional, but some values,
+// such as credentials, require multiple values to be complete or the values
+// will be ignored.
+type EnvConfig struct {
+	// Environment configuration values. If set, both Access Key ID and Secret
+	// Access Key must be provided. Session Token can optionally also be
+	// provided, but is not required.
+	//
+	//	# Access Key ID
+	//	AWS_ACCESS_KEY_ID=AKID
+	//	AWS_ACCESS_KEY=AKID # only read if AWS_ACCESS_KEY_ID is not set.
+	//
+	//	# Secret Access Key
+	//	AWS_SECRET_ACCESS_KEY=SECRET
+	//	AWS_SECRET_KEY=SECRET # only read if AWS_SECRET_ACCESS_KEY is not set.
+	//
+	//	# Session Token
+	//	AWS_SESSION_TOKEN=TOKEN
+	Credentials aws.Credentials
+
+	// ContainerCredentialsEndpoint value is the HTTP enabled endpoint to retrieve credentials
+	// using the endpointcreds.Provider
+	ContainerCredentialsEndpoint string
+
+	// ContainerCredentialsRelativePath is the relative URI path that will be used when attempting to retrieve
+	// credentials from the container endpoint.
+	ContainerCredentialsRelativePath string
+
+	// ContainerAuthorizationToken is the authorization token that will be included in the HTTP Authorization
+	// header when attempting to retrieve credentials from the container credentials endpoint.
+	ContainerAuthorizationToken string
+
+	// Region value will instruct the SDK where to make service API requests to.
+	// If it is not provided in the environment, the region must be provided
+	// before a service client request is made.
+	//
+	//	AWS_REGION=us-west-2
+	//	AWS_DEFAULT_REGION=us-west-2
+	Region string
+
+	// Profile name the SDK should use when loading shared configuration from the
+	// shared configuration files. If not provided "default" will be used as the
+	// profile name.
+	//
+	//	AWS_PROFILE=my_profile
+	//	AWS_DEFAULT_PROFILE=my_profile
+	SharedConfigProfile string
+
+	// Shared credentials file path can be set to instruct the SDK to use an alternate
+	// file for the shared credentials. If not set the file will be loaded from
+	// $HOME/.aws/credentials on Linux/Unix based systems, and
+	// %USERPROFILE%\.aws\credentials on Windows.
+	//
+	//	AWS_SHARED_CREDENTIALS_FILE=$HOME/my_shared_credentials
+	SharedCredentialsFile string
+
+	// Shared config file path can be set to instruct the SDK to use an alternate
+	// file for the shared config. If not set the file will be loaded from
+	// $HOME/.aws/config on Linux/Unix based systems, and
+	// %USERPROFILE%\.aws\config on Windows.
+	//
+	//	AWS_CONFIG_FILE=$HOME/my_shared_config
+	SharedConfigFile string
+
+	// Sets the path to a custom Certificate Authority (CA) bundle PEM file
+	// that the SDK will use instead of the system's root CA bundle.
+	// Only use this if you want to configure the SDK to use a custom set
+	// of CAs.
+	//
+	// Enabling this option will attempt to merge the Transport
+	// into the SDK's HTTP client. If the client's Transport is
+	// not a http.Transport an error will be returned. If the
+	// Transport's TLS config is set this option will cause the
+	// SDK to overwrite the Transport's TLS config's RootCAs value.
+	//
+	// Setting a custom HTTPClient in the aws.Config options will override this setting.
+	// To use this option with a custom HTTP client, the HTTP client needs to be
+	// provided when creating the config, not the service client.
+	//
+	//  AWS_CA_BUNDLE=$HOME/my_custom_ca_bundle
+	CustomCABundle string
+
+	// Enables endpoint discovery via environment variables.
+	//
+	//	AWS_ENABLE_ENDPOINT_DISCOVERY=true
+	EnableEndpointDiscovery aws.EndpointDiscoveryEnableState
+
+	// Specifies the WebIdentity token the SDK should use to assume a role
+	// with.
+	//
+	//  AWS_WEB_IDENTITY_TOKEN_FILE=file_path
+	WebIdentityTokenFilePath string
+
+	// Specifies the IAM role ARN to use when assuming a role.
+	//
+	//  AWS_ROLE_ARN=role_arn
+	RoleARN string
+
+	// Specifies the IAM role session name to use when assuming a role.
+	//
+	//  AWS_ROLE_SESSION_NAME=session_name
+	RoleSessionName string
+
+	// Specifies if the S3 service should allow ARNs to direct the region
+	// the client's requests are sent to.
+	//
+	// AWS_S3_USE_ARN_REGION=true
+	S3UseARNRegion *bool
+
+	// Specifies if the EC2 IMDS service client is enabled.
+	//
+	// AWS_EC2_METADATA_DISABLED=true
+	EC2IMDSClientEnableState imds.ClientEnableState
+
+	// Specifies if EC2 IMDSv1 fallback is disabled.
+	//
+	// AWS_EC2_METADATA_V1_DISABLED=true
+	EC2IMDSv1Disabled *bool
+
+	// Specifies the EC2 Instance Metadata Service default endpoint selection mode (IPv4 or IPv6)
+	//
+	// AWS_EC2_METADATA_SERVICE_ENDPOINT_MODE=IPv6
+	EC2IMDSEndpointMode imds.EndpointModeState
+
+	// Specifies the EC2 Instance Metadata Service endpoint to use. If specified it overrides EC2IMDSEndpointMode.
+	//
+	// AWS_EC2_METADATA_SERVICE_ENDPOINT=http://fd00:ec2::254
+	EC2IMDSEndpoint string
+
+	// Specifies if the S3 service should disable multi-region access points
+	// support.
+	//
+	// AWS_S3_DISABLE_MULTIREGION_ACCESS_POINTS=true
+	S3DisableMultiRegionAccessPoints *bool
+
+	// Specifies that SDK clients must resolve a dual-stack endpoint for
+	// services.
+	//
+	// AWS_USE_DUALSTACK_ENDPOINT=true
+	UseDualStackEndpoint aws.DualStackEndpointState
+
+	// Specifies that SDK clients must resolve a FIPS endpoint for
+	// services.
+	//
+	// AWS_USE_FIPS_ENDPOINT=true
+	UseFIPSEndpoint aws.FIPSEndpointState
+
+	// Specifies the SDK Defaults Mode used by services.
+	//
+	// AWS_DEFAULTS_MODE=standard
+	DefaultsMode aws.DefaultsMode
+
+	// Specifies the maximum number attempts an API client will call an
+	// operation that fails with a retryable error.
+	//
+	// AWS_MAX_ATTEMPTS=3
+	RetryMaxAttempts int
+
+	// Specifies the retry model the API client will be created with.
+	//
+	// aws_retry_mode=standard
+	RetryMode aws.RetryMode
+
+	// aws sdk app ID that can be added to user agent header string
+	AppID string
+
+	// Flag used to disable configured endpoints.
+	IgnoreConfiguredEndpoints *bool
+
+	// Value to contain configured endpoints to be propagated to
+	// corresponding endpoint resolution field.
+	BaseEndpoint string
+
+	// determine if request compression is allowed, default to false
+	// retrieved from env var AWS_DISABLE_REQUEST_COMPRESSION
+	DisableRequestCompression *bool
+
+	// inclusive threshold request body size to trigger compression,
+	// default to 10240 and must be within 0 and 10485760 bytes inclusive
+	// retrieved from env var AWS_REQUEST_MIN_COMPRESSION_SIZE_BYTES
+	RequestMinCompressSizeBytes *int64
+
+	// Whether S3Express auth is disabled.
+	//
+	// This will NOT prevent requests from being made to S3Express buckets, it
+	// will only bypass the modified endpoint routing and signing behaviors
+	// associated with the feature.
+	S3DisableExpressAuth *bool
+
+	// Indicates whether account ID will be required/ignored in endpoint2.0 routing
+	AccountIDEndpointMode aws.AccountIDEndpointMode
+}
+
+// loadEnvConfig reads configuration values from the OS's environment variables.
+// Returns a Config typed EnvConfig to satisfy the ConfigLoader func type.
+func loadEnvConfig(ctx context.Context, cfgs configs) (Config, error) {
+	return NewEnvConfig()
+}
+
+// NewEnvConfig retrieves the SDK's environment configuration.
+// See `EnvConfig` for the values that will be retrieved.
+func NewEnvConfig() (EnvConfig, error) {
+	var cfg EnvConfig
+
+	creds := aws.Credentials{
+		Source: CredentialsSourceName,
+	}
+	setStringFromEnvVal(&creds.AccessKeyID, credAccessEnvKeys)
+	setStringFromEnvVal(&creds.SecretAccessKey, credSecretEnvKeys)
+	if creds.HasKeys() {
+		creds.AccountID = os.Getenv(awsAccountIDEnv)
+		creds.SessionToken = os.Getenv(awsSessionTokenEnvVar)
+		cfg.Credentials = creds
+	}
+
+	cfg.ContainerCredentialsEndpoint = os.Getenv(awsContainerCredentialsEndpointEnvVar)
+	cfg.ContainerCredentialsRelativePath = os.Getenv(awsContainerCredentialsRelativePathEnvVar)
+	cfg.ContainerAuthorizationToken = os.Getenv(awsContainerPProviderAuthorizationEnvVar)
+
+	setStringFromEnvVal(&cfg.Region, regionEnvKeys)
+	setStringFromEnvVal(&cfg.SharedConfigProfile, profileEnvKeys)
+
+	cfg.SharedCredentialsFile = os.Getenv(awsSharedCredentialsFileEnvVar)
+	cfg.SharedConfigFile = os.Getenv(awsConfigFileEnvVar)
+
+	cfg.CustomCABundle = os.Getenv(awsCustomCABundleEnvVar)
+
+	cfg.WebIdentityTokenFilePath = os.Getenv(awsWebIdentityTokenFilePathEnvVar)
+
+	cfg.RoleARN = os.Getenv(awsRoleARNEnvVar)
+	cfg.RoleSessionName = os.Getenv(awsRoleSessionNameEnvVar)
+
+	cfg.AppID = os.Getenv(awsSdkAppID)
+
+	if err := setBoolPtrFromEnvVal(&cfg.DisableRequestCompression, []string{awsDisableRequestCompression}); err != nil {
+		return cfg, err
+	}
+	if err := setInt64PtrFromEnvVal(&cfg.RequestMinCompressSizeBytes, []string{awsRequestMinCompressionSizeBytes}, smithyrequestcompression.MaxRequestMinCompressSizeBytes); err != nil {
+		return cfg, err
+	}
+
+	if err := setEndpointDiscoveryTypeFromEnvVal(&cfg.EnableEndpointDiscovery, []string{awsEnableEndpointDiscoveryEnvVar}); err != nil {
+		return cfg, err
+	}
+
+	if err := setBoolPtrFromEnvVal(&cfg.S3UseARNRegion, []string{awsS3UseARNRegionEnvVar}); err != nil {
+		return cfg, err
+	}
+
+	setEC2IMDSClientEnableState(&cfg.EC2IMDSClientEnableState, []string{awsEc2MetadataDisabled})
+	if err := setEC2IMDSEndpointMode(&cfg.EC2IMDSEndpointMode, []string{awsEc2MetadataServiceEndpointModeEnvVar}); err != nil {
+		return cfg, err
+	}
+	cfg.EC2IMDSEndpoint = os.Getenv(awsEc2MetadataServiceEndpointEnvVar)
+	if err := setBoolPtrFromEnvVal(&cfg.EC2IMDSv1Disabled, []string{awsEc2MetadataV1DisabledEnvVar}); err != nil {
+		return cfg, err
+	}
+
+	if err := setBoolPtrFromEnvVal(&cfg.S3DisableMultiRegionAccessPoints, []string{awsS3DisableMultiRegionAccessPointEnvVar}); err != nil {
+		return cfg, err
+	}
+
+	if err := setUseDualStackEndpointFromEnvVal(&cfg.UseDualStackEndpoint, []string{awsUseDualStackEndpoint}); err != nil {
+		return cfg, err
+	}
+
+	if err := setUseFIPSEndpointFromEnvVal(&cfg.UseFIPSEndpoint, []string{awsUseFIPSEndpoint}); err != nil {
+		return cfg, err
+	}
+
+	if err := setDefaultsModeFromEnvVal(&cfg.DefaultsMode, []string{awsDefaultMode}); err != nil {
+		return cfg, err
+	}
+
+	if err := setIntFromEnvVal(&cfg.RetryMaxAttempts, []string{awsRetryMaxAttempts}); err != nil {
+		return cfg, err
+	}
+	if err := setRetryModeFromEnvVal(&cfg.RetryMode, []string{awsRetryMode}); err != nil {
+		return cfg, err
+	}
+
+	setStringFromEnvVal(&cfg.BaseEndpoint, []string{awsEndpointURL})
+
+	if err := setBoolPtrFromEnvVal(&cfg.IgnoreConfiguredEndpoints, []string{awsIgnoreConfiguredEndpoints}); err != nil {
+		return cfg, err
+	}
+
+	if err := setBoolPtrFromEnvVal(&cfg.S3DisableExpressAuth, []string{awsS3DisableExpressSessionAuthEnv}); err != nil {
+		return cfg, err
+	}
+
+	if err := setAIDEndPointModeFromEnvVal(&cfg.AccountIDEndpointMode, []string{awsAccountIDEndpointModeEnv}); err != nil {
+		return cfg, err
+	}
+
+	return cfg, nil
+}
+
+func (c EnvConfig) getDefaultsMode(ctx context.Context) (aws.DefaultsMode, bool, error) {
+	if len(c.DefaultsMode) == 0 {
+		return "", false, nil
+	}
+	return c.DefaultsMode, true, nil
+}
+
+func (c EnvConfig) getAppID(context.Context) (string, bool, error) {
+	return c.AppID, len(c.AppID) > 0, nil
+}
+
+func (c EnvConfig) getDisableRequestCompression(context.Context) (bool, bool, error) {
+	if c.DisableRequestCompression == nil {
+		return false, false, nil
+	}
+	return *c.DisableRequestCompression, true, nil
+}
+
+func (c EnvConfig) getRequestMinCompressSizeBytes(context.Context) (int64, bool, error) {
+	if c.RequestMinCompressSizeBytes == nil {
+		return 0, false, nil
+	}
+	return *c.RequestMinCompressSizeBytes, true, nil
+}
+
+func (c EnvConfig) getAccountIDEndpointMode(context.Context) (aws.AccountIDEndpointMode, bool, error) {
+	return c.AccountIDEndpointMode, len(c.AccountIDEndpointMode) > 0, nil
+}
+
+// GetRetryMaxAttempts returns the value of AWS_MAX_ATTEMPTS if it was
+// specified, and not 0.
+func (c EnvConfig) GetRetryMaxAttempts(ctx context.Context) (int, bool, error) {
+	if c.RetryMaxAttempts == 0 {
+		return 0, false, nil
+	}
+	return c.RetryMaxAttempts, true, nil
+}
+
+// GetRetryMode returns the RetryMode of AWS_RETRY_MODE if it was specified,
+// and is a valid value.
+func (c EnvConfig) GetRetryMode(ctx context.Context) (aws.RetryMode, bool, error) {
+	if len(c.RetryMode) == 0 {
+		return "", false, nil
+	}
+	return c.RetryMode, true, nil
+}
+
+func setEC2IMDSClientEnableState(state *imds.ClientEnableState, keys []string) {
+	for _, k := range keys {
+		value := os.Getenv(k)
+		if len(value) == 0 {
+			continue
+		}
+		switch {
+		case strings.EqualFold(value, "true"):
+			*state = imds.ClientDisabled
+		case strings.EqualFold(value, "false"):
+			*state = imds.ClientEnabled
+		default:
+			continue
+		}
+		break
+	}
+}
+
+func setDefaultsModeFromEnvVal(mode *aws.DefaultsMode, keys []string) error {
+	for _, k := range keys {
+		if value := os.Getenv(k); len(value) > 0 {
+			if ok := mode.SetFromString(value); !ok {
+				return fmt.Errorf("invalid %s value: %s", k, value)
+			}
+			break
+		}
+	}
+	return nil
+}
+
+func setRetryModeFromEnvVal(mode *aws.RetryMode, keys []string) (err error) {
+	for _, k := range keys {
+		if value := os.Getenv(k); len(value) > 0 {
+			*mode, err = aws.ParseRetryMode(value)
+			if err != nil {
+				return fmt.Errorf("invalid %s value, %w", k, err)
+			}
+			break
+		}
+	}
+	return nil
+}
+
+func setEC2IMDSEndpointMode(mode *imds.EndpointModeState, keys []string) error {
+	for _, k := range keys {
+		value := os.Getenv(k)
+		if len(value) == 0 {
+			continue
+		}
+		if err := mode.SetFromString(value); err != nil {
+			return fmt.Errorf("invalid value for environment variable, %s=%s, %v", k, value, err)
+		}
+	}
+	return nil
+}
+
+func setAIDEndPointModeFromEnvVal(m *aws.AccountIDEndpointMode, keys []string) error {
+	for _, k := range keys {
+		value := os.Getenv(k)
+		if len(value) == 0 {
+			continue
+		}
+
+		switch value {
+		case "preferred":
+			*m = aws.AccountIDEndpointModePreferred
+		case "required":
+			*m = aws.AccountIDEndpointModeRequired
+		case "disabled":
+			*m = aws.AccountIDEndpointModeDisabled
+		default:
+			return fmt.Errorf("invalid value for environment variable, %s=%s, must be preferred/required/disabled", k, value)
+		}
+		break
+	}
+	return nil
+}
+
+// getRegion returns the AWS Region if set in the environment. Returns an
+// empty string if not set.
+func (c EnvConfig) getRegion(ctx context.Context) (string, bool, error) {
+	if len(c.Region) == 0 {
+		return "", false, nil
+	}
+	return c.Region, true, nil
+}
+
+// getSharedConfigProfile returns the shared config profile if set in the
+// environment. Returns an empty string if not set.
+func (c EnvConfig) getSharedConfigProfile(ctx context.Context) (string, bool, error) {
+	if len(c.SharedConfigProfile) == 0 {
+		return "", false, nil
+	}
+
+	return c.SharedConfigProfile, true, nil
+}
+
+// getSharedConfigFiles returns a slice of filenames set in the environment.
+//
+// Will return the filenames in the order of:
+// * Shared Config
+func (c EnvConfig) getSharedConfigFiles(context.Context) ([]string, bool, error) {
+	var files []string
+	if v := c.SharedConfigFile; len(v) > 0 {
+		files = append(files, v)
+	}
+
+	if len(files) == 0 {
+		return nil, false, nil
+	}
+	return files, true, nil
+}
+
+// getSharedCredentialsFiles returns a slice of filenames set in the environment.
+//
+// Will return the filenames in the order of:
+// * Shared Credentials
+func (c EnvConfig) getSharedCredentialsFiles(context.Context) ([]string, bool, error) {
+	var files []string
+	if v := c.SharedCredentialsFile; len(v) > 0 {
+		files = append(files, v)
+	}
+	if len(files) == 0 {
+		return nil, false, nil
+	}
+	return files, true, nil
+}
+
+// getCustomCABundle returns the custom CA bundle's PEM bytes if the file
+// path was set in the environment and the file could be read.
+func (c EnvConfig) getCustomCABundle(context.Context) (io.Reader, bool, error) {
+	if len(c.CustomCABundle) == 0 {
+		return nil, false, nil
+	}
+
+	b, err := ioutil.ReadFile(c.CustomCABundle)
+	if err != nil {
+		return nil, false, err
+	}
+	return bytes.NewReader(b), true, nil
+}
+
+// GetIgnoreConfiguredEndpoints is used to determine whether the configured
+// endpoints feature should be disabled.
+func (c EnvConfig) GetIgnoreConfiguredEndpoints(context.Context) (bool, bool, error) {
+	if c.IgnoreConfiguredEndpoints == nil {
+		return false, false, nil
+	}
+
+	return *c.IgnoreConfiguredEndpoints, true, nil
+}
+
+func (c EnvConfig) getBaseEndpoint(context.Context) (string, bool, error) {
+	return c.BaseEndpoint, len(c.BaseEndpoint) > 0, nil
+}
+
+// GetServiceBaseEndpoint is used to retrieve the base endpoint configured for
+// a service, identified by its normalized SDK ID.
+func (c EnvConfig) GetServiceBaseEndpoint(ctx context.Context, sdkID string) (string, bool, error) {
+	if endpt := os.Getenv(fmt.Sprintf("%s_%s", awsEndpointURL, normalizeEnv(sdkID))); endpt != "" {
+		return endpt, true, nil
+	}
+	return "", false, nil
+}
+
+func normalizeEnv(sdkID string) string {
+	upper := strings.ToUpper(sdkID)
+	return strings.ReplaceAll(upper, " ", "_")
+}
+
+// GetS3UseARNRegion returns whether to allow ARNs to direct the region
+// the S3 client's requests are sent to.
+func (c EnvConfig) GetS3UseARNRegion(ctx context.Context) (value, ok bool, err error) {
+	if c.S3UseARNRegion == nil {
+		return false, false, nil
+	}
+
+	return *c.S3UseARNRegion, true, nil
+}
+
+// GetS3DisableMultiRegionAccessPoints returns whether to disable multi-region access point
+// support for the S3 client.
+func (c EnvConfig) GetS3DisableMultiRegionAccessPoints(ctx context.Context) (value, ok bool, err error) {
+	if c.S3DisableMultiRegionAccessPoints == nil {
+		return false, false, nil
+	}
+
+	return *c.S3DisableMultiRegionAccessPoints, true, nil
+}
+
+// GetUseDualStackEndpoint returns whether the service's dual-stack endpoint should be
+// used for requests.
+func (c EnvConfig) GetUseDualStackEndpoint(ctx context.Context) (value aws.DualStackEndpointState, found bool, err error) {
+	if c.UseDualStackEndpoint == aws.DualStackEndpointStateUnset {
+		return aws.DualStackEndpointStateUnset, false, nil
+	}
+
+	return c.UseDualStackEndpoint, true, nil
+}
+
+// GetUseFIPSEndpoint returns whether the service's FIPS endpoint should be
+// used for requests.
+func (c EnvConfig) GetUseFIPSEndpoint(ctx context.Context) (value aws.FIPSEndpointState, found bool, err error) {
+	if c.UseFIPSEndpoint == aws.FIPSEndpointStateUnset {
+		return aws.FIPSEndpointStateUnset, false, nil
+	}
+
+	return c.UseFIPSEndpoint, true, nil
+}
+
+func setStringFromEnvVal(dst *string, keys []string) {
+	for _, k := range keys {
+		if v := os.Getenv(k); len(v) > 0 {
+			*dst = v
+			break
+		}
+	}
+}
+
+func setIntFromEnvVal(dst *int, keys []string) error {
+	for _, k := range keys {
+		if v := os.Getenv(k); len(v) > 0 {
+			i, err := strconv.ParseInt(v, 10, 64)
+			if err != nil {
+				return fmt.Errorf("invalid value %s=%s, %w", k, v, err)
+			}
+			*dst = int(i)
+			break
+		}
+	}
+
+	return nil
+}
+
+func setBoolPtrFromEnvVal(dst **bool, keys []string) error {
+	for _, k := range keys {
+		value := os.Getenv(k)
+		if len(value) == 0 {
+			continue
+		}
+
+		if *dst == nil {
+			*dst = new(bool)
+		}
+
+		switch {
+		case strings.EqualFold(value, "false"):
+			**dst = false
+		case strings.EqualFold(value, "true"):
+			**dst = true
+		default:
+			return fmt.Errorf(
+				"invalid value for environment variable, %s=%s, need true or false",
+				k, value)
+		}
+		break
+	}
+
+	return nil
+}
+
+func setInt64PtrFromEnvVal(dst **int64, keys []string, max int64) error {
+	for _, k := range keys {
+		value := os.Getenv(k)
+		if len(value) == 0 {
+			continue
+		}
+
+		v, err := strconv.ParseInt(value, 10, 64)
+		if err != nil {
+			return fmt.Errorf("invalid value for env var, %s=%s, need int64", k, value)
+		} else if v < 0 || v > max {
+			return fmt.Errorf("invalid range for env var %s=%d, must be within 0 and %d bytes inclusive", k, v, max)
+		}
+		if *dst == nil {
+			*dst = new(int64)
+		}
+
+		**dst = v
+		break
+	}
+
+	return nil
+}
+
+func setEndpointDiscoveryTypeFromEnvVal(dst *aws.EndpointDiscoveryEnableState, keys []string) error {
+	for _, k := range keys {
+		value := os.Getenv(k)
+		if len(value) == 0 {
+			continue // skip if empty
+		}
+
+		switch {
+		case strings.EqualFold(value, endpointDiscoveryDisabled):
+			*dst = aws.EndpointDiscoveryDisabled
+		case strings.EqualFold(value, endpointDiscoveryEnabled):
+			*dst = aws.EndpointDiscoveryEnabled
+		case strings.EqualFold(value, endpointDiscoveryAuto):
+			*dst = aws.EndpointDiscoveryAuto
+		default:
+			return fmt.Errorf(
+				"invalid value for environment variable, %s=%s, need true, false or auto",
+				k, value)
+		}
+	}
+	return nil
+}
+
+func setUseDualStackEndpointFromEnvVal(dst *aws.DualStackEndpointState, keys []string) error {
+	for _, k := range keys {
+		value := os.Getenv(k)
+		if len(value) == 0 {
+			continue // skip if empty
+		}
+
+		switch {
+		case strings.EqualFold(value, "true"):
+			*dst = aws.DualStackEndpointStateEnabled
+		case strings.EqualFold(value, "false"):
+			*dst = aws.DualStackEndpointStateDisabled
+		default:
+			return fmt.Errorf(
+				"invalid value for environment variable, %s=%s, need true, false",
+				k, value)
+		}
+	}
+	return nil
+}
+
+func setUseFIPSEndpointFromEnvVal(dst *aws.FIPSEndpointState, keys []string) error {
+	for _, k := range keys {
+		value := os.Getenv(k)
+		if len(value) == 0 {
+			continue // skip if empty
+		}
+
+		switch {
+		case strings.EqualFold(value, "true"):
+			*dst = aws.FIPSEndpointStateEnabled
+		case strings.EqualFold(value, "false"):
+			*dst = aws.FIPSEndpointStateDisabled
+		default:
+			return fmt.Errorf(
+				"invalid value for environment variable, %s=%s, need true, false",
+				k, value)
+		}
+	}
+	return nil
+}
+
+// GetEnableEndpointDiscovery returns the resolved value of the EnableEndpointDiscovery env variable setting.
+func (c EnvConfig) GetEnableEndpointDiscovery(ctx context.Context) (value aws.EndpointDiscoveryEnableState, found bool, err error) {
+	if c.EnableEndpointDiscovery == aws.EndpointDiscoveryUnset {
+		return aws.EndpointDiscoveryUnset, false, nil
+	}
+
+	return c.EnableEndpointDiscovery, true, nil
+}
+
+// GetEC2IMDSClientEnableState implements an EC2IMDSClientEnableState options resolver interface.
+func (c EnvConfig) GetEC2IMDSClientEnableState() (imds.ClientEnableState, bool, error) {
+	if c.EC2IMDSClientEnableState == imds.ClientDefaultEnableState {
+		return imds.ClientDefaultEnableState, false, nil
+	}
+
+	return c.EC2IMDSClientEnableState, true, nil
+}
+
+// GetEC2IMDSEndpointMode implements an EC2IMDSEndpointMode option resolver interface.
+func (c EnvConfig) GetEC2IMDSEndpointMode() (imds.EndpointModeState, bool, error) {
+	if c.EC2IMDSEndpointMode == imds.EndpointModeStateUnset {
+		return imds.EndpointModeStateUnset, false, nil
+	}
+
+	return c.EC2IMDSEndpointMode, true, nil
+}
+
+// GetEC2IMDSEndpoint implements an EC2IMDSEndpoint option resolver interface.
+func (c EnvConfig) GetEC2IMDSEndpoint() (string, bool, error) {
+	if len(c.EC2IMDSEndpoint) == 0 {
+		return "", false, nil
+	}
+
+	return c.EC2IMDSEndpoint, true, nil
+}
+
+// GetEC2IMDSV1FallbackDisabled implements an EC2IMDSV1FallbackDisabled option
+// resolver interface.
+func (c EnvConfig) GetEC2IMDSV1FallbackDisabled() (bool, bool) {
+	if c.EC2IMDSv1Disabled == nil {
+		return false, false
+	}
+
+	return *c.EC2IMDSv1Disabled, true
+}
+
+// GetS3DisableExpressAuth returns the configured value for
+// [EnvConfig.S3DisableExpressAuth].
+func (c EnvConfig) GetS3DisableExpressAuth() (value, ok bool) {
+	if c.S3DisableExpressAuth == nil {
+		return false, false
+	}
+
+	return *c.S3DisableExpressAuth, true
+}

vendor/github.com/aws/aws-sdk-go-v2/config/load_options.go 🔗

@@ -0,0 +1,1141 @@
+package config
+
+import (
+	"context"
+	"io"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/credentials/ec2rolecreds"
+	"github.com/aws/aws-sdk-go-v2/credentials/endpointcreds"
+	"github.com/aws/aws-sdk-go-v2/credentials/processcreds"
+	"github.com/aws/aws-sdk-go-v2/credentials/ssocreds"
+	"github.com/aws/aws-sdk-go-v2/credentials/stscreds"
+	"github.com/aws/aws-sdk-go-v2/feature/ec2/imds"
+	smithybearer "github.com/aws/smithy-go/auth/bearer"
+	"github.com/aws/smithy-go/logging"
+	"github.com/aws/smithy-go/middleware"
+)
+
+// LoadOptionsFunc is a type alias for LoadOptions functional option
+type LoadOptionsFunc func(*LoadOptions) error
+
+// LoadOptions is a discrete set of options that are valid for loading the
+// configuration.
+type LoadOptions struct {
+
+	// Region is the region to send requests to.
+	Region string
+
+	// Credentials object to use when signing requests.
+	Credentials aws.CredentialsProvider
+
+	// Token provider for authentication operations with bearer authentication.
+	BearerAuthTokenProvider smithybearer.TokenProvider
+
+	// HTTPClient the SDK's API clients will use to invoke HTTP requests.
+	HTTPClient HTTPClient
+
+	// EndpointResolver that can be used to provide or override an endpoint for
+	// the given service and region.
+	//
+	// See the `aws.EndpointResolver` documentation on usage.
+	//
+	// Deprecated: See EndpointResolverWithOptions
+	EndpointResolver aws.EndpointResolver
+
+	// EndpointResolverWithOptions that can be used to provide or override an
+	// endpoint for the given service and region.
+	//
+	// See the `aws.EndpointResolverWithOptions` documentation on usage.
+	EndpointResolverWithOptions aws.EndpointResolverWithOptions
+
+	// RetryMaxAttempts specifies the maximum number of attempts an API client
+	// will make when calling an operation that fails with a retryable error.
+	//
+	// This value will only be used if Retryer option is nil.
+	RetryMaxAttempts int
+
+	// RetryMode specifies the retry model the API client will be created with.
+	//
+	// This value will only be used if Retryer option is nil.
+	RetryMode aws.RetryMode
+
+	// Retryer is a function that provides a Retryer implementation. A Retryer
+	// guides how HTTP requests should be retried in case of recoverable
+	// failures.
+	//
+	// If not nil, RetryMaxAttempts, and RetryMode will be ignored.
+	Retryer func() aws.Retryer
+
+	// APIOptions provides the set of middleware mutations that modify how the API
+	// client requests will be handled. This is useful for adding additional
+	// tracing data to a request, or changing behavior of the SDK's client.
+	APIOptions []func(*middleware.Stack) error
+
+	// Logger writer interface to write logging messages to.
+	Logger logging.Logger
+
+	// ClientLogMode is used to configure the events that will be sent to the
+	// configured logger. This can be used to configure the logging of signing,
+	// retries, request, and responses of the SDK clients.
+	//
+	// See the ClientLogMode type documentation for the complete set of logging
+	// modes and available configuration.
+	ClientLogMode *aws.ClientLogMode
+
+	// SharedConfigProfile is the profile to be used when loading the SharedConfig
+	SharedConfigProfile string
+
+	// SharedConfigFiles is the slice of custom shared config files to use when
+	// loading the SharedConfig. A non-default profile used within a config
+	// file must have its name defined with the prefix 'profile '. e.g.
+	// [profile xyz] indicates a profile with name 'xyz'. To read more on the
+	// format of the config file, please refer to the documentation at
+	// https://docs.aws.amazon.com/credref/latest/refdocs/file-format.html#file-format-config
+	//
+	// If duplicate profiles are provided within the same shared config file,
+	// or across multiple shared config files, the next parsed profile will
+	// override only the
+	// properties that conflict with the previously defined profile. Note that
+	// if duplicate profiles are provided within the SharedCredentialsFiles and
+	// SharedConfigFiles, the properties defined in shared credentials file
+	// take precedence.
+	SharedConfigFiles []string
+
+	// SharedCredentialsFiles is the slice of custom shared credentials files
+	// to use when loading the SharedConfig. The profile name used within a
+	// credentials file must not be prefixed with 'profile '. e.g. [xyz]
+	// indicates a profile with name 'xyz'. A profile declared as
+	// [profile xyz] will be ignored. To read more on the format of the
+	// credentials file, please refer to the documentation at
+	// https://docs.aws.amazon.com/credref/latest/refdocs/file-format.html#file-format-creds
+	//
+	// If duplicate profiles are provided within the same shared credentials
+	// file, or across multiple shared credentials files, the next parsed
+	// profile will override only
+	// properties that conflict with the previously defined profile. Note that
+	// if duplicate profiles are provided within the SharedCredentialsFiles and
+	// SharedConfigFiles, the properties defined in shared credentials file
+	// take precedence.
+	SharedCredentialsFiles []string
+
+	// CustomCABundle is a reader for the custom CA bundle's PEM bytes.
+	CustomCABundle io.Reader
+
+	// DefaultRegion is the fallback region, used if a region was not resolved
+	// from other sources.
+	DefaultRegion string
+
+	// UseEC2IMDSRegion indicates if the SDK should retrieve the region
+	// from the EC2 Metadata service.
+	UseEC2IMDSRegion *UseEC2IMDSRegion
+
+	// CredentialsCacheOptions is a function for setting the
+	// aws.CredentialsCacheOptions
+	CredentialsCacheOptions func(*aws.CredentialsCacheOptions)
+
+	// BearerAuthTokenCacheOptions is a function for setting the smithy-go
+	// auth/bearer#TokenCacheOptions
+	BearerAuthTokenCacheOptions func(*smithybearer.TokenCacheOptions)
+
+	// SSOTokenProviderOptions is a function for setting the
+	// credentials/ssocreds.SSOTokenProviderOptions
+	SSOTokenProviderOptions func(*ssocreds.SSOTokenProviderOptions)
+
+	// ProcessCredentialOptions is a function for setting
+	// the processcreds.Options
+	ProcessCredentialOptions func(*processcreds.Options)
+
+	// EC2RoleCredentialOptions is a function for setting
+	// the ec2rolecreds.Options
+	EC2RoleCredentialOptions func(*ec2rolecreds.Options)
+
+	// EndpointCredentialOptions is a function for setting
+	// the endpointcreds.Options
+	EndpointCredentialOptions func(*endpointcreds.Options)
+
+	// WebIdentityRoleCredentialOptions is a function for setting
+	// the stscreds.WebIdentityRoleOptions
+	WebIdentityRoleCredentialOptions func(*stscreds.WebIdentityRoleOptions)
+
+	// AssumeRoleCredentialOptions is a function for setting the
+	// stscreds.AssumeRoleOptions
+	AssumeRoleCredentialOptions func(*stscreds.AssumeRoleOptions)
+
+	// SSOProviderOptions is a function for setting
+	// the ssocreds.Options
+	SSOProviderOptions func(options *ssocreds.Options)
+
+	// LogConfigurationWarnings when set to true, enables logging
+	// configuration warnings
+	LogConfigurationWarnings *bool
+
+	// S3UseARNRegion specifies if the S3 service should allow ARNs to direct
+	// the region the client's requests are sent to.
+	S3UseARNRegion *bool
+
+	// S3DisableMultiRegionAccessPoints specifies if the S3 service should disable
+	// the S3 Multi-Region access points feature.
+	S3DisableMultiRegionAccessPoints *bool
+
+	// EnableEndpointDiscovery specifies if endpoint discovery is enabled for
+	// the client.
+	EnableEndpointDiscovery aws.EndpointDiscoveryEnableState
+
+	// Specifies if the EC2 IMDS service client is enabled.
+	//
+	// AWS_EC2_METADATA_DISABLED=true
+	EC2IMDSClientEnableState imds.ClientEnableState
+
+	// Specifies the EC2 Instance Metadata Service default endpoint selection
+	// mode (IPv4 or IPv6)
+	EC2IMDSEndpointMode imds.EndpointModeState
+
+	// Specifies the EC2 Instance Metadata Service endpoint to use. If
+	// specified it overrides EC2IMDSEndpointMode.
+	EC2IMDSEndpoint string
+
+	// Specifies that SDK clients must resolve a dual-stack endpoint for
+	// services.
+	UseDualStackEndpoint aws.DualStackEndpointState
+
+	// Specifies that SDK clients must resolve a FIPS endpoint for
+	// services.
+	UseFIPSEndpoint aws.FIPSEndpointState
+
+	// Specifies the SDK configuration mode for defaults.
+	DefaultsModeOptions DefaultsModeOptions
+
+	// The SDK app ID, retrieved from an env var or shared config, to be
+	// added to the request user agent header.
+	AppID string
+
+	// Specifies whether an operation request could be compressed
+	DisableRequestCompression *bool
+
+	// The inclusive minimum size, in bytes, of a request body that could be
+	// compressed.
+	RequestMinCompressSizeBytes *int64
+
+	// Whether S3 Express auth is disabled.
+	S3DisableExpressAuth *bool
+
+	AccountIDEndpointMode aws.AccountIDEndpointMode
+}
+
+func (o LoadOptions) getDefaultsMode(ctx context.Context) (aws.DefaultsMode, bool, error) {
+	if len(o.DefaultsModeOptions.Mode) == 0 {
+		return "", false, nil
+	}
+	return o.DefaultsModeOptions.Mode, true, nil
+}
+
+// GetRetryMaxAttempts returns the RetryMaxAttempts if specified in the
+// LoadOptions and not 0.
+func (o LoadOptions) GetRetryMaxAttempts(ctx context.Context) (int, bool, error) {
+	if o.RetryMaxAttempts == 0 {
+		return 0, false, nil
+	}
+	return o.RetryMaxAttempts, true, nil
+}
+
+// GetRetryMode returns the RetryMode specified in the LoadOptions.
+func (o LoadOptions) GetRetryMode(ctx context.Context) (aws.RetryMode, bool, error) {
+	if len(o.RetryMode) == 0 {
+		return "", false, nil
+	}
+	return o.RetryMode, true, nil
+}
+
+func (o LoadOptions) getDefaultsModeIMDSClient(ctx context.Context) (*imds.Client, bool, error) {
+	if o.DefaultsModeOptions.IMDSClient == nil {
+		return nil, false, nil
+	}
+	return o.DefaultsModeOptions.IMDSClient, true, nil
+}
+
+// getRegion returns Region from config's LoadOptions
+func (o LoadOptions) getRegion(ctx context.Context) (string, bool, error) {
+	if len(o.Region) == 0 {
+		return "", false, nil
+	}
+
+	return o.Region, true, nil
+}
+
+// getAppID returns AppID from config's LoadOptions
+func (o LoadOptions) getAppID(ctx context.Context) (string, bool, error) {
+	return o.AppID, len(o.AppID) > 0, nil
+}
+
+// getDisableRequestCompression returns DisableRequestCompression from config's LoadOptions
+func (o LoadOptions) getDisableRequestCompression(ctx context.Context) (bool, bool, error) {
+	if o.DisableRequestCompression == nil {
+		return false, false, nil
+	}
+	return *o.DisableRequestCompression, true, nil
+}
+
+// getRequestMinCompressSizeBytes returns RequestMinCompressSizeBytes from config's LoadOptions
+func (o LoadOptions) getRequestMinCompressSizeBytes(ctx context.Context) (int64, bool, error) {
+	if o.RequestMinCompressSizeBytes == nil {
+		return 0, false, nil
+	}
+	return *o.RequestMinCompressSizeBytes, true, nil
+}
+
+func (o LoadOptions) getAccountIDEndpointMode(ctx context.Context) (aws.AccountIDEndpointMode, bool, error) {
+	return o.AccountIDEndpointMode, len(o.AccountIDEndpointMode) > 0, nil
+}
+
+// WithRegion is a helper function to construct functional options
+// that sets Region on config's LoadOptions. Setting the region to an empty
+// string will result in the region value being ignored.
+// If multiple WithRegion calls are made, the last call overrides
+// the previous call values.
+func WithRegion(v string) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.Region = v
+		return nil
+	}
+}
+
+// WithAppID is a helper function to construct functional options
+// that sets AppID on config's LoadOptions.
+func WithAppID(ID string) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.AppID = ID
+		return nil
+	}
+}
+
+// WithDisableRequestCompression is a helper function to construct functional options
+// that sets DisableRequestCompression on config's LoadOptions.
+func WithDisableRequestCompression(DisableRequestCompression *bool) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		if DisableRequestCompression == nil {
+			return nil
+		}
+		o.DisableRequestCompression = DisableRequestCompression
+		return nil
+	}
+}
+
+// WithRequestMinCompressSizeBytes is a helper function to construct functional options
+// that sets RequestMinCompressSizeBytes on config's LoadOptions.
+func WithRequestMinCompressSizeBytes(RequestMinCompressSizeBytes *int64) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		if RequestMinCompressSizeBytes == nil {
+			return nil
+		}
+		o.RequestMinCompressSizeBytes = RequestMinCompressSizeBytes
+		return nil
+	}
+}
+
+// WithAccountIDEndpointMode is a helper function to construct functional options
+// that sets AccountIDEndpointMode on config's LoadOptions
+func WithAccountIDEndpointMode(m aws.AccountIDEndpointMode) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		if m != "" {
+			o.AccountIDEndpointMode = m
+		}
+		return nil
+	}
+}
+
+// getDefaultRegion returns DefaultRegion from config's LoadOptions
+func (o LoadOptions) getDefaultRegion(ctx context.Context) (string, bool, error) {
+	if len(o.DefaultRegion) == 0 {
+		return "", false, nil
+	}
+
+	return o.DefaultRegion, true, nil
+}
+
+// WithDefaultRegion is a helper function to construct functional options
+// that sets a DefaultRegion on config's LoadOptions. Setting the default
+// region to an empty string will result in the default region value being
+// ignored. If multiple WithDefaultRegion calls are made, the last call
+// overrides the previous call values. Note that both WithRegion and
+// WithEC2IMDSRegion calls take precedence over WithDefaultRegion when
+// resolving the region.
+func WithDefaultRegion(v string) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.DefaultRegion = v
+		return nil
+	}
+}
+
+// getSharedConfigProfile returns SharedConfigProfile from config's LoadOptions
+func (o LoadOptions) getSharedConfigProfile(ctx context.Context) (string, bool, error) {
+	if len(o.SharedConfigProfile) == 0 {
+		return "", false, nil
+	}
+
+	return o.SharedConfigProfile, true, nil
+}
+
+// WithSharedConfigProfile is a helper function to construct functional options
+// that sets SharedConfigProfile on config's LoadOptions. Setting the shared
+// config profile to an empty string will result in the shared config profile
+// value being ignored.
+// If multiple WithSharedConfigProfile calls are made, the last call overrides
+// the previous call values.
+func WithSharedConfigProfile(v string) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.SharedConfigProfile = v
+		return nil
+	}
+}
+
+// getSharedConfigFiles returns SharedConfigFiles set on config's LoadOptions
+func (o LoadOptions) getSharedConfigFiles(ctx context.Context) ([]string, bool, error) {
+	if o.SharedConfigFiles == nil {
+		return nil, false, nil
+	}
+
+	return o.SharedConfigFiles, true, nil
+}
+
+// WithSharedConfigFiles is a helper function to construct functional options
+// that sets slice of SharedConfigFiles on config's LoadOptions.
+// Setting the shared config files to a nil string slice will result in the
+// shared config files value being ignored.
+// If multiple WithSharedConfigFiles calls are made, the last call overrides
+// the previous call values.
+func WithSharedConfigFiles(v []string) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.SharedConfigFiles = v
+		return nil
+	}
+}
+
+// getSharedCredentialsFiles returns SharedCredentialsFiles set on config's LoadOptions
+func (o LoadOptions) getSharedCredentialsFiles(ctx context.Context) ([]string, bool, error) {
+	if o.SharedCredentialsFiles == nil {
+		return nil, false, nil
+	}
+
+	return o.SharedCredentialsFiles, true, nil
+}
+
+// WithSharedCredentialsFiles is a helper function to construct functional options
+// that sets a slice of SharedCredentialsFiles on config's LoadOptions.
+// Setting the shared credentials files to a nil string slice will result in the
+// shared credentials files value being ignored.
+// If multiple WithSharedCredentialsFiles calls are made, the last call overrides
+// the previous call values.
+func WithSharedCredentialsFiles(v []string) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.SharedCredentialsFiles = v
+		return nil
+	}
+}
+
+// getCustomCABundle returns CustomCABundle from LoadOptions
+func (o LoadOptions) getCustomCABundle(ctx context.Context) (io.Reader, bool, error) {
+	if o.CustomCABundle == nil {
+		return nil, false, nil
+	}
+
+	return o.CustomCABundle, true, nil
+}
+
+// WithCustomCABundle is a helper function to construct functional options
+// that sets CustomCABundle on config's LoadOptions. Setting the custom CA Bundle
+// to nil will result in the custom CA Bundle value being ignored.
+// If multiple WithCustomCABundle calls are made, the last call overrides the
+// previous call values.
+func WithCustomCABundle(v io.Reader) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.CustomCABundle = v
+		return nil
+	}
+}
+
+// UseEC2IMDSRegion provides a regionProvider that retrieves the region
+// from the EC2 Metadata service.
+type UseEC2IMDSRegion struct {
+	// If unset will default to generic EC2 IMDS client.
+	Client *imds.Client
+}
+
+// getRegion attempts to retrieve the region from EC2 Metadata service.
+func (p *UseEC2IMDSRegion) getRegion(ctx context.Context) (string, bool, error) {
+	if ctx == nil {
+		ctx = context.Background()
+	}
+
+	client := p.Client
+	if client == nil {
+		client = imds.New(imds.Options{})
+	}
+
+	result, err := client.GetRegion(ctx, nil)
+	if err != nil {
+		return "", false, err
+	}
+	if len(result.Region) != 0 {
+		return result.Region, true, nil
+	}
+	return "", false, nil
+}
+
+// getEC2IMDSRegion returns the value of EC2 IMDS region.
+func (o LoadOptions) getEC2IMDSRegion(ctx context.Context) (string, bool, error) {
+	if o.UseEC2IMDSRegion == nil {
+		return "", false, nil
+	}
+
+	return o.UseEC2IMDSRegion.getRegion(ctx)
+}
+
+// WithEC2IMDSRegion is a helper function to construct functional options
+// that enables resolving EC2IMDS region. The function takes
+// in a UseEC2IMDSRegion functional option, and can be used to set the
+// EC2IMDS client which will be used to resolve EC2IMDSRegion.
+// If no functional option is provided, an EC2IMDS client is built and used
+// by the resolver. If multiple WithEC2IMDSRegion calls are made, the last
+// call overrides the previous call values. Note that a WithRegion call takes
+// precedence over WithEC2IMDSRegion when resolving region.
+func WithEC2IMDSRegion(fnOpts ...func(o *UseEC2IMDSRegion)) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.UseEC2IMDSRegion = &UseEC2IMDSRegion{}
+
+		for _, fn := range fnOpts {
+			fn(o.UseEC2IMDSRegion)
+		}
+		return nil
+	}
+}
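WithEC2IMDSRegion above uses a nested functional-option shape: the outer option installs a sub-struct, and variadic inner options then customize it. A sketch of that shape with hypothetical names and a fake client type standing in for the IMDS client:

```go
package main

import "fmt"

// fakeIMDSClient stands in for the real imds.Client; all names here are
// hypothetical and exist only to show the nested-option pattern.
type fakeIMDSClient struct{ endpoint string }

type useIMDSRegion struct{ client *fakeIMDSClient }

type opts struct{ imdsRegion *useIMDSRegion }

// withIMDSRegion installs the sub-struct, then lets each inner option
// customize it, mirroring WithEC2IMDSRegion's fnOpts parameter.
func withIMDSRegion(fnOpts ...func(*useIMDSRegion)) func(*opts) error {
	return func(o *opts) error {
		o.imdsRegion = &useIMDSRegion{}
		for _, fn := range fnOpts {
			fn(o.imdsRegion)
		}
		return nil
	}
}

func main() {
	var o opts
	_ = withIMDSRegion(func(u *useIMDSRegion) {
		u.client = &fakeIMDSClient{endpoint: "http://169.254.169.254"}
	})(&o)
	fmt.Println(o.imdsRegion.client.endpoint)
}
```

When no inner option is supplied, the sub-struct is still installed with a nil client, which is the signal (in the real code) to build a default IMDS client at resolve time.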
+
+// getCredentialsProvider returns the credentials provider
+func (o LoadOptions) getCredentialsProvider(ctx context.Context) (aws.CredentialsProvider, bool, error) {
+	if o.Credentials == nil {
+		return nil, false, nil
+	}
+
+	return o.Credentials, true, nil
+}
+
+// WithCredentialsProvider is a helper function to construct functional options
+// that sets Credential provider value on config's LoadOptions. If credentials
+// provider is set to nil, the credentials provider value will be ignored.
+// If multiple WithCredentialsProvider calls are made, the last call overrides
+// the previous call values.
+func WithCredentialsProvider(v aws.CredentialsProvider) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.Credentials = v
+		return nil
+	}
+}
+
+// getCredentialsCacheOptions returns the wrapped function to set aws.CredentialsCacheOptions
+func (o LoadOptions) getCredentialsCacheOptions(ctx context.Context) (func(*aws.CredentialsCacheOptions), bool, error) {
+	if o.CredentialsCacheOptions == nil {
+		return nil, false, nil
+	}
+
+	return o.CredentialsCacheOptions, true, nil
+}
+
+// WithCredentialsCacheOptions is a helper function to construct functional
+// options that sets a function to modify the aws.CredentialsCacheOptions the
+// aws.CredentialsCache will be configured with, if the CredentialsCache is used
+// by the configuration loader.
+//
+// If multiple WithCredentialsCacheOptions calls are made, the last call
+// overrides the previous call values.
+func WithCredentialsCacheOptions(v func(*aws.CredentialsCacheOptions)) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.CredentialsCacheOptions = v
+		return nil
+	}
+}
+
+// getBearerAuthTokenProvider returns the bearer auth token provider
+func (o LoadOptions) getBearerAuthTokenProvider(ctx context.Context) (smithybearer.TokenProvider, bool, error) {
+	if o.BearerAuthTokenProvider == nil {
+		return nil, false, nil
+	}
+
+	return o.BearerAuthTokenProvider, true, nil
+}
+
+// WithBearerAuthTokenProvider is a helper function to construct functional options
+// that sets the BearerAuthTokenProvider on config's LoadOptions. If the token
+// provider is set to nil, the bearer auth token provider value will be ignored.
+// If multiple WithBearerAuthTokenProvider calls are made, the last call overrides
+// the previous call values.
+func WithBearerAuthTokenProvider(v smithybearer.TokenProvider) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.BearerAuthTokenProvider = v
+		return nil
+	}
+}
+
+// getBearerAuthTokenCacheOptions returns the wrapped function to set smithybearer.TokenCacheOptions
+func (o LoadOptions) getBearerAuthTokenCacheOptions(ctx context.Context) (func(*smithybearer.TokenCacheOptions), bool, error) {
+	if o.BearerAuthTokenCacheOptions == nil {
+		return nil, false, nil
+	}
+
+	return o.BearerAuthTokenCacheOptions, true, nil
+}
+
+// WithBearerAuthTokenCacheOptions is a helper function to construct functional options
+// that sets a function to modify the TokenCacheOptions the smithy-go
+// auth/bearer#TokenCache will be configured with, if the TokenCache is used by
+// the configuration loader.
+//
+// If multiple WithBearerAuthTokenCacheOptions calls are made, the last call overrides
+// the previous call values.
+func WithBearerAuthTokenCacheOptions(v func(*smithybearer.TokenCacheOptions)) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.BearerAuthTokenCacheOptions = v
+		return nil
+	}
+}
+
+// getSSOTokenProviderOptions returns the wrapped function to set ssocreds.SSOTokenProviderOptions
+func (o LoadOptions) getSSOTokenProviderOptions(ctx context.Context) (func(*ssocreds.SSOTokenProviderOptions), bool, error) {
+	if o.SSOTokenProviderOptions == nil {
+		return nil, false, nil
+	}
+
+	return o.SSOTokenProviderOptions, true, nil
+}
+
+// WithSSOTokenProviderOptions is a helper function to construct functional
+// options that sets a function to modify the SSOTokenProviderOptions the SDK's
+// credentials/ssocreds#SSOProvider will be configured with, if the
+// SSOTokenProvider is used by the configuration loader.
+//
+// If multiple WithSSOTokenProviderOptions calls are made, the last call overrides
+// the previous call values.
+func WithSSOTokenProviderOptions(v func(*ssocreds.SSOTokenProviderOptions)) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.SSOTokenProviderOptions = v
+		return nil
+	}
+}
+
+// getProcessCredentialOptions returns the wrapped function to set processcreds.Options
+func (o LoadOptions) getProcessCredentialOptions(ctx context.Context) (func(*processcreds.Options), bool, error) {
+	if o.ProcessCredentialOptions == nil {
+		return nil, false, nil
+	}
+
+	return o.ProcessCredentialOptions, true, nil
+}
+
+// WithProcessCredentialOptions is a helper function to construct functional options
+// that sets a function to use processcreds.Options on config's LoadOptions.
+// If process credential options is set to nil, the process credential value will
+// be ignored. If multiple WithProcessCredentialOptions calls are made, the last call
+// overrides the previous call values.
+func WithProcessCredentialOptions(v func(*processcreds.Options)) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.ProcessCredentialOptions = v
+		return nil
+	}
+}
+
+// getEC2RoleCredentialOptions returns the wrapped function to set the ec2rolecreds.Options
+func (o LoadOptions) getEC2RoleCredentialOptions(ctx context.Context) (func(*ec2rolecreds.Options), bool, error) {
+	if o.EC2RoleCredentialOptions == nil {
+		return nil, false, nil
+	}
+
+	return o.EC2RoleCredentialOptions, true, nil
+}
+
+// WithEC2RoleCredentialOptions is a helper function to construct functional options
+// that sets a function to use ec2rolecreds.Options on config's LoadOptions. If
+// EC2 role credential options is set to nil, the EC2 role credential options value
+// will be ignored. If multiple WithEC2RoleCredentialOptions calls are made,
+// the last call overrides the previous call values.
+func WithEC2RoleCredentialOptions(v func(*ec2rolecreds.Options)) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.EC2RoleCredentialOptions = v
+		return nil
+	}
+}
+
+// getEndpointCredentialOptions returns the wrapped function to set endpointcreds.Options
+func (o LoadOptions) getEndpointCredentialOptions(context.Context) (func(*endpointcreds.Options), bool, error) {
+	if o.EndpointCredentialOptions == nil {
+		return nil, false, nil
+	}
+
+	return o.EndpointCredentialOptions, true, nil
+}
+
+// WithEndpointCredentialOptions is a helper function to construct functional options
+// that sets a function to use endpointcreds.Options on config's LoadOptions. If
+// endpoint credential options is set to nil, the endpoint credential options
+// value will be ignored. If multiple WithEndpointCredentialOptions calls are made,
+// the last call overrides the previous call values.
+func WithEndpointCredentialOptions(v func(*endpointcreds.Options)) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.EndpointCredentialOptions = v
+		return nil
+	}
+}
+
+// getWebIdentityRoleCredentialOptions returns the wrapped function
+func (o LoadOptions) getWebIdentityRoleCredentialOptions(context.Context) (func(*stscreds.WebIdentityRoleOptions), bool, error) {
+	if o.WebIdentityRoleCredentialOptions == nil {
+		return nil, false, nil
+	}
+
+	return o.WebIdentityRoleCredentialOptions, true, nil
+}
+
+// WithWebIdentityRoleCredentialOptions is a helper function to construct
+// functional options that sets a function to use stscreds.WebIdentityRoleOptions
+// on config's LoadOptions. If web identity role credentials options is set to nil,
+// the web identity role credentials value will be ignored. If multiple
+// WithWebIdentityRoleCredentialOptions calls are made, the last call
+// overrides the previous call values.
+func WithWebIdentityRoleCredentialOptions(v func(*stscreds.WebIdentityRoleOptions)) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.WebIdentityRoleCredentialOptions = v
+		return nil
+	}
+}
+
+// getAssumeRoleCredentialOptions returns AssumeRoleCredentialOptions from LoadOptions
+func (o LoadOptions) getAssumeRoleCredentialOptions(context.Context) (func(options *stscreds.AssumeRoleOptions), bool, error) {
+	if o.AssumeRoleCredentialOptions == nil {
+		return nil, false, nil
+	}
+
+	return o.AssumeRoleCredentialOptions, true, nil
+}
+
+// WithAssumeRoleCredentialOptions is a helper function to construct
+// functional options that sets a function to use stscreds.AssumeRoleOptions
+// on config's LoadOptions. If assume role credentials options is set to nil,
+// the assume role credentials value will be ignored. If multiple
+// WithAssumeRoleCredentialOptions calls are made, the last call overrides
+// the previous call values.
+func WithAssumeRoleCredentialOptions(v func(*stscreds.AssumeRoleOptions)) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.AssumeRoleCredentialOptions = v
+		return nil
+	}
+}
+
+func (o LoadOptions) getHTTPClient(ctx context.Context) (HTTPClient, bool, error) {
+	if o.HTTPClient == nil {
+		return nil, false, nil
+	}
+
+	return o.HTTPClient, true, nil
+}
+
+// WithHTTPClient is a helper function to construct functional options
+// that sets HTTPClient on LoadOptions. If HTTPClient is set to nil,
+// the HTTPClient value will be ignored.
+// If multiple WithHTTPClient calls are made, the last call overrides
+// the previous call values.
+func WithHTTPClient(v HTTPClient) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.HTTPClient = v
+		return nil
+	}
+}
+
+func (o LoadOptions) getAPIOptions(ctx context.Context) ([]func(*middleware.Stack) error, bool, error) {
+	if o.APIOptions == nil {
+		return nil, false, nil
+	}
+
+	return o.APIOptions, true, nil
+}
+
+// WithAPIOptions is a helper function to construct functional options
+// that sets APIOptions on LoadOptions. If APIOptions is set to nil, the
+// APIOptions value is ignored. If multiple WithAPIOptions calls are
+// made, the last call overrides the previous call values.
+func WithAPIOptions(v []func(*middleware.Stack) error) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		if v == nil {
+			return nil
+		}
+
+		o.APIOptions = append(o.APIOptions, v...)
+		return nil
+	}
+}
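WithAPIOptions is worth calling out: unlike most `With*` helpers above, which replace the previous value, it appends, so repeated calls accumulate middleware mutators. A hypothetical stand-in (plain `func() string` values in place of real `func(*middleware.Stack) error` mutators) showing the accumulating behavior:

```go
package main

import "fmt"

// opts is a hypothetical stand-in for LoadOptions; apiOptions plays the role
// of the APIOptions slice of middleware mutators.
type opts struct{ apiOptions []func() string }

// withAPIOptions mirrors WithAPIOptions: a nil slice is a no-op, and a
// non-nil slice is appended to (not substituted for) the existing options.
func withAPIOptions(v []func() string) func(*opts) error {
	return func(o *opts) error {
		if v == nil {
			return nil
		}
		o.apiOptions = append(o.apiOptions, v...)
		return nil
	}
}

func main() {
	var o opts
	_ = withAPIOptions([]func() string{func() string { return "retry-metrics" }})(&o)
	_ = withAPIOptions([]func() string{func() string { return "user-agent" }})(&o)
	fmt.Println(len(o.apiOptions)) // both mutators retained
}
```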
+
+func (o LoadOptions) getRetryMaxAttempts(ctx context.Context) (int, bool, error) {
+	if o.RetryMaxAttempts == 0 {
+		return 0, false, nil
+	}
+
+	return o.RetryMaxAttempts, true, nil
+}
+
+// WithRetryMaxAttempts is a helper function to construct functional options that sets
+// RetryMaxAttempts on LoadOptions. If RetryMaxAttempts is unset, the RetryMaxAttempts value is
+// ignored. If multiple WithRetryMaxAttempts calls are made, the last call overrides
+// the previous call values.
+//
+// Will be ignored if LoadOptions.Retryer or WithRetryer are used.
+func WithRetryMaxAttempts(v int) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.RetryMaxAttempts = v
+		return nil
+	}
+}
+
+func (o LoadOptions) getRetryMode(ctx context.Context) (aws.RetryMode, bool, error) {
+	if o.RetryMode == "" {
+		return "", false, nil
+	}
+
+	return o.RetryMode, true, nil
+}
+
+// WithRetryMode is a helper function to construct functional options that sets
+// RetryMode on LoadOptions. If RetryMode is unset, the RetryMode value is
+// ignored. If multiple WithRetryMode calls are made, the last call overrides
+// the previous call values.
+//
+// Will be ignored if LoadOptions.Retryer or WithRetryer are used.
+func WithRetryMode(v aws.RetryMode) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.RetryMode = v
+		return nil
+	}
+}
+
+func (o LoadOptions) getRetryer(ctx context.Context) (func() aws.Retryer, bool, error) {
+	if o.Retryer == nil {
+		return nil, false, nil
+	}
+
+	return o.Retryer, true, nil
+}
+
+// WithRetryer is a helper function to construct functional options
+// that sets Retryer on LoadOptions. If Retryer is set to nil, the
+// Retryer value is ignored. If multiple WithRetryer calls are
+// made, the last call overrides the previous call values.
+func WithRetryer(v func() aws.Retryer) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.Retryer = v
+		return nil
+	}
+}
+
+func (o LoadOptions) getEndpointResolver(ctx context.Context) (aws.EndpointResolver, bool, error) {
+	if o.EndpointResolver == nil {
+		return nil, false, nil
+	}
+
+	return o.EndpointResolver, true, nil
+}
+
+// WithEndpointResolver is a helper function to construct functional options
+// that sets the EndpointResolver on LoadOptions. If the EndpointResolver is set to nil,
+// the EndpointResolver value is ignored. If multiple WithEndpointResolver calls
+// are made, the last call overrides the previous call values.
+//
+// Deprecated: The global endpoint resolution interface is deprecated. The API
+// for endpoint resolution is now unique to each service and is set via the
+// EndpointResolverV2 field on service client options. Use of
+// WithEndpointResolver or WithEndpointResolverWithOptions will prevent you
+// from using any endpoint-related service features released after the
+// introduction of EndpointResolverV2. You may also encounter broken or
+// unexpected behavior when using the old global interface with services that
+// use many endpoint-related customizations such as S3.
+func WithEndpointResolver(v aws.EndpointResolver) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.EndpointResolver = v
+		return nil
+	}
+}
+
+func (o LoadOptions) getEndpointResolverWithOptions(ctx context.Context) (aws.EndpointResolverWithOptions, bool, error) {
+	if o.EndpointResolverWithOptions == nil {
+		return nil, false, nil
+	}
+
+	return o.EndpointResolverWithOptions, true, nil
+}
+
+// WithEndpointResolverWithOptions is a helper function to construct functional options
+// that sets the EndpointResolverWithOptions on LoadOptions. If the EndpointResolverWithOptions is set to nil,
+// the EndpointResolverWithOptions value is ignored. If multiple WithEndpointResolverWithOptions calls
+// are made, the last call overrides the previous call values.
+//
+// Deprecated: The global endpoint resolution interface is deprecated. See
+// deprecation docs on [WithEndpointResolver].
+func WithEndpointResolverWithOptions(v aws.EndpointResolverWithOptions) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.EndpointResolverWithOptions = v
+		return nil
+	}
+}
+
+func (o LoadOptions) getLogger(ctx context.Context) (logging.Logger, bool, error) {
+	if o.Logger == nil {
+		return nil, false, nil
+	}
+
+	return o.Logger, true, nil
+}
+
+// WithLogger is a helper function to construct functional options
+// that sets Logger on LoadOptions. If Logger is set to nil, the
+// Logger value will be ignored. If multiple WithLogger calls are made,
+// the last call overrides the previous call values.
+func WithLogger(v logging.Logger) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.Logger = v
+		return nil
+	}
+}
+
+func (o LoadOptions) getClientLogMode(ctx context.Context) (aws.ClientLogMode, bool, error) {
+	if o.ClientLogMode == nil {
+		return 0, false, nil
+	}
+
+	return *o.ClientLogMode, true, nil
+}
+
+// WithClientLogMode is a helper function to construct functional options
+// that sets client log mode on LoadOptions. If client log mode is set to nil,
+// the client log mode value will be ignored. If multiple WithClientLogMode calls are made,
+// the last call overrides the previous call values.
+func WithClientLogMode(v aws.ClientLogMode) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.ClientLogMode = &v
+		return nil
+	}
+}
+
+func (o LoadOptions) getLogConfigurationWarnings(ctx context.Context) (v bool, found bool, err error) {
+	if o.LogConfigurationWarnings == nil {
+		return false, false, nil
+	}
+	return *o.LogConfigurationWarnings, true, nil
+}
+
+// WithLogConfigurationWarnings is a helper function to construct
+// functional options that can be used to set LogConfigurationWarnings
+// on LoadOptions.
+//
+// If multiple WithLogConfigurationWarnings calls are made, the last call
+// overrides the previous call values.
+func WithLogConfigurationWarnings(v bool) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.LogConfigurationWarnings = &v
+		return nil
+	}
+}
+
+// GetS3UseARNRegion returns whether to allow ARNs to direct the region
+// the S3 client's requests are sent to.
+func (o LoadOptions) GetS3UseARNRegion(ctx context.Context) (v bool, found bool, err error) {
+	if o.S3UseARNRegion == nil {
+		return false, false, nil
+	}
+	return *o.S3UseARNRegion, true, nil
+}
+
+// WithS3UseARNRegion is a helper function to construct functional options
+// that can be used to set S3UseARNRegion on LoadOptions.
+// If multiple WithS3UseARNRegion calls are made, the last call overrides
+// the previous call values.
+func WithS3UseARNRegion(v bool) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.S3UseARNRegion = &v
+		return nil
+	}
+}
+
+// GetS3DisableMultiRegionAccessPoints returns whether to disable
+// the S3 multi-region access points feature.
+func (o LoadOptions) GetS3DisableMultiRegionAccessPoints(ctx context.Context) (v bool, found bool, err error) {
+	if o.S3DisableMultiRegionAccessPoints == nil {
+		return false, false, nil
+	}
+	return *o.S3DisableMultiRegionAccessPoints, true, nil
+}
+
+// WithS3DisableMultiRegionAccessPoints is a helper function to construct functional options
+// that can be used to set S3DisableMultiRegionAccessPoints on LoadOptions.
+// If multiple WithS3DisableMultiRegionAccessPoints calls are made, the last call overrides
+// the previous call values.
+func WithS3DisableMultiRegionAccessPoints(v bool) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.S3DisableMultiRegionAccessPoints = &v
+		return nil
+	}
+}
+
+// GetEnableEndpointDiscovery returns if the EnableEndpointDiscovery flag is set.
+func (o LoadOptions) GetEnableEndpointDiscovery(ctx context.Context) (value aws.EndpointDiscoveryEnableState, ok bool, err error) {
+	if o.EnableEndpointDiscovery == aws.EndpointDiscoveryUnset {
+		return aws.EndpointDiscoveryUnset, false, nil
+	}
+	return o.EnableEndpointDiscovery, true, nil
+}
+
+// WithEndpointDiscovery is a helper function to construct functional options
+// that can be used to enable endpoint discovery on LoadOptions for supported clients.
+// If multiple WithEndpointDiscovery calls are made, the last call overrides
+// the previous call values.
+func WithEndpointDiscovery(v aws.EndpointDiscoveryEnableState) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.EnableEndpointDiscovery = v
+		return nil
+	}
+}
+
+// getSSOProviderOptions returns SSOProviderOptions from LoadOptions
+func (o LoadOptions) getSSOProviderOptions(context.Context) (func(options *ssocreds.Options), bool, error) {
+	if o.SSOProviderOptions == nil {
+		return nil, false, nil
+	}
+
+	return o.SSOProviderOptions, true, nil
+}
+
+// WithSSOProviderOptions is a helper function to construct
+// functional options that sets a function to use ssocreds.Options
+// on config's LoadOptions. If the SSO credential provider options is set to nil,
+// the sso provider options value will be ignored. If multiple
+// WithSSOProviderOptions calls are made, the last call overrides
+// the previous call values.
+func WithSSOProviderOptions(v func(*ssocreds.Options)) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.SSOProviderOptions = v
+		return nil
+	}
+}
+
+// GetEC2IMDSClientEnableState implements an EC2IMDSClientEnableState option resolver interface.
+func (o LoadOptions) GetEC2IMDSClientEnableState() (imds.ClientEnableState, bool, error) {
+	if o.EC2IMDSClientEnableState == imds.ClientDefaultEnableState {
+		return imds.ClientDefaultEnableState, false, nil
+	}
+
+	return o.EC2IMDSClientEnableState, true, nil
+}
+
+// GetEC2IMDSEndpointMode implements an EC2IMDSEndpointMode option resolver interface.
+func (o LoadOptions) GetEC2IMDSEndpointMode() (imds.EndpointModeState, bool, error) {
+	if o.EC2IMDSEndpointMode == imds.EndpointModeStateUnset {
+		return imds.EndpointModeStateUnset, false, nil
+	}
+
+	return o.EC2IMDSEndpointMode, true, nil
+}
+
+// GetEC2IMDSEndpoint implements an EC2IMDSEndpoint option resolver interface.
+func (o LoadOptions) GetEC2IMDSEndpoint() (string, bool, error) {
+	if len(o.EC2IMDSEndpoint) == 0 {
+		return "", false, nil
+	}
+
+	return o.EC2IMDSEndpoint, true, nil
+}
+
+// WithEC2IMDSClientEnableState is a helper function to construct functional options that sets the EC2IMDSClientEnableState.
+func WithEC2IMDSClientEnableState(v imds.ClientEnableState) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.EC2IMDSClientEnableState = v
+		return nil
+	}
+}
+
+// WithEC2IMDSEndpointMode is a helper function to construct functional options that sets the EC2IMDSEndpointMode.
+func WithEC2IMDSEndpointMode(v imds.EndpointModeState) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.EC2IMDSEndpointMode = v
+		return nil
+	}
+}
+
+// WithEC2IMDSEndpoint is a helper function to construct functional options that sets the EC2IMDSEndpoint.
+func WithEC2IMDSEndpoint(v string) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.EC2IMDSEndpoint = v
+		return nil
+	}
+}
+
+// WithUseDualStackEndpoint is a helper function to construct
+// functional options that can be used to set UseDualStackEndpoint on LoadOptions.
+func WithUseDualStackEndpoint(v aws.DualStackEndpointState) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.UseDualStackEndpoint = v
+		return nil
+	}
+}
+
+// GetUseDualStackEndpoint returns whether the service's dual-stack endpoint should be
+// used for requests.
+func (o LoadOptions) GetUseDualStackEndpoint(ctx context.Context) (value aws.DualStackEndpointState, found bool, err error) {
+	if o.UseDualStackEndpoint == aws.DualStackEndpointStateUnset {
+		return aws.DualStackEndpointStateUnset, false, nil
+	}
+	return o.UseDualStackEndpoint, true, nil
+}
+
+// WithUseFIPSEndpoint is a helper function to construct
+// functional options that can be used to set UseFIPSEndpoint on LoadOptions.
+func WithUseFIPSEndpoint(v aws.FIPSEndpointState) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.UseFIPSEndpoint = v
+		return nil
+	}
+}
+
+// GetUseFIPSEndpoint returns whether the service's FIPS endpoint should be
+// used for requests.
+func (o LoadOptions) GetUseFIPSEndpoint(ctx context.Context) (value aws.FIPSEndpointState, found bool, err error) {
+	if o.UseFIPSEndpoint == aws.FIPSEndpointStateUnset {
+		return aws.FIPSEndpointStateUnset, false, nil
+	}
+	return o.UseFIPSEndpoint, true, nil
+}
+
+// WithDefaultsMode sets the SDK defaults configuration mode to the value provided.
+//
+// Zero or more functional options can be provided to provide configuration options for performing
+// environment discovery when using aws.DefaultsModeAuto.
+func WithDefaultsMode(mode aws.DefaultsMode, optFns ...func(options *DefaultsModeOptions)) LoadOptionsFunc {
+	do := DefaultsModeOptions{
+		Mode: mode,
+	}
+	for _, fn := range optFns {
+		fn(&do)
+	}
+	return func(options *LoadOptions) error {
+		options.DefaultsModeOptions = do
+		return nil
+	}
+}
+
+// GetS3DisableExpressAuth returns the configured value for
+// [LoadOptions.S3DisableExpressAuth].
+func (o LoadOptions) GetS3DisableExpressAuth() (value, ok bool) {
+	if o.S3DisableExpressAuth == nil {
+		return false, false
+	}
+
+	return *o.S3DisableExpressAuth, true
+}
+
+// WithS3DisableExpressAuth sets [LoadOptions.S3DisableExpressAuth]
+// to the value provided.
+func WithS3DisableExpressAuth(v bool) LoadOptionsFunc {
+	return func(o *LoadOptions) error {
+		o.S3DisableExpressAuth = &v
+		return nil
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/config/local.go 🔗

@@ -0,0 +1,51 @@
+package config
+
+import (
+	"fmt"
+	"net"
+	"net/url"
+)
+
+var lookupHostFn = net.LookupHost
+
+func isLoopbackHost(host string) (bool, error) {
+	ip := net.ParseIP(host)
+	if ip != nil {
+		return ip.IsLoopback(), nil
+	}
+
+	// Host is not an ip, perform lookup
+	addrs, err := lookupHostFn(host)
+	if err != nil {
+		return false, err
+	}
+	if len(addrs) == 0 {
+		return false, fmt.Errorf("no addrs found for host, %s", host)
+	}
+
+	for _, addr := range addrs {
+		if !net.ParseIP(addr).IsLoopback() {
+			return false, nil
+		}
+	}
+
+	return true, nil
+}
+
+func validateLocalURL(v string) error {
+	u, err := url.Parse(v)
+	if err != nil {
+		return err
+	}
+
+	host := u.Hostname()
+	if len(host) == 0 {
+		return fmt.Errorf("unable to parse host from local HTTP cred provider URL")
+	} else if isLoopback, err := isLoopbackHost(host); err != nil {
+		return fmt.Errorf("failed to resolve host %q, %v", host, err)
+	} else if !isLoopback {
+		return fmt.Errorf("invalid endpoint host, %q, only hosts resolving to loopback addresses are allowed", host)
+	}
+
+	return nil
+}
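The `validateLocalURL` helper above accepts a credentials-endpoint URL only when its host resolves exclusively to loopback addresses. A network-free sketch of the same check, restricted to literal-IP hosts (the real code additionally resolves hostnames via `net.LookupHost`, which is omitted here to keep the example self-contained):

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// isLoopbackURL reports whether the URL's host is a literal IP on the
// loopback range. Hostname lookup is intentionally left out.
func isLoopbackURL(raw string) (bool, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return false, err
	}
	host := u.Hostname()
	if host == "" {
		return false, fmt.Errorf("no host in URL %q", raw)
	}
	ip := net.ParseIP(host)
	if ip == nil {
		return false, fmt.Errorf("host %q is not a literal IP", host)
	}
	return ip.IsLoopback(), nil
}

func main() {
	for _, u := range []string{"http://127.0.0.1:8080/creds", "http://169.254.170.2/creds"} {
		ok, err := isLoopbackURL(u)
		fmt.Println(u, ok, err)
	}
}
```

The link-local address in the second case (the kind of address used by container credential endpoints) is correctly rejected: link-local is not loopback.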

vendor/github.com/aws/aws-sdk-go-v2/config/provider.go 🔗

@@ -0,0 +1,721 @@
+package config
+
+import (
+	"context"
+	"io"
+	"net/http"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/credentials/ec2rolecreds"
+	"github.com/aws/aws-sdk-go-v2/credentials/endpointcreds"
+	"github.com/aws/aws-sdk-go-v2/credentials/processcreds"
+	"github.com/aws/aws-sdk-go-v2/credentials/ssocreds"
+	"github.com/aws/aws-sdk-go-v2/credentials/stscreds"
+	"github.com/aws/aws-sdk-go-v2/feature/ec2/imds"
+	smithybearer "github.com/aws/smithy-go/auth/bearer"
+	"github.com/aws/smithy-go/logging"
+	"github.com/aws/smithy-go/middleware"
+)
+
+// sharedConfigProfileProvider provides access to the shared config profile
+// name external configuration value.
+type sharedConfigProfileProvider interface {
+	getSharedConfigProfile(ctx context.Context) (string, bool, error)
+}
+
+// getSharedConfigProfile searches the configs for a sharedConfigProfileProvider
+// and returns the value if found. Returns an error if a provider fails before a
+// value is found.
+func getSharedConfigProfile(ctx context.Context, configs configs) (value string, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(sharedConfigProfileProvider); ok {
+			value, found, err = p.getSharedConfigProfile(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
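The resolver loop above is the template repeated throughout this file: each config source is held as an opaque value, and the resolver type-asserts for the one narrow getter it cares about, stopping at the first source that reports found (or errors). A hypothetical mini version of that probing pattern:

```go
package main

import "fmt"

// profileProvider, envConfig, and loadOpts are illustrative names, not the
// SDK's real types.
type profileProvider interface {
	getProfile() (string, bool)
}

type envConfig struct{} // implements nothing; the loop skips it

type loadOpts struct{ p string } // may carry an explicit profile

func (l loadOpts) getProfile() (string, bool) { return l.p, l.p != "" }

// resolveProfile probes each config for the narrow interface and returns
// the first found value, mirroring getSharedConfigProfile above.
func resolveProfile(configs []interface{}) (string, bool) {
	for _, cfg := range configs {
		if p, ok := cfg.(profileProvider); ok {
			if v, found := p.getProfile(); found {
				return v, true
			}
		}
	}
	return "", false
}

func main() {
	v, found := resolveProfile([]interface{}{envConfig{}, loadOpts{p: "dev"}})
	fmt.Println(v, found)
}
```

Because the probe is an interface assertion, sources that simply don't define the getter (like `envConfig` here) are skipped without any registration step, which is why the configs slice can hold heterogeneous types.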
+
+// sharedConfigFilesProvider provides access to the shared config filenames
+// external configuration value.
+type sharedConfigFilesProvider interface {
+	getSharedConfigFiles(ctx context.Context) ([]string, bool, error)
+}
+
+// getSharedConfigFiles searches the configs for a sharedConfigFilesProvider
+// and returns the value if found. Returns an error if a provider fails before a
+// value is found.
+func getSharedConfigFiles(ctx context.Context, configs configs) (value []string, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(sharedConfigFilesProvider); ok {
+			value, found, err = p.getSharedConfigFiles(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+
+	return
+}
+
+// sharedCredentialsFilesProvider provides access to the shared credentials filenames
+// external configuration value.
+type sharedCredentialsFilesProvider interface {
+	getSharedCredentialsFiles(ctx context.Context) ([]string, bool, error)
+}
+
+// getSharedCredentialsFiles searches the configs for a sharedCredentialsFilesProvider
+// and returns the value if found. Returns an error if a provider fails before a
+// value is found.
+func getSharedCredentialsFiles(ctx context.Context, configs configs) (value []string, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(sharedCredentialsFilesProvider); ok {
+			value, found, err = p.getSharedCredentialsFiles(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+
+	return
+}
+
+// customCABundleProvider provides access to the custom CA bundle PEM bytes.
+type customCABundleProvider interface {
+	getCustomCABundle(ctx context.Context) (io.Reader, bool, error)
+}
+
+// getCustomCABundle searches the configs for a customCABundleProvider
+// and returns the value if found. Returns an error if a provider fails before a
+// value is found.
+func getCustomCABundle(ctx context.Context, configs configs) (value io.Reader, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(customCABundleProvider); ok {
+			value, found, err = p.getCustomCABundle(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+
+	return
+}
+
+// regionProvider provides access to the region external configuration value.
+type regionProvider interface {
+	getRegion(ctx context.Context) (string, bool, error)
+}
+
+// getRegion searches the configs for a regionProvider and returns the value
+// if found. Returns an error if a provider fails before a value is found.
+func getRegion(ctx context.Context, configs configs) (value string, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(regionProvider); ok {
+			value, found, err = p.getRegion(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// IgnoreConfiguredEndpointsProvider is used to find providers that expose a
+// flag to disable configured endpoints.
+type IgnoreConfiguredEndpointsProvider interface {
+	GetIgnoreConfiguredEndpoints(ctx context.Context) (bool, bool, error)
+}
+
+// GetIgnoreConfiguredEndpoints is used to determine whether configured
+// endpoints should be ignored.
+func GetIgnoreConfiguredEndpoints(ctx context.Context, configs []interface{}) (value bool, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(IgnoreConfiguredEndpointsProvider); ok {
+			value, found, err = p.GetIgnoreConfiguredEndpoints(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+type baseEndpointProvider interface {
+	getBaseEndpoint(ctx context.Context) (string, bool, error)
+}
+
+func getBaseEndpoint(ctx context.Context, configs configs) (value string, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(baseEndpointProvider); ok {
+			value, found, err = p.getBaseEndpoint(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+type servicesObjectProvider interface {
+	getServicesObject(ctx context.Context) (map[string]map[string]string, bool, error)
+}
+
+func getServicesObject(ctx context.Context, configs configs) (value map[string]map[string]string, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(servicesObjectProvider); ok {
+			value, found, err = p.getServicesObject(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// appIDProvider provides access to the SDK app ID configuration value.
+type appIDProvider interface {
+	getAppID(ctx context.Context) (string, bool, error)
+}
+
+func getAppID(ctx context.Context, configs configs) (value string, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(appIDProvider); ok {
+			value, found, err = p.getAppID(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// disableRequestCompressionProvider provides access to the DisableRequestCompression configuration value.
+type disableRequestCompressionProvider interface {
+	getDisableRequestCompression(context.Context) (bool, bool, error)
+}
+
+func getDisableRequestCompression(ctx context.Context, configs configs) (value bool, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(disableRequestCompressionProvider); ok {
+			value, found, err = p.getDisableRequestCompression(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// requestMinCompressSizeBytesProvider provides access to the RequestMinCompressSizeBytes configuration value.
+type requestMinCompressSizeBytesProvider interface {
+	getRequestMinCompressSizeBytes(context.Context) (int64, bool, error)
+}
+
+func getRequestMinCompressSizeBytes(ctx context.Context, configs configs) (value int64, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(requestMinCompressSizeBytesProvider); ok {
+			value, found, err = p.getRequestMinCompressSizeBytes(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// accountIDEndpointModeProvider provides access to the AccountIDEndpointMode configuration value.
+type accountIDEndpointModeProvider interface {
+	getAccountIDEndpointMode(context.Context) (aws.AccountIDEndpointMode, bool, error)
+}
+
+func getAccountIDEndpointMode(ctx context.Context, configs configs) (value aws.AccountIDEndpointMode, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(accountIDEndpointModeProvider); ok {
+			value, found, err = p.getAccountIDEndpointMode(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// ec2IMDSRegionProvider provides access to the ec2 imds region
+// configuration value
+type ec2IMDSRegionProvider interface {
+	getEC2IMDSRegion(ctx context.Context) (string, bool, error)
+}
+
+// getEC2IMDSRegion searches the configs for an ec2IMDSRegionProvider and
+// returns the value if found. Returns an error if a provider fails before
+// a value is found.
+func getEC2IMDSRegion(ctx context.Context, configs configs) (region string, found bool, err error) {
+	for _, cfg := range configs {
+		if provider, ok := cfg.(ec2IMDSRegionProvider); ok {
+			region, found, err = provider.getEC2IMDSRegion(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// credentialsProviderProvider provides access to the credentials external
+// configuration value.
+type credentialsProviderProvider interface {
+	getCredentialsProvider(ctx context.Context) (aws.CredentialsProvider, bool, error)
+}
+
+// getCredentialsProvider searches the configs for a credentialsProviderProvider
+// and returns the value if found. Returns an error if a provider fails before a
+// value is found.
+func getCredentialsProvider(ctx context.Context, configs configs) (p aws.CredentialsProvider, found bool, err error) {
+	for _, cfg := range configs {
+		if provider, ok := cfg.(credentialsProviderProvider); ok {
+			p, found, err = provider.getCredentialsProvider(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// credentialsCacheOptionsProvider is an interface for retrieving a function for setting
+// the aws.CredentialsCacheOptions.
+type credentialsCacheOptionsProvider interface {
+	getCredentialsCacheOptions(ctx context.Context) (func(*aws.CredentialsCacheOptions), bool, error)
+}
+
+// getCredentialsCacheOptionsProvider searches the configs for a
+// credentialsCacheOptionsProvider and returns the first function found.
+func getCredentialsCacheOptionsProvider(ctx context.Context, configs configs) (
+	f func(*aws.CredentialsCacheOptions), found bool, err error,
+) {
+	for _, config := range configs {
+		if p, ok := config.(credentialsCacheOptionsProvider); ok {
+			f, found, err = p.getCredentialsCacheOptions(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// bearerAuthTokenProviderProvider provides access to the bearer authentication
+// token external configuration value.
+type bearerAuthTokenProviderProvider interface {
+	getBearerAuthTokenProvider(context.Context) (smithybearer.TokenProvider, bool, error)
+}
+
+// getBearerAuthTokenProvider searches the config sources for a
+// bearerAuthTokenProviderProvider and returns the value if found. Returns an
+// error if a provider fails before a value is found.
+func getBearerAuthTokenProvider(ctx context.Context, configs configs) (p smithybearer.TokenProvider, found bool, err error) {
+	for _, cfg := range configs {
+		if provider, ok := cfg.(bearerAuthTokenProviderProvider); ok {
+			p, found, err = provider.getBearerAuthTokenProvider(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// bearerAuthTokenCacheOptionsProvider is an interface for retrieving a function for
+// setting the smithy-go auth/bearer#TokenCacheOptions.
+type bearerAuthTokenCacheOptionsProvider interface {
+	getBearerAuthTokenCacheOptions(context.Context) (func(*smithybearer.TokenCacheOptions), bool, error)
+}
+
+// getBearerAuthTokenCacheOptions searches the configs for a
+// bearerAuthTokenCacheOptionsProvider and returns the first function found.
+func getBearerAuthTokenCacheOptions(ctx context.Context, configs configs) (
+	f func(*smithybearer.TokenCacheOptions), found bool, err error,
+) {
+	for _, config := range configs {
+		if p, ok := config.(bearerAuthTokenCacheOptionsProvider); ok {
+			f, found, err = p.getBearerAuthTokenCacheOptions(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// ssoTokenProviderOptionsProvider is an interface for retrieving a function for
+// setting the SDK's credentials/ssocreds#SSOTokenProviderOptions.
+type ssoTokenProviderOptionsProvider interface {
+	getSSOTokenProviderOptions(context.Context) (func(*ssocreds.SSOTokenProviderOptions), bool, error)
+}
+
+// getSSOTokenProviderOptions searches the configs for an
+// ssoTokenProviderOptionsProvider and returns the first function found.
+func getSSOTokenProviderOptions(ctx context.Context, configs configs) (
+	f func(*ssocreds.SSOTokenProviderOptions), found bool, err error,
+) {
+	for _, config := range configs {
+		if p, ok := config.(ssoTokenProviderOptionsProvider); ok {
+			f, found, err = p.getSSOTokenProviderOptions(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// processCredentialOptions is an interface for retrieving a function for setting
+// the processcreds.Options.
+type processCredentialOptions interface {
+	getProcessCredentialOptions(ctx context.Context) (func(*processcreds.Options), bool, error)
+}
+
+// getProcessCredentialOptions searches the slice of configs and returns the first function found
+func getProcessCredentialOptions(ctx context.Context, configs configs) (f func(*processcreds.Options), found bool, err error) {
+	for _, config := range configs {
+		if p, ok := config.(processCredentialOptions); ok {
+			f, found, err = p.getProcessCredentialOptions(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// ec2RoleCredentialOptionsProvider is an interface for retrieving a function
+// for setting the ec2rolecreds.Provider options.
+type ec2RoleCredentialOptionsProvider interface {
+	getEC2RoleCredentialOptions(ctx context.Context) (func(*ec2rolecreds.Options), bool, error)
+}
+
+// getEC2RoleCredentialProviderOptions searches the slice of configs and returns the first function found
+func getEC2RoleCredentialProviderOptions(ctx context.Context, configs configs) (f func(*ec2rolecreds.Options), found bool, err error) {
+	for _, config := range configs {
+		if p, ok := config.(ec2RoleCredentialOptionsProvider); ok {
+			f, found, err = p.getEC2RoleCredentialOptions(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// defaultRegionProvider is an interface for retrieving a default region if a region was not resolved from other sources
+type defaultRegionProvider interface {
+	getDefaultRegion(ctx context.Context) (string, bool, error)
+}
+
+// getDefaultRegion searches the slice of configs and returns the first fallback region found
+func getDefaultRegion(ctx context.Context, configs configs) (value string, found bool, err error) {
+	for _, config := range configs {
+		if p, ok := config.(defaultRegionProvider); ok {
+			value, found, err = p.getDefaultRegion(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// endpointCredentialOptionsProvider is an interface for retrieving a function for setting
+// the endpointcreds.ProviderOptions.
+type endpointCredentialOptionsProvider interface {
+	getEndpointCredentialOptions(ctx context.Context) (func(*endpointcreds.Options), bool, error)
+}
+
+// getEndpointCredentialProviderOptions searches the slice of configs and returns the first function found
+func getEndpointCredentialProviderOptions(ctx context.Context, configs configs) (f func(*endpointcreds.Options), found bool, err error) {
+	for _, config := range configs {
+		if p, ok := config.(endpointCredentialOptionsProvider); ok {
+			f, found, err = p.getEndpointCredentialOptions(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// webIdentityRoleCredentialOptionsProvider is an interface for retrieving a function for setting
+// the stscreds.WebIdentityRoleProvider.
+type webIdentityRoleCredentialOptionsProvider interface {
+	getWebIdentityRoleCredentialOptions(ctx context.Context) (func(*stscreds.WebIdentityRoleOptions), bool, error)
+}
+
+// getWebIdentityCredentialProviderOptions searches the slice of configs and returns the first function found
+func getWebIdentityCredentialProviderOptions(ctx context.Context, configs configs) (f func(*stscreds.WebIdentityRoleOptions), found bool, err error) {
+	for _, config := range configs {
+		if p, ok := config.(webIdentityRoleCredentialOptionsProvider); ok {
+			f, found, err = p.getWebIdentityRoleCredentialOptions(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// assumeRoleCredentialOptionsProvider is an interface for retrieving a function for setting
+// the stscreds.AssumeRoleOptions.
+type assumeRoleCredentialOptionsProvider interface {
+	getAssumeRoleCredentialOptions(ctx context.Context) (func(*stscreds.AssumeRoleOptions), bool, error)
+}
+
+// getAssumeRoleCredentialProviderOptions searches the slice of configs and returns the first function found
+func getAssumeRoleCredentialProviderOptions(ctx context.Context, configs configs) (f func(*stscreds.AssumeRoleOptions), found bool, err error) {
+	for _, config := range configs {
+		if p, ok := config.(assumeRoleCredentialOptionsProvider); ok {
+			f, found, err = p.getAssumeRoleCredentialOptions(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// HTTPClient is an HTTP client implementation
+type HTTPClient interface {
+	Do(*http.Request) (*http.Response, error)
+}
+
+// httpClientProvider is an interface for retrieving HTTPClient
+type httpClientProvider interface {
+	getHTTPClient(ctx context.Context) (HTTPClient, bool, error)
+}
+
+// getHTTPClient searches the slice of configs and returns the HTTPClient set on configs
+func getHTTPClient(ctx context.Context, configs configs) (client HTTPClient, found bool, err error) {
+	for _, config := range configs {
+		if p, ok := config.(httpClientProvider); ok {
+			client, found, err = p.getHTTPClient(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// apiOptionsProvider is an interface for retrieving APIOptions
+type apiOptionsProvider interface {
+	getAPIOptions(ctx context.Context) ([]func(*middleware.Stack) error, bool, error)
+}
+
+// getAPIOptions searches the slice of configs and returns the APIOptions set on configs
+func getAPIOptions(ctx context.Context, configs configs) (apiOptions []func(*middleware.Stack) error, found bool, err error) {
+	for _, config := range configs {
+		if p, ok := config.(apiOptionsProvider); ok {
+			// retrieve APIOptions from configs and set it on cfg
+			apiOptions, found, err = p.getAPIOptions(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// endpointResolverProvider is an interface for retrieving an aws.EndpointResolver from a configuration source
+type endpointResolverProvider interface {
+	getEndpointResolver(ctx context.Context) (aws.EndpointResolver, bool, error)
+}
+
+// getEndpointResolver searches the provided config sources for an EndpointResolverFunc that can be used
+// to configure the aws.Config.EndpointResolver value.
+func getEndpointResolver(ctx context.Context, configs configs) (f aws.EndpointResolver, found bool, err error) {
+	for _, c := range configs {
+		if p, ok := c.(endpointResolverProvider); ok {
+			f, found, err = p.getEndpointResolver(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// endpointResolverWithOptionsProvider is an interface for retrieving an aws.EndpointResolverWithOptions from a configuration source
+type endpointResolverWithOptionsProvider interface {
+	getEndpointResolverWithOptions(ctx context.Context) (aws.EndpointResolverWithOptions, bool, error)
+}
+
+// getEndpointResolverWithOptions searches the provided config sources for an EndpointResolverWithOptions that can be used
+// to configure the aws.Config.EndpointResolverWithOptions value.
+func getEndpointResolverWithOptions(ctx context.Context, configs configs) (f aws.EndpointResolverWithOptions, found bool, err error) {
+	for _, c := range configs {
+		if p, ok := c.(endpointResolverWithOptionsProvider); ok {
+			f, found, err = p.getEndpointResolverWithOptions(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// loggerProvider is an interface for retrieving a logging.Logger from a configuration source.
+type loggerProvider interface {
+	getLogger(ctx context.Context) (logging.Logger, bool, error)
+}
+
+// getLogger searches the provided config sources for a logging.Logger that can be used
+// to configure the aws.Config.Logger value.
+func getLogger(ctx context.Context, configs configs) (l logging.Logger, found bool, err error) {
+	for _, c := range configs {
+		if p, ok := c.(loggerProvider); ok {
+			l, found, err = p.getLogger(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// clientLogModeProvider is an interface for retrieving the aws.ClientLogMode from a configuration source.
+type clientLogModeProvider interface {
+	getClientLogMode(ctx context.Context) (aws.ClientLogMode, bool, error)
+}
+
+func getClientLogMode(ctx context.Context, configs configs) (m aws.ClientLogMode, found bool, err error) {
+	for _, c := range configs {
+		if p, ok := c.(clientLogModeProvider); ok {
+			m, found, err = p.getClientLogMode(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// retryProvider is a configuration provider for a custom Retryer.
+type retryProvider interface {
+	getRetryer(ctx context.Context) (func() aws.Retryer, bool, error)
+}
+
+func getRetryer(ctx context.Context, configs configs) (v func() aws.Retryer, found bool, err error) {
+	for _, c := range configs {
+		if p, ok := c.(retryProvider); ok {
+			v, found, err = p.getRetryer(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// logConfigurationWarningsProvider is a configuration provider for
+// retrieving a boolean indicating whether configuration issues should
+// be logged when loading from config sources
+type logConfigurationWarningsProvider interface {
+	getLogConfigurationWarnings(ctx context.Context) (bool, bool, error)
+}
+
+func getLogConfigurationWarnings(ctx context.Context, configs configs) (v bool, found bool, err error) {
+	for _, c := range configs {
+		if p, ok := c.(logConfigurationWarningsProvider); ok {
+			v, found, err = p.getLogConfigurationWarnings(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// ssoCredentialOptionsProvider is an interface for retrieving a function for setting
+// the ssocreds.Options.
+type ssoCredentialOptionsProvider interface {
+	getSSOProviderOptions(context.Context) (func(*ssocreds.Options), bool, error)
+}
+
+func getSSOProviderOptions(ctx context.Context, configs configs) (v func(options *ssocreds.Options), found bool, err error) {
+	for _, c := range configs {
+		if p, ok := c.(ssoCredentialOptionsProvider); ok {
+			v, found, err = p.getSSOProviderOptions(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return v, found, err
+}
+
+type defaultsModeIMDSClientProvider interface {
+	getDefaultsModeIMDSClient(context.Context) (*imds.Client, bool, error)
+}
+
+func getDefaultsModeIMDSClient(ctx context.Context, configs configs) (v *imds.Client, found bool, err error) {
+	for _, c := range configs {
+		if p, ok := c.(defaultsModeIMDSClientProvider); ok {
+			v, found, err = p.getDefaultsModeIMDSClient(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return v, found, err
+}
+
+type defaultsModeProvider interface {
+	getDefaultsMode(context.Context) (aws.DefaultsMode, bool, error)
+}
+
+func getDefaultsMode(ctx context.Context, configs configs) (v aws.DefaultsMode, found bool, err error) {
+	for _, c := range configs {
+		if p, ok := c.(defaultsModeProvider); ok {
+			v, found, err = p.getDefaultsMode(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return v, found, err
+}
+
+type retryMaxAttemptsProvider interface {
+	GetRetryMaxAttempts(context.Context) (int, bool, error)
+}
+
+func getRetryMaxAttempts(ctx context.Context, configs configs) (v int, found bool, err error) {
+	for _, c := range configs {
+		if p, ok := c.(retryMaxAttemptsProvider); ok {
+			v, found, err = p.GetRetryMaxAttempts(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return v, found, err
+}
+
+type retryModeProvider interface {
+	GetRetryMode(context.Context) (aws.RetryMode, bool, error)
+}
+
+func getRetryMode(ctx context.Context, configs configs) (v aws.RetryMode, found bool, err error) {
+	for _, c := range configs {
+		if p, ok := c.(retryModeProvider); ok {
+			v, found, err = p.GetRetryMode(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return v, found, err
+}

vendor/github.com/aws/aws-sdk-go-v2/config/resolve.go 🔗

@@ -0,0 +1,383 @@
+package config
+
+import (
+	"context"
+	"crypto/tls"
+	"crypto/x509"
+	"fmt"
+	"io/ioutil"
+	"net/http"
+	"os"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
+	"github.com/aws/aws-sdk-go-v2/feature/ec2/imds"
+	"github.com/aws/smithy-go/logging"
+)
+
+// resolveDefaultAWSConfig will write default configuration values into the cfg
+// value. It will write the default values, overwriting any previous value.
+//
+// This should be used as the first resolver in the slice of resolvers when
+// resolving external configuration.
+func resolveDefaultAWSConfig(ctx context.Context, cfg *aws.Config, cfgs configs) error {
+	var sources []interface{}
+	for _, s := range cfgs {
+		sources = append(sources, s)
+	}
+
+	*cfg = aws.Config{
+		Logger:        logging.NewStandardLogger(os.Stderr),
+		ConfigSources: sources,
+	}
+	return nil
+}
+
+// resolveCustomCABundle extracts the first instance of a custom CA bundle filename
+// from the external configurations. It will update the HTTP Client's builder
+// to be configured with the custom CA bundle.
+//
+// Config provider used:
+// * customCABundleProvider
+func resolveCustomCABundle(ctx context.Context, cfg *aws.Config, cfgs configs) error {
+	pemCerts, found, err := getCustomCABundle(ctx, cfgs)
+	if err != nil {
+		// TODO error handling, What is the best way to handle this?
+		// capture previous errors continue. error out if all errors
+		return err
+	}
+	if !found {
+		return nil
+	}
+
+	if cfg.HTTPClient == nil {
+		cfg.HTTPClient = awshttp.NewBuildableClient()
+	}
+
+	trOpts, ok := cfg.HTTPClient.(*awshttp.BuildableClient)
+	if !ok {
+		return fmt.Errorf("unable to add custom RootCAs HTTPClient, "+
+			"has no WithTransportOptions, %T", cfg.HTTPClient)
+	}
+
+	var appendErr error
+	client := trOpts.WithTransportOptions(func(tr *http.Transport) {
+		if tr.TLSClientConfig == nil {
+			tr.TLSClientConfig = &tls.Config{}
+		}
+		if tr.TLSClientConfig.RootCAs == nil {
+			tr.TLSClientConfig.RootCAs = x509.NewCertPool()
+		}
+
+		b, err := ioutil.ReadAll(pemCerts)
+		if err != nil {
+			appendErr = fmt.Errorf("failed to read custom CA bundle PEM file")
+		}
+
+		if !tr.TLSClientConfig.RootCAs.AppendCertsFromPEM(b) {
+			appendErr = fmt.Errorf("failed to load custom CA bundle PEM file")
+		}
+	})
+	if appendErr != nil {
+		return appendErr
+	}
+
+	cfg.HTTPClient = client
+	return err
+}
+
+// resolveRegion extracts the first instance of a Region from the configs slice.
+//
+// Config providers used:
+// * regionProvider
+func resolveRegion(ctx context.Context, cfg *aws.Config, configs configs) error {
+	v, found, err := getRegion(ctx, configs)
+	if err != nil {
+		// TODO error handling, What is the best way to handle this?
+		// capture previous errors continue. error out if all errors
+		return err
+	}
+	if !found {
+		return nil
+	}
+
+	cfg.Region = v
+	return nil
+}
+
+func resolveBaseEndpoint(ctx context.Context, cfg *aws.Config, configs configs) error {
+	var downcastCfgSources []interface{}
+	for _, cs := range configs {
+		downcastCfgSources = append(downcastCfgSources, interface{}(cs))
+	}
+
+	if val, found, err := GetIgnoreConfiguredEndpoints(ctx, downcastCfgSources); found && val && err == nil {
+		cfg.BaseEndpoint = nil
+		return nil
+	}
+
+	v, found, err := getBaseEndpoint(ctx, configs)
+	if err != nil {
+		return err
+	}
+
+	if !found {
+		return nil
+	}
+	cfg.BaseEndpoint = aws.String(v)
+	return nil
+}
+
+// resolveAppID extracts the sdk app ID from the configs slice's SharedConfig or env var
+func resolveAppID(ctx context.Context, cfg *aws.Config, configs configs) error {
+	ID, _, err := getAppID(ctx, configs)
+	if err != nil {
+		return err
+	}
+
+	cfg.AppID = ID
+	return nil
+}
+
+// resolveDisableRequestCompression extracts the DisableRequestCompression from the configs slice's
+// SharedConfig or EnvConfig
+func resolveDisableRequestCompression(ctx context.Context, cfg *aws.Config, configs configs) error {
+	disable, _, err := getDisableRequestCompression(ctx, configs)
+	if err != nil {
+		return err
+	}
+
+	cfg.DisableRequestCompression = disable
+	return nil
+}
+
+// resolveRequestMinCompressSizeBytes extracts the RequestMinCompressSizeBytes from the configs slice's
+// SharedConfig or EnvConfig
+func resolveRequestMinCompressSizeBytes(ctx context.Context, cfg *aws.Config, configs configs) error {
+	minBytes, found, err := getRequestMinCompressSizeBytes(ctx, configs)
+	if err != nil {
+		return err
+	}
+	// must set a default min size 10240 if not configured
+	if !found {
+		minBytes = 10240
+	}
+	cfg.RequestMinCompressSizeBytes = minBytes
+	return nil
+}
+
+// resolveAccountIDEndpointMode extracts the AccountIDEndpointMode from the configs slice's
+// SharedConfig or EnvConfig
+func resolveAccountIDEndpointMode(ctx context.Context, cfg *aws.Config, configs configs) error {
+	m, found, err := getAccountIDEndpointMode(ctx, configs)
+	if err != nil {
+		return err
+	}
+
+	if !found {
+		m = aws.AccountIDEndpointModePreferred
+	}
+
+	cfg.AccountIDEndpointMode = m
+	return nil
+}
+
+// resolveDefaultRegion extracts the first instance of a default region and sets `aws.Config.Region` to the default
+// region if region had not been resolved from other sources.
+func resolveDefaultRegion(ctx context.Context, cfg *aws.Config, configs configs) error {
+	if len(cfg.Region) > 0 {
+		return nil
+	}
+
+	v, found, err := getDefaultRegion(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if !found {
+		return nil
+	}
+
+	cfg.Region = v
+
+	return nil
+}
+
+// resolveHTTPClient extracts the first instance of a HTTPClient and sets `aws.Config.HTTPClient` to the HTTPClient instance
+// if one has not been resolved from other sources.
+func resolveHTTPClient(ctx context.Context, cfg *aws.Config, configs configs) error {
+	c, found, err := getHTTPClient(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if !found {
+		return nil
+	}
+
+	cfg.HTTPClient = c
+	return nil
+}
+
+// resolveAPIOptions extracts the first instance of APIOptions and sets `aws.Config.APIOptions` to the resolved API options
+// if one has not been resolved from other sources.
+func resolveAPIOptions(ctx context.Context, cfg *aws.Config, configs configs) error {
+	o, found, err := getAPIOptions(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if !found {
+		return nil
+	}
+
+	cfg.APIOptions = o
+
+	return nil
+}
+
+// resolveEndpointResolver extracts the first instance of an EndpointResolverFunc from the config slice
+// and sets the function's result on the aws.Config.EndpointResolver value.
+func resolveEndpointResolver(ctx context.Context, cfg *aws.Config, configs configs) error {
+	endpointResolver, found, err := getEndpointResolver(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if !found {
+		return nil
+	}
+
+	cfg.EndpointResolver = endpointResolver
+
+	return nil
+}
+
+// resolveEndpointResolverWithOptions extracts the first instance of an EndpointResolverWithOptions from the config slice
+// and sets the function's result on the aws.Config.EndpointResolverWithOptions value.
+func resolveEndpointResolverWithOptions(ctx context.Context, cfg *aws.Config, configs configs) error {
+	endpointResolver, found, err := getEndpointResolverWithOptions(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if !found {
+		return nil
+	}
+
+	cfg.EndpointResolverWithOptions = endpointResolver
+
+	return nil
+}
+
+func resolveLogger(ctx context.Context, cfg *aws.Config, configs configs) error {
+	logger, found, err := getLogger(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if !found {
+		return nil
+	}
+
+	cfg.Logger = logger
+
+	return nil
+}
+
+func resolveClientLogMode(ctx context.Context, cfg *aws.Config, configs configs) error {
+	mode, found, err := getClientLogMode(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if !found {
+		return nil
+	}
+
+	cfg.ClientLogMode = mode
+
+	return nil
+}
+
+func resolveRetryer(ctx context.Context, cfg *aws.Config, configs configs) error {
+	retryer, found, err := getRetryer(ctx, configs)
+	if err != nil {
+		return err
+	}
+
+	if found {
+		cfg.Retryer = retryer
+		return nil
+	}
+
+	// Only load the retry options if a custom retryer has not been specified.
+	if err = resolveRetryMaxAttempts(ctx, cfg, configs); err != nil {
+		return err
+	}
+	return resolveRetryMode(ctx, cfg, configs)
+}
+
+func resolveEC2IMDSRegion(ctx context.Context, cfg *aws.Config, configs configs) error {
+	if len(cfg.Region) > 0 {
+		return nil
+	}
+
+	region, found, err := getEC2IMDSRegion(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if !found {
+		return nil
+	}
+
+	cfg.Region = region
+
+	return nil
+}
+
+func resolveDefaultsModeOptions(ctx context.Context, cfg *aws.Config, configs configs) error {
+	defaultsMode, found, err := getDefaultsMode(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if !found {
+		defaultsMode = aws.DefaultsModeLegacy
+	}
+
+	var environment aws.RuntimeEnvironment
+	if defaultsMode == aws.DefaultsModeAuto {
+		envConfig, _, _ := getAWSConfigSources(configs)
+
+		client, found, err := getDefaultsModeIMDSClient(ctx, configs)
+		if err != nil {
+			return err
+		}
+		if !found {
+			client = imds.NewFromConfig(*cfg)
+		}
+
+		environment, err = resolveDefaultsModeRuntimeEnvironment(ctx, envConfig, client)
+		if err != nil {
+			return err
+		}
+	}
+
+	cfg.DefaultsMode = defaultsMode
+	cfg.RuntimeEnvironment = environment
+
+	return nil
+}
+
+func resolveRetryMaxAttempts(ctx context.Context, cfg *aws.Config, configs configs) error {
+	maxAttempts, found, err := getRetryMaxAttempts(ctx, configs)
+	if err != nil || !found {
+		return err
+	}
+	cfg.RetryMaxAttempts = maxAttempts
+
+	return nil
+}
+
+func resolveRetryMode(ctx context.Context, cfg *aws.Config, configs configs) error {
+	retryMode, found, err := getRetryMode(ctx, configs)
+	if err != nil || !found {
+		return err
+	}
+	cfg.RetryMode = retryMode
+
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/config/resolve_bearer_token.go 🔗

@@ -0,0 +1,122 @@
+package config
+
+import (
+	"context"
+	"fmt"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/credentials/ssocreds"
+	"github.com/aws/aws-sdk-go-v2/service/ssooidc"
+	smithybearer "github.com/aws/smithy-go/auth/bearer"
+)
+
+// resolveBearerAuthToken extracts a token provider from the config sources.
+//
+// If an explicit bearer authentication token provider is not found the
+// resolver will fallback to resolving token provider via other config sources
+// such as SharedConfig.
+func resolveBearerAuthToken(ctx context.Context, cfg *aws.Config, configs configs) error {
+	found, err := resolveBearerAuthTokenProvider(ctx, cfg, configs)
+	if found || err != nil {
+		return err
+	}
+
+	return resolveBearerAuthTokenProviderChain(ctx, cfg, configs)
+}
+
+// resolveBearerAuthTokenProvider extracts the first instance of
+// BearerAuthTokenProvider from the config sources.
+//
+// The resolved BearerAuthTokenProvider will be wrapped in a cache to ensure
+// the Token is only refreshed when needed. This also protects the
+// TokenProvider so it can be used concurrently.
+//
+// Config providers used:
+// * bearerAuthTokenProviderProvider
+func resolveBearerAuthTokenProvider(ctx context.Context, cfg *aws.Config, configs configs) (bool, error) {
+	tokenProvider, found, err := getBearerAuthTokenProvider(ctx, configs)
+	if !found || err != nil {
+		return false, err
+	}
+
+	cfg.BearerAuthTokenProvider, err = wrapWithBearerAuthTokenCache(
+		ctx, configs, tokenProvider)
+	if err != nil {
+		return false, err
+	}
+
+	return true, nil
+}
+
+func resolveBearerAuthTokenProviderChain(ctx context.Context, cfg *aws.Config, configs configs) (err error) {
+	_, sharedConfig, _ := getAWSConfigSources(configs)
+
+	var provider smithybearer.TokenProvider
+
+	if sharedConfig.SSOSession != nil {
+		provider, err = resolveBearerAuthSSOTokenProvider(
+			ctx, cfg, sharedConfig.SSOSession, configs)
+	}
+
+	if err == nil && provider != nil {
+		cfg.BearerAuthTokenProvider, err = wrapWithBearerAuthTokenCache(
+			ctx, configs, provider)
+	}
+
+	return err
+}
+
+func resolveBearerAuthSSOTokenProvider(ctx context.Context, cfg *aws.Config, session *SSOSession, configs configs) (*ssocreds.SSOTokenProvider, error) {
+	ssoTokenProviderOptionsFn, found, err := getSSOTokenProviderOptions(ctx, configs)
+	if err != nil {
+		return nil, fmt.Errorf("failed to get SSOTokenProviderOptions from config sources, %w", err)
+	}
+
+	var optFns []func(*ssocreds.SSOTokenProviderOptions)
+	if found {
+		optFns = append(optFns, ssoTokenProviderOptionsFn)
+	}
+
+	cachePath, err := ssocreds.StandardCachedTokenFilepath(session.Name)
+	if err != nil {
+		return nil, fmt.Errorf("failed to get SSOTokenProvider's cache path, %w", err)
+	}
+
+	client := ssooidc.NewFromConfig(*cfg)
+	provider := ssocreds.NewSSOTokenProvider(client, cachePath, optFns...)
+
+	return provider, nil
+}
+
+// wrapWithBearerAuthTokenCache will wrap provider with a smithy-go
+// bearer/auth#TokenCache with the provided options if the provider is not
+// already a TokenCache.
+func wrapWithBearerAuthTokenCache(
+	ctx context.Context,
+	cfgs configs,
+	provider smithybearer.TokenProvider,
+	optFns ...func(*smithybearer.TokenCacheOptions),
+) (smithybearer.TokenProvider, error) {
+	_, ok := provider.(*smithybearer.TokenCache)
+	if ok {
+		return provider, nil
+	}
+
+	tokenCacheConfigOptions, optionsFound, err := getBearerAuthTokenCacheOptions(ctx, cfgs)
+	if err != nil {
+		return nil, err
+	}
+
+	opts := make([]func(*smithybearer.TokenCacheOptions), 0, 2+len(optFns))
+	opts = append(opts, func(o *smithybearer.TokenCacheOptions) {
+		o.RefreshBeforeExpires = 5 * time.Minute
+		o.RetrieveBearerTokenTimeout = 30 * time.Second
+	})
+	opts = append(opts, optFns...)
+	if optionsFound {
+		opts = append(opts, tokenCacheConfigOptions)
+	}
+
+	return smithybearer.NewTokenCache(provider, opts...), nil
+}

vendor/github.com/aws/aws-sdk-go-v2/config/resolve_credentials.go 🔗

@@ -0,0 +1,566 @@
+package config
+
+import (
+	"context"
+	"fmt"
+	"io/ioutil"
+	"net"
+	"net/url"
+	"os"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/credentials"
+	"github.com/aws/aws-sdk-go-v2/credentials/ec2rolecreds"
+	"github.com/aws/aws-sdk-go-v2/credentials/endpointcreds"
+	"github.com/aws/aws-sdk-go-v2/credentials/processcreds"
+	"github.com/aws/aws-sdk-go-v2/credentials/ssocreds"
+	"github.com/aws/aws-sdk-go-v2/credentials/stscreds"
+	"github.com/aws/aws-sdk-go-v2/feature/ec2/imds"
+	"github.com/aws/aws-sdk-go-v2/service/sso"
+	"github.com/aws/aws-sdk-go-v2/service/ssooidc"
+	"github.com/aws/aws-sdk-go-v2/service/sts"
+)
+
+const (
+	// valid credential source values
+	credSourceEc2Metadata      = "Ec2InstanceMetadata"
+	credSourceEnvironment      = "Environment"
+	credSourceECSContainer     = "EcsContainer"
+	httpProviderAuthFileEnvVar = "AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE"
+)
+
+// direct representation of the IPv4 address for the ECS container
+// "169.254.170.2"
+var ecsContainerIPv4 net.IP = []byte{
+	169, 254, 170, 2,
+}
+
+// direct representation of the IPv4 address for the EKS container
+// "169.254.170.23"
+var eksContainerIPv4 net.IP = []byte{
+	169, 254, 170, 23,
+}
+
+// direct representation of the IPv6 address for the EKS container
+// "fd00:ec2::23"
+var eksContainerIPv6 net.IP = []byte{
+	0xFD, 0, 0xE, 0xC2,
+	0, 0, 0, 0,
+	0, 0, 0, 0,
+	0, 0, 0, 0x23,
+}
+
+var (
+	ecsContainerEndpoint = "http://169.254.170.2" // not constant to allow for swapping during unit-testing
+)
+
+// resolveCredentials extracts a credential provider from slice of config
+// sources.
+//
+// If an explicit credential provider is not found, the resolver falls back
+// to resolving credentials by extracting a credential provider from EnvConfig
+// and SharedConfig.
+func resolveCredentials(ctx context.Context, cfg *aws.Config, configs configs) error {
+	found, err := resolveCredentialProvider(ctx, cfg, configs)
+	if found || err != nil {
+		return err
+	}
+
+	return resolveCredentialChain(ctx, cfg, configs)
+}
+
+// resolveCredentialProvider extracts the first instance of Credentials from the
+// config slices.
+//
+// The resolved CredentialProvider will be wrapped in a cache to ensure the
+// credentials are only refreshed when needed. This also protects the
+// credential provider so it can be used concurrently.
+//
+// Config providers used:
+// * credentialsProviderProvider
+func resolveCredentialProvider(ctx context.Context, cfg *aws.Config, configs configs) (bool, error) {
+	credProvider, found, err := getCredentialsProvider(ctx, configs)
+	if !found || err != nil {
+		return false, err
+	}
+
+	cfg.Credentials, err = wrapWithCredentialsCache(ctx, configs, credProvider)
+	if err != nil {
+		return false, err
+	}
+
+	return true, nil
+}
+
+// resolveCredentialChain resolves a credential provider chain using EnvConfig
+// and SharedConfig if present in the slice of provided configs.
+//
+// The resolved CredentialProvider will be wrapped in a cache to ensure the
+// credentials are only refreshed when needed. This also protects the
+// credential provider so it can be used concurrently.
+func resolveCredentialChain(ctx context.Context, cfg *aws.Config, configs configs) (err error) {
+	envConfig, sharedConfig, other := getAWSConfigSources(configs)
+
+	// When checking if a profile was specified programmatically we should only consider the "other"
+	// configuration sources that have been provided. This ensures we correctly honor the expected credential
+	// hierarchy.
+	_, sharedProfileSet, err := getSharedConfigProfile(ctx, other)
+	if err != nil {
+		return err
+	}
+
+	switch {
+	case sharedProfileSet:
+		err = resolveCredsFromProfile(ctx, cfg, envConfig, sharedConfig, other)
+	case envConfig.Credentials.HasKeys():
+		cfg.Credentials = credentials.StaticCredentialsProvider{Value: envConfig.Credentials}
+	case len(envConfig.WebIdentityTokenFilePath) > 0:
+		err = assumeWebIdentity(ctx, cfg, envConfig.WebIdentityTokenFilePath, envConfig.RoleARN, envConfig.RoleSessionName, configs)
+	default:
+		err = resolveCredsFromProfile(ctx, cfg, envConfig, sharedConfig, other)
+	}
+	if err != nil {
+		return err
+	}
+
+	// Wrap the resolved provider in a cache so the SDK will cache credentials.
+	cfg.Credentials, err = wrapWithCredentialsCache(ctx, configs, cfg.Credentials)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func resolveCredsFromProfile(ctx context.Context, cfg *aws.Config, envConfig *EnvConfig, sharedConfig *SharedConfig, configs configs) (err error) {
+
+	switch {
+	case sharedConfig.Source != nil:
+		// Assume IAM role with credentials source from a different profile.
+		err = resolveCredsFromProfile(ctx, cfg, envConfig, sharedConfig.Source, configs)
+
+	case sharedConfig.Credentials.HasKeys():
+		// Static Credentials from Shared Config/Credentials file.
+		cfg.Credentials = credentials.StaticCredentialsProvider{
+			Value: sharedConfig.Credentials,
+		}
+
+	case len(sharedConfig.CredentialSource) != 0:
+		err = resolveCredsFromSource(ctx, cfg, envConfig, sharedConfig, configs)
+
+	case len(sharedConfig.WebIdentityTokenFile) != 0:
+		// Credentials from Assume Web Identity token require an IAM Role, and
+		// that role will be assumed. May be wrapped with another assume role
+		// via SourceProfile.
+		return assumeWebIdentity(ctx, cfg, sharedConfig.WebIdentityTokenFile, sharedConfig.RoleARN, sharedConfig.RoleSessionName, configs)
+
+	case sharedConfig.hasSSOConfiguration():
+		err = resolveSSOCredentials(ctx, cfg, sharedConfig, configs)
+
+	case len(sharedConfig.CredentialProcess) != 0:
+		// Get credentials from CredentialProcess
+		err = processCredentials(ctx, cfg, sharedConfig, configs)
+
+	case len(envConfig.ContainerCredentialsEndpoint) != 0:
+		err = resolveLocalHTTPCredProvider(ctx, cfg, envConfig.ContainerCredentialsEndpoint, envConfig.ContainerAuthorizationToken, configs)
+
+	case len(envConfig.ContainerCredentialsRelativePath) != 0:
+		err = resolveHTTPCredProvider(ctx, cfg, ecsContainerURI(envConfig.ContainerCredentialsRelativePath), envConfig.ContainerAuthorizationToken, configs)
+
+	default:
+		err = resolveEC2RoleCredentials(ctx, cfg, configs)
+	}
+	if err != nil {
+		return err
+	}
+
+	if len(sharedConfig.RoleARN) > 0 {
+		return credsFromAssumeRole(ctx, cfg, sharedConfig, configs)
+	}
+
+	return nil
+}
+
+func resolveSSOCredentials(ctx context.Context, cfg *aws.Config, sharedConfig *SharedConfig, configs configs) error {
+	if err := sharedConfig.validateSSOConfiguration(); err != nil {
+		return err
+	}
+
+	var options []func(*ssocreds.Options)
+	v, found, err := getSSOProviderOptions(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if found {
+		options = append(options, v)
+	}
+
+	cfgCopy := cfg.Copy()
+
+	if sharedConfig.SSOSession != nil {
+		ssoTokenProviderOptionsFn, found, err := getSSOTokenProviderOptions(ctx, configs)
+		if err != nil {
+			return fmt.Errorf("failed to get SSOTokenProviderOptions from config sources, %w", err)
+		}
+		var optFns []func(*ssocreds.SSOTokenProviderOptions)
+		if found {
+			optFns = append(optFns, ssoTokenProviderOptionsFn)
+		}
+		cfgCopy.Region = sharedConfig.SSOSession.SSORegion
+		cachedPath, err := ssocreds.StandardCachedTokenFilepath(sharedConfig.SSOSession.Name)
+		if err != nil {
+			return err
+		}
+		oidcClient := ssooidc.NewFromConfig(cfgCopy)
+		tokenProvider := ssocreds.NewSSOTokenProvider(oidcClient, cachedPath, optFns...)
+		options = append(options, func(o *ssocreds.Options) {
+			o.SSOTokenProvider = tokenProvider
+			o.CachedTokenFilepath = cachedPath
+		})
+	} else {
+		cfgCopy.Region = sharedConfig.SSORegion
+	}
+
+	cfg.Credentials = ssocreds.New(sso.NewFromConfig(cfgCopy), sharedConfig.SSOAccountID, sharedConfig.SSORoleName, sharedConfig.SSOStartURL, options...)
+
+	return nil
+}
+
+func ecsContainerURI(path string) string {
+	return fmt.Sprintf("%s%s", ecsContainerEndpoint, path)
+}
+
+func processCredentials(ctx context.Context, cfg *aws.Config, sharedConfig *SharedConfig, configs configs) error {
+	var opts []func(*processcreds.Options)
+
+	options, found, err := getProcessCredentialOptions(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if found {
+		opts = append(opts, options)
+	}
+
+	cfg.Credentials = processcreds.NewProvider(sharedConfig.CredentialProcess, opts...)
+
+	return nil
+}
+
+// isAllowedHost reports whether host is a loopback address or a known
+// ECS/EKS container IP.
+//
+// host can be either an IP address or an unresolved hostname; in the latter
+// case resolution is performed automatically.
+func isAllowedHost(host string) (bool, error) {
+	if ip := net.ParseIP(host); ip != nil {
+		return isIPAllowed(ip), nil
+	}
+
+	addrs, err := lookupHostFn(host)
+	if err != nil {
+		return false, err
+	}
+
+	for _, addr := range addrs {
+		if ip := net.ParseIP(addr); ip == nil || !isIPAllowed(ip) {
+			return false, nil
+		}
+	}
+
+	return true, nil
+}
+
+func isIPAllowed(ip net.IP) bool {
+	return ip.IsLoopback() ||
+		ip.Equal(ecsContainerIPv4) ||
+		ip.Equal(eksContainerIPv4) ||
+		ip.Equal(eksContainerIPv6)
+}
+
+func resolveLocalHTTPCredProvider(ctx context.Context, cfg *aws.Config, endpointURL, authToken string, configs configs) error {
+	var resolveErr error
+
+	parsed, err := url.Parse(endpointURL)
+	if err != nil {
+		resolveErr = fmt.Errorf("invalid URL, %w", err)
+	} else {
+		host := parsed.Hostname()
+		if len(host) == 0 {
+			resolveErr = fmt.Errorf("unable to parse host from local HTTP cred provider URL")
+		} else if parsed.Scheme == "http" {
+			if isAllowedHost, allowHostErr := isAllowedHost(host); allowHostErr != nil {
+				resolveErr = fmt.Errorf("failed to resolve host %q, %v", host, allowHostErr)
+			} else if !isAllowedHost {
+				resolveErr = fmt.Errorf("invalid endpoint host, %q, only loopback/ecs/eks hosts are allowed", host)
+			}
+		}
+	}
+
+	if resolveErr != nil {
+		return resolveErr
+	}
+
+	return resolveHTTPCredProvider(ctx, cfg, endpointURL, authToken, configs)
+}
+
+func resolveHTTPCredProvider(ctx context.Context, cfg *aws.Config, url, authToken string, configs configs) error {
+	optFns := []func(*endpointcreds.Options){
+		func(options *endpointcreds.Options) {
+			if len(authToken) != 0 {
+				options.AuthorizationToken = authToken
+			}
+			if authFilePath := os.Getenv(httpProviderAuthFileEnvVar); authFilePath != "" {
+				options.AuthorizationTokenProvider = endpointcreds.TokenProviderFunc(func() (string, error) {
+					var contents []byte
+					var err error
+					if contents, err = ioutil.ReadFile(authFilePath); err != nil {
+						return "", fmt.Errorf("failed to read authorization token from %v: %v", authFilePath, err)
+					}
+					return string(contents), nil
+				})
+			}
+			options.APIOptions = cfg.APIOptions
+			if cfg.Retryer != nil {
+				options.Retryer = cfg.Retryer()
+			}
+		},
+	}
+
+	optFn, found, err := getEndpointCredentialProviderOptions(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if found {
+		optFns = append(optFns, optFn)
+	}
+
+	provider := endpointcreds.New(url, optFns...)
+
+	cfg.Credentials, err = wrapWithCredentialsCache(ctx, configs, provider, func(options *aws.CredentialsCacheOptions) {
+		options.ExpiryWindow = 5 * time.Minute
+	})
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func resolveCredsFromSource(ctx context.Context, cfg *aws.Config, envConfig *EnvConfig, sharedCfg *SharedConfig, configs configs) (err error) {
+	switch sharedCfg.CredentialSource {
+	case credSourceEc2Metadata:
+		return resolveEC2RoleCredentials(ctx, cfg, configs)
+
+	case credSourceEnvironment:
+		cfg.Credentials = credentials.StaticCredentialsProvider{Value: envConfig.Credentials}
+
+	case credSourceECSContainer:
+		if len(envConfig.ContainerCredentialsRelativePath) == 0 {
+			return fmt.Errorf("EcsContainer was specified as the credential_source, but 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI' was not set")
+		}
+		return resolveHTTPCredProvider(ctx, cfg, ecsContainerURI(envConfig.ContainerCredentialsRelativePath), envConfig.ContainerAuthorizationToken, configs)
+
+	default:
+		return fmt.Errorf("credential_source values must be EcsContainer, Ec2InstanceMetadata, or Environment")
+	}
+
+	return nil
+}
+
+func resolveEC2RoleCredentials(ctx context.Context, cfg *aws.Config, configs configs) error {
+	optFns := make([]func(*ec2rolecreds.Options), 0, 2)
+
+	optFn, found, err := getEC2RoleCredentialProviderOptions(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if found {
+		optFns = append(optFns, optFn)
+	}
+
+	optFns = append(optFns, func(o *ec2rolecreds.Options) {
+		// Only define a client from config if not already defined.
+		if o.Client == nil {
+			o.Client = imds.NewFromConfig(*cfg)
+		}
+	})
+
+	provider := ec2rolecreds.New(optFns...)
+
+	cfg.Credentials, err = wrapWithCredentialsCache(ctx, configs, provider)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func getAWSConfigSources(cfgs configs) (*EnvConfig, *SharedConfig, configs) {
+	var (
+		envConfig    *EnvConfig
+		sharedConfig *SharedConfig
+		other        configs
+	)
+
+	for i := range cfgs {
+		switch c := cfgs[i].(type) {
+		case EnvConfig:
+			if envConfig == nil {
+				envConfig = &c
+			}
+		case *EnvConfig:
+			if envConfig == nil {
+				envConfig = c
+			}
+		case SharedConfig:
+			if sharedConfig == nil {
+				sharedConfig = &c
+			}
+		case *SharedConfig:
+			if sharedConfig == nil {
+				sharedConfig = c
+			}
+		default:
+			other = append(other, c)
+		}
+	}
+
+	if envConfig == nil {
+		envConfig = &EnvConfig{}
+	}
+
+	if sharedConfig == nil {
+		sharedConfig = &SharedConfig{}
+	}
+
+	return envConfig, sharedConfig, other
+}
+
+// AssumeRoleTokenProviderNotSetError is an error returned when creating a
+// session when the MFAToken option is not set, but the shared config is
+// configured to load and assume a role with an MFA token.
+type AssumeRoleTokenProviderNotSetError struct{}
+
+// Error is the error message
+func (e AssumeRoleTokenProviderNotSetError) Error() string {
+	return "assume role with MFA enabled, but AssumeRoleTokenProvider session option not set."
+}
+
+func assumeWebIdentity(ctx context.Context, cfg *aws.Config, filepath string, roleARN, sessionName string, configs configs) error {
+	if len(filepath) == 0 {
+		return fmt.Errorf("token file path is not set")
+	}
+
+	optFns := []func(*stscreds.WebIdentityRoleOptions){
+		func(options *stscreds.WebIdentityRoleOptions) {
+			options.RoleSessionName = sessionName
+		},
+	}
+
+	optFn, found, err := getWebIdentityCredentialProviderOptions(ctx, configs)
+	if err != nil {
+		return err
+	}
+
+	if found {
+		optFns = append(optFns, optFn)
+	}
+
+	opts := stscreds.WebIdentityRoleOptions{
+		RoleARN: roleARN,
+	}
+
+	for _, fn := range optFns {
+		fn(&opts)
+	}
+
+	if len(opts.RoleARN) == 0 {
+		return fmt.Errorf("role ARN is not set")
+	}
+
+	client := opts.Client
+	if client == nil {
+		client = sts.NewFromConfig(*cfg)
+	}
+
+	provider := stscreds.NewWebIdentityRoleProvider(client, roleARN, stscreds.IdentityTokenFile(filepath), optFns...)
+
+	cfg.Credentials = provider
+
+	return nil
+}
+
+func credsFromAssumeRole(ctx context.Context, cfg *aws.Config, sharedCfg *SharedConfig, configs configs) (err error) {
+	optFns := []func(*stscreds.AssumeRoleOptions){
+		func(options *stscreds.AssumeRoleOptions) {
+			options.RoleSessionName = sharedCfg.RoleSessionName
+			if sharedCfg.RoleDurationSeconds != nil {
+				if *sharedCfg.RoleDurationSeconds/time.Minute > 15 {
+					options.Duration = *sharedCfg.RoleDurationSeconds
+				}
+			}
+			// Assume role with external ID
+			if len(sharedCfg.ExternalID) > 0 {
+				options.ExternalID = aws.String(sharedCfg.ExternalID)
+			}
+
+			// Assume role with MFA
+			if len(sharedCfg.MFASerial) != 0 {
+				options.SerialNumber = aws.String(sharedCfg.MFASerial)
+			}
+		},
+	}
+
+	optFn, found, err := getAssumeRoleCredentialProviderOptions(ctx, configs)
+	if err != nil {
+		return err
+	}
+	if found {
+		optFns = append(optFns, optFn)
+	}
+
+	{
+		// Synthesize options early to validate configuration errors sooner to ensure a token provider
+		// is present if the SerialNumber was set.
+		var o stscreds.AssumeRoleOptions
+		for _, fn := range optFns {
+			fn(&o)
+		}
+		if o.TokenProvider == nil && o.SerialNumber != nil {
+			return AssumeRoleTokenProviderNotSetError{}
+		}
+	}
+
+	cfg.Credentials = stscreds.NewAssumeRoleProvider(sts.NewFromConfig(*cfg), sharedCfg.RoleARN, optFns...)
+
+	return nil
+}
+
+// wrapWithCredentialsCache will wrap provider with an aws.CredentialsCache
+// with the provided options if the provider is not already an
+// aws.CredentialsCache.
+func wrapWithCredentialsCache(
+	ctx context.Context,
+	cfgs configs,
+	provider aws.CredentialsProvider,
+	optFns ...func(options *aws.CredentialsCacheOptions),
+) (aws.CredentialsProvider, error) {
+	_, ok := provider.(*aws.CredentialsCache)
+	if ok {
+		return provider, nil
+	}
+
+	credCacheOptions, optionsFound, err := getCredentialsCacheOptionsProvider(ctx, cfgs)
+	if err != nil {
+		return nil, err
+	}
+
+	// force allocation of a new slice if the additional options are
+	// needed, to prevent overwriting the passed in slice of options.
+	optFns = optFns[:len(optFns):len(optFns)]
+	if optionsFound {
+		optFns = append(optFns, credCacheOptions)
+	}
+
+	return aws.NewCredentialsCache(provider, optFns...), nil
+}

vendor/github.com/aws/aws-sdk-go-v2/config/shared_config.go 🔗

@@ -0,0 +1,1618 @@
+package config
+
+import (
+	"bytes"
+	"context"
+	"errors"
+	"fmt"
+	"io"
+	"io/ioutil"
+	"os"
+	"path/filepath"
+	"strings"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/feature/ec2/imds"
+	"github.com/aws/aws-sdk-go-v2/internal/ini"
+	"github.com/aws/aws-sdk-go-v2/internal/shareddefaults"
+	"github.com/aws/smithy-go/logging"
+	smithyrequestcompression "github.com/aws/smithy-go/private/requestcompression"
+)
+
+const (
+	// Prefix to use for filtering profiles. The profile prefix should only
+	// exist in the shared config file, not the credentials file.
+	profilePrefix = `profile `
+
+	// Prefix to be used for SSO sections. These are supposed to only exist in
+	// the shared config file, not the credentials file.
+	ssoSectionPrefix = `sso-session `
+
+	// Prefix for services section. It is referenced in profile via the services
+	// parameter to configure clients for service-specific parameters.
+	servicesPrefix = `services `
+
+	// string equivalent for boolean
+	endpointDiscoveryDisabled = `false`
+	endpointDiscoveryEnabled  = `true`
+	endpointDiscoveryAuto     = `auto`
+
+	// Static Credentials group
+	accessKeyIDKey  = `aws_access_key_id`     // group required
+	secretAccessKey = `aws_secret_access_key` // group required
+	sessionTokenKey = `aws_session_token`     // optional
+
+	// Assume Role Credentials group
+	roleArnKey             = `role_arn`          // group required
+	sourceProfileKey       = `source_profile`    // group required
+	credentialSourceKey    = `credential_source` // group required (or source_profile)
+	externalIDKey          = `external_id`       // optional
+	mfaSerialKey           = `mfa_serial`        // optional
+	roleSessionNameKey     = `role_session_name` // optional
+	roleDurationSecondsKey = "duration_seconds"  // optional
+
+	// AWS Single Sign-On (AWS SSO) group
+	ssoSessionNameKey = "sso_session"
+
+	ssoRegionKey   = "sso_region"
+	ssoStartURLKey = "sso_start_url"
+
+	ssoAccountIDKey = "sso_account_id"
+	ssoRoleNameKey  = "sso_role_name"
+
+	// Additional Config fields
+	regionKey = `region`
+
+	// endpoint discovery group
+	enableEndpointDiscoveryKey = `endpoint_discovery_enabled` // optional
+
+	// External Credential process
+	credentialProcessKey = `credential_process` // optional
+
+	// Web Identity Token File
+	webIdentityTokenFileKey = `web_identity_token_file` // optional
+
+	// S3 ARN Region Usage
+	s3UseARNRegionKey = "s3_use_arn_region"
+
+	ec2MetadataServiceEndpointModeKey = "ec2_metadata_service_endpoint_mode"
+
+	ec2MetadataServiceEndpointKey = "ec2_metadata_service_endpoint"
+
+	ec2MetadataV1DisabledKey = "ec2_metadata_v1_disabled"
+
+	// Use DualStack Endpoint Resolution
+	useDualStackEndpoint = "use_dualstack_endpoint"
+
+	// DefaultSharedConfigProfile is the default profile to be used when
+	// loading configuration from the config files if another profile name
+	// is not provided.
+	DefaultSharedConfigProfile = `default`
+
+	// S3 Disable Multi-Region AccessPoints
+	s3DisableMultiRegionAccessPointsKey = `s3_disable_multiregion_access_points`
+
+	useFIPSEndpointKey = "use_fips_endpoint"
+
+	defaultsModeKey = "defaults_mode"
+
+	// Retry options
+	retryMaxAttemptsKey = "max_attempts"
+	retryModeKey        = "retry_mode"
+
+	caBundleKey = "ca_bundle"
+
+	sdkAppID = "sdk_ua_app_id"
+
+	ignoreConfiguredEndpoints = "ignore_configured_endpoint_urls"
+
+	endpointURL = "endpoint_url"
+
+	servicesSectionKey = "services"
+
+	disableRequestCompression      = "disable_request_compression"
+	requestMinCompressionSizeBytes = "request_min_compression_size_bytes"
+
+	s3DisableExpressSessionAuthKey = "s3_disable_express_session_auth"
+
+	accountIDKey          = "aws_account_id"
+	accountIDEndpointMode = "account_id_endpoint_mode"
+)
+
+// defaultSharedConfigProfile allows for swapping the default profile for testing
+var defaultSharedConfigProfile = DefaultSharedConfigProfile
+
+// DefaultSharedCredentialsFilename returns the SDK's default file path
+// for the shared credentials file.
+//
+// Builds the shared config file path based on the OS's platform.
+//
+//   - Linux/Unix: $HOME/.aws/credentials
+//   - Windows: %USERPROFILE%\.aws\credentials
+func DefaultSharedCredentialsFilename() string {
+	return filepath.Join(shareddefaults.UserHomeDir(), ".aws", "credentials")
+}
+
+// DefaultSharedConfigFilename returns the SDK's default file path for
+// the shared config file.
+//
+// Builds the shared config file path based on the OS's platform.
+//
+//   - Linux/Unix: $HOME/.aws/config
+//   - Windows: %USERPROFILE%\.aws\config
+func DefaultSharedConfigFilename() string {
+	return filepath.Join(shareddefaults.UserHomeDir(), ".aws", "config")
+}
+
+// DefaultSharedConfigFiles is a slice of the default shared config files that
+// will be used, in order, to load the SharedConfig.
+var DefaultSharedConfigFiles = []string{
+	DefaultSharedConfigFilename(),
+}
+
+// DefaultSharedCredentialsFiles is a slice of the default shared credentials
+// files that will be used, in order, to load the SharedConfig.
+var DefaultSharedCredentialsFiles = []string{
+	DefaultSharedCredentialsFilename(),
+}
+
+// SSOSession provides the shared configuration parameters of the sso-session
+// section.
+type SSOSession struct {
+	Name        string
+	SSORegion   string
+	SSOStartURL string
+}
+
+func (s *SSOSession) setFromIniSection(section ini.Section) {
+	updateString(&s.Name, section, ssoSessionNameKey)
+	updateString(&s.SSORegion, section, ssoRegionKey)
+	updateString(&s.SSOStartURL, section, ssoStartURLKey)
+}
+
+// Services contains values configured in the services section
+// of the AWS configuration file.
+type Services struct {
+	// Services section values
+	// {"serviceId": {"key": "value"}}
+	// e.g. {"s3": {"endpoint_url": "example.com"}}
+	ServiceValues map[string]map[string]string
+}
+
+func (s *Services) setFromIniSection(section ini.Section) {
+	if s.ServiceValues == nil {
+		s.ServiceValues = make(map[string]map[string]string)
+	}
+	for _, service := range section.List() {
+		s.ServiceValues[service] = section.Map(service)
+	}
+}
+
+// SharedConfig represents the configuration fields of the SDK config files.
+type SharedConfig struct {
+	Profile string
+
+	// Credentials values from the config file. Both aws_access_key_id
+	// and aws_secret_access_key must be provided together in the same file
+	// to be considered valid. The values will be ignored if not a complete group.
+	// aws_session_token is an optional field that can be provided if both of the
+	// other two fields are also provided.
+	//
+	//	aws_access_key_id
+	//	aws_secret_access_key
+	//	aws_session_token
+	Credentials aws.Credentials
+
+	CredentialSource     string
+	CredentialProcess    string
+	WebIdentityTokenFile string
+
+	// SSO session options
+	SSOSessionName string
+	SSOSession     *SSOSession
+
+	// Legacy SSO session options
+	SSORegion   string
+	SSOStartURL string
+
+	// SSO fields not used
+	SSOAccountID string
+	SSORoleName  string
+
+	RoleARN             string
+	ExternalID          string
+	MFASerial           string
+	RoleSessionName     string
+	RoleDurationSeconds *time.Duration
+
+	SourceProfileName string
+	Source            *SharedConfig
+
+	// Region is the region the SDK should use for looking up AWS service endpoints
+	// and signing requests.
+	//
+	//	region = us-west-2
+	Region string
+
+	// EnableEndpointDiscovery can be enabled or disabled in the shared config
+	// by setting endpoint_discovery_enabled to true, or false respectively.
+	//
+	//	endpoint_discovery_enabled = true
+	EnableEndpointDiscovery aws.EndpointDiscoveryEnableState
+
+	// Specifies if the S3 service should allow ARNs to direct the region
+	// the client's requests are sent to.
+	//
+	// s3_use_arn_region=true
+	S3UseARNRegion *bool
+
+	// Specifies the EC2 Instance Metadata Service default endpoint selection
+	// mode (IPv4 or IPv6)
+	//
+	// ec2_metadata_service_endpoint_mode=IPv6
+	EC2IMDSEndpointMode imds.EndpointModeState
+
+	// Specifies the EC2 Instance Metadata Service endpoint to use. If
+	// specified it overrides EC2IMDSEndpointMode.
+	//
+	// ec2_metadata_service_endpoint=http://fd00:ec2::254
+	EC2IMDSEndpoint string
+
+	// Specifies that IMDS clients should not fallback to IMDSv1 if token
+	// requests fail.
+	//
+	// ec2_metadata_v1_disabled=true
+	EC2IMDSv1Disabled *bool
+
+	// Specifies if the S3 service should disable support for Multi-Region
+	// access-points
+	//
+	// s3_disable_multiregion_access_points=true
+	S3DisableMultiRegionAccessPoints *bool
+
+	// Specifies that SDK clients must resolve a dual-stack endpoint for
+	// services.
+	//
+	// use_dualstack_endpoint=true
+	UseDualStackEndpoint aws.DualStackEndpointState
+
+	// Specifies that SDK clients must resolve a FIPS endpoint for
+	// services.
+	//
+	// use_fips_endpoint=true
+	UseFIPSEndpoint aws.FIPSEndpointState
+
+	// Specifies which defaults mode should be used by services.
+	//
+	// defaults_mode=standard
+	DefaultsMode aws.DefaultsMode
+
+	// Specifies the maximum number of attempts an API client will call an
+	// operation that fails with a retryable error.
+	//
+	// max_attempts=3
+	RetryMaxAttempts int
+
+	// Specifies the retry model the API client will be created with.
+	//
+	// retry_mode=standard
+	RetryMode aws.RetryMode
+
+	// Sets the path to a custom Certificate Authority (CA) bundle PEM file
+	// that the SDK will use instead of the system's root CA bundle. Only use
+	// this if you want to configure the SDK to use a custom set of CAs.
+	//
+	// Enabling this option will attempt to merge the Transport into the SDK's
+	// HTTP client. If the client's Transport is not a http.Transport an error
+	// will be returned. If the Transport's TLS config is set this option will
+	// cause the SDK to overwrite the Transport's TLS config's RootCAs value.
+	//
+	// Setting a custom HTTPClient in the aws.Config options will override this
+	// setting. To use this option and custom HTTP client, the HTTP client
+	// needs to be provided when creating the config. Not the service client.
+	//
+	//  ca_bundle=$HOME/my_custom_ca_bundle
+	CustomCABundle string
+
+	// aws sdk app ID that can be added to user agent header string
+	AppID string
+
+	// Flag used to disable configured endpoints.
+	IgnoreConfiguredEndpoints *bool
+
+	// Value to contain configured endpoints to be propagated to
+	// corresponding endpoint resolution field.
+	BaseEndpoint string
+
+	// Services section config.
+	ServicesSectionName string
+	Services            Services
+
+	// determine if request compression is allowed, default to false
+	// retrieved from config file's profile field disable_request_compression
+	DisableRequestCompression *bool
+
+	// inclusive threshold request body size to trigger compression,
+	// default to 10240 and must be within 0 and 10485760 bytes inclusive
+	// retrieved from config file's profile field request_min_compression_size_bytes
+	RequestMinCompressSizeBytes *int64
+
+	// Whether S3Express auth is disabled.
+	//
+	// This will NOT prevent requests from being made to S3Express buckets, it
+	// will only bypass the modified endpoint routing and signing behaviors
+	// associated with the feature.
+	S3DisableExpressAuth *bool
+
+	AccountIDEndpointMode aws.AccountIDEndpointMode
+}
+
+func (c SharedConfig) getDefaultsMode(ctx context.Context) (value aws.DefaultsMode, ok bool, err error) {
+	if len(c.DefaultsMode) == 0 {
+		return "", false, nil
+	}
+
+	return c.DefaultsMode, true, nil
+}
+
+// GetRetryMaxAttempts returns the maximum number of attempts an API client
+// created Retryer should attempt an operation call before failing.
+func (c SharedConfig) GetRetryMaxAttempts(ctx context.Context) (value int, ok bool, err error) {
+	if c.RetryMaxAttempts == 0 {
+		return 0, false, nil
+	}
+
+	return c.RetryMaxAttempts, true, nil
+}
+
+// GetRetryMode returns the model the API client should create its Retryer in.
+func (c SharedConfig) GetRetryMode(ctx context.Context) (value aws.RetryMode, ok bool, err error) {
+	if len(c.RetryMode) == 0 {
+		return "", false, nil
+	}
+
+	return c.RetryMode, true, nil
+}
+
+// GetS3UseARNRegion returns if the S3 service should allow ARNs to direct the region
+// the client's requests are sent to.
+func (c SharedConfig) GetS3UseARNRegion(ctx context.Context) (value, ok bool, err error) {
+	if c.S3UseARNRegion == nil {
+		return false, false, nil
+	}
+
+	return *c.S3UseARNRegion, true, nil
+}
+
+// GetEnableEndpointDiscovery returns whether enable_endpoint_discovery is set.
+func (c SharedConfig) GetEnableEndpointDiscovery(ctx context.Context) (value aws.EndpointDiscoveryEnableState, ok bool, err error) {
+	if c.EnableEndpointDiscovery == aws.EndpointDiscoveryUnset {
+		return aws.EndpointDiscoveryUnset, false, nil
+	}
+
+	return c.EnableEndpointDiscovery, true, nil
+}
+
+// GetS3DisableMultiRegionAccessPoints returns if the S3 service should disable support for Multi-Region
+// access-points.
+func (c SharedConfig) GetS3DisableMultiRegionAccessPoints(ctx context.Context) (value, ok bool, err error) {
+	if c.S3DisableMultiRegionAccessPoints == nil {
+		return false, false, nil
+	}
+
+	return *c.S3DisableMultiRegionAccessPoints, true, nil
+}
+
+// getRegion returns the region for the profile if a region is set.
+func (c SharedConfig) getRegion(ctx context.Context) (string, bool, error) {
+	if len(c.Region) == 0 {
+		return "", false, nil
+	}
+	return c.Region, true, nil
+}
+
+// getCredentialsProvider returns the credentials for a profile if they were set.
+func (c SharedConfig) getCredentialsProvider() (aws.Credentials, bool, error) {
+	return c.Credentials, true, nil
+}
+
+// GetEC2IMDSEndpointMode implements a EC2IMDSEndpointMode option resolver interface.
+func (c SharedConfig) GetEC2IMDSEndpointMode() (imds.EndpointModeState, bool, error) {
+	if c.EC2IMDSEndpointMode == imds.EndpointModeStateUnset {
+		return imds.EndpointModeStateUnset, false, nil
+	}
+
+	return c.EC2IMDSEndpointMode, true, nil
+}
+
+// GetEC2IMDSEndpoint implements a EC2IMDSEndpoint option resolver interface.
+func (c SharedConfig) GetEC2IMDSEndpoint() (string, bool, error) {
+	if len(c.EC2IMDSEndpoint) == 0 {
+		return "", false, nil
+	}
+
+	return c.EC2IMDSEndpoint, true, nil
+}
+
+// GetEC2IMDSV1FallbackDisabled implements an EC2IMDSV1FallbackDisabled option
+// resolver interface.
+func (c SharedConfig) GetEC2IMDSV1FallbackDisabled() (bool, bool) {
+	if c.EC2IMDSv1Disabled == nil {
+		return false, false
+	}
+
+	return *c.EC2IMDSv1Disabled, true
+}
+
+// GetUseDualStackEndpoint returns whether the service's dual-stack endpoint should be
+// used for requests.
+func (c SharedConfig) GetUseDualStackEndpoint(ctx context.Context) (value aws.DualStackEndpointState, found bool, err error) {
+	if c.UseDualStackEndpoint == aws.DualStackEndpointStateUnset {
+		return aws.DualStackEndpointStateUnset, false, nil
+	}
+
+	return c.UseDualStackEndpoint, true, nil
+}
+
+// GetUseFIPSEndpoint returns whether the service's FIPS endpoint should be
+// used for requests.
+func (c SharedConfig) GetUseFIPSEndpoint(ctx context.Context) (value aws.FIPSEndpointState, found bool, err error) {
+	if c.UseFIPSEndpoint == aws.FIPSEndpointStateUnset {
+		return aws.FIPSEndpointStateUnset, false, nil
+	}
+
+	return c.UseFIPSEndpoint, true, nil
+}
+
+// GetS3DisableExpressAuth returns the configured value for
+// [SharedConfig.S3DisableExpressAuth].
+func (c SharedConfig) GetS3DisableExpressAuth() (value, ok bool) {
+	if c.S3DisableExpressAuth == nil {
+		return false, false
+	}
+
+	return *c.S3DisableExpressAuth, true
+}
+
+// getCustomCABundle returns the custom CA bundle's PEM bytes if the file was
+// successfully read.
+func (c SharedConfig) getCustomCABundle(context.Context) (io.Reader, bool, error) {
+	if len(c.CustomCABundle) == 0 {
+		return nil, false, nil
+	}
+
+	b, err := ioutil.ReadFile(c.CustomCABundle)
+	if err != nil {
+		return nil, false, err
+	}
+	return bytes.NewReader(b), true, nil
+}
+
+// getAppID returns the SDK app ID if set in the shared config profile
+func (c SharedConfig) getAppID(context.Context) (string, bool, error) {
+	return c.AppID, len(c.AppID) > 0, nil
+}
+
+// GetIgnoreConfiguredEndpoints reports whether the configured endpoints
+// feature should be disabled.
+func (c SharedConfig) GetIgnoreConfiguredEndpoints(context.Context) (bool, bool, error) {
+	if c.IgnoreConfiguredEndpoints == nil {
+		return false, false, nil
+	}
+
+	return *c.IgnoreConfiguredEndpoints, true, nil
+}
+
+func (c SharedConfig) getBaseEndpoint(context.Context) (string, bool, error) {
+	return c.BaseEndpoint, len(c.BaseEndpoint) > 0, nil
+}
+
+// GetServiceBaseEndpoint returns the configured endpoint URL for the service
+// matching the normalized SDK ID, if one is set.
+func (c SharedConfig) GetServiceBaseEndpoint(ctx context.Context, sdkID string) (string, bool, error) {
+	if service, ok := c.Services.ServiceValues[normalizeShared(sdkID)]; ok {
+		if endpt, ok := service[endpointURL]; ok {
+			return endpt, true, nil
+		}
+	}
+	return "", false, nil
+}
+
+func normalizeShared(sdkID string) string {
+	lower := strings.ToLower(sdkID)
+	return strings.ReplaceAll(lower, " ", "_")
+}
+
+func (c SharedConfig) getServicesObject(context.Context) (map[string]map[string]string, bool, error) {
+	return c.Services.ServiceValues, c.Services.ServiceValues != nil, nil
+}
+
+// loadSharedConfigIgnoreNotExist is an alias for loadSharedConfig with the
+// addition of ignoring when none of the files exist or when the profile
+// is not found in any of the files.
+func loadSharedConfigIgnoreNotExist(ctx context.Context, configs configs) (Config, error) {
+	cfg, err := loadSharedConfig(ctx, configs)
+	if err != nil {
+		if _, ok := err.(SharedConfigProfileNotExistError); ok {
+			return SharedConfig{}, nil
+		}
+		return nil, err
+	}
+
+	return cfg, nil
+}
+
+// loadSharedConfig uses the configs passed in to load the SharedConfig from file
+// The file names and profile name are sourced from the configs.
+//
+// If profile name is not provided DefaultSharedConfigProfile (default) will
+// be used.
+//
+// If shared config filenames are not provided DefaultSharedConfigFiles will
+// be used.
+//
+// Config providers used:
+// * sharedConfigProfileProvider
+// * sharedConfigFilesProvider
+func loadSharedConfig(ctx context.Context, configs configs) (Config, error) {
+	var profile string
+	var configFiles []string
+	var credentialsFiles []string
+	var ok bool
+	var err error
+
+	profile, ok, err = getSharedConfigProfile(ctx, configs)
+	if err != nil {
+		return nil, err
+	}
+	if !ok {
+		profile = defaultSharedConfigProfile
+	}
+
+	configFiles, ok, err = getSharedConfigFiles(ctx, configs)
+	if err != nil {
+		return nil, err
+	}
+
+	credentialsFiles, ok, err = getSharedCredentialsFiles(ctx, configs)
+	if err != nil {
+		return nil, err
+	}
+
+	// setup logger if log configuration warning is set
+	var logger logging.Logger
+	logWarnings, found, err := getLogConfigurationWarnings(ctx, configs)
+	if err != nil {
+		return SharedConfig{}, err
+	}
+	if found && logWarnings {
+		logger, found, err = getLogger(ctx, configs)
+		if err != nil {
+			return SharedConfig{}, err
+		}
+		if !found {
+			logger = logging.NewStandardLogger(os.Stderr)
+		}
+	}
+
+	return LoadSharedConfigProfile(ctx, profile,
+		func(o *LoadSharedConfigOptions) {
+			o.Logger = logger
+			o.ConfigFiles = configFiles
+			o.CredentialsFiles = credentialsFiles
+		},
+	)
+}
+
+// LoadSharedConfigOptions struct contains optional values that can be used to load the config.
+type LoadSharedConfigOptions struct {
+
+	// CredentialsFiles are the shared credentials files
+	CredentialsFiles []string
+
+	// ConfigFiles are the shared config files
+	ConfigFiles []string
+
+	// Logger is the logger used to log shared config behavior
+	Logger logging.Logger
+}
+
+// LoadSharedConfigProfile retrieves the configuration from the list of files
+// using the profile provided. The order the files are listed will determine
+// precedence. Values in subsequent files will overwrite values defined in
+// earlier files.
+//
+// For example, given two files A and B. Both define credentials. If the order
+// of the files are A then B, B's credential values will be used instead of A's.
+//
+// If config files are not set, the SDK will default to using a file at `.aws/config` if present.
+// If credentials files are not set, the SDK will default to using a file at `.aws/credentials` if present.
+// If the files are set to an empty slice, no default files are used.
+//
+// You can read more about shared config and credentials file location at
+// https://docs.aws.amazon.com/credref/latest/refdocs/file-location.html#file-location
+func LoadSharedConfigProfile(ctx context.Context, profile string, optFns ...func(*LoadSharedConfigOptions)) (SharedConfig, error) {
+	var option LoadSharedConfigOptions
+	for _, fn := range optFns {
+		fn(&option)
+	}
+
+	if option.ConfigFiles == nil {
+		option.ConfigFiles = DefaultSharedConfigFiles
+	}
+
+	if option.CredentialsFiles == nil {
+		option.CredentialsFiles = DefaultSharedCredentialsFiles
+	}
+
+	// load shared configuration sections from shared configuration INI options
+	configSections, err := loadIniFiles(option.ConfigFiles)
+	if err != nil {
+		return SharedConfig{}, err
+	}
+
+	// check for profile prefix and drop duplicates or invalid profiles
+	err = processConfigSections(ctx, &configSections, option.Logger)
+	if err != nil {
+		return SharedConfig{}, err
+	}
+
+	// load shared credentials sections from shared credentials INI options
+	credentialsSections, err := loadIniFiles(option.CredentialsFiles)
+	if err != nil {
+		return SharedConfig{}, err
+	}
+
+	// check for profile prefix and drop duplicates or invalid profiles
+	err = processCredentialsSections(ctx, &credentialsSections, option.Logger)
+	if err != nil {
+		return SharedConfig{}, err
+	}
+
+	err = mergeSections(&configSections, credentialsSections)
+	if err != nil {
+		return SharedConfig{}, err
+	}
+
+	cfg := SharedConfig{}
+	profiles := map[string]struct{}{}
+
+	if err = cfg.setFromIniSections(profiles, profile, configSections, option.Logger); err != nil {
+		return SharedConfig{}, err
+	}
+
+	return cfg, nil
+}
+
+func processConfigSections(ctx context.Context, sections *ini.Sections, logger logging.Logger) error {
+	skipSections := map[string]struct{}{}
+
+	for _, section := range sections.List() {
+		if _, ok := skipSections[section]; ok {
+			continue
+		}
+
+		// drop sections from config file that do not have expected prefixes.
+		switch {
+		case strings.HasPrefix(section, profilePrefix):
+			// Rename sections to remove "profile " prefixing to match with
+			// credentials file. If default is already present, it will be
+			// dropped.
+			newName, err := renameProfileSection(section, sections, logger)
+			if err != nil {
+				return fmt.Errorf("failed to rename profile section, %w", err)
+			}
+			skipSections[newName] = struct{}{}
+
+		case strings.HasPrefix(section, ssoSectionPrefix):
+		case strings.HasPrefix(section, servicesPrefix):
+		case strings.EqualFold(section, "default"):
+		default:
+			// drop this section, as invalid profile name
+			sections.DeleteSection(section)
+
+			if logger != nil {
+				logger.Logf(logging.Debug, "A profile defined with name `%v` is ignored. "+
+					"For use within a shared configuration file, "+
+					"a non-default profile must have `profile ` "+
+					"prefixed to the profile name.",
+					section,
+				)
+			}
+		}
+	}
+	return nil
+}
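The section-prefix rules that `processConfigSections` enforces can be illustrated with a small config-file fragment (placeholder profile names):

```ini
; In the shared config file:
[default]          ; kept as-is
[profile dev]      ; kept, renamed internally to "dev"
[dev]              ; dropped: non-default profiles need the "profile " prefix

; In the shared credentials file the rule is inverted:
; [dev] is valid, while [profile dev] is dropped
; (see processCredentialsSections below).
```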
+
+func renameProfileSection(section string, sections *ini.Sections, logger logging.Logger) (string, error) {
+	v, ok := sections.GetSection(section)
+	if !ok {
+		return "", fmt.Errorf("error processing profiles within the shared configuration files")
+	}
+
+	// delete section with profile as prefix
+	sections.DeleteSection(section)
+
+	// set the value to non-prefixed name in sections.
+	section = strings.TrimPrefix(section, profilePrefix)
+	if sections.HasSection(section) {
+		oldSection, _ := sections.GetSection(section)
+		v.Logs = append(v.Logs,
+			fmt.Sprintf("A non-default profile not prefixed with `profile ` found in %s, "+
+				"overriding non-default profile from %s",
+				v.SourceFile, oldSection.SourceFile))
+		sections.DeleteSection(section)
+	}
+
+	// assign non-prefixed name to section
+	v.Name = section
+	sections.SetSection(section, v)
+
+	return section, nil
+}
+
+func processCredentialsSections(ctx context.Context, sections *ini.Sections, logger logging.Logger) error {
+	for _, section := range sections.List() {
+		// drop profiles with prefix for credential files
+		if strings.HasPrefix(section, profilePrefix) {
+			// drop this section, as invalid profile name
+			sections.DeleteSection(section)
+
+			if logger != nil {
+				logger.Logf(logging.Debug,
+					"The profile defined with name `%v` is ignored. A profile with the `profile ` prefix is invalid "+
+						"for the shared credentials file.\n",
+					section,
+				)
+			}
+		}
+	}
+	return nil
+}
+
+func loadIniFiles(filenames []string) (ini.Sections, error) {
+	mergedSections := ini.NewSections()
+
+	for _, filename := range filenames {
+		sections, err := ini.OpenFile(filename)
+		var v *ini.UnableToReadFile
+		if ok := errors.As(err, &v); ok {
+			// Skip files which can't be opened and read for whatever reason.
+			// We treat such files as empty, and do not fall back to other locations.
+			continue
+		} else if err != nil {
+			return ini.Sections{}, SharedConfigLoadError{Filename: filename, Err: err}
+		}
+
+		// mergeSections into mergedSections
+		err = mergeSections(&mergedSections, sections)
+		if err != nil {
+			return ini.Sections{}, SharedConfigLoadError{Filename: filename, Err: err}
+		}
+	}
+
+	return mergedSections, nil
+}
+
+// mergeSections merges source section properties into destination section properties
+func mergeSections(dst *ini.Sections, src ini.Sections) error {
+	for _, sectionName := range src.List() {
+		srcSection, _ := src.GetSection(sectionName)
+
+		if (!srcSection.Has(accessKeyIDKey) && srcSection.Has(secretAccessKey)) ||
+			(srcSection.Has(accessKeyIDKey) && !srcSection.Has(secretAccessKey)) {
+			srcSection.Errors = append(srcSection.Errors,
+				fmt.Errorf("partial credentials found for profile %v", sectionName))
+		}
+
+		if !dst.HasSection(sectionName) {
+			dst.SetSection(sectionName, srcSection)
+			continue
+		}
+
+		// merge with destination srcSection
+		dstSection, _ := dst.GetSection(sectionName)
+
+		// errors should be overridden if any
+		dstSection.Errors = srcSection.Errors
+
+		// Access key id update
+		if srcSection.Has(accessKeyIDKey) && srcSection.Has(secretAccessKey) {
+			accessKey := srcSection.String(accessKeyIDKey)
+			secretKey := srcSection.String(secretAccessKey)
+
+			if dstSection.Has(accessKeyIDKey) {
+				dstSection.Logs = append(dstSection.Logs, newMergeKeyLogMessage(sectionName, accessKeyIDKey,
+					dstSection.SourceFile[accessKeyIDKey], srcSection.SourceFile[accessKeyIDKey]))
+			}
+
+			// update access key
+			v, err := ini.NewStringValue(accessKey)
+			if err != nil {
+				return fmt.Errorf("error merging access key, %w", err)
+			}
+			dstSection.UpdateValue(accessKeyIDKey, v)
+
+			// update secret key
+			v, err = ini.NewStringValue(secretKey)
+			if err != nil {
+				return fmt.Errorf("error merging secret key, %w", err)
+			}
+			dstSection.UpdateValue(secretAccessKey, v)
+
+			// update session token
+			if err = mergeStringKey(&srcSection, &dstSection, sectionName, sessionTokenKey); err != nil {
+				return err
+			}
+
+			// update source file to reflect where the static creds came from
+			dstSection.UpdateSourceFile(accessKeyIDKey, srcSection.SourceFile[accessKeyIDKey])
+			dstSection.UpdateSourceFile(secretAccessKey, srcSection.SourceFile[secretAccessKey])
+		}
+
+		stringKeys := []string{
+			roleArnKey,
+			sourceProfileKey,
+			credentialSourceKey,
+			externalIDKey,
+			mfaSerialKey,
+			roleSessionNameKey,
+			regionKey,
+			enableEndpointDiscoveryKey,
+			credentialProcessKey,
+			webIdentityTokenFileKey,
+			s3UseARNRegionKey,
+			s3DisableMultiRegionAccessPointsKey,
+			ec2MetadataServiceEndpointModeKey,
+			ec2MetadataServiceEndpointKey,
+			ec2MetadataV1DisabledKey,
+			useDualStackEndpoint,
+			useFIPSEndpointKey,
+			defaultsModeKey,
+			retryModeKey,
+			caBundleKey,
+			roleDurationSecondsKey,
+			retryMaxAttemptsKey,
+
+			ssoSessionNameKey,
+			ssoAccountIDKey,
+			ssoRegionKey,
+			ssoRoleNameKey,
+			ssoStartURLKey,
+		}
+		for i := range stringKeys {
+			if err := mergeStringKey(&srcSection, &dstSection, sectionName, stringKeys[i]); err != nil {
+				return err
+			}
+		}
+
+		// set srcSection on dst srcSection
+		*dst = dst.SetSection(sectionName, dstSection)
+	}
+
+	return nil
+}
+
+func mergeStringKey(srcSection *ini.Section, dstSection *ini.Section, sectionName, key string) error {
+	if srcSection.Has(key) {
+		srcValue := srcSection.String(key)
+		val, err := ini.NewStringValue(srcValue)
+		if err != nil {
+			return fmt.Errorf("error merging %s, %w", key, err)
+		}
+
+		if dstSection.Has(key) {
+			dstSection.Logs = append(dstSection.Logs, newMergeKeyLogMessage(sectionName, key,
+				dstSection.SourceFile[key], srcSection.SourceFile[key]))
+		}
+
+		dstSection.UpdateValue(key, val)
+		dstSection.UpdateSourceFile(key, srcSection.SourceFile[key])
+	}
+	return nil
+}
+
+func newMergeKeyLogMessage(sectionName, key, dstSourceFile, srcSourceFile string) string {
+	return fmt.Sprintf("For profile: %v, overriding %v value, defined in %v "+
+		"with a %v value found in a duplicate profile defined at file %v. \n",
+		sectionName, key, dstSourceFile, key, srcSourceFile)
+}
+
+// Returns an error if all of the files fail to load. If at least one file is
+// successfully loaded and contains the profile, no error will be returned.
+func (c *SharedConfig) setFromIniSections(profiles map[string]struct{}, profile string,
+	sections ini.Sections, logger logging.Logger) error {
+	c.Profile = profile
+
+	section, ok := sections.GetSection(profile)
+	if !ok {
+		return SharedConfigProfileNotExistError{
+			Profile: profile,
+		}
+	}
+
+	// if logs are appended to the section, log them
+	if section.Logs != nil && logger != nil {
+		for _, log := range section.Logs {
+			logger.Logf(logging.Debug, log)
+		}
+	}
+
+	// set config from the provided INI section
+	err := c.setFromIniSection(profile, section)
+	if err != nil {
+		return fmt.Errorf("error fetching config from profile, %v, %w", profile, err)
+	}
+
+	if _, ok := profiles[profile]; ok {
+		// if this is the second instance of the profile the Assume Role
+		// options must be cleared because they are only valid for the
+		// first reference of a profile. The self-linked instance of the
+		// profile only has credential provider options.
+		c.clearAssumeRoleOptions()
+	} else {
+		// First time a profile has been seen. Assert if the credential type
+		// requires a role ARN, the ARN is also set
+		if err := c.validateCredentialsConfig(profile); err != nil {
+			return err
+		}
+	}
+
+	// if not top level profile and has credentials, return with credentials.
+	if len(profiles) != 0 && c.Credentials.HasKeys() {
+		return nil
+	}
+
+	profiles[profile] = struct{}{}
+
+	// validate no colliding credentials type are present
+	if err := c.validateCredentialType(); err != nil {
+		return err
+	}
+
+	// Link source profiles for assume roles
+	if len(c.SourceProfileName) != 0 {
+		// Linked profile via source_profile ignore credential provider
+		// options, the source profile must provide the credentials.
+		c.clearCredentialOptions()
+
+		srcCfg := &SharedConfig{}
+		err := srcCfg.setFromIniSections(profiles, c.SourceProfileName, sections, logger)
+		if err != nil {
+			// SourceProfileName that doesn't exist is an error in configuration.
+			if _, ok := err.(SharedConfigProfileNotExistError); ok {
+				err = SharedConfigAssumeRoleError{
+					RoleARN: c.RoleARN,
+					Profile: c.SourceProfileName,
+					Err:     err,
+				}
+			}
+			return err
+		}
+
+		if !srcCfg.hasCredentials() {
+			return SharedConfigAssumeRoleError{
+				RoleARN: c.RoleARN,
+				Profile: c.SourceProfileName,
+			}
+		}
+
+		c.Source = srcCfg
+	}
+
+	// If the profile contains an SSO session parameter, the session MUST exist
+	// as a section in the config file. Load the SSO session using the name
+	// provided. If the session section is not found or incomplete an error
+	// will be returned.
+	if c.hasSSOTokenProviderConfiguration() {
+		section, ok := sections.GetSection(ssoSectionPrefix + strings.TrimSpace(c.SSOSessionName))
+		if !ok {
+			return fmt.Errorf("failed to find SSO session section, %v", c.SSOSessionName)
+		}
+		var ssoSession SSOSession
+		ssoSession.setFromIniSection(section)
+		ssoSession.Name = c.SSOSessionName
+		c.SSOSession = &ssoSession
+	}
+
+	if len(c.ServicesSectionName) > 0 {
+		if section, ok := sections.GetSection(servicesPrefix + c.ServicesSectionName); ok {
+			var svcs Services
+			svcs.setFromIniSection(section)
+			c.Services = svcs
+		}
+	}
+
+	return nil
+}
+
+// setFromIniSection loads the configuration from the profile section defined in
+// the provided INI file. A SharedConfig pointer type value is used so that
+// multiple config file loadings can be chained.
+//
+// Only loads complete, logically grouped values; incomplete groups, such as
+// credentials, will not be set on cfg. For example, if a config file includes
+// aws_access_key_id but no aws_secret_access_key, the aws_access_key_id will
+// be ignored.
+func (c *SharedConfig) setFromIniSection(profile string, section ini.Section) error {
+	if len(section.Name) == 0 {
+		sources := make([]string, 0)
+		for _, v := range section.SourceFile {
+			sources = append(sources, v)
+		}
+
+		return fmt.Errorf("parsing error: could not find profile section name after processing files: %v", sources)
+	}
+
+	if len(section.Errors) != 0 {
+		var errStatement string
+		for i, e := range section.Errors {
+			errStatement += fmt.Sprintf("%d, %v\n", i+1, e.Error())
+		}
+		return fmt.Errorf("Error using profile: \n %v", errStatement)
+	}
+
+	// Assume Role
+	updateString(&c.RoleARN, section, roleArnKey)
+	updateString(&c.ExternalID, section, externalIDKey)
+	updateString(&c.MFASerial, section, mfaSerialKey)
+	updateString(&c.RoleSessionName, section, roleSessionNameKey)
+	updateString(&c.SourceProfileName, section, sourceProfileKey)
+	updateString(&c.CredentialSource, section, credentialSourceKey)
+	updateString(&c.Region, section, regionKey)
+
+	// AWS Single Sign-On (AWS SSO)
+	// SSO session options
+	updateString(&c.SSOSessionName, section, ssoSessionNameKey)
+
+	// Legacy SSO session options
+	updateString(&c.SSORegion, section, ssoRegionKey)
+	updateString(&c.SSOStartURL, section, ssoStartURLKey)
+
+	// SSO fields not used
+	updateString(&c.SSOAccountID, section, ssoAccountIDKey)
+	updateString(&c.SSORoleName, section, ssoRoleNameKey)
+
+	// we're retaining a behavioral quirk with this field that existed before
+	// the removal of literal parsing for #2276:
+	//   - if the key is missing, the config field will not be set
+	//   - if the key is set to a non-numeric, the config field will be set to 0
+	if section.Has(roleDurationSecondsKey) {
+		if v, ok := section.Int(roleDurationSecondsKey); ok {
+			c.RoleDurationSeconds = aws.Duration(time.Duration(v) * time.Second)
+		} else {
+			c.RoleDurationSeconds = aws.Duration(time.Duration(0))
+		}
+	}
+
+	updateString(&c.CredentialProcess, section, credentialProcessKey)
+	updateString(&c.WebIdentityTokenFile, section, webIdentityTokenFileKey)
+
+	updateEndpointDiscoveryType(&c.EnableEndpointDiscovery, section, enableEndpointDiscoveryKey)
+	updateBoolPtr(&c.S3UseARNRegion, section, s3UseARNRegionKey)
+	updateBoolPtr(&c.S3DisableMultiRegionAccessPoints, section, s3DisableMultiRegionAccessPointsKey)
+	updateBoolPtr(&c.S3DisableExpressAuth, section, s3DisableExpressSessionAuthKey)
+
+	if err := updateEC2MetadataServiceEndpointMode(&c.EC2IMDSEndpointMode, section, ec2MetadataServiceEndpointModeKey); err != nil {
+		return fmt.Errorf("failed to load %s from shared config, %v", ec2MetadataServiceEndpointModeKey, err)
+	}
+	updateString(&c.EC2IMDSEndpoint, section, ec2MetadataServiceEndpointKey)
+	updateBoolPtr(&c.EC2IMDSv1Disabled, section, ec2MetadataV1DisabledKey)
+
+	updateUseDualStackEndpoint(&c.UseDualStackEndpoint, section, useDualStackEndpoint)
+	updateUseFIPSEndpoint(&c.UseFIPSEndpoint, section, useFIPSEndpointKey)
+
+	if err := updateDefaultsMode(&c.DefaultsMode, section, defaultsModeKey); err != nil {
+		return fmt.Errorf("failed to load %s from shared config, %w", defaultsModeKey, err)
+	}
+
+	if err := updateInt(&c.RetryMaxAttempts, section, retryMaxAttemptsKey); err != nil {
+		return fmt.Errorf("failed to load %s from shared config, %w", retryMaxAttemptsKey, err)
+	}
+	if err := updateRetryMode(&c.RetryMode, section, retryModeKey); err != nil {
+		return fmt.Errorf("failed to load %s from shared config, %w", retryModeKey, err)
+	}
+
+	updateString(&c.CustomCABundle, section, caBundleKey)
+
+	// user agent app ID added to request User-Agent header
+	updateString(&c.AppID, section, sdkAppID)
+
+	updateBoolPtr(&c.IgnoreConfiguredEndpoints, section, ignoreConfiguredEndpoints)
+
+	updateString(&c.BaseEndpoint, section, endpointURL)
+
+	if err := updateDisableRequestCompression(&c.DisableRequestCompression, section, disableRequestCompression); err != nil {
+		return fmt.Errorf("failed to load %s from shared config, %w", disableRequestCompression, err)
+	}
+	if err := updateRequestMinCompressSizeBytes(&c.RequestMinCompressSizeBytes, section, requestMinCompressionSizeBytes); err != nil {
+		return fmt.Errorf("failed to load %s from shared config, %w", requestMinCompressionSizeBytes, err)
+	}
+
+	if err := updateAIDEndpointMode(&c.AccountIDEndpointMode, section, accountIDEndpointMode); err != nil {
+		return fmt.Errorf("failed to load %s from shared config, %w", accountIDEndpointMode, err)
+	}
+
+	// Shared Credentials
+	creds := aws.Credentials{
+		AccessKeyID:     section.String(accessKeyIDKey),
+		SecretAccessKey: section.String(secretAccessKey),
+		SessionToken:    section.String(sessionTokenKey),
+		Source:          fmt.Sprintf("SharedConfigCredentials: %s", section.SourceFile[accessKeyIDKey]),
+		AccountID:       section.String(accountIDKey),
+	}
+
+	if creds.HasKeys() {
+		c.Credentials = creds
+	}
+
+	updateString(&c.ServicesSectionName, section, servicesSectionKey)
+
+	return nil
+}
+
+func updateRequestMinCompressSizeBytes(bytes **int64, sec ini.Section, key string) error {
+	if !sec.Has(key) {
+		return nil
+	}
+
+	v, ok := sec.Int(key)
+	if !ok {
+		return fmt.Errorf("invalid value for min request compression size bytes %s, need int64", sec.String(key))
+	}
+	if v < 0 || v > smithyrequestcompression.MaxRequestMinCompressSizeBytes {
+		return fmt.Errorf("invalid range for min request compression size bytes %d, must be within 0 and 10485760 inclusively", v)
+	}
+	*bytes = new(int64)
+	**bytes = v
+	return nil
+}
+
+func updateDisableRequestCompression(disable **bool, sec ini.Section, key string) error {
+	if !sec.Has(key) {
+		return nil
+	}
+
+	v := sec.String(key)
+	switch {
+	case v == "true":
+		*disable = new(bool)
+		**disable = true
+	case v == "false":
+		*disable = new(bool)
+		**disable = false
+	default:
+		return fmt.Errorf("invalid value for shared config profile field, %s=%s, need true or false", key, v)
+	}
+	return nil
+}
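The two compression settings parsed above map to profile fields like the following fragment (placeholder profile name; the size must stay within the 0 to 10485760 range enforced by `updateRequestMinCompressSizeBytes`):

```ini
[profile uploads]
disable_request_compression = false
request_min_compression_size_bytes = 10240
```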
+
+func updateAIDEndpointMode(m *aws.AccountIDEndpointMode, sec ini.Section, key string) error {
+	if !sec.Has(key) {
+		return nil
+	}
+
+	v := sec.String(key)
+	switch v {
+	case "preferred":
+		*m = aws.AccountIDEndpointModePreferred
+	case "required":
+		*m = aws.AccountIDEndpointModeRequired
+	case "disabled":
+		*m = aws.AccountIDEndpointModeDisabled
+	default:
+		return fmt.Errorf("invalid value for shared config profile field, %s=%s, must be preferred/required/disabled", key, v)
+	}
+
+	return nil
+}
+
+func (c SharedConfig) getRequestMinCompressSizeBytes(ctx context.Context) (int64, bool, error) {
+	if c.RequestMinCompressSizeBytes == nil {
+		return 0, false, nil
+	}
+	return *c.RequestMinCompressSizeBytes, true, nil
+}
+
+func (c SharedConfig) getDisableRequestCompression(ctx context.Context) (bool, bool, error) {
+	if c.DisableRequestCompression == nil {
+		return false, false, nil
+	}
+	return *c.DisableRequestCompression, true, nil
+}
+
+func (c SharedConfig) getAccountIDEndpointMode(ctx context.Context) (aws.AccountIDEndpointMode, bool, error) {
+	return c.AccountIDEndpointMode, len(c.AccountIDEndpointMode) > 0, nil
+}
+
+func updateDefaultsMode(mode *aws.DefaultsMode, section ini.Section, key string) error {
+	if !section.Has(key) {
+		return nil
+	}
+	value := section.String(key)
+	if ok := mode.SetFromString(value); !ok {
+		return fmt.Errorf("invalid value: %s", value)
+	}
+	return nil
+}
+
+func updateRetryMode(mode *aws.RetryMode, section ini.Section, key string) (err error) {
+	if !section.Has(key) {
+		return nil
+	}
+	value := section.String(key)
+	if *mode, err = aws.ParseRetryMode(value); err != nil {
+		return err
+	}
+	return nil
+}
+
+func updateEC2MetadataServiceEndpointMode(endpointMode *imds.EndpointModeState, section ini.Section, key string) error {
+	if !section.Has(key) {
+		return nil
+	}
+	value := section.String(key)
+	return endpointMode.SetFromString(value)
+}
+
+func (c *SharedConfig) validateCredentialsConfig(profile string) error {
+	if err := c.validateCredentialsRequireARN(profile); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func (c *SharedConfig) validateCredentialsRequireARN(profile string) error {
+	var credSource string
+
+	switch {
+	case len(c.SourceProfileName) != 0:
+		credSource = sourceProfileKey
+	case len(c.CredentialSource) != 0:
+		credSource = credentialSourceKey
+	case len(c.WebIdentityTokenFile) != 0:
+		credSource = webIdentityTokenFileKey
+	}
+
+	if len(credSource) != 0 && len(c.RoleARN) == 0 {
+		return CredentialRequiresARNError{
+			Type:    credSource,
+			Profile: profile,
+		}
+	}
+
+	return nil
+}
+
+func (c *SharedConfig) validateCredentialType() error {
+	// Only one or no credential type can be defined.
+	if !oneOrNone(
+		len(c.SourceProfileName) != 0,
+		len(c.CredentialSource) != 0,
+		len(c.CredentialProcess) != 0,
+		len(c.WebIdentityTokenFile) != 0,
+	) {
+		return fmt.Errorf("only one credential type may be specified per profile: source profile, credential source, credential process, web identity token")
+	}
+
+	return nil
+}
+
+func (c *SharedConfig) validateSSOConfiguration() error {
+	if c.hasSSOTokenProviderConfiguration() {
+		err := c.validateSSOTokenProviderConfiguration()
+		if err != nil {
+			return err
+		}
+		return nil
+	}
+
+	if c.hasLegacySSOConfiguration() {
+		err := c.validateLegacySSOConfiguration()
+		if err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+func (c *SharedConfig) validateSSOTokenProviderConfiguration() error {
+	var missing []string
+
+	if len(c.SSOSessionName) == 0 {
+		missing = append(missing, ssoSessionNameKey)
+	}
+
+	if c.SSOSession == nil {
+		missing = append(missing, ssoSectionPrefix)
+	} else {
+		if len(c.SSOSession.SSORegion) == 0 {
+			missing = append(missing, ssoRegionKey)
+		}
+
+		if len(c.SSOSession.SSOStartURL) == 0 {
+			missing = append(missing, ssoStartURLKey)
+		}
+	}
+
+	if len(missing) > 0 {
+		return fmt.Errorf("profile %q is configured to use SSO but is missing required configuration: %s",
+			c.Profile, strings.Join(missing, ", "))
+	}
+
+	if len(c.SSORegion) > 0 && c.SSORegion != c.SSOSession.SSORegion {
+		return fmt.Errorf("%s in profile %q must match %s in %s", ssoRegionKey, c.Profile, ssoRegionKey, ssoSectionPrefix)
+	}
+
+	if len(c.SSOStartURL) > 0 && c.SSOStartURL != c.SSOSession.SSOStartURL {
+		return fmt.Errorf("%s in profile %q must match %s in %s", ssoStartURLKey, c.Profile, ssoStartURLKey, ssoSectionPrefix)
+	}
+
+	return nil
+}
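A config-file fragment satisfying `validateSSOTokenProviderConfiguration` looks like the following (placeholder names and URL); the profile's `sso_session` must name an `[sso-session ...]` section that supplies `sso_region` and `sso_start_url`:

```ini
[profile dev]
sso_session = my-sso
sso_account_id = 123456789012
sso_role_name = ReadOnly

[sso-session my-sso]
sso_region = us-east-1
sso_start_url = https://example.awsapps.com/start
```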
+
+func (c *SharedConfig) validateLegacySSOConfiguration() error {
+	var missing []string
+
+	if len(c.SSORegion) == 0 {
+		missing = append(missing, ssoRegionKey)
+	}
+
+	if len(c.SSOStartURL) == 0 {
+		missing = append(missing, ssoStartURLKey)
+	}
+
+	if len(c.SSOAccountID) == 0 {
+		missing = append(missing, ssoAccountIDKey)
+	}
+
+	if len(c.SSORoleName) == 0 {
+		missing = append(missing, ssoRoleNameKey)
+	}
+
+	if len(missing) > 0 {
+		return fmt.Errorf("profile %q is configured to use SSO but is missing required configuration: %s",
+			c.Profile, strings.Join(missing, ", "))
+	}
+	return nil
+}
+
+func (c *SharedConfig) hasCredentials() bool {
+	switch {
+	case len(c.SourceProfileName) != 0:
+	case len(c.CredentialSource) != 0:
+	case len(c.CredentialProcess) != 0:
+	case len(c.WebIdentityTokenFile) != 0:
+	case c.hasSSOConfiguration():
+	case c.Credentials.HasKeys():
+	default:
+		return false
+	}
+
+	return true
+}
+
+func (c *SharedConfig) hasSSOConfiguration() bool {
+	return c.hasSSOTokenProviderConfiguration() || c.hasLegacySSOConfiguration()
+}
+
+func (c *SharedConfig) hasSSOTokenProviderConfiguration() bool {
+	return len(c.SSOSessionName) > 0
+}
+
+func (c *SharedConfig) hasLegacySSOConfiguration() bool {
+	return len(c.SSORegion) > 0 || len(c.SSOAccountID) > 0 || len(c.SSOStartURL) > 0 || len(c.SSORoleName) > 0
+}
+
+func (c *SharedConfig) clearAssumeRoleOptions() {
+	c.RoleARN = ""
+	c.ExternalID = ""
+	c.MFASerial = ""
+	c.RoleSessionName = ""
+	c.SourceProfileName = ""
+}
+
+func (c *SharedConfig) clearCredentialOptions() {
+	c.CredentialSource = ""
+	c.CredentialProcess = ""
+	c.WebIdentityTokenFile = ""
+	c.Credentials = aws.Credentials{}
+	c.SSOAccountID = ""
+	c.SSORegion = ""
+	c.SSORoleName = ""
+	c.SSOStartURL = ""
+}
+
+// SharedConfigLoadError is an error for when the shared config file failed to load.
+type SharedConfigLoadError struct {
+	Filename string
+	Err      error
+}
+
+// Unwrap returns the underlying error that caused the failure.
+func (e SharedConfigLoadError) Unwrap() error {
+	return e.Err
+}
+
+func (e SharedConfigLoadError) Error() string {
+	return fmt.Sprintf("failed to load shared config file, %s, %v", e.Filename, e.Err)
+}
+
+// SharedConfigProfileNotExistError is an error for the shared config when
+// the profile was not found in the config file.
+type SharedConfigProfileNotExistError struct {
+	Filename []string
+	Profile  string
+	Err      error
+}
+
+// Unwrap returns the underlying error that caused the failure.
+func (e SharedConfigProfileNotExistError) Unwrap() error {
+	return e.Err
+}
+
+func (e SharedConfigProfileNotExistError) Error() string {
+	return fmt.Sprintf("failed to get shared config profile, %s", e.Profile)
+}
+
+// SharedConfigAssumeRoleError is an error for the shared config when the
+// profile contains assume role information, but that information is invalid
+// or not complete.
+type SharedConfigAssumeRoleError struct {
+	Profile string
+	RoleARN string
+	Err     error
+}
+
+// Unwrap returns the underlying error that caused the failure.
+func (e SharedConfigAssumeRoleError) Unwrap() error {
+	return e.Err
+}
+
+func (e SharedConfigAssumeRoleError) Error() string {
+	return fmt.Sprintf("failed to load assume role %s, of profile %s, %v",
+		e.RoleARN, e.Profile, e.Err)
+}
+
+// CredentialRequiresARNError provides the error for shared config credentials
+// that are incorrectly configured in the shared config or credentials file.
+type CredentialRequiresARNError struct {
+	// type of credentials that were configured.
+	Type string
+
+	// Profile name the credentials were in.
+	Profile string
+}
+
+// Error satisfies the error interface.
+func (e CredentialRequiresARNError) Error() string {
+	return fmt.Sprintf(
+		"credential type %s requires role_arn, profile %s",
+		e.Type, e.Profile,
+	)
+}
+
+func oneOrNone(bs ...bool) bool {
+	var count int
+
+	for _, b := range bs {
+		if b {
+			count++
+			if count > 1 {
+				return false
+			}
+		}
+	}
+
+	return true
+}
+
+// updateString will only update dst with the value in the section key, if the
+// key is present in the section.
+func updateString(dst *string, section ini.Section, key string) {
+	if !section.Has(key) {
+		return
+	}
+	*dst = section.String(key)
+}
+
+// updateInt will only update dst with the value in the section key, if the
+// key is present in the section.
+//
+// Down casts the INI integer value from an int64 to an int, which could be a
+// different bit size depending on the platform.
+func updateInt(dst *int, section ini.Section, key string) error {
+	if !section.Has(key) {
+		return nil
+	}
+
+	v, ok := section.Int(key)
+	if !ok {
+		return fmt.Errorf("invalid value %s=%s, expect integer", key, section.String(key))
+	}
+
+	*dst = int(v)
+	return nil
+}
+
+// updateBool will only update dst with the value in the section key, if the
+// key is present in the section.
+func updateBool(dst *bool, section ini.Section, key string) {
+	if !section.Has(key) {
+		return
+	}
+
+	// retains pre-#2276 behavior where non-bool value would resolve to false
+	v, _ := section.Bool(key)
+	*dst = v
+}
+
+// updateBoolPtr will only update dst with the value in the section key, if
+// the key is present in the section.
+func updateBoolPtr(dst **bool, section ini.Section, key string) {
+	if !section.Has(key) {
+		return
+	}
+
+	// retains pre-#2276 behavior where non-bool value would resolve to false
+	v, _ := section.Bool(key)
+	*dst = new(bool)
+	**dst = v
+}
+
+// updateEndpointDiscoveryType will only update the dst with the value in the section, if
+// a valid key and corresponding EndpointDiscoveryType is found.
+func updateEndpointDiscoveryType(dst *aws.EndpointDiscoveryEnableState, section ini.Section, key string) {
+	if !section.Has(key) {
+		return
+	}
+
+	value := section.String(key)
+	if len(value) == 0 {
+		return
+	}
+
+	switch {
+	case strings.EqualFold(value, endpointDiscoveryDisabled):
+		*dst = aws.EndpointDiscoveryDisabled
+	case strings.EqualFold(value, endpointDiscoveryEnabled):
+		*dst = aws.EndpointDiscoveryEnabled
+	case strings.EqualFold(value, endpointDiscoveryAuto):
+		*dst = aws.EndpointDiscoveryAuto
+	}
+}
+
+// updateUseDualStackEndpoint will only update the dst with the value in the
+// section, if the key is present in the section.
+func updateUseDualStackEndpoint(dst *aws.DualStackEndpointState, section ini.Section, key string) {
+	if !section.Has(key) {
+		return
+	}
+
+	// retains pre-#2276 behavior where non-bool value would resolve to false
+	if v, _ := section.Bool(key); v {
+		*dst = aws.DualStackEndpointStateEnabled
+	} else {
+		*dst = aws.DualStackEndpointStateDisabled
+	}
+}
+
+// updateUseFIPSEndpoint will only update the dst with the value in the
+// section, if the key is present in the section.
+func updateUseFIPSEndpoint(dst *aws.FIPSEndpointState, section ini.Section, key string) {
+	if !section.Has(key) {
+		return
+	}
+
+	// retains pre-#2276 behavior where non-bool value would resolve to false
+	if v, _ := section.Bool(key); v {
+		*dst = aws.FIPSEndpointStateEnabled
+	} else {
+		*dst = aws.FIPSEndpointStateDisabled
+	}
+}
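For reference outside the diff, the mutual-exclusion check that `validateCredentialType` relies on can be exercised in isolation. This standalone sketch copies the `oneOrNone` helper verbatim and shows its behavior for zero, one, and multiple configured credential types:

```go
package main

import "fmt"

// oneOrNone reports whether at most one of the given flags is true. The
// shared-config validation uses it to ensure a profile defines at most one
// credential type (source profile, credential source, process, or web
// identity token).
func oneOrNone(bs ...bool) bool {
	var count int
	for _, b := range bs {
		if b {
			count++
			if count > 1 {
				return false
			}
		}
	}
	return true
}

func main() {
	fmt.Println(oneOrNone())                  // true: no credential type set
	fmt.Println(oneOrNone(true, false))       // true: exactly one type set
	fmt.Println(oneOrNone(true, false, true)) // false: conflicting types
}
```

Passing the length-checks of each credential field as booleans keeps the validation short-circuiting: the helper returns false as soon as a second type is seen.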

vendor/github.com/aws/aws-sdk-go-v2/credentials/CHANGELOG.md 🔗

@@ -0,0 +1,591 @@
+# v1.17.27 (2024-07-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.26 (2024-07-10.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.25 (2024-07-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.24 (2024-07-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.23 (2024-06-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.22 (2024-06-26)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.21 (2024-06-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.20 (2024-06-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.19 (2024-06-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.18 (2024-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.17 (2024-06-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.16 (2024-05-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.15 (2024-05-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.14 (2024-05-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.13 (2024-05-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.12 (2024-05-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.11 (2024-04-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.10 (2024-03-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.9 (2024-03-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.8 (2024-03-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.7 (2024-03-07)
+
+* **Bug Fix**: Remove dependency on go-cmp.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.6 (2024-03-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.5 (2024-03-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.4 (2024-02-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.3 (2024-02-22)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.2 (2024-02-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.1 (2024-02-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.0 (2024-02-13)
+
+* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.16 (2024-01-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.15 (2024-01-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.14 (2024-01-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.13 (2023-12-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.12 (2023-12-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.11 (2023-12-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.10 (2023-12-06)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.9 (2023-12-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.8 (2023-11-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.7 (2023-11-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.6 (2023-11-28.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.5 (2023-11-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.4 (2023-11-21)
+
+* **Bug Fix**: Don't expect error responses to have a JSON payload in the endpointcreds provider.
+
+# v1.16.3 (2023-11-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.2 (2023-11-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.1 (2023-11-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.0 (2023-11-14)
+
+* **Feature**: Add support for dynamic auth token from file and EKS container host in absolute/relative URIs in the HTTP credential provider.
+
+# v1.15.2 (2023-11-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.1 (2023-11-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.0 (2023-11-01)
+
+* **Feature**: Adds support for configured endpoints via environment variables and the AWS shared configuration file.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.0 (2023-10-31)
+
+* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.43 (2023-10-12)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.42 (2023-10-06)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.41 (2023-10-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.40 (2023-09-22)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.39 (2023-09-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.38 (2023-09-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.37 (2023-09-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.36 (2023-08-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.35 (2023-08-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.34 (2023-08-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.33 (2023-08-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.32 (2023-08-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.31 (2023-08-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.30 (2023-07-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.29 (2023-07-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.28 (2023-07-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.27 (2023-07-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.26 (2023-06-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.25 (2023-06-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.24 (2023-05-09)
+
+* No change notes available for this release.
+
+# v1.13.23 (2023-05-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.22 (2023-05-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.21 (2023-04-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.20 (2023-04-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.19 (2023-04-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.18 (2023-03-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.17 (2023-03-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.16 (2023-03-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.15 (2023-02-22)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.14 (2023-02-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.13 (2023-02-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.12 (2023-02-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.11 (2023-02-01)
+
+* No change notes available for this release.
+
+# v1.13.10 (2023-01-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.9 (2023-01-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.8 (2023-01-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.7 (2022-12-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.6 (2022-12-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.5 (2022-12-15)
+
+* **Bug Fix**: Unify logic between shared config and in finding home directory
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.4 (2022-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.3 (2022-11-22)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.2 (2022-11-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.1 (2022-11-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.0 (2022-11-11)
+
+* **Announcement**: When using the SSOTokenProvider, a previous implementation incorrectly compensated for invalid SSOTokenProvider configurations in the shared profile. This has been fixed via PR #1903 and tracked in issue #1846
+* **Feature**: Adds token refresh support (via SSOTokenProvider) when using the SSOCredentialProvider
+
+# v1.12.24 (2022-11-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.23 (2022-10-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.22 (2022-10-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.21 (2022-09-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.20 (2022-09-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.19 (2022-09-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.18 (2022-09-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.17 (2022-08-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.16 (2022-08-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.15 (2022-08-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.14 (2022-08-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.13 (2022-08-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.12 (2022-08-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.11 (2022-08-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.10 (2022-08-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.9 (2022-07-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.8 (2022-07-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.7 (2022-06-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.6 (2022-06-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.5 (2022-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.4 (2022-05-26)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.3 (2022-05-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.2 (2022-05-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.1 (2022-05-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.0 (2022-04-25)
+
+* **Feature**: Adds Duration and Policy options that can be used when creating stscreds.WebIdentityRoleProvider credentials provider.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.2 (2022-03-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.1 (2022-03-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.0 (2022-03-23)
+
+* **Feature**: Update `ec2rolecreds` package's `Provider` to implement support for CredentialsCache new optional caching strategy interfaces, HandleFailRefreshCredentialsCacheStrategy and AdjustExpiresByCredentialsCacheStrategy.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.0 (2022-03-08)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.0 (2022-02-24)
+
+* **Feature**: Adds support for `SourceIdentity` to `stscreds.AssumeRoleProvider` [#1588](https://github.com/aws/aws-sdk-go-v2/pull/1588). Fixes [#1575](https://github.com/aws/aws-sdk-go-v2/issues/1575)
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.0 (2022-01-14)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.7.0 (2022-01-07)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.5 (2021-12-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.4 (2021-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.3 (2021-11-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.2 (2021-11-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.1 (2021-11-12)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.0 (2021-11-06)
+
+* **Feature**: The SDK now supports configuration of FIPS and DualStack endpoints using environment variables, shared configuration, or programmatically.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.5.0 (2021-10-21)
+
+* **Feature**: Updated to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.3 (2021-10-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.2 (2021-09-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.1 (2021-09-10)
+
+* **Documentation**: Fixes the AssumeRoleProvider's documentation for using custom TokenProviders.
+
+# v1.4.0 (2021-08-27)
+
+* **Feature**: Adds support for Tags and TransitiveTagKeys to stscreds.AssumeRoleProvider. Closes https://github.com/aws/aws-sdk-go-v2/issues/723
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.3 (2021-08-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.2 (2021-08-04)
+
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.1 (2021-07-15)
+
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.0 (2021-06-25)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Bug Fix**: Fixed example usages of aws.CredentialsCache ([#1275](https://github.com/aws/aws-sdk-go-v2/pull/1275))
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.1 (2021-05-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.0 (2021-05-14)
+
+* **Feature**: Constant has been added to modules to enable runtime version inspection for reporting.
+* **Dependency Update**: Updated to the latest SDK module versions
+

vendor/github.com/aws/aws-sdk-go-v2/credentials/LICENSE.txt 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/credentials/ec2rolecreds/doc.go 🔗

@@ -0,0 +1,58 @@
+// Package ec2rolecreds provides the credentials provider implementation for
+// retrieving AWS credentials from Amazon EC2 Instance Roles via Amazon EC2 IMDS.
+//
+// # Concurrency and caching
+//
+// The Provider is not safe for concurrent use, and does not provide any
+// caching of retrieved credentials. You should wrap the Provider with an
+// `aws.CredentialsCache` to provide concurrency safety and caching of
+// credentials.
+//
+// # Loading credentials with the SDK's AWS Config
+//
+// The EC2 Instance role credentials provider will automatically be the resolved
+// credential provider in the credential chain if no other credential provider is
+// resolved first.
+//
+// To explicitly instruct the SDK's credential resolution to use the EC2 Instance
+// role for credentials, specify a `credential_source` property in the config
+// profile the SDK will load.
+//
+//	[default]
+//	credential_source = Ec2InstanceMetadata
+//
+// # Loading credentials with the Provider directly
+//
+// Another way to use the EC2 Instance role credentials provider is to create it
+// directly and assign it as the credentials provider for an API client.
+//
+// The following example creates a credentials provider, and wraps it with the
+// CredentialsCache before assigning the provider to the Amazon S3 API
+// client's Credentials option.
+//
+//	provider := ec2rolecreds.New()
+//
+//	// Create the service client value configured for credentials.
+//	svc := s3.New(s3.Options{
+//	  Credentials: aws.NewCredentialsCache(provider),
+//	})
+//
+// If you need more control, you can set configuration options on the
+// credentials provider, using the imds.Options type to configure the EC2 IMDS
+// API Client, and the credentials cache to adjust the ExpiryWindow of the
+// retrieved credentials.
+//
+//	provider := ec2rolecreds.New(func(o *ec2rolecreds.Options) {
+//		// See imds.Options type's documentation for more options available.
+//		o.Client = imds.New(imds.Options{
+//			HTTPClient: customHTTPClient,
+//		})
+//	})
+//
+//	// Modify how soon credentials expire prior to their original expiry time.
+//	cached := aws.NewCredentialsCache(provider, func(o *aws.CredentialsCacheOptions) {
+//		o.ExpiryWindow = 5 * time.Minute
+//	})
+//
+// # EC2 IMDS API Client
+//
+// See the github.com/aws/aws-sdk-go-v2/feature/ec2/imds module for more details on
+// configuring the client, and options available.
+package ec2rolecreds

vendor/github.com/aws/aws-sdk-go-v2/credentials/ec2rolecreds/provider.go 🔗

@@ -0,0 +1,229 @@
+package ec2rolecreds
+
+import (
+	"bufio"
+	"context"
+	"encoding/json"
+	"fmt"
+	"math"
+	"path"
+	"strings"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/feature/ec2/imds"
+	sdkrand "github.com/aws/aws-sdk-go-v2/internal/rand"
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+	"github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/logging"
+	"github.com/aws/smithy-go/middleware"
+)
+
+// ProviderName is the name of the EC2Role credentials provider.
+const ProviderName = "EC2RoleProvider"
+
+// GetMetadataAPIClient provides the interface for an EC2 IMDS API client for the
+// GetMetadata operation.
+type GetMetadataAPIClient interface {
+	GetMetadata(context.Context, *imds.GetMetadataInput, ...func(*imds.Options)) (*imds.GetMetadataOutput, error)
+}
+
+// A Provider retrieves credentials from the EC2 service, and keeps track of
+// whether those credentials are expired.
+//
+// The New function must be used to create the Provider, optionally with a
+// custom EC2 IMDS client.
+//
+//	p := ec2rolecreds.New(func(o *ec2rolecreds.Options) {
+//	     o.Client = imds.New(imds.Options{/* custom options */})
+//	})
+type Provider struct {
+	options Options
+}
+
+// Options is a list of user-settable options for setting the behavior of the Provider.
+type Options struct {
+	// The API client that will be used by the provider to make GetMetadata API
+	// calls to EC2 IMDS.
+	//
+	// If nil, the provider will default to the EC2 IMDS client.
+	Client GetMetadataAPIClient
+}
+
+// New returns an initialized Provider value configured to retrieve
+// credentials from EC2 Instance Metadata service.
+func New(optFns ...func(*Options)) *Provider {
+	options := Options{}
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	if options.Client == nil {
+		options.Client = imds.New(imds.Options{})
+	}
+
+	return &Provider{
+		options: options,
+	}
+}
+
+// Retrieve retrieves credentials from the EC2 service. An error will be
+// returned if the request fails, or the desired credentials cannot be extracted.
+func (p *Provider) Retrieve(ctx context.Context) (aws.Credentials, error) {
+	credsList, err := requestCredList(ctx, p.options.Client)
+	if err != nil {
+		return aws.Credentials{Source: ProviderName}, err
+	}
+
+	if len(credsList) == 0 {
+		return aws.Credentials{Source: ProviderName},
+			fmt.Errorf("unexpected empty EC2 IMDS role list")
+	}
+	credsName := credsList[0]
+
+	roleCreds, err := requestCred(ctx, p.options.Client, credsName)
+	if err != nil {
+		return aws.Credentials{Source: ProviderName}, err
+	}
+
+	creds := aws.Credentials{
+		AccessKeyID:     roleCreds.AccessKeyID,
+		SecretAccessKey: roleCreds.SecretAccessKey,
+		SessionToken:    roleCreds.Token,
+		Source:          ProviderName,
+
+		CanExpire: true,
+		Expires:   roleCreds.Expiration,
+	}
+
+	// Cap role credentials Expires to 1 hour so they can be refreshed more
+	// often. Jitter will be applied by the credentials cache, if one is used.
+	if anHour := sdk.NowTime().Add(1 * time.Hour); creds.Expires.After(anHour) {
+		creds.Expires = anHour
+	}
+
+	return creds, nil
+}
+
+// HandleFailToRefresh will extend the credentials' Expires time if they are
+// expired. If the credentials will not expire within the minimum time, they
+// will be returned unchanged.
+//
+// If the credentials cannot expire, the original error will be returned.
+func (p *Provider) HandleFailToRefresh(ctx context.Context, prevCreds aws.Credentials, err error) (
+	aws.Credentials, error,
+) {
+	if !prevCreds.CanExpire {
+		return aws.Credentials{}, err
+	}
+
+	if prevCreds.Expires.After(sdk.NowTime().Add(5 * time.Minute)) {
+		return prevCreds, nil
+	}
+
+	newCreds := prevCreds
+	randFloat64, err := sdkrand.CryptoRandFloat64()
+	if err != nil {
+		return aws.Credentials{}, fmt.Errorf("failed to get random float, %w", err)
+	}
+
+	// Random distribution of [5,15) minutes.
+	expireOffset := time.Duration(randFloat64*float64(10*time.Minute)) + 5*time.Minute
+	newCreds.Expires = sdk.NowTime().Add(expireOffset)
+
+	logger := middleware.GetLogger(ctx)
+	logger.Logf(logging.Warn, "Attempting credential expiration extension due to a credential service availability issue. A refresh of these credentials will be attempted again in %v minutes.", math.Floor(expireOffset.Minutes()))
+
+	return newCreds, nil
+}
+
+// AdjustExpiresBy adds the passed-in duration to the passed-in credentials'
+// Expires time, unless the time until Expires is less than 15 minutes.
+// Returns the credentials, even if not updated.
+func (p *Provider) AdjustExpiresBy(creds aws.Credentials, dur time.Duration) (
+	aws.Credentials, error,
+) {
+	if !creds.CanExpire {
+		return creds, nil
+	}
+	if creds.Expires.Before(sdk.NowTime().Add(15 * time.Minute)) {
+		return creds, nil
+	}
+
+	creds.Expires = creds.Expires.Add(dur)
+	return creds, nil
+}
+
+// ec2RoleCredRespBody provides the shape for unmarshaling credential
+// request responses.
+type ec2RoleCredRespBody struct {
+	// Success State
+	Expiration      time.Time
+	AccessKeyID     string
+	SecretAccessKey string
+	Token           string
+
+	// Error state
+	Code    string
+	Message string
+}
+
+const iamSecurityCredsPath = "/iam/security-credentials/"
+
+// requestCredList requests the list of credential role names from the EC2
+// service. An error is returned if the request cannot be made, or the
+// response cannot be read.
+func requestCredList(ctx context.Context, client GetMetadataAPIClient) ([]string, error) {
+	resp, err := client.GetMetadata(ctx, &imds.GetMetadataInput{
+		Path: iamSecurityCredsPath,
+	})
+	if err != nil {
+		return nil, fmt.Errorf("no EC2 IMDS role found, %w", err)
+	}
+	defer resp.Content.Close()
+
+	credsList := []string{}
+	s := bufio.NewScanner(resp.Content)
+	for s.Scan() {
+		credsList = append(credsList, s.Text())
+	}
+
+	if err := s.Err(); err != nil {
+		return nil, fmt.Errorf("failed to read EC2 IMDS role, %w", err)
+	}
+
+	return credsList, nil
+}
+
+// requestCred requests the credentials for a specific role from the EC2 service.
+//
+// If the credentials cannot be found, or there is an error reading the response,
+// an error will be returned.
+func requestCred(ctx context.Context, client GetMetadataAPIClient, credsName string) (ec2RoleCredRespBody, error) {
+	resp, err := client.GetMetadata(ctx, &imds.GetMetadataInput{
+		Path: path.Join(iamSecurityCredsPath, credsName),
+	})
+	if err != nil {
+		return ec2RoleCredRespBody{},
+			fmt.Errorf("failed to get %s EC2 IMDS role credentials, %w",
+				credsName, err)
+	}
+	defer resp.Content.Close()
+
+	var respCreds ec2RoleCredRespBody
+	if err := json.NewDecoder(resp.Content).Decode(&respCreds); err != nil {
+		return ec2RoleCredRespBody{},
+			fmt.Errorf("failed to decode %s EC2 IMDS role credentials, %w",
+				credsName, err)
+	}
+
+	if !strings.EqualFold(respCreds.Code, "Success") {
+		// If an error code was returned something failed requesting the role.
+		return ec2RoleCredRespBody{},
+			fmt.Errorf("failed to get %s EC2 IMDS role credentials, %w",
+				credsName,
+				&smithy.GenericAPIError{Code: respCreds.Code, Message: respCreds.Message})
+	}
+
+	return respCreds, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/credentials/endpointcreds/internal/client/auth.go 🔗

@@ -0,0 +1,48 @@
+package client
+
+import (
+	"context"
+	"github.com/aws/smithy-go/middleware"
+)
+
+type getIdentityMiddleware struct {
+	options Options
+}
+
+func (*getIdentityMiddleware) ID() string {
+	return "GetIdentity"
+}
+
+func (m *getIdentityMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	return next.HandleFinalize(ctx, in)
+}
+
+type signRequestMiddleware struct {
+}
+
+func (*signRequestMiddleware) ID() string {
+	return "Signing"
+}
+
+func (m *signRequestMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	return next.HandleFinalize(ctx, in)
+}
+
+type resolveAuthSchemeMiddleware struct {
+	operation string
+	options   Options
+}
+
+func (*resolveAuthSchemeMiddleware) ID() string {
+	return "ResolveAuthScheme"
+}
+
+func (m *resolveAuthSchemeMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	return next.HandleFinalize(ctx, in)
+}

vendor/github.com/aws/aws-sdk-go-v2/credentials/endpointcreds/internal/client/client.go 🔗

@@ -0,0 +1,165 @@
+package client
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/aws/retry"
+	awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
+	"github.com/aws/smithy-go"
+	smithymiddleware "github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// ServiceID is the client identifier
+const ServiceID = "endpoint-credentials"
+
+// HTTPClient is a client for sending HTTP requests
+type HTTPClient interface {
+	Do(*http.Request) (*http.Response, error)
+}
+
+// Options is the endpoint client configurable options
+type Options struct {
+	// The endpoint to retrieve credentials from
+	Endpoint string
+
+	// The HTTP client to invoke API calls with. Defaults to client's default HTTP
+	// implementation if nil.
+	HTTPClient HTTPClient
+
+	// Retryer guides how HTTP requests should be retried in case of recoverable
+	// failures. When nil the API client will use a default retryer.
+	Retryer aws.Retryer
+
+	// Set of options to modify how the credentials operation is invoked.
+	APIOptions []func(*smithymiddleware.Stack) error
+}
+
+// Copy creates a copy of the API options.
+func (o Options) Copy() Options {
+	to := o
+	to.APIOptions = make([]func(*smithymiddleware.Stack) error, len(o.APIOptions))
+	copy(to.APIOptions, o.APIOptions)
+	return to
+}
+
+// Client is a client for retrieving AWS credentials from an endpoint
+type Client struct {
+	options Options
+}
+
+// New constructs a new Client from the given options
+func New(options Options, optFns ...func(*Options)) *Client {
+	options = options.Copy()
+
+	if options.HTTPClient == nil {
+		options.HTTPClient = awshttp.NewBuildableClient()
+	}
+
+	if options.Retryer == nil {
+		// Amazon-owned implementations of this endpoint are known to sometimes
+		// return plaintext responses (i.e. no Code), so in addition to the
+		// standard retryables, add a few extra retryable status codes.
+		options.Retryer = retry.NewStandard(func(o *retry.StandardOptions) {
+			o.Retryables = append(o.Retryables, retry.RetryableHTTPStatusCode{
+				Codes: map[int]struct{}{
+					http.StatusTooManyRequests: {},
+				},
+			})
+		})
+	}
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	client := &Client{
+		options: options,
+	}
+
+	return client
+}
+
+// GetCredentialsInput is the input to send to the endpoint service to receive credentials.
+type GetCredentialsInput struct {
+	AuthorizationToken string
+}
+
+// GetCredentials retrieves credentials from the credential endpoint
+func (c *Client) GetCredentials(ctx context.Context, params *GetCredentialsInput, optFns ...func(*Options)) (*GetCredentialsOutput, error) {
+	stack := smithymiddleware.NewStack("GetCredentials", smithyhttp.NewStackRequest)
+	options := c.options.Copy()
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	stack.Serialize.Add(&serializeOpGetCredential{}, smithymiddleware.After)
+	stack.Build.Add(&buildEndpoint{Endpoint: options.Endpoint}, smithymiddleware.After)
+	stack.Deserialize.Add(&deserializeOpGetCredential{}, smithymiddleware.After)
+	addProtocolFinalizerMiddlewares(stack, options, "GetCredentials")
+	retry.AddRetryMiddlewares(stack, retry.AddRetryMiddlewaresOptions{Retryer: options.Retryer})
+	middleware.AddSDKAgentKey(middleware.FeatureMetadata, ServiceID)
+	smithyhttp.AddErrorCloseResponseBodyMiddleware(stack)
+	smithyhttp.AddCloseResponseBodyMiddleware(stack)
+
+	for _, fn := range options.APIOptions {
+		if err := fn(stack); err != nil {
+			return nil, err
+		}
+	}
+
+	handler := smithymiddleware.DecorateHandler(smithyhttp.NewClientHandler(options.HTTPClient), stack)
+	result, _, err := handler.Handle(ctx, params)
+	if err != nil {
+		return nil, err
+	}
+
+	return result.(*GetCredentialsOutput), err
+}
+
+// GetCredentialsOutput is the response from the credential endpoint
+type GetCredentialsOutput struct {
+	Expiration      *time.Time
+	AccessKeyID     string
+	SecretAccessKey string
+	Token           string
+	AccountID       string
+}
+
+// EndpointError is an error returned from the endpoint service
+type EndpointError struct {
+	Code       string            `json:"code"`
+	Message    string            `json:"message"`
+	Fault      smithy.ErrorFault `json:"-"`
+	statusCode int               `json:"-"`
+}
+
+// Error is the error message string
+func (e *EndpointError) Error() string {
+	return fmt.Sprintf("%s: %s", e.Code, e.Message)
+}
+
+// ErrorCode is the error code returned by the endpoint
+func (e *EndpointError) ErrorCode() string {
+	return e.Code
+}
+
+// ErrorMessage is the error message returned by the endpoint
+func (e *EndpointError) ErrorMessage() string {
+	return e.Message
+}
+
+// ErrorFault indicates error fault classification
+func (e *EndpointError) ErrorFault() smithy.ErrorFault {
+	return e.Fault
+}
+
+// HTTPStatusCode implements retry.HTTPStatusCode.
+func (e *EndpointError) HTTPStatusCode() int {
+	return e.statusCode
+}

vendor/github.com/aws/aws-sdk-go-v2/credentials/endpointcreds/internal/client/endpoints.go 🔗

@@ -0,0 +1,20 @@
+package client
+
+import (
+	"context"
+	"github.com/aws/smithy-go/middleware"
+)
+
+type resolveEndpointV2Middleware struct {
+	options Options
+}
+
+func (*resolveEndpointV2Middleware) ID() string {
+	return "ResolveEndpointV2"
+}
+
+func (m *resolveEndpointV2Middleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	return next.HandleFinalize(ctx, in)
+}

vendor/github.com/aws/aws-sdk-go-v2/credentials/endpointcreds/internal/client/middleware.go 🔗

@@ -0,0 +1,164 @@
+package client
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"io"
+	"net/url"
+
+	"github.com/aws/smithy-go"
+	smithymiddleware "github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+type buildEndpoint struct {
+	Endpoint string
+}
+
+func (b *buildEndpoint) ID() string {
+	return "BuildEndpoint"
+}
+
+func (b *buildEndpoint) HandleBuild(ctx context.Context, in smithymiddleware.BuildInput, next smithymiddleware.BuildHandler) (
+	out smithymiddleware.BuildOutput, metadata smithymiddleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport, %T", in.Request)
+	}
+
+	if len(b.Endpoint) == 0 {
+		return out, metadata, fmt.Errorf("endpoint not provided")
+	}
+
+	parsed, err := url.Parse(b.Endpoint)
+	if err != nil {
+		return out, metadata, fmt.Errorf("failed to parse endpoint, %w", err)
+	}
+
+	request.URL = parsed
+
+	return next.HandleBuild(ctx, in)
+}
+
+type serializeOpGetCredential struct{}
+
+func (s *serializeOpGetCredential) ID() string {
+	return "OperationSerializer"
+}
+
+func (s *serializeOpGetCredential) HandleSerialize(ctx context.Context, in smithymiddleware.SerializeInput, next smithymiddleware.SerializeHandler) (
+	out smithymiddleware.SerializeOutput, metadata smithymiddleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type, %T", in.Request)
+	}
+
+	params, ok := in.Parameters.(*GetCredentialsInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters, %T", in.Parameters)
+	}
+
+	const acceptHeader = "Accept"
+	request.Header[acceptHeader] = append(request.Header[acceptHeader][:0], "application/json")
+
+	if len(params.AuthorizationToken) > 0 {
+		const authHeader = "Authorization"
+		request.Header[authHeader] = append(request.Header[authHeader][:0], params.AuthorizationToken)
+	}
+
+	return next.HandleSerialize(ctx, in)
+}
+
+type deserializeOpGetCredential struct{}
+
+func (d *deserializeOpGetCredential) ID() string {
+	return "OperationDeserializer"
+}
+
+func (d *deserializeOpGetCredential) HandleDeserialize(ctx context.Context, in smithymiddleware.DeserializeInput, next smithymiddleware.DeserializeHandler) (
+	out smithymiddleware.DeserializeOutput, metadata smithymiddleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, deserializeError(response)
+	}
+
+	var shape *GetCredentialsOutput
+	if err = json.NewDecoder(response.Body).Decode(&shape); err != nil {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("failed to deserialize json response, %w", err)}
+	}
+
+	out.Result = shape
+	return out, metadata, err
+}
+
+func deserializeError(response *smithyhttp.Response) error {
+	// we could be talking to anything, json isn't guaranteed
+	// see https://github.com/aws/aws-sdk-go-v2/issues/2316
+	if response.Header.Get("Content-Type") == "application/json" {
+		return deserializeJSONError(response)
+	}
+
+	msg, err := io.ReadAll(response.Body)
+	if err != nil {
+		return &smithy.DeserializationError{
+			Err: fmt.Errorf("read response, %w", err),
+		}
+	}
+
+	return &EndpointError{
+		// no sensible value for Code
+		Message:    string(msg),
+		Fault:      stof(response.StatusCode),
+		statusCode: response.StatusCode,
+	}
+}
+
+func deserializeJSONError(response *smithyhttp.Response) error {
+	var errShape *EndpointError
+	if err := json.NewDecoder(response.Body).Decode(&errShape); err != nil {
+		return &smithy.DeserializationError{
+			Err: fmt.Errorf("failed to decode error message, %w", err),
+		}
+	}
+
+	errShape.Fault = stof(response.StatusCode)
+	errShape.statusCode = response.StatusCode
+	return errShape
+}
+
+// maps HTTP status code to smithy ErrorFault
+func stof(code int) smithy.ErrorFault {
+	if code >= 500 {
+		return smithy.FaultServer
+	}
+	return smithy.FaultClient
+}
+
+func addProtocolFinalizerMiddlewares(stack *smithymiddleware.Stack, options Options, operation string) error {
+	if err := stack.Finalize.Add(&resolveAuthSchemeMiddleware{operation: operation, options: options}, smithymiddleware.Before); err != nil {
+		return fmt.Errorf("add ResolveAuthScheme: %w", err)
+	}
+	if err := stack.Finalize.Insert(&getIdentityMiddleware{options: options}, "ResolveAuthScheme", smithymiddleware.After); err != nil {
+		return fmt.Errorf("add GetIdentity: %w", err)
+	}
+	if err := stack.Finalize.Insert(&resolveEndpointV2Middleware{options: options}, "GetIdentity", smithymiddleware.After); err != nil {
+		return fmt.Errorf("add ResolveEndpointV2: %w", err)
+	}
+	if err := stack.Finalize.Insert(&signRequestMiddleware{}, "ResolveEndpointV2", smithymiddleware.After); err != nil {
+		return fmt.Errorf("add Signing: %w", err)
+	}
+	return nil
+}
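
The fault classification in `stof` above is small enough to check in isolation. In this sketch the local `faultKind` type stands in for `smithy.ErrorFault`, so the snippet runs without the smithy dependency:

```go
package main

import "fmt"

type faultKind int

const (
	faultClient faultKind = iota // anything below 500: the caller's problem
	faultServer                  // 5xx: service-side failure, generally retryable
)

// stof mirrors the status-code-to-fault mapping in middleware.go: only 5xx
// responses are attributed to the server.
func stof(code int) faultKind {
	if code >= 500 {
		return faultServer
	}
	return faultClient
}

func main() {
	fmt.Println(stof(503) == faultServer, stof(429) == faultClient)
}
```

This split drives retry behavior upstream: server faults are worth retrying, client faults usually are not.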

vendor/github.com/aws/aws-sdk-go-v2/credentials/endpointcreds/provider.go 🔗

@@ -0,0 +1,193 @@
+// Package endpointcreds provides support for retrieving credentials from an
+// arbitrary HTTP endpoint.
+//
+// The credentials endpoint Provider can receive both static and refreshable
+// credentials that will expire. Credentials are static when an "Expiration"
+// value is not provided in the endpoint's response.
+//
+// Static credentials will never expire once they have been retrieved. The format
+// of the static credentials response:
+//
+//	{
+//	    "AccessKeyId" : "MUA...",
+//	    "SecretAccessKey" : "/7PC5om...."
+//	}
+//
+// Refreshable credentials will expire within the "ExpiryWindow" of the Expiration
+// value in the response. The format of the refreshable credentials response:
+//
+//	{
+//	    "AccessKeyId" : "MUA...",
+//	    "SecretAccessKey" : "/7PC5om....",
+//	    "Token" : "AQoDY....=",
+//	    "Expiration" : "2016-02-25T06:03:31Z"
+//	}
+//
+// Errors should be returned in the following format and only returned with 400
+// or 500 HTTP status codes.
+//
+//	{
+//	    "code": "ErrorCode",
+//	    "message": "Helpful error message."
+//	}
+package endpointcreds
+
+import (
+	"context"
+	"fmt"
+	"net/http"
+	"strings"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/credentials/endpointcreds/internal/client"
+	"github.com/aws/smithy-go/middleware"
+)
+
+// ProviderName is the name of the credentials provider.
+const ProviderName = `CredentialsEndpointProvider`
+
+type getCredentialsAPIClient interface {
+	GetCredentials(context.Context, *client.GetCredentialsInput, ...func(*client.Options)) (*client.GetCredentialsOutput, error)
+}
+
+// Provider satisfies the aws.CredentialsProvider interface, and is a client to
+// retrieve credentials from an arbitrary endpoint.
+type Provider struct {
+	// The AWS Client to make HTTP requests to the endpoint with. The endpoint
+	// the request will be made to is provided by the aws.Config's
+	// EndpointResolver.
+	client getCredentialsAPIClient
+
+	options Options
+}
+
+// HTTPClient is a client for sending HTTP requests
+type HTTPClient interface {
+	Do(*http.Request) (*http.Response, error)
+}
+
+// Options is the structure of configurable options for the Provider
+type Options struct {
+	// Endpoint to retrieve credentials from. Required
+	Endpoint string
+
+	// HTTPClient to handle sending HTTP requests to the target endpoint.
+	HTTPClient HTTPClient
+
+	// Set of options to modify how the credentials operation is invoked.
+	APIOptions []func(*middleware.Stack) error
+
+	// The Retryer to be used for determining whether a failed request should be retried
+	Retryer aws.Retryer
+
+	// Optional authorization token value. If set, it will be used as the value
+	// of the Authorization header of the endpoint credential request.
+	//
+	// When constructed from the environment, the provider will use the value of
+	// the AWS_CONTAINER_AUTHORIZATION_TOKEN environment variable as the token.
+	//
+	// Will be overridden if AuthorizationTokenProvider is configured
+	AuthorizationToken string
+
+	// Optional auth provider func to dynamically load the auth token from a file
+	// every time a credential is retrieved.
+	//
+	// When constructed from the environment, the provider will read and use the
+	// content of the file pointed to by the AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE
+	// environment variable as the auth token every time credentials are retrieved.
+	//
+	// Will override AuthorizationToken if configured
+	AuthorizationTokenProvider AuthTokenProvider
+}
+
+// AuthTokenProvider defines an interface to dynamically load a value to be passed
+// for the Authorization header of a credentials request.
+type AuthTokenProvider interface {
+	GetToken() (string, error)
+}
+
+// TokenProviderFunc is a func type implementing AuthTokenProvider interface
+// and enables customizing token provider behavior
+type TokenProviderFunc func() (string, error)
+
+// GetToken func retrieves auth token according to TokenProviderFunc implementation
+func (p TokenProviderFunc) GetToken() (string, error) {
+	return p()
+}
+
+// New returns a credentials Provider for retrieving AWS credentials
+// from an arbitrary endpoint.
+func New(endpoint string, optFns ...func(*Options)) *Provider {
+	o := Options{
+		Endpoint: endpoint,
+	}
+
+	for _, fn := range optFns {
+		fn(&o)
+	}
+
+	p := &Provider{
+		client: client.New(client.Options{
+			HTTPClient: o.HTTPClient,
+			Endpoint:   o.Endpoint,
+			APIOptions: o.APIOptions,
+			Retryer:    o.Retryer,
+		}),
+		options: o,
+	}
+
+	return p
+}
+
+// Retrieve will attempt to request the credentials from the endpoint the Provider
+// was configured for. An error will be returned if the retrieval fails.
+func (p *Provider) Retrieve(ctx context.Context) (aws.Credentials, error) {
+	resp, err := p.getCredentials(ctx)
+	if err != nil {
+		return aws.Credentials{}, fmt.Errorf("failed to load credentials, %w", err)
+	}
+
+	creds := aws.Credentials{
+		AccessKeyID:     resp.AccessKeyID,
+		SecretAccessKey: resp.SecretAccessKey,
+		SessionToken:    resp.Token,
+		Source:          ProviderName,
+		AccountID:       resp.AccountID,
+	}
+
+	if resp.Expiration != nil {
+		creds.CanExpire = true
+		creds.Expires = *resp.Expiration
+	}
+
+	return creds, nil
+}
+
+func (p *Provider) getCredentials(ctx context.Context) (*client.GetCredentialsOutput, error) {
+	authToken, err := p.resolveAuthToken()
+	if err != nil {
+		return nil, fmt.Errorf("resolve auth token: %v", err)
+	}
+
+	return p.client.GetCredentials(ctx, &client.GetCredentialsInput{
+		AuthorizationToken: authToken,
+	})
+}
+
+func (p *Provider) resolveAuthToken() (string, error) {
+	authToken := p.options.AuthorizationToken
+
+	var err error
+	if p.options.AuthorizationTokenProvider != nil {
+		authToken, err = p.options.AuthorizationTokenProvider.GetToken()
+		if err != nil {
+			return "", err
+		}
+	}
+
+	if strings.ContainsAny(authToken, "\r\n") {
+		return "", fmt.Errorf("authorization token contains invalid newline sequence")
+	}
+
+	return authToken, nil
+}
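
The newline rejection in `resolveAuthToken` above guards against header injection: a token containing CR or LF could smuggle extra headers into the credentials request. A standalone sketch of just that guard (the `validToken` helper name is ours, not part of the SDK):

```go
package main

import (
	"fmt"
	"strings"
)

// validToken mirrors the guard in resolveAuthToken: reject any token carrying
// a carriage return or line feed, since either could terminate the
// Authorization header early and inject attacker-controlled headers.
func validToken(tok string) bool {
	return !strings.ContainsAny(tok, "\r\n")
}

func main() {
	fmt.Println(validToken("abc123"), validToken("abc\r\nX-Evil: 1"))
}
```

Validating here, rather than trusting the token file, matters because AuthorizationTokenProvider reads the token from an external file on every retrieval.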

vendor/github.com/aws/aws-sdk-go-v2/credentials/processcreds/doc.go 🔗

@@ -0,0 +1,92 @@
+// Package processcreds is a credentials provider that retrieves credentials
+// from an external CLI-invoked process.
+//
+// WARNING: The following describes a method of sourcing credentials from an external
+// process. This can potentially be dangerous, so proceed with caution. Other
+// credential providers should be preferred if at all possible. If using this
+// option, you should make sure that the config file is as locked down as possible
+// using security best practices for your operating system.
+//
+// # Concurrency and caching
+//
+// The Provider is not safe for concurrent use, and does not provide any
+// caching of retrieved credentials. You should wrap the Provider with an
+// `aws.CredentialsCache` to provide concurrency safety and caching of
+// credentials.
+//
+// # Loading credentials with the SDKs AWS Config
+//
+// You can use credentials from an AWS shared config `credential_process` in a
+// variety of ways.
+//
+// One way is to set up your shared config file, located in the default
+// location, with the `credential_process` key and the command you want to be
+// called. You also need to set the AWS_SDK_LOAD_CONFIG environment variable
+// (e.g., `export AWS_SDK_LOAD_CONFIG=1`) to use the shared config file.
+//
+//	[default]
+//	credential_process = /command/to/call
+//
+// Loading configuration using the external config package will use the
+// credential process to retrieve credentials. NOTE: if there are credentials
+// in the profile you are using, the credential process will not be used.
+//
+//	// Initialize a session to load credentials.
+//	cfg, _ := config.LoadDefaultConfig(context.TODO())
+//
+//	// Create S3 service client to use the credentials.
+//	svc := s3.NewFromConfig(cfg)
+//
+// # Loading credentials with the Provider directly
+//
+// Another way to use the credentials process provider is by using the
+// `NewProvider` constructor to create the provider and providing it with a
+// command to be executed to retrieve credentials.
+//
+// The following example creates a credentials provider for a command, and wraps
+// it with the CredentialsCache before assigning the provider to the Amazon S3 API
+// client's Credentials option.
+//
+//	 // Create credentials using the Provider.
+//		provider := processcreds.NewProvider("/path/to/command")
+//
+//	 // Create the service client value configured for credentials.
+//	 svc := s3.New(s3.Options{
+//	   Credentials: aws.NewCredentialsCache(provider),
+//	 })
+//
+// If you need more control, you can set any configurable options in the
+// credentials using one or more option functions.
+//
+//	provider := processcreds.NewProvider("/path/to/command",
+//	    func(o *processcreds.Options) {
+//	      // Override the provider's default timeout
+//	      o.Timeout = 2 * time.Minute
+//	    })
+//
+// You can also use your own `exec.Cmd` value by supplying a value that
+// satisfies the `NewCommandBuilder` interface and using the
+// `NewProviderCommand` constructor.
+//
+//	// Create an exec.Cmd
+//	cmdBuilder := processcreds.NewCommandBuilderFunc(
+//		func(ctx context.Context) (*exec.Cmd, error) {
+//			cmd := exec.CommandContext(ctx,
+//				"customCLICommand",
+//				"-a", "argument",
+//			)
+//			cmd.Env = []string{
+//				"ENV_VAR_FOO=value",
+//				"ENV_VAR_BAR=other_value",
+//			}
+//
+//			return cmd, nil
+//		},
+//	)
+//
+//	// Create credentials using your exec.Cmd and custom timeout
+//	provider := processcreds.NewProviderCommand(cmdBuilder,
+//		func(opt *processcreds.Options) {
+//			// optionally override the provider's default timeout
+//			opt.Timeout = 1 * time.Second
+//		})
+package processcreds

vendor/github.com/aws/aws-sdk-go-v2/credentials/processcreds/provider.go 🔗

@@ -0,0 +1,285 @@
+package processcreds
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"fmt"
+	"io"
+	"os"
+	"os/exec"
+	"runtime"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/internal/sdkio"
+)
+
+const (
+	// ProviderName is the name this credentials provider will label any
+	// returned credentials Value with.
+	ProviderName = `ProcessProvider`
+
+	// DefaultTimeout is the default limit on the time a process can run.
+	DefaultTimeout = time.Duration(1) * time.Minute
+)
+
+// ProviderError is an error indicating failure initializing or executing the
+// process credentials provider
+type ProviderError struct {
+	Err error
+}
+
+// Error returns the error message.
+func (e *ProviderError) Error() string {
+	return fmt.Sprintf("process provider error: %v", e.Err)
+}
+
+// Unwrap returns the underlying error the provider error wraps.
+func (e *ProviderError) Unwrap() error {
+	return e.Err
+}
+
+// Provider satisfies the credentials.Provider interface, and is a
+// client to retrieve credentials from a process.
+type Provider struct {
+	// Provides a constructor for exec.Cmd that are invoked by the provider for
+	// retrieving credentials. Use this to provide custom creation of exec.Cmd
+	// with things like environment variables, or other configuration.
+	//
+	// The provider defaults to the DefaultNewCommand function.
+	commandBuilder NewCommandBuilder
+
+	options Options
+}
+
+// Options is the configuration options for configuring the Provider.
+type Options struct {
+	// Timeout limits the time a process can run.
+	Timeout time.Duration
+}
+
+// NewCommandBuilder provides the interface for specifying how the command the
+// Provider uses to retrieve credentials will be created.
+type NewCommandBuilder interface {
+	NewCommand(context.Context) (*exec.Cmd, error)
+}
+
+// NewCommandBuilderFunc provides a wrapper type around a function pointer to
+// satisfy the NewCommandBuilder interface.
+type NewCommandBuilderFunc func(context.Context) (*exec.Cmd, error)
+
+// NewCommand calls the underlying function pointer the builder was initialized with.
+func (fn NewCommandBuilderFunc) NewCommand(ctx context.Context) (*exec.Cmd, error) {
+	return fn(ctx)
+}
+
+// DefaultNewCommandBuilder provides the default NewCommandBuilder
+// implementation used by the provider. It takes a command and arguments to
+// invoke. The command will also be initialized with the current process
+// environment variables, stderr, and stdin pipes.
+type DefaultNewCommandBuilder struct {
+	Args []string
+}
+
+// NewCommand returns an initialized exec.Cmd with the builder's initialized
+// Args. The command is also initialized with the current process environment
+// variables, stderr, and stdin pipes.
+func (b DefaultNewCommandBuilder) NewCommand(ctx context.Context) (*exec.Cmd, error) {
+	var cmdArgs []string
+	if runtime.GOOS == "windows" {
+		cmdArgs = []string{"cmd.exe", "/C"}
+	} else {
+		cmdArgs = []string{"sh", "-c"}
+	}
+
+	if len(b.Args) == 0 {
+		return nil, &ProviderError{
+			Err: fmt.Errorf("failed to prepare command: command must not be empty"),
+		}
+	}
+
+	cmdArgs = append(cmdArgs, b.Args...)
+	cmd := exec.CommandContext(ctx, cmdArgs[0], cmdArgs[1:]...)
+	cmd.Env = os.Environ()
+
+	cmd.Stderr = os.Stderr // display stderr on console for MFA
+	cmd.Stdin = os.Stdin   // enable stdin for MFA
+
+	return cmd, nil
+}
+
+// NewProvider returns a pointer to a new Provider configured with the given
+// command.
+//
+// The provider defaults to the DefaultNewCommandBuilder for creating the
+// command the Provider will use to retrieve credentials.
+func NewProvider(command string, options ...func(*Options)) *Provider {
+	var args []string
+
+	// Ensure that the command arguments are not set if the provided command is
+	// empty. This will error out when the command is executed since no
+	// arguments are specified.
+	if len(command) > 0 {
+		args = []string{command}
+	}
+
+	commandBuilder := DefaultNewCommandBuilder{
+		Args: args,
+	}
+	return NewProviderCommand(commandBuilder, options...)
+}
+
+// NewProviderCommand returns a pointer to a new Provider with the specified
+// command builder, and the default timeout duration. Use this to provide
+// custom creation of exec.Cmd for options like environment variables, or
+// other configuration.
+func NewProviderCommand(builder NewCommandBuilder, options ...func(*Options)) *Provider {
+	p := &Provider{
+		commandBuilder: builder,
+		options: Options{
+			Timeout: DefaultTimeout,
+		},
+	}
+
+	for _, option := range options {
+		option(&p.options)
+	}
+
+	return p
+}
+
+// A CredentialProcessResponse is the AWS credentials format that must be
+// returned when executing an external credential_process.
+type CredentialProcessResponse struct {
+	// As of this writing, the Version key must be set to 1. This might
+	// increment over time as the structure evolves.
+	Version int
+
+	// The access key ID that identifies the temporary security credentials.
+	AccessKeyID string `json:"AccessKeyId"`
+
+	// The secret access key that can be used to sign requests.
+	SecretAccessKey string
+
+	// The token that users must pass to the service API to use the temporary credentials.
+	SessionToken string
+
+	// The date on which the current credentials expire.
+	Expiration *time.Time
+
+	// The ID of the account for credentials
+	AccountID string `json:"AccountId"`
+}
+
+// Retrieve executes the credential process command and returns the
+// credentials, or error if the command fails.
+func (p *Provider) Retrieve(ctx context.Context) (aws.Credentials, error) {
+	out, err := p.executeCredentialProcess(ctx)
+	if err != nil {
+		return aws.Credentials{Source: ProviderName}, err
+	}
+
+	// Serialize and validate response
+	resp := &CredentialProcessResponse{}
+	if err = json.Unmarshal(out, resp); err != nil {
+		return aws.Credentials{Source: ProviderName}, &ProviderError{
+			Err: fmt.Errorf("failed to parse process output: %s, error: %w", out, err),
+		}
+	}
+
+	if resp.Version != 1 {
+		return aws.Credentials{Source: ProviderName}, &ProviderError{
+			Err: fmt.Errorf("wrong version in process output (not 1)"),
+		}
+	}
+
+	if len(resp.AccessKeyID) == 0 {
+		return aws.Credentials{Source: ProviderName}, &ProviderError{
+			Err: fmt.Errorf("missing AccessKeyId in process output"),
+		}
+	}
+
+	if len(resp.SecretAccessKey) == 0 {
+		return aws.Credentials{Source: ProviderName}, &ProviderError{
+			Err: fmt.Errorf("missing SecretAccessKey in process output"),
+		}
+	}
+
+	creds := aws.Credentials{
+		Source:          ProviderName,
+		AccessKeyID:     resp.AccessKeyID,
+		SecretAccessKey: resp.SecretAccessKey,
+		SessionToken:    resp.SessionToken,
+		AccountID:       resp.AccountID,
+	}
+
+	// Handle expiration
+	if resp.Expiration != nil {
+		creds.CanExpire = true
+		creds.Expires = *resp.Expiration
+	}
+
+	return creds, nil
+}
+
+// executeCredentialProcess starts the credential process on the OS and
+// returns the results or an error.
+func (p *Provider) executeCredentialProcess(ctx context.Context) ([]byte, error) {
+	if p.options.Timeout >= 0 {
+		var cancelFunc func()
+		ctx, cancelFunc = context.WithTimeout(ctx, p.options.Timeout)
+		defer cancelFunc()
+	}
+
+	cmd, err := p.commandBuilder.NewCommand(ctx)
+	if err != nil {
+		return nil, err
+	}
+
+	// get creds json on process's stdout
+	output := bytes.NewBuffer(make([]byte, 0, int(8*sdkio.KibiByte)))
+	if cmd.Stdout != nil {
+		cmd.Stdout = io.MultiWriter(cmd.Stdout, output)
+	} else {
+		cmd.Stdout = output
+	}
+
+	execCh := make(chan error, 1)
+	go executeCommand(cmd, execCh)
+
+	select {
+	case execError := <-execCh:
+		if execError == nil {
+			break
+		}
+		select {
+		case <-ctx.Done():
+			return output.Bytes(), &ProviderError{
+				Err: fmt.Errorf("credential process timed out: %w", execError),
+			}
+		default:
+			return output.Bytes(), &ProviderError{
+				Err: fmt.Errorf("error in credential_process: %w", execError),
+			}
+		}
+	}
+
+	out := output.Bytes()
+	if runtime.GOOS == "windows" {
+		// windows adds slashes to quotes
+		out = bytes.ReplaceAll(out, []byte(`\"`), []byte(`"`))
+	}
+
+	return out, nil
+}
+
+func executeCommand(cmd *exec.Cmd, exec chan error) {
+	// Start the command
+	err := cmd.Start()
+	if err == nil {
+		err = cmd.Wait()
+	}
+
+	exec <- err
+}

vendor/github.com/aws/aws-sdk-go-v2/credentials/ssocreds/doc.go 🔗

@@ -0,0 +1,81 @@
+// Package ssocreds provides a credential provider for retrieving temporary AWS
+// credentials using an SSO access token.
+//
+// IMPORTANT: The provider in this package does not initiate or perform the AWS
+// SSO login flow. The SDK provider expects that you have already performed the
+// SSO login flow with the AWS CLI's "aws sso login" command, or by some
+// other mechanism. The provider must find a valid non-expired access token for
+// the AWS SSO user portal URL in ~/.aws/sso/cache. If a cached token is not
+// found, is expired, or the file is malformed, an error will be returned.
+//
+// # Loading AWS SSO credentials with the AWS shared configuration file
+//
+// You can configure AWS SSO credentials from the AWS shared configuration file by
+// specifying the required keys in the profile and referencing an sso-session:
+//
+//	sso_session
+//	sso_account_id
+//	sso_role_name
+//
+// For example, the following defines a profile "devsso" and specifies the AWS
+// SSO parameters that defines the target account, role, sign-on portal, and
+// the region where the user portal is located. Note: all SSO arguments must be
+// provided, or an error will be returned.
+//
+//	[profile devsso]
+//	sso_session = dev-session
+//	sso_role_name = SSOReadOnlyRole
+//	sso_account_id = 123456789012
+//
+//	[sso-session dev-session]
+//	sso_start_url = https://my-sso-portal.awsapps.com/start
+//	sso_region = us-east-1
+//	sso_registration_scopes = sso:account:access
+//
+// Using the config module, you can load the AWS SDK shared configuration, and
+// specify that this profile be used to retrieve credentials. For example:
+//
+//	config, err := config.LoadDefaultConfig(context.TODO(), config.WithSharedConfigProfile("devsso"))
+//	if err != nil {
+//	    return err
+//	}
+//
+// # Programmatically loading AWS SSO credentials directly
+//
+// You can programmatically construct the AWS SSO Provider in your application,
+// and provide the necessary information to load and retrieve temporary
+// credentials using an access token from ~/.aws/sso/cache.
+//
+//	ssoClient := sso.NewFromConfig(cfg)
+//	ssoOidcClient := ssooidc.NewFromConfig(cfg)
+//	tokenPath, err := ssocreds.StandardCachedTokenFilepath("dev-session")
+//	if err != nil {
+//	    return err
+//	}
+//
+//	var provider aws.CredentialsProvider
+//	provider = ssocreds.New(ssoClient, "123456789012", "SSOReadOnlyRole", "https://my-sso-portal.awsapps.com/start", func(options *ssocreds.Options) {
+//	  options.SSOTokenProvider = ssocreds.NewSSOTokenProvider(ssoOidcClient, tokenPath)
+//	})
+//
+//	// Wrap the provider with aws.CredentialsCache to cache the credentials until they expire
+//	provider = aws.NewCredentialsCache(provider)
+//
+//	credentials, err := provider.Retrieve(context.TODO())
+//	if err != nil {
+//	    return err
+//	}
+//
+// It is important that you wrap the Provider with aws.CredentialsCache if you
+// are programmatically constructing the provider directly. This prevents your
+// application from accessing the cached access token and requesting new
+// credentials each time the credentials are used.
+//
+// # Additional Resources
+//
+// Configuring the AWS CLI to use AWS Single Sign-On:
+// https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html
+//
+// AWS Single Sign-On User Guide:
+// https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html
+package ssocreds

vendor/github.com/aws/aws-sdk-go-v2/credentials/ssocreds/sso_cached_token.go 🔗

@@ -0,0 +1,233 @@
+package ssocreds
+
+import (
+	"crypto/sha1"
+	"encoding/hex"
+	"encoding/json"
+	"fmt"
+	"io/ioutil"
+	"os"
+	"path/filepath"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+	"github.com/aws/aws-sdk-go-v2/internal/shareddefaults"
+)
+
+var osUserHomeDir = shareddefaults.UserHomeDir
+
+// StandardCachedTokenFilepath returns the filepath for the cached SSO token
+// file, or an error if unable to derive the path. The key is used to compute
+// a SHA1 value that is hex encoded.
+//
+// Derives the filepath using the key as:
+//
+//	~/.aws/sso/cache/<sha1-hex-encoded-key>.json
+func StandardCachedTokenFilepath(key string) (string, error) {
+	homeDir := osUserHomeDir()
+	if len(homeDir) == 0 {
+		return "", fmt.Errorf("unable to get USER's home directory for cached token")
+	}
+	hash := sha1.New()
+	if _, err := hash.Write([]byte(key)); err != nil {
+		return "", fmt.Errorf("unable to compute cached token filepath key SHA1 hash, %w", err)
+	}
+
+	cacheFilename := strings.ToLower(hex.EncodeToString(hash.Sum(nil))) + ".json"
+
+	return filepath.Join(homeDir, ".aws", "sso", "cache", cacheFilename), nil
+}
+
+type tokenKnownFields struct {
+	AccessToken string   `json:"accessToken,omitempty"`
+	ExpiresAt   *rfc3339 `json:"expiresAt,omitempty"`
+
+	RefreshToken string `json:"refreshToken,omitempty"`
+	ClientID     string `json:"clientId,omitempty"`
+	ClientSecret string `json:"clientSecret,omitempty"`
+}
+
+type token struct {
+	tokenKnownFields
+	UnknownFields map[string]interface{} `json:"-"`
+}
+
+func (t token) MarshalJSON() ([]byte, error) {
+	fields := map[string]interface{}{}
+
+	setTokenFieldString(fields, "accessToken", t.AccessToken)
+	setTokenFieldRFC3339(fields, "expiresAt", t.ExpiresAt)
+
+	setTokenFieldString(fields, "refreshToken", t.RefreshToken)
+	setTokenFieldString(fields, "clientId", t.ClientID)
+	setTokenFieldString(fields, "clientSecret", t.ClientSecret)
+
+	for k, v := range t.UnknownFields {
+		if _, ok := fields[k]; ok {
+			return nil, fmt.Errorf("unknown token field %v, duplicates known field", k)
+		}
+		fields[k] = v
+	}
+
+	return json.Marshal(fields)
+}
+
+func setTokenFieldString(fields map[string]interface{}, key, value string) {
+	if value == "" {
+		return
+	}
+	fields[key] = value
+}
+func setTokenFieldRFC3339(fields map[string]interface{}, key string, value *rfc3339) {
+	if value == nil {
+		return
+	}
+	fields[key] = value
+}
+
+func (t *token) UnmarshalJSON(b []byte) error {
+	var fields map[string]interface{}
+	if err := json.Unmarshal(b, &fields); err != nil {
+		return err
+	}
+
+	t.UnknownFields = map[string]interface{}{}
+
+	for k, v := range fields {
+		var err error
+		switch k {
+		case "accessToken":
+			err = getTokenFieldString(v, &t.AccessToken)
+		case "expiresAt":
+			err = getTokenFieldRFC3339(v, &t.ExpiresAt)
+		case "refreshToken":
+			err = getTokenFieldString(v, &t.RefreshToken)
+		case "clientId":
+			err = getTokenFieldString(v, &t.ClientID)
+		case "clientSecret":
+			err = getTokenFieldString(v, &t.ClientSecret)
+		default:
+			t.UnknownFields[k] = v
+		}
+
+		if err != nil {
+			return fmt.Errorf("field %q, %w", k, err)
+		}
+	}
+
+	return nil
+}
+
+func getTokenFieldString(v interface{}, value *string) error {
+	var ok bool
+	*value, ok = v.(string)
+	if !ok {
+		return fmt.Errorf("expect value to be string, got %T", v)
+	}
+	return nil
+}
+
+func getTokenFieldRFC3339(v interface{}, value **rfc3339) error {
+	var stringValue string
+	if err := getTokenFieldString(v, &stringValue); err != nil {
+		return err
+	}
+
+	timeValue, err := parseRFC3339(stringValue)
+	if err != nil {
+		return err
+	}
+
+	*value = &timeValue
+	return nil
+}
+
+func loadCachedToken(filename string) (token, error) {
+	fileBytes, err := ioutil.ReadFile(filename)
+	if err != nil {
+		return token{}, fmt.Errorf("failed to read cached SSO token file, %w", err)
+	}
+
+	var t token
+	if err := json.Unmarshal(fileBytes, &t); err != nil {
+		return token{}, fmt.Errorf("failed to parse cached SSO token file, %w", err)
+	}
+
+	if len(t.AccessToken) == 0 || t.ExpiresAt == nil || time.Time(*t.ExpiresAt).IsZero() {
+		return token{}, fmt.Errorf(
+			"cached SSO token must contain accessToken and expiresAt fields")
+	}
+
+	return t, nil
+}
+
+func storeCachedToken(filename string, t token, fileMode os.FileMode) (err error) {
+	tmpFilename := filename + ".tmp-" + strconv.FormatInt(sdk.NowTime().UnixNano(), 10)
+	if err := writeCacheFile(tmpFilename, fileMode, t); err != nil {
+		return err
+	}
+
+	if err := os.Rename(tmpFilename, filename); err != nil {
+		return fmt.Errorf("failed to replace old cached SSO token file, %w", err)
+	}
+
+	return nil
+}
+
+func writeCacheFile(filename string, fileMode os.FileMode, t token) (err error) {
+	var f *os.File
+	f, err = os.OpenFile(filename, os.O_CREATE|os.O_TRUNC|os.O_RDWR, fileMode)
+	if err != nil {
+		return fmt.Errorf("failed to create cached SSO token file, %w", err)
+	}
+
+	defer func() {
+		closeErr := f.Close()
+		if err == nil && closeErr != nil {
+			err = fmt.Errorf("failed to close cached SSO token file, %w", closeErr)
+		}
+	}()
+
+	encoder := json.NewEncoder(f)
+
+	if err = encoder.Encode(t); err != nil {
+		return fmt.Errorf("failed to serialize cached SSO token, %w", err)
+	}
+
+	return nil
+}
+
+type rfc3339 time.Time
+
+func parseRFC3339(v string) (rfc3339, error) {
+	parsed, err := time.Parse(time.RFC3339, v)
+	if err != nil {
+		return rfc3339{}, fmt.Errorf("expected RFC3339 timestamp: %w", err)
+	}
+
+	return rfc3339(parsed), nil
+}
+
+func (r *rfc3339) UnmarshalJSON(bytes []byte) (err error) {
+	var value string
+
+	// Use JSON unmarshal to unescape the quoted value making use of JSON's
+	// unquoting rules.
+	if err = json.Unmarshal(bytes, &value); err != nil {
+		return err
+	}
+
+	*r, err = parseRFC3339(value)
+
+	return err
+}
+
+func (r *rfc3339) MarshalJSON() ([]byte, error) {
+	value := time.Time(*r).Format(time.RFC3339)
+
+	// Use JSON marshal to quote the value making use of JSON's
+	// quoting rules.
+	return json.Marshal(value)
+}

vendor/github.com/aws/aws-sdk-go-v2/credentials/ssocreds/sso_credentials_provider.go 🔗

@@ -0,0 +1,153 @@
+package ssocreds
+
+import (
+	"context"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+	"github.com/aws/aws-sdk-go-v2/service/sso"
+)
+
+// ProviderName is the name of the provider used to specify the source of
+// credentials.
+const ProviderName = "SSOProvider"
+
+// GetRoleCredentialsAPIClient is an API client that implements the
+// GetRoleCredentials operation.
+type GetRoleCredentialsAPIClient interface {
+	GetRoleCredentials(context.Context, *sso.GetRoleCredentialsInput, ...func(*sso.Options)) (
+		*sso.GetRoleCredentialsOutput, error,
+	)
+}
+
+// Options is the Provider options structure.
+type Options struct {
+	// The Client which is configured for the AWS Region where the AWS SSO user
+	// portal is located.
+	Client GetRoleCredentialsAPIClient
+
+	// The AWS account that is assigned to the user.
+	AccountID string
+
+	// The role name that is assigned to the user.
+	RoleName string
+
+	// The URL that points to the organization's AWS Single Sign-On (AWS SSO)
+	// user portal.
+	StartURL string
+
+	// The filepath the cached token will be retrieved from. If unset, the
+	// Provider will use the StartURL to determine the filepath:
+	//
+	//    ~/.aws/sso/cache/<sha1-hex-encoded-startURL>.json
+	//
+	// If a custom cached token filepath is used, the Provider's StartURL
+	// parameter will be ignored.
+	CachedTokenFilepath string
+
+	// Used by the SSOCredentialProvider if a token configuration
+	// profile is used in the shared config
+	SSOTokenProvider *SSOTokenProvider
+}
+
+// Provider is an AWS credential provider that retrieves temporary AWS
+// credentials by exchanging an SSO login token.
+type Provider struct {
+	options Options
+
+	cachedTokenFilepath string
+}
+
+// New returns a new AWS Single Sign-On (AWS SSO) credential provider. The
+// provided client is expected to be configured for the AWS Region where the
+// AWS SSO user portal is located.
+func New(client GetRoleCredentialsAPIClient, accountID, roleName, startURL string, optFns ...func(options *Options)) *Provider {
+	options := Options{
+		Client:    client,
+		AccountID: accountID,
+		RoleName:  roleName,
+		StartURL:  startURL,
+	}
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	return &Provider{
+		options:             options,
+		cachedTokenFilepath: options.CachedTokenFilepath,
+	}
+}
+
+// Retrieve retrieves temporary AWS credentials from the configured Amazon
+// Single Sign-On (AWS SSO) user portal by exchanging the accessToken present
+// in ~/.aws/sso/cache. However, if a token provider configuration exists
+// in the shared config, the token provider is used rather than accessing
+// the cached token directly.
+func (p *Provider) Retrieve(ctx context.Context) (aws.Credentials, error) {
+	var accessToken *string
+	if p.options.SSOTokenProvider != nil {
+		token, err := p.options.SSOTokenProvider.RetrieveBearerToken(ctx)
+		if err != nil {
+			return aws.Credentials{}, err
+		}
+		accessToken = &token.Value
+	} else {
+		if p.cachedTokenFilepath == "" {
+			cachedTokenFilepath, err := StandardCachedTokenFilepath(p.options.StartURL)
+			if err != nil {
+				return aws.Credentials{}, &InvalidTokenError{Err: err}
+			}
+			p.cachedTokenFilepath = cachedTokenFilepath
+		}
+
+		tokenFile, err := loadCachedToken(p.cachedTokenFilepath)
+		if err != nil {
+			return aws.Credentials{}, &InvalidTokenError{Err: err}
+		}
+
+		if tokenFile.ExpiresAt == nil || sdk.NowTime().After(time.Time(*tokenFile.ExpiresAt)) {
+			return aws.Credentials{}, &InvalidTokenError{}
+		}
+		accessToken = &tokenFile.AccessToken
+	}
+
+	output, err := p.options.Client.GetRoleCredentials(ctx, &sso.GetRoleCredentialsInput{
+		AccessToken: accessToken,
+		AccountId:   &p.options.AccountID,
+		RoleName:    &p.options.RoleName,
+	})
+	if err != nil {
+		return aws.Credentials{}, err
+	}
+
+	return aws.Credentials{
+		AccessKeyID:     aws.ToString(output.RoleCredentials.AccessKeyId),
+		SecretAccessKey: aws.ToString(output.RoleCredentials.SecretAccessKey),
+		SessionToken:    aws.ToString(output.RoleCredentials.SessionToken),
+		CanExpire:       true,
+		Expires:         time.Unix(0, output.RoleCredentials.Expiration*int64(time.Millisecond)).UTC(),
+		Source:          ProviderName,
+		AccountID:       p.options.AccountID,
+	}, nil
+}
+
+// InvalidTokenError is the error type that is returned if loaded token has
+// expired or is otherwise invalid. To refresh the SSO session run AWS SSO
+// login with the corresponding profile.
+type InvalidTokenError struct {
+	Err error
+}
+
+func (i *InvalidTokenError) Unwrap() error {
+	return i.Err
+}
+
+func (i *InvalidTokenError) Error() string {
+	const msg = "the SSO session has expired or is invalid"
+	if i.Err == nil {
+		return msg
+	}
+	return msg + ": " + i.Err.Error()
+}

vendor/github.com/aws/aws-sdk-go-v2/credentials/ssocreds/sso_token_provider.go 🔗

@@ -0,0 +1,147 @@
+package ssocreds
+
+import (
+	"context"
+	"fmt"
+	"os"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+	"github.com/aws/aws-sdk-go-v2/service/ssooidc"
+	"github.com/aws/smithy-go/auth/bearer"
+)
+
+// CreateTokenAPIClient provides the interface for the SSOTokenProvider's API
+// client for calling the CreateToken operation to refresh the SSO token.
+type CreateTokenAPIClient interface {
+	CreateToken(context.Context, *ssooidc.CreateTokenInput, ...func(*ssooidc.Options)) (
+		*ssooidc.CreateTokenOutput, error,
+	)
+}
+
+// SSOTokenProviderOptions provides the options for configuring the
+// SSOTokenProvider.
+type SSOTokenProviderOptions struct {
+	// Client that can be overridden
+	Client CreateTokenAPIClient
+
+	// The set of API Client options to be applied when invoking the
+	// CreateToken operation.
+	ClientOptions []func(*ssooidc.Options)
+
+	// The path the file containing the cached SSO token will be read from.
+	// Initialized by the NewSSOTokenProvider's cachedTokenFilepath parameter.
+	CachedTokenFilepath string
+}
+
+// SSOTokenProvider provides a utility for refreshing SSO AccessTokens for
+// Bearer Authentication. The SSOTokenProvider can only be used to refresh
+// already cached SSO Tokens. This utility cannot perform the initial SSO
+// create token.
+//
+// The SSOTokenProvider is not safe to use concurrently. It must be wrapped in
+// a utility such as smithy-go's auth/bearer#TokenCache. The SDK's
+// config.LoadDefaultConfig will automatically wrap the SSOTokenProvider with
+// the smithy-go TokenCache, if the external configuration loaded configured
+// for an SSO session.
+//
+// The initial SSO create token should be performed with the AWS CLI before the
+// Go application using the SSOTokenProvider will need to retrieve the SSO
+// token. If the AWS CLI has not created the token cache file, this provider
+// will return an error when attempting to retrieve the cached token.
+//
+// This provider will attempt to refresh the cached SSO token periodically if
+// needed when RetrieveBearerToken is called.
+//
+// A utility such as the AWS CLI must be used to initially create the SSO
+// session and cached token file.
+// https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html
+type SSOTokenProvider struct {
+	options SSOTokenProviderOptions
+}
+
+var _ bearer.TokenProvider = (*SSOTokenProvider)(nil)
+
+// NewSSOTokenProvider returns an initialized SSOTokenProvider that will
+// periodically refresh the cached SSO token stored in the cachedTokenFilepath.
+// The cachedTokenFilepath file's content will be rewritten by the token
+// provider when the token is refreshed.
+//
+// The client must be configured for the AWS region the SSO token was created for.
+func NewSSOTokenProvider(client CreateTokenAPIClient, cachedTokenFilepath string, optFns ...func(o *SSOTokenProviderOptions)) *SSOTokenProvider {
+	options := SSOTokenProviderOptions{
+		Client:              client,
+		CachedTokenFilepath: cachedTokenFilepath,
+	}
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	provider := &SSOTokenProvider{
+		options: options,
+	}
+
+	return provider
+}
+
+// RetrieveBearerToken returns the SSO token stored in the cachedTokenFilepath
+// the SSOTokenProvider was created with. If the token has expired
+// RetrieveBearerToken will attempt to refresh it. If the token cannot be
+// refreshed or is not present an error will be returned.
+//
+// A utility such as the AWS CLI must be used to initially create the SSO
+// session and cached token file. https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html
+func (p SSOTokenProvider) RetrieveBearerToken(ctx context.Context) (bearer.Token, error) {
+	cachedToken, err := loadCachedToken(p.options.CachedTokenFilepath)
+	if err != nil {
+		return bearer.Token{}, err
+	}
+
+	if cachedToken.ExpiresAt != nil && sdk.NowTime().After(time.Time(*cachedToken.ExpiresAt)) {
+		cachedToken, err = p.refreshToken(ctx, cachedToken)
+		if err != nil {
+			return bearer.Token{}, fmt.Errorf("refresh cached SSO token failed, %w", err)
+		}
+	}
+
+	expiresAt := aws.ToTime((*time.Time)(cachedToken.ExpiresAt))
+	return bearer.Token{
+		Value:     cachedToken.AccessToken,
+		CanExpire: !expiresAt.IsZero(),
+		Expires:   expiresAt,
+	}, nil
+}
+
+func (p SSOTokenProvider) refreshToken(ctx context.Context, cachedToken token) (token, error) {
+	if cachedToken.ClientSecret == "" || cachedToken.ClientID == "" || cachedToken.RefreshToken == "" {
+		return token{}, fmt.Errorf("cached SSO token is expired, or not present, and cannot be refreshed")
+	}
+
+	createResult, err := p.options.Client.CreateToken(ctx, &ssooidc.CreateTokenInput{
+		ClientId:     &cachedToken.ClientID,
+		ClientSecret: &cachedToken.ClientSecret,
+		RefreshToken: &cachedToken.RefreshToken,
+		GrantType:    aws.String("refresh_token"),
+	}, p.options.ClientOptions...)
+	if err != nil {
+		return token{}, fmt.Errorf("unable to refresh SSO token, %w", err)
+	}
+
+	expiresAt := sdk.NowTime().Add(time.Duration(createResult.ExpiresIn) * time.Second)
+
+	cachedToken.AccessToken = aws.ToString(createResult.AccessToken)
+	cachedToken.ExpiresAt = (*rfc3339)(&expiresAt)
+	cachedToken.RefreshToken = aws.ToString(createResult.RefreshToken)
+
+	fileInfo, err := os.Stat(p.options.CachedTokenFilepath)
+	if err != nil {
+		return token{}, fmt.Errorf("failed to stat cached SSO token file, %w", err)
+	}
+
+	if err = storeCachedToken(p.options.CachedTokenFilepath, cachedToken, fileInfo.Mode()); err != nil {
+		return token{}, fmt.Errorf("unable to cache refreshed SSO token, %w", err)
+	}
+
+	return cachedToken, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/credentials/static_provider.go 🔗

@@ -0,0 +1,53 @@
+package credentials
+
+import (
+	"context"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+)
+
+const (
+	// StaticCredentialsName provides a name of Static provider
+	StaticCredentialsName = "StaticCredentials"
+)
+
+// StaticCredentialsEmptyError is emitted when static credentials are empty.
+type StaticCredentialsEmptyError struct{}
+
+func (*StaticCredentialsEmptyError) Error() string {
+	return "static credentials are empty"
+}
+
+// A StaticCredentialsProvider is a set of credentials which are set, and will
+// never expire.
+type StaticCredentialsProvider struct {
+	Value aws.Credentials
+}
+
+// NewStaticCredentialsProvider returns a StaticCredentialsProvider initialized with the AWS
+// credentials passed in.
+func NewStaticCredentialsProvider(key, secret, session string) StaticCredentialsProvider {
+	return StaticCredentialsProvider{
+		Value: aws.Credentials{
+			AccessKeyID:     key,
+			SecretAccessKey: secret,
+			SessionToken:    session,
+		},
+	}
+}
+
+// Retrieve returns the credentials or error if the credentials are invalid.
+func (s StaticCredentialsProvider) Retrieve(_ context.Context) (aws.Credentials, error) {
+	v := s.Value
+	if v.AccessKeyID == "" || v.SecretAccessKey == "" {
+		return aws.Credentials{
+			Source: StaticCredentialsName,
+		}, &StaticCredentialsEmptyError{}
+	}
+
+	if len(v.Source) == 0 {
+		v.Source = StaticCredentialsName
+	}
+
+	return v, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/credentials/stscreds/assume_role_provider.go 🔗

@@ -0,0 +1,326 @@
+// Package stscreds provides credential Providers to retrieve STS AWS credentials.
+//
+// STS provides multiple ways to retrieve credentials which can be used when making
+// future AWS service API operation calls.
+//
+// The SDK will ensure that per instance of credentials.Credentials all requests
+// to refresh the credentials will be synchronized. But, the SDK is unable to
+// ensure synchronous usage of the AssumeRoleProvider if the value is shared
+// between multiple Credentials or service clients.
+//
+// # Assume Role
+//
+// To assume an IAM role using STS with the SDK you can create a new Credentials
+// with the SDK's stscreds package.
+//
+//	// Initial credentials loaded from SDK's default credential chain. Such as
+//	// the environment, shared credentials (~/.aws/credentials), or EC2 Instance
+//	// Role. These credentials will be used to make the STS Assume Role API.
+//	cfg, err := config.LoadDefaultConfig(context.TODO())
+//	if err != nil {
+//		panic(err)
+//	}
+//
+//	// Create the credentials from AssumeRoleProvider to assume the role
+//	// referenced by the "myRoleARN" ARN.
+//	stsSvc := sts.NewFromConfig(cfg)
+//	creds := stscreds.NewAssumeRoleProvider(stsSvc, "myRoleArn")
+//
+//	cfg.Credentials = aws.NewCredentialsCache(creds)
+//
+//	// Create service client value configured for credentials
+//	// from assumed role.
+//	svc := s3.NewFromConfig(cfg)
+//
+// # Assume Role with custom MFA Token provider
+//
+// To assume an IAM role with an MFA token you can either specify a custom MFA
+// token provider or use the SDK's built-in StdinTokenProvider that will prompt
+// the user for a token code each time the credentials need to be refreshed.
+// Specifying a custom token provider allows you to control where the token
+// code is retrieved from, and how it is refreshed.
+//
+// With a custom token provider, the provider is responsible for refreshing the
+// token code when called.
+//
+//		cfg, err := config.LoadDefaultConfig(context.TODO())
+//		if err != nil {
+//			panic(err)
+//		}
+//
+//	 staticTokenProvider := func() (string, error) {
+//	     return someTokenCode, nil
+//	 }
+//
+//		// Create the credentials from AssumeRoleProvider to assume the role
+//		// referenced by the "myRoleARN" ARN using the MFA token code provided.
+//		creds := stscreds.NewAssumeRoleProvider(sts.NewFromConfig(cfg), "myRoleArn", func(o *stscreds.AssumeRoleOptions) {
+//			o.SerialNumber = aws.String("myTokenSerialNumber")
+//			o.TokenProvider = staticTokenProvider
+//		})
+//
+//		cfg.Credentials = aws.NewCredentialsCache(creds)
+//
+//		// Create service client value configured for credentials
+//		// from assumed role.
+//		svc := s3.NewFromConfig(cfg)
+//
+// # Assume Role with MFA Token Provider
+//
+// To assume an IAM role with MFA for longer-running tasks where the credentials
+// may need to be refreshed, setting the TokenProvider field of AssumeRoleProvider
+// will allow the credential provider to prompt for a new MFA token code when the
+// role's credentials need to be refreshed.
+//
+// The StdinTokenProvider function is available to prompt on stdin to retrieve
+// the MFA token code from the user. You can also implement custom prompts by
+// satisfying the TokenProvider function signature.
+//
+// Using StdinTokenProvider with multiple AssumeRoleProviders, or Credentials will
+// have undesirable results as the StdinTokenProvider will not be synchronized. A
+// single Credentials with an AssumeRoleProvider can be shared safely.
+//
+//	cfg, err := config.LoadDefaultConfig(context.TODO())
+//	if err != nil {
+//		panic(err)
+//	}
+//
+//	// Create the credentials from AssumeRoleProvider to assume the role
+//	// referenced by the "myRoleARN" ARN using the MFA token code provided.
+//	creds := stscreds.NewAssumeRoleProvider(sts.NewFromConfig(cfg), "myRoleArn", func(o *stscreds.AssumeRoleOptions) {
+//		o.SerialNumber = aws.String("myTokenSerialNumber")
+//		o.TokenProvider = stscreds.StdinTokenProvider
+//	})
+//
+//	cfg.Credentials = aws.NewCredentialsCache(creds)
+//
+//	// Create service client value configured for credentials
+//	// from assumed role.
+//	svc := s3.NewFromConfig(cfg)
+package stscreds
+
+import (
+	"context"
+	"fmt"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/service/sts"
+	"github.com/aws/aws-sdk-go-v2/service/sts/types"
+)
+
+// StdinTokenProvider will prompt on stdout and read from stdin for a string value.
+// An error is returned if reading from stdin fails.
+//
+// Use this function to read MFA tokens from stdin. The function makes no attempt
+// to make atomic prompts from stdin across multiple goroutines.
+//
+// Using StdinTokenProvider with multiple AssumeRoleProviders, or Credentials will
+// have undesirable results as the StdinTokenProvider will not be synchronized. A
+// single Credentials with an AssumeRoleProvider can be shared safely
+//
+// Will wait forever until something is provided on stdin.
+func StdinTokenProvider() (string, error) {
+	var v string
+	fmt.Printf("Assume Role MFA token code: ")
+	_, err := fmt.Scanln(&v)
+
+	return v, err
+}
+
+// ProviderName provides a name of AssumeRole provider
+const ProviderName = "AssumeRoleProvider"
+
+// AssumeRoleAPIClient is a client capable of the STS AssumeRole operation.
+type AssumeRoleAPIClient interface {
+	AssumeRole(ctx context.Context, params *sts.AssumeRoleInput, optFns ...func(*sts.Options)) (*sts.AssumeRoleOutput, error)
+}
+
+// DefaultDuration is the default amount of time in minutes that the
+// credentials will be valid for. This value is only used by AssumeRoleProvider
+// for specifying the default expiry duration of an assume role.
+//
+// Other providers such as WebIdentityRoleProvider do not use this value, and
+// instead rely on STS API's default parameter handling to assign a default
+// value.
+var DefaultDuration = time.Duration(15) * time.Minute
+
+// AssumeRoleProvider retrieves temporary credentials from the STS service, and
+// keeps track of their expiration time.
+//
+// This credential provider will be used by the SDK's default credential chain
+// when shared configuration is enabled, and the shared config or shared credentials
+// file configure assume role. See Session docs for how to do this.
+//
+// AssumeRoleProvider does not provide any synchronization and it is not safe
+// to share this value across multiple Credentials, Sessions, or service clients
+// without also sharing the same Credentials instance.
+type AssumeRoleProvider struct {
+	options AssumeRoleOptions
+}
+
+// AssumeRoleOptions is the configurable options for AssumeRoleProvider
+type AssumeRoleOptions struct {
+	// Client implementation of the AssumeRole operation. Required
+	Client AssumeRoleAPIClient
+
+	// IAM Role ARN to be assumed. Required
+	RoleARN string
+
+	// Session name, if you wish to uniquely identify this session.
+	RoleSessionName string
+
+	// Expiry duration of the STS credentials. Defaults to 15 minutes if not set.
+	Duration time.Duration
+
+	// Optional ExternalID to pass along, defaults to nil if not set.
+	ExternalID *string
+
+	// The policy plain text must be 2048 bytes or shorter. However, an internal
+	// conversion compresses it into a packed binary format with a separate limit.
+	// The PackedPolicySize response element indicates by percentage how close to
+	// the upper size limit the policy is, with 100% equaling the maximum allowed
+	// size.
+	Policy *string
+
+	// The ARNs of IAM managed policies you want to use as managed session policies.
+	// The policies must exist in the same account as the role.
+	//
+	// This parameter is optional. You can provide up to 10 managed policy ARNs.
+	// However, the plain text that you use for both inline and managed session
+	// policies can't exceed 2,048 characters.
+	//
+	// An AWS conversion compresses the passed session policies and session tags
+	// into a packed binary format that has a separate limit. Your request can fail
+	// for this limit even if your plain text meets the other requirements. The
+	// PackedPolicySize response element indicates by percentage how close the policies
+	// and tags for your request are to the upper size limit.
+	//
+	// Passing policies to this operation returns new temporary credentials. The
+	// resulting session's permissions are the intersection of the role's identity-based
+	// policy and the session policies. You can use the role's temporary credentials
+	// in subsequent AWS API calls to access resources in the account that owns
+	// the role. You cannot use session policies to grant more permissions than
+	// those allowed by the identity-based policy of the role that is being assumed.
+	// For more information, see Session Policies (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session)
+	// in the IAM User Guide.
+	PolicyARNs []types.PolicyDescriptorType
+
+	// The identification number of the MFA device that is associated with the user
+	// who is making the AssumeRole call. Specify this value if the trust policy
+	// of the role being assumed includes a condition that requires MFA authentication.
+	// The value is either the serial number for a hardware device (such as GAHT12345678)
+	// or an Amazon Resource Name (ARN) for a virtual device (such as arn:aws:iam::123456789012:mfa/user).
+	SerialNumber *string
+
+	// The source identity specified by the principal that is calling the AssumeRole
+	// operation. You can require users to specify a source identity when they assume a
+	// role. You do this by using the sts:SourceIdentity condition key in a role trust
+	// policy. You can use source identity information in CloudTrail logs to determine
+	// who took actions with a role. You can use the aws:SourceIdentity condition key
+	// to further control access to Amazon Web Services resources based on the value of
+	// source identity. For more information about using source identity, see Monitor
+	// and control actions taken with assumed roles
+	// (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_monitor.html)
+	// in the IAM User Guide.
+	SourceIdentity *string
+
+	// Async method of providing MFA token code for assuming an IAM role with MFA.
+	// The value returned by the function will be used as the TokenCode in the Retrieve
+	// call. See StdinTokenProvider for a provider that prompts and reads from stdin.
+	//
+	// This token provider will be called whenever the assumed role's
+	// credentials need to be refreshed when SerialNumber is set.
+	TokenProvider func() (string, error)
+
+	// A list of session tags that you want to pass. Each session tag consists of a key
+	// name and an associated value. For more information about session tags, see
+	// Tagging STS Sessions
+	// (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html) in the
+	// IAM User Guide. This parameter is optional. You can pass up to 50 session tags.
+	Tags []types.Tag
+
+	// A list of keys for session tags that you want to set as transitive. If you set a
+	// tag key as transitive, the corresponding key and value passes to subsequent
+	// sessions in a role chain. For more information, see Chaining Roles with Session
+	// Tags
+	// (https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_role-chaining)
+	// in the IAM User Guide. This parameter is optional.
+	TransitiveTagKeys []string
+}
+
+// NewAssumeRoleProvider constructs and returns a credentials provider that
+// will retrieve credentials by assuming an IAM role using STS.
+func NewAssumeRoleProvider(client AssumeRoleAPIClient, roleARN string, optFns ...func(*AssumeRoleOptions)) *AssumeRoleProvider {
+	o := AssumeRoleOptions{
+		Client:  client,
+		RoleARN: roleARN,
+	}
+
+	for _, fn := range optFns {
+		fn(&o)
+	}
+
+	return &AssumeRoleProvider{
+		options: o,
+	}
+}
+
+// Retrieve generates a new set of temporary credentials using STS.
+func (p *AssumeRoleProvider) Retrieve(ctx context.Context) (aws.Credentials, error) {
+	// Apply defaults where parameters are not set.
+	if len(p.options.RoleSessionName) == 0 {
+		// Try to work out a role name that will hopefully end up unique.
+		p.options.RoleSessionName = fmt.Sprintf("aws-go-sdk-%d", time.Now().UTC().UnixNano())
+	}
+	if p.options.Duration == 0 {
+		// Expire as often as AWS permits.
+		p.options.Duration = DefaultDuration
+	}
+	input := &sts.AssumeRoleInput{
+		DurationSeconds:   aws.Int32(int32(p.options.Duration / time.Second)),
+		PolicyArns:        p.options.PolicyARNs,
+		RoleArn:           aws.String(p.options.RoleARN),
+		RoleSessionName:   aws.String(p.options.RoleSessionName),
+		ExternalId:        p.options.ExternalID,
+		SourceIdentity:    p.options.SourceIdentity,
+		Tags:              p.options.Tags,
+		TransitiveTagKeys: p.options.TransitiveTagKeys,
+	}
+	if p.options.Policy != nil {
+		input.Policy = p.options.Policy
+	}
+	if p.options.SerialNumber != nil {
+		if p.options.TokenProvider != nil {
+			input.SerialNumber = p.options.SerialNumber
+			code, err := p.options.TokenProvider()
+			if err != nil {
+				return aws.Credentials{}, err
+			}
+			input.TokenCode = aws.String(code)
+		} else {
+			return aws.Credentials{}, fmt.Errorf("assume role with MFA enabled, but TokenProvider is not set")
+		}
+	}
+
+	resp, err := p.options.Client.AssumeRole(ctx, input)
+	if err != nil {
+		return aws.Credentials{Source: ProviderName}, err
+	}
+
+	var accountID string
+	if resp.AssumedRoleUser != nil {
+		accountID = getAccountID(resp.AssumedRoleUser)
+	}
+
+	return aws.Credentials{
+		AccessKeyID:     *resp.Credentials.AccessKeyId,
+		SecretAccessKey: *resp.Credentials.SecretAccessKey,
+		SessionToken:    *resp.Credentials.SessionToken,
+		Source:          ProviderName,
+
+		CanExpire: true,
+		Expires:   *resp.Credentials.Expiration,
+		AccountID: accountID,
+	}, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/credentials/stscreds/web_identity_provider.go 🔗

@@ -0,0 +1,169 @@
+package stscreds
+
+import (
+	"context"
+	"fmt"
+	"io/ioutil"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/aws/retry"
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+	"github.com/aws/aws-sdk-go-v2/service/sts"
+	"github.com/aws/aws-sdk-go-v2/service/sts/types"
+)
+
+var invalidIdentityTokenExceptionCode = (&types.InvalidIdentityTokenException{}).ErrorCode()
+
+const (
+	// WebIdentityProviderName is the web identity provider name
+	WebIdentityProviderName = "WebIdentityCredentials"
+)
+
+// AssumeRoleWithWebIdentityAPIClient is a client capable of the STS AssumeRoleWithWebIdentity operation.
+type AssumeRoleWithWebIdentityAPIClient interface {
+	AssumeRoleWithWebIdentity(ctx context.Context, params *sts.AssumeRoleWithWebIdentityInput, optFns ...func(*sts.Options)) (*sts.AssumeRoleWithWebIdentityOutput, error)
+}
+
+// WebIdentityRoleProvider is used to retrieve credentials using
+// an OIDC token.
+type WebIdentityRoleProvider struct {
+	options WebIdentityRoleOptions
+}
+
+// WebIdentityRoleOptions is a structure of configurable options for WebIdentityRoleProvider
+type WebIdentityRoleOptions struct {
+	// Client implementation of the AssumeRoleWithWebIdentity operation. Required
+	Client AssumeRoleWithWebIdentityAPIClient
+
+	// JWT Token Provider. Required
+	TokenRetriever IdentityTokenRetriever
+
+	// IAM Role ARN to assume. Required
+	RoleARN string
+
+	// Session name, if you wish to uniquely identify this session.
+	RoleSessionName string
+
+	// Expiry duration of the STS credentials. STS will assign a default expiry
+	// duration if this value is unset. This is different from the Duration
+	// option of AssumeRoleProvider, which automatically assigns 15 minutes if
+	// Duration is unset.
+	//
+	// See the STS AssumeRoleWithWebIdentity API reference guide for more
+	// information on defaults.
+	// https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html
+	Duration time.Duration
+
+	// An IAM policy in JSON format that you want to use as an inline session policy.
+	Policy *string
+
+	// The Amazon Resource Names (ARNs) of the IAM managed policies that you
+	// want to use as managed session policies.  The policies must exist in the
+	// same account as the role.
+	PolicyARNs []types.PolicyDescriptorType
+}
+
+// IdentityTokenRetriever is an interface for retrieving a JWT
+type IdentityTokenRetriever interface {
+	GetIdentityToken() ([]byte, error)
+}
+
+// IdentityTokenFile is for retrieving an identity token from the given file name
+type IdentityTokenFile string
+
+// GetIdentityToken retrieves the JWT token from the file and returns the contents as a []byte
+func (j IdentityTokenFile) GetIdentityToken() ([]byte, error) {
+	b, err := ioutil.ReadFile(string(j))
+	if err != nil {
+		return nil, fmt.Errorf("unable to read file at %s: %v", string(j), err)
+	}
+
+	return b, nil
+}
+
+// NewWebIdentityRoleProvider will return a new WebIdentityRoleProvider with the
+// provided stsiface.ClientAPI
+func NewWebIdentityRoleProvider(client AssumeRoleWithWebIdentityAPIClient, roleARN string, tokenRetriever IdentityTokenRetriever, optFns ...func(*WebIdentityRoleOptions)) *WebIdentityRoleProvider {
+	o := WebIdentityRoleOptions{
+		Client:         client,
+		RoleARN:        roleARN,
+		TokenRetriever: tokenRetriever,
+	}
+
+	for _, fn := range optFns {
+		fn(&o)
+	}
+
+	return &WebIdentityRoleProvider{options: o}
+}
+
+// Retrieve will attempt to assume a role using a token located at the
+// destination specified by 'WebIdentityTokenFilePath'; if that is empty, an
+// error will be returned.
+func (p *WebIdentityRoleProvider) Retrieve(ctx context.Context) (aws.Credentials, error) {
+	b, err := p.options.TokenRetriever.GetIdentityToken()
+	if err != nil {
+		return aws.Credentials{}, fmt.Errorf("failed to retrieve jwt from provided source, %w", err)
+	}
+
+	sessionName := p.options.RoleSessionName
+	if len(sessionName) == 0 {
+		// session name is used to uniquely identify a session. This simply
+		// uses unix time in nanoseconds to uniquely identify sessions.
+		sessionName = strconv.FormatInt(sdk.NowTime().UnixNano(), 10)
+	}
+	input := &sts.AssumeRoleWithWebIdentityInput{
+		PolicyArns:       p.options.PolicyARNs,
+		RoleArn:          &p.options.RoleARN,
+		RoleSessionName:  &sessionName,
+		WebIdentityToken: aws.String(string(b)),
+	}
+	if p.options.Duration != 0 {
+		// If set use the value, otherwise STS will assign a default expiration duration.
+		input.DurationSeconds = aws.Int32(int32(p.options.Duration / time.Second))
+	}
+	if p.options.Policy != nil {
+		input.Policy = p.options.Policy
+	}
+
+	resp, err := p.options.Client.AssumeRoleWithWebIdentity(ctx, input, func(options *sts.Options) {
+		options.Retryer = retry.AddWithErrorCodes(options.Retryer, invalidIdentityTokenExceptionCode)
+	})
+	if err != nil {
+		return aws.Credentials{}, fmt.Errorf("failed to retrieve credentials, %w", err)
+	}
+
+	var accountID string
+	if resp.AssumedRoleUser != nil {
+		accountID = getAccountID(resp.AssumedRoleUser)
+	}
+
+	// InvalidIdentityToken error is a temporary error that can occur
+	// when assuming a Role with a JWT web identity token.
+
+	value := aws.Credentials{
+		AccessKeyID:     aws.ToString(resp.Credentials.AccessKeyId),
+		SecretAccessKey: aws.ToString(resp.Credentials.SecretAccessKey),
+		SessionToken:    aws.ToString(resp.Credentials.SessionToken),
+		Source:          WebIdentityProviderName,
+		CanExpire:       true,
+		Expires:         *resp.Credentials.Expiration,
+		AccountID:       accountID,
+	}
+	return value, nil
+}
+
+// extract accountID from arn with format "arn:partition:service:region:account-id:[resource-section]"
+func getAccountID(u *types.AssumedRoleUser) string {
+	if u.Arn == nil {
+		return ""
+	}
+	parts := strings.Split(*u.Arn, ":")
+	if len(parts) < 5 {
+		return ""
+	}
+	return parts[4]
+}
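
The `getAccountID` helper above pulls the account ID out of the assumed-role user's ARN purely by position: an ARN has the form `arn:partition:service:region:account-id:resource`, so the fifth colon-separated field is the account ID. A minimal standalone sketch of that parsing (stdlib only; the function name here is hypothetical, not part of the SDK):

```go
package main

import (
	"fmt"
	"strings"
)

// accountIDFromARN mirrors the vendored helper: split the ARN on ":"
// and take the fifth field, returning "" for anything too short to be
// a well-formed ARN.
func accountIDFromARN(arn string) string {
	parts := strings.Split(arn, ":")
	if len(parts) < 5 {
		return ""
	}
	return parts[4]
}

func main() {
	fmt.Println(accountIDFromARN("arn:aws:sts::123456789012:assumed-role/myRole/session")) // → 123456789012
	fmt.Println(accountIDFromARN("not-an-arn")) // → empty string
}
```

Note that the empty `region` field in an STS ARN still counts as a field after the split, which is why the account ID lands at index 4.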

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/CHANGELOG.md 🔗

@@ -0,0 +1,355 @@
+# v1.16.11 (2024-07-10.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.10 (2024-07-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.9 (2024-06-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.8 (2024-06-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.7 (2024-06-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.6 (2024-06-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.5 (2024-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.4 (2024-06-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.3 (2024-05-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.2 (2024-05-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.1 (2024-03-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.0 (2024-03-21)
+
+* **Feature**: Add config switch `DisableDefaultTimeout` that allows you to disable the default operation timeout (5 seconds) for IMDS calls.
+
+# v1.15.4 (2024-03-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.3 (2024-03-07)
+
+* **Bug Fix**: Remove dependency on go-cmp.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.2 (2024-02-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.1 (2024-02-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.0 (2024-02-13)
+
+* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.11 (2024-01-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.10 (2023-12-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.9 (2023-12-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.8 (2023-11-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.7 (2023-11-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.6 (2023-11-28.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.5 (2023-11-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.4 (2023-11-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.3 (2023-11-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.2 (2023-11-02)
+
+* No change notes available for this release.
+
+# v1.14.1 (2023-11-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.0 (2023-10-31)
+
+* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.13 (2023-10-12)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.12 (2023-10-06)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.11 (2023-08-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.10 (2023-08-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.9 (2023-08-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.8 (2023-08-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.7 (2023-07-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.6 (2023-07-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.5 (2023-07-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.4 (2023-06-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.3 (2023-04-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.2 (2023-04-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.1 (2023-03-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.0 (2023-03-14)
+
+* **Feature**: Add flag to disable IMDSv1 fallback
+
+# v1.12.24 (2023-03-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.23 (2023-02-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.22 (2023-02-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.21 (2022-12-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.20 (2022-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.19 (2022-10-24)
+
+* **Bug Fix**: Fixes an issue that prevented logging of the API request or responses when the respective log modes were enabled.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.18 (2022-10-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.17 (2022-09-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.16 (2022-09-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.15 (2022-09-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.14 (2022-08-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.13 (2022-08-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.12 (2022-08-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.11 (2022-08-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.10 (2022-08-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.9 (2022-08-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.8 (2022-07-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.7 (2022-06-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.6 (2022-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.5 (2022-05-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.4 (2022-04-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.3 (2022-03-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.2 (2022-03-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.1 (2022-03-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.0 (2022-03-08)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.0 (2022-02-24)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.0 (2022-01-14)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.0 (2022-01-07)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.2 (2021-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.1 (2021-11-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.0 (2021-11-06)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.7.0 (2021-10-21)
+
+* **Feature**: Updated  to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.0 (2021-10-11)
+
+* **Feature**: Respect passed in Context Deadline/Timeout. Updates the IMDS Client operations to not override the passed in Context's Deadline or Timeout options. If an Client operation is called with a Context with a Deadline or Timeout, the client will no longer override it with the client's default timeout.
+* **Bug Fix**: Fix IMDS client's response handling and operation timeout race. Fixes #1253
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.5.1 (2021-09-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.5.0 (2021-08-27)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.1 (2021-08-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.0 (2021-08-04)
+
+* **Feature**: adds error handling for deferred close calls
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.0 (2021-07-15)
+
+* **Feature**: Support has been added for EC2 IPv6-enabled Instance Metadata Service Endpoints.
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.0 (2021-06-25)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.1 (2021-05-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.0 (2021-05-14)
+
+* **Feature**: Constant has been added to modules to enable runtime version inspection for reporting.
+* **Dependency Update**: Updated to the latest SDK module versions
+

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/LICENSE.txt 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_client.go 🔗

@@ -0,0 +1,352 @@
+package imds
+
+import (
+	"context"
+	"fmt"
+	"net"
+	"net/http"
+	"os"
+	"strings"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/aws/retry"
+	awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
+	internalconfig "github.com/aws/aws-sdk-go-v2/feature/ec2/imds/internal/config"
+	"github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/logging"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// ServiceID provides the unique name of this API client
+const ServiceID = "ec2imds"
+
+// Client provides the API client for interacting with the Amazon EC2 Instance
+// Metadata Service API.
+type Client struct {
+	options Options
+}
+
+// ClientEnableState provides an enumeration of whether the client is
+// enabled, disabled, or using the default behavior.
+type ClientEnableState = internalconfig.ClientEnableState
+
+// Enumeration values for ClientEnableState
+const (
+	ClientDefaultEnableState ClientEnableState = internalconfig.ClientDefaultEnableState // default behavior
+	ClientDisabled           ClientEnableState = internalconfig.ClientDisabled           // client disabled
+	ClientEnabled            ClientEnableState = internalconfig.ClientEnabled            // client enabled
+)
+
+// EndpointModeState is an enum configuration variable describing the client endpoint mode.
+// Not configurable directly; it is resolved when using NewFromConfig.
+type EndpointModeState = internalconfig.EndpointModeState
+
+// Enumeration values for EndpointModeState
+const (
+	EndpointModeStateUnset EndpointModeState = internalconfig.EndpointModeStateUnset
+	EndpointModeStateIPv4  EndpointModeState = internalconfig.EndpointModeStateIPv4
+	EndpointModeStateIPv6  EndpointModeState = internalconfig.EndpointModeStateIPv6
+)
+
+const (
+	disableClientEnvVar = "AWS_EC2_METADATA_DISABLED"
+
+	// Client endpoint options
+	endpointEnvVar = "AWS_EC2_METADATA_SERVICE_ENDPOINT"
+
+	defaultIPv4Endpoint = "http://169.254.169.254"
+	defaultIPv6Endpoint = "http://[fd00:ec2::254]"
+)
+
+// New returns an initialized Client based on the functional options. Provide
+// additional functional options to further configure the behavior of the client,
+// such as changing the client's endpoint or adding custom middleware behavior.
+func New(options Options, optFns ...func(*Options)) *Client {
+	options = options.Copy()
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	options.HTTPClient = resolveHTTPClient(options.HTTPClient)
+
+	if options.Retryer == nil {
+		options.Retryer = retry.NewStandard()
+	}
+	options.Retryer = retry.AddWithMaxBackoffDelay(options.Retryer, 1*time.Second)
+
+	if options.ClientEnableState == ClientDefaultEnableState {
+		if v := os.Getenv(disableClientEnvVar); strings.EqualFold(v, "true") {
+			options.ClientEnableState = ClientDisabled
+		}
+	}
+
+	if len(options.Endpoint) == 0 {
+		if v := os.Getenv(endpointEnvVar); len(v) != 0 {
+			options.Endpoint = v
+		}
+	}
+
+	client := &Client{
+		options: options,
+	}
+
+	if client.options.tokenProvider == nil && !client.options.disableAPIToken {
+		client.options.tokenProvider = newTokenProvider(client, defaultTokenTTL)
+	}
+
+	return client
+}
+
+// NewFromConfig returns an initialized Client based on the AWS SDK config and
+// functional options. Provide additional functional options to further
+// configure the behavior of the client, such as changing the client's endpoint
+// or adding custom middleware behavior.
+func NewFromConfig(cfg aws.Config, optFns ...func(*Options)) *Client {
+	opts := Options{
+		APIOptions:    append([]func(*middleware.Stack) error{}, cfg.APIOptions...),
+		HTTPClient:    cfg.HTTPClient,
+		ClientLogMode: cfg.ClientLogMode,
+		Logger:        cfg.Logger,
+	}
+
+	if cfg.Retryer != nil {
+		opts.Retryer = cfg.Retryer()
+	}
+
+	resolveClientEnableState(cfg, &opts)
+	resolveEndpointConfig(cfg, &opts)
+	resolveEndpointModeConfig(cfg, &opts)
+	resolveEnableFallback(cfg, &opts)
+
+	return New(opts, optFns...)
+}
+
+// Options provides the fields for configuring the API client's behavior.
+type Options struct {
+	// Set of options to modify how an operation is invoked. These apply to all
+	// operations invoked for this client. Use functional options on operation
+	// call to modify this list for per operation behavior.
+	APIOptions []func(*middleware.Stack) error
+
+	// The endpoint the client will use to retrieve EC2 instance metadata.
+	//
+	// Specifies the EC2 Instance Metadata Service endpoint to use. If specified it overrides EndpointMode.
+	//
+	// If unset, and the environment variable AWS_EC2_METADATA_SERVICE_ENDPOINT
+	// has a value the client will use the value of the environment variable as
+	// the endpoint for operation calls.
+	//
+	//    AWS_EC2_METADATA_SERVICE_ENDPOINT=http://[::1]
+	Endpoint string
+
+	// The endpoint selection mode the client will use if no explicit endpoint is provided using the Endpoint field.
+	//
+	// Setting EndpointMode to EndpointModeStateIPv4 will configure the client to use the default EC2 IPv4 endpoint.
+	// Setting EndpointMode to EndpointModeStateIPv6 will configure the client to use the default EC2 IPv6 endpoint.
+	//
+	// By default, if EndpointMode is not set (EndpointModeStateUnset), the client uses the default endpoint selection mode EndpointModeStateIPv4.
+	EndpointMode EndpointModeState
+
+	// The HTTP client to invoke API calls with. Defaults to client's default
+	// HTTP implementation if nil.
+	HTTPClient HTTPClient
+
+	// Retryer guides how HTTP requests should be retried in case of recoverable
+	// failures. When nil the API client will use a default retryer.
+	Retryer aws.Retryer
+
+	// Controls whether the EC2 Instance Metadata client is enabled. The
+	// client defaults to enabled unless set to ClientDisabled. When the
+	// client is disabled it will return an error for all operation calls.
+	//
+	// If ClientEnableState value is ClientDefaultEnableState (default value),
+	// and the environment variable "AWS_EC2_METADATA_DISABLED" is set to
+	// "true", the client will be disabled.
+	//
+	//    AWS_EC2_METADATA_DISABLED=true
+	ClientEnableState ClientEnableState
+
+	// Configures the events that will be sent to the configured logger.
+	ClientLogMode aws.ClientLogMode
+
+	// The logger writer interface to write logging messages to.
+	Logger logging.Logger
+
+	// Configure IMDSv1 fallback behavior. By default, the client will attempt
+	// to fall back to IMDSv1 as needed for backwards compatibility. When set to [aws.FalseTernary]
+	// the client will return any errors encountered from attempting to fetch a token
+	// instead of silently using the insecure data flow of IMDSv1.
+	//
+	// See [configuring IMDS] for more information.
+	//
+	// [configuring IMDS]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html
+	EnableFallback aws.Ternary
+
+	// By default, all IMDS client operations enforce a 5-second timeout. You
+	// can disable that behavior with this setting.
+	DisableDefaultTimeout bool
+
+	// provides the caching of API tokens used for operation calls. If unset,
+	// the API token will not be retrieved for the operation.
+	tokenProvider *tokenProvider
+
+	// option to disable the API token provider for testing.
+	disableAPIToken bool
+}
+
+// HTTPClient provides the interface for a client making HTTP requests with the
+// API.
+type HTTPClient interface {
+	Do(*http.Request) (*http.Response, error)
+}
+
+// Copy creates a copy of the API options.
+func (o Options) Copy() Options {
+	to := o
+	to.APIOptions = append([]func(*middleware.Stack) error{}, o.APIOptions...)
+	return to
+}
+
+// WithAPIOptions wraps the API middleware functions, as a functional option
+// for the API Client Options. Use this helper to add additional functional
+// options to the API client, or operation calls.
+func WithAPIOptions(optFns ...func(*middleware.Stack) error) func(*Options) {
+	return func(o *Options) {
+		o.APIOptions = append(o.APIOptions, optFns...)
+	}
+}
+
+func (c *Client) invokeOperation(
+	ctx context.Context, opID string, params interface{}, optFns []func(*Options),
+	stackFns ...func(*middleware.Stack, Options) error,
+) (
+	result interface{}, metadata middleware.Metadata, err error,
+) {
+	stack := middleware.NewStack(opID, smithyhttp.NewStackRequest)
+	options := c.options.Copy()
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	if options.ClientEnableState == ClientDisabled {
+		return nil, metadata, &smithy.OperationError{
+			ServiceID:     ServiceID,
+			OperationName: opID,
+			Err: fmt.Errorf(
+				"access disabled to EC2 IMDS via client option, or %q environment variable",
+				disableClientEnvVar),
+		}
+	}
+
+	for _, fn := range stackFns {
+		if err := fn(stack, options); err != nil {
+			return nil, metadata, err
+		}
+	}
+
+	for _, fn := range options.APIOptions {
+		if err := fn(stack); err != nil {
+			return nil, metadata, err
+		}
+	}
+
+	handler := middleware.DecorateHandler(smithyhttp.NewClientHandler(options.HTTPClient), stack)
+	result, metadata, err = handler.Handle(ctx, params)
+	if err != nil {
+		return nil, metadata, &smithy.OperationError{
+			ServiceID:     ServiceID,
+			OperationName: opID,
+			Err:           err,
+		}
+	}
+
+	return result, metadata, err
+}
+
+const (
+	// HTTP client constants
+	defaultDialerTimeout         = 250 * time.Millisecond
+	defaultResponseHeaderTimeout = 500 * time.Millisecond
+)
+
+func resolveHTTPClient(client HTTPClient) HTTPClient {
+	if client == nil {
+		client = awshttp.NewBuildableClient()
+	}
+
+	if c, ok := client.(*awshttp.BuildableClient); ok {
+		client = c.
+			WithDialerOptions(func(d *net.Dialer) {
+				// Use a custom Dial timeout for the EC2 Metadata service to account
+				// for the possibility the application might not be running in an
+				// environment with the service present. The client should fail fast in
+				// this case.
+				d.Timeout = defaultDialerTimeout
+			}).
+			WithTransportOptions(func(tr *http.Transport) {
+				// Use a custom Transport timeout for the EC2 Metadata service to
+				// account for the possibility that the application might be running in
+				// a container, and EC2Metadata service drops the connection after a
+				// single IP Hop. The client should fail fast in this case.
+				tr.ResponseHeaderTimeout = defaultResponseHeaderTimeout
+			})
+	}
+
+	return client
+}
+
+func resolveClientEnableState(cfg aws.Config, options *Options) error {
+	if options.ClientEnableState != ClientDefaultEnableState {
+		return nil
+	}
+	value, found, err := internalconfig.ResolveClientEnableState(cfg.ConfigSources)
+	if err != nil || !found {
+		return err
+	}
+	options.ClientEnableState = value
+	return nil
+}
+
+func resolveEndpointModeConfig(cfg aws.Config, options *Options) error {
+	if options.EndpointMode != EndpointModeStateUnset {
+		return nil
+	}
+	value, found, err := internalconfig.ResolveEndpointModeConfig(cfg.ConfigSources)
+	if err != nil || !found {
+		return err
+	}
+	options.EndpointMode = value
+	return nil
+}
+
+func resolveEndpointConfig(cfg aws.Config, options *Options) error {
+	if len(options.Endpoint) != 0 {
+		return nil
+	}
+	value, found, err := internalconfig.ResolveEndpointConfig(cfg.ConfigSources)
+	if err != nil || !found {
+		return err
+	}
+	options.Endpoint = value
+	return nil
+}
+
+func resolveEnableFallback(cfg aws.Config, options *Options) {
+	if options.EnableFallback != aws.UnknownTernary {
+		return
+	}
+
+	disabled, ok := internalconfig.ResolveV1FallbackDisabled(cfg.ConfigSources)
+	if !ok {
+		return
+	}
+
+	if disabled {
+		options.EnableFallback = aws.FalseTernary
+	} else {
+		options.EnableFallback = aws.TrueTernary
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetDynamicData.go 🔗

@@ -0,0 +1,77 @@
+package imds
+
+import (
+	"context"
+	"fmt"
+	"io"
+
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+const getDynamicDataPath = "/latest/dynamic"
+
+// GetDynamicData uses the path provided to request information from the EC2
+// instance metadata service for dynamic data. The content is returned as an
+// io.ReadCloser in the output, or an error if the request fails.
+func (c *Client) GetDynamicData(ctx context.Context, params *GetDynamicDataInput, optFns ...func(*Options)) (*GetDynamicDataOutput, error) {
+	if params == nil {
+		params = &GetDynamicDataInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "GetDynamicData", params, optFns,
+		addGetDynamicDataMiddleware,
+	)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*GetDynamicDataOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+// GetDynamicDataInput provides the input parameters for the GetDynamicData
+// operation.
+type GetDynamicDataInput struct {
+	// The relative dynamic data path to retrieve. Can be empty string to
+	// retrieve a response containing a new line separated list of dynamic data
+	// resources available.
+	//
+	// Must not include the dynamic data base path.
+	//
+	// May include leading slash. If Path includes trailing slash the trailing
+	// slash will be included in the request for the resource.
+	Path string
+}
+
+// GetDynamicDataOutput provides the output parameters for the GetDynamicData
+// operation.
+type GetDynamicDataOutput struct {
+	Content io.ReadCloser
+
+	ResultMetadata middleware.Metadata
+}
+
+func addGetDynamicDataMiddleware(stack *middleware.Stack, options Options) error {
+	return addAPIRequestMiddleware(stack,
+		options,
+		"GetDynamicData",
+		buildGetDynamicDataPath,
+		buildGetDynamicDataOutput)
+}
+
+func buildGetDynamicDataPath(params interface{}) (string, error) {
+	p, ok := params.(*GetDynamicDataInput)
+	if !ok {
+		return "", fmt.Errorf("unknown parameter type %T", params)
+	}
+
+	return appendURIPath(getDynamicDataPath, p.Path), nil
+}
+
+func buildGetDynamicDataOutput(resp *smithyhttp.Response) (interface{}, error) {
+	return &GetDynamicDataOutput{
+		Content: resp.Body,
+	}, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetIAMInfo.go 🔗

@@ -0,0 +1,103 @@
+package imds
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"io"
+	"strings"
+	"time"
+
+	"github.com/aws/smithy-go"
+	smithyio "github.com/aws/smithy-go/io"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+const getIAMInfoPath = getMetadataPath + "/iam/info"
+
+// GetIAMInfo retrieves the IAM information associated with the instance. An
+// error is returned if the request fails or the response cannot be parsed.
+func (c *Client) GetIAMInfo(
+	ctx context.Context, params *GetIAMInfoInput, optFns ...func(*Options),
+) (
+	*GetIAMInfoOutput, error,
+) {
+	if params == nil {
+		params = &GetIAMInfoInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "GetIAMInfo", params, optFns,
+		addGetIAMInfoMiddleware,
+	)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*GetIAMInfoOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+// GetIAMInfoInput provides the input parameters for GetIAMInfo operation.
+type GetIAMInfoInput struct{}
+
+// GetIAMInfoOutput provides the output parameters for GetIAMInfo operation.
+type GetIAMInfoOutput struct {
+	IAMInfo
+
+	ResultMetadata middleware.Metadata
+}
+
+func addGetIAMInfoMiddleware(stack *middleware.Stack, options Options) error {
+	return addAPIRequestMiddleware(stack,
+		options,
+		"GetIAMInfo",
+		buildGetIAMInfoPath,
+		buildGetIAMInfoOutput,
+	)
+}
+
+func buildGetIAMInfoPath(params interface{}) (string, error) {
+	return getIAMInfoPath, nil
+}
+
+func buildGetIAMInfoOutput(resp *smithyhttp.Response) (v interface{}, err error) {
+	defer func() {
+		closeErr := resp.Body.Close()
+		if err == nil {
+			err = closeErr
+		} else if closeErr != nil {
+			err = fmt.Errorf("response body close error: %v, original error: %w", closeErr, err)
+		}
+	}()
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(resp.Body, ringBuffer)
+
+	imdsResult := &GetIAMInfoOutput{}
+	if err = json.NewDecoder(body).Decode(&imdsResult.IAMInfo); err != nil {
+		return nil, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode instance identity document, %w", err),
+			Snapshot: ringBuffer.Bytes(),
+		}
+	}
+	// Any code other than success is an error
+	if !strings.EqualFold(imdsResult.Code, "success") {
+		return nil, fmt.Errorf("failed to get EC2 IMDS IAM info, %s",
+			imdsResult.Code)
+	}
+
+	return imdsResult, nil
+}
+
+// IAMInfo provides the shape for unmarshaling an IAM info from the metadata
+// API.
+type IAMInfo struct {
+	Code               string
+	LastUpdated        time.Time
+	InstanceProfileArn string
+	InstanceProfileID  string
+}

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetInstanceIdentityDocument.go 🔗

@@ -0,0 +1,110 @@
+package imds
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"io"
+	"time"
+
+	"github.com/aws/smithy-go"
+	smithyio "github.com/aws/smithy-go/io"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+const getInstanceIdentityDocumentPath = getDynamicDataPath + "/instance-identity/document"
+
+// GetInstanceIdentityDocument retrieves an identity document describing an
+// instance. An error is returned if the request fails or the response cannot
+// be parsed.
+func (c *Client) GetInstanceIdentityDocument(
+	ctx context.Context, params *GetInstanceIdentityDocumentInput, optFns ...func(*Options),
+) (
+	*GetInstanceIdentityDocumentOutput, error,
+) {
+	if params == nil {
+		params = &GetInstanceIdentityDocumentInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "GetInstanceIdentityDocument", params, optFns,
+		addGetInstanceIdentityDocumentMiddleware,
+	)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*GetInstanceIdentityDocumentOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+// GetInstanceIdentityDocumentInput provides the input parameters for
+// GetInstanceIdentityDocument operation.
+type GetInstanceIdentityDocumentInput struct{}
+
+// GetInstanceIdentityDocumentOutput provides the output parameters for
+// GetInstanceIdentityDocument operation.
+type GetInstanceIdentityDocumentOutput struct {
+	InstanceIdentityDocument
+
+	ResultMetadata middleware.Metadata
+}
+
+func addGetInstanceIdentityDocumentMiddleware(stack *middleware.Stack, options Options) error {
+	return addAPIRequestMiddleware(stack,
+		options,
+		"GetInstanceIdentityDocument",
+		buildGetInstanceIdentityDocumentPath,
+		buildGetInstanceIdentityDocumentOutput,
+	)
+}
+
+func buildGetInstanceIdentityDocumentPath(params interface{}) (string, error) {
+	return getInstanceIdentityDocumentPath, nil
+}
+
+func buildGetInstanceIdentityDocumentOutput(resp *smithyhttp.Response) (v interface{}, err error) {
+	defer func() {
+		closeErr := resp.Body.Close()
+		if err == nil {
+			err = closeErr
+		} else if closeErr != nil {
+			err = fmt.Errorf("response body close error: %v, original error: %w", closeErr, err)
+		}
+	}()
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(resp.Body, ringBuffer)
+
+	output := &GetInstanceIdentityDocumentOutput{}
+	if err = json.NewDecoder(body).Decode(&output.InstanceIdentityDocument); err != nil {
+		return nil, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode instance identity document, %w", err),
+			Snapshot: ringBuffer.Bytes(),
+		}
+	}
+
+	return output, nil
+}
+
+// InstanceIdentityDocument provides the shape for unmarshaling
+// an instance identity document
+type InstanceIdentityDocument struct {
+	DevpayProductCodes      []string  `json:"devpayProductCodes"`
+	MarketplaceProductCodes []string  `json:"marketplaceProductCodes"`
+	AvailabilityZone        string    `json:"availabilityZone"`
+	PrivateIP               string    `json:"privateIp"`
+	Version                 string    `json:"version"`
+	Region                  string    `json:"region"`
+	InstanceID              string    `json:"instanceId"`
+	BillingProducts         []string  `json:"billingProducts"`
+	InstanceType            string    `json:"instanceType"`
+	AccountID               string    `json:"accountId"`
+	PendingTime             time.Time `json:"pendingTime"`
+	ImageID                 string    `json:"imageId"`
+	KernelID                string    `json:"kernelId"`
+	RamdiskID               string    `json:"ramdiskId"`
+	Architecture            string    `json:"architecture"`
+}

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetMetadata.go 🔗

@@ -0,0 +1,77 @@
+package imds
+
+import (
+	"context"
+	"fmt"
+	"io"
+
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+const getMetadataPath = "/latest/meta-data"
+
+// GetMetadata uses the path provided to request information from the Amazon
+// EC2 Instance Metadata Service. The content is returned as an io.ReadCloser
+// in the output, or an error if the request fails.
+func (c *Client) GetMetadata(ctx context.Context, params *GetMetadataInput, optFns ...func(*Options)) (*GetMetadataOutput, error) {
+	if params == nil {
+		params = &GetMetadataInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "GetMetadata", params, optFns,
+		addGetMetadataMiddleware,
+	)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*GetMetadataOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+// GetMetadataInput provides the input parameters for the GetMetadata
+// operation.
+type GetMetadataInput struct {
+	// The relative metadata path to retrieve. May be an empty string to
+	// retrieve a newline-separated list of the metadata resources available.
+	//
+	// Must not include the metadata base path.
+	//
+	// May include leading slash. If Path includes trailing slash the trailing slash
+	// will be included in the request for the resource.
+	Path string
+}
+
+// GetMetadataOutput provides the output parameters for the GetMetadata
+// operation.
+type GetMetadataOutput struct {
+	Content io.ReadCloser
+
+	ResultMetadata middleware.Metadata
+}
+
+func addGetMetadataMiddleware(stack *middleware.Stack, options Options) error {
+	return addAPIRequestMiddleware(stack,
+		options,
+		"GetMetadata",
+		buildGetMetadataPath,
+		buildGetMetadataOutput)
+}
+
+func buildGetMetadataPath(params interface{}) (string, error) {
+	p, ok := params.(*GetMetadataInput)
+	if !ok {
+		return "", fmt.Errorf("unknown parameter type %T", params)
+	}
+
+	return appendURIPath(getMetadataPath, p.Path), nil
+}
+
+func buildGetMetadataOutput(resp *smithyhttp.Response) (interface{}, error) {
+	return &GetMetadataOutput{
+		Content: resp.Body,
+	}, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetRegion.go 🔗

@@ -0,0 +1,73 @@
+package imds
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// GetRegion retrieves the AWS Region the instance is running in, derived
+// from the instance identity document. An error is returned if the request
+// fails or the response cannot be parsed.
+func (c *Client) GetRegion(
+	ctx context.Context, params *GetRegionInput, optFns ...func(*Options),
+) (
+	*GetRegionOutput, error,
+) {
+	if params == nil {
+		params = &GetRegionInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "GetRegion", params, optFns,
+		addGetRegionMiddleware,
+	)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*GetRegionOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+// GetRegionInput provides the input parameters for GetRegion operation.
+type GetRegionInput struct{}
+
+// GetRegionOutput provides the output parameters for GetRegion operation.
+type GetRegionOutput struct {
+	Region string
+
+	ResultMetadata middleware.Metadata
+}
+
+func addGetRegionMiddleware(stack *middleware.Stack, options Options) error {
+	return addAPIRequestMiddleware(stack,
+		options,
+		"GetRegion",
+		buildGetInstanceIdentityDocumentPath,
+		buildGetRegionOutput,
+	)
+}
+
+func buildGetRegionOutput(resp *smithyhttp.Response) (interface{}, error) {
+	out, err := buildGetInstanceIdentityDocumentOutput(resp)
+	if err != nil {
+		return nil, err
+	}
+
+	result, ok := out.(*GetInstanceIdentityDocumentOutput)
+	if !ok {
+		return nil, fmt.Errorf("unexpected instance identity document type, %T", out)
+	}
+
+	region := result.Region
+	if len(region) == 0 {
+		return nil, fmt.Errorf("instance metadata did not return a region value")
+	}
+
+	return &GetRegionOutput{
+		Region: region,
+	}, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetToken.go 🔗

@@ -0,0 +1,119 @@
+package imds
+
+import (
+	"context"
+	"fmt"
+	"io"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+const getTokenPath = "/latest/api/token"
+const tokenTTLHeader = "X-Aws-Ec2-Metadata-Token-Ttl-Seconds"
+
+// getToken requests an API token from EC2 IMDS with the given TTL duration,
+// or returns an error if the request failed.
+func (c *Client) getToken(ctx context.Context, params *getTokenInput, optFns ...func(*Options)) (*getTokenOutput, error) {
+	if params == nil {
+		params = &getTokenInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "getToken", params, optFns,
+		addGetTokenMiddleware,
+	)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*getTokenOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type getTokenInput struct {
+	TokenTTL time.Duration
+}
+
+type getTokenOutput struct {
+	Token    string
+	TokenTTL time.Duration
+
+	ResultMetadata middleware.Metadata
+}
+
+func addGetTokenMiddleware(stack *middleware.Stack, options Options) error {
+	err := addRequestMiddleware(stack,
+		options,
+		"PUT",
+		"GetToken",
+		buildGetTokenPath,
+		buildGetTokenOutput)
+	if err != nil {
+		return err
+	}
+
+	err = stack.Serialize.Add(&tokenTTLRequestHeader{}, middleware.After)
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+func buildGetTokenPath(interface{}) (string, error) {
+	return getTokenPath, nil
+}
+
+func buildGetTokenOutput(resp *smithyhttp.Response) (v interface{}, err error) {
+	defer func() {
+		closeErr := resp.Body.Close()
+		if err == nil {
+			err = closeErr
+		} else if closeErr != nil {
+			err = fmt.Errorf("response body close error: %v, original error: %w", closeErr, err)
+		}
+	}()
+
+	ttlHeader := resp.Header.Get(tokenTTLHeader)
+	tokenTTL, err := strconv.ParseInt(ttlHeader, 10, 64)
+	if err != nil {
+		return nil, fmt.Errorf("unable to parse API token, %w", err)
+	}
+
+	var token strings.Builder
+	if _, err = io.Copy(&token, resp.Body); err != nil {
+		return nil, fmt.Errorf("unable to read API token, %w", err)
+	}
+
+	return &getTokenOutput{
+		Token:    token.String(),
+		TokenTTL: time.Duration(tokenTTL) * time.Second,
+	}, nil
+}
+
+type tokenTTLRequestHeader struct{}
+
+func (*tokenTTLRequestHeader) ID() string { return "tokenTTLRequestHeader" }
+func (*tokenTTLRequestHeader) HandleSerialize(
+	ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler,
+) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("expect HTTP transport, got %T", in.Request)
+	}
+
+	input, ok := in.Parameters.(*getTokenInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("expect getTokenInput, got %T", in.Parameters)
+	}
+
+	req.Header.Set(tokenTTLHeader, strconv.Itoa(int(input.TokenTTL/time.Second)))
+
+	return next.HandleSerialize(ctx, in)
+}

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/api_op_GetUserData.go 🔗

@@ -0,0 +1,61 @@
+package imds
+
+import (
+	"context"
+	"io"
+
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+const getUserDataPath = "/latest/user-data"
+
+// GetUserData requests the user data made available to the EC2 instance
+// through the instance metadata service. The content is returned as an
+// io.ReadCloser, or an error if the request failed.
+func (c *Client) GetUserData(ctx context.Context, params *GetUserDataInput, optFns ...func(*Options)) (*GetUserDataOutput, error) {
+	if params == nil {
+		params = &GetUserDataInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "GetUserData", params, optFns,
+		addGetUserDataMiddleware,
+	)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*GetUserDataOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+// GetUserDataInput provides the input parameters for the GetUserData
+// operation.
+type GetUserDataInput struct{}
+
+// GetUserDataOutput provides the output parameters for the GetUserData
+// operation.
+type GetUserDataOutput struct {
+	Content io.ReadCloser
+
+	ResultMetadata middleware.Metadata
+}
+
+func addGetUserDataMiddleware(stack *middleware.Stack, options Options) error {
+	return addAPIRequestMiddleware(stack,
+		options,
+		"GetUserData",
+		buildGetUserDataPath,
+		buildGetUserDataOutput)
+}
+
+func buildGetUserDataPath(params interface{}) (string, error) {
+	return getUserDataPath, nil
+}
+
+func buildGetUserDataOutput(resp *smithyhttp.Response) (interface{}, error) {
+	return &GetUserDataOutput{
+		Content: resp.Body,
+	}, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/auth.go 🔗

@@ -0,0 +1,48 @@
+package imds
+
+import (
+	"context"
+	"github.com/aws/smithy-go/middleware"
+)
+
+type getIdentityMiddleware struct {
+	options Options
+}
+
+func (*getIdentityMiddleware) ID() string {
+	return "GetIdentity"
+}
+
+func (m *getIdentityMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	return next.HandleFinalize(ctx, in)
+}
+
+type signRequestMiddleware struct {
+}
+
+func (*signRequestMiddleware) ID() string {
+	return "Signing"
+}
+
+func (m *signRequestMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	return next.HandleFinalize(ctx, in)
+}
+
+type resolveAuthSchemeMiddleware struct {
+	operation string
+	options   Options
+}
+
+func (*resolveAuthSchemeMiddleware) ID() string {
+	return "ResolveAuthScheme"
+}
+
+func (m *resolveAuthSchemeMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	return next.HandleFinalize(ctx, in)
+}

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/doc.go 🔗

@@ -0,0 +1,12 @@
+// Package imds provides the API client for interacting with the Amazon EC2
+// Instance Metadata Service.
+//
+// All Client operation calls have a default timeout. If the operation is not
+// completed before this timeout expires, the operation will be canceled. This
+// timeout can be overridden through the following:
+//   - Set the options flag DisableDefaultTimeout
+//   - Provide a Context with a timeout or deadline when calling the client's operations.
+//
+// See the EC2 IMDS user guide for more information on using the API.
+// https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
+package imds

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/endpoints.go 🔗

@@ -0,0 +1,20 @@
+package imds
+
+import (
+	"context"
+	"github.com/aws/smithy-go/middleware"
+)
+
+type resolveEndpointV2Middleware struct {
+	options Options
+}
+
+func (*resolveEndpointV2Middleware) ID() string {
+	return "ResolveEndpointV2"
+}
+
+func (m *resolveEndpointV2Middleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	return next.HandleFinalize(ctx, in)
+}

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/internal/config/resolvers.go 🔗

@@ -0,0 +1,114 @@
+package config
+
+import (
+	"fmt"
+	"strings"
+)
+
+// ClientEnableState provides an enumeration of whether the client is
+// enabled, disabled, or uses the default behavior.
+type ClientEnableState uint
+
+// Enumeration values for ClientEnableState
+const (
+	ClientDefaultEnableState ClientEnableState = iota
+	ClientDisabled
+	ClientEnabled
+)
+
+// EndpointModeState is the EC2 IMDS Endpoint Configuration Mode
+type EndpointModeState uint
+
+// Enumeration values for EndpointModeState
+const (
+	EndpointModeStateUnset EndpointModeState = iota
+	EndpointModeStateIPv4
+	EndpointModeStateIPv6
+)
+
+// SetFromString sets the EndpointModeState based on the provided string value. An empty string maps to EndpointModeStateUnset; any other unrecognized value returns an error.
+func (e *EndpointModeState) SetFromString(v string) error {
+	v = strings.TrimSpace(v)
+
+	switch {
+	case len(v) == 0:
+		*e = EndpointModeStateUnset
+	case strings.EqualFold(v, "IPv6"):
+		*e = EndpointModeStateIPv6
+	case strings.EqualFold(v, "IPv4"):
+		*e = EndpointModeStateIPv4
+	default:
+		return fmt.Errorf("unknown EC2 IMDS endpoint mode, must be either IPv6 or IPv4")
+	}
+	return nil
+}
+
+// ClientEnableStateResolver is a config resolver interface for retrieving whether the IMDS client is disabled.
+type ClientEnableStateResolver interface {
+	GetEC2IMDSClientEnableState() (ClientEnableState, bool, error)
+}
+
+// EndpointModeResolver is a config resolver interface for retrieving the EndpointModeState configuration.
+type EndpointModeResolver interface {
+	GetEC2IMDSEndpointMode() (EndpointModeState, bool, error)
+}
+
+// EndpointResolver is a config resolver interface for retrieving the endpoint.
+type EndpointResolver interface {
+	GetEC2IMDSEndpoint() (string, bool, error)
+}
+
+type v1FallbackDisabledResolver interface {
+	GetEC2IMDSV1FallbackDisabled() (bool, bool)
+}
+
+// ResolveClientEnableState resolves the ClientEnableState from a list of configuration sources.
+func ResolveClientEnableState(sources []interface{}) (value ClientEnableState, found bool, err error) {
+	for _, source := range sources {
+		if resolver, ok := source.(ClientEnableStateResolver); ok {
+			value, found, err = resolver.GetEC2IMDSClientEnableState()
+			if err != nil || found {
+				return value, found, err
+			}
+		}
+	}
+	return value, found, err
+}
+
+// ResolveEndpointModeConfig resolves the EndpointModeState from a list of configuration sources.
+func ResolveEndpointModeConfig(sources []interface{}) (value EndpointModeState, found bool, err error) {
+	for _, source := range sources {
+		if resolver, ok := source.(EndpointModeResolver); ok {
+			value, found, err = resolver.GetEC2IMDSEndpointMode()
+			if err != nil || found {
+				return value, found, err
+			}
+		}
+	}
+	return value, found, err
+}
+
+// ResolveEndpointConfig resolves the endpoint from a list of configuration sources.
+func ResolveEndpointConfig(sources []interface{}) (value string, found bool, err error) {
+	for _, source := range sources {
+		if resolver, ok := source.(EndpointResolver); ok {
+			value, found, err = resolver.GetEC2IMDSEndpoint()
+			if err != nil || found {
+				return value, found, err
+			}
+		}
+	}
+	return value, found, err
+}
+
+// ResolveV1FallbackDisabled resolves whether EC2 IMDSv1 fallback is disabled
+// from a list of configuration sources.
+func ResolveV1FallbackDisabled(sources []interface{}) (bool, bool) {
+	for _, source := range sources {
+		if resolver, ok := source.(v1FallbackDisabledResolver); ok {
+			if v, found := resolver.GetEC2IMDSV1FallbackDisabled(); found {
+				return v, true
+			}
+		}
+	}
+	return false, false
+}
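The resolvers above all share one pattern: walk a list of heterogeneous config sources, type-assert each against the resolver interface, skip sources that don't implement it, and stop at the first source that reports found (or errors). A stdlib-only sketch of that pattern (the `endpointResolver` and `envSource` names are illustrative, not the SDK's types):

```go
package main

import "fmt"

// endpointResolver mirrors the EndpointResolver interface above.
type endpointResolver interface {
	GetEC2IMDSEndpoint() (string, bool, error)
}

// envSource is a hypothetical config source; an empty endpoint means
// "not configured here", so resolution falls through to the next source.
type envSource struct{ endpoint string }

func (s envSource) GetEC2IMDSEndpoint() (string, bool, error) {
	return s.endpoint, s.endpoint != "", nil
}

// resolveEndpoint reproduces ResolveEndpointConfig: sources that don't
// implement the interface are skipped; the first found value (or error) wins.
func resolveEndpoint(sources []interface{}) (string, bool, error) {
	for _, source := range sources {
		if r, ok := source.(endpointResolver); ok {
			if v, found, err := r.GetEC2IMDSEndpoint(); err != nil || found {
				return v, found, err
			}
		}
	}
	return "", false, nil
}

func main() {
	sources := []interface{}{
		"not a resolver", // ignored: wrong type
		envSource{},      // implements the interface but reports not-found
		envSource{endpoint: "http://169.254.169.254"},
	}
	v, found, _ := resolveEndpoint(sources)
	fmt.Println(v, found)
}
```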

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/request_middleware.go 🔗

@@ -0,0 +1,313 @@
+package imds
+
+import (
+	"bytes"
+	"context"
+	"fmt"
+	"io/ioutil"
+	"net/url"
+	"path"
+	"time"
+
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/aws/retry"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+func addAPIRequestMiddleware(stack *middleware.Stack,
+	options Options,
+	operation string,
+	getPath func(interface{}) (string, error),
+	getOutput func(*smithyhttp.Response) (interface{}, error),
+) (err error) {
+	err = addRequestMiddleware(stack, options, "GET", operation, getPath, getOutput)
+	if err != nil {
+		return err
+	}
+
+	// Token Serializer build and state management.
+	if !options.disableAPIToken {
+		err = stack.Finalize.Insert(options.tokenProvider, (*retry.Attempt)(nil).ID(), middleware.After)
+		if err != nil {
+			return err
+		}
+
+		err = stack.Deserialize.Insert(options.tokenProvider, "OperationDeserializer", middleware.Before)
+		if err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func addRequestMiddleware(stack *middleware.Stack,
+	options Options,
+	method string,
+	operation string,
+	getPath func(interface{}) (string, error),
+	getOutput func(*smithyhttp.Response) (interface{}, error),
+) (err error) {
+	err = awsmiddleware.AddSDKAgentKey(awsmiddleware.FeatureMetadata, "ec2-imds")(stack)
+	if err != nil {
+		return err
+	}
+
+	// Operation timeout
+	err = stack.Initialize.Add(&operationTimeout{
+		Disabled:       options.DisableDefaultTimeout,
+		DefaultTimeout: defaultOperationTimeout,
+	}, middleware.Before)
+	if err != nil {
+		return err
+	}
+
+	// Operation Serializer
+	err = stack.Serialize.Add(&serializeRequest{
+		GetPath: getPath,
+		Method:  method,
+	}, middleware.After)
+	if err != nil {
+		return err
+	}
+
+	// Operation endpoint resolver
+	err = stack.Serialize.Insert(&resolveEndpoint{
+		Endpoint:     options.Endpoint,
+		EndpointMode: options.EndpointMode,
+	}, "OperationSerializer", middleware.Before)
+	if err != nil {
+		return err
+	}
+
+	// Operation Deserializer
+	err = stack.Deserialize.Add(&deserializeResponse{
+		GetOutput: getOutput,
+	}, middleware.After)
+	if err != nil {
+		return err
+	}
+
+	err = stack.Deserialize.Add(&smithyhttp.RequestResponseLogger{
+		LogRequest:          options.ClientLogMode.IsRequest(),
+		LogRequestWithBody:  options.ClientLogMode.IsRequestWithBody(),
+		LogResponse:         options.ClientLogMode.IsResponse(),
+		LogResponseWithBody: options.ClientLogMode.IsResponseWithBody(),
+	}, middleware.After)
+	if err != nil {
+		return err
+	}
+
+	err = addSetLoggerMiddleware(stack, options)
+	if err != nil {
+		return err
+	}
+
+	if err := addProtocolFinalizerMiddlewares(stack, options, operation); err != nil {
+		return fmt.Errorf("add protocol finalizers: %w", err)
+	}
+
+	// Retry support
+	return retry.AddRetryMiddlewares(stack, retry.AddRetryMiddlewaresOptions{
+		Retryer:          options.Retryer,
+		LogRetryAttempts: options.ClientLogMode.IsRetries(),
+	})
+}
+
+func addSetLoggerMiddleware(stack *middleware.Stack, o Options) error {
+	return middleware.AddSetLoggerMiddleware(stack, o.Logger)
+}
+
+type serializeRequest struct {
+	GetPath func(interface{}) (string, error)
+	Method  string
+}
+
+func (*serializeRequest) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *serializeRequest) HandleSerialize(
+	ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler,
+) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	reqPath, err := m.GetPath(in.Parameters)
+	if err != nil {
+		return out, metadata, fmt.Errorf("unable to get request URL path, %w", err)
+	}
+
+	request.Request.URL.Path = reqPath
+	request.Request.Method = m.Method
+
+	return next.HandleSerialize(ctx, in)
+}
+
+type deserializeResponse struct {
+	GetOutput func(*smithyhttp.Response) (interface{}, error)
+}
+
+func (*deserializeResponse) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *deserializeResponse) HandleDeserialize(
+	ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler,
+) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	resp, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, fmt.Errorf(
+			"unexpected transport response type, %T, want %T", out.RawResponse, resp)
+	}
+	defer resp.Body.Close()
+
+	// read the full body so that any operation timeouts cleanup will not race
+	// the body being read.
+	body, err := ioutil.ReadAll(resp.Body)
+	if err != nil {
+		return out, metadata, fmt.Errorf("read response body failed, %w", err)
+	}
+	resp.Body = ioutil.NopCloser(bytes.NewReader(body))
+
+	// Any status code outside 200 <= code < 300 is an error.
+	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
+		return out, metadata, &smithyhttp.ResponseError{
+			Response: resp,
+			Err:      fmt.Errorf("request to EC2 IMDS failed"),
+		}
+	}
+
+	result, err := m.GetOutput(resp)
+	if err != nil {
+		return out, metadata, fmt.Errorf(
+			"unable to get deserialized result for response, %w", err,
+		)
+	}
+	out.Result = result
+
+	return out, metadata, err
+}
+
+type resolveEndpoint struct {
+	Endpoint     string
+	EndpointMode EndpointModeState
+}
+
+func (*resolveEndpoint) ID() string {
+	return "ResolveEndpoint"
+}
+
+func (m *resolveEndpoint) HandleSerialize(
+	ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler,
+) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	var endpoint string
+	if len(m.Endpoint) > 0 {
+		endpoint = m.Endpoint
+	} else {
+		switch m.EndpointMode {
+		case EndpointModeStateIPv6:
+			endpoint = defaultIPv6Endpoint
+		case EndpointModeStateIPv4:
+			fallthrough
+		case EndpointModeStateUnset:
+			endpoint = defaultIPv4Endpoint
+		default:
+			return out, metadata, fmt.Errorf("unsupported IMDS endpoint mode")
+		}
+	}
+
+	req.URL, err = url.Parse(endpoint)
+	if err != nil {
+		return out, metadata, fmt.Errorf("failed to parse endpoint URL: %w", err)
+	}
+
+	return next.HandleSerialize(ctx, in)
+}
+
+const (
+	defaultOperationTimeout = 5 * time.Second
+)
+
+// operationTimeout adds a timeout on the middleware stack if the Context the
+// stack was called with does not have a deadline. The next middleware must
+// complete before the timeout, or the context will be canceled.
+//
+// If DefaultTimeout is zero, no default timeout will be used if the Context
+// does not have a timeout.
+//
+// The next middleware must also ensure that any resources that are also
+// canceled by the stack's context are completely consumed before returning.
+// Otherwise the timeout cleanup will race the resource being consumed
+// upstream.
+type operationTimeout struct {
+	Disabled       bool
+	DefaultTimeout time.Duration
+}
+
+func (*operationTimeout) ID() string { return "OperationTimeout" }
+
+func (m *operationTimeout) HandleInitialize(
+	ctx context.Context, input middleware.InitializeInput, next middleware.InitializeHandler,
+) (
+	output middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	if m.Disabled {
+		return next.HandleInitialize(ctx, input)
+	}
+
+	if _, ok := ctx.Deadline(); !ok && m.DefaultTimeout != 0 {
+		var cancelFn func()
+		ctx, cancelFn = context.WithTimeout(ctx, m.DefaultTimeout)
+		defer cancelFn()
+	}
+
+	return next.HandleInitialize(ctx, input)
+}
+
+// appendURIPath joins a URI path component to the existing path with `/`
+// separators between the path components. If the path being added ends with a
+// trailing `/` that slash will be maintained.
+func appendURIPath(base, add string) string {
+	reqPath := path.Join(base, add)
+	if len(add) != 0 && add[len(add)-1] == '/' {
+		reqPath += "/"
+	}
+	return reqPath
+}
+
+func addProtocolFinalizerMiddlewares(stack *middleware.Stack, options Options, operation string) error {
+	if err := stack.Finalize.Add(&resolveAuthSchemeMiddleware{operation: operation, options: options}, middleware.Before); err != nil {
+		return fmt.Errorf("add ResolveAuthScheme: %w", err)
+	}
+	if err := stack.Finalize.Insert(&getIdentityMiddleware{options: options}, "ResolveAuthScheme", middleware.After); err != nil {
+		return fmt.Errorf("add GetIdentity: %w", err)
+	}
+	if err := stack.Finalize.Insert(&resolveEndpointV2Middleware{options: options}, "GetIdentity", middleware.After); err != nil {
+		return fmt.Errorf("add ResolveEndpointV2: %w", err)
+	}
+	if err := stack.Finalize.Insert(&signRequestMiddleware{}, "ResolveEndpointV2", middleware.After); err != nil {
+		return fmt.Errorf("add Signing: %w", err)
+	}
+	return nil
+}
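A subtlety in `appendURIPath` above: `path.Join` normalizes duplicate slashes but also strips a trailing `/`, which IMDS treats as significant (a trailing slash requests a listing), so it must be re-added. A stdlib-only sketch reproducing that behavior (the `joinURIPath` name is illustrative):

```go
package main

import (
	"fmt"
	"path"
)

// joinURIPath reproduces appendURIPath above: path.Join cleans the joined
// path but drops a trailing "/", so it is restored when the appended
// component ends with one.
func joinURIPath(base, add string) string {
	p := path.Join(base, add)
	if len(add) != 0 && add[len(add)-1] == '/' {
		p += "/"
	}
	return p
}

func main() {
	fmt.Println(joinURIPath("/latest/meta-data", "iam/info"))
	fmt.Println(joinURIPath("/latest/meta-data", "/iam/")) // trailing slash kept
}
```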

vendor/github.com/aws/aws-sdk-go-v2/feature/ec2/imds/token_provider.go 🔗

@@ -0,0 +1,261 @@
+package imds
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/logging"
+	"net/http"
+	"sync"
+	"sync/atomic"
+	"time"
+
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+const (
+	// Headers for Token and TTL
+	tokenHeader     = "x-aws-ec2-metadata-token"
+	defaultTokenTTL = 5 * time.Minute
+)
+
+type tokenProvider struct {
+	client   *Client
+	tokenTTL time.Duration
+
+	token    *apiToken
+	tokenMux sync.RWMutex
+
+	disabled uint32 // Atomic updated
+}
+
+func newTokenProvider(client *Client, ttl time.Duration) *tokenProvider {
+	return &tokenProvider{
+		client:   client,
+		tokenTTL: ttl,
+	}
+}
+
+// apiToken provides the API token used by all operation calls for the EC2
+// Instance Metadata Service.
+type apiToken struct {
+	token   string
+	expires time.Time
+}
+
+var timeNow = time.Now
+
+// Expired returns if the token is expired.
+func (t *apiToken) Expired() bool {
+	// Calling Round(0) on the current time will truncate the monotonic reading only. Ensures credential expiry
+	// time is always based on reported wall-clock time.
+	return timeNow().Round(0).After(t.expires)
+}
+
+func (t *tokenProvider) ID() string { return "APITokenProvider" }
+
+// HandleFinalize is the finalize stack middleware that, if the token provider
+// is enabled, attempts to add the cached API token to the request. If the API
+// token is not cached, it will be retrieved in a separate API call, getToken.
+//
+// For retry attempts, handler must be added after attempt retryer.
+//
+// If request for getToken fails the token provider may be disabled from future
+// requests, depending on the response status code.
+func (t *tokenProvider) HandleFinalize(
+	ctx context.Context, input middleware.FinalizeInput, next middleware.FinalizeHandler,
+) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	if t.fallbackEnabled() && !t.enabled() {
+		// short-circuits to insecure data flow if token provider is disabled.
+		return next.HandleFinalize(ctx, input)
+	}
+
+	req, ok := input.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unexpected transport request type %T", input.Request)
+	}
+
+	tok, err := t.getToken(ctx)
+	if err != nil {
+		// If the error allows the token to downgrade to insecure flow allow that.
+		var bypassErr *bypassTokenRetrievalError
+		if errors.As(err, &bypassErr) {
+			return next.HandleFinalize(ctx, input)
+		}
+
+		return out, metadata, fmt.Errorf("failed to get API token, %w", err)
+	}
+
+	req.Header.Set(tokenHeader, tok.token)
+
+	return next.HandleFinalize(ctx, input)
+}
+
+// HandleDeserialize is the deserialize stack middleware for determining if the
+// operation the token provider is decorating failed because of a 401
+// unauthorized status code. If the operation failed for that reason the token
+// provider needs to be re-enabled so that it can start adding the API token to
+// operation calls.
+func (t *tokenProvider) HandleDeserialize(
+	ctx context.Context, input middleware.DeserializeInput, next middleware.DeserializeHandler,
+) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, input)
+	if err == nil {
+		return out, metadata, err
+	}
+
+	resp, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, fmt.Errorf("expect HTTP transport, got %T", out.RawResponse)
+	}
+
+	if resp.StatusCode == http.StatusUnauthorized { // unauthorized
+		t.enable()
+		err = &retryableError{Err: err, isRetryable: true}
+	}
+
+	return out, metadata, err
+}
+
+func (t *tokenProvider) getToken(ctx context.Context) (tok *apiToken, err error) {
+	if t.fallbackEnabled() && !t.enabled() {
+		return nil, &bypassTokenRetrievalError{
+			Err: fmt.Errorf("cannot get API token, provider disabled"),
+		}
+	}
+
+	t.tokenMux.RLock()
+	tok = t.token
+	t.tokenMux.RUnlock()
+
+	if tok != nil && !tok.Expired() {
+		return tok, nil
+	}
+
+	tok, err = t.updateToken(ctx)
+	if err != nil {
+		return nil, err
+	}
+
+	return tok, nil
+}
+
+func (t *tokenProvider) updateToken(ctx context.Context) (*apiToken, error) {
+	t.tokenMux.Lock()
+	defer t.tokenMux.Unlock()
+
+	// Prevent multiple requests to update retrieving the token.
+	if t.token != nil && !t.token.Expired() {
+		tok := t.token
+		return tok, nil
+	}
+
+	result, err := t.client.getToken(ctx, &getTokenInput{
+		TokenTTL: t.tokenTTL,
+	})
+	if err != nil {
+		var statusErr interface{ HTTPStatusCode() int }
+		if errors.As(err, &statusErr) {
+			switch statusErr.HTTPStatusCode() {
+			// Disable future get token if failed because of 403, 404, or 405
+			case http.StatusForbidden,
+				http.StatusNotFound,
+				http.StatusMethodNotAllowed:
+
+				if t.fallbackEnabled() {
+					logger := middleware.GetLogger(ctx)
+					logger.Logf(logging.Warn, "falling back to IMDSv1: %v", err)
+					t.disable()
+				}
+
+			// 400 errors are terminal, and need to be upstreamed
+			case http.StatusBadRequest:
+				return nil, err
+			}
+		}
+
+		// Disable if request send failed or timed out getting response
+		var re *smithyhttp.RequestSendError
+		var ce *smithy.CanceledError
+		if errors.As(err, &re) || errors.As(err, &ce) {
+			atomic.StoreUint32(&t.disabled, 1)
+		}
+
+		if !t.fallbackEnabled() {
+			// NOTE: getToken() is an implementation detail of some outer operation
+			// (e.g. GetMetadata). It has its own retries that have already been exhausted.
+			// Mark the underlying error as a terminal error.
+			err = &retryableError{Err: err, isRetryable: false}
+			return nil, err
+		}
+
+		// Token couldn't be retrieved, fallback to IMDSv1 insecure flow for this request
+		// and allow the request to proceed. Future requests _may_ re-attempt fetching a
+		// token if not disabled.
+		return nil, &bypassTokenRetrievalError{Err: err}
+	}
+
+	tok := &apiToken{
+		token:   result.Token,
+		expires: timeNow().Add(result.TokenTTL),
+	}
+	t.token = tok
+
+	return tok, nil
+}
+
+// enabled returns whether the token provider is currently enabled.
+func (t *tokenProvider) enabled() bool {
+	return atomic.LoadUint32(&t.disabled) == 0
+}
+
+// fallbackEnabled returns false if EnableFallback is [aws.FalseTernary], true otherwise
+func (t *tokenProvider) fallbackEnabled() bool {
+	switch t.client.options.EnableFallback {
+	case aws.FalseTernary:
+		return false
+	default:
+		return true
+	}
+}
+
+// disable disables the token provider so it will no longer attempt to inject
+// the token or request updates.
+func (t *tokenProvider) disable() {
+	atomic.StoreUint32(&t.disabled, 1)
+}
+
+// enable enables the token provider to start refreshing tokens, and adding
+// them to the pending request.
+func (t *tokenProvider) enable() {
+	t.tokenMux.Lock()
+	t.token = nil
+	t.tokenMux.Unlock()
+	atomic.StoreUint32(&t.disabled, 0)
+}
+
+type bypassTokenRetrievalError struct {
+	Err error
+}
+
+func (e *bypassTokenRetrievalError) Error() string {
+	return fmt.Sprintf("bypass token retrieval, %v", e.Err)
+}
+
+func (e *bypassTokenRetrievalError) Unwrap() error { return e.Err }
+
+type retryableError struct {
+	Err         error
+	isRetryable bool
+}
+
+func (e *retryableError) RetryableError() bool { return e.isRetryable }
+
+func (e *retryableError) Error() string { return e.Err.Error() }

vendor/github.com/aws/aws-sdk-go-v2/internal/auth/auth.go 🔗

@@ -0,0 +1,45 @@
+package auth
+
+import (
+	"github.com/aws/smithy-go/auth"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// HTTPAuthScheme is the SDK's internal implementation of smithyhttp.AuthScheme
+// for pre-existing implementations where the signer was added to client
+// config. SDK clients will key off of this type and ensure per-operation
+// updates to those signers persist on the scheme itself.
+type HTTPAuthScheme struct {
+	schemeID string
+	signer   smithyhttp.Signer
+}
+
+var _ smithyhttp.AuthScheme = (*HTTPAuthScheme)(nil)
+
+// NewHTTPAuthScheme returns an auth scheme instance with the given config.
+func NewHTTPAuthScheme(schemeID string, signer smithyhttp.Signer) *HTTPAuthScheme {
+	return &HTTPAuthScheme{
+		schemeID: schemeID,
+		signer:   signer,
+	}
+}
+
+// SchemeID identifies the auth scheme.
+func (s *HTTPAuthScheme) SchemeID() string {
+	return s.schemeID
+}
+
+// IdentityResolver gets the identity resolver for the auth scheme.
+func (s *HTTPAuthScheme) IdentityResolver(o auth.IdentityResolverOptions) auth.IdentityResolver {
+	return o.GetIdentityResolver(s.schemeID)
+}
+
+// Signer gets the signer for the auth scheme.
+func (s *HTTPAuthScheme) Signer() smithyhttp.Signer {
+	return s.signer
+}
+
+// WithSigner returns a new instance of the auth scheme with the updated signer.
+func (s *HTTPAuthScheme) WithSigner(signer smithyhttp.Signer) *HTTPAuthScheme {
+	return NewHTTPAuthScheme(s.schemeID, signer)
+}
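Note that `WithSigner` returns a fresh scheme instead of mutating the receiver, so per-operation signer overrides cannot leak into other requests sharing the original scheme. A self-contained sketch of that copy-on-write pattern (the `signer`/`authScheme` types here are local stand-ins, not the vendored smithy types):

```go
package main

import "fmt"

// signer is a local stand-in for smithyhttp.Signer.
type signer interface{ Name() string }

type namedSigner struct{ name string }

func (s namedSigner) Name() string { return s.name }

// authScheme mirrors HTTPAuthScheme: WithSigner builds a new value rather
// than mutating the receiver, keeping the original scheme safe to share.
type authScheme struct {
	schemeID string
	signer   signer
}

func (s *authScheme) WithSigner(sg signer) *authScheme {
	return &authScheme{schemeID: s.schemeID, signer: sg}
}

func main() {
	base := &authScheme{schemeID: "sigv4", signer: namedSigner{"default"}}
	override := base.WithSigner(namedSigner{"per-op"})

	fmt.Println(base.signer.Name())     // unchanged: default
	fmt.Println(override.signer.Name()) // per-op
}
```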

vendor/github.com/aws/aws-sdk-go-v2/internal/auth/scheme.go 🔗

@@ -0,0 +1,191 @@
+package auth
+
+import (
+	"context"
+	"fmt"
+
+	smithy "github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/middleware"
+)
+
+// SigV4 is a constant representing
+// Authentication Scheme Signature Version 4
+const SigV4 = "sigv4"
+
+// SigV4A is a constant representing
+// Authentication Scheme Signature Version 4A
+const SigV4A = "sigv4a"
+
+// SigV4S3Express identifies the S3 S3Express auth scheme.
+const SigV4S3Express = "sigv4-s3express"
+
+// None is a constant representing the
+// None Authentication Scheme
+const None = "none"
+
+// SupportedSchemes is a data structure
+// that indicates the list of supported AWS
+// authentication schemes
+var SupportedSchemes = map[string]bool{
+	SigV4:          true,
+	SigV4A:         true,
+	SigV4S3Express: true,
+	None:           true,
+}
+
+// AuthenticationScheme is a representation of
+// AWS authentication schemes
+type AuthenticationScheme interface {
+	isAuthenticationScheme()
+}
+
+// AuthenticationSchemeV4 is an AWS SigV4 representation
+type AuthenticationSchemeV4 struct {
+	Name                  string
+	SigningName           *string
+	SigningRegion         *string
+	DisableDoubleEncoding *bool
+}
+
+func (a *AuthenticationSchemeV4) isAuthenticationScheme() {}
+
+// AuthenticationSchemeV4A is an AWS SigV4A representation
+type AuthenticationSchemeV4A struct {
+	Name                  string
+	SigningName           *string
+	SigningRegionSet      []string
+	DisableDoubleEncoding *bool
+}
+
+func (a *AuthenticationSchemeV4A) isAuthenticationScheme() {}
+
+// AuthenticationSchemeNone is a representation for the none auth scheme
+type AuthenticationSchemeNone struct{}
+
+func (a *AuthenticationSchemeNone) isAuthenticationScheme() {}
+
+// NoAuthenticationSchemesFoundError is used in signaling
+// that no authentication schemes have been specified.
+type NoAuthenticationSchemesFoundError struct{}
+
+func (e *NoAuthenticationSchemesFoundError) Error() string {
+	return "No authentication schemes specified."
+}
+
+// UnSupportedAuthenticationSchemeSpecifiedError is used in
+// signaling that only unsupported authentication schemes
+// were specified.
+type UnSupportedAuthenticationSchemeSpecifiedError struct {
+	UnsupportedSchemes []string
+}
+
+func (e *UnSupportedAuthenticationSchemeSpecifiedError) Error() string {
+	return fmt.Sprintf("unsupported authentication scheme specified: %v", e.UnsupportedSchemes)
+}
+
+// GetAuthenticationSchemes extracts the relevant authentication scheme data
+// into a custom strongly typed Go data structure.
+func GetAuthenticationSchemes(p *smithy.Properties) ([]AuthenticationScheme, error) {
+	var result []AuthenticationScheme
+	if !p.Has("authSchemes") {
+		return nil, &NoAuthenticationSchemesFoundError{}
+	}
+
+	authSchemes, _ := p.Get("authSchemes").([]interface{})
+
+	var unsupportedSchemes []string
+	for _, scheme := range authSchemes {
+		authScheme, _ := scheme.(map[string]interface{})
+
+		version := authScheme["name"].(string)
+		switch version {
+		case SigV4, SigV4S3Express:
+			v4Scheme := AuthenticationSchemeV4{
+				Name:                  version,
+				SigningName:           getSigningName(authScheme),
+				SigningRegion:         getSigningRegion(authScheme),
+				DisableDoubleEncoding: getDisableDoubleEncoding(authScheme),
+			}
+			result = append(result, AuthenticationScheme(&v4Scheme))
+		case SigV4A:
+			v4aScheme := AuthenticationSchemeV4A{
+				Name:                  SigV4A,
+				SigningName:           getSigningName(authScheme),
+				SigningRegionSet:      getSigningRegionSet(authScheme),
+				DisableDoubleEncoding: getDisableDoubleEncoding(authScheme),
+			}
+			result = append(result, AuthenticationScheme(&v4aScheme))
+		case None:
+			noneScheme := AuthenticationSchemeNone{}
+			result = append(result, AuthenticationScheme(&noneScheme))
+		default:
+			unsupportedSchemes = append(unsupportedSchemes, authScheme["name"].(string))
+			continue
+		}
+	}
+
+	if len(result) == 0 {
+		return nil, &UnSupportedAuthenticationSchemeSpecifiedError{
+			UnsupportedSchemes: unsupportedSchemes,
+		}
+	}
+
+	return result, nil
+}
+
+type disableDoubleEncoding struct{}
+
+// SetDisableDoubleEncoding sets or modifies the disable double encoding option
+// on the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func SetDisableDoubleEncoding(ctx context.Context, value bool) context.Context {
+	return middleware.WithStackValue(ctx, disableDoubleEncoding{}, value)
+}
+
+// GetDisableDoubleEncoding retrieves the disable double encoding option
+// from the context.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func GetDisableDoubleEncoding(ctx context.Context) (value bool, ok bool) {
+	value, ok = middleware.GetStackValue(ctx, disableDoubleEncoding{}).(bool)
+	return value, ok
+}
+
+func getSigningName(authScheme map[string]interface{}) *string {
+	signingName, ok := authScheme["signingName"].(string)
+	if !ok || signingName == "" {
+		return nil
+	}
+	return &signingName
+}
+
+func getSigningRegionSet(authScheme map[string]interface{}) []string {
+	untypedSigningRegionSet, ok := authScheme["signingRegionSet"].([]interface{})
+	if !ok {
+		return nil
+	}
+	signingRegionSet := []string{}
+	for _, item := range untypedSigningRegionSet {
+		signingRegionSet = append(signingRegionSet, item.(string))
+	}
+	return signingRegionSet
+}
+
+func getSigningRegion(authScheme map[string]interface{}) *string {
+	signingRegion, ok := authScheme["signingRegion"].(string)
+	if !ok || signingRegion == "" {
+		return nil
+	}
+	return &signingRegion
+}
+
+func getDisableDoubleEncoding(authScheme map[string]interface{}) *bool {
+	disableDoubleEncoding, ok := authScheme["disableDoubleEncoding"].(bool)
+	if !ok {
+		return nil
+	}
+	return &disableDoubleEncoding
+}
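Endpoint resolution hands auth scheme data around as untyped `map[string]interface{}` values, and each getter above uses a comma-ok type assertion so a missing or mistyped key yields `nil` instead of a panic. A self-contained sketch of that extraction pattern (the helper name `getStringProp` is mine, modeled on `getSigningName`):

```go
package main

import "fmt"

// getStringProp mirrors getSigningName: a comma-ok assertion plus an
// empty-string check, returning nil when the property is absent, empty,
// or not a string.
func getStringProp(props map[string]interface{}, key string) *string {
	v, ok := props[key].(string)
	if !ok || v == "" {
		return nil
	}
	return &v
}

func main() {
	scheme := map[string]interface{}{
		"name":        "sigv4",
		"signingName": "bedrock",
	}

	if n := getStringProp(scheme, "signingName"); n != nil {
		fmt.Println(*n) // bedrock
	}
	// Missing keys (and wrong types) come back as nil, not a panic.
	fmt.Println(getStringProp(scheme, "signingRegion") == nil) // true
}
```

The pointer return lets callers distinguish "not specified" (`nil`) from an explicit value, which is why the scheme structs hold `*string` and `*bool` fields.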

vendor/github.com/aws/aws-sdk-go-v2/internal/auth/smithy/bearer_token_adapter.go 🔗

@@ -0,0 +1,43 @@
+package smithy
+
+import (
+	"context"
+	"fmt"
+	"time"
+
+	"github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/auth"
+	"github.com/aws/smithy-go/auth/bearer"
+)
+
+// BearerTokenAdapter adapts smithy bearer.Token to smithy auth.Identity.
+type BearerTokenAdapter struct {
+	Token bearer.Token
+}
+
+var _ auth.Identity = (*BearerTokenAdapter)(nil)
+
+// Expiration returns the time of expiration for the token.
+func (v *BearerTokenAdapter) Expiration() time.Time {
+	return v.Token.Expires
+}
+
+// BearerTokenProviderAdapter adapts smithy bearer.TokenProvider to smithy
+// auth.IdentityResolver.
+type BearerTokenProviderAdapter struct {
+	Provider bearer.TokenProvider
+}
+
+var _ (auth.IdentityResolver) = (*BearerTokenProviderAdapter)(nil)
+
+// GetIdentity retrieves a bearer token using the underlying provider.
+func (v *BearerTokenProviderAdapter) GetIdentity(ctx context.Context, _ smithy.Properties) (
+	auth.Identity, error,
+) {
+	token, err := v.Provider.RetrieveBearerToken(ctx)
+	if err != nil {
+		return nil, fmt.Errorf("get token: %w", err)
+	}
+
+	return &BearerTokenAdapter{Token: token}, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/auth/smithy/bearer_token_signer_adapter.go 🔗

@@ -0,0 +1,35 @@
+package smithy
+
+import (
+	"context"
+	"fmt"
+
+	"github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/auth"
+	"github.com/aws/smithy-go/auth/bearer"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// BearerTokenSignerAdapter adapts smithy bearer.Signer to smithy http
+// auth.Signer.
+type BearerTokenSignerAdapter struct {
+	Signer bearer.Signer
+}
+
+var _ (smithyhttp.Signer) = (*BearerTokenSignerAdapter)(nil)
+
+// SignRequest signs the request with the provided bearer token.
+func (v *BearerTokenSignerAdapter) SignRequest(ctx context.Context, r *smithyhttp.Request, identity auth.Identity, _ smithy.Properties) error {
+	ca, ok := identity.(*BearerTokenAdapter)
+	if !ok {
+		return fmt.Errorf("unexpected identity type: %T", identity)
+	}
+
+	signed, err := v.Signer.SignWithBearerToken(ctx, ca.Token, r)
+	if err != nil {
+		return fmt.Errorf("sign request: %w", err)
+	}
+
+	*r = *signed.(*smithyhttp.Request)
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/auth/smithy/credentials_adapter.go 🔗

@@ -0,0 +1,46 @@
+package smithy
+
+import (
+	"context"
+	"fmt"
+	"time"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/auth"
+)
+
+// CredentialsAdapter adapts aws.Credentials to auth.Identity.
+type CredentialsAdapter struct {
+	Credentials aws.Credentials
+}
+
+var _ auth.Identity = (*CredentialsAdapter)(nil)
+
+// Expiration returns the time of expiration for the credentials.
+func (v *CredentialsAdapter) Expiration() time.Time {
+	return v.Credentials.Expires
+}
+
+// CredentialsProviderAdapter adapts aws.CredentialsProvider to auth.IdentityResolver.
+type CredentialsProviderAdapter struct {
+	Provider aws.CredentialsProvider
+}
+
+var _ (auth.IdentityResolver) = (*CredentialsProviderAdapter)(nil)
+
+// GetIdentity retrieves AWS credentials using the underlying provider.
+func (v *CredentialsProviderAdapter) GetIdentity(ctx context.Context, _ smithy.Properties) (
+	auth.Identity, error,
+) {
+	if v.Provider == nil {
+		return &CredentialsAdapter{Credentials: aws.Credentials{}}, nil
+	}
+
+	creds, err := v.Provider.Retrieve(ctx)
+	if err != nil {
+		return nil, fmt.Errorf("get credentials: %w", err)
+	}
+
+	return &CredentialsAdapter{Credentials: creds}, nil
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/auth/smithy/v4signer_adapter.go 🔗

@@ -0,0 +1,57 @@
+package smithy
+
+import (
+	"context"
+	"fmt"
+
+	v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
+	internalcontext "github.com/aws/aws-sdk-go-v2/internal/context"
+	"github.com/aws/aws-sdk-go-v2/internal/sdk"
+	"github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/auth"
+	"github.com/aws/smithy-go/logging"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// V4SignerAdapter adapts v4.HTTPSigner to smithy http.Signer.
+type V4SignerAdapter struct {
+	Signer     v4.HTTPSigner
+	Logger     logging.Logger
+	LogSigning bool
+}
+
+var _ (smithyhttp.Signer) = (*V4SignerAdapter)(nil)
+
+// SignRequest signs the request with the provided identity.
+func (v *V4SignerAdapter) SignRequest(ctx context.Context, r *smithyhttp.Request, identity auth.Identity, props smithy.Properties) error {
+	ca, ok := identity.(*CredentialsAdapter)
+	if !ok {
+		return fmt.Errorf("unexpected identity type: %T", identity)
+	}
+
+	name, ok := smithyhttp.GetSigV4SigningName(&props)
+	if !ok {
+		return fmt.Errorf("sigv4 signing name is required")
+	}
+
+	region, ok := smithyhttp.GetSigV4SigningRegion(&props)
+	if !ok {
+		return fmt.Errorf("sigv4 signing region is required")
+	}
+
+	hash := v4.GetPayloadHash(ctx)
+	signingTime := sdk.NowTime()
+	skew := internalcontext.GetAttemptSkewContext(ctx)
+	signingTime = signingTime.Add(skew)
+	err := v.Signer.SignHTTP(ctx, ca.Credentials, r.Request, hash, name, region, signingTime, func(o *v4.SignerOptions) {
+		o.DisableURIPathEscaping, _ = smithyhttp.GetDisableDoubleEncoding(&props)
+
+		o.Logger = v.Logger
+		o.LogSigning = v.LogSigning
+	})
+	if err != nil {
+		return fmt.Errorf("sign http: %w", err)
+	}
+
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/configsources/CHANGELOG.md 🔗

@@ -0,0 +1,320 @@
+# v1.3.15 (2024-07-10.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.14 (2024-07-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.13 (2024-06-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.12 (2024-06-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.11 (2024-06-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.10 (2024-06-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.9 (2024-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.8 (2024-06-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.7 (2024-05-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.6 (2024-05-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.5 (2024-03-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.4 (2024-03-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.3 (2024-03-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.2 (2024-02-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.1 (2024-02-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.0 (2024-02-13)
+
+* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.10 (2024-01-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.9 (2023-12-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.8 (2023-12-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.7 (2023-11-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.6 (2023-11-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.5 (2023-11-28.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.4 (2023-11-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.3 (2023-11-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.2 (2023-11-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.1 (2023-11-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.0 (2023-10-31)
+
+* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.43 (2023-10-12)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.42 (2023-10-06)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.41 (2023-08-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.40 (2023-08-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.39 (2023-08-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.38 (2023-08-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.37 (2023-07-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.36 (2023-07-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.35 (2023-07-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.34 (2023-06-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.33 (2023-04-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.32 (2023-04-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.31 (2023-03-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.30 (2023-03-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.29 (2023-02-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.28 (2023-02-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.27 (2022-12-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.26 (2022-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.25 (2022-10-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.24 (2022-10-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.23 (2022-09-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.22 (2022-09-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.21 (2022-09-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.20 (2022-08-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.19 (2022-08-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.18 (2022-08-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.17 (2022-08-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.16 (2022-08-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.15 (2022-08-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.14 (2022-07-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.13 (2022-06-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.12 (2022-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.11 (2022-05-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.10 (2022-04-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.9 (2022-03-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.8 (2022-03-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.7 (2022-03-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.6 (2022-03-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.5 (2022-02-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.4 (2022-01-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.3 (2022-01-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.2 (2021-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.1 (2021-11-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.0 (2021-11-06)
+
+* **Feature**: The SDK now supports configuration of FIPS and DualStack endpoints using environment variables, shared configuration, or programmatically.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.0.7 (2021-10-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.0.6 (2021-10-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.0.5 (2021-09-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.0.4 (2021-08-27)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.0.3 (2021-08-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.0.2 (2021-08-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.0.1 (2021-07-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.0.0 (2021-06-25)
+
+* **Release**: Release new modules
+* **Dependency Update**: Updated to the latest SDK module versions
+

vendor/github.com/aws/aws-sdk-go-v2/internal/configsources/LICENSE.txt 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/internal/configsources/config.go 🔗

@@ -0,0 +1,65 @@
+package configsources
+
+import (
+	"context"
+	"github.com/aws/aws-sdk-go-v2/aws"
+)
+
+// EnableEndpointDiscoveryProvider is an interface for retrieving external configuration value
+// for Enable Endpoint Discovery
+type EnableEndpointDiscoveryProvider interface {
+	GetEnableEndpointDiscovery(ctx context.Context) (value aws.EndpointDiscoveryEnableState, found bool, err error)
+}
+
+// ResolveEnableEndpointDiscovery extracts the first instance of an EnableEndpointDiscoveryProvider from the config slice.
+// Additionally returns an aws.EndpointDiscoveryEnableState to indicate if the value was found in provided configs,
+// and error if one is encountered.
+func ResolveEnableEndpointDiscovery(ctx context.Context, configs []interface{}) (value aws.EndpointDiscoveryEnableState, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(EnableEndpointDiscoveryProvider); ok {
+			value, found, err = p.GetEnableEndpointDiscovery(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// UseDualStackEndpointProvider is an interface for retrieving external configuration values for UseDualStackEndpoint
+type UseDualStackEndpointProvider interface {
+	GetUseDualStackEndpoint(context.Context) (value aws.DualStackEndpointState, found bool, err error)
+}
+
+// ResolveUseDualStackEndpoint extracts the first instance of a UseDualStackEndpointProvider from the config slice.
+// Additionally returns a boolean to indicate if the value was found in provided configs, and error if one is encountered.
+func ResolveUseDualStackEndpoint(ctx context.Context, configs []interface{}) (value aws.DualStackEndpointState, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(UseDualStackEndpointProvider); ok {
+			value, found, err = p.GetUseDualStackEndpoint(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// UseFIPSEndpointProvider is an interface for retrieving external configuration values for UseFIPSEndpoint
+type UseFIPSEndpointProvider interface {
+	GetUseFIPSEndpoint(context.Context) (value aws.FIPSEndpointState, found bool, err error)
+}
+
+// ResolveUseFIPSEndpoint extracts the first instance of a UseFIPSEndpointProvider from the config slice.
+// Additionally, returns a boolean to indicate if the value was found in provided configs, and error if one is encountered.
+func ResolveUseFIPSEndpoint(ctx context.Context, configs []interface{}) (value aws.FIPSEndpointState, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(UseFIPSEndpointProvider); ok {
+			value, found, err = p.GetUseFIPSEndpoint(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/configsources/endpoints.go 🔗

@@ -0,0 +1,57 @@
+package configsources
+
+import (
+	"context"
+)
+
+// ServiceBaseEndpointProvider is needed to search for all providers
+// that provide a configured service endpoint
+type ServiceBaseEndpointProvider interface {
+	GetServiceBaseEndpoint(ctx context.Context, sdkID string) (string, bool, error)
+}
+
+// IgnoreConfiguredEndpointsProvider is needed to search for all providers
+// that provide a flag to disable configured endpoints.
+//
+// Currently duplicated from github.com/aws/aws-sdk-go-v2/config because
+// service packages cannot import github.com/aws/aws-sdk-go-v2/config
+// due to result import cycle error.
+type IgnoreConfiguredEndpointsProvider interface {
+	GetIgnoreConfiguredEndpoints(ctx context.Context) (bool, bool, error)
+}
+
+// GetIgnoreConfiguredEndpoints is used in knowing when to disable configured
+// endpoints feature.
+//
+// Currently duplicated from github.com/aws/aws-sdk-go-v2/config because
+// service packages cannot import github.com/aws/aws-sdk-go-v2/config
+// due to result import cycle error.
+func GetIgnoreConfiguredEndpoints(ctx context.Context, configs []interface{}) (value bool, found bool, err error) {
+	for _, cfg := range configs {
+		if p, ok := cfg.(IgnoreConfiguredEndpointsProvider); ok {
+			value, found, err = p.GetIgnoreConfiguredEndpoints(ctx)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}
+
+// ResolveServiceBaseEndpoint is used to retrieve service endpoints from configured sources
+// while allowing for configured endpoints to be disabled
+func ResolveServiceBaseEndpoint(ctx context.Context, sdkID string, configs []interface{}) (value string, found bool, err error) {
+	if val, found, _ := GetIgnoreConfiguredEndpoints(ctx, configs); found && val {
+		return "", false, nil
+	}
+
+	for _, cs := range configs {
+		if p, ok := cs.(ServiceBaseEndpointProvider); ok {
+			value, found, err = p.GetServiceBaseEndpoint(context.Background(), sdkID)
+			if err != nil || found {
+				break
+			}
+		}
+	}
+	return
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/context/context.go 🔗

@@ -0,0 +1,52 @@
+package context
+
+import (
+	"context"
+	"time"
+
+	"github.com/aws/smithy-go/middleware"
+)
+
+type s3BackendKey struct{}
+type checksumInputAlgorithmKey struct{}
+type clockSkew struct{}
+
+const (
+	// S3BackendS3Express identifies the S3Express backend
+	S3BackendS3Express = "S3Express"
+)
+
+// SetS3Backend stores the resolved endpoint backend within the request
+// context, which is required for a variety of custom S3 behaviors.
+func SetS3Backend(ctx context.Context, typ string) context.Context {
+	return middleware.WithStackValue(ctx, s3BackendKey{}, typ)
+}
+
+// GetS3Backend retrieves the stored endpoint backend within the context.
+func GetS3Backend(ctx context.Context) string {
+	v, _ := middleware.GetStackValue(ctx, s3BackendKey{}).(string)
+	return v
+}
+
+// SetChecksumInputAlgorithm sets the request checksum algorithm on the
+// context.
+func SetChecksumInputAlgorithm(ctx context.Context, value string) context.Context {
+	return middleware.WithStackValue(ctx, checksumInputAlgorithmKey{}, value)
+}
+
+// GetChecksumInputAlgorithm returns the checksum algorithm from the context.
+func GetChecksumInputAlgorithm(ctx context.Context) string {
+	v, _ := middleware.GetStackValue(ctx, checksumInputAlgorithmKey{}).(string)
+	return v
+}
+
+// SetAttemptSkewContext sets the clock skew value on the context
+func SetAttemptSkewContext(ctx context.Context, v time.Duration) context.Context {
+	return middleware.WithStackValue(ctx, clockSkew{}, v)
+}
+
+// GetAttemptSkewContext gets the clock skew value from the context
+func GetAttemptSkewContext(ctx context.Context) time.Duration {
+	x, _ := middleware.GetStackValue(ctx, clockSkew{}).(time.Duration)
+	return x
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn/arn.go 🔗

@@ -0,0 +1,94 @@
+package awsrulesfn
+
+import (
+	"strings"
+)
+
+// ARN provides AWS ARN components broken out into a data structure.
+type ARN struct {
+	Partition  string
+	Service    string
+	Region     string
+	AccountId  string
+	ResourceId OptionalStringSlice
+}
+
+const (
+	arnDelimiters      = ":"
+	resourceDelimiters = "/:"
+	arnSections        = 6
+	arnPrefix          = "arn:"
+
+	// zero-indexed
+	sectionPartition = 1
+	sectionService   = 2
+	sectionRegion    = 3
+	sectionAccountID = 4
+	sectionResource  = 5
+)
+
+// ParseARN returns an [ARN] value parsed from the input string provided. If
+// the ARN cannot be parsed, nil will be returned.
+func ParseARN(input string) *ARN {
+	if !strings.HasPrefix(input, arnPrefix) {
+		return nil
+	}
+
+	sections := strings.SplitN(input, arnDelimiters, arnSections)
+	if numSections := len(sections); numSections != arnSections {
+		return nil
+	}
+
+	if sections[sectionPartition] == "" {
+		return nil
+	}
+	if sections[sectionService] == "" {
+		return nil
+	}
+	if sections[sectionResource] == "" {
+		return nil
+	}
+
+	return &ARN{
+		Partition:  sections[sectionPartition],
+		Service:    sections[sectionService],
+		Region:     sections[sectionRegion],
+		AccountId:  sections[sectionAccountID],
+		ResourceId: splitResource(sections[sectionResource]),
+	}
+}
+
+// splitResource splits the resource components by the ARN resource delimiters.
+func splitResource(v string) []string {
+	var parts []string
+	var offset int
+
+	for offset <= len(v) {
+		idx := strings.IndexAny(v[offset:], "/:")
+		if idx < 0 {
+			parts = append(parts, v[offset:])
+			break
+		}
+		parts = append(parts, v[offset:idx+offset])
+		offset += idx + 1
+	}
+
+	return parts
+}
+
+// OptionalStringSlice provides a helper to safely get the index of a string
+// slice that may be out of bounds. Returns pointer to string if index is
+// valid. Otherwise returns nil.
+type OptionalStringSlice []string
+
+// Get returns a string pointer of the string at index i if the index is valid.
+// Otherwise returns nil.
+func (s OptionalStringSlice) Get(i int) *string {
+	if i < 0 || i >= len(s) {
+		return nil
+	}
+
+	v := s[i]
+	return &v
+}
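The parser above relies on the fixed six-section, colon-delimited ARN layout (`arn:partition:service:region:account-id:resource`). A minimal standalone sketch of that split step, with a hypothetical helper name (`parseSections`) rather than the vendored API:

```go
package main

import (
	"fmt"
	"strings"
)

// parseSections splits an ARN into its six colon-delimited sections,
// as ParseARN does before validating the partition, service, and
// resource sections are non-empty. SplitN with n=6 keeps any ':' inside
// the resource section intact.
func parseSections(arn string) ([]string, bool) {
	if !strings.HasPrefix(arn, "arn:") {
		return nil, false
	}
	sections := strings.SplitN(arn, ":", 6)
	if len(sections) != 6 {
		return nil, false
	}
	return sections, true
}

func main() {
	s, ok := parseSections("arn:aws:s3:us-east-1:123456789012:my-bucket/key")
	// sections (zero-indexed): 1=partition, 2=service, 3=region, 5=resource
	fmt.Println(ok, s[1], s[2], s[5])
}
```

Note how `SplitN(..., 6)` is what lets the resource section itself contain ':' or '/', which `splitResource` then subdivides separately.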

vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn/host.go 🔗

@@ -0,0 +1,51 @@
+package awsrulesfn
+
+import (
+	"net"
+	"strings"
+
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// IsVirtualHostableS3Bucket returns if the input is a DNS compatible bucket
+// name and can be used with Amazon S3 virtual hosted style addressing. Similar
+// to [rulesfn.IsValidHostLabel] with the added restriction that the length of label
+// must be [3:63] characters long, all lowercase, and not formatted as an IP
+// address.
+func IsVirtualHostableS3Bucket(input string, allowSubDomains bool) bool {
+	// input should not be formatted as an IP address
+	// NOTE: this will technically trip up on IPv6 hosts with zone IDs, but
+	// validation further down will catch that anyway (it's guaranteed to have
+	// unfriendly characters % and : if that's the case)
+	if net.ParseIP(input) != nil {
+		return false
+	}
+
+	var labels []string
+	if allowSubDomains {
+		labels = strings.Split(input, ".")
+	} else {
+		labels = []string{input}
+	}
+
+	for _, label := range labels {
+		// validate special length constraints
+		if l := len(label); l < 3 || l > 63 {
+			return false
+		}
+
+		// Validate no capital letters
+		for _, r := range label {
+			if r >= 'A' && r <= 'Z' {
+				return false
+			}
+		}
+
+		// Validate valid host label
+		if !smithyhttp.ValidHostLabel(label) {
+			return false
+		}
+	}
+
+	return true
+}
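A standalone sketch of the same bucket check, assuming subdomains are allowed and substituting a simple regex for `smithyhttp.ValidHostLabel` (the helper name and the regex are illustrative, not the SDK's):

```go
package main

import (
	"fmt"
	"net"
	"regexp"
	"strings"
)

// labelRe approximates a valid lowercase DNS host label: alphanumerics and
// hyphens, no leading or trailing hyphen. Excluding A-Z also enforces the
// all-lowercase rule from the vendored code.
var labelRe = regexp.MustCompile(`^[a-z0-9]([a-z0-9-]*[a-z0-9])?$`)

// isVirtualHostableBucket sketches IsVirtualHostableS3Bucket: the name must
// not be an IP address, and every dot-separated label must be 3-63 chars
// and a valid lowercase host label.
func isVirtualHostableBucket(name string) bool {
	if net.ParseIP(name) != nil {
		return false
	}
	for _, label := range strings.Split(name, ".") {
		if l := len(label); l < 3 || l > 63 {
			return false
		}
		if !labelRe.MatchString(label) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(isVirtualHostableBucket("my-bucket"), isVirtualHostableBucket("127.0.0.1"))
}
```

These restrictions exist because a virtual-hosted-style bucket name becomes a DNS subdomain of the endpoint (e.g. `my-bucket.s3.amazonaws.com`), so anything that is not a clean DNS label, or that could be confused with an IP address, has to fall back to path-style addressing.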

vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn/partition.go 🔗

@@ -0,0 +1,76 @@
+package awsrulesfn
+
+import "regexp"
+
+// Partition provides the metadata describing an AWS partition.
+type Partition struct {
+	ID            string                     `json:"id"`
+	Regions       map[string]RegionOverrides `json:"regions"`
+	RegionRegex   string                     `json:"regionRegex"`
+	DefaultConfig PartitionConfig            `json:"outputs"`
+}
+
+// PartitionConfig provides the endpoint metadata for an AWS region or partition.
+type PartitionConfig struct {
+	Name                 string `json:"name"`
+	DnsSuffix            string `json:"dnsSuffix"`
+	DualStackDnsSuffix   string `json:"dualStackDnsSuffix"`
+	SupportsFIPS         bool   `json:"supportsFIPS"`
+	SupportsDualStack    bool   `json:"supportsDualStack"`
+	ImplicitGlobalRegion string `json:"implicitGlobalRegion"`
+}
+
+type RegionOverrides struct {
+	Name               *string `json:"name"`
+	DnsSuffix          *string `json:"dnsSuffix"`
+	DualStackDnsSuffix *string `json:"dualStackDnsSuffix"`
+	SupportsFIPS       *bool   `json:"supportsFIPS"`
+	SupportsDualStack  *bool   `json:"supportsDualStack"`
+}
+
+const defaultPartition = "aws"
+
+func getPartition(partitions []Partition, region string) *PartitionConfig {
+	for _, partition := range partitions {
+		if v, ok := partition.Regions[region]; ok {
+			p := mergeOverrides(partition.DefaultConfig, v)
+			return &p
+		}
+	}
+
+	for _, partition := range partitions {
+		regionRegex := regexp.MustCompile(partition.RegionRegex)
+		if regionRegex.MatchString(region) {
+			v := partition.DefaultConfig
+			return &v
+		}
+	}
+
+	for _, partition := range partitions {
+		if partition.ID == defaultPartition {
+			v := partition.DefaultConfig
+			return &v
+		}
+	}
+
+	return nil
+}
+
+func mergeOverrides(into PartitionConfig, from RegionOverrides) PartitionConfig {
+	if from.Name != nil {
+		into.Name = *from.Name
+	}
+	if from.DnsSuffix != nil {
+		into.DnsSuffix = *from.DnsSuffix
+	}
+	if from.DualStackDnsSuffix != nil {
+		into.DualStackDnsSuffix = *from.DualStackDnsSuffix
+	}
+	if from.SupportsFIPS != nil {
+		into.SupportsFIPS = *from.SupportsFIPS
+	}
+	if from.SupportsDualStack != nil {
+		into.SupportsDualStack = *from.SupportsDualStack
+	}
+	return into
+}
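`getPartition` tries three matches in order: an exact region entry, then each partition's region regex, then the `aws` default partition. A minimal sketch of that fallback chain, with simplified illustrative types (the real code also merges `RegionOverrides` into the partition's default config):

```go
package main

import (
	"fmt"
	"regexp"
)

// partition is a pared-down stand-in for the vendored Partition type.
type partition struct {
	id      string
	regions map[string]bool
	regex   *regexp.Regexp
}

// lookup returns the ID of the matching partition: exact region entry first,
// then region-regex match, then the "aws" default partition, then "".
func lookup(parts []partition, region string) string {
	for _, p := range parts {
		if p.regions[region] {
			return p.id
		}
	}
	for _, p := range parts {
		if p.regex.MatchString(region) {
			return p.id
		}
	}
	for _, p := range parts {
		if p.id == "aws" {
			return p.id
		}
	}
	return ""
}

var parts = []partition{
	{id: "aws", regions: map[string]bool{"us-east-1": true}, regex: regexp.MustCompile(`^(us|eu)-\w+-\d+$`)},
	{id: "aws-cn", regions: map[string]bool{"cn-north-1": true}, regex: regexp.MustCompile(`^cn-\w+-\d+$`)},
}

func main() {
	// A made-up region matches no regex, so it falls back to "aws".
	fmt.Println(lookup(parts, "us-east-1"), lookup(parts, "cn-northwest-1"), lookup(parts, "zz-foo-1"))
}
```

The regex pass is what lets a newly launched region resolve correctly before the generated partition data lists it explicitly; the default-partition pass is the last-resort guess.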

vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn/partitions.go 🔗

@@ -0,0 +1,403 @@
+// Code generated by endpoint/awsrulesfn/internal/partition. DO NOT EDIT.
+
+package awsrulesfn
+
+// GetPartition returns an AWS [Partition] for the region provided. If the
+// partition cannot be determined nil will be returned.
+func GetPartition(region string) *PartitionConfig {
+	return getPartition(partitions, region)
+}
+
+var partitions = []Partition{
+	{
+		ID:          "aws",
+		RegionRegex: "^(us|eu|ap|sa|ca|me|af|il)\\-\\w+\\-\\d+$",
+		DefaultConfig: PartitionConfig{
+			Name:                 "aws",
+			DnsSuffix:            "amazonaws.com",
+			DualStackDnsSuffix:   "api.aws",
+			SupportsFIPS:         true,
+			SupportsDualStack:    true,
+			ImplicitGlobalRegion: "us-east-1",
+		},
+		Regions: map[string]RegionOverrides{
+			"af-south-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"ap-east-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"ap-northeast-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"ap-northeast-2": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"ap-northeast-3": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"ap-south-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"ap-south-2": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"ap-southeast-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"ap-southeast-2": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"ap-southeast-3": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"ap-southeast-4": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"aws-global": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"ca-central-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"ca-west-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"eu-central-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"eu-central-2": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"eu-north-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"eu-south-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"eu-south-2": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"eu-west-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"eu-west-2": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"eu-west-3": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"il-central-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"me-central-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"me-south-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"sa-east-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"us-east-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"us-east-2": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"us-west-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"us-west-2": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+		},
+	},
+	{
+		ID:          "aws-cn",
+		RegionRegex: "^cn\\-\\w+\\-\\d+$",
+		DefaultConfig: PartitionConfig{
+			Name:                 "aws-cn",
+			DnsSuffix:            "amazonaws.com.cn",
+			DualStackDnsSuffix:   "api.amazonwebservices.com.cn",
+			SupportsFIPS:         true,
+			SupportsDualStack:    true,
+			ImplicitGlobalRegion: "cn-northwest-1",
+		},
+		Regions: map[string]RegionOverrides{
+			"aws-cn-global": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"cn-north-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"cn-northwest-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+		},
+	},
+	{
+		ID:          "aws-us-gov",
+		RegionRegex: "^us\\-gov\\-\\w+\\-\\d+$",
+		DefaultConfig: PartitionConfig{
+			Name:                 "aws-us-gov",
+			DnsSuffix:            "amazonaws.com",
+			DualStackDnsSuffix:   "api.aws",
+			SupportsFIPS:         true,
+			SupportsDualStack:    true,
+			ImplicitGlobalRegion: "us-gov-west-1",
+		},
+		Regions: map[string]RegionOverrides{
+			"aws-us-gov-global": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"us-gov-east-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"us-gov-west-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+		},
+	},
+	{
+		ID:          "aws-iso",
+		RegionRegex: "^us\\-iso\\-\\w+\\-\\d+$",
+		DefaultConfig: PartitionConfig{
+			Name:                 "aws-iso",
+			DnsSuffix:            "c2s.ic.gov",
+			DualStackDnsSuffix:   "c2s.ic.gov",
+			SupportsFIPS:         true,
+			SupportsDualStack:    false,
+			ImplicitGlobalRegion: "us-iso-east-1",
+		},
+		Regions: map[string]RegionOverrides{
+			"aws-iso-global": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"us-iso-east-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"us-iso-west-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+		},
+	},
+	{
+		ID:          "aws-iso-b",
+		RegionRegex: "^us\\-isob\\-\\w+\\-\\d+$",
+		DefaultConfig: PartitionConfig{
+			Name:                 "aws-iso-b",
+			DnsSuffix:            "sc2s.sgov.gov",
+			DualStackDnsSuffix:   "sc2s.sgov.gov",
+			SupportsFIPS:         true,
+			SupportsDualStack:    false,
+			ImplicitGlobalRegion: "us-isob-east-1",
+		},
+		Regions: map[string]RegionOverrides{
+			"aws-iso-b-global": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+			"us-isob-east-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+		},
+	},
+	{
+		ID:          "aws-iso-e",
+		RegionRegex: "^eu\\-isoe\\-\\w+\\-\\d+$",
+		DefaultConfig: PartitionConfig{
+			Name:                 "aws-iso-e",
+			DnsSuffix:            "cloud.adc-e.uk",
+			DualStackDnsSuffix:   "cloud.adc-e.uk",
+			SupportsFIPS:         true,
+			SupportsDualStack:    false,
+			ImplicitGlobalRegion: "eu-isoe-west-1",
+		},
+		Regions: map[string]RegionOverrides{
+			"eu-isoe-west-1": {
+				Name:               nil,
+				DnsSuffix:          nil,
+				DualStackDnsSuffix: nil,
+				SupportsFIPS:       nil,
+				SupportsDualStack:  nil,
+			},
+		},
+	},
+	{
+		ID:          "aws-iso-f",
+		RegionRegex: "^us\\-isof\\-\\w+\\-\\d+$",
+		DefaultConfig: PartitionConfig{
+			Name:                 "aws-iso-f",
+			DnsSuffix:            "csp.hci.ic.gov",
+			DualStackDnsSuffix:   "csp.hci.ic.gov",
+			SupportsFIPS:         true,
+			SupportsDualStack:    false,
+			ImplicitGlobalRegion: "us-isof-south-1",
+		},
+		Regions: map[string]RegionOverrides{},
+	},
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn/partitions.json 🔗

@@ -0,0 +1,220 @@
+{
+  "partitions" : [ {
+    "id" : "aws",
+    "outputs" : {
+      "dnsSuffix" : "amazonaws.com",
+      "dualStackDnsSuffix" : "api.aws",
+      "implicitGlobalRegion" : "us-east-1",
+      "name" : "aws",
+      "supportsDualStack" : true,
+      "supportsFIPS" : true
+    },
+    "regionRegex" : "^(us|eu|ap|sa|ca|me|af|il)\\-\\w+\\-\\d+$",
+    "regions" : {
+      "af-south-1" : {
+        "description" : "Africa (Cape Town)"
+      },
+      "ap-east-1" : {
+        "description" : "Asia Pacific (Hong Kong)"
+      },
+      "ap-northeast-1" : {
+        "description" : "Asia Pacific (Tokyo)"
+      },
+      "ap-northeast-2" : {
+        "description" : "Asia Pacific (Seoul)"
+      },
+      "ap-northeast-3" : {
+        "description" : "Asia Pacific (Osaka)"
+      },
+      "ap-south-1" : {
+        "description" : "Asia Pacific (Mumbai)"
+      },
+      "ap-south-2" : {
+        "description" : "Asia Pacific (Hyderabad)"
+      },
+      "ap-southeast-1" : {
+        "description" : "Asia Pacific (Singapore)"
+      },
+      "ap-southeast-2" : {
+        "description" : "Asia Pacific (Sydney)"
+      },
+      "ap-southeast-3" : {
+        "description" : "Asia Pacific (Jakarta)"
+      },
+      "ap-southeast-4" : {
+        "description" : "Asia Pacific (Melbourne)"
+      },
+      "aws-global" : {
+        "description" : "AWS Standard global region"
+      },
+      "ca-central-1" : {
+        "description" : "Canada (Central)"
+      },
+      "ca-west-1" : {
+        "description" : "Canada West (Calgary)"
+      },
+      "eu-central-1" : {
+        "description" : "Europe (Frankfurt)"
+      },
+      "eu-central-2" : {
+        "description" : "Europe (Zurich)"
+      },
+      "eu-north-1" : {
+        "description" : "Europe (Stockholm)"
+      },
+      "eu-south-1" : {
+        "description" : "Europe (Milan)"
+      },
+      "eu-south-2" : {
+        "description" : "Europe (Spain)"
+      },
+      "eu-west-1" : {
+        "description" : "Europe (Ireland)"
+      },
+      "eu-west-2" : {
+        "description" : "Europe (London)"
+      },
+      "eu-west-3" : {
+        "description" : "Europe (Paris)"
+      },
+      "il-central-1" : {
+        "description" : "Israel (Tel Aviv)"
+      },
+      "me-central-1" : {
+        "description" : "Middle East (UAE)"
+      },
+      "me-south-1" : {
+        "description" : "Middle East (Bahrain)"
+      },
+      "sa-east-1" : {
+        "description" : "South America (Sao Paulo)"
+      },
+      "us-east-1" : {
+        "description" : "US East (N. Virginia)"
+      },
+      "us-east-2" : {
+        "description" : "US East (Ohio)"
+      },
+      "us-west-1" : {
+        "description" : "US West (N. California)"
+      },
+      "us-west-2" : {
+        "description" : "US West (Oregon)"
+      }
+    }
+  }, {
+    "id" : "aws-cn",
+    "outputs" : {
+      "dnsSuffix" : "amazonaws.com.cn",
+      "dualStackDnsSuffix" : "api.amazonwebservices.com.cn",
+      "implicitGlobalRegion" : "cn-northwest-1",
+      "name" : "aws-cn",
+      "supportsDualStack" : true,
+      "supportsFIPS" : true
+    },
+    "regionRegex" : "^cn\\-\\w+\\-\\d+$",
+    "regions" : {
+      "aws-cn-global" : {
+        "description" : "AWS China global region"
+      },
+      "cn-north-1" : {
+        "description" : "China (Beijing)"
+      },
+      "cn-northwest-1" : {
+        "description" : "China (Ningxia)"
+      }
+    }
+  }, {
+    "id" : "aws-us-gov",
+    "outputs" : {
+      "dnsSuffix" : "amazonaws.com",
+      "dualStackDnsSuffix" : "api.aws",
+      "implicitGlobalRegion" : "us-gov-west-1",
+      "name" : "aws-us-gov",
+      "supportsDualStack" : true,
+      "supportsFIPS" : true
+    },
+    "regionRegex" : "^us\\-gov\\-\\w+\\-\\d+$",
+    "regions" : {
+      "aws-us-gov-global" : {
+        "description" : "AWS GovCloud (US) global region"
+      },
+      "us-gov-east-1" : {
+        "description" : "AWS GovCloud (US-East)"
+      },
+      "us-gov-west-1" : {
+        "description" : "AWS GovCloud (US-West)"
+      }
+    }
+  }, {
+    "id" : "aws-iso",
+    "outputs" : {
+      "dnsSuffix" : "c2s.ic.gov",
+      "dualStackDnsSuffix" : "c2s.ic.gov",
+      "implicitGlobalRegion" : "us-iso-east-1",
+      "name" : "aws-iso",
+      "supportsDualStack" : false,
+      "supportsFIPS" : true
+    },
+    "regionRegex" : "^us\\-iso\\-\\w+\\-\\d+$",
+    "regions" : {
+      "aws-iso-global" : {
+        "description" : "AWS ISO (US) global region"
+      },
+      "us-iso-east-1" : {
+        "description" : "US ISO East"
+      },
+      "us-iso-west-1" : {
+        "description" : "US ISO WEST"
+      }
+    }
+  }, {
+    "id" : "aws-iso-b",
+    "outputs" : {
+      "dnsSuffix" : "sc2s.sgov.gov",
+      "dualStackDnsSuffix" : "sc2s.sgov.gov",
+      "implicitGlobalRegion" : "us-isob-east-1",
+      "name" : "aws-iso-b",
+      "supportsDualStack" : false,
+      "supportsFIPS" : true
+    },
+    "regionRegex" : "^us\\-isob\\-\\w+\\-\\d+$",
+    "regions" : {
+      "aws-iso-b-global" : {
+        "description" : "AWS ISOB (US) global region"
+      },
+      "us-isob-east-1" : {
+        "description" : "US ISOB East (Ohio)"
+      }
+    }
+  }, {
+    "id" : "aws-iso-e",
+    "outputs" : {
+      "dnsSuffix" : "cloud.adc-e.uk",
+      "dualStackDnsSuffix" : "cloud.adc-e.uk",
+      "implicitGlobalRegion" : "eu-isoe-west-1",
+      "name" : "aws-iso-e",
+      "supportsDualStack" : false,
+      "supportsFIPS" : true
+    },
+    "regionRegex" : "^eu\\-isoe\\-\\w+\\-\\d+$",
+    "regions" : {
+      "eu-isoe-west-1" : {
+        "description" : "EU ISOE West"
+      }
+    }
+  }, {
+    "id" : "aws-iso-f",
+    "outputs" : {
+      "dnsSuffix" : "csp.hci.ic.gov",
+      "dualStackDnsSuffix" : "csp.hci.ic.gov",
+      "implicitGlobalRegion" : "us-isof-south-1",
+      "name" : "aws-iso-f",
+      "supportsDualStack" : false,
+      "supportsFIPS" : true
+    },
+    "regionRegex" : "^us\\-isof\\-\\w+\\-\\d+$",
+    "regions" : { }
+  } ],
+  "version" : "1.1"
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/endpoints.go 🔗

@@ -0,0 +1,201 @@
+package endpoints
+
+import (
+	"fmt"
+	"regexp"
+	"strings"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+)
+
+const (
+	defaultProtocol = "https"
+	defaultSigner   = "v4"
+)
+
+var (
+	protocolPriority = []string{"https", "http"}
+	signerPriority   = []string{"v4"}
+)
+
+// Options provide configuration needed to direct how endpoints are resolved.
+type Options struct {
+	// Disable usage of HTTPS (TLS / SSL)
+	DisableHTTPS bool
+}
+
+// Partitions is a slice of partition
+type Partitions []Partition
+
+// ResolveEndpoint resolves a service endpoint for the given region and options.
+func (ps Partitions) ResolveEndpoint(region string, opts Options) (aws.Endpoint, error) {
+	if len(ps) == 0 {
+		return aws.Endpoint{}, fmt.Errorf("no partitions found")
+	}
+
+	for i := 0; i < len(ps); i++ {
+		if !ps[i].canResolveEndpoint(region) {
+			continue
+		}
+
+		return ps[i].ResolveEndpoint(region, opts)
+	}
+
+	// fallback to first partition format to use when resolving the endpoint.
+	return ps[0].ResolveEndpoint(region, opts)
+}
+
+// Partition is an AWS partition description for a service and its region endpoints.
+type Partition struct {
+	ID                string
+	RegionRegex       *regexp.Regexp
+	PartitionEndpoint string
+	IsRegionalized    bool
+	Defaults          Endpoint
+	Endpoints         Endpoints
+}
+
+func (p Partition) canResolveEndpoint(region string) bool {
+	_, ok := p.Endpoints[region]
+	return ok || p.RegionRegex.MatchString(region)
+}
+
+// ResolveEndpoint resolves a service endpoint for the given region and options.
+func (p Partition) ResolveEndpoint(region string, options Options) (resolved aws.Endpoint, err error) {
+	if len(region) == 0 && len(p.PartitionEndpoint) != 0 {
+		region = p.PartitionEndpoint
+	}
+
+	e, _ := p.endpointForRegion(region)
+
+	return e.resolve(p.ID, region, p.Defaults, options), nil
+}
+
+func (p Partition) endpointForRegion(region string) (Endpoint, bool) {
+	if e, ok := p.Endpoints[region]; ok {
+		return e, true
+	}
+
+	if !p.IsRegionalized {
+		return p.Endpoints[p.PartitionEndpoint], region == p.PartitionEndpoint
+	}
+
+	// Unable to find any matching endpoint, return
+	// blank that will be used for generic endpoint creation.
+	return Endpoint{}, false
+}
+
+// Endpoints is a map of service config regions to endpoints
+type Endpoints map[string]Endpoint
+
+// CredentialScope is the credential scope of a region and service
+type CredentialScope struct {
+	Region  string
+	Service string
+}
+
+// Endpoint is a service endpoint description
+type Endpoint struct {
+	// True if the endpoint cannot be resolved for this partition/region/service
+	Unresolveable aws.Ternary
+
+	Hostname  string
+	Protocols []string
+
+	CredentialScope CredentialScope
+
+	SignatureVersions []string `json:"signatureVersions"`
+}
+
+func (e Endpoint) resolve(partition, region string, def Endpoint, options Options) aws.Endpoint {
+	var merged Endpoint
+	merged.mergeIn(def)
+	merged.mergeIn(e)
+	e = merged
+
+	var u string
+	if e.Unresolveable != aws.TrueTernary {
+		// Only attempt to resolve the endpoint if it can be resolved.
+		hostname := strings.Replace(e.Hostname, "{region}", region, 1)
+
+		scheme := getEndpointScheme(e.Protocols, options.DisableHTTPS)
+		u = scheme + "://" + hostname
+	}
+
+	signingRegion := e.CredentialScope.Region
+	if len(signingRegion) == 0 {
+		signingRegion = region
+	}
+	signingName := e.CredentialScope.Service
+
+	return aws.Endpoint{
+		URL:           u,
+		PartitionID:   partition,
+		SigningRegion: signingRegion,
+		SigningName:   signingName,
+		SigningMethod: getByPriority(e.SignatureVersions, signerPriority, defaultSigner),
+	}
+}
+
+func (e *Endpoint) mergeIn(other Endpoint) {
+	if other.Unresolveable != aws.UnknownTernary {
+		e.Unresolveable = other.Unresolveable
+	}
+	if len(other.Hostname) > 0 {
+		e.Hostname = other.Hostname
+	}
+	if len(other.Protocols) > 0 {
+		e.Protocols = other.Protocols
+	}
+	if len(other.CredentialScope.Region) > 0 {
+		e.CredentialScope.Region = other.CredentialScope.Region
+	}
+	if len(other.CredentialScope.Service) > 0 {
+		e.CredentialScope.Service = other.CredentialScope.Service
+	}
+	if len(other.SignatureVersions) > 0 {
+		e.SignatureVersions = other.SignatureVersions
+	}
+}
+
+func getEndpointScheme(protocols []string, disableHTTPS bool) string {
+	if disableHTTPS {
+		return "http"
+	}
+
+	return getByPriority(protocols, protocolPriority, defaultProtocol)
+}
+
+func getByPriority(s []string, p []string, def string) string {
+	if len(s) == 0 {
+		return def
+	}
+
+	for i := 0; i < len(p); i++ {
+		for j := 0; j < len(s); j++ {
+			if s[j] == p[i] {
+				return s[j]
+			}
+		}
+	}
+
+	return s[0]
+}
+
+// MapFIPSRegion extracts the intrinsic AWS region from one that may have an
+// embedded FIPS microformat.
+func MapFIPSRegion(region string) string {
+	const fipsInfix = "-fips-"
+	const fipsPrefix = "fips-"
+	const fipsSuffix = "-fips"
+
+	if strings.Contains(region, fipsInfix) ||
+		strings.Contains(region, fipsPrefix) ||
+		strings.Contains(region, fipsSuffix) {
+		region = strings.ReplaceAll(region, fipsInfix, "-")
+		region = strings.ReplaceAll(region, fipsPrefix, "")
+		region = strings.ReplaceAll(region, fipsSuffix, "")
+	}
+
+	return region
+}
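The `MapFIPSRegion` helper above normalizes region names that embed a FIPS microformat. A minimal standalone sketch of that behavior (logic reproduced from the vendored source; the lowercase `mapFIPSRegion` name is local to this example):

```go
package main

import (
	"fmt"
	"strings"
)

// mapFIPSRegion mirrors the vendored MapFIPSRegion helper: strip any
// embedded FIPS microformat (prefix, suffix, or infix) to recover the
// intrinsic AWS region name.
func mapFIPSRegion(region string) string {
	const fipsInfix = "-fips-"
	const fipsPrefix = "fips-"
	const fipsSuffix = "-fips"

	if strings.Contains(region, fipsInfix) ||
		strings.Contains(region, fipsPrefix) ||
		strings.Contains(region, fipsSuffix) {
		region = strings.ReplaceAll(region, fipsInfix, "-")
		region = strings.ReplaceAll(region, fipsPrefix, "")
		region = strings.ReplaceAll(region, fipsSuffix, "")
	}
	return region
}

func main() {
	fmt.Println(mapFIPSRegion("fips-us-east-1")) // prefix form -> us-east-1
	fmt.Println(mapFIPSRegion("us-west-2-fips")) // suffix form -> us-west-2
	fmt.Println(mapFIPSRegion("eu-central-1"))   // no microformat -> unchanged
}
```

This lets the resolver treat `fips-us-east-1` and `us-east-1` as the same intrinsic region while the variant system selects the FIPS hostname.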

vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/v2/CHANGELOG.md 🔗

@@ -0,0 +1,294 @@
+# v2.6.15 (2024-07-10.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.14 (2024-07-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.13 (2024-06-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.12 (2024-06-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.11 (2024-06-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.10 (2024-06-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.9 (2024-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.8 (2024-06-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.7 (2024-05-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.6 (2024-05-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.5 (2024-03-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.4 (2024-03-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.3 (2024-03-07)
+
+* **Bug Fix**: Remove dependency on go-cmp.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.2 (2024-02-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.1 (2024-02-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.6.0 (2024-02-13)
+
+* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.5.10 (2024-01-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.5.9 (2023-12-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.5.8 (2023-12-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.5.7 (2023-11-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.5.6 (2023-11-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.5.5 (2023-11-28.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.5.4 (2023-11-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.5.3 (2023-11-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.5.2 (2023-11-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.5.1 (2023-11-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.5.0 (2023-10-31)
+
+* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.37 (2023-10-12)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.36 (2023-10-06)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.35 (2023-08-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.34 (2023-08-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.33 (2023-08-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.32 (2023-08-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.31 (2023-07-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.30 (2023-07-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.29 (2023-07-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.28 (2023-06-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.27 (2023-04-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.26 (2023-04-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.25 (2023-03-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.24 (2023-03-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.23 (2023-02-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.22 (2023-02-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.21 (2022-12-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.20 (2022-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.19 (2022-10-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.18 (2022-10-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.17 (2022-09-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.16 (2022-09-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.15 (2022-09-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.14 (2022-08-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.13 (2022-08-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.12 (2022-08-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.11 (2022-08-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.10 (2022-08-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.9 (2022-08-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.8 (2022-07-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.7 (2022-06-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.6 (2022-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.5 (2022-05-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.4 (2022-04-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.3 (2022-03-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.2 (2022-03-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.1 (2022-03-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.4.0 (2022-03-08)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.3.0 (2022-02-24)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.2.0 (2022-01-14)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.1.0 (2022-01-07)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.0.2 (2021-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.0.1 (2021-11-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v2.0.0 (2021-11-06)
+
+* **Release**: Endpoint Variant Model Support
+* **Feature**: The SDK now supports configuration of FIPS and DualStack endpoints using environment variables, shared configuration, or programmatically.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+

vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/v2/LICENSE.txt 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/internal/endpoints/v2/endpoints.go 🔗

@@ -0,0 +1,302 @@
+package endpoints
+
+import (
+	"fmt"
+	"github.com/aws/smithy-go/logging"
+	"regexp"
+	"strings"
+
+	"github.com/aws/aws-sdk-go-v2/aws"
+)
+
+// DefaultKey is a compound map key of a variant and other values.
+type DefaultKey struct {
+	Variant        EndpointVariant
+	ServiceVariant ServiceVariant
+}
+
+// EndpointKey is a compound map key of a region and associated variant value.
+type EndpointKey struct {
+	Region         string
+	Variant        EndpointVariant
+	ServiceVariant ServiceVariant
+}
+
+// EndpointVariant is a bit field to describe the endpoints attributes.
+type EndpointVariant uint64
+
+const (
+	// FIPSVariant indicates that the endpoint is FIPS capable.
+	FIPSVariant EndpointVariant = 1 << (64 - 1 - iota)
+
+	// DualStackVariant indicates that the endpoint is DualStack capable.
+	DualStackVariant
+)
+
+// ServiceVariant is a bit field to describe the service endpoint attributes.
+type ServiceVariant uint64
+
+const (
+	defaultProtocol = "https"
+	defaultSigner   = "v4"
+)
+
+var (
+	protocolPriority = []string{"https", "http"}
+	signerPriority   = []string{"v4", "s3v4"}
+)
+
+// Options provide configuration needed to direct how endpoints are resolved.
+type Options struct {
+	// Logger is a logging implementation that log events should be sent to.
+	Logger logging.Logger
+
+	// LogDeprecated indicates that deprecated endpoints should be logged to the provided logger.
+	LogDeprecated bool
+
+	// ResolvedRegion is the resolved region string. If provided (non-zero length) it takes priority
+	// over the region name passed to the ResolveEndpoint call.
+	ResolvedRegion string
+
+	// Disable usage of HTTPS (TLS / SSL)
+	DisableHTTPS bool
+
+	// Instruct the resolver to use a service endpoint that supports dual-stack.
+	// If a service does not have a dual-stack endpoint an error will be returned by the resolver.
+	UseDualStackEndpoint aws.DualStackEndpointState
+
+	// Instruct the resolver to use a service endpoint that supports FIPS.
+	// If a service does not have a FIPS endpoint an error will be returned by the resolver.
+	UseFIPSEndpoint aws.FIPSEndpointState
+
+	// ServiceVariant is a bitfield of service specified endpoint variant data.
+	ServiceVariant ServiceVariant
+}
+
+// GetEndpointVariant returns the EndpointVariant for the variant associated options.
+func (o Options) GetEndpointVariant() (v EndpointVariant) {
+	if o.UseDualStackEndpoint == aws.DualStackEndpointStateEnabled {
+		v |= DualStackVariant
+	}
+	if o.UseFIPSEndpoint == aws.FIPSEndpointStateEnabled {
+		v |= FIPSVariant
+	}
+	return v
+}
+
+// Partitions is a slice of Partition values.
+type Partitions []Partition
+
+// ResolveEndpoint resolves a service endpoint for the given region and options.
+func (ps Partitions) ResolveEndpoint(region string, opts Options) (aws.Endpoint, error) {
+	if len(ps) == 0 {
+		return aws.Endpoint{}, fmt.Errorf("no partitions found")
+	}
+
+	if opts.Logger == nil {
+		opts.Logger = logging.Nop{}
+	}
+
+	if len(opts.ResolvedRegion) > 0 {
+		region = opts.ResolvedRegion
+	}
+
+	for i := 0; i < len(ps); i++ {
+		if !ps[i].canResolveEndpoint(region, opts) {
+			continue
+		}
+
+		return ps[i].ResolveEndpoint(region, opts)
+	}
+
+	// Fall back to the first partition's format when resolving the endpoint.
+	return ps[0].ResolveEndpoint(region, opts)
+}
+
+// Partition is an AWS partition description for a service and its region endpoints.
+type Partition struct {
+	ID                string
+	RegionRegex       *regexp.Regexp
+	PartitionEndpoint string
+	IsRegionalized    bool
+	Defaults          map[DefaultKey]Endpoint
+	Endpoints         Endpoints
+}
+
+func (p Partition) canResolveEndpoint(region string, opts Options) bool {
+	_, ok := p.Endpoints[EndpointKey{
+		Region:  region,
+		Variant: opts.GetEndpointVariant(),
+	}]
+	return ok || p.RegionRegex.MatchString(region)
+}
+
+// ResolveEndpoint resolves a service endpoint for the given region and options.
+func (p Partition) ResolveEndpoint(region string, options Options) (resolved aws.Endpoint, err error) {
+	if len(region) == 0 && len(p.PartitionEndpoint) != 0 {
+		region = p.PartitionEndpoint
+	}
+
+	endpoints := p.Endpoints
+
+	variant := options.GetEndpointVariant()
+	serviceVariant := options.ServiceVariant
+
+	defaults := p.Defaults[DefaultKey{
+		Variant:        variant,
+		ServiceVariant: serviceVariant,
+	}]
+
+	return p.endpointForRegion(region, variant, serviceVariant, endpoints).resolve(p.ID, region, defaults, options)
+}
+
+func (p Partition) endpointForRegion(region string, variant EndpointVariant, serviceVariant ServiceVariant, endpoints Endpoints) Endpoint {
+	key := EndpointKey{
+		Region:  region,
+		Variant: variant,
+	}
+
+	if e, ok := endpoints[key]; ok {
+		return e
+	}
+
+	if !p.IsRegionalized {
+		return endpoints[EndpointKey{
+			Region:         p.PartitionEndpoint,
+			Variant:        variant,
+			ServiceVariant: serviceVariant,
+		}]
+	}
+
+	// No matching endpoint was found; return a blank Endpoint that
+	// will be used for generic endpoint creation.
+	return Endpoint{}
+}
+
+// Endpoints is a map of service config regions to endpoints
+type Endpoints map[EndpointKey]Endpoint
+
+// CredentialScope is the credential scope of a region and service
+type CredentialScope struct {
+	Region  string
+	Service string
+}
+
+// Endpoint is a service endpoint description
+type Endpoint struct {
+	// True if the endpoint cannot be resolved for this partition/region/service
+	Unresolveable aws.Ternary
+
+	Hostname  string
+	Protocols []string
+
+	CredentialScope CredentialScope
+
+	SignatureVersions []string
+
+	// Indicates that this endpoint is deprecated.
+	Deprecated aws.Ternary
+}
+
+// IsZero returns whether the endpoint structure is an empty (zero) value.
+func (e Endpoint) IsZero() bool {
+	switch {
+	case e.Unresolveable != aws.UnknownTernary:
+		return false
+	case len(e.Hostname) != 0:
+		return false
+	case len(e.Protocols) != 0:
+		return false
+	case e.CredentialScope != (CredentialScope{}):
+		return false
+	case len(e.SignatureVersions) != 0:
+		return false
+	}
+	return true
+}
+
+func (e Endpoint) resolve(partition, region string, def Endpoint, options Options) (aws.Endpoint, error) {
+	var merged Endpoint
+	merged.mergeIn(def)
+	merged.mergeIn(e)
+	e = merged
+
+	if e.IsZero() {
+		return aws.Endpoint{}, fmt.Errorf("unable to resolve endpoint for region: %v", region)
+	}
+
+	var u string
+	if e.Unresolveable != aws.TrueTernary {
+		// Only attempt to resolve the endpoint if it can be resolved.
+		hostname := strings.Replace(e.Hostname, "{region}", region, 1)
+
+		scheme := getEndpointScheme(e.Protocols, options.DisableHTTPS)
+		u = scheme + "://" + hostname
+	}
+
+	signingRegion := e.CredentialScope.Region
+	if len(signingRegion) == 0 {
+		signingRegion = region
+	}
+	signingName := e.CredentialScope.Service
+
+	if e.Deprecated == aws.TrueTernary && options.LogDeprecated {
+		options.Logger.Logf(logging.Warn, "endpoint identifier %q, url %q marked as deprecated", region, u)
+	}
+
+	return aws.Endpoint{
+		URL:           u,
+		PartitionID:   partition,
+		SigningRegion: signingRegion,
+		SigningName:   signingName,
+		SigningMethod: getByPriority(e.SignatureVersions, signerPriority, defaultSigner),
+	}, nil
+}
+
+func (e *Endpoint) mergeIn(other Endpoint) {
+	if other.Unresolveable != aws.UnknownTernary {
+		e.Unresolveable = other.Unresolveable
+	}
+	if len(other.Hostname) > 0 {
+		e.Hostname = other.Hostname
+	}
+	if len(other.Protocols) > 0 {
+		e.Protocols = other.Protocols
+	}
+	if len(other.CredentialScope.Region) > 0 {
+		e.CredentialScope.Region = other.CredentialScope.Region
+	}
+	if len(other.CredentialScope.Service) > 0 {
+		e.CredentialScope.Service = other.CredentialScope.Service
+	}
+	if len(other.SignatureVersions) > 0 {
+		e.SignatureVersions = other.SignatureVersions
+	}
+	if other.Deprecated != aws.UnknownTernary {
+		e.Deprecated = other.Deprecated
+	}
+}
+
+func getEndpointScheme(protocols []string, disableHTTPS bool) string {
+	if disableHTTPS {
+		return "http"
+	}
+
+	return getByPriority(protocols, protocolPriority, defaultProtocol)
+}
+
+func getByPriority(s []string, p []string, def string) string {
+	if len(s) == 0 {
+		return def
+	}
+
+	for i := 0; i < len(p); i++ {
+		for j := 0; j < len(s); j++ {
+			if s[j] == p[i] {
+				return s[j]
+			}
+		}
+	}
+
+	return s[0]
+}
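The v2 resolver keys endpoint lookups by an `EndpointVariant` bit field that `Options.GetEndpointVariant` composes from the dual-stack and FIPS settings. A minimal sketch of that composition (constants reproduced from the vendored source; `variantFor` is a local stand-in that takes booleans instead of the SDK's state enums):

```go
package main

import "fmt"

// EndpointVariant mirrors the vendored bit field: FIPS and DualStack
// flags occupy the top bits of a uint64.
type EndpointVariant uint64

const (
	// FIPSVariant indicates the endpoint is FIPS capable.
	FIPSVariant EndpointVariant = 1 << (64 - 1 - iota)

	// DualStackVariant indicates the endpoint is DualStack capable.
	DualStackVariant
)

// variantFor sketches Options.GetEndpointVariant: each enabled feature
// ORs its bit into the variant used as part of the EndpointKey lookup.
func variantFor(dualStack, fips bool) (v EndpointVariant) {
	if dualStack {
		v |= DualStackVariant
	}
	if fips {
		v |= FIPSVariant
	}
	return v
}

func main() {
	v := variantFor(true, true)
	fmt.Println(v&FIPSVariant != 0)      // FIPS bit set
	fmt.Println(v&DualStackVariant != 0) // DualStack bit set
	fmt.Println(variantFor(false, false)) // plain endpoint -> zero variant
}
```

Because the variant participates in the `EndpointKey` map key, a region can carry distinct hostnames for its plain, FIPS, dual-stack, and combined endpoints without any string munging at lookup time.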

vendor/github.com/aws/aws-sdk-go-v2/internal/ini/CHANGELOG.md 🔗

@@ -0,0 +1,271 @@
+# v1.8.0 (2024-02-13)
+
+* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+
+# v1.7.3 (2024-01-22)
+
+* **Bug Fix**: Remove invalid escaping of shared config values. All values in the shared config file will now be interpreted literally, save for fully-quoted strings which are unwrapped for legacy reasons.
+
+# v1.7.2 (2023-12-08)
+
+* **Bug Fix**: Correct loading of [services *] sections into shared config.
+
+# v1.7.1 (2023-11-16)
+
+* **Bug Fix**: Fix recognition of trailing comments in shared config properties. # or ; separators that aren't preceded by whitespace at the end of a property value should be considered part of it.
+
+# v1.7.0 (2023-11-13)
+
+* **Feature**: Replace the legacy config parser with a modern, less-strict implementation. Parsing failures within a section will now simply ignore the invalid line rather than silently drop the entire section.
+
+# v1.6.0 (2023-11-09.2)
+
+* **Feature**: BREAKFIX: In order to support subproperty parsing, invalid property definitions must not be ignored
+
+# v1.5.2 (2023-11-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.5.1 (2023-11-07)
+
+* **Bug Fix**: Fix subproperty performance regression
+
+# v1.5.0 (2023-11-01)
+
+* **Feature**: Adds support for configured endpoints via environment variables and the AWS shared configuration file.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.0 (2023-10-31)
+
+* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.45 (2023-10-12)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.44 (2023-10-06)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.43 (2023-09-22)
+
+* **Bug Fix**: Fixed a bug where merging `max_attempts` or `duration_seconds` fields across shared config files with invalid values would silently default them to 0.
+* **Bug Fix**: Move type assertion of config values out of the parsing stage, which resolves an issue where the contents of a profile would silently be dropped with certain numeric formats.
+
+# v1.3.42 (2023-08-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.41 (2023-08-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.40 (2023-08-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.39 (2023-08-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.38 (2023-07-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.37 (2023-07-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.36 (2023-07-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.35 (2023-06-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.34 (2023-04-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.33 (2023-04-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.32 (2023-03-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.31 (2023-03-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.30 (2023-02-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.29 (2023-02-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.28 (2022-12-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.27 (2022-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.26 (2022-10-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.25 (2022-10-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.24 (2022-09-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.23 (2022-09-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.22 (2022-09-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.21 (2022-08-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.20 (2022-08-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.19 (2022-08-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.18 (2022-08-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.17 (2022-08-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.16 (2022-08-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.15 (2022-07-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.14 (2022-06-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.13 (2022-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.12 (2022-05-17)
+
+* **Bug Fix**: Removes the fuzz testing files from the module, as they are invalid and not used.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.11 (2022-04-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.10 (2022-03-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.9 (2022-03-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.8 (2022-03-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.7 (2022-03-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.6 (2022-02-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.5 (2022-01-28)
+
+* **Bug Fix**: Fixes the SDK's handling of `duration_sections` in the shared credentials file or specified in multiple shared config and shared credentials files under the same profile. [#1568](https://github.com/aws/aws-sdk-go-v2/pull/1568). Thanks to [Amir Szekely](https://github.com/kichik) for helping to reproduce this bug.
+
+# v1.3.4 (2022-01-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.3 (2022-01-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.2 (2021-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.1 (2021-11-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.0 (2021-11-06)
+
+* **Feature**: The SDK now supports configuration of FIPS and DualStack endpoints using environment variables, shared configuration, or programmatically.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.5 (2021-10-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.4 (2021-10-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.3 (2021-09-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.2 (2021-08-27)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.1 (2021-08-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.0 (2021-08-04)
+
+* **Feature**: Adds error handling for deferred close calls
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.1 (2021-07-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.0 (2021-07-01)
+
+* **Feature**: Support for `:`, `=`, `[`, `]` being present in expression values.
+
+# v1.0.1 (2021-06-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.0.0 (2021-05-20)
+
+* **Release**: The `github.com/aws/aws-sdk-go-v2/internal/ini` package is now a Go Module.
+* **Dependency Update**: Updated to the latest SDK module versions
+

vendor/github.com/aws/aws-sdk-go-v2/internal/ini/LICENSE.txt 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/internal/ini/errors.go 🔗

@@ -0,0 +1,22 @@
+package ini
+
+import "fmt"
+
+// UnableToReadFile is an error indicating that an INI file could not be read.
+type UnableToReadFile struct {
+	Err error
+}
+
+// Error returns an error message and the underlying error message if present
+func (e *UnableToReadFile) Error() string {
+	base := "unable to read file"
+	if e.Err == nil {
+		return base
+	}
+	return fmt.Sprintf("%s: %v", base, e.Err)
+}
+
+// Unwrap returns the underlying error
+func (e *UnableToReadFile) Unwrap() error {
+	return e.Err
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/ini/ini.go 🔗

@@ -0,0 +1,56 @@
+// Package ini implements parsing of the AWS shared config file.
+//
+//	Example:
+//	sections, err := ini.OpenFile("/path/to/file")
+//	if err != nil {
+//		panic(err)
+//	}
+//
+//	profile := "foo"
+//	section, ok := sections.GetSection(profile)
+//	if !ok {
+//		fmt.Printf("section %q could not be found", profile)
+//	}
+package ini
+
+import (
+	"fmt"
+	"io"
+	"os"
+	"strings"
+)
+
+// OpenFile parses shared config from the given file path.
+func OpenFile(path string) (sections Sections, err error) {
+	f, oerr := os.Open(path)
+	if oerr != nil {
+		return Sections{}, &UnableToReadFile{Err: oerr}
+	}
+
+	defer func() {
+		closeErr := f.Close()
+		if err == nil {
+			err = closeErr
+		} else if closeErr != nil {
+			err = fmt.Errorf("close error: %v, original error: %w", closeErr, err)
+		}
+	}()
+
+	return Parse(f, path)
+}
+
+// Parse parses shared config from the given reader.
+func Parse(r io.Reader, path string) (Sections, error) {
+	contents, err := io.ReadAll(r)
+	if err != nil {
+		return Sections{}, fmt.Errorf("read all: %v", err)
+	}
+
+	lines := strings.Split(string(contents), "\n")
+	tokens, err := tokenize(lines)
+	if err != nil {
+		return Sections{}, fmt.Errorf("tokenize: %v", err)
+	}
+
+	return parse(tokens, path), nil
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/ini/parse.go 🔗

@@ -0,0 +1,109 @@
+package ini
+
+import (
+	"fmt"
+	"strings"
+)
+
+func parse(tokens []lineToken, path string) Sections {
+	parser := &parser{
+		path:     path,
+		sections: NewSections(),
+	}
+	parser.parse(tokens)
+	return parser.sections
+}
+
+type parser struct {
+	csection, ckey string   // current state
+	path           string   // source file path
+	sections       Sections // parse result
+}
+
+func (p *parser) parse(tokens []lineToken) {
+	for _, otok := range tokens {
+		switch tok := otok.(type) {
+		case *lineTokenProfile:
+			p.handleProfile(tok)
+		case *lineTokenProperty:
+			p.handleProperty(tok)
+		case *lineTokenSubProperty:
+			p.handleSubProperty(tok)
+		case *lineTokenContinuation:
+			p.handleContinuation(tok)
+		}
+	}
+}
+
+func (p *parser) handleProfile(tok *lineTokenProfile) {
+	name := tok.Name
+	if tok.Type != "" {
+		name = fmt.Sprintf("%s %s", tok.Type, tok.Name)
+	}
+	p.ckey = ""
+	p.csection = name
+	if _, ok := p.sections.container[name]; !ok {
+		p.sections.container[name] = NewSection(name)
+	}
+}
+
+func (p *parser) handleProperty(tok *lineTokenProperty) {
+	if p.csection == "" {
+		return // LEGACY: don't error on "global" properties
+	}
+
+	p.ckey = tok.Key
+	if _, ok := p.sections.container[p.csection].values[tok.Key]; ok {
+		section := p.sections.container[p.csection]
+		section.Logs = append(p.sections.container[p.csection].Logs,
+			fmt.Sprintf(
+				"For profile: %v, overriding %v value, with a %v value found in a duplicate profile defined later in the same file %v. \n",
+				p.csection, tok.Key, tok.Key, p.path,
+			),
+		)
+		p.sections.container[p.csection] = section
+	}
+
+	p.sections.container[p.csection].values[tok.Key] = Value{
+		str: tok.Value,
+	}
+	p.sections.container[p.csection].SourceFile[tok.Key] = p.path
+}
+
+func (p *parser) handleSubProperty(tok *lineTokenSubProperty) {
+	if p.csection == "" {
+		return // LEGACY: don't error on "global" properties
+	}
+
+	if p.ckey == "" || p.sections.container[p.csection].values[p.ckey].str != "" {
+		// This is an "orphaned" subproperty, either because it's at
+		// the beginning of a section or because the last property's
+		// value isn't empty. Either way we're lenient here and
+		// "promote" this to a normal property.
+		p.handleProperty(&lineTokenProperty{
+			Key:   tok.Key,
+			Value: strings.TrimSpace(trimPropertyComment(tok.Value)),
+		})
+		return
+	}
+
+	if p.sections.container[p.csection].values[p.ckey].mp == nil {
+		p.sections.container[p.csection].values[p.ckey] = Value{
+			mp: map[string]string{},
+		}
+	}
+	p.sections.container[p.csection].values[p.ckey].mp[tok.Key] = tok.Value
+}
+
+func (p *parser) handleContinuation(tok *lineTokenContinuation) {
+	if p.ckey == "" {
+		return
+	}
+
+	value, _ := p.sections.container[p.csection].values[p.ckey]
+	if value.str != "" && value.mp == nil {
+		value.str = fmt.Sprintf("%s\n%s", value.str, tok.Value)
+	}
+
+	p.sections.container[p.csection].values[p.ckey] = value
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/ini/sections.go 🔗

@@ -0,0 +1,157 @@
+package ini
+
+import (
+	"sort"
+)
+
+// Sections is a map of Section structures that represent
+// a configuration.
+type Sections struct {
+	container map[string]Section
+}
+
+// NewSections returns empty ini Sections
+func NewSections() Sections {
+	return Sections{
+		container: make(map[string]Section, 0),
+	}
+}
+
+// GetSection will return section p. If section p does not exist,
+// false will be returned in the second parameter.
+func (t Sections) GetSection(p string) (Section, bool) {
+	v, ok := t.container[p]
+	return v, ok
+}
+
+// HasSection denotes if Sections consist of a section with
+// provided name.
+func (t Sections) HasSection(p string) bool {
+	_, ok := t.container[p]
+	return ok
+}
+
+// SetSection sets a section value for provided section name.
+func (t Sections) SetSection(p string, v Section) Sections {
+	t.container[p] = v
+	return t
+}
+
+// DeleteSection deletes a section entry/value for provided section name.
+func (t Sections) DeleteSection(p string) {
+	delete(t.container, p)
+}
+
+// values represents a map of union values.
+type values map[string]Value
+
+// List will return a list of all sections that were successfully
+// parsed.
+func (t Sections) List() []string {
+	keys := make([]string, len(t.container))
+	i := 0
+	for k := range t.container {
+		keys[i] = k
+		i++
+	}
+
+	sort.Strings(keys)
+	return keys
+}
+
+// Section contains a name and values. This represent
+// a sectioned entry in a configuration file.
+type Section struct {
+	// Name is the Section profile name
+	Name string
+
+	// values are the values within parsed profile
+	values values
+
+	// Errors is the list of errors
+	Errors []error
+
+	// Logs is the list of logs
+	Logs []string
+
+	// SourceFile is the INI Source file from where this section
+	// was retrieved. The key is the property; the value is the
+	// source file the property was retrieved from.
+	SourceFile map[string]string
+}
+
+// NewSection returns an initialized section for the given name.
+func NewSection(name string) Section {
+	return Section{
+		Name:       name,
+		values:     values{},
+		SourceFile: map[string]string{},
+	}
+}
+
+// List returns a list of all keys in the section's values.
+func (t Section) List() []string {
+	keys := make([]string, len(t.values))
+	i := 0
+	for k := range t.values {
+		keys[i] = k
+		i++
+	}
+
+	sort.Strings(keys)
+	return keys
+}
+
+// UpdateSourceFile updates source file for a property to provided filepath.
+func (t Section) UpdateSourceFile(property string, filepath string) {
+	t.SourceFile[property] = filepath
+}
+
+// UpdateValue updates value for a provided key with provided value
+func (t Section) UpdateValue(k string, v Value) error {
+	t.values[k] = v
+	return nil
+}
+
+// Has will return whether or not an entry exists in a given section
+func (t Section) Has(k string) bool {
+	_, ok := t.values[k]
+	return ok
+}
+
+// ValueType will return what type the union is set to. If
+// k was not found, the NoneType will be returned.
+func (t Section) ValueType(k string) (ValueType, bool) {
+	v, ok := t.values[k]
+	return v.Type, ok
+}
+
+// Bool returns a bool value at k
+func (t Section) Bool(k string) (bool, bool) {
+	return t.values[k].BoolValue()
+}
+
+// Int returns an integer value at k
+func (t Section) Int(k string) (int64, bool) {
+	return t.values[k].IntValue()
+}
+
+// Map returns a map value at k
+func (t Section) Map(k string) map[string]string {
+	return t.values[k].MapValue()
+}
+
+// Float64 returns a float value at k
+func (t Section) Float64(k string) (float64, bool) {
+	return t.values[k].FloatValue()
+}
+
+// String returns the string value at k
+func (t Section) String(k string) string {
+	_, ok := t.values[k]
+	if !ok {
+		return ""
+	}
+	return t.values[k].StringValue()
+}
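
The accessors above all follow one pattern: every value is stored as a string and converted on demand, with a second boolean return reporting whether the conversion succeeded. A minimal sketch of that pattern, using a hypothetical `section` map as a stand-in for the package's `Section` type:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// section is a simplified stand-in: string values, typed lazily on access.
type section map[string]string

// Int converts the stored string on demand; base 0 accepts 0x/0o prefixes.
func (s section) Int(k string) (int64, bool) {
	i, err := strconv.ParseInt(s[k], 0, 64)
	return i, err == nil
}

// Bool mirrors Value.BoolValue: only "true"/"false" (case-insensitive),
// deliberately narrower than strconv.ParseBool.
func (s section) Bool(k string) (bool, bool) {
	switch {
	case strings.EqualFold(s[k], "true"):
		return true, true
	case strings.EqualFold(s[k], "false"):
		return false, true
	}
	return false, false
}

func main() {
	s := section{"max_attempts": "3", "use_fips": "TRUE"}
	fmt.Println(s.Int("max_attempts"))
	fmt.Println(s.Bool("use_fips"))
}
```

The (value, ok) shape lets callers distinguish "key absent or malformed" from a legitimate zero value, which is why the real `Section.Int`, `Bool`, and `Float64` all return two results.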

vendor/github.com/aws/aws-sdk-go-v2/internal/ini/strings.go 🔗

@@ -0,0 +1,89 @@
+package ini
+
+import (
+	"strings"
+)
+
+func trimProfileComment(s string) string {
+	r, _, _ := strings.Cut(s, "#")
+	r, _, _ = strings.Cut(r, ";")
+	return r
+}
+
+func trimPropertyComment(s string) string {
+	r, _, _ := strings.Cut(s, " #")
+	r, _, _ = strings.Cut(r, " ;")
+	r, _, _ = strings.Cut(r, "\t#")
+	r, _, _ = strings.Cut(r, "\t;")
+	return r
+}
+
+// assumes no surrounding comment
+func splitProperty(s string) (string, string, bool) {
+	equalsi := strings.Index(s, "=")
+	coloni := strings.Index(s, ":") // LEGACY: also supported for property assignment
+	sep := "="
+	if equalsi == -1 || coloni != -1 && coloni < equalsi {
+		sep = ":"
+	}
+
+	k, v, ok := strings.Cut(s, sep)
+	if !ok {
+		return "", "", false
+	}
+	return strings.TrimSpace(k), strings.TrimSpace(v), true
+}
+
+// assumes no surrounding comment, whitespace, or profile brackets
+func splitProfile(s string) (string, string) {
+	var first int
+	for i, r := range s {
+		if isLineSpace(r) {
+			if first == 0 {
+				first = i
+			}
+		} else {
+			if first != 0 {
+				return s[:first], s[i:]
+			}
+		}
+	}
+	if first == 0 {
+		return "", s // type component is effectively blank
+	}
+	return "", ""
+}
+
+func isLineSpace(r rune) bool {
+	return r == ' ' || r == '\t'
+}
+
+func unquote(s string) string {
+	if isSingleQuoted(s) || isDoubleQuoted(s) {
+		return s[1 : len(s)-1]
+	}
+	return s
+}
+
+// applies various legacy conversions to property values:
+//   - remove wrapping single/double quotes
+func legacyStrconv(s string) string {
+	s = unquote(s)
+	return s
+}
+
+func isSingleQuoted(s string) bool {
+	return hasAffixes(s, "'", "'")
+}
+
+func isDoubleQuoted(s string) bool {
+	return hasAffixes(s, `"`, `"`)
+}
+
+func isBracketed(s string) bool {
+	return hasAffixes(s, "[", "]")
+}
+
+func hasAffixes(s, left, right string) bool {
+	return strings.HasPrefix(s, left) && strings.HasSuffix(s, right)
+}
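
`splitProperty` above supports both `=` and the legacy `:` as assignment separators, cutting at whichever appears first. Re-sketched standalone (same logic as the function above, copied for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// splitProperty cuts a "key = value" (or legacy "key: value") line at the
// first separator and trims surrounding whitespace from both halves.
func splitProperty(s string) (string, string, bool) {
	equalsi := strings.Index(s, "=")
	coloni := strings.Index(s, ":") // LEGACY separator
	sep := "="
	if equalsi == -1 || coloni != -1 && coloni < equalsi {
		sep = ":"
	}

	k, v, ok := strings.Cut(s, sep)
	if !ok {
		return "", "", false
	}
	return strings.TrimSpace(k), strings.TrimSpace(v), true
}

func main() {
	fmt.Println(splitProperty("region = us-west-2"))
	// A later colon inside the value does not win over an earlier "=":
	fmt.Println(splitProperty("endpoint = http://localhost:4566"))
}
```

The first-separator-wins rule matters for values containing URLs: `endpoint = http://host` splits on `=`, not on the `:` inside the value.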

vendor/github.com/aws/aws-sdk-go-v2/internal/ini/token.go 🔗

@@ -0,0 +1,32 @@
+package ini
+
+type lineToken interface {
+	isLineToken()
+}
+
+type lineTokenProfile struct {
+	Type string
+	Name string
+}
+
+func (*lineTokenProfile) isLineToken() {}
+
+type lineTokenProperty struct {
+	Key   string
+	Value string
+}
+
+func (*lineTokenProperty) isLineToken() {}
+
+type lineTokenContinuation struct {
+	Value string
+}
+
+func (*lineTokenContinuation) isLineToken() {}
+
+type lineTokenSubProperty struct {
+	Key   string
+	Value string
+}
+
+func (*lineTokenSubProperty) isLineToken() {}

vendor/github.com/aws/aws-sdk-go-v2/internal/ini/tokenize.go 🔗

@@ -0,0 +1,92 @@
+package ini
+
+import (
+	"strings"
+)
+
+func tokenize(lines []string) ([]lineToken, error) {
+	tokens := make([]lineToken, 0, len(lines))
+	for _, line := range lines {
+		if len(strings.TrimSpace(line)) == 0 || isLineComment(line) {
+			continue
+		}
+
+		if tok := asProfile(line); tok != nil {
+			tokens = append(tokens, tok)
+		} else if tok := asProperty(line); tok != nil {
+			tokens = append(tokens, tok)
+		} else if tok := asSubProperty(line); tok != nil {
+			tokens = append(tokens, tok)
+		} else if tok := asContinuation(line); tok != nil {
+			tokens = append(tokens, tok)
+		} // unrecognized tokens are effectively ignored
+	}
+	return tokens, nil
+}
+
+func isLineComment(line string) bool {
+	trimmed := strings.TrimLeft(line, " \t")
+	return strings.HasPrefix(trimmed, "#") || strings.HasPrefix(trimmed, ";")
+}
+
+func asProfile(line string) *lineTokenProfile { // " [ type name ] ; comment"
+	trimmed := strings.TrimSpace(trimProfileComment(line)) // "[ type name ]"
+	if !isBracketed(trimmed) {
+		return nil
+	}
+	trimmed = trimmed[1 : len(trimmed)-1] // " type name " (or just " name ")
+	trimmed = strings.TrimSpace(trimmed)  // "type name" / "name"
+	typ, name := splitProfile(trimmed)
+	return &lineTokenProfile{
+		Type: typ,
+		Name: name,
+	}
+}
+
+func asProperty(line string) *lineTokenProperty {
+	if isLineSpace(rune(line[0])) {
+		return nil
+	}
+
+	trimmed := trimPropertyComment(line)
+	trimmed = strings.TrimRight(trimmed, " \t")
+	k, v, ok := splitProperty(trimmed)
+	if !ok {
+		return nil
+	}
+
+	return &lineTokenProperty{
+		Key:   strings.ToLower(k), // LEGACY: normalize key case
+		Value: legacyStrconv(v),   // LEGACY: see func docs
+	}
+}
+
+func asSubProperty(line string) *lineTokenSubProperty {
+	if !isLineSpace(rune(line[0])) {
+		return nil
+	}
+
+	// comments on sub-properties are included in the value
+	trimmed := strings.TrimLeft(line, " \t")
+	k, v, ok := splitProperty(trimmed)
+	if !ok {
+		return nil
+	}
+
+	return &lineTokenSubProperty{ // same LEGACY constraints as in normal property
+		Key:   strings.ToLower(k),
+		Value: legacyStrconv(v),
+	}
+}
+
+func asContinuation(line string) *lineTokenContinuation {
+	if !isLineSpace(rune(line[0])) {
+		return nil
+	}
+
+	// includes comments like sub-properties
+	trimmed := strings.TrimLeft(line, " \t")
+	return &lineTokenContinuation{
+		Value: trimmed,
+	}
+}
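
The tokenizer above dispatches each line to the first matching classifier: profile header, then top-level property, then indented sub-property, then continuation, with comments and blanks skipped and anything unrecognized ignored. A standalone sketch of that dispatch order, using simplified stand-in checks rather than the vendored helpers (`classify` and its rules are illustrative assumptions, not the SDK's implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// classify mirrors the dispatch order of the tokenizer above:
// skip blanks/comments, then profile, property, sub-property,
// continuation; unrecognized lines are ignored.
func classify(line string) string {
	trimmed := strings.TrimSpace(line)
	indented := strings.HasPrefix(line, " ") || strings.HasPrefix(line, "\t")
	switch {
	case trimmed == "" || strings.HasPrefix(trimmed, "#") || strings.HasPrefix(trimmed, ";"):
		return "skip"
	case strings.HasPrefix(trimmed, "[") && strings.HasSuffix(trimmed, "]"):
		return "profile"
	case !indented && strings.Contains(line, "="):
		return "property"
	case indented && strings.Contains(line, "="):
		return "subproperty"
	case indented:
		return "continuation"
	default:
		return "ignored"
	}
}

func main() {
	for _, line := range []string{
		"[profile dev]",
		"region = us-west-2",
		"  role_arn = arn:aws:iam::123:role/x",
		"  continued-value",
		"; a comment",
	} {
		fmt.Printf("%-40q %s\n", line, classify(line))
	}
}
```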

vendor/github.com/aws/aws-sdk-go-v2/internal/ini/value.go 🔗

@@ -0,0 +1,93 @@
+package ini
+
+import (
+	"fmt"
+	"strconv"
+	"strings"
+)
+
+// ValueType is an enum signifying what type the Value is.
+type ValueType int
+
+func (v ValueType) String() string {
+	switch v {
+	case NoneType:
+		return "NONE"
+	case StringType:
+		return "STRING"
+	}
+
+	return ""
+}
+
+// ValueType enums
+const (
+	NoneType = ValueType(iota)
+	StringType
+	QuotedStringType
+)
+
+// Value is a union container
+type Value struct {
+	Type ValueType
+
+	str string
+	mp  map[string]string
+}
+
+// NewStringValue returns a Value type generated using a string input.
+func NewStringValue(str string) (Value, error) {
+	return Value{str: str}, nil
+}
+
+func (v Value) String() string {
+	switch v.Type {
+	case StringType:
+		return fmt.Sprintf("string: %s", string(v.str))
+	case QuotedStringType:
+		return fmt.Sprintf("quoted string: %s", string(v.str))
+	default:
+		return "union not set"
+	}
+}
+
+// MapValue returns a map value for sub properties
+func (v Value) MapValue() map[string]string {
+	return v.mp
+}
+
+// IntValue returns an integer value
+func (v Value) IntValue() (int64, bool) {
+	i, err := strconv.ParseInt(string(v.str), 0, 64)
+	if err != nil {
+		return 0, false
+	}
+	return i, true
+}
+
+// FloatValue returns a float value
+func (v Value) FloatValue() (float64, bool) {
+	f, err := strconv.ParseFloat(string(v.str), 64)
+	if err != nil {
+		return 0, false
+	}
+	return f, true
+}
+
+// BoolValue returns a bool value
+func (v Value) BoolValue() (bool, bool) {
+	// we don't use ParseBool as it recognizes more than what we've
+	// historically supported
+	if strings.EqualFold(v.str, "true") {
+		return true, true
+	} else if strings.EqualFold(v.str, "false") {
+		return false, true
+	}
+	return false, false
+}
+
+// StringValue returns the string value
+func (v Value) StringValue() string {
+	return v.str
+}
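
Two details of the conversions above are worth noting: `IntValue` parses with base 0, so `strconv` accepts `0x`/`0o`/`0b` prefixes, and `BoolValue` deliberately accepts only `true`/`false` (case-insensitive), which is narrower than `strconv.ParseBool`. A standalone sketch of both behaviors (helper names are illustrative):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// intValue parses with base 0, so prefixed forms like "0x10"
// are accepted via strconv's automatic base detection.
func intValue(s string) (int64, bool) {
	i, err := strconv.ParseInt(s, 0, 64)
	return i, err == nil
}

// boolValue accepts only "true"/"false" case-insensitively,
// narrower than strconv.ParseBool (which also allows "1", "t", ...).
func boolValue(s string) (bool, bool) {
	if strings.EqualFold(s, "true") {
		return true, true
	}
	if strings.EqualFold(s, "false") {
		return false, true
	}
	return false, false
}

func main() {
	fmt.Println(intValue("0x10"))  // hex parses under base 0
	fmt.Println(boolValue("TRUE")) // accepted, case-folded
	fmt.Println(boolValue("1"))    // rejected: narrower than ParseBool
}
```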

vendor/github.com/aws/aws-sdk-go-v2/internal/middleware/middleware.go 🔗

@@ -0,0 +1,42 @@
+package middleware
+
+import (
+	"context"
+	"sync/atomic"
+	"time"
+
+	internalcontext "github.com/aws/aws-sdk-go-v2/internal/context"
+	"github.com/aws/smithy-go/middleware"
+)
+
+// AddTimeOffsetMiddleware sets a value representing clock skew on the request context.
+// This can be read by other operations (such as signing) to correct the date value they send
+// on the request.
+type AddTimeOffsetMiddleware struct {
+	Offset *atomic.Int64
+}
+
+// ID the identifier for AddTimeOffsetMiddleware
+func (m *AddTimeOffsetMiddleware) ID() string { return "AddTimeOffsetMiddleware" }
+
+// HandleBuild sets a value for attemptSkew on the request context if one is set on the client.
+func (m AddTimeOffsetMiddleware) HandleBuild(ctx context.Context, in middleware.BuildInput, next middleware.BuildHandler) (
+	out middleware.BuildOutput, metadata middleware.Metadata, err error,
+) {
+	if m.Offset != nil {
+		offset := time.Duration(m.Offset.Load())
+		ctx = internalcontext.SetAttemptSkewContext(ctx, offset)
+	}
+	return next.HandleBuild(ctx, in)
+}
+
+// HandleDeserialize gets the clock skew context from the context, and if set, sets it on the pointer
+// held by AddTimeOffsetMiddleware
+func (m *AddTimeOffsetMiddleware) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	if v := internalcontext.GetAttemptSkewContext(ctx); v != 0 {
+		m.Offset.Store(v.Nanoseconds())
+	}
+	return next.HandleDeserialize(ctx, in)
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/rand/rand.go 🔗

@@ -0,0 +1,33 @@
+package rand
+
+import (
+	"crypto/rand"
+	"fmt"
+	"io"
+	"math/big"
+)
+
+func init() {
+	Reader = rand.Reader
+}
+
+// Reader provides a random reader that can be reset during testing.
+var Reader io.Reader
+
+var floatMaxBigInt = big.NewInt(1 << 53)
+
+// Float64 returns a float64 read from an io.Reader source. The returned float will be in the half-open interval [0.0, 1.0).
+func Float64(reader io.Reader) (float64, error) {
+	bi, err := rand.Int(reader, floatMaxBigInt)
+	if err != nil {
+		return 0, fmt.Errorf("failed to read random value, %v", err)
+	}
+
+	return float64(bi.Int64()) / (1 << 53), nil
+}
+
+// CryptoRandFloat64 returns a random float64 obtained from the crypto rand
+// source.
+func CryptoRandFloat64() (float64, error) {
+	return Float64(Reader)
+}
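
The `Float64` helper draws a uniform integer below 2^53 and divides it back down; 2^53 is the largest power of two whose integers are all exactly representable as float64, so the result is uniform in [0.0, 1.0). A self-contained sketch of the same construction (function name is an assumption):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// cryptoFloat64 draws a uniform integer in [0, 2^53) from the crypto
// source and scales it into [0.0, 1.0) without precision loss.
func cryptoFloat64() (float64, error) {
	limit := big.NewInt(1 << 53)
	n, err := rand.Int(rand.Reader, limit)
	if err != nil {
		return 0, err
	}
	return float64(n.Int64()) / (1 << 53), nil
}

func main() {
	f, err := cryptoFloat64()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.6f in [0,1): %v\n", f, f >= 0 && f < 1)
}
```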

vendor/github.com/aws/aws-sdk-go-v2/internal/sdk/time.go 🔗

@@ -0,0 +1,74 @@
+package sdk
+
+import (
+	"context"
+	"time"
+)
+
+func init() {
+	NowTime = time.Now
+	Sleep = time.Sleep
+	SleepWithContext = sleepWithContext
+}
+
+// NowTime is a value for getting the current time. This value can be overridden
+// for testing, mocking out the current time.
+var NowTime func() time.Time
+
+// Sleep is a value for sleeping for a duration. This value can be overridden
+// for testing and mocking out sleep duration.
+var Sleep func(time.Duration)
+
+// SleepWithContext will wait for the timer duration to expire or the context
+// to be canceled, whichever happens first. If the context is canceled, the
+// Context's error will be returned.
+//
+// This value can be overridden for testing and mocking out sleep duration.
+var SleepWithContext func(context.Context, time.Duration) error
+
+// sleepWithContext will wait for the timer duration to expire or the context
+// to be canceled, whichever happens first. If the context is canceled, the
+// Context's error will be returned.
+func sleepWithContext(ctx context.Context, dur time.Duration) error {
+	t := time.NewTimer(dur)
+	defer t.Stop()
+
+	select {
+	case <-t.C:
+		break
+	case <-ctx.Done():
+		return ctx.Err()
+	}
+
+	return nil
+}
+
+// noOpSleepWithContext does nothing, returns immediately.
+func noOpSleepWithContext(context.Context, time.Duration) error {
+	return nil
+}
+
+func noOpSleep(time.Duration) {}
+
+// TestingUseNopSleep is a utility for disabling sleep across the SDK for
+// testing.
+func TestingUseNopSleep() func() {
+	SleepWithContext = noOpSleepWithContext
+	Sleep = noOpSleep
+
+	return func() {
+		SleepWithContext = sleepWithContext
+		Sleep = time.Sleep
+	}
+}
+
+// TestingUseReferenceTime is a utility for swapping the time function across the SDK to return a specific reference time
+// for testing purposes.
+func TestingUseReferenceTime(referenceTime time.Time) func() {
+	NowTime = func() time.Time {
+		return referenceTime
+	}
+	return func() {
+		NowTime = time.Now
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/sdkio/byte.go 🔗

@@ -0,0 +1,12 @@
+package sdkio
+
+const (
+	// Byte is 8 bits
+	Byte int64 = 1
+	// KibiByte (KiB) is 1024 Bytes
+	KibiByte = Byte * 1024
+	// MebiByte (MiB) is 1024 KiB
+	MebiByte = KibiByte * 1024
+	// GibiByte (GiB) is 1024 MiB
+	GibiByte = MebiByte * 1024
+)
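
Each unit above is the previous one times 1024, typed `int64` via the first constant. A quick sketch of using such constants for a size threshold (the 5 MiB figure is just an example value, not taken from the SDK):

```go
package main

import "fmt"

// Binary size units built up by successive *1024 steps, typed int64
// through the first constant, as in the sdkio package above.
const (
	Byte     int64 = 1
	KibiByte       = Byte * 1024
	MebiByte       = KibiByte * 1024
	GibiByte       = MebiByte * 1024
)

func main() {
	// An example 5 MiB threshold expressed in bytes.
	fmt.Println(5 * MebiByte) // 5242880
}
```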

vendor/github.com/aws/aws-sdk-go-v2/internal/shareddefaults/shared_config.go 🔗

@@ -0,0 +1,47 @@
+package shareddefaults
+
+import (
+	"os"
+	"os/user"
+	"path/filepath"
+)
+
+// SharedCredentialsFilename returns the SDK's default file path
+// for the shared credentials file.
+//
+// Builds the shared config file path based on the OS's platform.
+//
+//   - Linux/Unix: $HOME/.aws/credentials
+//   - Windows: %USERPROFILE%\.aws\credentials
+func SharedCredentialsFilename() string {
+	return filepath.Join(UserHomeDir(), ".aws", "credentials")
+}
+
+// SharedConfigFilename returns the SDK's default file path for
+// the shared config file.
+//
+// Builds the shared config file path based on the OS's platform.
+//
+//   - Linux/Unix: $HOME/.aws/config
+//   - Windows: %USERPROFILE%\.aws\config
+func SharedConfigFilename() string {
+	return filepath.Join(UserHomeDir(), ".aws", "config")
+}
+
+// UserHomeDir returns the home directory for the user the process is
+// running under.
+func UserHomeDir() string {
+	// Ignore errors since we only care about Windows and *nix.
+	home, _ := os.UserHomeDir()
+
+	if len(home) > 0 {
+		return home
+	}
+
+	currUser, _ := user.Current()
+	if currUser != nil {
+		home = currUser.HomeDir
+	}
+
+	return home
+}

vendor/github.com/aws/aws-sdk-go-v2/internal/strings/strings.go 🔗

@@ -0,0 +1,11 @@
+package strings
+
+import (
+	"strings"
+)
+
+// HasPrefixFold tests whether the string s begins with prefix, interpreted as UTF-8 strings,
+// under Unicode case-folding.
+func HasPrefixFold(s, prefix string) bool {
+	return len(s) >= len(prefix) && strings.EqualFold(s[0:len(prefix)], prefix)
+}
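
The length guard before `EqualFold` matters: `strings.EqualFold` compares whole strings, so the prefix-sized slice must be taken first, and the guard keeps that slice in bounds. A standalone sketch of the same check:

```go
package main

import (
	"fmt"
	"strings"
)

// hasPrefixFold reports whether s begins with prefix under Unicode
// case-folding; the length check guards the slice s[:len(prefix)].
func hasPrefixFold(s, prefix string) bool {
	return len(s) >= len(prefix) && strings.EqualFold(s[:len(prefix)], prefix)
}

func main() {
	fmt.Println(hasPrefixFold("Content-Encoding", "content-")) // true
	fmt.Println(hasPrefixFold("gzip", "GZIP-EXTRA"))           // false: s too short
}
```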

vendor/github.com/aws/aws-sdk-go-v2/internal/sync/singleflight/LICENSE 🔗

@@ -0,0 +1,28 @@
+Copyright (c) 2009 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+   * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+   * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+   * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+

vendor/github.com/aws/aws-sdk-go-v2/internal/sync/singleflight/docs.go 🔗

@@ -0,0 +1,7 @@
+// Package singleflight provides a duplicate function call suppression
+// mechanism. This package is a fork of the Go golang.org/x/sync/singleflight
+// package. The package is forked because it is part of the unstable
+// and unversioned golang.org/x/sync module.
+//
+// https://github.com/golang/sync/tree/67f06af15bc961c363a7260195bcd53487529a21/singleflight
+package singleflight

vendor/github.com/aws/aws-sdk-go-v2/internal/sync/singleflight/singleflight.go 🔗

@@ -0,0 +1,210 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package singleflight
+
+import (
+	"bytes"
+	"errors"
+	"fmt"
+	"runtime"
+	"runtime/debug"
+	"sync"
+)
+
+// errGoexit indicates the runtime.Goexit was called in
+// the user given function.
+var errGoexit = errors.New("runtime.Goexit was called")
+
+// A panicError is an arbitrary value recovered from a panic
+// with the stack trace during the execution of the given function.
+type panicError struct {
+	value interface{}
+	stack []byte
+}
+
+// Error implements error interface.
+func (p *panicError) Error() string {
+	return fmt.Sprintf("%v\n\n%s", p.value, p.stack)
+}
+
+func newPanicError(v interface{}) error {
+	stack := debug.Stack()
+
+	// The first line of the stack trace is of the form "goroutine N [status]:"
+	// but by the time the panic reaches Do the goroutine may no longer exist
+	// and its status will have changed. Trim out the misleading line.
+	if line := bytes.IndexByte(stack[:], '\n'); line >= 0 {
+		stack = stack[line+1:]
+	}
+	return &panicError{value: v, stack: stack}
+}
+
+// call is an in-flight or completed singleflight.Do call
+type call struct {
+	wg sync.WaitGroup
+
+	// These fields are written once before the WaitGroup is done
+	// and are only read after the WaitGroup is done.
+	val interface{}
+	err error
+
+	// forgotten indicates whether Forget was called with this call's key
+	// while the call was still in flight.
+	forgotten bool
+
+	// These fields are read and written with the singleflight
+	// mutex held before the WaitGroup is done, and are read but
+	// not written after the WaitGroup is done.
+	dups  int
+	chans []chan<- Result
+}
+
+// Group represents a class of work and forms a namespace in
+// which units of work can be executed with duplicate suppression.
+type Group struct {
+	mu sync.Mutex       // protects m
+	m  map[string]*call // lazily initialized
+}
+
+// Result holds the results of Do, so they can be passed
+// on a channel.
+type Result struct {
+	Val    interface{}
+	Err    error
+	Shared bool
+}
+
+// Do executes and returns the results of the given function, making
+// sure that only one execution is in-flight for a given key at a
+// time. If a duplicate comes in, the duplicate caller waits for the
+// original to complete and receives the same results.
+// The return value shared indicates whether v was given to multiple callers.
+func (g *Group) Do(key string, fn func() (interface{}, error)) (v interface{}, err error, shared bool) {
+	g.mu.Lock()
+	if g.m == nil {
+		g.m = make(map[string]*call)
+	}
+	if c, ok := g.m[key]; ok {
+		c.dups++
+		g.mu.Unlock()
+		c.wg.Wait()
+
+		if e, ok := c.err.(*panicError); ok {
+			panic(e)
+		} else if c.err == errGoexit {
+			runtime.Goexit()
+		}
+		return c.val, c.err, true
+	}
+	c := new(call)
+	c.wg.Add(1)
+	g.m[key] = c
+	g.mu.Unlock()
+
+	g.doCall(c, key, fn)
+	return c.val, c.err, c.dups > 0
+}
+
+// DoChan is like Do but returns a channel that will receive the
+// results when they are ready.
+//
+// The returned channel will not be closed.
+func (g *Group) DoChan(key string, fn func() (interface{}, error)) <-chan Result {
+	ch := make(chan Result, 1)
+	g.mu.Lock()
+	if g.m == nil {
+		g.m = make(map[string]*call)
+	}
+	if c, ok := g.m[key]; ok {
+		c.dups++
+		c.chans = append(c.chans, ch)
+		g.mu.Unlock()
+		return ch
+	}
+	c := &call{chans: []chan<- Result{ch}}
+	c.wg.Add(1)
+	g.m[key] = c
+	g.mu.Unlock()
+
+	go g.doCall(c, key, fn)
+
+	return ch
+}
+
+// doCall handles the single call for a key.
+func (g *Group) doCall(c *call, key string, fn func() (interface{}, error)) {
+	normalReturn := false
+	recovered := false
+
+	// use double-defer to distinguish panic from runtime.Goexit,
+	// more details see https://golang.org/cl/134395
+	defer func() {
+		// the given function invoked runtime.Goexit
+		if !normalReturn && !recovered {
+			c.err = errGoexit
+		}
+
+		c.wg.Done()
+		g.mu.Lock()
+		defer g.mu.Unlock()
+		if !c.forgotten {
+			delete(g.m, key)
+		}
+
+		if e, ok := c.err.(*panicError); ok {
+			// In order to prevent the waiting channels from being blocked forever,
+			// we need to ensure that this panic cannot be recovered.
+			if len(c.chans) > 0 {
+				go panic(e)
+				select {} // Keep this goroutine around so that it will appear in the crash dump.
+			} else {
+				panic(e)
+			}
+		} else if c.err == errGoexit {
+			// Already in the process of goexit, no need to call again
+		} else {
+			// Normal return
+			for _, ch := range c.chans {
+				ch <- Result{c.val, c.err, c.dups > 0}
+			}
+		}
+	}()
+
+	func() {
+		defer func() {
+			if !normalReturn {
+				// Ideally, we would wait to take a stack trace until we've determined
+				// whether this is a panic or a runtime.Goexit.
+				//
+				// Unfortunately, the only way we can distinguish the two is to see
+				// whether the recover stopped the goroutine from terminating, and by
+				// the time we know that, the part of the stack trace relevant to the
+				// panic has been discarded.
+				if r := recover(); r != nil {
+					c.err = newPanicError(r)
+				}
+			}
+		}()
+
+		c.val, c.err = fn()
+		normalReturn = true
+	}()
+
+	if !normalReturn {
+		recovered = true
+	}
+}
+
+// Forget tells the singleflight to forget about a key.  Future calls
+// to Do for this key will call the function rather than waiting for
+// an earlier call to complete.
+func (g *Group) Forget(key string) {
+	g.mu.Lock()
+	if c, ok := g.m[key]; ok {
+		c.forgotten = true
+	}
+	delete(g.m, key)
+	g.mu.Unlock()
+}
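
The core `Do` contract above — one execution in flight per key, with duplicates waiting and sharing the result — can be exercised with a much smaller stand-in. This toy version (not the forked package, and without its panic/Goexit handling) keeps just the map-plus-WaitGroup shape:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// miniGroup is a toy single-flight: concurrent Do calls with the
// same key that arrive while a call is in flight share one
// execution of fn.
type miniGroup struct {
	mu sync.Mutex
	m  map[string]*miniCall
}

type miniCall struct {
	wg  sync.WaitGroup
	val int
}

func (g *miniGroup) Do(key string, fn func() int) int {
	g.mu.Lock()
	if g.m == nil {
		g.m = make(map[string]*miniCall)
	}
	if c, ok := g.m[key]; ok {
		g.mu.Unlock()
		c.wg.Wait() // duplicate: wait for the in-flight call
		return c.val
	}
	c := new(miniCall)
	c.wg.Add(1)
	g.m[key] = c
	g.mu.Unlock()

	c.val = fn()
	c.wg.Done()

	g.mu.Lock()
	delete(g.m, key) // like the non-forgotten path in doCall above
	g.mu.Unlock()
	return c.val
}

func main() {
	var g miniGroup
	var calls atomic.Int32
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			g.Do("key", func() int {
				calls.Add(1)
				return 42
			})
		}()
	}
	wg.Wait()
	// Overlapping callers share executions, so this is usually far
	// fewer than 8 (often 1).
	fmt.Println("executions:", calls.Load())
}
```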

vendor/github.com/aws/aws-sdk-go-v2/internal/timeconv/duration.go 🔗

@@ -0,0 +1,13 @@
+package timeconv
+
+import "time"
+
+// FloatSecondsDur converts a fractional seconds to duration.
+func FloatSecondsDur(v float64) time.Duration {
+	return time.Duration(v * float64(time.Second))
+}
+
+// DurSecondsFloat converts a duration into fractional seconds.
+func DurSecondsFloat(d time.Duration) float64 {
+	return float64(d) / float64(time.Second)
+}

vendor/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding/CHANGELOG.md 🔗

@@ -0,0 +1,140 @@
+# v1.11.3 (2024-06-28)
+
+* No change notes available for this release.
+
+# v1.11.2 (2024-03-29)
+
+* No change notes available for this release.
+
+# v1.11.1 (2024-02-21)
+
+* No change notes available for this release.
+
+# v1.11.0 (2024-02-13)
+
+* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+
+# v1.10.4 (2023-12-07)
+
+* No change notes available for this release.
+
+# v1.10.3 (2023-11-30)
+
+* No change notes available for this release.
+
+# v1.10.2 (2023-11-29)
+
+* No change notes available for this release.
+
+# v1.10.1 (2023-11-15)
+
+* No change notes available for this release.
+
+# v1.10.0 (2023-10-31)
+
+* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
+
+# v1.9.15 (2023-10-06)
+
+* No change notes available for this release.
+
+# v1.9.14 (2023-08-18)
+
+* No change notes available for this release.
+
+# v1.9.13 (2023-08-07)
+
+* No change notes available for this release.
+
+# v1.9.12 (2023-07-31)
+
+* No change notes available for this release.
+
+# v1.9.11 (2022-12-02)
+
+* No change notes available for this release.
+
+# v1.9.10 (2022-10-24)
+
+* No change notes available for this release.
+
+# v1.9.9 (2022-09-14)
+
+* No change notes available for this release.
+
+# v1.9.8 (2022-09-02)
+
+* No change notes available for this release.
+
+# v1.9.7 (2022-08-31)
+
+* No change notes available for this release.
+
+# v1.9.6 (2022-08-29)
+
+* No change notes available for this release.
+
+# v1.9.5 (2022-08-11)
+
+* No change notes available for this release.
+
+# v1.9.4 (2022-08-09)
+
+* No change notes available for this release.
+
+# v1.9.3 (2022-06-29)
+
+* No change notes available for this release.
+
+# v1.9.2 (2022-06-07)
+
+* No change notes available for this release.
+
+# v1.9.1 (2022-03-24)
+
+* No change notes available for this release.
+
+# v1.9.0 (2022-03-08)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+
+# v1.8.0 (2022-02-24)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+
+# v1.7.0 (2022-01-14)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+
+# v1.6.0 (2022-01-07)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+
+# v1.5.0 (2021-11-06)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+
+# v1.4.0 (2021-10-21)
+
+* **Feature**: Updated to latest version
+
+# v1.3.0 (2021-08-27)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+
+# v1.2.2 (2021-08-04)
+
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version.
+
+# v1.2.1 (2021-07-15)
+
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version
+
+# v1.2.0 (2021-06-25)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+
+# v1.1.0 (2021-05-14)
+
+* **Feature**: Constant has been added to modules to enable runtime version inspection for reporting.
+

vendor/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding/LICENSE.txt 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding/accept_encoding_gzip.go 🔗

@@ -0,0 +1,176 @@
+package acceptencoding
+
+import (
+	"compress/gzip"
+	"context"
+	"fmt"
+	"io"
+
+	"github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+const acceptEncodingHeaderKey = "Accept-Encoding"
+const contentEncodingHeaderKey = "Content-Encoding"
+
+// AddAcceptEncodingGzipOptions provides the options for the
+// AddAcceptEncodingGzip middleware setup.
+type AddAcceptEncodingGzipOptions struct {
+	Enable bool
+}
+
+// AddAcceptEncodingGzip explicitly adds handling for accept-encoding GZIP
+// middleware to the operation stack. This allows checksums to be correctly
+// computed without disabling GZIP support.
+func AddAcceptEncodingGzip(stack *middleware.Stack, options AddAcceptEncodingGzipOptions) error {
+	if options.Enable {
+		if err := stack.Finalize.Add(&EnableGzip{}, middleware.Before); err != nil {
+			return err
+		}
+		if err := stack.Deserialize.Insert(&DecompressGzip{}, "OperationDeserializer", middleware.After); err != nil {
+			return err
+		}
+		return nil
+	}
+
+	return stack.Finalize.Add(&DisableGzip{}, middleware.Before)
+}
+
+// DisableGzip provides the middleware that will
+// disable the underlying http client's automatic gzip
+// content-encoding decompression support.
+type DisableGzip struct{}
+
+// ID returns the id for the middleware.
+func (*DisableGzip) ID() string {
+	return "DisableAcceptEncodingGzip"
+}
+
+// HandleFinalize implements the FinalizeMiddleware interface.
+func (*DisableGzip) HandleFinalize(
+	ctx context.Context, input middleware.FinalizeInput, next middleware.FinalizeHandler,
+) (
+	output middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := input.Request.(*smithyhttp.Request)
+	if !ok {
+		return output, metadata, &smithy.SerializationError{
+			Err: fmt.Errorf("unknown request type %T", input.Request),
+		}
+	}
+
+	// Explicitly set the Accept-Encoding header to identity. This prevents
+	// the http client from silently requesting gzip and auto extracting the
+	// zipped content.
+	req.Header.Set(acceptEncodingHeaderKey, "identity")
+
+	return next.HandleFinalize(ctx, input)
+}
+
+// EnableGzip provides a middleware to enable support for
+// gzip responses, with manual decompression. This prevents the underlying HTTP
+// client from performing the gzip decompression automatically.
+type EnableGzip struct{}
+
+// ID returns the id for the middleware.
+func (*EnableGzip) ID() string {
+	return "AcceptEncodingGzip"
+}
+
+// HandleFinalize implements the FinalizeMiddleware interface.
+func (*EnableGzip) HandleFinalize(
+	ctx context.Context, input middleware.FinalizeInput, next middleware.FinalizeHandler,
+) (
+	output middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := input.Request.(*smithyhttp.Request)
+	if !ok {
+		return output, metadata, &smithy.SerializationError{
+			Err: fmt.Errorf("unknown request type %T", input.Request),
+		}
+	}
+
+	// Explicitly enable gzip support, this will prevent the http client from
+	// auto extracting the zipped content.
+	req.Header.Set(acceptEncodingHeaderKey, "gzip")
+
+	return next.HandleFinalize(ctx, input)
+}
+
+// DecompressGzip provides the middleware for decompressing a gzip
+// response from the service.
+type DecompressGzip struct{}
+
+// ID returns the id for the middleware.
+func (*DecompressGzip) ID() string {
+	return "DecompressGzip"
+}
+
+// HandleDeserialize implements the DeserializeMiddleware interface.
+func (*DecompressGzip) HandleDeserialize(
+	ctx context.Context, input middleware.DeserializeInput, next middleware.DeserializeHandler,
+) (
+	output middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	output, metadata, err = next.HandleDeserialize(ctx, input)
+	if err != nil {
+		return output, metadata, err
+	}
+
+	resp, ok := output.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return output, metadata, &smithy.DeserializationError{
+			Err: fmt.Errorf("unknown response type %T", output.RawResponse),
+		}
+	}
+	if v := resp.Header.Get(contentEncodingHeaderKey); v != "gzip" {
+		return output, metadata, err
+	}
+
+	// Clear content length since it will no longer be valid once the response
+	// body is decompressed.
+	resp.Header.Del("Content-Length")
+	resp.ContentLength = -1
+
+	resp.Body = wrapGzipReader(resp.Body)
+
+	return output, metadata, err
+}
+
+type gzipReader struct {
+	reader io.ReadCloser
+	gzip   *gzip.Reader
+}
+
+func wrapGzipReader(reader io.ReadCloser) *gzipReader {
+	return &gzipReader{
+		reader: reader,
+	}
+}
+
+// Read wraps the gzip reader around the underlying io.Reader to extract the
+// response bytes on the fly.
+func (g *gzipReader) Read(b []byte) (n int, err error) {
+	if g.gzip == nil {
+		g.gzip, err = gzip.NewReader(g.reader)
+		if err != nil {
+			g.gzip = nil // ensure uninitialized gzip value isn't used in close.
+			return 0, fmt.Errorf("failed to decompress gzip response, %w", err)
+		}
+	}
+
+	return g.gzip.Read(b)
+}
+
+func (g *gzipReader) Close() error {
+	if g.gzip == nil {
+		return nil
+	}
+
+	if err := g.gzip.Close(); err != nil {
+		g.reader.Close()
+		return fmt.Errorf("failed to decompress gzip response, %w", err)
+	}
+
+	return g.reader.Close()
+}
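
The lazily initialized `gzipReader` above can be exercised in isolation. The sketch below is illustrative, not part of the SDK: names like `lazyGzipReader`, `gzipBody`, and `decompressAll` are hypothetical, but the wrap-on-first-`Read` pattern mirrors the vendored code.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// gzipBody compresses s and returns it as an io.ReadCloser, standing in for
// an HTTP response body sent with Content-Encoding: gzip.
func gzipBody(s string) io.ReadCloser {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Write([]byte(s))
	zw.Close()
	return io.NopCloser(&buf)
}

// lazyGzipReader mirrors the vendored gzipReader: the gzip.Reader is only
// constructed on the first Read, so wrapping a body costs nothing until the
// caller actually consumes it.
type lazyGzipReader struct {
	reader io.ReadCloser
	gz     *gzip.Reader
}

func (g *lazyGzipReader) Read(b []byte) (int, error) {
	if g.gz == nil {
		zr, err := gzip.NewReader(g.reader)
		if err != nil {
			return 0, fmt.Errorf("failed to decompress gzip response, %w", err)
		}
		g.gz = zr
	}
	return g.gz.Read(b)
}

// decompressAll drains the wrapped body, decompressing on the fly.
func decompressAll(body io.ReadCloser) (string, error) {
	data, err := io.ReadAll(&lazyGzipReader{reader: body})
	return string(data), err
}

func main() {
	out, err := decompressAll(gzipBody("hello gzip"))
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // hello gzip
}
```

Deferring `gzip.NewReader` to the first `Read` matters because `gzip.NewReader` itself reads the stream header and would fail eagerly on an empty or not-yet-available body.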

vendor/github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding/doc.go 🔗

@@ -0,0 +1,22 @@
+/*
+Package acceptencoding provides customizations associated with Accept Encoding Header.
+
+# Accept encoding gzip
+
+The Go HTTP client automatically supports accept-encoding and content-encoding
+gzip by default. This default behavior is not desired by the SDK, and prevents
+validating the response body's checksum. To prevent this the SDK must manually
+control usage of content-encoding gzip.
+
+To control content-encoding, the SDK must always set the `Accept-Encoding`
+header to a value. This prevents the HTTP client from using gzip automatically.
+When gzip is enabled on the API client, the SDK's customization will control
+decompressing the gzip data in order to not break the checksum validation. When
+gzip is disabled, the API client will disable gzip, preventing the HTTP
+client's default behavior.
+
+An `EnableAcceptEncodingGzip` option may or may not be present depending on
+the client using this middleware. If present, the option can be used to enable
+the SDK's automatic decompression of gzip responses.
+*/
+package acceptencoding

vendor/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url/CHANGELOG.md 🔗

@@ -0,0 +1,346 @@
+# v1.11.17 (2024-07-10.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.16 (2024-07-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.15 (2024-06-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.14 (2024-06-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.13 (2024-06-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.12 (2024-06-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.11 (2024-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.10 (2024-06-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.9 (2024-05-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.8 (2024-05-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.7 (2024-03-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.6 (2024-03-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.5 (2024-03-07)
+
+* **Bug Fix**: Remove dependency on go-cmp.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.4 (2024-03-05)
+
+* **Bug Fix**: Restore typo'd API `AddAsIsInternalPresigingMiddleware` as an alias for backwards compatibility.
+
+# v1.11.3 (2024-03-04)
+
+* **Bug Fix**: Correct a typo in internal AddAsIsPresigningMiddleware API.
+
+# v1.11.2 (2024-02-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.1 (2024-02-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.0 (2024-02-13)
+
+* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.10 (2024-01-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.9 (2023-12-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.8 (2023-12-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.7 (2023-11-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.6 (2023-11-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.5 (2023-11-28.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.4 (2023-11-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.3 (2023-11-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.2 (2023-11-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.1 (2023-11-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.0 (2023-10-31)
+
+* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.37 (2023-10-12)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.36 (2023-10-06)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.35 (2023-08-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.34 (2023-08-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.33 (2023-08-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.32 (2023-08-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.31 (2023-07-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.30 (2023-07-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.29 (2023-07-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.28 (2023-06-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.27 (2023-04-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.26 (2023-04-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.25 (2023-03-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.24 (2023-03-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.23 (2023-02-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.22 (2023-02-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.21 (2022-12-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.20 (2022-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.19 (2022-10-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.18 (2022-10-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.17 (2022-09-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.16 (2022-09-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.15 (2022-09-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.14 (2022-08-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.13 (2022-08-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.12 (2022-08-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.11 (2022-08-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.10 (2022-08-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.9 (2022-08-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.8 (2022-07-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.7 (2022-06-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.6 (2022-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.5 (2022-05-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.4 (2022-04-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.3 (2022-03-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.2 (2022-03-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.1 (2022-03-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.0 (2022-03-08)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.0 (2022-02-24)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.7.0 (2022-01-14)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.0 (2022-01-07)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.5.2 (2021-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.5.1 (2021-11-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.5.0 (2021-11-06)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.0 (2021-10-21)
+
+* **Feature**: Updated to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.2 (2021-10-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.1 (2021-09-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.0 (2021-08-27)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.3 (2021-08-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.2 (2021-08-04)
+
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.1 (2021-07-15)
+
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.0 (2021-06-25)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.1 (2021-05-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.1.0 (2021-05-14)
+
+* **Feature**: Constant has been added to modules to enable runtime version inspection for reporting.
+* **Dependency Update**: Updated to the latest SDK module versions
+

vendor/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url/LICENSE.txt 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url/context.go 🔗

@@ -0,0 +1,56 @@
+package presignedurl
+
+import (
+	"context"
+
+	"github.com/aws/smithy-go/middleware"
+)
+
+// WithIsPresigning adds the isPresigning sentinel value to a context to signal
+// that the middleware stack is using the presign flow.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func WithIsPresigning(ctx context.Context) context.Context {
+	return middleware.WithStackValue(ctx, isPresigningKey{}, true)
+}
+
+// GetIsPresigning returns if the context contains the isPresigning sentinel
+// value for presigning flows.
+//
+// Scoped to stack values. Use github.com/aws/smithy-go/middleware#ClearStackValues
+// to clear all stack values.
+func GetIsPresigning(ctx context.Context) bool {
+	v, _ := middleware.GetStackValue(ctx, isPresigningKey{}).(bool)
+	return v
+}
+
+type isPresigningKey struct{}
+
+// AddAsIsPresigningMiddleware adds a middleware to the head of the stack that
+// will update the stack's context to be flagged as being invoked for the
+// purpose of presigning.
+func AddAsIsPresigningMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(asIsPresigningMiddleware{}, middleware.Before)
+}
+
+// AddAsIsPresigingMiddleware is an alias for backwards compatibility.
+//
+// Deprecated: This API was released with a typo. Use
+// [AddAsIsPresigningMiddleware] instead.
+func AddAsIsPresigingMiddleware(stack *middleware.Stack) error {
+	return AddAsIsPresigningMiddleware(stack)
+}
+
+type asIsPresigningMiddleware struct{}
+
+func (asIsPresigningMiddleware) ID() string { return "AsIsPresigningMiddleware" }
+
+func (asIsPresigningMiddleware) HandleInitialize(
+	ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler,
+) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	ctx = WithIsPresigning(ctx)
+	return next.HandleInitialize(ctx, in)
+}

vendor/github.com/aws/aws-sdk-go-v2/service/internal/presigned-url/middleware.go 🔗

@@ -0,0 +1,110 @@
+package presignedurl
+
+import (
+	"context"
+	"fmt"
+
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
+
+	"github.com/aws/smithy-go/middleware"
+)
+
+// URLPresigner provides the interface to presign the input parameters into a
+// presigned URL.
+type URLPresigner interface {
+	// PresignURL presigns a URL.
+	PresignURL(ctx context.Context, srcRegion string, params interface{}) (*v4.PresignedHTTPRequest, error)
+}
+
+// ParameterAccessor provides a collection of accessors for retrieving and
+// setting the values needed for presigned URL generation.
+type ParameterAccessor struct {
+	// GetPresignedURL accessor points to a function that retrieves a presigned url if present
+	GetPresignedURL func(interface{}) (string, bool, error)
+
+	// GetSourceRegion accessor points to a function that retrieves source region for presigned url
+	GetSourceRegion func(interface{}) (string, bool, error)
+
+	// CopyInput accessor points to a function that takes in an input, and returns a copy.
+	CopyInput func(interface{}) (interface{}, error)
+
+	// SetDestinationRegion accessor points to a function that sets destination region on api input struct
+	SetDestinationRegion func(interface{}, string) error
+
+	// SetPresignedURL accessor points to a function that sets presigned url on api input struct
+	SetPresignedURL func(interface{}, string) error
+}
+
+// Options provides the set of options needed by the presigned URL middleware.
+type Options struct {
+	// Accessor are the parameter accessors used by this middleware
+	Accessor ParameterAccessor
+
+	// Presigner is the URLPresigner used by the middleware
+	Presigner URLPresigner
+}
+
+// AddMiddleware adds the Presign URL middleware to the middleware stack.
+func AddMiddleware(stack *middleware.Stack, opts Options) error {
+	return stack.Initialize.Add(&presign{options: opts}, middleware.Before)
+}
+
+// RemoveMiddleware removes the Presign URL middleware from the stack.
+func RemoveMiddleware(stack *middleware.Stack) error {
+	_, err := stack.Initialize.Remove((*presign)(nil).ID())
+	return err
+}
+
+type presign struct {
+	options Options
+}
+
+func (m *presign) ID() string { return "Presign" }
+
+func (m *presign) HandleInitialize(
+	ctx context.Context, input middleware.InitializeInput, next middleware.InitializeHandler,
+) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	// If PresignedURL is already set, ignore the middleware.
+	if _, ok, err := m.options.Accessor.GetPresignedURL(input.Parameters); err != nil {
+		return out, metadata, fmt.Errorf("presign middleware failed, %w", err)
+	} else if ok {
+		return next.HandleInitialize(ctx, input)
+	}
+
+	// If the source region is not set, skip this middleware.
+	srcRegion, ok, err := m.options.Accessor.GetSourceRegion(input.Parameters)
+	if err != nil {
+		return out, metadata, fmt.Errorf("presign middleware failed, %w", err)
+	} else if !ok || len(srcRegion) == 0 {
+		return next.HandleInitialize(ctx, input)
+	}
+
+	// Create a copy of the original input so the destination region value can
+	// be added. This ensures that value does not leak into the original
+	// request parameters.
+	paramCpy, err := m.options.Accessor.CopyInput(input.Parameters)
+	if err != nil {
+		return out, metadata, fmt.Errorf("unable to create presigned URL, %w", err)
+	}
+
+	// Destination region is the API client's configured region.
+	dstRegion := awsmiddleware.GetRegion(ctx)
+	if err = m.options.Accessor.SetDestinationRegion(paramCpy, dstRegion); err != nil {
+		return out, metadata, fmt.Errorf("presign middleware failed, %w", err)
+	}
+
+	presignedReq, err := m.options.Presigner.PresignURL(ctx, srcRegion, paramCpy)
+	if err != nil {
+		return out, metadata, fmt.Errorf("unable to create presigned URL, %w", err)
+	}
+
+	// Update the original input with the presigned URL value.
+	if err = m.options.Accessor.SetPresignedURL(input.Parameters, presignedReq.URL); err != nil {
+		return out, metadata, fmt.Errorf("presign middleware failed, %w", err)
+	}
+
+	return next.HandleInitialize(ctx, input)
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sso/CHANGELOG.md

@@ -0,0 +1,475 @@
+# v1.22.4 (2024-07-18)
+
+* No change notes available for this release.
+
+# v1.22.3 (2024-07-10.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.22.2 (2024-07-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.22.1 (2024-06-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.22.0 (2024-06-26)
+
+* **Feature**: Support list-of-string endpoint parameter.
+
+# v1.21.1 (2024-06-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.21.0 (2024-06-18)
+
+* **Feature**: Track usage of various AWS SDK features in user-agent string.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.12 (2024-06-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.11 (2024-06-07)
+
+* **Bug Fix**: Add clock skew correction on all service clients
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.10 (2024-06-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.9 (2024-05-23)
+
+* No change notes available for this release.
+
+# v1.20.8 (2024-05-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.7 (2024-05-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.6 (2024-05-08)
+
+* **Bug Fix**: GoDoc improvement
+
+# v1.20.5 (2024-04-05)
+
+* No change notes available for this release.
+
+# v1.20.4 (2024-03-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.3 (2024-03-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.2 (2024-03-07)
+
+* **Bug Fix**: Remove dependency on go-cmp.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.1 (2024-02-23)
+
+* **Bug Fix**: Move all common, SDK-side middleware stack ops into the service client module to prevent cross-module compatibility issues in the future.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.0 (2024-02-22)
+
+* **Feature**: Add middleware stack snapshot tests.
+
+# v1.19.2 (2024-02-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.19.1 (2024-02-20)
+
+* **Bug Fix**: When sourcing values for a service's `EndpointParameters`, the lack of a configured region (i.e. `options.Region == ""`) will now translate to a `nil` value for `EndpointParameters.Region` instead of a pointer to the empty string `""`. This will result in a much more explicit error when calling an operation instead of an obscure hostname lookup failure.
+
+# v1.19.0 (2024-02-13)
+
+* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.7 (2024-01-18)
+
+* No change notes available for this release.
+
+# v1.18.6 (2024-01-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.5 (2023-12-08)
+
+* **Bug Fix**: Reinstate presence of default Retryer in functional options, but still respect max attempts set therein.
+
+# v1.18.4 (2023-12-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.3 (2023-12-06)
+
+* **Bug Fix**: Restore pre-refactor auth behavior where all operations could technically be performed anonymously.
+
+# v1.18.2 (2023-12-01)
+
+* **Bug Fix**: Correct wrapping of errors in authentication workflow.
+* **Bug Fix**: Correctly recognize cache-wrapped instances of AnonymousCredentials at client construction.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.1 (2023-11-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.0 (2023-11-29)
+
+* **Feature**: Expose Options() accessor on service clients.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.5 (2023-11-28.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.4 (2023-11-28)
+
+* **Bug Fix**: Respect setting RetryMaxAttempts in functional options at client construction.
+
+# v1.17.3 (2023-11-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.2 (2023-11-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.1 (2023-11-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.0 (2023-11-01)
+
+* **Feature**: Adds support for configured endpoints via environment variables and the AWS shared configuration file.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.0 (2023-10-31)
+
+* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.2 (2023-10-12)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.1 (2023-10-06)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.0 (2023-10-02)
+
+* **Feature**: Fix FIPS Endpoints in aws-us-gov.
+
+# v1.14.1 (2023-09-22)
+
+* No change notes available for this release.
+
+# v1.14.0 (2023-09-18)
+
+* **Announcement**: [BREAKFIX] Change in MaxResults datatype from value to pointer type in cognito-sync service.
+* **Feature**: Adds several endpoint ruleset changes across all models: smaller rulesets, removed non-unique regional endpoints, fixes for FIPS and DualStack endpoints, and making region not required in SDK::Endpoint. Additional breakfix to cognito-sync field.
+
+# v1.13.6 (2023-08-31)
+
+* No change notes available for this release.
+
+# v1.13.5 (2023-08-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.4 (2023-08-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.3 (2023-08-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.2 (2023-08-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.1 (2023-08-01)
+
+* No change notes available for this release.
+
+# v1.13.0 (2023-07-31)
+
+* **Feature**: Adds support for smithy-modeled endpoint resolution. A new rules-based endpoint resolution will be added to the SDK which will supersede and deprecate existing endpoint resolution. Specifically, EndpointResolver will be deprecated while BaseEndpoint and EndpointResolverV2 will take its place. For more information, please see the Endpoints section in our Developer Guide.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.14 (2023-07-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.13 (2023-07-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.12 (2023-06-15)
+
+* No change notes available for this release.
+
+# v1.12.11 (2023-06-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.10 (2023-05-04)
+
+* No change notes available for this release.
+
+# v1.12.9 (2023-04-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.8 (2023-04-10)
+
+* No change notes available for this release.
+
+# v1.12.7 (2023-04-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.6 (2023-03-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.5 (2023-03-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.4 (2023-02-22)
+
+* **Bug Fix**: Prevent nil pointer dereference when retrieving error codes.
+
+# v1.12.3 (2023-02-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.2 (2023-02-15)
+
+* **Announcement**: When receiving an error response in restJson-based services, an incorrect error type may have been returned based on the content of the response. This has been fixed via PR #2012 tracked in issue #1910.
+* **Bug Fix**: Correct error type parsing for restJson services.
+
+# v1.12.1 (2023-02-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.0 (2023-01-05)
+
+* **Feature**: Add `ErrorCodeOverride` field to all error structs (aws/smithy-go#401).
+
+# v1.11.28 (2022-12-20)
+
+* No change notes available for this release.
+
+# v1.11.27 (2022-12-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.26 (2022-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.25 (2022-10-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.24 (2022-10-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.23 (2022-09-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.22 (2022-09-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.21 (2022-09-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.20 (2022-08-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.19 (2022-08-30)
+
+* **Documentation**: Documentation updates for the AWS IAM Identity Center Portal CLI Reference.
+
+# v1.11.18 (2022-08-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.17 (2022-08-15)
+
+* **Documentation**: Documentation updates to reflect service rename - AWS IAM Identity Center (successor to AWS Single Sign-On)
+
+# v1.11.16 (2022-08-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.15 (2022-08-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.14 (2022-08-08)
+
+* **Documentation**: Documentation updates to reflect service rename - AWS IAM Identity Center (successor to AWS Single Sign-On)
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.13 (2022-08-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.12 (2022-07-11)
+
+* No change notes available for this release.
+
+# v1.11.11 (2022-07-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.10 (2022-06-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.9 (2022-06-16)
+
+* No change notes available for this release.
+
+# v1.11.8 (2022-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.7 (2022-05-26)
+
+* No change notes available for this release.
+
+# v1.11.6 (2022-05-25)
+
+* No change notes available for this release.
+
+# v1.11.5 (2022-05-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.4 (2022-04-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.3 (2022-03-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.2 (2022-03-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.1 (2022-03-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.0 (2022-03-08)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.0 (2022-02-24)
+
+* **Feature**: API client updated
+* **Feature**: Adds RetryMaxAttempts and RetryMode to API client Options. This allows the API clients' default Retryer to be configured from the shared configuration files or environment variables. Adds a new retry mode, `Adaptive`. `Adaptive` retry mode is an experimental mode that adds client-side rate limiting when throttle responses are received from an API. See [retry.AdaptiveMode](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws/retry#AdaptiveMode) for more details and configuration options.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.0 (2022-01-14)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Documentation**: Updated API models
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.0 (2022-01-07)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.7.0 (2021-12-21)
+
+* **Feature**: API Paginators now support specifying the initial starting token, and support stopping on empty string tokens.
+
+# v1.6.2 (2021-12-02)
+
+* **Bug Fix**: Fixes a bug that prevented aws.EndpointResolverWithOptions from being used by the service client. ([#1514](https://github.com/aws/aws-sdk-go-v2/pull/1514))
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.1 (2021-11-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.0 (2021-11-06)
+
+* **Feature**: The SDK now supports configuration of FIPS and DualStack endpoints using environment variables, shared configuration, or programmatically.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Feature**: Updated service to latest API model.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.5.0 (2021-10-21)
+
+* **Feature**: Updated to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.2 (2021-10-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.1 (2021-09-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.0 (2021-08-27)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.3 (2021-08-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.2 (2021-08-04)
+
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.1 (2021-07-15)
+
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.0 (2021-06-25)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.1 (2021-05-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.0 (2021-05-14)
+
+* **Feature**: Constant has been added to modules to enable runtime version inspection for reporting.
+* **Dependency Update**: Updated to the latest SDK module versions
+

vendor/github.com/aws/aws-sdk-go-v2/service/sso/LICENSE.txt

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/service/sso/api_client.go

@@ -0,0 +1,627 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sso
+
+import (
+	"context"
+	"fmt"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/aws/defaults"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/aws/retry"
+	"github.com/aws/aws-sdk-go-v2/aws/signer/v4"
+	awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
+	internalauth "github.com/aws/aws-sdk-go-v2/internal/auth"
+	internalauthsmithy "github.com/aws/aws-sdk-go-v2/internal/auth/smithy"
+	internalConfig "github.com/aws/aws-sdk-go-v2/internal/configsources"
+	internalmiddleware "github.com/aws/aws-sdk-go-v2/internal/middleware"
+	smithy "github.com/aws/smithy-go"
+	smithyauth "github.com/aws/smithy-go/auth"
+	smithydocument "github.com/aws/smithy-go/document"
+	"github.com/aws/smithy-go/logging"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"net"
+	"net/http"
+	"sync/atomic"
+	"time"
+)
+
+const ServiceID = "SSO"
+const ServiceAPIVersion = "2019-06-10"
+
+// Client provides the API client to make operations call for AWS Single Sign-On.
+type Client struct {
+	options Options
+
+	// Difference between the time reported by the server and the client
+	timeOffset *atomic.Int64
+}
+
+// New returns an initialized Client based on the functional options. Provide
+// additional functional options to further configure the behavior of the client,
+// such as changing the client's endpoint or adding custom middleware behavior.
+func New(options Options, optFns ...func(*Options)) *Client {
+	options = options.Copy()
+
+	resolveDefaultLogger(&options)
+
+	setResolvedDefaultsMode(&options)
+
+	resolveRetryer(&options)
+
+	resolveHTTPClient(&options)
+
+	resolveHTTPSignerV4(&options)
+
+	resolveEndpointResolverV2(&options)
+
+	resolveAuthSchemeResolver(&options)
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	finalizeRetryMaxAttempts(&options)
+
+	ignoreAnonymousAuth(&options)
+
+	wrapWithAnonymousAuth(&options)
+
+	resolveAuthSchemes(&options)
+
+	client := &Client{
+		options: options,
+	}
+
+	initializeTimeOffsetResolver(client)
+
+	return client
+}
+
+// Options returns a copy of the client configuration.
+//
+// Callers SHOULD NOT perform mutations on any inner structures within client
+// config. Config overrides should instead be made on a per-operation basis through
+// functional options.
+func (c *Client) Options() Options {
+	return c.options.Copy()
+}
+
+func (c *Client) invokeOperation(ctx context.Context, opID string, params interface{}, optFns []func(*Options), stackFns ...func(*middleware.Stack, Options) error) (result interface{}, metadata middleware.Metadata, err error) {
+	ctx = middleware.ClearStackValues(ctx)
+	stack := middleware.NewStack(opID, smithyhttp.NewStackRequest)
+	options := c.options.Copy()
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	finalizeOperationRetryMaxAttempts(&options, *c)
+
+	finalizeClientEndpointResolverOptions(&options)
+
+	for _, fn := range stackFns {
+		if err := fn(stack, options); err != nil {
+			return nil, metadata, err
+		}
+	}
+
+	for _, fn := range options.APIOptions {
+		if err := fn(stack); err != nil {
+			return nil, metadata, err
+		}
+	}
+
+	handler := middleware.DecorateHandler(smithyhttp.NewClientHandler(options.HTTPClient), stack)
+	result, metadata, err = handler.Handle(ctx, params)
+	if err != nil {
+		err = &smithy.OperationError{
+			ServiceID:     ServiceID,
+			OperationName: opID,
+			Err:           err,
+		}
+	}
+	return result, metadata, err
+}
+
+type operationInputKey struct{}
+
+func setOperationInput(ctx context.Context, input interface{}) context.Context {
+	return middleware.WithStackValue(ctx, operationInputKey{}, input)
+}
+
+func getOperationInput(ctx context.Context) interface{} {
+	return middleware.GetStackValue(ctx, operationInputKey{})
+}
+
+type setOperationInputMiddleware struct {
+}
+
+func (*setOperationInputMiddleware) ID() string {
+	return "setOperationInput"
+}
+
+func (m *setOperationInputMiddleware) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	ctx = setOperationInput(ctx, in.Parameters)
+	return next.HandleSerialize(ctx, in)
+}
+
+func addProtocolFinalizerMiddlewares(stack *middleware.Stack, options Options, operation string) error {
+	if err := stack.Finalize.Add(&resolveAuthSchemeMiddleware{operation: operation, options: options}, middleware.Before); err != nil {
+		return fmt.Errorf("add ResolveAuthScheme: %w", err)
+	}
+	if err := stack.Finalize.Insert(&getIdentityMiddleware{options: options}, "ResolveAuthScheme", middleware.After); err != nil {
+		return fmt.Errorf("add GetIdentity: %v", err)
+	}
+	if err := stack.Finalize.Insert(&resolveEndpointV2Middleware{options: options}, "GetIdentity", middleware.After); err != nil {
+		return fmt.Errorf("add ResolveEndpointV2: %v", err)
+	}
+	if err := stack.Finalize.Insert(&signRequestMiddleware{}, "ResolveEndpointV2", middleware.After); err != nil {
+		return fmt.Errorf("add Signing: %w", err)
+	}
+	return nil
+}
+func resolveAuthSchemeResolver(options *Options) {
+	if options.AuthSchemeResolver == nil {
+		options.AuthSchemeResolver = &defaultAuthSchemeResolver{}
+	}
+}
+
+func resolveAuthSchemes(options *Options) {
+	if options.AuthSchemes == nil {
+		options.AuthSchemes = []smithyhttp.AuthScheme{
+			internalauth.NewHTTPAuthScheme("aws.auth#sigv4", &internalauthsmithy.V4SignerAdapter{
+				Signer:     options.HTTPSignerV4,
+				Logger:     options.Logger,
+				LogSigning: options.ClientLogMode.IsSigning(),
+			}),
+		}
+	}
+}
+
+type noSmithyDocumentSerde = smithydocument.NoSerde
+
+type legacyEndpointContextSetter struct {
+	LegacyResolver EndpointResolver
+}
+
+func (*legacyEndpointContextSetter) ID() string {
+	return "legacyEndpointContextSetter"
+}
+
+func (m *legacyEndpointContextSetter) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	if m.LegacyResolver != nil {
+		ctx = awsmiddleware.SetRequiresLegacyEndpoints(ctx, true)
+	}
+
+	return next.HandleInitialize(ctx, in)
+
+}
+func addlegacyEndpointContextSetter(stack *middleware.Stack, o Options) error {
+	return stack.Initialize.Add(&legacyEndpointContextSetter{
+		LegacyResolver: o.EndpointResolver,
+	}, middleware.Before)
+}
+
+func resolveDefaultLogger(o *Options) {
+	if o.Logger != nil {
+		return
+	}
+	o.Logger = logging.Nop{}
+}
+
+func addSetLoggerMiddleware(stack *middleware.Stack, o Options) error {
+	return middleware.AddSetLoggerMiddleware(stack, o.Logger)
+}
+
+func setResolvedDefaultsMode(o *Options) {
+	if len(o.resolvedDefaultsMode) > 0 {
+		return
+	}
+
+	var mode aws.DefaultsMode
+	mode.SetFromString(string(o.DefaultsMode))
+
+	if mode == aws.DefaultsModeAuto {
+		mode = defaults.ResolveDefaultsModeAuto(o.Region, o.RuntimeEnvironment)
+	}
+
+	o.resolvedDefaultsMode = mode
+}
+
+// NewFromConfig returns a new client from the provided config.
+func NewFromConfig(cfg aws.Config, optFns ...func(*Options)) *Client {
+	opts := Options{
+		Region:                cfg.Region,
+		DefaultsMode:          cfg.DefaultsMode,
+		RuntimeEnvironment:    cfg.RuntimeEnvironment,
+		HTTPClient:            cfg.HTTPClient,
+		Credentials:           cfg.Credentials,
+		APIOptions:            cfg.APIOptions,
+		Logger:                cfg.Logger,
+		ClientLogMode:         cfg.ClientLogMode,
+		AppID:                 cfg.AppID,
+		AccountIDEndpointMode: cfg.AccountIDEndpointMode,
+	}
+	resolveAWSRetryerProvider(cfg, &opts)
+	resolveAWSRetryMaxAttempts(cfg, &opts)
+	resolveAWSRetryMode(cfg, &opts)
+	resolveAWSEndpointResolver(cfg, &opts)
+	resolveUseDualStackEndpoint(cfg, &opts)
+	resolveUseFIPSEndpoint(cfg, &opts)
+	resolveBaseEndpoint(cfg, &opts)
+	return New(opts, optFns...)
+}
+
+func resolveHTTPClient(o *Options) {
+	var buildable *awshttp.BuildableClient
+
+	if o.HTTPClient != nil {
+		var ok bool
+		buildable, ok = o.HTTPClient.(*awshttp.BuildableClient)
+		if !ok {
+			return
+		}
+	} else {
+		buildable = awshttp.NewBuildableClient()
+	}
+
+	modeConfig, err := defaults.GetModeConfiguration(o.resolvedDefaultsMode)
+	if err == nil {
+		buildable = buildable.WithDialerOptions(func(dialer *net.Dialer) {
+			if dialerTimeout, ok := modeConfig.GetConnectTimeout(); ok {
+				dialer.Timeout = dialerTimeout
+			}
+		})
+
+		buildable = buildable.WithTransportOptions(func(transport *http.Transport) {
+			if tlsHandshakeTimeout, ok := modeConfig.GetTLSNegotiationTimeout(); ok {
+				transport.TLSHandshakeTimeout = tlsHandshakeTimeout
+			}
+		})
+	}
+
+	o.HTTPClient = buildable
+}
+
+func resolveRetryer(o *Options) {
+	if o.Retryer != nil {
+		return
+	}
+
+	if len(o.RetryMode) == 0 {
+		modeConfig, err := defaults.GetModeConfiguration(o.resolvedDefaultsMode)
+		if err == nil {
+			o.RetryMode = modeConfig.RetryMode
+		}
+	}
+	if len(o.RetryMode) == 0 {
+		o.RetryMode = aws.RetryModeStandard
+	}
+
+	var standardOptions []func(*retry.StandardOptions)
+	if v := o.RetryMaxAttempts; v != 0 {
+		standardOptions = append(standardOptions, func(so *retry.StandardOptions) {
+			so.MaxAttempts = v
+		})
+	}
+
+	switch o.RetryMode {
+	case aws.RetryModeAdaptive:
+		var adaptiveOptions []func(*retry.AdaptiveModeOptions)
+		if len(standardOptions) != 0 {
+			adaptiveOptions = append(adaptiveOptions, func(ao *retry.AdaptiveModeOptions) {
+				ao.StandardOptions = append(ao.StandardOptions, standardOptions...)
+			})
+		}
+		o.Retryer = retry.NewAdaptiveMode(adaptiveOptions...)
+
+	default:
+		o.Retryer = retry.NewStandard(standardOptions...)
+	}
+}
+
+func resolveAWSRetryerProvider(cfg aws.Config, o *Options) {
+	if cfg.Retryer == nil {
+		return
+	}
+	o.Retryer = cfg.Retryer()
+}
+
+func resolveAWSRetryMode(cfg aws.Config, o *Options) {
+	if len(cfg.RetryMode) == 0 {
+		return
+	}
+	o.RetryMode = cfg.RetryMode
+}
+func resolveAWSRetryMaxAttempts(cfg aws.Config, o *Options) {
+	if cfg.RetryMaxAttempts == 0 {
+		return
+	}
+	o.RetryMaxAttempts = cfg.RetryMaxAttempts
+}
+
+func finalizeRetryMaxAttempts(o *Options) {
+	if o.RetryMaxAttempts == 0 {
+		return
+	}
+
+	o.Retryer = retry.AddWithMaxAttempts(o.Retryer, o.RetryMaxAttempts)
+}
+
+func finalizeOperationRetryMaxAttempts(o *Options, client Client) {
+	if v := o.RetryMaxAttempts; v == 0 || v == client.options.RetryMaxAttempts {
+		return
+	}
+
+	o.Retryer = retry.AddWithMaxAttempts(o.Retryer, o.RetryMaxAttempts)
+}
+
+func resolveAWSEndpointResolver(cfg aws.Config, o *Options) {
+	if cfg.EndpointResolver == nil && cfg.EndpointResolverWithOptions == nil {
+		return
+	}
+	o.EndpointResolver = withEndpointResolver(cfg.EndpointResolver, cfg.EndpointResolverWithOptions)
+}
+
+func addClientUserAgent(stack *middleware.Stack, options Options) error {
+	ua, err := getOrAddRequestUserAgent(stack)
+	if err != nil {
+		return err
+	}
+
+	ua.AddSDKAgentKeyValue(awsmiddleware.APIMetadata, "sso", goModuleVersion)
+	if len(options.AppID) > 0 {
+		ua.AddSDKAgentKey(awsmiddleware.ApplicationIdentifier, options.AppID)
+	}
+
+	return nil
+}
+
+func getOrAddRequestUserAgent(stack *middleware.Stack) (*awsmiddleware.RequestUserAgent, error) {
+	id := (*awsmiddleware.RequestUserAgent)(nil).ID()
+	mw, ok := stack.Build.Get(id)
+	if !ok {
+		mw = awsmiddleware.NewRequestUserAgent()
+		if err := stack.Build.Add(mw, middleware.After); err != nil {
+			return nil, err
+		}
+	}
+
+	ua, ok := mw.(*awsmiddleware.RequestUserAgent)
+	if !ok {
+		return nil, fmt.Errorf("%T for %s middleware did not match expected type", mw, id)
+	}
+
+	return ua, nil
+}
+
+type HTTPSignerV4 interface {
+	SignHTTP(ctx context.Context, credentials aws.Credentials, r *http.Request, payloadHash string, service string, region string, signingTime time.Time, optFns ...func(*v4.SignerOptions)) error
+}
+
+func resolveHTTPSignerV4(o *Options) {
+	if o.HTTPSignerV4 != nil {
+		return
+	}
+	o.HTTPSignerV4 = newDefaultV4Signer(*o)
+}
+
+func newDefaultV4Signer(o Options) *v4.Signer {
+	return v4.NewSigner(func(so *v4.SignerOptions) {
+		so.Logger = o.Logger
+		so.LogSigning = o.ClientLogMode.IsSigning()
+	})
+}
+
+func addClientRequestID(stack *middleware.Stack) error {
+	return stack.Build.Add(&awsmiddleware.ClientRequestID{}, middleware.After)
+}
+
+func addComputeContentLength(stack *middleware.Stack) error {
+	return stack.Build.Add(&smithyhttp.ComputeContentLength{}, middleware.After)
+}
+
+func addRawResponseToMetadata(stack *middleware.Stack) error {
+	return stack.Deserialize.Add(&awsmiddleware.AddRawResponse{}, middleware.Before)
+}
+
+func addRecordResponseTiming(stack *middleware.Stack) error {
+	return stack.Deserialize.Add(&awsmiddleware.RecordResponseTiming{}, middleware.After)
+}
+func addStreamingEventsPayload(stack *middleware.Stack) error {
+	return stack.Finalize.Add(&v4.StreamingEventsPayload{}, middleware.Before)
+}
+
+func addUnsignedPayload(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&v4.UnsignedPayload{}, "ResolveEndpointV2", middleware.After)
+}
+
+func addComputePayloadSHA256(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&v4.ComputePayloadSHA256{}, "ResolveEndpointV2", middleware.After)
+}
+
+func addContentSHA256Header(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&v4.ContentSHA256Header{}, (*v4.ComputePayloadSHA256)(nil).ID(), middleware.After)
+}
+
+func addIsWaiterUserAgent(o *Options) {
+	o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+		ua, err := getOrAddRequestUserAgent(stack)
+		if err != nil {
+			return err
+		}
+
+		ua.AddUserAgentFeature(awsmiddleware.UserAgentFeatureWaiter)
+		return nil
+	})
+}
+
+func addIsPaginatorUserAgent(o *Options) {
+	o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+		ua, err := getOrAddRequestUserAgent(stack)
+		if err != nil {
+			return err
+		}
+
+		ua.AddUserAgentFeature(awsmiddleware.UserAgentFeaturePaginator)
+		return nil
+	})
+}
+
+func addRetry(stack *middleware.Stack, o Options) error {
+	attempt := retry.NewAttemptMiddleware(o.Retryer, smithyhttp.RequestCloner, func(m *retry.Attempt) {
+		m.LogAttempts = o.ClientLogMode.IsRetries()
+	})
+	if err := stack.Finalize.Insert(attempt, "Signing", middleware.Before); err != nil {
+		return err
+	}
+	if err := stack.Finalize.Insert(&retry.MetricsHeader{}, attempt.ID(), middleware.After); err != nil {
+		return err
+	}
+	return nil
+}
+
+// resolves dual-stack endpoint configuration
+func resolveUseDualStackEndpoint(cfg aws.Config, o *Options) error {
+	if len(cfg.ConfigSources) == 0 {
+		return nil
+	}
+	value, found, err := internalConfig.ResolveUseDualStackEndpoint(context.Background(), cfg.ConfigSources)
+	if err != nil {
+		return err
+	}
+	if found {
+		o.EndpointOptions.UseDualStackEndpoint = value
+	}
+	return nil
+}
+
+// resolves FIPS endpoint configuration
+func resolveUseFIPSEndpoint(cfg aws.Config, o *Options) error {
+	if len(cfg.ConfigSources) == 0 {
+		return nil
+	}
+	value, found, err := internalConfig.ResolveUseFIPSEndpoint(context.Background(), cfg.ConfigSources)
+	if err != nil {
+		return err
+	}
+	if found {
+		o.EndpointOptions.UseFIPSEndpoint = value
+	}
+	return nil
+}
+
+func resolveAccountID(identity smithyauth.Identity, mode aws.AccountIDEndpointMode) *string {
+	if mode == aws.AccountIDEndpointModeDisabled {
+		return nil
+	}
+
+	if ca, ok := identity.(*internalauthsmithy.CredentialsAdapter); ok && ca.Credentials.AccountID != "" {
+		return aws.String(ca.Credentials.AccountID)
+	}
+
+	return nil
+}
+
+func addTimeOffsetBuild(stack *middleware.Stack, c *Client) error {
+	mw := internalmiddleware.AddTimeOffsetMiddleware{Offset: c.timeOffset}
+	if err := stack.Build.Add(&mw, middleware.After); err != nil {
+		return err
+	}
+	return stack.Deserialize.Insert(&mw, "RecordResponseTiming", middleware.Before)
+}
+func initializeTimeOffsetResolver(c *Client) {
+	c.timeOffset = new(atomic.Int64)
+}
+
+func checkAccountID(identity smithyauth.Identity, mode aws.AccountIDEndpointMode) error {
+	switch mode {
+	case aws.AccountIDEndpointModeUnset:
+	case aws.AccountIDEndpointModePreferred:
+	case aws.AccountIDEndpointModeDisabled:
+	case aws.AccountIDEndpointModeRequired:
+		if ca, ok := identity.(*internalauthsmithy.CredentialsAdapter); !ok {
+			return fmt.Errorf("accountID is required but not set")
+		} else if ca.Credentials.AccountID == "" {
+			return fmt.Errorf("accountID is required but not set")
+		}
+	// default check in case invalid mode is configured through request config
+	default:
+		return fmt.Errorf("invalid accountID endpoint mode %s, must be preferred/required/disabled", mode)
+	}
+
+	return nil
+}
+
+func addUserAgentRetryMode(stack *middleware.Stack, options Options) error {
+	ua, err := getOrAddRequestUserAgent(stack)
+	if err != nil {
+		return err
+	}
+
+	switch options.Retryer.(type) {
+	case *retry.Standard:
+		ua.AddUserAgentFeature(awsmiddleware.UserAgentFeatureRetryModeStandard)
+	case *retry.AdaptiveMode:
+		ua.AddUserAgentFeature(awsmiddleware.UserAgentFeatureRetryModeAdaptive)
+	}
+	return nil
+}
+
+func addRecursionDetection(stack *middleware.Stack) error {
+	return stack.Build.Add(&awsmiddleware.RecursionDetection{}, middleware.After)
+}
+
+func addRequestIDRetrieverMiddleware(stack *middleware.Stack) error {
+	return stack.Deserialize.Insert(&awsmiddleware.RequestIDRetriever{}, "OperationDeserializer", middleware.Before)
+
+}
+
+func addResponseErrorMiddleware(stack *middleware.Stack) error {
+	return stack.Deserialize.Insert(&awshttp.ResponseErrorWrapper{}, "RequestIDRetriever", middleware.Before)
+
+}
+
+func addRequestResponseLogging(stack *middleware.Stack, o Options) error {
+	return stack.Deserialize.Add(&smithyhttp.RequestResponseLogger{
+		LogRequest:          o.ClientLogMode.IsRequest(),
+		LogRequestWithBody:  o.ClientLogMode.IsRequestWithBody(),
+		LogResponse:         o.ClientLogMode.IsResponse(),
+		LogResponseWithBody: o.ClientLogMode.IsResponseWithBody(),
+	}, middleware.After)
+}
+
+type disableHTTPSMiddleware struct {
+	DisableHTTPS bool
+}
+
+func (*disableHTTPSMiddleware) ID() string {
+	return "disableHTTPS"
+}
+
+func (m *disableHTTPSMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	if m.DisableHTTPS && !smithyhttp.GetHostnameImmutable(ctx) {
+		req.URL.Scheme = "http"
+	}
+
+	return next.HandleFinalize(ctx, in)
+}
+
+func addDisableHTTPSMiddleware(stack *middleware.Stack, o Options) error {
+	return stack.Finalize.Insert(&disableHTTPSMiddleware{
+		DisableHTTPS: o.EndpointOptions.DisableHTTPS,
+	}, "ResolveEndpointV2", middleware.After)
+}
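Editor's note on the client plumbing above: `invokeOperation` copies the client's stored `Options`, applies the caller's per-operation functional options to that copy, then finalizes retry max attempts, so overrides never leak back into the client (which is why `Options()` warns against mutating inner structures). A minimal, self-contained sketch of that copy-then-override pattern, using simplified stand-in types (the field names here are illustrative, not the SDK's full option set):

```go
package main

import "fmt"

// Options is a simplified stand-in for the generated client's Options
// struct (illustrative fields only, not the SDK's full set).
type Options struct {
	Region           string
	RetryMaxAttempts int
}

// Copy returns a value copy, so per-operation overrides never mutate
// the configuration stored on the client.
func (o Options) Copy() Options { return o }

// Client holds options that are immutable by convention, as the
// generated Client does.
type Client struct{ options Options }

// invoke mirrors invokeOperation's option handling: copy the client
// options, then apply the caller's functional options to the copy only.
func (c *Client) invoke(optFns ...func(*Options)) Options {
	opts := c.options.Copy()
	for _, fn := range optFns {
		fn(&opts)
	}
	return opts
}

func main() {
	c := &Client{options: Options{Region: "us-east-1", RetryMaxAttempts: 3}}
	// Per-call override: bump retries for this one operation only.
	eff := c.invoke(func(o *Options) { o.RetryMaxAttempts = 5 })
	fmt.Println(eff.RetryMaxAttempts, c.options.RetryMaxAttempts)
}
```

This is the same design that lets callers pass `func(*Options)` values to individual operations like `GetRoleCredentials` without affecting subsequent calls on the shared client.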

vendor/github.com/aws/aws-sdk-go-v2/service/sso/api_op_GetRoleCredentials.go

@@ -0,0 +1,153 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sso
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/service/sso/types"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Returns the STS short-term credentials for a given role name that is assigned
+// to the user.
+func (c *Client) GetRoleCredentials(ctx context.Context, params *GetRoleCredentialsInput, optFns ...func(*Options)) (*GetRoleCredentialsOutput, error) {
+	if params == nil {
+		params = &GetRoleCredentialsInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "GetRoleCredentials", params, optFns, c.addOperationGetRoleCredentialsMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*GetRoleCredentialsOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type GetRoleCredentialsInput struct {
+
+	// The token issued by the CreateToken API call. For more information, see [CreateToken] in the
+	// IAM Identity Center OIDC API Reference Guide.
+	//
+	// [CreateToken]: https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/API_CreateToken.html
+	//
+	// This member is required.
+	AccessToken *string
+
+	// The identifier for the AWS account that is assigned to the user.
+	//
+	// This member is required.
+	AccountId *string
+
+	// The friendly name of the role that is assigned to the user.
+	//
+	// This member is required.
+	RoleName *string
+
+	noSmithyDocumentSerde
+}
+
+type GetRoleCredentialsOutput struct {
+
+	// The credentials for the role that is assigned to the user.
+	RoleCredentials *types.RoleCredentials
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationGetRoleCredentialsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsRestjson1_serializeOpGetRoleCredentials{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsRestjson1_deserializeOpGetRoleCredentials{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "GetRoleCredentials"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpGetRoleCredentialsValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opGetRoleCredentials(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opGetRoleCredentials(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "GetRoleCredentials",
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sso/api_op_ListAccountRoles.go

@@ -0,0 +1,251 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sso
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/service/sso/types"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Lists all roles that are assigned to the user for a given AWS account.
+func (c *Client) ListAccountRoles(ctx context.Context, params *ListAccountRolesInput, optFns ...func(*Options)) (*ListAccountRolesOutput, error) {
+	if params == nil {
+		params = &ListAccountRolesInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "ListAccountRoles", params, optFns, c.addOperationListAccountRolesMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*ListAccountRolesOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type ListAccountRolesInput struct {
+
+	// The token issued by the CreateToken API call. For more information, see [CreateToken] in the
+	// IAM Identity Center OIDC API Reference Guide.
+	//
+	// [CreateToken]: https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/API_CreateToken.html
+	//
+	// This member is required.
+	AccessToken *string
+
+	// The identifier for the AWS account that is assigned to the user.
+	//
+	// This member is required.
+	AccountId *string
+
+	// The number of items that clients can request per page.
+	MaxResults *int32
+
+	// The page token from the previous response output when you request subsequent
+	// pages.
+	NextToken *string
+
+	noSmithyDocumentSerde
+}
+
+type ListAccountRolesOutput struct {
+
+	// The page token that the client uses to retrieve the next page of roles.
+	NextToken *string
+
+	// A paginated response with the list of roles and the next token if more results
+	// are available.
+	RoleList []types.RoleInfo
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationListAccountRolesMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsRestjson1_serializeOpListAccountRoles{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsRestjson1_deserializeOpListAccountRoles{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "ListAccountRoles"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpListAccountRolesValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opListAccountRoles(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+// ListAccountRolesPaginatorOptions is the paginator options for ListAccountRoles
+type ListAccountRolesPaginatorOptions struct {
+	// The number of items that clients can request per page.
+	Limit int32
+
+	// Set to true if pagination should stop if the service returns a pagination token
+	// that matches the most recent token provided to the service.
+	StopOnDuplicateToken bool
+}
+
+// ListAccountRolesPaginator is a paginator for ListAccountRoles
+type ListAccountRolesPaginator struct {
+	options   ListAccountRolesPaginatorOptions
+	client    ListAccountRolesAPIClient
+	params    *ListAccountRolesInput
+	nextToken *string
+	firstPage bool
+}
+
+// NewListAccountRolesPaginator returns a new ListAccountRolesPaginator
+func NewListAccountRolesPaginator(client ListAccountRolesAPIClient, params *ListAccountRolesInput, optFns ...func(*ListAccountRolesPaginatorOptions)) *ListAccountRolesPaginator {
+	if params == nil {
+		params = &ListAccountRolesInput{}
+	}
+
+	options := ListAccountRolesPaginatorOptions{}
+	if params.MaxResults != nil {
+		options.Limit = *params.MaxResults
+	}
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	return &ListAccountRolesPaginator{
+		options:   options,
+		client:    client,
+		params:    params,
+		firstPage: true,
+		nextToken: params.NextToken,
+	}
+}
+
+// HasMorePages returns a boolean indicating whether more pages are available
+func (p *ListAccountRolesPaginator) HasMorePages() bool {
+	return p.firstPage || (p.nextToken != nil && len(*p.nextToken) != 0)
+}
+
+// NextPage retrieves the next ListAccountRoles page.
+func (p *ListAccountRolesPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListAccountRolesOutput, error) {
+	if !p.HasMorePages() {
+		return nil, fmt.Errorf("no more pages available")
+	}
+
+	params := *p.params
+	params.NextToken = p.nextToken
+
+	var limit *int32
+	if p.options.Limit > 0 {
+		limit = &p.options.Limit
+	}
+	params.MaxResults = limit
+
+	optFns = append([]func(*Options){
+		addIsPaginatorUserAgent,
+	}, optFns...)
+	result, err := p.client.ListAccountRoles(ctx, &params, optFns...)
+	if err != nil {
+		return nil, err
+	}
+	p.firstPage = false
+
+	prevToken := p.nextToken
+	p.nextToken = result.NextToken
+
+	if p.options.StopOnDuplicateToken &&
+		prevToken != nil &&
+		p.nextToken != nil &&
+		*prevToken == *p.nextToken {
+		p.nextToken = nil
+	}
+
+	return result, nil
+}
+
+// ListAccountRolesAPIClient is a client that implements the ListAccountRoles
+// operation.
+type ListAccountRolesAPIClient interface {
+	ListAccountRoles(context.Context, *ListAccountRolesInput, ...func(*Options)) (*ListAccountRolesOutput, error)
+}
+
+var _ ListAccountRolesAPIClient = (*Client)(nil)
+
+func newServiceMetadataMiddleware_opListAccountRoles(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "ListAccountRoles",
+	}
+}
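Editor's note on the paginator above: `HasMorePages` treats the first page as always available and stops once `NextToken` is nil or empty, and `NextPage` threads each response's token into the next request. A self-contained sketch of that token-pagination loop against a stub client (all names here are illustrative stand-ins, not the SDK's types):

```go
package main

import "fmt"

// page is a stand-in for an operation output carrying a pagination token.
type page struct {
	Items     []string
	NextToken *string
}

// stubClient fakes a paged API: each call returns the next stored page,
// mirroring ListAccountRoles' NextToken contract.
type stubClient struct {
	pages []page
	calls int
}

func (c *stubClient) List(token *string) page {
	p := c.pages[c.calls]
	c.calls++
	return p
}

// paginator mirrors the generated paginator's semantics: the first page
// is always available; iteration ends when NextToken is nil or empty.
type paginator struct {
	client    *stubClient
	nextToken *string
	firstPage bool
}

func newPaginator(c *stubClient) *paginator {
	return &paginator{client: c, firstPage: true}
}

func (p *paginator) HasMorePages() bool {
	return p.firstPage || (p.nextToken != nil && len(*p.nextToken) != 0)
}

func (p *paginator) NextPage() page {
	out := p.client.List(p.nextToken)
	p.firstPage = false
	p.nextToken = out.NextToken // thread the token into the next request
	return out
}

func main() {
	t := "page2"
	c := &stubClient{pages: []page{
		{Items: []string{"RoleA", "RoleB"}, NextToken: &t},
		{Items: []string{"RoleC"}, NextToken: nil}, // nil token ends iteration
	}}
	p := newPaginator(c)
	var all []string
	for p.HasMorePages() {
		all = append(all, p.NextPage().Items...)
	}
	fmt.Println(all) // [RoleA RoleB RoleC]
}
```

The real `ListAccountRolesPaginator` adds two refinements over this sketch: an optional `StopOnDuplicateToken` guard that ends iteration if the service echoes back the previous token, and a paginator user-agent feature tag on each request.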

vendor/github.com/aws/aws-sdk-go-v2/service/sso/api_op_ListAccounts.go

@@ -0,0 +1,249 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sso
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/service/sso/types"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Lists all AWS accounts assigned to the user. These AWS accounts are assigned by
+// the administrator of the account. For more information, see [Assign User Access]in the IAM Identity
+// Center User Guide. This operation returns a paginated response.
+//
+// [Assign User Access]: https://docs.aws.amazon.com/singlesignon/latest/userguide/useraccess.html#assignusers
+func (c *Client) ListAccounts(ctx context.Context, params *ListAccountsInput, optFns ...func(*Options)) (*ListAccountsOutput, error) {
+	if params == nil {
+		params = &ListAccountsInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "ListAccounts", params, optFns, c.addOperationListAccountsMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*ListAccountsOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type ListAccountsInput struct {
+
+	// The token issued by the CreateToken API call. For more information, see [CreateToken] in the
+	// IAM Identity Center OIDC API Reference Guide.
+	//
+	// [CreateToken]: https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/API_CreateToken.html
+	//
+	// This member is required.
+	AccessToken *string
+
+	// This is the number of items clients can request per page.
+	MaxResults *int32
+
+	// (Optional) When requesting subsequent pages, this is the page token from the
+	// previous response output.
+	NextToken *string
+
+	noSmithyDocumentSerde
+}
+
+type ListAccountsOutput struct {
+
+	// A paginated response with the list of account information and the next token if
+	// more results are available.
+	AccountList []types.AccountInfo
+
+	// The page token that the client uses to retrieve the next page of accounts.
+	NextToken *string
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationListAccountsMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsRestjson1_serializeOpListAccounts{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsRestjson1_deserializeOpListAccounts{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "ListAccounts"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpListAccountsValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opListAccounts(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+// ListAccountsPaginatorOptions is the paginator options for ListAccounts
+type ListAccountsPaginatorOptions struct {
+	// This is the number of items clients can request per page.
+	Limit int32
+
+	// Set to true if pagination should stop if the service returns a pagination token
+	// that matches the most recent token provided to the service.
+	StopOnDuplicateToken bool
+}
+
+// ListAccountsPaginator is a paginator for ListAccounts
+type ListAccountsPaginator struct {
+	options   ListAccountsPaginatorOptions
+	client    ListAccountsAPIClient
+	params    *ListAccountsInput
+	nextToken *string
+	firstPage bool
+}
+
+// NewListAccountsPaginator returns a new ListAccountsPaginator
+func NewListAccountsPaginator(client ListAccountsAPIClient, params *ListAccountsInput, optFns ...func(*ListAccountsPaginatorOptions)) *ListAccountsPaginator {
+	if params == nil {
+		params = &ListAccountsInput{}
+	}
+
+	options := ListAccountsPaginatorOptions{}
+	if params.MaxResults != nil {
+		options.Limit = *params.MaxResults
+	}
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	return &ListAccountsPaginator{
+		options:   options,
+		client:    client,
+		params:    params,
+		firstPage: true,
+		nextToken: params.NextToken,
+	}
+}
+
+// HasMorePages returns a boolean indicating whether more pages are available
+func (p *ListAccountsPaginator) HasMorePages() bool {
+	return p.firstPage || (p.nextToken != nil && len(*p.nextToken) != 0)
+}
+
+// NextPage retrieves the next ListAccounts page.
+func (p *ListAccountsPaginator) NextPage(ctx context.Context, optFns ...func(*Options)) (*ListAccountsOutput, error) {
+	if !p.HasMorePages() {
+		return nil, fmt.Errorf("no more pages available")
+	}
+
+	params := *p.params
+	params.NextToken = p.nextToken
+
+	var limit *int32
+	if p.options.Limit > 0 {
+		limit = &p.options.Limit
+	}
+	params.MaxResults = limit
+
+	optFns = append([]func(*Options){
+		addIsPaginatorUserAgent,
+	}, optFns...)
+	result, err := p.client.ListAccounts(ctx, &params, optFns...)
+	if err != nil {
+		return nil, err
+	}
+	p.firstPage = false
+
+	prevToken := p.nextToken
+	p.nextToken = result.NextToken
+
+	if p.options.StopOnDuplicateToken &&
+		prevToken != nil &&
+		p.nextToken != nil &&
+		*prevToken == *p.nextToken {
+		p.nextToken = nil
+	}
+
+	return result, nil
+}
+
+// ListAccountsAPIClient is a client that implements the ListAccounts operation.
+type ListAccountsAPIClient interface {
+	ListAccounts(context.Context, *ListAccountsInput, ...func(*Options)) (*ListAccountsOutput, error)
+}
+
+var _ ListAccountsAPIClient = (*Client)(nil)
+
+func newServiceMetadataMiddleware_opListAccounts(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "ListAccounts",
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sso/api_op_Logout.go 🔗

@@ -0,0 +1,152 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sso
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Removes the locally stored SSO tokens from the client-side cache and sends an
+// API call to the IAM Identity Center service to invalidate the corresponding
+// server-side IAM Identity Center sign in session.
+//
+// If a user uses IAM Identity Center to access the AWS CLI, the user’s IAM
+// Identity Center sign in session is used to obtain an IAM session, as specified
+// in the corresponding IAM Identity Center permission set. More specifically, IAM
+// Identity Center assumes an IAM role in the target account on behalf of the user,
+// and the corresponding temporary AWS credentials are returned to the client.
+//
+// After user logout, any existing IAM role sessions that were created by using
+// IAM Identity Center permission sets continue based on the duration configured in
+// the permission set. For more information, see [User authentications]in the IAM Identity Center User
+// Guide.
+//
+// [User authentications]: https://docs.aws.amazon.com/singlesignon/latest/userguide/authconcept.html
+func (c *Client) Logout(ctx context.Context, params *LogoutInput, optFns ...func(*Options)) (*LogoutOutput, error) {
+	if params == nil {
+		params = &LogoutInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "Logout", params, optFns, c.addOperationLogoutMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*LogoutOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type LogoutInput struct {
+
+	// The token issued by the CreateToken API call. For more information, see [CreateToken] in the
+	// IAM Identity Center OIDC API Reference Guide.
+	//
+	// [CreateToken]: https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/API_CreateToken.html
+	//
+	// This member is required.
+	AccessToken *string
+
+	noSmithyDocumentSerde
+}
+
+type LogoutOutput struct {
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationLogoutMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsRestjson1_serializeOpLogout{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsRestjson1_deserializeOpLogout{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "Logout"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpLogoutValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opLogout(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opLogout(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "Logout",
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sso/auth.go 🔗

@@ -0,0 +1,308 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sso
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	smithy "github.com/aws/smithy-go"
+	smithyauth "github.com/aws/smithy-go/auth"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+func bindAuthParamsRegion(_ interface{}, params *AuthResolverParameters, _ interface{}, options Options) {
+	params.Region = options.Region
+}
+
+type setLegacyContextSigningOptionsMiddleware struct {
+}
+
+func (*setLegacyContextSigningOptionsMiddleware) ID() string {
+	return "setLegacyContextSigningOptions"
+}
+
+func (m *setLegacyContextSigningOptionsMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	rscheme := getResolvedAuthScheme(ctx)
+	schemeID := rscheme.Scheme.SchemeID()
+
+	if sn := awsmiddleware.GetSigningName(ctx); sn != "" {
+		if schemeID == "aws.auth#sigv4" {
+			smithyhttp.SetSigV4SigningName(&rscheme.SignerProperties, sn)
+		} else if schemeID == "aws.auth#sigv4a" {
+			smithyhttp.SetSigV4ASigningName(&rscheme.SignerProperties, sn)
+		}
+	}
+
+	if sr := awsmiddleware.GetSigningRegion(ctx); sr != "" {
+		if schemeID == "aws.auth#sigv4" {
+			smithyhttp.SetSigV4SigningRegion(&rscheme.SignerProperties, sr)
+		} else if schemeID == "aws.auth#sigv4a" {
+			smithyhttp.SetSigV4ASigningRegions(&rscheme.SignerProperties, []string{sr})
+		}
+	}
+
+	return next.HandleFinalize(ctx, in)
+}
+
+func addSetLegacyContextSigningOptionsMiddleware(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&setLegacyContextSigningOptionsMiddleware{}, "Signing", middleware.Before)
+}
+
+type withAnonymous struct {
+	resolver AuthSchemeResolver
+}
+
+var _ AuthSchemeResolver = (*withAnonymous)(nil)
+
+func (v *withAnonymous) ResolveAuthSchemes(ctx context.Context, params *AuthResolverParameters) ([]*smithyauth.Option, error) {
+	opts, err := v.resolver.ResolveAuthSchemes(ctx, params)
+	if err != nil {
+		return nil, err
+	}
+
+	opts = append(opts, &smithyauth.Option{
+		SchemeID: smithyauth.SchemeIDAnonymous,
+	})
+	return opts, nil
+}
+
+func wrapWithAnonymousAuth(options *Options) {
+	if _, ok := options.AuthSchemeResolver.(*defaultAuthSchemeResolver); !ok {
+		return
+	}
+
+	options.AuthSchemeResolver = &withAnonymous{
+		resolver: options.AuthSchemeResolver,
+	}
+}
+
+// AuthResolverParameters contains the set of inputs necessary for auth scheme
+// resolution.
+type AuthResolverParameters struct {
+	// The name of the operation being invoked.
+	Operation string
+
+	// The region in which the operation is being invoked.
+	Region string
+}
+
+func bindAuthResolverParams(ctx context.Context, operation string, input interface{}, options Options) *AuthResolverParameters {
+	params := &AuthResolverParameters{
+		Operation: operation,
+	}
+
+	bindAuthParamsRegion(ctx, params, input, options)
+
+	return params
+}
+
+// AuthSchemeResolver returns a set of possible authentication options for an
+// operation.
+type AuthSchemeResolver interface {
+	ResolveAuthSchemes(context.Context, *AuthResolverParameters) ([]*smithyauth.Option, error)
+}
+
+type defaultAuthSchemeResolver struct{}
+
+var _ AuthSchemeResolver = (*defaultAuthSchemeResolver)(nil)
+
+func (*defaultAuthSchemeResolver) ResolveAuthSchemes(ctx context.Context, params *AuthResolverParameters) ([]*smithyauth.Option, error) {
+	if overrides, ok := operationAuthOptions[params.Operation]; ok {
+		return overrides(params), nil
+	}
+	return serviceAuthOptions(params), nil
+}
+
+var operationAuthOptions = map[string]func(*AuthResolverParameters) []*smithyauth.Option{
+	"GetRoleCredentials": func(params *AuthResolverParameters) []*smithyauth.Option {
+		return []*smithyauth.Option{
+			{SchemeID: smithyauth.SchemeIDAnonymous},
+		}
+	},
+
+	"ListAccountRoles": func(params *AuthResolverParameters) []*smithyauth.Option {
+		return []*smithyauth.Option{
+			{SchemeID: smithyauth.SchemeIDAnonymous},
+		}
+	},
+
+	"ListAccounts": func(params *AuthResolverParameters) []*smithyauth.Option {
+		return []*smithyauth.Option{
+			{SchemeID: smithyauth.SchemeIDAnonymous},
+		}
+	},
+
+	"Logout": func(params *AuthResolverParameters) []*smithyauth.Option {
+		return []*smithyauth.Option{
+			{SchemeID: smithyauth.SchemeIDAnonymous},
+		}
+	},
+}
+
+func serviceAuthOptions(params *AuthResolverParameters) []*smithyauth.Option {
+	return []*smithyauth.Option{
+		{
+			SchemeID: smithyauth.SchemeIDSigV4,
+			SignerProperties: func() smithy.Properties {
+				var props smithy.Properties
+				smithyhttp.SetSigV4SigningName(&props, "awsssoportal")
+				smithyhttp.SetSigV4SigningRegion(&props, params.Region)
+				return props
+			}(),
+		},
+	}
+}
+
+type resolveAuthSchemeMiddleware struct {
+	operation string
+	options   Options
+}
+
+func (*resolveAuthSchemeMiddleware) ID() string {
+	return "ResolveAuthScheme"
+}
+
+func (m *resolveAuthSchemeMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	params := bindAuthResolverParams(ctx, m.operation, getOperationInput(ctx), m.options)
+	options, err := m.options.AuthSchemeResolver.ResolveAuthSchemes(ctx, params)
+	if err != nil {
+		return out, metadata, fmt.Errorf("resolve auth scheme: %w", err)
+	}
+
+	scheme, ok := m.selectScheme(options)
+	if !ok {
+		return out, metadata, fmt.Errorf("could not select an auth scheme")
+	}
+
+	ctx = setResolvedAuthScheme(ctx, scheme)
+	return next.HandleFinalize(ctx, in)
+}
+
+func (m *resolveAuthSchemeMiddleware) selectScheme(options []*smithyauth.Option) (*resolvedAuthScheme, bool) {
+	for _, option := range options {
+		if option.SchemeID == smithyauth.SchemeIDAnonymous {
+			return newResolvedAuthScheme(smithyhttp.NewAnonymousScheme(), option), true
+		}
+
+		for _, scheme := range m.options.AuthSchemes {
+			if scheme.SchemeID() != option.SchemeID {
+				continue
+			}
+
+			if scheme.IdentityResolver(m.options) != nil {
+				return newResolvedAuthScheme(scheme, option), true
+			}
+		}
+	}
+
+	return nil, false
+}
+
+type resolvedAuthSchemeKey struct{}
+
+type resolvedAuthScheme struct {
+	Scheme             smithyhttp.AuthScheme
+	IdentityProperties smithy.Properties
+	SignerProperties   smithy.Properties
+}
+
+func newResolvedAuthScheme(scheme smithyhttp.AuthScheme, option *smithyauth.Option) *resolvedAuthScheme {
+	return &resolvedAuthScheme{
+		Scheme:             scheme,
+		IdentityProperties: option.IdentityProperties,
+		SignerProperties:   option.SignerProperties,
+	}
+}
+
+func setResolvedAuthScheme(ctx context.Context, scheme *resolvedAuthScheme) context.Context {
+	return middleware.WithStackValue(ctx, resolvedAuthSchemeKey{}, scheme)
+}
+
+func getResolvedAuthScheme(ctx context.Context) *resolvedAuthScheme {
+	v, _ := middleware.GetStackValue(ctx, resolvedAuthSchemeKey{}).(*resolvedAuthScheme)
+	return v
+}
+
+type getIdentityMiddleware struct {
+	options Options
+}
+
+func (*getIdentityMiddleware) ID() string {
+	return "GetIdentity"
+}
+
+func (m *getIdentityMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	rscheme := getResolvedAuthScheme(ctx)
+	if rscheme == nil {
+		return out, metadata, fmt.Errorf("no resolved auth scheme")
+	}
+
+	resolver := rscheme.Scheme.IdentityResolver(m.options)
+	if resolver == nil {
+		return out, metadata, fmt.Errorf("no identity resolver")
+	}
+
+	identity, err := resolver.GetIdentity(ctx, rscheme.IdentityProperties)
+	if err != nil {
+		return out, metadata, fmt.Errorf("get identity: %w", err)
+	}
+
+	ctx = setIdentity(ctx, identity)
+	return next.HandleFinalize(ctx, in)
+}
+
+type identityKey struct{}
+
+func setIdentity(ctx context.Context, identity smithyauth.Identity) context.Context {
+	return middleware.WithStackValue(ctx, identityKey{}, identity)
+}
+
+func getIdentity(ctx context.Context) smithyauth.Identity {
+	v, _ := middleware.GetStackValue(ctx, identityKey{}).(smithyauth.Identity)
+	return v
+}
+
+type signRequestMiddleware struct {
+}
+
+func (*signRequestMiddleware) ID() string {
+	return "Signing"
+}
+
+func (m *signRequestMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unexpected transport type %T", in.Request)
+	}
+
+	rscheme := getResolvedAuthScheme(ctx)
+	if rscheme == nil {
+		return out, metadata, fmt.Errorf("no resolved auth scheme")
+	}
+
+	identity := getIdentity(ctx)
+	if identity == nil {
+		return out, metadata, fmt.Errorf("no identity")
+	}
+
+	signer := rscheme.Scheme.Signer()
+	if signer == nil {
+		return out, metadata, fmt.Errorf("no signer")
+	}
+
+	if err := signer.SignRequest(ctx, req, identity, rscheme.SignerProperties); err != nil {
+		return out, metadata, fmt.Errorf("sign request: %w", err)
+	}
+
+	return next.HandleFinalize(ctx, in)
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sso/deserializers.go 🔗

@@ -0,0 +1,1161 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sso
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"fmt"
+	"github.com/aws/aws-sdk-go-v2/aws/protocol/restjson"
+	"github.com/aws/aws-sdk-go-v2/service/sso/types"
+	smithy "github.com/aws/smithy-go"
+	smithyio "github.com/aws/smithy-go/io"
+	"github.com/aws/smithy-go/middleware"
+	"github.com/aws/smithy-go/ptr"
+	smithytime "github.com/aws/smithy-go/time"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"io"
+	"io/ioutil"
+	"strings"
+	"time"
+)
+
+func deserializeS3Expires(v string) (*time.Time, error) {
+	t, err := smithytime.ParseHTTPDate(v)
+	if err != nil {
+		return nil, nil
+	}
+	return &t, nil
+}
+
+type awsRestjson1_deserializeOpGetRoleCredentials struct {
+}
+
+func (*awsRestjson1_deserializeOpGetRoleCredentials) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpGetRoleCredentials) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsRestjson1_deserializeOpErrorGetRoleCredentials(response, &metadata)
+	}
+	output := &GetRoleCredentialsOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(response.Body, ringBuffer)
+
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	err = awsRestjson1_deserializeOpDocumentGetRoleCredentialsOutput(&output, shape)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorGetRoleCredentials(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	headerCode := response.Header.Get("X-Amzn-ErrorType")
+	if len(headerCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(headerCode)
+	}
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	jsonCode, message, err := restjson.GetErrorInfo(decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+	if len(headerCode) == 0 && len(jsonCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(jsonCode)
+	}
+	if len(message) != 0 {
+		errorMessage = message
+	}
+
+	switch {
+	case strings.EqualFold("InvalidRequestException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+	case strings.EqualFold("ResourceNotFoundException", errorCode):
+		return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+	case strings.EqualFold("TooManyRequestsException", errorCode):
+		return awsRestjson1_deserializeErrorTooManyRequestsException(response, errorBody)
+
+	case strings.EqualFold("UnauthorizedException", errorCode):
+		return awsRestjson1_deserializeErrorUnauthorizedException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+func awsRestjson1_deserializeOpDocumentGetRoleCredentialsOutput(v **GetRoleCredentialsOutput, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *GetRoleCredentialsOutput
+	if *v == nil {
+		sv = &GetRoleCredentialsOutput{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "roleCredentials":
+			if err := awsRestjson1_deserializeDocumentRoleCredentials(&sv.RoleCredentials, value); err != nil {
+				return err
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+type awsRestjson1_deserializeOpListAccountRoles struct {
+}
+
+func (*awsRestjson1_deserializeOpListAccountRoles) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpListAccountRoles) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsRestjson1_deserializeOpErrorListAccountRoles(response, &metadata)
+	}
+	output := &ListAccountRolesOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(response.Body, ringBuffer)
+
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	err = awsRestjson1_deserializeOpDocumentListAccountRolesOutput(&output, shape)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorListAccountRoles(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	headerCode := response.Header.Get("X-Amzn-ErrorType")
+	if len(headerCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(headerCode)
+	}
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	jsonCode, message, err := restjson.GetErrorInfo(decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+	if len(headerCode) == 0 && len(jsonCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(jsonCode)
+	}
+	if len(message) != 0 {
+		errorMessage = message
+	}
+
+	switch {
+	case strings.EqualFold("InvalidRequestException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+	case strings.EqualFold("ResourceNotFoundException", errorCode):
+		return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+	case strings.EqualFold("TooManyRequestsException", errorCode):
+		return awsRestjson1_deserializeErrorTooManyRequestsException(response, errorBody)
+
+	case strings.EqualFold("UnauthorizedException", errorCode):
+		return awsRestjson1_deserializeErrorUnauthorizedException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+func awsRestjson1_deserializeOpDocumentListAccountRolesOutput(v **ListAccountRolesOutput, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *ListAccountRolesOutput
+	if *v == nil {
+		sv = &ListAccountRolesOutput{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "nextToken":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected NextTokenType to be of type string, got %T instead", value)
+				}
+				sv.NextToken = ptr.String(jtv)
+			}
+
+		case "roleList":
+			if err := awsRestjson1_deserializeDocumentRoleListType(&sv.RoleList, value); err != nil {
+				return err
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+type awsRestjson1_deserializeOpListAccounts struct {
+}
+
+func (*awsRestjson1_deserializeOpListAccounts) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpListAccounts) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsRestjson1_deserializeOpErrorListAccounts(response, &metadata)
+	}
+	output := &ListAccountsOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(response.Body, ringBuffer)
+
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	err = awsRestjson1_deserializeOpDocumentListAccountsOutput(&output, shape)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorListAccounts(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	headerCode := response.Header.Get("X-Amzn-ErrorType")
+	if len(headerCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(headerCode)
+	}
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	jsonCode, message, err := restjson.GetErrorInfo(decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+	if len(headerCode) == 0 && len(jsonCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(jsonCode)
+	}
+	if len(message) != 0 {
+		errorMessage = message
+	}
+
+	switch {
+	case strings.EqualFold("InvalidRequestException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+	case strings.EqualFold("ResourceNotFoundException", errorCode):
+		return awsRestjson1_deserializeErrorResourceNotFoundException(response, errorBody)
+
+	case strings.EqualFold("TooManyRequestsException", errorCode):
+		return awsRestjson1_deserializeErrorTooManyRequestsException(response, errorBody)
+
+	case strings.EqualFold("UnauthorizedException", errorCode):
+		return awsRestjson1_deserializeErrorUnauthorizedException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+func awsRestjson1_deserializeOpDocumentListAccountsOutput(v **ListAccountsOutput, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *ListAccountsOutput
+	if *v == nil {
+		sv = &ListAccountsOutput{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "accountList":
+			if err := awsRestjson1_deserializeDocumentAccountListType(&sv.AccountList, value); err != nil {
+				return err
+			}
+
+		case "nextToken":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected NextTokenType to be of type string, got %T instead", value)
+				}
+				sv.NextToken = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+type awsRestjson1_deserializeOpLogout struct {
+}
+
+func (*awsRestjson1_deserializeOpLogout) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpLogout) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsRestjson1_deserializeOpErrorLogout(response, &metadata)
+	}
+	output := &LogoutOutput{}
+	out.Result = output
+
+	if _, err = io.Copy(ioutil.Discard, response.Body); err != nil {
+		return out, metadata, &smithy.DeserializationError{
+			Err: fmt.Errorf("failed to discard response body, %w", err),
+		}
+	}
+
+	return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorLogout(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	headerCode := response.Header.Get("X-Amzn-ErrorType")
+	if len(headerCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(headerCode)
+	}
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	jsonCode, message, err := restjson.GetErrorInfo(decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+	if len(headerCode) == 0 && len(jsonCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(jsonCode)
+	}
+	if len(message) != 0 {
+		errorMessage = message
+	}
+
+	switch {
+	case strings.EqualFold("InvalidRequestException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+	case strings.EqualFold("TooManyRequestsException", errorCode):
+		return awsRestjson1_deserializeErrorTooManyRequestsException(response, errorBody)
+
+	case strings.EqualFold("UnauthorizedException", errorCode):
+		return awsRestjson1_deserializeErrorUnauthorizedException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+func awsRestjson1_deserializeErrorInvalidRequestException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.InvalidRequestException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentInvalidRequestException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorResourceNotFoundException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.ResourceNotFoundException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentResourceNotFoundException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorTooManyRequestsException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.TooManyRequestsException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentTooManyRequestsException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorUnauthorizedException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.UnauthorizedException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentUnauthorizedException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeDocumentAccountInfo(v **types.AccountInfo, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.AccountInfo
+	if *v == nil {
+		sv = &types.AccountInfo{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "accountId":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected AccountIdType to be of type string, got %T instead", value)
+				}
+				sv.AccountId = ptr.String(jtv)
+			}
+
+		case "accountName":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected AccountNameType to be of type string, got %T instead", value)
+				}
+				sv.AccountName = ptr.String(jtv)
+			}
+
+		case "emailAddress":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected EmailAddressType to be of type string, got %T instead", value)
+				}
+				sv.EmailAddress = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentAccountListType(v *[]types.AccountInfo, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.([]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var cv []types.AccountInfo
+	if *v == nil {
+		cv = []types.AccountInfo{}
+	} else {
+		cv = *v
+	}
+
+	for _, value := range shape {
+		var col types.AccountInfo
+		destAddr := &col
+		if err := awsRestjson1_deserializeDocumentAccountInfo(&destAddr, value); err != nil {
+			return err
+		}
+		col = *destAddr
+		cv = append(cv, col)
+
+	}
+	*v = cv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentInvalidRequestException(v **types.InvalidRequestException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.InvalidRequestException
+	if *v == nil {
+		sv = &types.InvalidRequestException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "message":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Message = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentResourceNotFoundException(v **types.ResourceNotFoundException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.ResourceNotFoundException
+	if *v == nil {
+		sv = &types.ResourceNotFoundException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "message":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Message = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentRoleCredentials(v **types.RoleCredentials, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.RoleCredentials
+	if *v == nil {
+		sv = &types.RoleCredentials{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "accessKeyId":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected AccessKeyType to be of type string, got %T instead", value)
+				}
+				sv.AccessKeyId = ptr.String(jtv)
+			}
+
+		case "expiration":
+			if value != nil {
+				jtv, ok := value.(json.Number)
+				if !ok {
+					return fmt.Errorf("expected ExpirationTimestampType to be json.Number, got %T instead", value)
+				}
+				i64, err := jtv.Int64()
+				if err != nil {
+					return err
+				}
+				sv.Expiration = i64
+			}
+
+		case "secretAccessKey":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected SecretAccessKeyType to be of type string, got %T instead", value)
+				}
+				sv.SecretAccessKey = ptr.String(jtv)
+			}
+
+		case "sessionToken":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected SessionTokenType to be of type string, got %T instead", value)
+				}
+				sv.SessionToken = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentRoleInfo(v **types.RoleInfo, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.RoleInfo
+	if *v == nil {
+		sv = &types.RoleInfo{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "accountId":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected AccountIdType to be of type string, got %T instead", value)
+				}
+				sv.AccountId = ptr.String(jtv)
+			}
+
+		case "roleName":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected RoleNameType to be of type string, got %T instead", value)
+				}
+				sv.RoleName = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentRoleListType(v *[]types.RoleInfo, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.([]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var cv []types.RoleInfo
+	if *v == nil {
+		cv = []types.RoleInfo{}
+	} else {
+		cv = *v
+	}
+
+	for _, value := range shape {
+		var col types.RoleInfo
+		destAddr := &col
+		if err := awsRestjson1_deserializeDocumentRoleInfo(&destAddr, value); err != nil {
+			return err
+		}
+		col = *destAddr
+		cv = append(cv, col)
+
+	}
+	*v = cv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentTooManyRequestsException(v **types.TooManyRequestsException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.TooManyRequestsException
+	if *v == nil {
+		sv = &types.TooManyRequestsException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "message":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Message = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentUnauthorizedException(v **types.UnauthorizedException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.UnauthorizedException
+	if *v == nil {
+		sv = &types.UnauthorizedException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "message":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Message = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sso/doc.go

@@ -0,0 +1,27 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+// Package sso provides the API client, operations, and parameter types for AWS
+// Single Sign-On.
+//
+// AWS IAM Identity Center (successor to AWS Single Sign-On) Portal is a web
+// service that makes it easy for you to assign user access to IAM Identity Center
+// resources such as the AWS access portal. Users can get AWS account applications
+// and roles assigned to them and get federated into the application.
+//
+// Although AWS Single Sign-On was renamed, the sso and identitystore API
+// namespaces will continue to retain their original name for backward
+// compatibility purposes. For more information, see [IAM Identity Center rename].
+//
+// This reference guide describes the IAM Identity Center Portal operations that
+// you can call programmatically and includes detailed information on data types and
+// errors.
+//
+// AWS provides SDKs that consist of libraries and sample code for various
+// programming languages and platforms, such as Java, Ruby, .Net, iOS, or Android.
+// The SDKs provide a convenient way to create programmatic access to IAM Identity
+// Center and other AWS services. For more information about the AWS SDKs,
+// including how to download and install them, see [Tools for Amazon Web Services].
+//
+// [Tools for Amazon Web Services]: http://aws.amazon.com/tools/
+// [IAM Identity Center rename]: https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html#renamed
+package sso

vendor/github.com/aws/aws-sdk-go-v2/service/sso/endpoints.go

@@ -0,0 +1,550 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sso
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	internalConfig "github.com/aws/aws-sdk-go-v2/internal/configsources"
+	"github.com/aws/aws-sdk-go-v2/internal/endpoints"
+	"github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn"
+	internalendpoints "github.com/aws/aws-sdk-go-v2/service/sso/internal/endpoints"
+	smithyauth "github.com/aws/smithy-go/auth"
+	smithyendpoints "github.com/aws/smithy-go/endpoints"
+	"github.com/aws/smithy-go/middleware"
+	"github.com/aws/smithy-go/ptr"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"net/http"
+	"net/url"
+	"os"
+	"strings"
+)
+
+// EndpointResolverOptions is the service endpoint resolver options
+type EndpointResolverOptions = internalendpoints.Options
+
+// EndpointResolver interface for resolving service endpoints.
+type EndpointResolver interface {
+	ResolveEndpoint(region string, options EndpointResolverOptions) (aws.Endpoint, error)
+}
+
+var _ EndpointResolver = &internalendpoints.Resolver{}
+
+// NewDefaultEndpointResolver constructs a new service endpoint resolver
+func NewDefaultEndpointResolver() *internalendpoints.Resolver {
+	return internalendpoints.New()
+}
+
+// EndpointResolverFunc is a helper utility that wraps a function so it satisfies
+// the EndpointResolver interface. This is useful when you want to add additional
+// endpoint resolving logic, or stub out specific endpoints with custom values.
+type EndpointResolverFunc func(region string, options EndpointResolverOptions) (aws.Endpoint, error)
+
+func (fn EndpointResolverFunc) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error) {
+	return fn(region, options)
+}
+
+// EndpointResolverFromURL returns an EndpointResolver configured using the
+// provided endpoint url. By default, the resolved endpoint resolver uses the
+// client region as signing region, and the endpoint source is set to
+// EndpointSourceCustom. You can provide functional options to configure endpoint
+// values for the resolved endpoint.
+func EndpointResolverFromURL(url string, optFns ...func(*aws.Endpoint)) EndpointResolver {
+	e := aws.Endpoint{URL: url, Source: aws.EndpointSourceCustom}
+	for _, fn := range optFns {
+		fn(&e)
+	}
+
+	return EndpointResolverFunc(
+		func(region string, options EndpointResolverOptions) (aws.Endpoint, error) {
+			if len(e.SigningRegion) == 0 {
+				e.SigningRegion = region
+			}
+			return e, nil
+		},
+	)
+}
+
+type ResolveEndpoint struct {
+	Resolver EndpointResolver
+	Options  EndpointResolverOptions
+}
+
+func (*ResolveEndpoint) ID() string {
+	return "ResolveEndpoint"
+}
+
+func (m *ResolveEndpoint) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	if !awsmiddleware.GetRequiresLegacyEndpoints(ctx) {
+		return next.HandleSerialize(ctx, in)
+	}
+
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	if m.Resolver == nil {
+		return out, metadata, fmt.Errorf("expected endpoint resolver to not be nil")
+	}
+
+	eo := m.Options
+	eo.Logger = middleware.GetLogger(ctx)
+
+	var endpoint aws.Endpoint
+	endpoint, err = m.Resolver.ResolveEndpoint(awsmiddleware.GetRegion(ctx), eo)
+	if err != nil {
+		nf := (&aws.EndpointNotFoundError{})
+		if errors.As(err, &nf) {
+			ctx = awsmiddleware.SetRequiresLegacyEndpoints(ctx, false)
+			return next.HandleSerialize(ctx, in)
+		}
+		return out, metadata, fmt.Errorf("failed to resolve service endpoint, %w", err)
+	}
+
+	req.URL, err = url.Parse(endpoint.URL)
+	if err != nil {
+		return out, metadata, fmt.Errorf("failed to parse endpoint URL: %w", err)
+	}
+
+	if len(awsmiddleware.GetSigningName(ctx)) == 0 {
+		signingName := endpoint.SigningName
+		if len(signingName) == 0 {
+			signingName = "awsssoportal"
+		}
+		ctx = awsmiddleware.SetSigningName(ctx, signingName)
+	}
+	ctx = awsmiddleware.SetEndpointSource(ctx, endpoint.Source)
+	ctx = smithyhttp.SetHostnameImmutable(ctx, endpoint.HostnameImmutable)
+	ctx = awsmiddleware.SetSigningRegion(ctx, endpoint.SigningRegion)
+	ctx = awsmiddleware.SetPartitionID(ctx, endpoint.PartitionID)
+	return next.HandleSerialize(ctx, in)
+}
+func addResolveEndpointMiddleware(stack *middleware.Stack, o Options) error {
+	return stack.Serialize.Insert(&ResolveEndpoint{
+		Resolver: o.EndpointResolver,
+		Options:  o.EndpointOptions,
+	}, "OperationSerializer", middleware.Before)
+}
+
+func removeResolveEndpointMiddleware(stack *middleware.Stack) error {
+	_, err := stack.Serialize.Remove((&ResolveEndpoint{}).ID())
+	return err
+}
+
+type wrappedEndpointResolver struct {
+	awsResolver aws.EndpointResolverWithOptions
+}
+
+func (w *wrappedEndpointResolver) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error) {
+	return w.awsResolver.ResolveEndpoint(ServiceID, region, options)
+}
+
+type awsEndpointResolverAdaptor func(service, region string) (aws.Endpoint, error)
+
+func (a awsEndpointResolverAdaptor) ResolveEndpoint(service, region string, options ...interface{}) (aws.Endpoint, error) {
+	return a(service, region)
+}
+
+var _ aws.EndpointResolverWithOptions = awsEndpointResolverAdaptor(nil)
+
+// withEndpointResolver returns an aws.EndpointResolverWithOptions that first delegates endpoint resolution to the awsResolver.
+// If awsResolver returns aws.EndpointNotFoundError error, the v1 resolver middleware will swallow the error,
+// and set an appropriate context flag such that fallback will occur when EndpointResolverV2 is invoked
+// via its middleware.
+//
+// If another error (besides aws.EndpointNotFoundError) is returned, then that error will be propagated.
+func withEndpointResolver(awsResolver aws.EndpointResolver, awsResolverWithOptions aws.EndpointResolverWithOptions) EndpointResolver {
+	var resolver aws.EndpointResolverWithOptions
+
+	if awsResolverWithOptions != nil {
+		resolver = awsResolverWithOptions
+	} else if awsResolver != nil {
+		resolver = awsEndpointResolverAdaptor(awsResolver.ResolveEndpoint)
+	}
+
+	return &wrappedEndpointResolver{
+		awsResolver: resolver,
+	}
+}
+
+func finalizeClientEndpointResolverOptions(options *Options) {
+	options.EndpointOptions.LogDeprecated = options.ClientLogMode.IsDeprecatedUsage()
+
+	if len(options.EndpointOptions.ResolvedRegion) == 0 {
+		const fipsInfix = "-fips-"
+		const fipsPrefix = "fips-"
+		const fipsSuffix = "-fips"
+
+		if strings.Contains(options.Region, fipsInfix) ||
+			strings.Contains(options.Region, fipsPrefix) ||
+			strings.Contains(options.Region, fipsSuffix) {
+			options.EndpointOptions.ResolvedRegion = strings.ReplaceAll(strings.ReplaceAll(strings.ReplaceAll(
+				options.Region, fipsInfix, "-"), fipsPrefix, ""), fipsSuffix, "")
+			options.EndpointOptions.UseFIPSEndpoint = aws.FIPSEndpointStateEnabled
+		}
+	}
+
+}
+
+func resolveEndpointResolverV2(options *Options) {
+	if options.EndpointResolverV2 == nil {
+		options.EndpointResolverV2 = NewDefaultEndpointResolverV2()
+	}
+}
+
+func resolveBaseEndpoint(cfg aws.Config, o *Options) {
+	if cfg.BaseEndpoint != nil {
+		o.BaseEndpoint = cfg.BaseEndpoint
+	}
+
+	_, g := os.LookupEnv("AWS_ENDPOINT_URL")
+	_, s := os.LookupEnv("AWS_ENDPOINT_URL_SSO")
+
+	if g && !s {
+		return
+	}
+
+	value, found, err := internalConfig.ResolveServiceBaseEndpoint(context.Background(), "SSO", cfg.ConfigSources)
+	if found && err == nil {
+		o.BaseEndpoint = &value
+	}
+}
+
+func bindRegion(region string) *string {
+	if region == "" {
+		return nil
+	}
+	return aws.String(endpoints.MapFIPSRegion(region))
+}
+
+// EndpointParameters provides the parameters that influence how endpoints are
+// resolved.
+type EndpointParameters struct {
+	// The AWS region used to dispatch the request.
+	//
+	// Parameter is
+	// required.
+	//
+	// AWS::Region
+	Region *string
+
+	// When true, use the dual-stack endpoint. If the configured endpoint does not
+	// support dual-stack, dispatching the request MAY return an error.
+	//
+	// Defaults to
+	// false if no value is provided.
+	//
+	// AWS::UseDualStack
+	UseDualStack *bool
+
+	// When true, send this request to the FIPS-compliant regional endpoint. If the
+	// configured endpoint does not have a FIPS compliant endpoint, dispatching the
+	// request will return an error.
+	//
+	// Defaults to false if no value is
+	// provided.
+	//
+	// AWS::UseFIPS
+	UseFIPS *bool
+
+	// Override the endpoint used to send this request
+	//
+	// Parameter is
+	// required.
+	//
+	// SDK::Endpoint
+	Endpoint *string
+}
+
+// ValidateRequired validates required parameters are set.
+func (p EndpointParameters) ValidateRequired() error {
+	if p.UseDualStack == nil {
+		return fmt.Errorf("parameter UseDualStack is required")
+	}
+
+	if p.UseFIPS == nil {
+		return fmt.Errorf("parameter UseFIPS is required")
+	}
+
+	return nil
+}
+
+// WithDefaults returns a shallow copy of EndpointParameters with default values
+// applied to members where applicable.
+func (p EndpointParameters) WithDefaults() EndpointParameters {
+	if p.UseDualStack == nil {
+		p.UseDualStack = ptr.Bool(false)
+	}
+
+	if p.UseFIPS == nil {
+		p.UseFIPS = ptr.Bool(false)
+	}
+	return p
+}
+
+type stringSlice []string
+
+func (s stringSlice) Get(i int) *string {
+	if i < 0 || i >= len(s) {
+		return nil
+	}
+
+	v := s[i]
+	return &v
+}
+
+// EndpointResolverV2 provides the interface for resolving service endpoints.
+type EndpointResolverV2 interface {
+	// ResolveEndpoint attempts to resolve the endpoint with the provided options,
+	// returning the endpoint if found. Otherwise an error is returned.
+	ResolveEndpoint(ctx context.Context, params EndpointParameters) (
+		smithyendpoints.Endpoint, error,
+	)
+}
+
+// resolver provides the implementation for resolving endpoints.
+type resolver struct{}
+
+func NewDefaultEndpointResolverV2() EndpointResolverV2 {
+	return &resolver{}
+}
+
+// ResolveEndpoint attempts to resolve the endpoint with the provided options,
+// returning the endpoint if found. Otherwise an error is returned.
+func (r *resolver) ResolveEndpoint(
+	ctx context.Context, params EndpointParameters,
+) (
+	endpoint smithyendpoints.Endpoint, err error,
+) {
+	params = params.WithDefaults()
+	if err = params.ValidateRequired(); err != nil {
+		return endpoint, fmt.Errorf("endpoint parameters are not valid, %w", err)
+	}
+	_UseDualStack := *params.UseDualStack
+	_UseFIPS := *params.UseFIPS
+
+	if exprVal := params.Endpoint; exprVal != nil {
+		_Endpoint := *exprVal
+		_ = _Endpoint
+		if _UseFIPS == true {
+			return endpoint, fmt.Errorf("endpoint rule error, %s", "Invalid Configuration: FIPS and custom endpoint are not supported")
+		}
+		if _UseDualStack == true {
+			return endpoint, fmt.Errorf("endpoint rule error, %s", "Invalid Configuration: Dualstack and custom endpoint are not supported")
+		}
+		uriString := _Endpoint
+
+		uri, err := url.Parse(uriString)
+		if err != nil {
+			return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+		}
+
+		return smithyendpoints.Endpoint{
+			URI:     *uri,
+			Headers: http.Header{},
+		}, nil
+	}
+	if exprVal := params.Region; exprVal != nil {
+		_Region := *exprVal
+		_ = _Region
+		if exprVal := awsrulesfn.GetPartition(_Region); exprVal != nil {
+			_PartitionResult := *exprVal
+			_ = _PartitionResult
+			if _UseFIPS == true {
+				if _UseDualStack == true {
+					if true == _PartitionResult.SupportsFIPS {
+						if true == _PartitionResult.SupportsDualStack {
+							uriString := func() string {
+								var out strings.Builder
+								out.WriteString("https://portal.sso-fips.")
+								out.WriteString(_Region)
+								out.WriteString(".")
+								out.WriteString(_PartitionResult.DualStackDnsSuffix)
+								return out.String()
+							}()
+
+							uri, err := url.Parse(uriString)
+							if err != nil {
+								return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+							}
+
+							return smithyendpoints.Endpoint{
+								URI:     *uri,
+								Headers: http.Header{},
+							}, nil
+						}
+					}
+					return endpoint, fmt.Errorf("endpoint rule error, %s", "FIPS and DualStack are enabled, but this partition does not support one or both")
+				}
+			}
+			if _UseFIPS == true {
+				if true == _PartitionResult.SupportsFIPS {
+					if "aws-us-gov" == _PartitionResult.Name {
+						uriString := func() string {
+							var out strings.Builder
+							out.WriteString("https://portal.sso.")
+							out.WriteString(_Region)
+							out.WriteString(".amazonaws.com")
+							return out.String()
+						}()
+
+						uri, err := url.Parse(uriString)
+						if err != nil {
+							return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+						}
+
+						return smithyendpoints.Endpoint{
+							URI:     *uri,
+							Headers: http.Header{},
+						}, nil
+					}
+					uriString := func() string {
+						var out strings.Builder
+						out.WriteString("https://portal.sso-fips.")
+						out.WriteString(_Region)
+						out.WriteString(".")
+						out.WriteString(_PartitionResult.DnsSuffix)
+						return out.String()
+					}()
+
+					uri, err := url.Parse(uriString)
+					if err != nil {
+						return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+					}
+
+					return smithyendpoints.Endpoint{
+						URI:     *uri,
+						Headers: http.Header{},
+					}, nil
+				}
+				return endpoint, fmt.Errorf("endpoint rule error, %s", "FIPS is enabled but this partition does not support FIPS")
+			}
+			if _UseDualStack == true {
+				if true == _PartitionResult.SupportsDualStack {
+					uriString := func() string {
+						var out strings.Builder
+						out.WriteString("https://portal.sso.")
+						out.WriteString(_Region)
+						out.WriteString(".")
+						out.WriteString(_PartitionResult.DualStackDnsSuffix)
+						return out.String()
+					}()
+
+					uri, err := url.Parse(uriString)
+					if err != nil {
+						return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+					}
+
+					return smithyendpoints.Endpoint{
+						URI:     *uri,
+						Headers: http.Header{},
+					}, nil
+				}
+				return endpoint, fmt.Errorf("endpoint rule error, %s", "DualStack is enabled but this partition does not support DualStack")
+			}
+			uriString := func() string {
+				var out strings.Builder
+				out.WriteString("https://portal.sso.")
+				out.WriteString(_Region)
+				out.WriteString(".")
+				out.WriteString(_PartitionResult.DnsSuffix)
+				return out.String()
+			}()
+
+			uri, err := url.Parse(uriString)
+			if err != nil {
+				return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+			}
+
+			return smithyendpoints.Endpoint{
+				URI:     *uri,
+				Headers: http.Header{},
+			}, nil
+		}
+		return endpoint, fmt.Errorf("Endpoint resolution failed. Invalid operation or environment input.")
+	}
+	return endpoint, fmt.Errorf("endpoint rule error, %s", "Invalid Configuration: Missing Region")
+}
+
+type endpointParamsBinder interface {
+	bindEndpointParams(*EndpointParameters)
+}
+
+func bindEndpointParams(ctx context.Context, input interface{}, options Options) *EndpointParameters {
+	params := &EndpointParameters{}
+
+	params.Region = bindRegion(options.Region)
+	params.UseDualStack = aws.Bool(options.EndpointOptions.UseDualStackEndpoint == aws.DualStackEndpointStateEnabled)
+	params.UseFIPS = aws.Bool(options.EndpointOptions.UseFIPSEndpoint == aws.FIPSEndpointStateEnabled)
+	params.Endpoint = options.BaseEndpoint
+
+	if b, ok := input.(endpointParamsBinder); ok {
+		b.bindEndpointParams(params)
+	}
+
+	return params
+}
+
+type resolveEndpointV2Middleware struct {
+	options Options
+}
+
+func (*resolveEndpointV2Middleware) ID() string {
+	return "ResolveEndpointV2"
+}
+
+func (m *resolveEndpointV2Middleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	if awsmiddleware.GetRequiresLegacyEndpoints(ctx) {
+		return next.HandleFinalize(ctx, in)
+	}
+
+	if err := checkAccountID(getIdentity(ctx), m.options.AccountIDEndpointMode); err != nil {
+		return out, metadata, fmt.Errorf("invalid accountID set: %w", err)
+	}
+
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	if m.options.EndpointResolverV2 == nil {
+		return out, metadata, fmt.Errorf("expected endpoint resolver to not be nil")
+	}
+
+	params := bindEndpointParams(ctx, getOperationInput(ctx), m.options)
+	endpt, err := m.options.EndpointResolverV2.ResolveEndpoint(ctx, *params)
+	if err != nil {
+		return out, metadata, fmt.Errorf("failed to resolve service endpoint, %w", err)
+	}
+
+	if endpt.URI.RawPath == "" && req.URL.RawPath != "" {
+		endpt.URI.RawPath = endpt.URI.Path
+	}
+	req.URL.Scheme = endpt.URI.Scheme
+	req.URL.Host = endpt.URI.Host
+	req.URL.Path = smithyhttp.JoinPath(endpt.URI.Path, req.URL.Path)
+	req.URL.RawPath = smithyhttp.JoinPath(endpt.URI.RawPath, req.URL.RawPath)
+	for k := range endpt.Headers {
+		req.Header.Set(k, endpt.Headers.Get(k))
+	}
+
+	rscheme := getResolvedAuthScheme(ctx)
+	if rscheme == nil {
+		return out, metadata, fmt.Errorf("no resolved auth scheme")
+	}
+
+	opts, _ := smithyauth.GetAuthOptions(&endpt.Properties)
+	for _, o := range opts {
+		rscheme.SignerProperties.SetAll(&o.SignerProperties)
+	}
+
+	return next.HandleFinalize(ctx, in)
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sso/generated.json 🔗

@@ -0,0 +1,35 @@
+{
+    "dependencies": {
+        "github.com/aws/aws-sdk-go-v2": "v1.4.0",
+        "github.com/aws/aws-sdk-go-v2/internal/configsources": "v0.0.0-00010101000000-000000000000",
+        "github.com/aws/aws-sdk-go-v2/internal/endpoints/v2": "v2.0.0-00010101000000-000000000000",
+        "github.com/aws/smithy-go": "v1.4.0"
+    },
+    "files": [
+        "api_client.go",
+        "api_client_test.go",
+        "api_op_GetRoleCredentials.go",
+        "api_op_ListAccountRoles.go",
+        "api_op_ListAccounts.go",
+        "api_op_Logout.go",
+        "auth.go",
+        "deserializers.go",
+        "doc.go",
+        "endpoints.go",
+        "endpoints_config_test.go",
+        "endpoints_test.go",
+        "generated.json",
+        "internal/endpoints/endpoints.go",
+        "internal/endpoints/endpoints_test.go",
+        "options.go",
+        "protocol_test.go",
+        "serializers.go",
+        "snapshot_test.go",
+        "types/errors.go",
+        "types/types.go",
+        "validators.go"
+    ],
+    "go": "1.15",
+    "module": "github.com/aws/aws-sdk-go-v2/service/sso",
+    "unstable": false
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sso/internal/endpoints/endpoints.go 🔗

@@ -0,0 +1,566 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package endpoints
+
+import (
+	"github.com/aws/aws-sdk-go-v2/aws"
+	endpoints "github.com/aws/aws-sdk-go-v2/internal/endpoints/v2"
+	"github.com/aws/smithy-go/logging"
+	"regexp"
+)
+
+// Options is the endpoint resolver configuration options
+type Options struct {
+	// Logger is a logging implementation that log events should be sent to.
+	Logger logging.Logger
+
+	// LogDeprecated indicates that deprecated endpoints should be logged to the
+	// provided logger.
+	LogDeprecated bool
+
+	// ResolvedRegion is used to override the region to be resolved, rather than
+	// using the value passed to the ResolveEndpoint method. This value is used by
+	// the SDK to translate regions like fips-us-east-1 or us-east-1-fips to an
+	// alternative name. You must not set this value directly in your application.
+
+	// DisableHTTPS informs the resolver to return an endpoint that does not use the
+	// HTTPS scheme.
+	DisableHTTPS bool
+
+	// UseDualStackEndpoint specifies the resolver must resolve a dual-stack endpoint.
+	UseDualStackEndpoint aws.DualStackEndpointState
+
+	// UseFIPSEndpoint specifies the resolver must resolve a FIPS endpoint.
+	UseFIPSEndpoint aws.FIPSEndpointState
+}
+
+func (o Options) GetResolvedRegion() string {
+	return o.ResolvedRegion
+}
+
+func (o Options) GetDisableHTTPS() bool {
+	return o.DisableHTTPS
+}
+
+func (o Options) GetUseDualStackEndpoint() aws.DualStackEndpointState {
+	return o.UseDualStackEndpoint
+}
+
+func (o Options) GetUseFIPSEndpoint() aws.FIPSEndpointState {
+	return o.UseFIPSEndpoint
+}
+
+func transformToSharedOptions(options Options) endpoints.Options {
+	return endpoints.Options{
+		Logger:               options.Logger,
+		LogDeprecated:        options.LogDeprecated,
+		ResolvedRegion:       options.ResolvedRegion,
+		DisableHTTPS:         options.DisableHTTPS,
+		UseDualStackEndpoint: options.UseDualStackEndpoint,
+		UseFIPSEndpoint:      options.UseFIPSEndpoint,
+	}
+}
+
+// Resolver SSO endpoint resolver
+type Resolver struct {
+	partitions endpoints.Partitions
+}
+
+// ResolveEndpoint resolves the service endpoint for the given region and options
+func (r *Resolver) ResolveEndpoint(region string, options Options) (endpoint aws.Endpoint, err error) {
+	if len(region) == 0 {
+		return endpoint, &aws.MissingRegionError{}
+	}
+
+	opt := transformToSharedOptions(options)
+	return r.partitions.ResolveEndpoint(region, opt)
+}
+
+// New returns a new Resolver
+func New() *Resolver {
+	return &Resolver{
+		partitions: defaultPartitions,
+	}
+}
+
+var partitionRegexp = struct {
+	Aws      *regexp.Regexp
+	AwsCn    *regexp.Regexp
+	AwsIso   *regexp.Regexp
+	AwsIsoB  *regexp.Regexp
+	AwsIsoE  *regexp.Regexp
+	AwsIsoF  *regexp.Regexp
+	AwsUsGov *regexp.Regexp
+}{
+
+	Aws:      regexp.MustCompile("^(us|eu|ap|sa|ca|me|af|il)\\-\\w+\\-\\d+$"),
+	AwsCn:    regexp.MustCompile("^cn\\-\\w+\\-\\d+$"),
+	AwsIso:   regexp.MustCompile("^us\\-iso\\-\\w+\\-\\d+$"),
+	AwsIsoB:  regexp.MustCompile("^us\\-isob\\-\\w+\\-\\d+$"),
+	AwsIsoE:  regexp.MustCompile("^eu\\-isoe\\-\\w+\\-\\d+$"),
+	AwsIsoF:  regexp.MustCompile("^us\\-isof\\-\\w+\\-\\d+$"),
+	AwsUsGov: regexp.MustCompile("^us\\-gov\\-\\w+\\-\\d+$"),
+}
+
+var defaultPartitions = endpoints.Partitions{
+	{
+		ID: "aws",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.DualStackVariant,
+			}: {
+				Hostname:          "portal.sso.{region}.api.aws",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "portal.sso-fips.{region}.amazonaws.com",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+			}: {
+				Hostname:          "portal.sso-fips.{region}.api.aws",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "portal.sso.{region}.amazonaws.com",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.Aws,
+		IsRegionalized: true,
+		Endpoints: endpoints.Endpoints{
+			endpoints.EndpointKey{
+				Region: "af-south-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.af-south-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "af-south-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-east-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.ap-east-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-east-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-northeast-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.ap-northeast-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-northeast-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-northeast-2",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.ap-northeast-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-northeast-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-northeast-3",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.ap-northeast-3.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-northeast-3",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-south-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.ap-south-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-south-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-south-2",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.ap-south-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-south-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-southeast-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.ap-southeast-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-southeast-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-southeast-2",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.ap-southeast-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-southeast-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-southeast-3",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.ap-southeast-3.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-southeast-3",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-southeast-4",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.ap-southeast-4.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-southeast-4",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ca-central-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.ca-central-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ca-central-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ca-west-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.ca-west-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ca-west-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-central-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.eu-central-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-central-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-central-2",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.eu-central-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-central-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-north-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.eu-north-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-north-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-south-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.eu-south-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-south-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-south-2",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.eu-south-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-south-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-west-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.eu-west-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-west-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-west-2",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.eu-west-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-west-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-west-3",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.eu-west-3.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-west-3",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "il-central-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.il-central-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "il-central-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "me-central-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.me-central-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "me-central-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "me-south-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.me-south-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "me-south-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "sa-east-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.sa-east-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "sa-east-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "us-east-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.us-east-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-east-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "us-east-2",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.us-east-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-east-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "us-west-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.us-west-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-west-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "us-west-2",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.us-west-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-west-2",
+				},
+			},
+		},
+	},
+	{
+		ID: "aws-cn",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.DualStackVariant,
+			}: {
+				Hostname:          "portal.sso.{region}.api.amazonwebservices.com.cn",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "portal.sso-fips.{region}.amazonaws.com.cn",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+			}: {
+				Hostname:          "portal.sso-fips.{region}.api.amazonwebservices.com.cn",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "portal.sso.{region}.amazonaws.com.cn",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsCn,
+		IsRegionalized: true,
+		Endpoints: endpoints.Endpoints{
+			endpoints.EndpointKey{
+				Region: "cn-north-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.cn-north-1.amazonaws.com.cn",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "cn-north-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "cn-northwest-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.cn-northwest-1.amazonaws.com.cn",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "cn-northwest-1",
+				},
+			},
+		},
+	},
+	{
+		ID: "aws-iso",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "portal.sso-fips.{region}.c2s.ic.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "portal.sso.{region}.c2s.ic.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsIso,
+		IsRegionalized: true,
+	},
+	{
+		ID: "aws-iso-b",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "portal.sso-fips.{region}.sc2s.sgov.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "portal.sso.{region}.sc2s.sgov.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsIsoB,
+		IsRegionalized: true,
+	},
+	{
+		ID: "aws-iso-e",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "portal.sso-fips.{region}.cloud.adc-e.uk",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "portal.sso.{region}.cloud.adc-e.uk",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsIsoE,
+		IsRegionalized: true,
+	},
+	{
+		ID: "aws-iso-f",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "portal.sso-fips.{region}.csp.hci.ic.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "portal.sso.{region}.csp.hci.ic.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsIsoF,
+		IsRegionalized: true,
+	},
+	{
+		ID: "aws-us-gov",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.DualStackVariant,
+			}: {
+				Hostname:          "portal.sso.{region}.api.aws",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "portal.sso-fips.{region}.amazonaws.com",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+			}: {
+				Hostname:          "portal.sso-fips.{region}.api.aws",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "portal.sso.{region}.amazonaws.com",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsUsGov,
+		IsRegionalized: true,
+		Endpoints: endpoints.Endpoints{
+			endpoints.EndpointKey{
+				Region: "us-gov-east-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.us-gov-east-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-gov-east-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "us-gov-west-1",
+			}: endpoints.Endpoint{
+				Hostname: "portal.sso.us-gov-west-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-gov-west-1",
+				},
+			},
+		},
+	},
+}
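The partition defaults above are hostname templates with a `{region}` placeholder (e.g. `portal.sso-fips.{region}.api.aws`); the shared endpoints package substitutes the concrete region at resolve time. A tiny illustrative sketch of that substitution (the helper name is ours, not the SDK's):

```go
package main

import (
	"fmt"
	"strings"
)

// expandHostTemplate substitutes a concrete region into a "{region}"
// hostname template like those in the partition defaults above.
func expandHostTemplate(template, region string) string {
	return strings.ReplaceAll(template, "{region}", region)
}

func main() {
	fmt.Println(expandHostTemplate("portal.sso-fips.{region}.api.aws", "us-gov-west-1"))
	// portal.sso-fips.us-gov-west-1.api.aws
}
```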

vendor/github.com/aws/aws-sdk-go-v2/service/sso/options.go 🔗

@@ -0,0 +1,227 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sso
+
+import (
+	"context"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	internalauthsmithy "github.com/aws/aws-sdk-go-v2/internal/auth/smithy"
+	smithyauth "github.com/aws/smithy-go/auth"
+	"github.com/aws/smithy-go/logging"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"net/http"
+)
+
+type HTTPClient interface {
+	Do(*http.Request) (*http.Response, error)
+}
+
+type Options struct {
+	// Set of options to modify how an operation is invoked. These apply to all
+	// operations invoked for this client. Use functional options on operation call to
+	// modify this list for per operation behavior.
+	APIOptions []func(*middleware.Stack) error
+
+	// Indicates how aws account ID is applied in endpoint2.0 routing
+	AccountIDEndpointMode aws.AccountIDEndpointMode
+
+	// The optional application specific identifier appended to the User-Agent header.
+	AppID string
+
+	// This endpoint will be given as input to an EndpointResolverV2. It is used for
+	// providing a custom base endpoint that is subject to modifications by the
+	// processing EndpointResolverV2.
+	BaseEndpoint *string
+
+	// Configures the events that will be sent to the configured logger.
+	ClientLogMode aws.ClientLogMode
+
+	// The credentials object to use when signing requests.
+	Credentials aws.CredentialsProvider
+
+	// The configuration DefaultsMode that the SDK should use when constructing the
+	// clients initial default settings.
+	DefaultsMode aws.DefaultsMode
+
+	// The endpoint options to be used when attempting to resolve an endpoint.
+	EndpointOptions EndpointResolverOptions
+
+	// The service endpoint resolver.
+	//
+	// Deprecated: EndpointResolver and WithEndpointResolver. Providing a
+	// value for this field will likely prevent you from using any endpoint-related
+	// service features released after the introduction of EndpointResolverV2 and
+	// BaseEndpoint.
+	//
+	// To migrate an EndpointResolver implementation that uses a custom endpoint, set
+	// the client option BaseEndpoint instead.
+	EndpointResolver EndpointResolver
+
+	// Resolves the endpoint used for a particular service operation. This should be
+	// used over the deprecated EndpointResolver.
+	EndpointResolverV2 EndpointResolverV2
+
+	// Signature Version 4 (SigV4) Signer
+	HTTPSignerV4 HTTPSignerV4
+
+	// The logger writer interface to write logging messages to.
+	Logger logging.Logger
+
+	// The region to send requests to. (Required)
+	Region string
+
+	// RetryMaxAttempts specifies the maximum number of attempts an API client
+	// will make on an operation that fails with a retryable error. A value of 0
+	// is ignored, and will not be used to configure the API client created
+	// default retryer, or modify per operation call's retry max attempts.
+	//
+	// If specified in an operation call's functional options with a value that is
+	// different than the constructed client's Options, the Client's Retryer will be
+	// wrapped to use the operation's specific RetryMaxAttempts value.
+	RetryMaxAttempts int
+
+	// RetryMode specifies the retry mode the API client will be created with, if
+	// Retryer option is not also specified.
+	//
+	// When creating a new API Clients this member will only be used if the Retryer
+	// Options member is nil. This value will be ignored if Retryer is not nil.
+	//
+	// Currently does not support per operation call overrides, may in the future.
+	RetryMode aws.RetryMode
+
+	// Retryer guides how HTTP requests should be retried in case of recoverable
+	// failures. When nil the API client will use a default retryer. The kind of
+	// default retry created by the API client can be changed with the RetryMode
+	// option.
+	Retryer aws.Retryer
+
+	// The RuntimeEnvironment configuration, only populated if the DefaultsMode is set
+	// to DefaultsModeAuto and is initialized using config.LoadDefaultConfig . You
+	// should not populate this structure programmatically, or rely on the values here
+	// within your applications.
+	RuntimeEnvironment aws.RuntimeEnvironment
+
+	// The initial DefaultsMode used when the client options were constructed. If the
+	// DefaultsMode was set to aws.DefaultsModeAuto this will store what the resolved
+	// value was at that point in time.
+	//
+	// Currently does not support per operation call overrides, may in the future.
+	resolvedDefaultsMode aws.DefaultsMode
+
+	// The HTTP client to invoke API calls with. Defaults to client's default HTTP
+	// implementation if nil.
+	HTTPClient HTTPClient
+
+	// The auth scheme resolver which determines how to authenticate for each
+	// operation.
+	AuthSchemeResolver AuthSchemeResolver
+
+	// The list of auth schemes supported by the client.
+	AuthSchemes []smithyhttp.AuthScheme
+}
+
+// Copy creates a clone where the APIOptions list is deep copied.
+func (o Options) Copy() Options {
+	to := o
+	to.APIOptions = make([]func(*middleware.Stack) error, len(o.APIOptions))
+	copy(to.APIOptions, o.APIOptions)
+
+	return to
+}
+
+func (o Options) GetIdentityResolver(schemeID string) smithyauth.IdentityResolver {
+	if schemeID == "aws.auth#sigv4" {
+		return getSigV4IdentityResolver(o)
+	}
+	if schemeID == "smithy.api#noAuth" {
+		return &smithyauth.AnonymousIdentityResolver{}
+	}
+	return nil
+}
+
+// WithAPIOptions returns a functional option for setting the Client's APIOptions
+// option.
+func WithAPIOptions(optFns ...func(*middleware.Stack) error) func(*Options) {
+	return func(o *Options) {
+		o.APIOptions = append(o.APIOptions, optFns...)
+	}
+}
+
+// Deprecated: EndpointResolver and WithEndpointResolver. Providing a value for
+// this field will likely prevent you from using any endpoint-related service
+// features released after the introduction of EndpointResolverV2 and BaseEndpoint.
+//
+// To migrate an EndpointResolver implementation that uses a custom endpoint, set
+// the client option BaseEndpoint instead.
+func WithEndpointResolver(v EndpointResolver) func(*Options) {
+	return func(o *Options) {
+		o.EndpointResolver = v
+	}
+}
+
+// WithEndpointResolverV2 returns a functional option for setting the Client's
+// EndpointResolverV2 option.
+func WithEndpointResolverV2(v EndpointResolverV2) func(*Options) {
+	return func(o *Options) {
+		o.EndpointResolverV2 = v
+	}
+}
+
+func getSigV4IdentityResolver(o Options) smithyauth.IdentityResolver {
+	if o.Credentials != nil {
+		return &internalauthsmithy.CredentialsProviderAdapter{Provider: o.Credentials}
+	}
+	return nil
+}
+
+// WithSigV4SigningName applies an override to the authentication workflow to
+// use the given signing name for SigV4-authenticated operations.
+//
+// This is an advanced setting. The value here is FINAL, taking precedence over
+// the resolved signing name from both auth scheme resolution and endpoint
+// resolution.
+func WithSigV4SigningName(name string) func(*Options) {
+	fn := func(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+		out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+	) {
+		return next.HandleInitialize(awsmiddleware.SetSigningName(ctx, name), in)
+	}
+	return func(o *Options) {
+		o.APIOptions = append(o.APIOptions, func(s *middleware.Stack) error {
+			return s.Initialize.Add(
+				middleware.InitializeMiddlewareFunc("withSigV4SigningName", fn),
+				middleware.Before,
+			)
+		})
+	}
+}
+
+// WithSigV4SigningRegion applies an override to the authentication workflow to
+// use the given signing region for SigV4-authenticated operations.
+//
+// This is an advanced setting. The value here is FINAL, taking precedence over
+// the resolved signing region from both auth scheme resolution and endpoint
+// resolution.
+func WithSigV4SigningRegion(region string) func(*Options) {
+	fn := func(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+		out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+	) {
+		return next.HandleInitialize(awsmiddleware.SetSigningRegion(ctx, region), in)
+	}
+	return func(o *Options) {
+		o.APIOptions = append(o.APIOptions, func(s *middleware.Stack) error {
+			return s.Initialize.Add(
+				middleware.InitializeMiddlewareFunc("withSigV4SigningRegion", fn),
+				middleware.Before,
+			)
+		})
+	}
+}
+
+func ignoreAnonymousAuth(options *Options) {
+	if aws.IsCredentialsProvider(options.Credentials, (*aws.AnonymousCredentials)(nil)) {
+		options.Credentials = nil
+	}
+}
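The `WithAPIOptions`, `WithEndpointResolverV2`, and `WithSigV4Signing*` helpers above all follow Go's functional-options pattern: each returns a `func(*Options)` that mutates one field, and the client constructor applies defaults first, then the caller's options in order. A self-contained sketch of the pattern (types and names here are ours, not the SDK's):

```go
package main

import "fmt"

// clientOptions stands in for the SDK's Options struct.
type clientOptions struct {
	region  string
	retries int
}

// withRegion and withRetries stand in for the SDK's With* helpers:
// each returns a closure that sets exactly one field.
func withRegion(r string) func(*clientOptions) {
	return func(o *clientOptions) { o.region = r }
}

func withRetries(n int) func(*clientOptions) {
	return func(o *clientOptions) { o.retries = n }
}

// newClientOptions applies defaults first, then caller overrides in order.
func newClientOptions(optFns ...func(*clientOptions)) clientOptions {
	o := clientOptions{retries: 3} // default
	for _, fn := range optFns {
		fn(&o)
	}
	return o
}

func main() {
	o := newClientOptions(withRegion("us-east-1"))
	fmt.Printf("%s %d\n", o.region, o.retries) // us-east-1 3
}
```

In the real client this is why `Options.Copy` deep-copies `APIOptions`: per-call functional options must not mutate the shared client configuration.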

vendor/github.com/aws/aws-sdk-go-v2/service/sso/serializers.go 🔗

@@ -0,0 +1,284 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sso
+
+import (
+	"context"
+	"fmt"
+	smithy "github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/encoding/httpbinding"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+type awsRestjson1_serializeOpGetRoleCredentials struct {
+}
+
+func (*awsRestjson1_serializeOpGetRoleCredentials) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpGetRoleCredentials) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*GetRoleCredentialsInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	opPath, opQuery := httpbinding.SplitURI("/federation/credentials")
+	request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+	request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+	request.Method = "GET"
+	var restEncoder *httpbinding.Encoder
+	if request.URL.RawPath == "" {
+		restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	} else {
+		request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+		restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+	}
+
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if err := awsRestjson1_serializeOpHttpBindingsGetRoleCredentialsInput(input, restEncoder); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsGetRoleCredentialsInput(v *GetRoleCredentialsInput, encoder *httpbinding.Encoder) error {
+	if v == nil {
+		return fmt.Errorf("unsupported serialization of nil %T", v)
+	}
+
+	if v.AccessToken != nil && len(*v.AccessToken) > 0 {
+		locationName := "X-Amz-Sso_bearer_token"
+		encoder.SetHeader(locationName).String(*v.AccessToken)
+	}
+
+	if v.AccountId != nil {
+		encoder.SetQuery("account_id").String(*v.AccountId)
+	}
+
+	if v.RoleName != nil {
+		encoder.SetQuery("role_name").String(*v.RoleName)
+	}
+
+	return nil
+}
+
+type awsRestjson1_serializeOpListAccountRoles struct {
+}
+
+func (*awsRestjson1_serializeOpListAccountRoles) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpListAccountRoles) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*ListAccountRolesInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	opPath, opQuery := httpbinding.SplitURI("/assignment/roles")
+	request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+	request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+	request.Method = "GET"
+	var restEncoder *httpbinding.Encoder
+	if request.URL.RawPath == "" {
+		restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	} else {
+		request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+		restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+	}
+
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if err := awsRestjson1_serializeOpHttpBindingsListAccountRolesInput(input, restEncoder); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsListAccountRolesInput(v *ListAccountRolesInput, encoder *httpbinding.Encoder) error {
+	if v == nil {
+		return fmt.Errorf("unsupported serialization of nil %T", v)
+	}
+
+	if v.AccessToken != nil && len(*v.AccessToken) > 0 {
+		locationName := "X-Amz-Sso_bearer_token"
+		encoder.SetHeader(locationName).String(*v.AccessToken)
+	}
+
+	if v.AccountId != nil {
+		encoder.SetQuery("account_id").String(*v.AccountId)
+	}
+
+	if v.MaxResults != nil {
+		encoder.SetQuery("max_result").Integer(*v.MaxResults)
+	}
+
+	if v.NextToken != nil {
+		encoder.SetQuery("next_token").String(*v.NextToken)
+	}
+
+	return nil
+}
+
+type awsRestjson1_serializeOpListAccounts struct {
+}
+
+func (*awsRestjson1_serializeOpListAccounts) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpListAccounts) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*ListAccountsInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	opPath, opQuery := httpbinding.SplitURI("/assignment/accounts")
+	request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+	request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+	request.Method = "GET"
+	var restEncoder *httpbinding.Encoder
+	if request.URL.RawPath == "" {
+		restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	} else {
+		request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+		restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+	}
+
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if err := awsRestjson1_serializeOpHttpBindingsListAccountsInput(input, restEncoder); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsListAccountsInput(v *ListAccountsInput, encoder *httpbinding.Encoder) error {
+	if v == nil {
+		return fmt.Errorf("unsupported serialization of nil %T", v)
+	}
+
+	if v.AccessToken != nil && len(*v.AccessToken) > 0 {
+		locationName := "X-Amz-Sso_bearer_token"
+		encoder.SetHeader(locationName).String(*v.AccessToken)
+	}
+
+	if v.MaxResults != nil {
+		encoder.SetQuery("max_result").Integer(*v.MaxResults)
+	}
+
+	if v.NextToken != nil {
+		encoder.SetQuery("next_token").String(*v.NextToken)
+	}
+
+	return nil
+}
+
+type awsRestjson1_serializeOpLogout struct {
+}
+
+func (*awsRestjson1_serializeOpLogout) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpLogout) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*LogoutInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	opPath, opQuery := httpbinding.SplitURI("/logout")
+	request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+	request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+	request.Method = "POST"
+	var restEncoder *httpbinding.Encoder
+	if request.URL.RawPath == "" {
+		restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	} else {
+		request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+		restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+	}
+
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if err := awsRestjson1_serializeOpHttpBindingsLogoutInput(input, restEncoder); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsLogoutInput(v *LogoutInput, encoder *httpbinding.Encoder) error {
+	if v == nil {
+		return fmt.Errorf("unsupported serialization of nil %T", v)
+	}
+
+	if v.AccessToken != nil && len(*v.AccessToken) > 0 {
+		locationName := "X-Amz-Sso_bearer_token"
+		encoder.SetHeader(locationName).String(*v.AccessToken)
+	}
+
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sso/types/errors.go 🔗

@@ -0,0 +1,115 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package types
+
+import (
+	"fmt"
+	smithy "github.com/aws/smithy-go"
+)
+
+// Indicates that a problem occurred with the input to the request. For example, a
+// required parameter might be missing or out of range.
+type InvalidRequestException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *InvalidRequestException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *InvalidRequestException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *InvalidRequestException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "InvalidRequestException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *InvalidRequestException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// The specified resource doesn't exist.
+type ResourceNotFoundException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *ResourceNotFoundException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *ResourceNotFoundException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *ResourceNotFoundException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "ResourceNotFoundException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *ResourceNotFoundException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that the request is being made too frequently and is more than what
+// the server can handle.
+type TooManyRequestsException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *TooManyRequestsException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *TooManyRequestsException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *TooManyRequestsException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "TooManyRequestsException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *TooManyRequestsException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that the request is not authorized. This can happen due to an invalid
+// access token in the request.
+type UnauthorizedException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *UnauthorizedException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *UnauthorizedException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *UnauthorizedException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "UnauthorizedException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *UnauthorizedException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }

vendor/github.com/aws/aws-sdk-go-v2/service/sso/types/types.go 🔗

@@ -0,0 +1,63 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package types
+
+import (
+	smithydocument "github.com/aws/smithy-go/document"
+)
+
+// Provides information about your AWS account.
+type AccountInfo struct {
+
+	// The identifier of the AWS account that is assigned to the user.
+	AccountId *string
+
+	// The display name of the AWS account that is assigned to the user.
+	AccountName *string
+
+	// The email address of the AWS account that is assigned to the user.
+	EmailAddress *string
+
+	noSmithyDocumentSerde
+}
+
+// Provides information about the role credentials that are assigned to the user.
+type RoleCredentials struct {
+
+	// The identifier used for the temporary security credentials. For more
+	// information, see [Using Temporary Security Credentials to Request Access to AWS Resources]in the AWS IAM User Guide.
+	//
+	// [Using Temporary Security Credentials to Request Access to AWS Resources]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html
+	AccessKeyId *string
+
+	// The date on which temporary security credentials expire.
+	Expiration int64
+
+	// The key that is used to sign the request. For more information, see [Using Temporary Security Credentials to Request Access to AWS Resources] in the AWS
+	// IAM User Guide.
+	//
+	// [Using Temporary Security Credentials to Request Access to AWS Resources]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html
+	SecretAccessKey *string
+
+	// The token used for temporary credentials. For more information, see [Using Temporary Security Credentials to Request Access to AWS Resources] in the AWS
+	// IAM User Guide.
+	//
+	// [Using Temporary Security Credentials to Request Access to AWS Resources]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html
+	SessionToken *string
+
+	noSmithyDocumentSerde
+}
+
+// Provides information about the role that is assigned to the user.
+type RoleInfo struct {
+
+	// The identifier of the AWS account assigned to the user.
+	AccountId *string
+
+	// The friendly name of the role that is assigned to the user.
+	RoleName *string
+
+	noSmithyDocumentSerde
+}
+
+type noSmithyDocumentSerde = smithydocument.NoSerde

vendor/github.com/aws/aws-sdk-go-v2/service/sso/validators.go 🔗

@@ -0,0 +1,175 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sso
+
+import (
+	"context"
+	"fmt"
+	smithy "github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/middleware"
+)
+
+type validateOpGetRoleCredentials struct {
+}
+
+func (*validateOpGetRoleCredentials) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpGetRoleCredentials) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*GetRoleCredentialsInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpGetRoleCredentialsInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+type validateOpListAccountRoles struct {
+}
+
+func (*validateOpListAccountRoles) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpListAccountRoles) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*ListAccountRolesInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpListAccountRolesInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+type validateOpListAccounts struct {
+}
+
+func (*validateOpListAccounts) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpListAccounts) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*ListAccountsInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpListAccountsInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+type validateOpLogout struct {
+}
+
+func (*validateOpLogout) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpLogout) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*LogoutInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpLogoutInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+func addOpGetRoleCredentialsValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpGetRoleCredentials{}, middleware.After)
+}
+
+func addOpListAccountRolesValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpListAccountRoles{}, middleware.After)
+}
+
+func addOpListAccountsValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpListAccounts{}, middleware.After)
+}
+
+func addOpLogoutValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpLogout{}, middleware.After)
+}
+
+func validateOpGetRoleCredentialsInput(v *GetRoleCredentialsInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "GetRoleCredentialsInput"}
+	if v.RoleName == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("RoleName"))
+	}
+	if v.AccountId == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("AccountId"))
+	}
+	if v.AccessToken == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("AccessToken"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateOpListAccountRolesInput(v *ListAccountRolesInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "ListAccountRolesInput"}
+	if v.AccessToken == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("AccessToken"))
+	}
+	if v.AccountId == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("AccountId"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateOpListAccountsInput(v *ListAccountsInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "ListAccountsInput"}
+	if v.AccessToken == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("AccessToken"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateOpLogoutInput(v *LogoutInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "LogoutInput"}
+	if v.AccessToken == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("AccessToken"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/CHANGELOG.md 🔗

@@ -0,0 +1,469 @@
+# v1.26.4 (2024-07-10.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.3 (2024-07-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.2 (2024-07-03)
+
+* No change notes available for this release.
+
+# v1.26.1 (2024-06-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.0 (2024-06-26)
+
+* **Feature**: Support list-of-string endpoint parameter.
+
+# v1.25.1 (2024-06-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.0 (2024-06-18)
+
+* **Feature**: Track usage of various AWS SDK features in user-agent string.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.24.6 (2024-06-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.24.5 (2024-06-07)
+
+* **Bug Fix**: Add clock skew correction on all service clients
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.24.4 (2024-06-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.24.3 (2024-05-23)
+
+* No change notes available for this release.
+
+# v1.24.2 (2024-05-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.24.1 (2024-05-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.24.0 (2024-05-10)
+
+* **Feature**: Updated request parameters for PKCE support.
+
+# v1.23.5 (2024-05-08)
+
+* **Bug Fix**: GoDoc improvement
+
+# v1.23.4 (2024-03-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.23.3 (2024-03-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.23.2 (2024-03-07)
+
+* **Bug Fix**: Remove dependency on go-cmp.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.23.1 (2024-02-23)
+
+* **Bug Fix**: Move all common, SDK-side middleware stack ops into the service client module to prevent cross-module compatibility issues in the future.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.23.0 (2024-02-22)
+
+* **Feature**: Add middleware stack snapshot tests.
+
+# v1.22.2 (2024-02-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.22.1 (2024-02-20)
+
+* **Bug Fix**: When sourcing values for a service's `EndpointParameters`, the lack of a configured region (i.e. `options.Region == ""`) will now translate to a `nil` value for `EndpointParameters.Region` instead of a pointer to the empty string `""`. This will result in a much more explicit error when calling an operation instead of an obscure hostname lookup failure.
+
+# v1.22.0 (2024-02-13)
+
+* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.21.7 (2024-01-16)
+
+* No change notes available for this release.
+
+# v1.21.6 (2024-01-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.21.5 (2023-12-08)
+
+* **Bug Fix**: Reinstate presence of default Retryer in functional options, but still respect max attempts set therein.
+
+# v1.21.4 (2023-12-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.21.3 (2023-12-06)
+
+* **Bug Fix**: Restore pre-refactor auth behavior where all operations could technically be performed anonymously.
+
+# v1.21.2 (2023-12-01)
+
+* **Bug Fix**: Correct wrapping of errors in authentication workflow.
+* **Bug Fix**: Correctly recognize cache-wrapped instances of AnonymousCredentials at client construction.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.21.1 (2023-11-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.21.0 (2023-11-29)
+
+* **Feature**: Expose Options() accessor on service clients.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.3 (2023-11-28.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.2 (2023-11-28)
+
+* **Bug Fix**: Respect setting RetryMaxAttempts in functional options at client construction.
+
+# v1.20.1 (2023-11-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.0 (2023-11-17)
+
+* **Feature**: Adding support for `sso-oauth:CreateTokenWithIAM`.
+
+# v1.19.2 (2023-11-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.19.1 (2023-11-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.19.0 (2023-11-01)
+
+* **Feature**: Adds support for configured endpoints via environment variables and the AWS shared configuration file.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.0 (2023-10-31)
+
+* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.3 (2023-10-12)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.2 (2023-10-06)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.1 (2023-09-22)
+
+* No change notes available for this release.
+
+# v1.17.0 (2023-09-20)
+
+* **Feature**: Update FIPS endpoints in aws-us-gov.
+
+# v1.16.0 (2023-09-18)
+
+* **Announcement**: [BREAKFIX] Change in MaxResults datatype from value to pointer type in cognito-sync service.
+* **Feature**: Adds several endpoint ruleset changes across all models: smaller rulesets, removed non-unique regional endpoints, fixes FIPS and DualStack endpoints, and make region not required in SDK::Endpoint. Additional breakfix to cognito-sync field.
+
+# v1.15.6 (2023-09-05)
+
+* No change notes available for this release.
+
+# v1.15.5 (2023-08-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.4 (2023-08-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.3 (2023-08-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.2 (2023-08-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.1 (2023-08-01)
+
+* No change notes available for this release.
+
+# v1.15.0 (2023-07-31)
+
+* **Feature**: Adds support for smithy-modeled endpoint resolution. A new rules-based endpoint resolution will be added to the SDK which will supercede and deprecate existing endpoint resolution. Specifically, EndpointResolver will be deprecated while BaseEndpoint and EndpointResolverV2 will take its place. For more information, please see the Endpoints section in our Developer Guide.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.14 (2023-07-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.13 (2023-07-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.12 (2023-06-15)
+
+* No change notes available for this release.
+
+# v1.14.11 (2023-06-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.10 (2023-05-04)
+
+* No change notes available for this release.
+
+# v1.14.9 (2023-04-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.8 (2023-04-10)
+
+* No change notes available for this release.
+
+# v1.14.7 (2023-04-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.6 (2023-03-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.5 (2023-03-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.4 (2023-02-22)
+
+* **Bug Fix**: Prevent nil pointer dereference when retrieving error codes.
+
+# v1.14.3 (2023-02-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.2 (2023-02-15)
+
+* **Announcement**: When receiving an error response in restJson-based services, an incorrect error type may have been returned based on the content of the response. This has been fixed via PR #2012 tracked in issue #1910.
+* **Bug Fix**: Correct error type parsing for restJson services.
+
+# v1.14.1 (2023-02-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.0 (2023-01-05)
+
+* **Feature**: Add `ErrorCodeOverride` field to all error structs (aws/smithy-go#401).
+
+# v1.13.11 (2022-12-19)
+
+* No change notes available for this release.
+
+# v1.13.10 (2022-12-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.9 (2022-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.8 (2022-10-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.7 (2022-10-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.6 (2022-09-30)
+
+* **Documentation**: Documentation updates for the IAM Identity Center OIDC CLI Reference.
+
+# v1.13.5 (2022-09-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.4 (2022-09-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.3 (2022-09-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.2 (2022-08-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.1 (2022-08-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.0 (2022-08-25)
+
+* **Feature**: Updated required request parameters on IAM Identity Center's OIDC CreateToken action.
+
+# v1.12.14 (2022-08-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.13 (2022-08-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.12 (2022-08-08)
+
+* **Documentation**: Documentation updates to reflect service rename - AWS IAM Identity Center (successor to AWS Single Sign-On)
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.11 (2022-08-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.10 (2022-07-11)
+
+* No change notes available for this release.
+
+# v1.12.9 (2022-07-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.8 (2022-06-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.7 (2022-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.6 (2022-05-27)
+
+* No change notes available for this release.
+
+# v1.12.5 (2022-05-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.4 (2022-04-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.3 (2022-03-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.2 (2022-03-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.1 (2022-03-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.0 (2022-03-08)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.0 (2022-02-24)
+
+* **Feature**: API client updated
+* **Feature**: Adds RetryMaxAttempts and RetryMode to API client Options. This allows the API clients' default Retryer to be configured from the shared configuration files or environment variables. Adds a new retry mode, `Adaptive`. `Adaptive` retry mode is an experimental mode that adds client rate limiting when throttle responses are received from an API. See [retry.AdaptiveMode](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws/retry#AdaptiveMode) for more details and configuration options.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.0 (2022-01-14)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.9.0 (2022-01-07)
+
+* **Feature**: API client updated
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.2 (2021-12-02)
+
+* **Bug Fix**: Fixes a bug that prevented aws.EndpointResolverWithOptions from being used by the service client. ([#1514](https://github.com/aws/aws-sdk-go-v2/pull/1514))
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.1 (2021-11-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.0 (2021-11-06)
+
+* **Feature**: The SDK now supports configuration of FIPS and DualStack endpoints using environment variables, shared configuration, or programmatically.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.7.0 (2021-10-21)
+
+* **Feature**: Updated to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.0 (2021-10-11)
+
+* **Feature**: API client updated
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.5.0 (2021-09-17)
+
+* **Feature**: Updated API client and endpoints to latest revision.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.0 (2021-08-27)
+
+* **Feature**: Updated API model to latest revision.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.3 (2021-08-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.2 (2021-08-04)
+
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.1 (2021-07-15)
+
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.3.0 (2021-06-25)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.1 (2021-05-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.2.0 (2021-05-14)
+
+* **Feature**: Constant has been added to modules to enable runtime version inspection for reporting.
+* **Dependency Update**: Updated to the latest SDK module versions
+

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/LICENSE.txt 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/api_client.go 🔗

@@ -0,0 +1,627 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package ssooidc
+
+import (
+	"context"
+	"fmt"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/aws/defaults"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/aws/retry"
+	"github.com/aws/aws-sdk-go-v2/aws/signer/v4"
+	awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
+	internalauth "github.com/aws/aws-sdk-go-v2/internal/auth"
+	internalauthsmithy "github.com/aws/aws-sdk-go-v2/internal/auth/smithy"
+	internalConfig "github.com/aws/aws-sdk-go-v2/internal/configsources"
+	internalmiddleware "github.com/aws/aws-sdk-go-v2/internal/middleware"
+	smithy "github.com/aws/smithy-go"
+	smithyauth "github.com/aws/smithy-go/auth"
+	smithydocument "github.com/aws/smithy-go/document"
+	"github.com/aws/smithy-go/logging"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"net"
+	"net/http"
+	"sync/atomic"
+	"time"
+)
+
+const ServiceID = "SSO OIDC"
+const ServiceAPIVersion = "2019-06-10"
+
+// Client provides the API client to make operations call for AWS SSO OIDC.
+type Client struct {
+	options Options
+
+	// Difference between the time reported by the server and the client
+	timeOffset *atomic.Int64
+}
+
+// New returns an initialized Client based on the functional options. Provide
+// additional functional options to further configure the behavior of the client,
+// such as changing the client's endpoint or adding custom middleware behavior.
+func New(options Options, optFns ...func(*Options)) *Client {
+	options = options.Copy()
+
+	resolveDefaultLogger(&options)
+
+	setResolvedDefaultsMode(&options)
+
+	resolveRetryer(&options)
+
+	resolveHTTPClient(&options)
+
+	resolveHTTPSignerV4(&options)
+
+	resolveEndpointResolverV2(&options)
+
+	resolveAuthSchemeResolver(&options)
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	finalizeRetryMaxAttempts(&options)
+
+	ignoreAnonymousAuth(&options)
+
+	wrapWithAnonymousAuth(&options)
+
+	resolveAuthSchemes(&options)
+
+	client := &Client{
+		options: options,
+	}
+
+	initializeTimeOffsetResolver(client)
+
+	return client
+}
+
+// Options returns a copy of the client configuration.
+//
+// Callers SHOULD NOT perform mutations on any inner structures within client
+// config. Config overrides should instead be made on a per-operation basis through
+// functional options.
+func (c *Client) Options() Options {
+	return c.options.Copy()
+}
+
+func (c *Client) invokeOperation(ctx context.Context, opID string, params interface{}, optFns []func(*Options), stackFns ...func(*middleware.Stack, Options) error) (result interface{}, metadata middleware.Metadata, err error) {
+	ctx = middleware.ClearStackValues(ctx)
+	stack := middleware.NewStack(opID, smithyhttp.NewStackRequest)
+	options := c.options.Copy()
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	finalizeOperationRetryMaxAttempts(&options, *c)
+
+	finalizeClientEndpointResolverOptions(&options)
+
+	for _, fn := range stackFns {
+		if err := fn(stack, options); err != nil {
+			return nil, metadata, err
+		}
+	}
+
+	for _, fn := range options.APIOptions {
+		if err := fn(stack); err != nil {
+			return nil, metadata, err
+		}
+	}
+
+	handler := middleware.DecorateHandler(smithyhttp.NewClientHandler(options.HTTPClient), stack)
+	result, metadata, err = handler.Handle(ctx, params)
+	if err != nil {
+		err = &smithy.OperationError{
+			ServiceID:     ServiceID,
+			OperationName: opID,
+			Err:           err,
+		}
+	}
+	return result, metadata, err
+}
+
+type operationInputKey struct{}
+
+func setOperationInput(ctx context.Context, input interface{}) context.Context {
+	return middleware.WithStackValue(ctx, operationInputKey{}, input)
+}
+
+func getOperationInput(ctx context.Context) interface{} {
+	return middleware.GetStackValue(ctx, operationInputKey{})
+}
+
+type setOperationInputMiddleware struct {
+}
+
+func (*setOperationInputMiddleware) ID() string {
+	return "setOperationInput"
+}
+
+func (m *setOperationInputMiddleware) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	ctx = setOperationInput(ctx, in.Parameters)
+	return next.HandleSerialize(ctx, in)
+}
+
+func addProtocolFinalizerMiddlewares(stack *middleware.Stack, options Options, operation string) error {
+	if err := stack.Finalize.Add(&resolveAuthSchemeMiddleware{operation: operation, options: options}, middleware.Before); err != nil {
+		return fmt.Errorf("add ResolveAuthScheme: %w", err)
+	}
+	if err := stack.Finalize.Insert(&getIdentityMiddleware{options: options}, "ResolveAuthScheme", middleware.After); err != nil {
+		return fmt.Errorf("add GetIdentity: %v", err)
+	}
+	if err := stack.Finalize.Insert(&resolveEndpointV2Middleware{options: options}, "GetIdentity", middleware.After); err != nil {
+		return fmt.Errorf("add ResolveEndpointV2: %v", err)
+	}
+	if err := stack.Finalize.Insert(&signRequestMiddleware{}, "ResolveEndpointV2", middleware.After); err != nil {
+		return fmt.Errorf("add Signing: %w", err)
+	}
+	return nil
+}
+func resolveAuthSchemeResolver(options *Options) {
+	if options.AuthSchemeResolver == nil {
+		options.AuthSchemeResolver = &defaultAuthSchemeResolver{}
+	}
+}
+
+func resolveAuthSchemes(options *Options) {
+	if options.AuthSchemes == nil {
+		options.AuthSchemes = []smithyhttp.AuthScheme{
+			internalauth.NewHTTPAuthScheme("aws.auth#sigv4", &internalauthsmithy.V4SignerAdapter{
+				Signer:     options.HTTPSignerV4,
+				Logger:     options.Logger,
+				LogSigning: options.ClientLogMode.IsSigning(),
+			}),
+		}
+	}
+}
+
+type noSmithyDocumentSerde = smithydocument.NoSerde
+
+type legacyEndpointContextSetter struct {
+	LegacyResolver EndpointResolver
+}
+
+func (*legacyEndpointContextSetter) ID() string {
+	return "legacyEndpointContextSetter"
+}
+
+func (m *legacyEndpointContextSetter) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	if m.LegacyResolver != nil {
+		ctx = awsmiddleware.SetRequiresLegacyEndpoints(ctx, true)
+	}
+
+	return next.HandleInitialize(ctx, in)
+
+}
+func addlegacyEndpointContextSetter(stack *middleware.Stack, o Options) error {
+	return stack.Initialize.Add(&legacyEndpointContextSetter{
+		LegacyResolver: o.EndpointResolver,
+	}, middleware.Before)
+}
+
+func resolveDefaultLogger(o *Options) {
+	if o.Logger != nil {
+		return
+	}
+	o.Logger = logging.Nop{}
+}
+
+func addSetLoggerMiddleware(stack *middleware.Stack, o Options) error {
+	return middleware.AddSetLoggerMiddleware(stack, o.Logger)
+}
+
+func setResolvedDefaultsMode(o *Options) {
+	if len(o.resolvedDefaultsMode) > 0 {
+		return
+	}
+
+	var mode aws.DefaultsMode
+	mode.SetFromString(string(o.DefaultsMode))
+
+	if mode == aws.DefaultsModeAuto {
+		mode = defaults.ResolveDefaultsModeAuto(o.Region, o.RuntimeEnvironment)
+	}
+
+	o.resolvedDefaultsMode = mode
+}
+
+// NewFromConfig returns a new client from the provided config.
+func NewFromConfig(cfg aws.Config, optFns ...func(*Options)) *Client {
+	opts := Options{
+		Region:                cfg.Region,
+		DefaultsMode:          cfg.DefaultsMode,
+		RuntimeEnvironment:    cfg.RuntimeEnvironment,
+		HTTPClient:            cfg.HTTPClient,
+		Credentials:           cfg.Credentials,
+		APIOptions:            cfg.APIOptions,
+		Logger:                cfg.Logger,
+		ClientLogMode:         cfg.ClientLogMode,
+		AppID:                 cfg.AppID,
+		AccountIDEndpointMode: cfg.AccountIDEndpointMode,
+	}
+	resolveAWSRetryerProvider(cfg, &opts)
+	resolveAWSRetryMaxAttempts(cfg, &opts)
+	resolveAWSRetryMode(cfg, &opts)
+	resolveAWSEndpointResolver(cfg, &opts)
+	resolveUseDualStackEndpoint(cfg, &opts)
+	resolveUseFIPSEndpoint(cfg, &opts)
+	resolveBaseEndpoint(cfg, &opts)
+	return New(opts, optFns...)
+}
+
+func resolveHTTPClient(o *Options) {
+	var buildable *awshttp.BuildableClient
+
+	if o.HTTPClient != nil {
+		var ok bool
+		buildable, ok = o.HTTPClient.(*awshttp.BuildableClient)
+		if !ok {
+			return
+		}
+	} else {
+		buildable = awshttp.NewBuildableClient()
+	}
+
+	modeConfig, err := defaults.GetModeConfiguration(o.resolvedDefaultsMode)
+	if err == nil {
+		buildable = buildable.WithDialerOptions(func(dialer *net.Dialer) {
+			if dialerTimeout, ok := modeConfig.GetConnectTimeout(); ok {
+				dialer.Timeout = dialerTimeout
+			}
+		})
+
+		buildable = buildable.WithTransportOptions(func(transport *http.Transport) {
+			if tlsHandshakeTimeout, ok := modeConfig.GetTLSNegotiationTimeout(); ok {
+				transport.TLSHandshakeTimeout = tlsHandshakeTimeout
+			}
+		})
+	}
+
+	o.HTTPClient = buildable
+}
+
+func resolveRetryer(o *Options) {
+	if o.Retryer != nil {
+		return
+	}
+
+	if len(o.RetryMode) == 0 {
+		modeConfig, err := defaults.GetModeConfiguration(o.resolvedDefaultsMode)
+		if err == nil {
+			o.RetryMode = modeConfig.RetryMode
+		}
+	}
+	if len(o.RetryMode) == 0 {
+		o.RetryMode = aws.RetryModeStandard
+	}
+
+	var standardOptions []func(*retry.StandardOptions)
+	if v := o.RetryMaxAttempts; v != 0 {
+		standardOptions = append(standardOptions, func(so *retry.StandardOptions) {
+			so.MaxAttempts = v
+		})
+	}
+
+	switch o.RetryMode {
+	case aws.RetryModeAdaptive:
+		var adaptiveOptions []func(*retry.AdaptiveModeOptions)
+		if len(standardOptions) != 0 {
+			adaptiveOptions = append(adaptiveOptions, func(ao *retry.AdaptiveModeOptions) {
+				ao.StandardOptions = append(ao.StandardOptions, standardOptions...)
+			})
+		}
+		o.Retryer = retry.NewAdaptiveMode(adaptiveOptions...)
+
+	default:
+		o.Retryer = retry.NewStandard(standardOptions...)
+	}
+}
+
+func resolveAWSRetryerProvider(cfg aws.Config, o *Options) {
+	if cfg.Retryer == nil {
+		return
+	}
+	o.Retryer = cfg.Retryer()
+}
+
+func resolveAWSRetryMode(cfg aws.Config, o *Options) {
+	if len(cfg.RetryMode) == 0 {
+		return
+	}
+	o.RetryMode = cfg.RetryMode
+}
+func resolveAWSRetryMaxAttempts(cfg aws.Config, o *Options) {
+	if cfg.RetryMaxAttempts == 0 {
+		return
+	}
+	o.RetryMaxAttempts = cfg.RetryMaxAttempts
+}
+
+func finalizeRetryMaxAttempts(o *Options) {
+	if o.RetryMaxAttempts == 0 {
+		return
+	}
+
+	o.Retryer = retry.AddWithMaxAttempts(o.Retryer, o.RetryMaxAttempts)
+}
+
+func finalizeOperationRetryMaxAttempts(o *Options, client Client) {
+	if v := o.RetryMaxAttempts; v == 0 || v == client.options.RetryMaxAttempts {
+		return
+	}
+
+	o.Retryer = retry.AddWithMaxAttempts(o.Retryer, o.RetryMaxAttempts)
+}
+
+func resolveAWSEndpointResolver(cfg aws.Config, o *Options) {
+	if cfg.EndpointResolver == nil && cfg.EndpointResolverWithOptions == nil {
+		return
+	}
+	o.EndpointResolver = withEndpointResolver(cfg.EndpointResolver, cfg.EndpointResolverWithOptions)
+}
+
+func addClientUserAgent(stack *middleware.Stack, options Options) error {
+	ua, err := getOrAddRequestUserAgent(stack)
+	if err != nil {
+		return err
+	}
+
+	ua.AddSDKAgentKeyValue(awsmiddleware.APIMetadata, "ssooidc", goModuleVersion)
+	if len(options.AppID) > 0 {
+		ua.AddSDKAgentKey(awsmiddleware.ApplicationIdentifier, options.AppID)
+	}
+
+	return nil
+}
+
+func getOrAddRequestUserAgent(stack *middleware.Stack) (*awsmiddleware.RequestUserAgent, error) {
+	id := (*awsmiddleware.RequestUserAgent)(nil).ID()
+	mw, ok := stack.Build.Get(id)
+	if !ok {
+		mw = awsmiddleware.NewRequestUserAgent()
+		if err := stack.Build.Add(mw, middleware.After); err != nil {
+			return nil, err
+		}
+	}
+
+	ua, ok := mw.(*awsmiddleware.RequestUserAgent)
+	if !ok {
+		return nil, fmt.Errorf("%T for %s middleware did not match expected type", mw, id)
+	}
+
+	return ua, nil
+}
+
+type HTTPSignerV4 interface {
+	SignHTTP(ctx context.Context, credentials aws.Credentials, r *http.Request, payloadHash string, service string, region string, signingTime time.Time, optFns ...func(*v4.SignerOptions)) error
+}
+
+func resolveHTTPSignerV4(o *Options) {
+	if o.HTTPSignerV4 != nil {
+		return
+	}
+	o.HTTPSignerV4 = newDefaultV4Signer(*o)
+}
+
+func newDefaultV4Signer(o Options) *v4.Signer {
+	return v4.NewSigner(func(so *v4.SignerOptions) {
+		so.Logger = o.Logger
+		so.LogSigning = o.ClientLogMode.IsSigning()
+	})
+}
+
+func addClientRequestID(stack *middleware.Stack) error {
+	return stack.Build.Add(&awsmiddleware.ClientRequestID{}, middleware.After)
+}
+
+func addComputeContentLength(stack *middleware.Stack) error {
+	return stack.Build.Add(&smithyhttp.ComputeContentLength{}, middleware.After)
+}
+
+func addRawResponseToMetadata(stack *middleware.Stack) error {
+	return stack.Deserialize.Add(&awsmiddleware.AddRawResponse{}, middleware.Before)
+}
+
+func addRecordResponseTiming(stack *middleware.Stack) error {
+	return stack.Deserialize.Add(&awsmiddleware.RecordResponseTiming{}, middleware.After)
+}
+func addStreamingEventsPayload(stack *middleware.Stack) error {
+	return stack.Finalize.Add(&v4.StreamingEventsPayload{}, middleware.Before)
+}
+
+func addUnsignedPayload(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&v4.UnsignedPayload{}, "ResolveEndpointV2", middleware.After)
+}
+
+func addComputePayloadSHA256(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&v4.ComputePayloadSHA256{}, "ResolveEndpointV2", middleware.After)
+}
+
+func addContentSHA256Header(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&v4.ContentSHA256Header{}, (*v4.ComputePayloadSHA256)(nil).ID(), middleware.After)
+}
+
+func addIsWaiterUserAgent(o *Options) {
+	o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+		ua, err := getOrAddRequestUserAgent(stack)
+		if err != nil {
+			return err
+		}
+
+		ua.AddUserAgentFeature(awsmiddleware.UserAgentFeatureWaiter)
+		return nil
+	})
+}
+
+func addIsPaginatorUserAgent(o *Options) {
+	o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+		ua, err := getOrAddRequestUserAgent(stack)
+		if err != nil {
+			return err
+		}
+
+		ua.AddUserAgentFeature(awsmiddleware.UserAgentFeaturePaginator)
+		return nil
+	})
+}
+
+func addRetry(stack *middleware.Stack, o Options) error {
+	attempt := retry.NewAttemptMiddleware(o.Retryer, smithyhttp.RequestCloner, func(m *retry.Attempt) {
+		m.LogAttempts = o.ClientLogMode.IsRetries()
+	})
+	if err := stack.Finalize.Insert(attempt, "Signing", middleware.Before); err != nil {
+		return err
+	}
+	if err := stack.Finalize.Insert(&retry.MetricsHeader{}, attempt.ID(), middleware.After); err != nil {
+		return err
+	}
+	return nil
+}
+
+// resolves dual-stack endpoint configuration
+func resolveUseDualStackEndpoint(cfg aws.Config, o *Options) error {
+	if len(cfg.ConfigSources) == 0 {
+		return nil
+	}
+	value, found, err := internalConfig.ResolveUseDualStackEndpoint(context.Background(), cfg.ConfigSources)
+	if err != nil {
+		return err
+	}
+	if found {
+		o.EndpointOptions.UseDualStackEndpoint = value
+	}
+	return nil
+}
+
+// resolves FIPS endpoint configuration
+func resolveUseFIPSEndpoint(cfg aws.Config, o *Options) error {
+	if len(cfg.ConfigSources) == 0 {
+		return nil
+	}
+	value, found, err := internalConfig.ResolveUseFIPSEndpoint(context.Background(), cfg.ConfigSources)
+	if err != nil {
+		return err
+	}
+	if found {
+		o.EndpointOptions.UseFIPSEndpoint = value
+	}
+	return nil
+}
+
+func resolveAccountID(identity smithyauth.Identity, mode aws.AccountIDEndpointMode) *string {
+	if mode == aws.AccountIDEndpointModeDisabled {
+		return nil
+	}
+
+	if ca, ok := identity.(*internalauthsmithy.CredentialsAdapter); ok && ca.Credentials.AccountID != "" {
+		return aws.String(ca.Credentials.AccountID)
+	}
+
+	return nil
+}
+
+func addTimeOffsetBuild(stack *middleware.Stack, c *Client) error {
+	mw := internalmiddleware.AddTimeOffsetMiddleware{Offset: c.timeOffset}
+	if err := stack.Build.Add(&mw, middleware.After); err != nil {
+		return err
+	}
+	return stack.Deserialize.Insert(&mw, "RecordResponseTiming", middleware.Before)
+}
+func initializeTimeOffsetResolver(c *Client) {
+	c.timeOffset = new(atomic.Int64)
+}
+
+func checkAccountID(identity smithyauth.Identity, mode aws.AccountIDEndpointMode) error {
+	switch mode {
+	case aws.AccountIDEndpointModeUnset:
+	case aws.AccountIDEndpointModePreferred:
+	case aws.AccountIDEndpointModeDisabled:
+	case aws.AccountIDEndpointModeRequired:
+		if ca, ok := identity.(*internalauthsmithy.CredentialsAdapter); !ok {
+			return fmt.Errorf("accountID is required but not set")
+		} else if ca.Credentials.AccountID == "" {
+			return fmt.Errorf("accountID is required but not set")
+		}
+	// default check in case invalid mode is configured through request config
+	default:
+		return fmt.Errorf("invalid accountID endpoint mode %s, must be preferred/required/disabled", mode)
+	}
+
+	return nil
+}
+
+func addUserAgentRetryMode(stack *middleware.Stack, options Options) error {
+	ua, err := getOrAddRequestUserAgent(stack)
+	if err != nil {
+		return err
+	}
+
+	switch options.Retryer.(type) {
+	case *retry.Standard:
+		ua.AddUserAgentFeature(awsmiddleware.UserAgentFeatureRetryModeStandard)
+	case *retry.AdaptiveMode:
+		ua.AddUserAgentFeature(awsmiddleware.UserAgentFeatureRetryModeAdaptive)
+	}
+	return nil
+}
+
+func addRecursionDetection(stack *middleware.Stack) error {
+	return stack.Build.Add(&awsmiddleware.RecursionDetection{}, middleware.After)
+}
+
+func addRequestIDRetrieverMiddleware(stack *middleware.Stack) error {
+	return stack.Deserialize.Insert(&awsmiddleware.RequestIDRetriever{}, "OperationDeserializer", middleware.Before)
+
+}
+
+func addResponseErrorMiddleware(stack *middleware.Stack) error {
+	return stack.Deserialize.Insert(&awshttp.ResponseErrorWrapper{}, "RequestIDRetriever", middleware.Before)
+
+}
+
+func addRequestResponseLogging(stack *middleware.Stack, o Options) error {
+	return stack.Deserialize.Add(&smithyhttp.RequestResponseLogger{
+		LogRequest:          o.ClientLogMode.IsRequest(),
+		LogRequestWithBody:  o.ClientLogMode.IsRequestWithBody(),
+		LogResponse:         o.ClientLogMode.IsResponse(),
+		LogResponseWithBody: o.ClientLogMode.IsResponseWithBody(),
+	}, middleware.After)
+}
+
+type disableHTTPSMiddleware struct {
+	DisableHTTPS bool
+}
+
+func (*disableHTTPSMiddleware) ID() string {
+	return "disableHTTPS"
+}
+
+func (m *disableHTTPSMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	if m.DisableHTTPS && !smithyhttp.GetHostnameImmutable(ctx) {
+		req.URL.Scheme = "http"
+	}
+
+	return next.HandleFinalize(ctx, in)
+}
+
+func addDisableHTTPSMiddleware(stack *middleware.Stack, o Options) error {
+	return stack.Finalize.Insert(&disableHTTPSMiddleware{
+		DisableHTTPS: o.EndpointOptions.DisableHTTPS,
+	}, "ResolveEndpointV2", middleware.After)
+}
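The generated `New()` above follows the functional-options pattern: defaults are resolved first, then each `func(*Options)` is applied in order, and the client keeps a private copy of the result. A minimal self-contained sketch of that flow (names like `Region` and `MaxRetries` are illustrative stand-ins, not the SDK's actual resolvers):

```go
package main

import "fmt"

// Options mirrors the shape of the generated client's Options struct
// (fields here are illustrative, not the SDK's).
type Options struct {
	Region     string
	MaxRetries int
}

// Copy returns a copy so callers cannot mutate the client's stored
// options, the same reason the generated New() calls options.Copy().
func (o Options) Copy() Options { return o }

// Client holds an immutable snapshot of its configuration.
type Client struct{ options Options }

// New resolves defaults first, then applies each functional option in
// order, matching the resolve*/optFns sequence in api_client.go.
func New(options Options, optFns ...func(*Options)) *Client {
	options = options.Copy()
	if options.MaxRetries == 0 {
		options.MaxRetries = 3 // stand-in for a default resolver
	}
	for _, fn := range optFns {
		fn(&options)
	}
	return &Client{options: options}
}

func main() {
	c := New(Options{Region: "us-east-1"}, func(o *Options) {
		o.MaxRetries = 5 // overrides the resolved default
	})
	fmt.Println(c.options.Region, c.options.MaxRetries)
}
```

Because options are copied and finalized inside `New()`, later mutations by the caller cannot affect the client; per-operation overrides instead pass fresh `optFns` to each call, as `invokeOperation` does.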

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/api_op_CreateToken.go 🔗

@@ -0,0 +1,225 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package ssooidc
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Creates and returns access and refresh tokens for clients that are
+// authenticated using client secrets. The access token can be used to fetch
+// short-term credentials for the assigned AWS accounts or to access application
+// APIs using bearer authentication.
+func (c *Client) CreateToken(ctx context.Context, params *CreateTokenInput, optFns ...func(*Options)) (*CreateTokenOutput, error) {
+	if params == nil {
+		params = &CreateTokenInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "CreateToken", params, optFns, c.addOperationCreateTokenMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*CreateTokenOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type CreateTokenInput struct {
+
+	// The unique identifier string for the client or application. This value comes
+	// from the result of the RegisterClient API.
+	//
+	// This member is required.
+	ClientId *string
+
+	// A secret string generated for the client. This value should come from the
+	// persisted result of the RegisterClient API.
+	//
+	// This member is required.
+	ClientSecret *string
+
+	// Supports the following OAuth grant types: Device Code and Refresh Token.
+	// Specify either of the following values, depending on the grant type that you
+	// want:
+	//
+	// * Device Code - urn:ietf:params:oauth:grant-type:device_code
+	//
+	// * Refresh Token - refresh_token
+	//
+	// For information about how to obtain the device code, see the StartDeviceAuthorization topic.
+	//
+	// This member is required.
+	GrantType *string
+
+	// Used only when calling this API for the Authorization Code grant type. The
+	// short-term code is used to identify this authorization request. This grant type
+	// is currently unsupported for the CreateToken API.
+	Code *string
+
+	// Used only when calling this API for the Authorization Code grant type. This
+	// value is generated by the client and presented to validate the original code
+	// challenge value the client passed at authorization time.
+	CodeVerifier *string
+
+	// Used only when calling this API for the Device Code grant type. This short-term
+	// code is used to identify this authorization request. This comes from the result
+	// of the StartDeviceAuthorization API.
+	DeviceCode *string
+
+	// Used only when calling this API for the Authorization Code grant type. This
+	// value specifies the location of the client or application that has registered to
+	// receive the authorization code.
+	RedirectUri *string
+
+	// Used only when calling this API for the Refresh Token grant type. This token is
+	// used to refresh short-term tokens, such as the access token, that might expire.
+	//
+	// For more information about the features and limitations of the current IAM
+	// Identity Center OIDC implementation, see Considerations for Using this Guide in
+	// the [IAM Identity Center OIDC API Reference].
+	//
+	// [IAM Identity Center OIDC API Reference]: https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/Welcome.html
+	RefreshToken *string
+
+	// The list of scopes for which authorization is requested. The access token that
+	// is issued is limited to the scopes that are granted. If this value is not
+	// specified, IAM Identity Center authorizes all scopes that are configured for the
+	// client during the call to RegisterClient.
+	Scope []string
+
+	noSmithyDocumentSerde
+}
+
+type CreateTokenOutput struct {
+
+	// A bearer token to access Amazon Web Services accounts and applications assigned
+	// to a user.
+	AccessToken *string
+
+	// Indicates the time in seconds when an access token will expire.
+	ExpiresIn int32
+
+	// The idToken is not implemented or supported. For more information about the
+	// features and limitations of the current IAM Identity Center OIDC implementation,
+	// see Considerations for Using this Guide in the [IAM Identity Center OIDC API Reference].
+	//
+	// A JSON Web Token (JWT) that identifies who is associated with the issued access
+	// token.
+	//
+	// [IAM Identity Center OIDC API Reference]: https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/Welcome.html
+	IdToken *string
+
+	// A token that, if present, can be used to refresh a previously issued access
+	// token that might have expired.
+	//
+	// For more information about the features and limitations of the current IAM
+	// Identity Center OIDC implementation, see Considerations for Using this Guide in
+	// the [IAM Identity Center OIDC API Reference].
+	//
+	// [IAM Identity Center OIDC API Reference]: https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/Welcome.html
+	RefreshToken *string
+
+	// Used to notify the client that the returned token is an access token. The
+	// supported token type is Bearer .
+	TokenType *string
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationCreateTokenMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsRestjson1_serializeOpCreateToken{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsRestjson1_deserializeOpCreateToken{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "CreateToken"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpCreateTokenValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opCreateToken(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opCreateToken(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "CreateToken",
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/api_op_CreateTokenWithIAM.go

@@ -0,0 +1,256 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package ssooidc
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Creates and returns access and refresh tokens for clients and applications that
+// are authenticated using IAM entities. The access token can be used to fetch
+// short-term credentials for the assigned Amazon Web Services accounts or to
+// access application APIs using bearer authentication.
+func (c *Client) CreateTokenWithIAM(ctx context.Context, params *CreateTokenWithIAMInput, optFns ...func(*Options)) (*CreateTokenWithIAMOutput, error) {
+	if params == nil {
+		params = &CreateTokenWithIAMInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "CreateTokenWithIAM", params, optFns, c.addOperationCreateTokenWithIAMMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*CreateTokenWithIAMOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type CreateTokenWithIAMInput struct {
+
+	// The unique identifier string for the client or application. This value is an
+	// application ARN that has OAuth grants configured.
+	//
+	// This member is required.
+	ClientId *string
+
+	// Supports the following OAuth grant types: Authorization Code, Refresh Token,
+	// JWT Bearer, and Token Exchange. Specify one of the following values, depending
+	// on the grant type that you want:
+	//
+	// * Authorization Code - authorization_code
+	//
+	// * Refresh Token - refresh_token
+	//
+	// * JWT Bearer - urn:ietf:params:oauth:grant-type:jwt-bearer
+	//
+	// * Token Exchange - urn:ietf:params:oauth:grant-type:token-exchange
+	//
+	// This member is required.
+	GrantType *string
+
+	// Used only when calling this API for the JWT Bearer grant type. This value
+	// specifies the JSON Web Token (JWT) issued by a trusted token issuer. To
+	// authorize a trusted token issuer, configure the JWT Bearer GrantOptions for the
+	// application.
+	Assertion *string
+
+	// Used only when calling this API for the Authorization Code grant type. This
+	// short-term code is used to identify this authorization request. The code is
+	// obtained through a redirect from IAM Identity Center to a redirect URI persisted
+	// in the Authorization Code GrantOptions for the application.
+	Code *string
+
+	// Used only when calling this API for the Authorization Code grant type. This
+	// value is generated by the client and presented to validate the original code
+	// challenge value the client passed at authorization time.
+	CodeVerifier *string
+
+	// Used only when calling this API for the Authorization Code grant type. This
+	// value specifies the location of the client or application that has registered to
+	// receive the authorization code.
+	RedirectUri *string
+
+	// Used only when calling this API for the Refresh Token grant type. This token is
+	// used to refresh short-term tokens, such as the access token, that might expire.
+	//
+	// For more information about the features and limitations of the current IAM
+	// Identity Center OIDC implementation, see Considerations for Using this Guide in
+	// the [IAM Identity Center OIDC API Reference].
+	//
+	// [IAM Identity Center OIDC API Reference]: https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/Welcome.html
+	RefreshToken *string
+
+	// Used only when calling this API for the Token Exchange grant type. This value
+	// specifies the type of token that the requester can receive. The following values
+	// are supported:
+	//
+	// * Access Token - urn:ietf:params:oauth:token-type:access_token
+	//
+	// * Refresh Token - urn:ietf:params:oauth:token-type:refresh_token
+	RequestedTokenType *string
+
+	// The list of scopes for which authorization is requested. The access token that
+	// is issued is limited to the scopes that are granted. If the value is not
+	// specified, IAM Identity Center authorizes all scopes configured for the
+	// application, including the following default scopes: openid , aws ,
+	// sts:identity_context .
+	Scope []string
+
+	// Used only when calling this API for the Token Exchange grant type. This value
+	// specifies the subject of the exchange. The value of the subject token must be an
+	// access token issued by IAM Identity Center to a different client or application.
+	// The access token must have authorized scopes that indicate the requested
+	// application as a target audience.
+	SubjectToken *string
+
+	// Used only when calling this API for the Token Exchange grant type. This value
+	// specifies the type of token that is passed as the subject of the exchange. The
+	// following value is supported:
+	//
+	// * Access Token - urn:ietf:params:oauth:token-type:access_token
+	SubjectTokenType *string
+
+	noSmithyDocumentSerde
+}
+
+type CreateTokenWithIAMOutput struct {
+
+	// A bearer token to access Amazon Web Services accounts and applications assigned
+	// to a user.
+	AccessToken *string
+
+	// Indicates the time in seconds when an access token will expire.
+	ExpiresIn int32
+
+	// A JSON Web Token (JWT) that identifies the user associated with the issued
+	// access token.
+	IdToken *string
+
+	// Indicates the type of tokens that are issued by IAM Identity Center. The
+	// following values are supported:
+	//
+	// * Access Token - urn:ietf:params:oauth:token-type:access_token
+	//
+	// * Refresh Token - urn:ietf:params:oauth:token-type:refresh_token
+	IssuedTokenType *string
+
+	// A token that, if present, can be used to refresh a previously issued access
+	// token that might have expired.
+	//
+	// For more information about the features and limitations of the current IAM
+	// Identity Center OIDC implementation, see Considerations for Using this Guide in
+	// the [IAM Identity Center OIDC API Reference].
+	//
+	// [IAM Identity Center OIDC API Reference]: https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/Welcome.html
+	RefreshToken *string
+
+	// The list of scopes for which authorization is granted. The access token that is
+	// issued is limited to the scopes that are granted.
+	Scope []string
+
+	// Used to notify the requester that the returned token is an access token. The
+	// supported token type is Bearer .
+	TokenType *string
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationCreateTokenWithIAMMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsRestjson1_serializeOpCreateTokenWithIAM{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsRestjson1_deserializeOpCreateTokenWithIAM{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "CreateTokenWithIAM"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addComputePayloadSHA256(stack); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpCreateTokenWithIAMValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opCreateTokenWithIAM(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opCreateTokenWithIAM(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "CreateTokenWithIAM",
+	}
+}
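The grant-type strings documented for `CreateTokenWithIAM` are easy to mistype, since two are bare words and two are URNs. A small lookup table keeps them in one place; the friendly names and URN values are taken from the doc comments above, while the table itself is only an illustration, not part of the SDK:

```go
package main

import "fmt"

// grantTypeURN maps the friendly grant names documented for
// CreateTokenWithIAM to the exact GrantType strings the API expects.
var grantTypeURN = map[string]string{
	"Authorization Code": "authorization_code",
	"Refresh Token":      "refresh_token",
	"JWT Bearer":         "urn:ietf:params:oauth:grant-type:jwt-bearer",
	"Token Exchange":     "urn:ietf:params:oauth:grant-type:token-exchange",
}

func main() {
	fmt.Println(grantTypeURN["Token Exchange"])
}
```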

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/api_op_RegisterClient.go

@@ -0,0 +1,186 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package ssooidc
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Registers a client with IAM Identity Center. This allows clients to initiate
+// device authorization. The output should be persisted for reuse through many
+// authentication requests.
+func (c *Client) RegisterClient(ctx context.Context, params *RegisterClientInput, optFns ...func(*Options)) (*RegisterClientOutput, error) {
+	if params == nil {
+		params = &RegisterClientInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "RegisterClient", params, optFns, c.addOperationRegisterClientMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*RegisterClientOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type RegisterClientInput struct {
+
+	// The friendly name of the client.
+	//
+	// This member is required.
+	ClientName *string
+
+	// The type of client. The service supports only public as a client type. Anything
+	// other than public will be rejected by the service.
+	//
+	// This member is required.
+	ClientType *string
+
+	// This IAM Identity Center application ARN is used to define
+	// administrator-managed configuration for public client access to resources. At
+	// authorization, the scopes, grants, and redirect URI available to this client
+	// will be restricted by this application resource.
+	EntitledApplicationArn *string
+
+	// The list of OAuth 2.0 grant types that are defined by the client. This list is
+	// used to restrict the token granting flows available to the client.
+	GrantTypes []string
+
+	// The IAM Identity Center Issuer URL associated with an instance of IAM Identity
+	// Center. This value is needed for user access to resources through the client.
+	IssuerUrl *string
+
+	// The list of redirect URI that are defined by the client. At completion of
+	// authorization, this list is used to restrict what locations the user agent can
+	// be redirected back to.
+	RedirectUris []string
+
+	// The list of scopes that are defined by the client. Upon authorization, this
+	// list is used to restrict permissions when granting an access token.
+	Scopes []string
+
+	noSmithyDocumentSerde
+}
+
+type RegisterClientOutput struct {
+
+	// An endpoint that the client can use to request authorization.
+	AuthorizationEndpoint *string
+
+	// The unique identifier string for each client. This client uses this identifier
+	// to get authenticated by the service in subsequent calls.
+	ClientId *string
+
+	// Indicates the time at which the clientId and clientSecret were issued.
+	ClientIdIssuedAt int64
+
+	// A secret string generated for the client. The client will use this string to
+	// get authenticated by the service in subsequent calls.
+	ClientSecret *string
+
+	// Indicates the time at which the clientId and clientSecret will become invalid.
+	ClientSecretExpiresAt int64
+
+	// An endpoint that the client can use to create tokens.
+	TokenEndpoint *string
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationRegisterClientMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsRestjson1_serializeOpRegisterClient{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsRestjson1_deserializeOpRegisterClient{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "RegisterClient"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpRegisterClientValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opRegisterClient(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opRegisterClient(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "RegisterClient",
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/api_op_StartDeviceAuthorization.go

@@ -0,0 +1,176 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package ssooidc
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Initiates device authorization by requesting a pair of verification codes from
+// the authorization service.
+func (c *Client) StartDeviceAuthorization(ctx context.Context, params *StartDeviceAuthorizationInput, optFns ...func(*Options)) (*StartDeviceAuthorizationOutput, error) {
+	if params == nil {
+		params = &StartDeviceAuthorizationInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "StartDeviceAuthorization", params, optFns, c.addOperationStartDeviceAuthorizationMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*StartDeviceAuthorizationOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type StartDeviceAuthorizationInput struct {
+
+	// The unique identifier string for the client that is registered with IAM
+	// Identity Center. This value should come from the persisted result of the
+	// RegisterClient API operation.
+	//
+	// This member is required.
+	ClientId *string
+
+	// A secret string that is generated for the client. This value should come from
+	// the persisted result of the RegisterClient API operation.
+	//
+	// This member is required.
+	ClientSecret *string
+
+	// The URL for the Amazon Web Services access portal. For more information, see [Using the Amazon Web Services access portal]
+	// in the IAM Identity Center User Guide.
+	//
+	// [Using the Amazon Web Services access portal]: https://docs.aws.amazon.com/singlesignon/latest/userguide/using-the-portal.html
+	//
+	// This member is required.
+	StartUrl *string
+
+	noSmithyDocumentSerde
+}
+
+type StartDeviceAuthorizationOutput struct {
+
+	// The short-lived code that is used by the device when polling for a session
+	// token.
+	DeviceCode *string
+
+	// Indicates the number of seconds in which the verification code will become
+	// invalid.
+	ExpiresIn int32
+
+	// Indicates the number of seconds the client must wait between attempts when
+	// polling for a session.
+	Interval int32
+
+	// A one-time user verification code. This is needed to authorize an in-use device.
+	UserCode *string
+
+	// The URI of the verification page that takes the userCode to authorize the
+	// device.
+	VerificationUri *string
+
+	// An alternate URL that the client can use to automatically launch a browser.
+	// This process skips the manual step in which the user visits the verification
+	// page and enters their code.
+	VerificationUriComplete *string
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationStartDeviceAuthorizationMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsRestjson1_serializeOpStartDeviceAuthorization{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsRestjson1_deserializeOpStartDeviceAuthorization{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "StartDeviceAuthorization"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpStartDeviceAuthorizationValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opStartDeviceAuthorization(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opStartDeviceAuthorization(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "StartDeviceAuthorization",
+	}
+}
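`StartDeviceAuthorization` returns both a `VerificationUri` (to be shown with the `UserCode`) and, when available, a `VerificationUriComplete` that embeds the code, so a client typically prefers the complete URL and falls back to the URL-plus-code prompt. A hedged sketch of that choice using plain strings rather than the SDK's output struct:

```go
package main

import "fmt"

// verificationPrompt builds the message shown to the user, preferring
// the pre-filled verification URL when the service returned one and
// otherwise falling back to the plain URL plus the one-time user code.
func verificationPrompt(uri, uriComplete, userCode string) string {
	if uriComplete != "" {
		return "Open " + uriComplete + " to authorize this device"
	}
	return "Open " + uri + " and enter code " + userCode
}

func main() {
	// Fallback path: no pre-filled URL was returned.
	fmt.Println(verificationPrompt("https://device.sso.example.com", "", "ABCD-EFGH"))
}
```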

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/auth.go

@@ -0,0 +1,302 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package ssooidc
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	smithy "github.com/aws/smithy-go"
+	smithyauth "github.com/aws/smithy-go/auth"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+func bindAuthParamsRegion(_ interface{}, params *AuthResolverParameters, _ interface{}, options Options) {
+	params.Region = options.Region
+}
+
+type setLegacyContextSigningOptionsMiddleware struct {
+}
+
+func (*setLegacyContextSigningOptionsMiddleware) ID() string {
+	return "setLegacyContextSigningOptions"
+}
+
+func (m *setLegacyContextSigningOptionsMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	rscheme := getResolvedAuthScheme(ctx)
+	schemeID := rscheme.Scheme.SchemeID()
+
+	if sn := awsmiddleware.GetSigningName(ctx); sn != "" {
+		if schemeID == "aws.auth#sigv4" {
+			smithyhttp.SetSigV4SigningName(&rscheme.SignerProperties, sn)
+		} else if schemeID == "aws.auth#sigv4a" {
+			smithyhttp.SetSigV4ASigningName(&rscheme.SignerProperties, sn)
+		}
+	}
+
+	if sr := awsmiddleware.GetSigningRegion(ctx); sr != "" {
+		if schemeID == "aws.auth#sigv4" {
+			smithyhttp.SetSigV4SigningRegion(&rscheme.SignerProperties, sr)
+		} else if schemeID == "aws.auth#sigv4a" {
+			smithyhttp.SetSigV4ASigningRegions(&rscheme.SignerProperties, []string{sr})
+		}
+	}
+
+	return next.HandleFinalize(ctx, in)
+}
+
+func addSetLegacyContextSigningOptionsMiddleware(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&setLegacyContextSigningOptionsMiddleware{}, "Signing", middleware.Before)
+}
+
+type withAnonymous struct {
+	resolver AuthSchemeResolver
+}
+
+var _ AuthSchemeResolver = (*withAnonymous)(nil)
+
+func (v *withAnonymous) ResolveAuthSchemes(ctx context.Context, params *AuthResolverParameters) ([]*smithyauth.Option, error) {
+	opts, err := v.resolver.ResolveAuthSchemes(ctx, params)
+	if err != nil {
+		return nil, err
+	}
+
+	opts = append(opts, &smithyauth.Option{
+		SchemeID: smithyauth.SchemeIDAnonymous,
+	})
+	return opts, nil
+}
+
+func wrapWithAnonymousAuth(options *Options) {
+	if _, ok := options.AuthSchemeResolver.(*defaultAuthSchemeResolver); !ok {
+		return
+	}
+
+	options.AuthSchemeResolver = &withAnonymous{
+		resolver: options.AuthSchemeResolver,
+	}
+}
+
+// AuthResolverParameters contains the set of inputs necessary for auth scheme
+// resolution.
+type AuthResolverParameters struct {
+	// The name of the operation being invoked.
+	Operation string
+
+	// The region in which the operation is being invoked.
+	Region string
+}
+
+func bindAuthResolverParams(ctx context.Context, operation string, input interface{}, options Options) *AuthResolverParameters {
+	params := &AuthResolverParameters{
+		Operation: operation,
+	}
+
+	bindAuthParamsRegion(ctx, params, input, options)
+
+	return params
+}
+
+// AuthSchemeResolver returns a set of possible authentication options for an
+// operation.
+type AuthSchemeResolver interface {
+	ResolveAuthSchemes(context.Context, *AuthResolverParameters) ([]*smithyauth.Option, error)
+}
+
+type defaultAuthSchemeResolver struct{}
+
+var _ AuthSchemeResolver = (*defaultAuthSchemeResolver)(nil)
+
+func (*defaultAuthSchemeResolver) ResolveAuthSchemes(ctx context.Context, params *AuthResolverParameters) ([]*smithyauth.Option, error) {
+	if overrides, ok := operationAuthOptions[params.Operation]; ok {
+		return overrides(params), nil
+	}
+	return serviceAuthOptions(params), nil
+}
+
+var operationAuthOptions = map[string]func(*AuthResolverParameters) []*smithyauth.Option{
+	"CreateToken": func(params *AuthResolverParameters) []*smithyauth.Option {
+		return []*smithyauth.Option{
+			{SchemeID: smithyauth.SchemeIDAnonymous},
+		}
+	},
+
+	"RegisterClient": func(params *AuthResolverParameters) []*smithyauth.Option {
+		return []*smithyauth.Option{
+			{SchemeID: smithyauth.SchemeIDAnonymous},
+		}
+	},
+
+	"StartDeviceAuthorization": func(params *AuthResolverParameters) []*smithyauth.Option {
+		return []*smithyauth.Option{
+			{SchemeID: smithyauth.SchemeIDAnonymous},
+		}
+	},
+}
+
+func serviceAuthOptions(params *AuthResolverParameters) []*smithyauth.Option {
+	return []*smithyauth.Option{
+		{
+			SchemeID: smithyauth.SchemeIDSigV4,
+			SignerProperties: func() smithy.Properties {
+				var props smithy.Properties
+				smithyhttp.SetSigV4SigningName(&props, "sso-oauth")
+				smithyhttp.SetSigV4SigningRegion(&props, params.Region)
+				return props
+			}(),
+		},
+	}
+}
+
+type resolveAuthSchemeMiddleware struct {
+	operation string
+	options   Options
+}
+
+func (*resolveAuthSchemeMiddleware) ID() string {
+	return "ResolveAuthScheme"
+}
+
+func (m *resolveAuthSchemeMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	params := bindAuthResolverParams(ctx, m.operation, getOperationInput(ctx), m.options)
+	options, err := m.options.AuthSchemeResolver.ResolveAuthSchemes(ctx, params)
+	if err != nil {
+		return out, metadata, fmt.Errorf("resolve auth scheme: %w", err)
+	}
+
+	scheme, ok := m.selectScheme(options)
+	if !ok {
+		return out, metadata, fmt.Errorf("could not select an auth scheme")
+	}
+
+	ctx = setResolvedAuthScheme(ctx, scheme)
+	return next.HandleFinalize(ctx, in)
+}
+
+func (m *resolveAuthSchemeMiddleware) selectScheme(options []*smithyauth.Option) (*resolvedAuthScheme, bool) {
+	for _, option := range options {
+		if option.SchemeID == smithyauth.SchemeIDAnonymous {
+			return newResolvedAuthScheme(smithyhttp.NewAnonymousScheme(), option), true
+		}
+
+		for _, scheme := range m.options.AuthSchemes {
+			if scheme.SchemeID() != option.SchemeID {
+				continue
+			}
+
+			if scheme.IdentityResolver(m.options) != nil {
+				return newResolvedAuthScheme(scheme, option), true
+			}
+		}
+	}
+
+	return nil, false
+}
+
+type resolvedAuthSchemeKey struct{}
+
+type resolvedAuthScheme struct {
+	Scheme             smithyhttp.AuthScheme
+	IdentityProperties smithy.Properties
+	SignerProperties   smithy.Properties
+}
+
+func newResolvedAuthScheme(scheme smithyhttp.AuthScheme, option *smithyauth.Option) *resolvedAuthScheme {
+	return &resolvedAuthScheme{
+		Scheme:             scheme,
+		IdentityProperties: option.IdentityProperties,
+		SignerProperties:   option.SignerProperties,
+	}
+}
+
+func setResolvedAuthScheme(ctx context.Context, scheme *resolvedAuthScheme) context.Context {
+	return middleware.WithStackValue(ctx, resolvedAuthSchemeKey{}, scheme)
+}
+
+func getResolvedAuthScheme(ctx context.Context) *resolvedAuthScheme {
+	v, _ := middleware.GetStackValue(ctx, resolvedAuthSchemeKey{}).(*resolvedAuthScheme)
+	return v
+}
+
+type getIdentityMiddleware struct {
+	options Options
+}
+
+func (*getIdentityMiddleware) ID() string {
+	return "GetIdentity"
+}
+
+func (m *getIdentityMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	rscheme := getResolvedAuthScheme(ctx)
+	if rscheme == nil {
+		return out, metadata, fmt.Errorf("no resolved auth scheme")
+	}
+
+	resolver := rscheme.Scheme.IdentityResolver(m.options)
+	if resolver == nil {
+		return out, metadata, fmt.Errorf("no identity resolver")
+	}
+
+	identity, err := resolver.GetIdentity(ctx, rscheme.IdentityProperties)
+	if err != nil {
+		return out, metadata, fmt.Errorf("get identity: %w", err)
+	}
+
+	ctx = setIdentity(ctx, identity)
+	return next.HandleFinalize(ctx, in)
+}
+
+type identityKey struct{}
+
+func setIdentity(ctx context.Context, identity smithyauth.Identity) context.Context {
+	return middleware.WithStackValue(ctx, identityKey{}, identity)
+}
+
+func getIdentity(ctx context.Context) smithyauth.Identity {
+	v, _ := middleware.GetStackValue(ctx, identityKey{}).(smithyauth.Identity)
+	return v
+}
+
+type signRequestMiddleware struct {
+}
+
+func (*signRequestMiddleware) ID() string {
+	return "Signing"
+}
+
+func (m *signRequestMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unexpected transport type %T", in.Request)
+	}
+
+	rscheme := getResolvedAuthScheme(ctx)
+	if rscheme == nil {
+		return out, metadata, fmt.Errorf("no resolved auth scheme")
+	}
+
+	identity := getIdentity(ctx)
+	if identity == nil {
+		return out, metadata, fmt.Errorf("no identity")
+	}
+
+	signer := rscheme.Scheme.Signer()
+	if signer == nil {
+		return out, metadata, fmt.Errorf("no signer")
+	}
+
+	if err := signer.SignRequest(ctx, req, identity, rscheme.SignerProperties); err != nil {
+		return out, metadata, fmt.Errorf("sign request: %w", err)
+	}
+
+	return next.HandleFinalize(ctx, in)
+}

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/deserializers.go 🔗

@@ -0,0 +1,2167 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package ssooidc
+
+import (
+	"bytes"
+	"context"
+	"encoding/json"
+	"fmt"
+	"github.com/aws/aws-sdk-go-v2/aws/protocol/restjson"
+	"github.com/aws/aws-sdk-go-v2/service/ssooidc/types"
+	smithy "github.com/aws/smithy-go"
+	smithyio "github.com/aws/smithy-go/io"
+	"github.com/aws/smithy-go/middleware"
+	"github.com/aws/smithy-go/ptr"
+	smithytime "github.com/aws/smithy-go/time"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"io"
+	"strings"
+	"time"
+)
+
+func deserializeS3Expires(v string) (*time.Time, error) {
+	t, err := smithytime.ParseHTTPDate(v)
+	if err != nil {
+		return nil, nil
+	}
+	return &t, nil
+}
+
+type awsRestjson1_deserializeOpCreateToken struct {
+}
+
+func (*awsRestjson1_deserializeOpCreateToken) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpCreateToken) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsRestjson1_deserializeOpErrorCreateToken(response, &metadata)
+	}
+	output := &CreateTokenOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(response.Body, ringBuffer)
+
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	err = awsRestjson1_deserializeOpDocumentCreateTokenOutput(&output, shape)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorCreateToken(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	headerCode := response.Header.Get("X-Amzn-ErrorType")
+	if len(headerCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(headerCode)
+	}
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	jsonCode, message, err := restjson.GetErrorInfo(decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+	if len(headerCode) == 0 && len(jsonCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(jsonCode)
+	}
+	if len(message) != 0 {
+		errorMessage = message
+	}
+
+	switch {
+	case strings.EqualFold("AccessDeniedException", errorCode):
+		return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+	case strings.EqualFold("AuthorizationPendingException", errorCode):
+		return awsRestjson1_deserializeErrorAuthorizationPendingException(response, errorBody)
+
+	case strings.EqualFold("ExpiredTokenException", errorCode):
+		return awsRestjson1_deserializeErrorExpiredTokenException(response, errorBody)
+
+	case strings.EqualFold("InternalServerException", errorCode):
+		return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+	case strings.EqualFold("InvalidClientException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidClientException(response, errorBody)
+
+	case strings.EqualFold("InvalidGrantException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidGrantException(response, errorBody)
+
+	case strings.EqualFold("InvalidRequestException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+	case strings.EqualFold("InvalidScopeException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidScopeException(response, errorBody)
+
+	case strings.EqualFold("SlowDownException", errorCode):
+		return awsRestjson1_deserializeErrorSlowDownException(response, errorBody)
+
+	case strings.EqualFold("UnauthorizedClientException", errorCode):
+		return awsRestjson1_deserializeErrorUnauthorizedClientException(response, errorBody)
+
+	case strings.EqualFold("UnsupportedGrantTypeException", errorCode):
+		return awsRestjson1_deserializeErrorUnsupportedGrantTypeException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+func awsRestjson1_deserializeOpDocumentCreateTokenOutput(v **CreateTokenOutput, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *CreateTokenOutput
+	if *v == nil {
+		sv = &CreateTokenOutput{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "accessToken":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected AccessToken to be of type string, got %T instead", value)
+				}
+				sv.AccessToken = ptr.String(jtv)
+			}
+
+		case "expiresIn":
+			if value != nil {
+				jtv, ok := value.(json.Number)
+				if !ok {
+					return fmt.Errorf("expected ExpirationInSeconds to be json.Number, got %T instead", value)
+				}
+				i64, err := jtv.Int64()
+				if err != nil {
+					return err
+				}
+				sv.ExpiresIn = int32(i64)
+			}
+
+		case "idToken":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected IdToken to be of type string, got %T instead", value)
+				}
+				sv.IdToken = ptr.String(jtv)
+			}
+
+		case "refreshToken":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected RefreshToken to be of type string, got %T instead", value)
+				}
+				sv.RefreshToken = ptr.String(jtv)
+			}
+
+		case "tokenType":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected TokenType to be of type string, got %T instead", value)
+				}
+				sv.TokenType = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+type awsRestjson1_deserializeOpCreateTokenWithIAM struct {
+}
+
+func (*awsRestjson1_deserializeOpCreateTokenWithIAM) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpCreateTokenWithIAM) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsRestjson1_deserializeOpErrorCreateTokenWithIAM(response, &metadata)
+	}
+	output := &CreateTokenWithIAMOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(response.Body, ringBuffer)
+
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	err = awsRestjson1_deserializeOpDocumentCreateTokenWithIAMOutput(&output, shape)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorCreateTokenWithIAM(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	headerCode := response.Header.Get("X-Amzn-ErrorType")
+	if len(headerCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(headerCode)
+	}
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	jsonCode, message, err := restjson.GetErrorInfo(decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+	if len(headerCode) == 0 && len(jsonCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(jsonCode)
+	}
+	if len(message) != 0 {
+		errorMessage = message
+	}
+
+	switch {
+	case strings.EqualFold("AccessDeniedException", errorCode):
+		return awsRestjson1_deserializeErrorAccessDeniedException(response, errorBody)
+
+	case strings.EqualFold("AuthorizationPendingException", errorCode):
+		return awsRestjson1_deserializeErrorAuthorizationPendingException(response, errorBody)
+
+	case strings.EqualFold("ExpiredTokenException", errorCode):
+		return awsRestjson1_deserializeErrorExpiredTokenException(response, errorBody)
+
+	case strings.EqualFold("InternalServerException", errorCode):
+		return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+	case strings.EqualFold("InvalidClientException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidClientException(response, errorBody)
+
+	case strings.EqualFold("InvalidGrantException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidGrantException(response, errorBody)
+
+	case strings.EqualFold("InvalidRequestException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+	case strings.EqualFold("InvalidRequestRegionException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidRequestRegionException(response, errorBody)
+
+	case strings.EqualFold("InvalidScopeException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidScopeException(response, errorBody)
+
+	case strings.EqualFold("SlowDownException", errorCode):
+		return awsRestjson1_deserializeErrorSlowDownException(response, errorBody)
+
+	case strings.EqualFold("UnauthorizedClientException", errorCode):
+		return awsRestjson1_deserializeErrorUnauthorizedClientException(response, errorBody)
+
+	case strings.EqualFold("UnsupportedGrantTypeException", errorCode):
+		return awsRestjson1_deserializeErrorUnsupportedGrantTypeException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+func awsRestjson1_deserializeOpDocumentCreateTokenWithIAMOutput(v **CreateTokenWithIAMOutput, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *CreateTokenWithIAMOutput
+	if *v == nil {
+		sv = &CreateTokenWithIAMOutput{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "accessToken":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected AccessToken to be of type string, got %T instead", value)
+				}
+				sv.AccessToken = ptr.String(jtv)
+			}
+
+		case "expiresIn":
+			if value != nil {
+				jtv, ok := value.(json.Number)
+				if !ok {
+					return fmt.Errorf("expected ExpirationInSeconds to be json.Number, got %T instead", value)
+				}
+				i64, err := jtv.Int64()
+				if err != nil {
+					return err
+				}
+				sv.ExpiresIn = int32(i64)
+			}
+
+		case "idToken":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected IdToken to be of type string, got %T instead", value)
+				}
+				sv.IdToken = ptr.String(jtv)
+			}
+
+		case "issuedTokenType":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected TokenTypeURI to be of type string, got %T instead", value)
+				}
+				sv.IssuedTokenType = ptr.String(jtv)
+			}
+
+		case "refreshToken":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected RefreshToken to be of type string, got %T instead", value)
+				}
+				sv.RefreshToken = ptr.String(jtv)
+			}
+
+		case "scope":
+			if err := awsRestjson1_deserializeDocumentScopes(&sv.Scope, value); err != nil {
+				return err
+			}
+
+		case "tokenType":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected TokenType to be of type string, got %T instead", value)
+				}
+				sv.TokenType = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+type awsRestjson1_deserializeOpRegisterClient struct {
+}
+
+func (*awsRestjson1_deserializeOpRegisterClient) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpRegisterClient) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsRestjson1_deserializeOpErrorRegisterClient(response, &metadata)
+	}
+	output := &RegisterClientOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(response.Body, ringBuffer)
+
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	err = awsRestjson1_deserializeOpDocumentRegisterClientOutput(&output, shape)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorRegisterClient(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	headerCode := response.Header.Get("X-Amzn-ErrorType")
+	if len(headerCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(headerCode)
+	}
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	jsonCode, message, err := restjson.GetErrorInfo(decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+	if len(headerCode) == 0 && len(jsonCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(jsonCode)
+	}
+	if len(message) != 0 {
+		errorMessage = message
+	}
+
+	switch {
+	case strings.EqualFold("InternalServerException", errorCode):
+		return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+	case strings.EqualFold("InvalidClientMetadataException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidClientMetadataException(response, errorBody)
+
+	case strings.EqualFold("InvalidRedirectUriException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidRedirectUriException(response, errorBody)
+
+	case strings.EqualFold("InvalidRequestException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+	case strings.EqualFold("InvalidScopeException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidScopeException(response, errorBody)
+
+	case strings.EqualFold("UnsupportedGrantTypeException", errorCode):
+		return awsRestjson1_deserializeErrorUnsupportedGrantTypeException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+func awsRestjson1_deserializeOpDocumentRegisterClientOutput(v **RegisterClientOutput, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *RegisterClientOutput
+	if *v == nil {
+		sv = &RegisterClientOutput{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "authorizationEndpoint":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected URI to be of type string, got %T instead", value)
+				}
+				sv.AuthorizationEndpoint = ptr.String(jtv)
+			}
+
+		case "clientId":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ClientId to be of type string, got %T instead", value)
+				}
+				sv.ClientId = ptr.String(jtv)
+			}
+
+		case "clientIdIssuedAt":
+			if value != nil {
+				jtv, ok := value.(json.Number)
+				if !ok {
+					return fmt.Errorf("expected LongTimeStampType to be json.Number, got %T instead", value)
+				}
+				i64, err := jtv.Int64()
+				if err != nil {
+					return err
+				}
+				sv.ClientIdIssuedAt = i64
+			}
+
+		case "clientSecret":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ClientSecret to be of type string, got %T instead", value)
+				}
+				sv.ClientSecret = ptr.String(jtv)
+			}
+
+		case "clientSecretExpiresAt":
+			if value != nil {
+				jtv, ok := value.(json.Number)
+				if !ok {
+					return fmt.Errorf("expected LongTimeStampType to be json.Number, got %T instead", value)
+				}
+				i64, err := jtv.Int64()
+				if err != nil {
+					return err
+				}
+				sv.ClientSecretExpiresAt = i64
+			}
+
+		case "tokenEndpoint":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected URI to be of type string, got %T instead", value)
+				}
+				sv.TokenEndpoint = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+type awsRestjson1_deserializeOpStartDeviceAuthorization struct {
+}
+
+func (*awsRestjson1_deserializeOpStartDeviceAuthorization) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsRestjson1_deserializeOpStartDeviceAuthorization) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsRestjson1_deserializeOpErrorStartDeviceAuthorization(response, &metadata)
+	}
+	output := &StartDeviceAuthorizationOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(response.Body, ringBuffer)
+
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	err = awsRestjson1_deserializeOpDocumentStartDeviceAuthorizationOutput(&output, shape)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body with invalid JSON, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return out, metadata, err
+}
+
+func awsRestjson1_deserializeOpErrorStartDeviceAuthorization(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	headerCode := response.Header.Get("X-Amzn-ErrorType")
+	if len(headerCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(headerCode)
+	}
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	jsonCode, message, err := restjson.GetErrorInfo(decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+	if len(headerCode) == 0 && len(jsonCode) != 0 {
+		errorCode = restjson.SanitizeErrorCode(jsonCode)
+	}
+	if len(message) != 0 {
+		errorMessage = message
+	}
+
+	switch {
+	case strings.EqualFold("InternalServerException", errorCode):
+		return awsRestjson1_deserializeErrorInternalServerException(response, errorBody)
+
+	case strings.EqualFold("InvalidClientException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidClientException(response, errorBody)
+
+	case strings.EqualFold("InvalidRequestException", errorCode):
+		return awsRestjson1_deserializeErrorInvalidRequestException(response, errorBody)
+
+	case strings.EqualFold("SlowDownException", errorCode):
+		return awsRestjson1_deserializeErrorSlowDownException(response, errorBody)
+
+	case strings.EqualFold("UnauthorizedClientException", errorCode):
+		return awsRestjson1_deserializeErrorUnauthorizedClientException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+func awsRestjson1_deserializeOpDocumentStartDeviceAuthorizationOutput(v **StartDeviceAuthorizationOutput, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *StartDeviceAuthorizationOutput
+	if *v == nil {
+		sv = &StartDeviceAuthorizationOutput{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "deviceCode":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected DeviceCode to be of type string, got %T instead", value)
+				}
+				sv.DeviceCode = ptr.String(jtv)
+			}
+
+		case "expiresIn":
+			if value != nil {
+				jtv, ok := value.(json.Number)
+				if !ok {
+					return fmt.Errorf("expected ExpirationInSeconds to be json.Number, got %T instead", value)
+				}
+				i64, err := jtv.Int64()
+				if err != nil {
+					return err
+				}
+				sv.ExpiresIn = int32(i64)
+			}
+
+		case "interval":
+			if value != nil {
+				jtv, ok := value.(json.Number)
+				if !ok {
+					return fmt.Errorf("expected IntervalInSeconds to be json.Number, got %T instead", value)
+				}
+				i64, err := jtv.Int64()
+				if err != nil {
+					return err
+				}
+				sv.Interval = int32(i64)
+			}
+
+		case "userCode":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected UserCode to be of type string, got %T instead", value)
+				}
+				sv.UserCode = ptr.String(jtv)
+			}
+
+		case "verificationUri":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected URI to be of type string, got %T instead", value)
+				}
+				sv.VerificationUri = ptr.String(jtv)
+			}
+
+		case "verificationUriComplete":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected URI to be of type string, got %T instead", value)
+				}
+				sv.VerificationUriComplete = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeErrorAccessDeniedException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.AccessDeniedException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentAccessDeniedException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorAuthorizationPendingException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.AuthorizationPendingException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentAuthorizationPendingException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorExpiredTokenException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.ExpiredTokenException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentExpiredTokenException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorInternalServerException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.InternalServerException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentInternalServerException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorInvalidClientException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.InvalidClientException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentInvalidClientException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorInvalidClientMetadataException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.InvalidClientMetadataException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentInvalidClientMetadataException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorInvalidGrantException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.InvalidGrantException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentInvalidGrantException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorInvalidRedirectUriException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.InvalidRedirectUriException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentInvalidRedirectUriException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorInvalidRequestException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.InvalidRequestException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentInvalidRequestException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorInvalidRequestRegionException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.InvalidRequestRegionException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentInvalidRequestRegionException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorInvalidScopeException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.InvalidScopeException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentInvalidScopeException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorSlowDownException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.SlowDownException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentSlowDownException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorUnauthorizedClientException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.UnauthorizedClientException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentUnauthorizedClientException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeErrorUnsupportedGrantTypeException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.UnsupportedGrantTypeException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+
+	body := io.TeeReader(errorBody, ringBuffer)
+	decoder := json.NewDecoder(body)
+	decoder.UseNumber()
+	var shape interface{}
+	if err := decoder.Decode(&shape); err != nil && err != io.EOF {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	err := awsRestjson1_deserializeDocumentUnsupportedGrantTypeException(&output, shape)
+
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return err
+	}
+
+	errorBody.Seek(0, io.SeekStart)
+
+	return output
+}
+
+func awsRestjson1_deserializeDocumentAccessDeniedException(v **types.AccessDeniedException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.AccessDeniedException
+	if *v == nil {
+		sv = &types.AccessDeniedException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentAuthorizationPendingException(v **types.AuthorizationPendingException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.AuthorizationPendingException
+	if *v == nil {
+		sv = &types.AuthorizationPendingException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentExpiredTokenException(v **types.ExpiredTokenException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.ExpiredTokenException
+	if *v == nil {
+		sv = &types.ExpiredTokenException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentInternalServerException(v **types.InternalServerException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.InternalServerException
+	if *v == nil {
+		sv = &types.InternalServerException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentInvalidClientException(v **types.InvalidClientException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.InvalidClientException
+	if *v == nil {
+		sv = &types.InvalidClientException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentInvalidClientMetadataException(v **types.InvalidClientMetadataException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.InvalidClientMetadataException
+	if *v == nil {
+		sv = &types.InvalidClientMetadataException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentInvalidGrantException(v **types.InvalidGrantException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.InvalidGrantException
+	if *v == nil {
+		sv = &types.InvalidGrantException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentInvalidRedirectUriException(v **types.InvalidRedirectUriException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.InvalidRedirectUriException
+	if *v == nil {
+		sv = &types.InvalidRedirectUriException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentInvalidRequestException(v **types.InvalidRequestException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.InvalidRequestException
+	if *v == nil {
+		sv = &types.InvalidRequestException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentInvalidRequestRegionException(v **types.InvalidRequestRegionException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.InvalidRequestRegionException
+	if *v == nil {
+		sv = &types.InvalidRequestRegionException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "endpoint":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Location to be of type string, got %T instead", value)
+				}
+				sv.Endpoint = ptr.String(jtv)
+			}
+
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		case "region":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Region to be of type string, got %T instead", value)
+				}
+				sv.Region = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentInvalidScopeException(v **types.InvalidScopeException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.InvalidScopeException
+	if *v == nil {
+		sv = &types.InvalidScopeException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentScopes(v *[]string, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.([]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var cv []string
+	if *v == nil {
+		cv = []string{}
+	} else {
+		cv = *v
+	}
+
+	for _, value := range shape {
+		var col string
+		if value != nil {
+			jtv, ok := value.(string)
+			if !ok {
+				return fmt.Errorf("expected Scope to be of type string, got %T instead", value)
+			}
+			col = jtv
+		}
+		cv = append(cv, col)
+
+	}
+	*v = cv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentSlowDownException(v **types.SlowDownException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.SlowDownException
+	if *v == nil {
+		sv = &types.SlowDownException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentUnauthorizedClientException(v **types.UnauthorizedClientException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.UnauthorizedClientException
+	if *v == nil {
+		sv = &types.UnauthorizedClientException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}
+
+func awsRestjson1_deserializeDocumentUnsupportedGrantTypeException(v **types.UnsupportedGrantTypeException, value interface{}) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	if value == nil {
+		return nil
+	}
+
+	shape, ok := value.(map[string]interface{})
+	if !ok {
+		return fmt.Errorf("unexpected JSON type %v", value)
+	}
+
+	var sv *types.UnsupportedGrantTypeException
+	if *v == nil {
+		sv = &types.UnsupportedGrantTypeException{}
+	} else {
+		sv = *v
+	}
+
+	for key, value := range shape {
+		switch key {
+		case "error":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected Error to be of type string, got %T instead", value)
+				}
+				sv.Error_ = ptr.String(jtv)
+			}
+
+		case "error_description":
+			if value != nil {
+				jtv, ok := value.(string)
+				if !ok {
+					return fmt.Errorf("expected ErrorDescription to be of type string, got %T instead", value)
+				}
+				sv.Error_description = ptr.String(jtv)
+			}
+
+		default:
+			_, _ = key, value
+
+		}
+	}
+	*v = sv
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/doc.go

@@ -0,0 +1,46 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+// Package ssooidc provides the API client, operations, and parameter types for
+// AWS SSO OIDC.
+//
+// IAM Identity Center OpenID Connect (OIDC) is a web service that enables a
+// client (such as CLI or a native application) to register with IAM Identity
+// Center. The service also enables the client to fetch the user’s access token
+// upon successful authentication and authorization with IAM Identity Center.
+//
+// IAM Identity Center uses the sso and identitystore API namespaces.
+//
+// # Considerations for Using This Guide
+//
+// Before you begin using this guide, we recommend that you first review the
+// following important information about how the IAM Identity Center OIDC service
+// works.
+//
+//   - The IAM Identity Center OIDC service currently implements only the portions
+//     of the OAuth 2.0 Device Authorization Grant standard ([https://tools.ietf.org/html/rfc8628] ) that are necessary to
+//     enable single sign-on authentication with the CLI.
+//
+//   - With older versions of the CLI, the service only emits OIDC access tokens,
+//     so to obtain a new token, users must explicitly re-authenticate. To access the
+//     OIDC flow that supports token refresh and doesn’t require re-authentication,
+//     update to the latest CLI version (1.27.10 for CLI V1 and 2.9.0 for CLI V2) with
+//     support for OIDC token refresh and configurable IAM Identity Center session
+//     durations. For more information, see [Configure Amazon Web Services access portal session duration].
+//
+//   - The access tokens provided by this service grant access to all Amazon Web
+//     Services account entitlements assigned to an IAM Identity Center user, not just
+//     a particular application.
+//
+//   - The documentation in this guide does not describe the mechanism to convert
+//     the access token into Amazon Web Services Auth (“sigv4”) credentials for use
+//     with IAM-protected Amazon Web Services service endpoints. For more information,
+//     see [GetRoleCredentials]in the IAM Identity Center Portal API Reference Guide.
+//
+// For general information about IAM Identity Center, see [What is IAM Identity Center?] in the IAM Identity
+// Center User Guide.
+//
+// [Configure Amazon Web Services access portal session duration]: https://docs.aws.amazon.com/singlesignon/latest/userguide/configure-user-session.html
+// [GetRoleCredentials]: https://docs.aws.amazon.com/singlesignon/latest/PortalAPIReference/API_GetRoleCredentials.html
+// [https://tools.ietf.org/html/rfc8628]: https://tools.ietf.org/html/rfc8628
+// [What is IAM Identity Center?]: https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html
+package ssooidc

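The package doc above notes that the service implements the OAuth 2.0 Device Authorization Grant (RFC 8628): the client polls the token endpoint until the user completes authorization in a browser. A hedged sketch of that polling loop, with a stub standing in for ssooidc's `CreateToken` call (the `tokenResult` type and `pollForToken` helper are illustrative, not SDK API):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// tokenResult is a trimmed, hypothetical stand-in for the CreateToken output;
// the real client returns *ssooidc.CreateTokenOutput.
type tokenResult struct {
	AccessToken string
}

// errAuthorizationPending models the OAuth "authorization_pending" error the
// token endpoint returns until the user finishes the browser step.
var errAuthorizationPending = errors.New("authorization_pending")

// pollForToken retries a token request at a fixed interval, as the RFC 8628
// device flow prescribes. createToken stands in for calling CreateToken with
// the device-code grant type.
func pollForToken(createToken func() (*tokenResult, error), interval time.Duration, maxTries int) (*tokenResult, error) {
	for i := 0; i < maxTries; i++ {
		tok, err := createToken()
		if err == nil {
			return tok, nil
		}
		if !errors.Is(err, errAuthorizationPending) {
			return nil, err // a real failure, e.g. unsupported_grant_type
		}
		time.Sleep(interval)
	}
	return nil, errors.New("device authorization timed out")
}

func main() {
	// Stub endpoint: pending twice, then authorized.
	calls := 0
	stub := func() (*tokenResult, error) {
		calls++
		if calls < 3 {
			return nil, errAuthorizationPending
		}
		return &tokenResult{AccessToken: "demo-token"}, nil
	}
	tok, err := pollForToken(stub, time.Millisecond, 10)
	fmt.Println(err == nil, tok.AccessToken)
}
```

In a real integration the interval and device code come from `StartDeviceAuthorization`, and the loop should also honor the `slow_down` error by backing off.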
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/endpoints.go 🔗

@@ -0,0 +1,550 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package ssooidc
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	internalConfig "github.com/aws/aws-sdk-go-v2/internal/configsources"
+	"github.com/aws/aws-sdk-go-v2/internal/endpoints"
+	"github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn"
+	internalendpoints "github.com/aws/aws-sdk-go-v2/service/ssooidc/internal/endpoints"
+	smithyauth "github.com/aws/smithy-go/auth"
+	smithyendpoints "github.com/aws/smithy-go/endpoints"
+	"github.com/aws/smithy-go/middleware"
+	"github.com/aws/smithy-go/ptr"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"net/http"
+	"net/url"
+	"os"
+	"strings"
+)
+
+// EndpointResolverOptions is the service endpoint resolver options
+type EndpointResolverOptions = internalendpoints.Options
+
+// EndpointResolver interface for resolving service endpoints.
+type EndpointResolver interface {
+	ResolveEndpoint(region string, options EndpointResolverOptions) (aws.Endpoint, error)
+}
+
+var _ EndpointResolver = &internalendpoints.Resolver{}
+
+// NewDefaultEndpointResolver constructs a new service endpoint resolver
+func NewDefaultEndpointResolver() *internalendpoints.Resolver {
+	return internalendpoints.New()
+}
+
+// EndpointResolverFunc is a helper utility that wraps a function so it satisfies
+// the EndpointResolver interface. This is useful when you want to add additional
+// endpoint resolving logic, or stub out specific endpoints with custom values.
+type EndpointResolverFunc func(region string, options EndpointResolverOptions) (aws.Endpoint, error)
+
+func (fn EndpointResolverFunc) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error) {
+	return fn(region, options)
+}
+
+// EndpointResolverFromURL returns an EndpointResolver configured using the
+// provided endpoint url. By default, the resolved endpoint resolver uses the
+// client region as signing region, and the endpoint source is set to
+// EndpointSourceCustom. You can provide functional options to configure endpoint
+// values for the resolved endpoint.
+func EndpointResolverFromURL(url string, optFns ...func(*aws.Endpoint)) EndpointResolver {
+	e := aws.Endpoint{URL: url, Source: aws.EndpointSourceCustom}
+	for _, fn := range optFns {
+		fn(&e)
+	}
+
+	return EndpointResolverFunc(
+		func(region string, options EndpointResolverOptions) (aws.Endpoint, error) {
+			if len(e.SigningRegion) == 0 {
+				e.SigningRegion = region
+			}
+			return e, nil
+		},
+	)
+}
+
+type ResolveEndpoint struct {
+	Resolver EndpointResolver
+	Options  EndpointResolverOptions
+}
+
+func (*ResolveEndpoint) ID() string {
+	return "ResolveEndpoint"
+}
+
+func (m *ResolveEndpoint) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	if !awsmiddleware.GetRequiresLegacyEndpoints(ctx) {
+		return next.HandleSerialize(ctx, in)
+	}
+
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	if m.Resolver == nil {
+		return out, metadata, fmt.Errorf("expected endpoint resolver to not be nil")
+	}
+
+	eo := m.Options
+	eo.Logger = middleware.GetLogger(ctx)
+
+	var endpoint aws.Endpoint
+	endpoint, err = m.Resolver.ResolveEndpoint(awsmiddleware.GetRegion(ctx), eo)
+	if err != nil {
+		nf := (&aws.EndpointNotFoundError{})
+		if errors.As(err, &nf) {
+			ctx = awsmiddleware.SetRequiresLegacyEndpoints(ctx, false)
+			return next.HandleSerialize(ctx, in)
+		}
+		return out, metadata, fmt.Errorf("failed to resolve service endpoint, %w", err)
+	}
+
+	req.URL, err = url.Parse(endpoint.URL)
+	if err != nil {
+		return out, metadata, fmt.Errorf("failed to parse endpoint URL: %w", err)
+	}
+
+	if len(awsmiddleware.GetSigningName(ctx)) == 0 {
+		signingName := endpoint.SigningName
+		if len(signingName) == 0 {
+			signingName = "sso-oauth"
+		}
+		ctx = awsmiddleware.SetSigningName(ctx, signingName)
+	}
+	ctx = awsmiddleware.SetEndpointSource(ctx, endpoint.Source)
+	ctx = smithyhttp.SetHostnameImmutable(ctx, endpoint.HostnameImmutable)
+	ctx = awsmiddleware.SetSigningRegion(ctx, endpoint.SigningRegion)
+	ctx = awsmiddleware.SetPartitionID(ctx, endpoint.PartitionID)
+	return next.HandleSerialize(ctx, in)
+}
+func addResolveEndpointMiddleware(stack *middleware.Stack, o Options) error {
+	return stack.Serialize.Insert(&ResolveEndpoint{
+		Resolver: o.EndpointResolver,
+		Options:  o.EndpointOptions,
+	}, "OperationSerializer", middleware.Before)
+}
+
+func removeResolveEndpointMiddleware(stack *middleware.Stack) error {
+	_, err := stack.Serialize.Remove((&ResolveEndpoint{}).ID())
+	return err
+}
+
+type wrappedEndpointResolver struct {
+	awsResolver aws.EndpointResolverWithOptions
+}
+
+func (w *wrappedEndpointResolver) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error) {
+	return w.awsResolver.ResolveEndpoint(ServiceID, region, options)
+}
+
+type awsEndpointResolverAdaptor func(service, region string) (aws.Endpoint, error)
+
+func (a awsEndpointResolverAdaptor) ResolveEndpoint(service, region string, options ...interface{}) (aws.Endpoint, error) {
+	return a(service, region)
+}
+
+var _ aws.EndpointResolverWithOptions = awsEndpointResolverAdaptor(nil)
+
+// withEndpointResolver returns an aws.EndpointResolverWithOptions that first delegates endpoint resolution to the awsResolver.
+// If awsResolver returns aws.EndpointNotFoundError error, the v1 resolver middleware will swallow the error,
+// and set an appropriate context flag such that fallback will occur when EndpointResolverV2 is invoked
+// via its middleware.
+//
+// If another error (besides aws.EndpointNotFoundError) is returned, then that error will be propagated.
+func withEndpointResolver(awsResolver aws.EndpointResolver, awsResolverWithOptions aws.EndpointResolverWithOptions) EndpointResolver {
+	var resolver aws.EndpointResolverWithOptions
+
+	if awsResolverWithOptions != nil {
+		resolver = awsResolverWithOptions
+	} else if awsResolver != nil {
+		resolver = awsEndpointResolverAdaptor(awsResolver.ResolveEndpoint)
+	}
+
+	return &wrappedEndpointResolver{
+		awsResolver: resolver,
+	}
+}
+
+func finalizeClientEndpointResolverOptions(options *Options) {
+	options.EndpointOptions.LogDeprecated = options.ClientLogMode.IsDeprecatedUsage()
+
+	if len(options.EndpointOptions.ResolvedRegion) == 0 {
+		const fipsInfix = "-fips-"
+		const fipsPrefix = "fips-"
+		const fipsSuffix = "-fips"
+
+		if strings.Contains(options.Region, fipsInfix) ||
+			strings.Contains(options.Region, fipsPrefix) ||
+			strings.Contains(options.Region, fipsSuffix) {
+			options.EndpointOptions.ResolvedRegion = strings.ReplaceAll(strings.ReplaceAll(strings.ReplaceAll(
+				options.Region, fipsInfix, "-"), fipsPrefix, ""), fipsSuffix, "")
+			options.EndpointOptions.UseFIPSEndpoint = aws.FIPSEndpointStateEnabled
+		}
+	}
+
+}
+
+func resolveEndpointResolverV2(options *Options) {
+	if options.EndpointResolverV2 == nil {
+		options.EndpointResolverV2 = NewDefaultEndpointResolverV2()
+	}
+}
+
+func resolveBaseEndpoint(cfg aws.Config, o *Options) {
+	if cfg.BaseEndpoint != nil {
+		o.BaseEndpoint = cfg.BaseEndpoint
+	}
+
+	_, g := os.LookupEnv("AWS_ENDPOINT_URL")
+	_, s := os.LookupEnv("AWS_ENDPOINT_URL_SSO_OIDC")
+
+	if g && !s {
+		return
+	}
+
+	value, found, err := internalConfig.ResolveServiceBaseEndpoint(context.Background(), "SSO OIDC", cfg.ConfigSources)
+	if found && err == nil {
+		o.BaseEndpoint = &value
+	}
+}
+
+func bindRegion(region string) *string {
+	if region == "" {
+		return nil
+	}
+	return aws.String(endpoints.MapFIPSRegion(region))
+}
+
+// EndpointParameters provides the parameters that influence how endpoints are
+// resolved.
+type EndpointParameters struct {
+	// The AWS region used to dispatch the request.
+	//
+	// Parameter is
+	// required.
+	//
+	// AWS::Region
+	Region *string
+
+	// When true, use the dual-stack endpoint. If the configured endpoint does not
+	// support dual-stack, dispatching the request MAY return an error.
+	//
+	// Defaults to
+	// false if no value is provided.
+	//
+	// AWS::UseDualStack
+	UseDualStack *bool
+
+	// When true, send this request to the FIPS-compliant regional endpoint. If the
+	// configured endpoint does not have a FIPS compliant endpoint, dispatching the
+	// request will return an error.
+	//
+	// Defaults to false if no value is
+	// provided.
+	//
+	// AWS::UseFIPS
+	UseFIPS *bool
+
+	// Override the endpoint used to send this request
+	//
+	// Parameter is
+	// required.
+	//
+	// SDK::Endpoint
+	Endpoint *string
+}
+
+// ValidateRequired validates required parameters are set.
+func (p EndpointParameters) ValidateRequired() error {
+	if p.UseDualStack == nil {
+		return fmt.Errorf("parameter UseDualStack is required")
+	}
+
+	if p.UseFIPS == nil {
+		return fmt.Errorf("parameter UseFIPS is required")
+	}
+
+	return nil
+}
+
+// WithDefaults returns a shallow copy of EndpointParameters with default values
+// applied to members where applicable.
+func (p EndpointParameters) WithDefaults() EndpointParameters {
+	if p.UseDualStack == nil {
+		p.UseDualStack = ptr.Bool(false)
+	}
+
+	if p.UseFIPS == nil {
+		p.UseFIPS = ptr.Bool(false)
+	}
+	return p
+}
+
+type stringSlice []string
+
+func (s stringSlice) Get(i int) *string {
+	if i < 0 || i >= len(s) {
+		return nil
+	}
+
+	v := s[i]
+	return &v
+}
+
+// EndpointResolverV2 provides the interface for resolving service endpoints.
+type EndpointResolverV2 interface {
+	// ResolveEndpoint attempts to resolve the endpoint with the provided options,
+	// returning the endpoint if found. Otherwise an error is returned.
+	ResolveEndpoint(ctx context.Context, params EndpointParameters) (
+		smithyendpoints.Endpoint, error,
+	)
+}
+
+// resolver provides the implementation for resolving endpoints.
+type resolver struct{}
+
+func NewDefaultEndpointResolverV2() EndpointResolverV2 {
+	return &resolver{}
+}
+
+// ResolveEndpoint attempts to resolve the endpoint with the provided options,
+// returning the endpoint if found. Otherwise an error is returned.
+func (r *resolver) ResolveEndpoint(
+	ctx context.Context, params EndpointParameters,
+) (
+	endpoint smithyendpoints.Endpoint, err error,
+) {
+	params = params.WithDefaults()
+	if err = params.ValidateRequired(); err != nil {
+		return endpoint, fmt.Errorf("endpoint parameters are not valid, %w", err)
+	}
+	_UseDualStack := *params.UseDualStack
+	_UseFIPS := *params.UseFIPS
+
+	if exprVal := params.Endpoint; exprVal != nil {
+		_Endpoint := *exprVal
+		_ = _Endpoint
+		if _UseFIPS == true {
+			return endpoint, fmt.Errorf("endpoint rule error, %s", "Invalid Configuration: FIPS and custom endpoint are not supported")
+		}
+		if _UseDualStack == true {
+			return endpoint, fmt.Errorf("endpoint rule error, %s", "Invalid Configuration: Dualstack and custom endpoint are not supported")
+		}
+		uriString := _Endpoint
+
+		uri, err := url.Parse(uriString)
+		if err != nil {
+			return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+		}
+
+		return smithyendpoints.Endpoint{
+			URI:     *uri,
+			Headers: http.Header{},
+		}, nil
+	}
+	if exprVal := params.Region; exprVal != nil {
+		_Region := *exprVal
+		_ = _Region
+		if exprVal := awsrulesfn.GetPartition(_Region); exprVal != nil {
+			_PartitionResult := *exprVal
+			_ = _PartitionResult
+			if _UseFIPS == true {
+				if _UseDualStack == true {
+					if true == _PartitionResult.SupportsFIPS {
+						if true == _PartitionResult.SupportsDualStack {
+							uriString := func() string {
+								var out strings.Builder
+								out.WriteString("https://oidc-fips.")
+								out.WriteString(_Region)
+								out.WriteString(".")
+								out.WriteString(_PartitionResult.DualStackDnsSuffix)
+								return out.String()
+							}()
+
+							uri, err := url.Parse(uriString)
+							if err != nil {
+								return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+							}
+
+							return smithyendpoints.Endpoint{
+								URI:     *uri,
+								Headers: http.Header{},
+							}, nil
+						}
+					}
+					return endpoint, fmt.Errorf("endpoint rule error, %s", "FIPS and DualStack are enabled, but this partition does not support one or both")
+				}
+			}
+			if _UseFIPS == true {
+				if _PartitionResult.SupportsFIPS == true {
+					if _PartitionResult.Name == "aws-us-gov" {
+						uriString := func() string {
+							var out strings.Builder
+							out.WriteString("https://oidc.")
+							out.WriteString(_Region)
+							out.WriteString(".amazonaws.com")
+							return out.String()
+						}()
+
+						uri, err := url.Parse(uriString)
+						if err != nil {
+							return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+						}
+
+						return smithyendpoints.Endpoint{
+							URI:     *uri,
+							Headers: http.Header{},
+						}, nil
+					}
+					uriString := func() string {
+						var out strings.Builder
+						out.WriteString("https://oidc-fips.")
+						out.WriteString(_Region)
+						out.WriteString(".")
+						out.WriteString(_PartitionResult.DnsSuffix)
+						return out.String()
+					}()
+
+					uri, err := url.Parse(uriString)
+					if err != nil {
+						return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+					}
+
+					return smithyendpoints.Endpoint{
+						URI:     *uri,
+						Headers: http.Header{},
+					}, nil
+				}
+				return endpoint, fmt.Errorf("endpoint rule error, %s", "FIPS is enabled but this partition does not support FIPS")
+			}
+			if _UseDualStack == true {
+				if true == _PartitionResult.SupportsDualStack {
+					uriString := func() string {
+						var out strings.Builder
+						out.WriteString("https://oidc.")
+						out.WriteString(_Region)
+						out.WriteString(".")
+						out.WriteString(_PartitionResult.DualStackDnsSuffix)
+						return out.String()
+					}()
+
+					uri, err := url.Parse(uriString)
+					if err != nil {
+						return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+					}
+
+					return smithyendpoints.Endpoint{
+						URI:     *uri,
+						Headers: http.Header{},
+					}, nil
+				}
+				return endpoint, fmt.Errorf("endpoint rule error, %s", "DualStack is enabled but this partition does not support DualStack")
+			}
+			uriString := func() string {
+				var out strings.Builder
+				out.WriteString("https://oidc.")
+				out.WriteString(_Region)
+				out.WriteString(".")
+				out.WriteString(_PartitionResult.DnsSuffix)
+				return out.String()
+			}()
+
+			uri, err := url.Parse(uriString)
+			if err != nil {
+				return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+			}
+
+			return smithyendpoints.Endpoint{
+				URI:     *uri,
+				Headers: http.Header{},
+			}, nil
+		}
+		return endpoint, fmt.Errorf("Endpoint resolution failed. Invalid operation or environment input.")
+	}
+	return endpoint, fmt.Errorf("endpoint rule error, %s", "Invalid Configuration: Missing Region")
+}
+
+type endpointParamsBinder interface {
+	bindEndpointParams(*EndpointParameters)
+}
+
+func bindEndpointParams(ctx context.Context, input interface{}, options Options) *EndpointParameters {
+	params := &EndpointParameters{}
+
+	params.Region = bindRegion(options.Region)
+	params.UseDualStack = aws.Bool(options.EndpointOptions.UseDualStackEndpoint == aws.DualStackEndpointStateEnabled)
+	params.UseFIPS = aws.Bool(options.EndpointOptions.UseFIPSEndpoint == aws.FIPSEndpointStateEnabled)
+	params.Endpoint = options.BaseEndpoint
+
+	if b, ok := input.(endpointParamsBinder); ok {
+		b.bindEndpointParams(params)
+	}
+
+	return params
+}
+
+type resolveEndpointV2Middleware struct {
+	options Options
+}
+
+func (*resolveEndpointV2Middleware) ID() string {
+	return "ResolveEndpointV2"
+}
+
+func (m *resolveEndpointV2Middleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	if awsmiddleware.GetRequiresLegacyEndpoints(ctx) {
+		return next.HandleFinalize(ctx, in)
+	}
+
+	if err := checkAccountID(getIdentity(ctx), m.options.AccountIDEndpointMode); err != nil {
+		return out, metadata, fmt.Errorf("invalid accountID set: %w", err)
+	}
+
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	if m.options.EndpointResolverV2 == nil {
+		return out, metadata, fmt.Errorf("expected endpoint resolver to not be nil")
+	}
+
+	params := bindEndpointParams(ctx, getOperationInput(ctx), m.options)
+	endpt, err := m.options.EndpointResolverV2.ResolveEndpoint(ctx, *params)
+	if err != nil {
+		return out, metadata, fmt.Errorf("failed to resolve service endpoint, %w", err)
+	}
+
+	if endpt.URI.RawPath == "" && req.URL.RawPath != "" {
+		endpt.URI.RawPath = endpt.URI.Path
+	}
+	req.URL.Scheme = endpt.URI.Scheme
+	req.URL.Host = endpt.URI.Host
+	req.URL.Path = smithyhttp.JoinPath(endpt.URI.Path, req.URL.Path)
+	req.URL.RawPath = smithyhttp.JoinPath(endpt.URI.RawPath, req.URL.RawPath)
+	for k := range endpt.Headers {
+		req.Header.Set(k, endpt.Headers.Get(k))
+	}
+
+	rscheme := getResolvedAuthScheme(ctx)
+	if rscheme == nil {
+		return out, metadata, fmt.Errorf("no resolved auth scheme")
+	}
+
+	opts, _ := smithyauth.GetAuthOptions(&endpt.Properties)
+	for _, o := range opts {
+		rscheme.SignerProperties.SetAll(&o.SignerProperties)
+	}
+
+	return next.HandleFinalize(ctx, in)
+}

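The rule cascade in `ResolveEndpoint` above reduces, for the common partitions, to selecting one of four hostname templates from the FIPS and dual-stack flags. A simplified sketch of that selection (hypothetical `buildOIDCEndpoint` helper; it omits the partition-support checks and the `aws-us-gov` FIPS special case the generated code handles):

```go
package main

import "fmt"

// buildOIDCEndpoint mirrors the default branch of the endpoint rules:
// FIPS and dual-stack flags pick the host prefix and DNS suffix.
func buildOIDCEndpoint(region, dnsSuffix, dualStackSuffix string, useFIPS, useDualStack bool) string {
	switch {
	case useFIPS && useDualStack:
		return "https://oidc-fips." + region + "." + dualStackSuffix
	case useFIPS:
		return "https://oidc-fips." + region + "." + dnsSuffix
	case useDualStack:
		return "https://oidc." + region + "." + dualStackSuffix
	default:
		return "https://oidc." + region + "." + dnsSuffix
	}
}

func main() {
	// Suffixes for the "aws" partition, matching the defaults table below.
	fmt.Println(buildOIDCEndpoint("us-east-1", "amazonaws.com", "api.aws", false, false))
	fmt.Println(buildOIDCEndpoint("us-east-1", "amazonaws.com", "api.aws", true, true))
}
```

The generated resolver additionally validates that the partition actually supports the requested variant and returns a rule error otherwise, rather than guessing a hostname.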
vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/generated.json 🔗

@@ -0,0 +1,35 @@
+{
+    "dependencies": {
+        "github.com/aws/aws-sdk-go-v2": "v1.4.0",
+        "github.com/aws/aws-sdk-go-v2/internal/configsources": "v0.0.0-00010101000000-000000000000",
+        "github.com/aws/aws-sdk-go-v2/internal/endpoints/v2": "v2.0.0-00010101000000-000000000000",
+        "github.com/aws/smithy-go": "v1.4.0"
+    },
+    "files": [
+        "api_client.go",
+        "api_client_test.go",
+        "api_op_CreateToken.go",
+        "api_op_CreateTokenWithIAM.go",
+        "api_op_RegisterClient.go",
+        "api_op_StartDeviceAuthorization.go",
+        "auth.go",
+        "deserializers.go",
+        "doc.go",
+        "endpoints.go",
+        "endpoints_config_test.go",
+        "endpoints_test.go",
+        "generated.json",
+        "internal/endpoints/endpoints.go",
+        "internal/endpoints/endpoints_test.go",
+        "options.go",
+        "protocol_test.go",
+        "serializers.go",
+        "snapshot_test.go",
+        "types/errors.go",
+        "types/types.go",
+        "validators.go"
+    ],
+    "go": "1.15",
+    "module": "github.com/aws/aws-sdk-go-v2/service/ssooidc",
+    "unstable": false
+}

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/internal/endpoints/endpoints.go 🔗

@@ -0,0 +1,566 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package endpoints
+
+import (
+	"github.com/aws/aws-sdk-go-v2/aws"
+	endpoints "github.com/aws/aws-sdk-go-v2/internal/endpoints/v2"
+	"github.com/aws/smithy-go/logging"
+	"regexp"
+)
+
+// Options is the endpoint resolver configuration options
+type Options struct {
+	// Logger is a logging implementation that log events should be sent to.
+	Logger logging.Logger
+
+	// LogDeprecated indicates that deprecated endpoints should be logged to the
+	// provided logger.
+	LogDeprecated bool
+
+	// ResolvedRegion is used to override the region to be resolved, rather than
+	// using the value passed to the ResolveEndpoint method. This value is used by the
+	// SDK to translate regions like fips-us-east-1 or us-east-1-fips to an alternative
+	// name. You must not set this value directly in your application.
+	ResolvedRegion string
+
+	// DisableHTTPS informs the resolver to return an endpoint that does not use the
+	// HTTPS scheme.
+	DisableHTTPS bool
+
+	// UseDualStackEndpoint specifies the resolver must resolve a dual-stack endpoint.
+	UseDualStackEndpoint aws.DualStackEndpointState
+
+	// UseFIPSEndpoint specifies the resolver must resolve a FIPS endpoint.
+	UseFIPSEndpoint aws.FIPSEndpointState
+}
+
+func (o Options) GetResolvedRegion() string {
+	return o.ResolvedRegion
+}
+
+func (o Options) GetDisableHTTPS() bool {
+	return o.DisableHTTPS
+}
+
+func (o Options) GetUseDualStackEndpoint() aws.DualStackEndpointState {
+	return o.UseDualStackEndpoint
+}
+
+func (o Options) GetUseFIPSEndpoint() aws.FIPSEndpointState {
+	return o.UseFIPSEndpoint
+}
+
+func transformToSharedOptions(options Options) endpoints.Options {
+	return endpoints.Options{
+		Logger:               options.Logger,
+		LogDeprecated:        options.LogDeprecated,
+		ResolvedRegion:       options.ResolvedRegion,
+		DisableHTTPS:         options.DisableHTTPS,
+		UseDualStackEndpoint: options.UseDualStackEndpoint,
+		UseFIPSEndpoint:      options.UseFIPSEndpoint,
+	}
+}
+
+// Resolver SSO OIDC endpoint resolver
+type Resolver struct {
+	partitions endpoints.Partitions
+}
+
+// ResolveEndpoint resolves the service endpoint for the given region and options
+func (r *Resolver) ResolveEndpoint(region string, options Options) (endpoint aws.Endpoint, err error) {
+	if len(region) == 0 {
+		return endpoint, &aws.MissingRegionError{}
+	}
+
+	opt := transformToSharedOptions(options)
+	return r.partitions.ResolveEndpoint(region, opt)
+}
+
+// New returns a new Resolver
+func New() *Resolver {
+	return &Resolver{
+		partitions: defaultPartitions,
+	}
+}
+
+var partitionRegexp = struct {
+	Aws      *regexp.Regexp
+	AwsCn    *regexp.Regexp
+	AwsIso   *regexp.Regexp
+	AwsIsoB  *regexp.Regexp
+	AwsIsoE  *regexp.Regexp
+	AwsIsoF  *regexp.Regexp
+	AwsUsGov *regexp.Regexp
+}{
+
+	Aws:      regexp.MustCompile("^(us|eu|ap|sa|ca|me|af|il)\\-\\w+\\-\\d+$"),
+	AwsCn:    regexp.MustCompile("^cn\\-\\w+\\-\\d+$"),
+	AwsIso:   regexp.MustCompile("^us\\-iso\\-\\w+\\-\\d+$"),
+	AwsIsoB:  regexp.MustCompile("^us\\-isob\\-\\w+\\-\\d+$"),
+	AwsIsoE:  regexp.MustCompile("^eu\\-isoe\\-\\w+\\-\\d+$"),
+	AwsIsoF:  regexp.MustCompile("^us\\-isof\\-\\w+\\-\\d+$"),
+	AwsUsGov: regexp.MustCompile("^us\\-gov\\-\\w+\\-\\d+$"),
+}
+
+var defaultPartitions = endpoints.Partitions{
+	{
+		ID: "aws",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.DualStackVariant,
+			}: {
+				Hostname:          "oidc.{region}.api.aws",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "oidc-fips.{region}.amazonaws.com",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+			}: {
+				Hostname:          "oidc-fips.{region}.api.aws",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "oidc.{region}.amazonaws.com",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.Aws,
+		IsRegionalized: true,
+		Endpoints: endpoints.Endpoints{
+			endpoints.EndpointKey{
+				Region: "af-south-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.af-south-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "af-south-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-east-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.ap-east-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-east-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-northeast-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.ap-northeast-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-northeast-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-northeast-2",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.ap-northeast-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-northeast-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-northeast-3",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.ap-northeast-3.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-northeast-3",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-south-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.ap-south-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-south-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-south-2",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.ap-south-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-south-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-southeast-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.ap-southeast-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-southeast-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-southeast-2",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.ap-southeast-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-southeast-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-southeast-3",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.ap-southeast-3.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-southeast-3",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ap-southeast-4",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.ap-southeast-4.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ap-southeast-4",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ca-central-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.ca-central-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ca-central-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ca-west-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.ca-west-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "ca-west-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-central-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.eu-central-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-central-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-central-2",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.eu-central-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-central-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-north-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.eu-north-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-north-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-south-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.eu-south-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-south-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-south-2",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.eu-south-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-south-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-west-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.eu-west-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-west-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-west-2",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.eu-west-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-west-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "eu-west-3",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.eu-west-3.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "eu-west-3",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "il-central-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.il-central-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "il-central-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "me-central-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.me-central-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "me-central-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "me-south-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.me-south-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "me-south-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "sa-east-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.sa-east-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "sa-east-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "us-east-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.us-east-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-east-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "us-east-2",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.us-east-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-east-2",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "us-west-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.us-west-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-west-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "us-west-2",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.us-west-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-west-2",
+				},
+			},
+		},
+	},
+	{
+		ID: "aws-cn",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.DualStackVariant,
+			}: {
+				Hostname:          "oidc.{region}.api.amazonwebservices.com.cn",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "oidc-fips.{region}.amazonaws.com.cn",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+			}: {
+				Hostname:          "oidc-fips.{region}.api.amazonwebservices.com.cn",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "oidc.{region}.amazonaws.com.cn",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsCn,
+		IsRegionalized: true,
+		Endpoints: endpoints.Endpoints{
+			endpoints.EndpointKey{
+				Region: "cn-north-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.cn-north-1.amazonaws.com.cn",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "cn-north-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "cn-northwest-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.cn-northwest-1.amazonaws.com.cn",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "cn-northwest-1",
+				},
+			},
+		},
+	},
+	{
+		ID: "aws-iso",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "oidc-fips.{region}.c2s.ic.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "oidc.{region}.c2s.ic.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsIso,
+		IsRegionalized: true,
+	},
+	{
+		ID: "aws-iso-b",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "oidc-fips.{region}.sc2s.sgov.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "oidc.{region}.sc2s.sgov.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsIsoB,
+		IsRegionalized: true,
+	},
+	{
+		ID: "aws-iso-e",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "oidc-fips.{region}.cloud.adc-e.uk",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "oidc.{region}.cloud.adc-e.uk",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsIsoE,
+		IsRegionalized: true,
+	},
+	{
+		ID: "aws-iso-f",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "oidc-fips.{region}.csp.hci.ic.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "oidc.{region}.csp.hci.ic.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsIsoF,
+		IsRegionalized: true,
+	},
+	{
+		ID: "aws-us-gov",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.DualStackVariant,
+			}: {
+				Hostname:          "oidc.{region}.api.aws",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "oidc-fips.{region}.amazonaws.com",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+			}: {
+				Hostname:          "oidc-fips.{region}.api.aws",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "oidc.{region}.amazonaws.com",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsUsGov,
+		IsRegionalized: true,
+		Endpoints: endpoints.Endpoints{
+			endpoints.EndpointKey{
+				Region: "us-gov-east-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.us-gov-east-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-gov-east-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "us-gov-west-1",
+			}: endpoints.Endpoint{
+				Hostname: "oidc.us-gov-west-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-gov-west-1",
+				},
+			},
+		},
+	},
+}

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/options.go 🔗

@@ -0,0 +1,227 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package ssooidc
+
+import (
+	"context"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	internalauthsmithy "github.com/aws/aws-sdk-go-v2/internal/auth/smithy"
+	smithyauth "github.com/aws/smithy-go/auth"
+	"github.com/aws/smithy-go/logging"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"net/http"
+)
+
+type HTTPClient interface {
+	Do(*http.Request) (*http.Response, error)
+}
+
+type Options struct {
+	// Set of options to modify how an operation is invoked. These apply to all
+	// operations invoked for this client. Use functional options on the operation
+	// call to modify this list for per-operation behavior.
+	APIOptions []func(*middleware.Stack) error
+
+	// Indicates how the AWS account ID is applied in endpoint 2.0 routing.
+	AccountIDEndpointMode aws.AccountIDEndpointMode
+
+	// The optional application specific identifier appended to the User-Agent header.
+	AppID string
+
+	// This endpoint will be given as input to an EndpointResolverV2. It is used for
+	// providing a custom base endpoint that is subject to modifications by the
+	// processing EndpointResolverV2.
+	BaseEndpoint *string
+
+	// Configures the events that will be sent to the configured logger.
+	ClientLogMode aws.ClientLogMode
+
+	// The credentials object to use when signing requests.
+	Credentials aws.CredentialsProvider
+
+	// The configuration DefaultsMode that the SDK should use when constructing the
+	// client's initial default settings.
+	DefaultsMode aws.DefaultsMode
+
+	// The endpoint options to be used when attempting to resolve an endpoint.
+	EndpointOptions EndpointResolverOptions
+
+	// The service endpoint resolver.
+	//
+	// Deprecated: EndpointResolver and WithEndpointResolver are deprecated.
+	// Providing a value for this field will likely prevent you from using any
+	// endpoint-related service features released after the introduction of
+	// EndpointResolverV2 and BaseEndpoint.
+	//
+	// To migrate an EndpointResolver implementation that uses a custom endpoint, set
+	// the client option BaseEndpoint instead.
+	EndpointResolver EndpointResolver
+
+	// Resolves the endpoint used for a particular service operation. This should be
+	// used over the deprecated EndpointResolver.
+	EndpointResolverV2 EndpointResolverV2
+
+	// Signature Version 4 (SigV4) Signer
+	HTTPSignerV4 HTTPSignerV4
+
+	// The logger writer interface to write logging messages to.
+	Logger logging.Logger
+
+	// The region to send requests to. (Required)
+	Region string
+
+	// RetryMaxAttempts specifies the maximum number of attempts an API client will
+	// make when calling an operation that fails with a retryable error. A value of
+	// 0 is ignored, and will not be used to configure the API client's default
+	// retryer or modify a per-operation call's retry max attempts.
+	//
+	// If specified in an operation call's functional options with a value that is
+	// different than the constructed client's Options, the Client's Retryer will be
+	// wrapped to use the operation's specific RetryMaxAttempts value.
+	RetryMaxAttempts int
+
+	// RetryMode specifies the retry mode the API client will be created with, if
+	// Retryer option is not also specified.
+	//
+	// When creating a new API client this member will only be used if the Retryer
+	// Options member is nil. This value will be ignored if Retryer is not nil.
+	//
+	// Currently does not support per-operation call overrides; this may change in
+	// the future.
+	RetryMode aws.RetryMode
+
+	// Retryer guides how HTTP requests should be retried in case of recoverable
+	// failures. When nil the API client will use a default retryer. The kind of
+	// default retry created by the API client can be changed with the RetryMode
+	// option.
+	Retryer aws.Retryer
+
+	// The RuntimeEnvironment configuration, only populated if the DefaultsMode is set
+	// to DefaultsModeAuto and is initialized using config.LoadDefaultConfig. You
+	// should not populate this structure programmatically, or rely on the values here
+	// within your applications.
+	RuntimeEnvironment aws.RuntimeEnvironment
+
+	// The initial DefaultsMode used when the client options were constructed. If the
+	// DefaultsMode was set to aws.DefaultsModeAuto this will store what the resolved
+	// value was at that point in time.
+	//
+	// Currently does not support per-operation call overrides; this may change in
+	// the future.
+	resolvedDefaultsMode aws.DefaultsMode
+
+	// The HTTP client to invoke API calls with. Defaults to the client's default
+	// HTTP implementation if nil.
+	HTTPClient HTTPClient
+
+	// The auth scheme resolver which determines how to authenticate for each
+	// operation.
+	AuthSchemeResolver AuthSchemeResolver
+
+	// The list of auth schemes supported by the client.
+	AuthSchemes []smithyhttp.AuthScheme
+}
+
+// Copy creates a clone where the APIOptions list is deep copied.
+func (o Options) Copy() Options {
+	to := o
+	to.APIOptions = make([]func(*middleware.Stack) error, len(o.APIOptions))
+	copy(to.APIOptions, o.APIOptions)
+
+	return to
+}
+
+func (o Options) GetIdentityResolver(schemeID string) smithyauth.IdentityResolver {
+	if schemeID == "aws.auth#sigv4" {
+		return getSigV4IdentityResolver(o)
+	}
+	if schemeID == "smithy.api#noAuth" {
+		return &smithyauth.AnonymousIdentityResolver{}
+	}
+	return nil
+}
+
+// WithAPIOptions returns a functional option for setting the Client's APIOptions
+// option.
+func WithAPIOptions(optFns ...func(*middleware.Stack) error) func(*Options) {
+	return func(o *Options) {
+		o.APIOptions = append(o.APIOptions, optFns...)
+	}
+}
+
+// Deprecated: EndpointResolver and WithEndpointResolver are deprecated. Providing
+// a value for this field will likely prevent you from using any endpoint-related
+// service features released after the introduction of EndpointResolverV2 and
+// BaseEndpoint.
+//
+// To migrate an EndpointResolver implementation that uses a custom endpoint, set
+// the client option BaseEndpoint instead.
+func WithEndpointResolver(v EndpointResolver) func(*Options) {
+	return func(o *Options) {
+		o.EndpointResolver = v
+	}
+}
+
+// WithEndpointResolverV2 returns a functional option for setting the Client's
+// EndpointResolverV2 option.
+func WithEndpointResolverV2(v EndpointResolverV2) func(*Options) {
+	return func(o *Options) {
+		o.EndpointResolverV2 = v
+	}
+}
+
+func getSigV4IdentityResolver(o Options) smithyauth.IdentityResolver {
+	if o.Credentials != nil {
+		return &internalauthsmithy.CredentialsProviderAdapter{Provider: o.Credentials}
+	}
+	return nil
+}
+
+// WithSigV4SigningName applies an override to the authentication workflow to
+// use the given signing name for SigV4-authenticated operations.
+//
+// This is an advanced setting. The value here is FINAL, taking precedence over
+// the resolved signing name from both auth scheme resolution and endpoint
+// resolution.
+func WithSigV4SigningName(name string) func(*Options) {
+	fn := func(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+		out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+	) {
+		return next.HandleInitialize(awsmiddleware.SetSigningName(ctx, name), in)
+	}
+	return func(o *Options) {
+		o.APIOptions = append(o.APIOptions, func(s *middleware.Stack) error {
+			return s.Initialize.Add(
+				middleware.InitializeMiddlewareFunc("withSigV4SigningName", fn),
+				middleware.Before,
+			)
+		})
+	}
+}
+
+// WithSigV4SigningRegion applies an override to the authentication workflow to
+// use the given signing region for SigV4-authenticated operations.
+//
+// This is an advanced setting. The value here is FINAL, taking precedence over
+// the resolved signing region from both auth scheme resolution and endpoint
+// resolution.
+func WithSigV4SigningRegion(region string) func(*Options) {
+	fn := func(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+		out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+	) {
+		return next.HandleInitialize(awsmiddleware.SetSigningRegion(ctx, region), in)
+	}
+	return func(o *Options) {
+		o.APIOptions = append(o.APIOptions, func(s *middleware.Stack) error {
+			return s.Initialize.Add(
+				middleware.InitializeMiddlewareFunc("withSigV4SigningRegion", fn),
+				middleware.Before,
+			)
+		})
+	}
+}
+
+func ignoreAnonymousAuth(options *Options) {
+	if aws.IsCredentialsProvider(options.Credentials, (*aws.AnonymousCredentials)(nil)) {
+		options.Credentials = nil
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/serializers.go 🔗

@@ -0,0 +1,487 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package ssooidc
+
+import (
+	"bytes"
+	"context"
+	"fmt"
+	smithy "github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/encoding/httpbinding"
+	smithyjson "github.com/aws/smithy-go/encoding/json"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+type awsRestjson1_serializeOpCreateToken struct {
+}
+
+func (*awsRestjson1_serializeOpCreateToken) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpCreateToken) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*CreateTokenInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	opPath, opQuery := httpbinding.SplitURI("/token")
+	request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+	request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+	request.Method = "POST"
+	var restEncoder *httpbinding.Encoder
+	if request.URL.RawPath == "" {
+		restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	} else {
+		request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+		restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+	}
+
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	restEncoder.SetHeader("Content-Type").String("application/json")
+
+	jsonEncoder := smithyjson.NewEncoder()
+	if err := awsRestjson1_serializeOpDocumentCreateTokenInput(input, jsonEncoder.Value); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsCreateTokenInput(v *CreateTokenInput, encoder *httpbinding.Encoder) error {
+	if v == nil {
+		return fmt.Errorf("unsupported serialization of nil %T", v)
+	}
+
+	return nil
+}
+
+func awsRestjson1_serializeOpDocumentCreateTokenInput(v *CreateTokenInput, value smithyjson.Value) error {
+	object := value.Object()
+	defer object.Close()
+
+	if v.ClientId != nil {
+		ok := object.Key("clientId")
+		ok.String(*v.ClientId)
+	}
+
+	if v.ClientSecret != nil {
+		ok := object.Key("clientSecret")
+		ok.String(*v.ClientSecret)
+	}
+
+	if v.Code != nil {
+		ok := object.Key("code")
+		ok.String(*v.Code)
+	}
+
+	if v.CodeVerifier != nil {
+		ok := object.Key("codeVerifier")
+		ok.String(*v.CodeVerifier)
+	}
+
+	if v.DeviceCode != nil {
+		ok := object.Key("deviceCode")
+		ok.String(*v.DeviceCode)
+	}
+
+	if v.GrantType != nil {
+		ok := object.Key("grantType")
+		ok.String(*v.GrantType)
+	}
+
+	if v.RedirectUri != nil {
+		ok := object.Key("redirectUri")
+		ok.String(*v.RedirectUri)
+	}
+
+	if v.RefreshToken != nil {
+		ok := object.Key("refreshToken")
+		ok.String(*v.RefreshToken)
+	}
+
+	if v.Scope != nil {
+		ok := object.Key("scope")
+		if err := awsRestjson1_serializeDocumentScopes(v.Scope, ok); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+type awsRestjson1_serializeOpCreateTokenWithIAM struct {
+}
+
+func (*awsRestjson1_serializeOpCreateTokenWithIAM) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpCreateTokenWithIAM) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*CreateTokenWithIAMInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	opPath, opQuery := httpbinding.SplitURI("/token?aws_iam=t")
+	request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+	request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+	request.Method = "POST"
+	var restEncoder *httpbinding.Encoder
+	if request.URL.RawPath == "" {
+		restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	} else {
+		request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+		restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+	}
+
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	restEncoder.SetHeader("Content-Type").String("application/json")
+
+	jsonEncoder := smithyjson.NewEncoder()
+	if err := awsRestjson1_serializeOpDocumentCreateTokenWithIAMInput(input, jsonEncoder.Value); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsCreateTokenWithIAMInput(v *CreateTokenWithIAMInput, encoder *httpbinding.Encoder) error {
+	if v == nil {
+		return fmt.Errorf("unsupported serialization of nil %T", v)
+	}
+
+	return nil
+}
+
+func awsRestjson1_serializeOpDocumentCreateTokenWithIAMInput(v *CreateTokenWithIAMInput, value smithyjson.Value) error {
+	object := value.Object()
+	defer object.Close()
+
+	if v.Assertion != nil {
+		ok := object.Key("assertion")
+		ok.String(*v.Assertion)
+	}
+
+	if v.ClientId != nil {
+		ok := object.Key("clientId")
+		ok.String(*v.ClientId)
+	}
+
+	if v.Code != nil {
+		ok := object.Key("code")
+		ok.String(*v.Code)
+	}
+
+	if v.CodeVerifier != nil {
+		ok := object.Key("codeVerifier")
+		ok.String(*v.CodeVerifier)
+	}
+
+	if v.GrantType != nil {
+		ok := object.Key("grantType")
+		ok.String(*v.GrantType)
+	}
+
+	if v.RedirectUri != nil {
+		ok := object.Key("redirectUri")
+		ok.String(*v.RedirectUri)
+	}
+
+	if v.RefreshToken != nil {
+		ok := object.Key("refreshToken")
+		ok.String(*v.RefreshToken)
+	}
+
+	if v.RequestedTokenType != nil {
+		ok := object.Key("requestedTokenType")
+		ok.String(*v.RequestedTokenType)
+	}
+
+	if v.Scope != nil {
+		ok := object.Key("scope")
+		if err := awsRestjson1_serializeDocumentScopes(v.Scope, ok); err != nil {
+			return err
+		}
+	}
+
+	if v.SubjectToken != nil {
+		ok := object.Key("subjectToken")
+		ok.String(*v.SubjectToken)
+	}
+
+	if v.SubjectTokenType != nil {
+		ok := object.Key("subjectTokenType")
+		ok.String(*v.SubjectTokenType)
+	}
+
+	return nil
+}
+
+type awsRestjson1_serializeOpRegisterClient struct {
+}
+
+func (*awsRestjson1_serializeOpRegisterClient) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpRegisterClient) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*RegisterClientInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	opPath, opQuery := httpbinding.SplitURI("/client/register")
+	request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+	request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+	request.Method = "POST"
+	var restEncoder *httpbinding.Encoder
+	if request.URL.RawPath == "" {
+		restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	} else {
+		request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+		restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+	}
+
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	restEncoder.SetHeader("Content-Type").String("application/json")
+
+	jsonEncoder := smithyjson.NewEncoder()
+	if err := awsRestjson1_serializeOpDocumentRegisterClientInput(input, jsonEncoder.Value); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsRegisterClientInput(v *RegisterClientInput, encoder *httpbinding.Encoder) error {
+	if v == nil {
+		return fmt.Errorf("unsupported serialization of nil %T", v)
+	}
+
+	return nil
+}
+
+func awsRestjson1_serializeOpDocumentRegisterClientInput(v *RegisterClientInput, value smithyjson.Value) error {
+	object := value.Object()
+	defer object.Close()
+
+	if v.ClientName != nil {
+		ok := object.Key("clientName")
+		ok.String(*v.ClientName)
+	}
+
+	if v.ClientType != nil {
+		ok := object.Key("clientType")
+		ok.String(*v.ClientType)
+	}
+
+	if v.EntitledApplicationArn != nil {
+		ok := object.Key("entitledApplicationArn")
+		ok.String(*v.EntitledApplicationArn)
+	}
+
+	if v.GrantTypes != nil {
+		ok := object.Key("grantTypes")
+		if err := awsRestjson1_serializeDocumentGrantTypes(v.GrantTypes, ok); err != nil {
+			return err
+		}
+	}
+
+	if v.IssuerUrl != nil {
+		ok := object.Key("issuerUrl")
+		ok.String(*v.IssuerUrl)
+	}
+
+	if v.RedirectUris != nil {
+		ok := object.Key("redirectUris")
+		if err := awsRestjson1_serializeDocumentRedirectUris(v.RedirectUris, ok); err != nil {
+			return err
+		}
+	}
+
+	if v.Scopes != nil {
+		ok := object.Key("scopes")
+		if err := awsRestjson1_serializeDocumentScopes(v.Scopes, ok); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+type awsRestjson1_serializeOpStartDeviceAuthorization struct {
+}
+
+func (*awsRestjson1_serializeOpStartDeviceAuthorization) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsRestjson1_serializeOpStartDeviceAuthorization) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*StartDeviceAuthorizationInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	opPath, opQuery := httpbinding.SplitURI("/device_authorization")
+	request.URL.Path = smithyhttp.JoinPath(request.URL.Path, opPath)
+	request.URL.RawQuery = smithyhttp.JoinRawQuery(request.URL.RawQuery, opQuery)
+	request.Method = "POST"
+	var restEncoder *httpbinding.Encoder
+	if request.URL.RawPath == "" {
+		restEncoder, err = httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	} else {
+		request.URL.RawPath = smithyhttp.JoinPath(request.URL.RawPath, opPath)
+		restEncoder, err = httpbinding.NewEncoderWithRawPath(request.URL.Path, request.URL.RawPath, request.URL.RawQuery, request.Header)
+	}
+
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	restEncoder.SetHeader("Content-Type").String("application/json")
+
+	jsonEncoder := smithyjson.NewEncoder()
+	if err := awsRestjson1_serializeOpDocumentStartDeviceAuthorizationInput(input, jsonEncoder.Value); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request, err = request.SetStream(bytes.NewReader(jsonEncoder.Bytes())); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = restEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+func awsRestjson1_serializeOpHttpBindingsStartDeviceAuthorizationInput(v *StartDeviceAuthorizationInput, encoder *httpbinding.Encoder) error {
+	if v == nil {
+		return fmt.Errorf("unsupported serialization of nil %T", v)
+	}
+
+	return nil
+}
+
+func awsRestjson1_serializeOpDocumentStartDeviceAuthorizationInput(v *StartDeviceAuthorizationInput, value smithyjson.Value) error {
+	object := value.Object()
+	defer object.Close()
+
+	if v.ClientId != nil {
+		ok := object.Key("clientId")
+		ok.String(*v.ClientId)
+	}
+
+	if v.ClientSecret != nil {
+		ok := object.Key("clientSecret")
+		ok.String(*v.ClientSecret)
+	}
+
+	if v.StartUrl != nil {
+		ok := object.Key("startUrl")
+		ok.String(*v.StartUrl)
+	}
+
+	return nil
+}
+
+func awsRestjson1_serializeDocumentGrantTypes(v []string, value smithyjson.Value) error {
+	array := value.Array()
+	defer array.Close()
+
+	for i := range v {
+		av := array.Value()
+		av.String(v[i])
+	}
+	return nil
+}
+
+func awsRestjson1_serializeDocumentRedirectUris(v []string, value smithyjson.Value) error {
+	array := value.Array()
+	defer array.Close()
+
+	for i := range v {
+		av := array.Value()
+		av.String(v[i])
+	}
+	return nil
+}
+
+func awsRestjson1_serializeDocumentScopes(v []string, value smithyjson.Value) error {
+	array := value.Array()
+	defer array.Close()
+
+	for i := range v {
+		av := array.Value()
+		av.String(v[i])
+	}
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/types/errors.go 🔗

@@ -0,0 +1,428 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package types
+
+import (
+	"fmt"
+	smithy "github.com/aws/smithy-go"
+)
+
+// You do not have sufficient access to perform this action.
+type AccessDeniedException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *AccessDeniedException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *AccessDeniedException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *AccessDeniedException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "AccessDeniedException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *AccessDeniedException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that a request to authorize a client with an access user session
+// token is pending.
+type AuthorizationPendingException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *AuthorizationPendingException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *AuthorizationPendingException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *AuthorizationPendingException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "AuthorizationPendingException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *AuthorizationPendingException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that the token issued by the service is expired and is no longer
+// valid.
+type ExpiredTokenException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *ExpiredTokenException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *ExpiredTokenException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *ExpiredTokenException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "ExpiredTokenException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *ExpiredTokenException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that an error from the service occurred while trying to process a
+// request.
+type InternalServerException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *InternalServerException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *InternalServerException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *InternalServerException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "InternalServerException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *InternalServerException) ErrorFault() smithy.ErrorFault { return smithy.FaultServer }
+
+// Indicates that the clientId or clientSecret in the request is invalid. For
+// example, this can occur when a client sends an incorrect clientId or an expired
+// clientSecret.
+type InvalidClientException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *InvalidClientException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *InvalidClientException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *InvalidClientException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "InvalidClientException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *InvalidClientException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that the client information sent in the request during registration
+// is invalid.
+type InvalidClientMetadataException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *InvalidClientMetadataException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *InvalidClientMetadataException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *InvalidClientMetadataException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "InvalidClientMetadataException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *InvalidClientMetadataException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that a request contains an invalid grant. This can occur if a client
+// makes a CreateToken request with an invalid grant type.
+type InvalidGrantException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *InvalidGrantException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *InvalidGrantException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *InvalidGrantException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "InvalidGrantException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *InvalidGrantException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that one or more redirect URIs in the request are not supported for
+// this operation.
+type InvalidRedirectUriException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *InvalidRedirectUriException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *InvalidRedirectUriException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *InvalidRedirectUriException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "InvalidRedirectUriException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *InvalidRedirectUriException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that something is wrong with the input to the request. For example, a
+// required parameter might be missing or out of range.
+type InvalidRequestException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *InvalidRequestException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *InvalidRequestException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *InvalidRequestException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "InvalidRequestException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *InvalidRequestException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that a token provided as input to the request was issued by and is
+// only usable by calling IAM Identity Center endpoints in another region.
+type InvalidRequestRegionException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+	Endpoint          *string
+	Region            *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *InvalidRequestRegionException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *InvalidRequestRegionException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *InvalidRequestRegionException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "InvalidRequestRegionException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *InvalidRequestRegionException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that the scope provided in the request is invalid.
+type InvalidScopeException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *InvalidScopeException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *InvalidScopeException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *InvalidScopeException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "InvalidScopeException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *InvalidScopeException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that the client is making requests too frequently, exceeding what
+// the service can handle.
+type SlowDownException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *SlowDownException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *SlowDownException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *SlowDownException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "SlowDownException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *SlowDownException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that the client is not currently authorized to make the request. This
+// can happen when a clientId is not issued for a public client.
+type UnauthorizedClientException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *UnauthorizedClientException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *UnauthorizedClientException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *UnauthorizedClientException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "UnauthorizedClientException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *UnauthorizedClientException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// Indicates that the grant type in the request is not supported by the service.
+type UnsupportedGrantTypeException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	Error_            *string
+	Error_description *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *UnsupportedGrantTypeException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *UnsupportedGrantTypeException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *UnsupportedGrantTypeException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "UnsupportedGrantTypeException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *UnsupportedGrantTypeException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }

vendor/github.com/aws/aws-sdk-go-v2/service/ssooidc/validators.go 🔗

@@ -0,0 +1,184 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package ssooidc
+
+import (
+	"context"
+	"fmt"
+	smithy "github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/middleware"
+)
+
+type validateOpCreateToken struct {
+}
+
+func (*validateOpCreateToken) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpCreateToken) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*CreateTokenInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpCreateTokenInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+type validateOpCreateTokenWithIAM struct {
+}
+
+func (*validateOpCreateTokenWithIAM) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpCreateTokenWithIAM) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*CreateTokenWithIAMInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpCreateTokenWithIAMInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+type validateOpRegisterClient struct {
+}
+
+func (*validateOpRegisterClient) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpRegisterClient) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*RegisterClientInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpRegisterClientInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+type validateOpStartDeviceAuthorization struct {
+}
+
+func (*validateOpStartDeviceAuthorization) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpStartDeviceAuthorization) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*StartDeviceAuthorizationInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpStartDeviceAuthorizationInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+func addOpCreateTokenValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpCreateToken{}, middleware.After)
+}
+
+func addOpCreateTokenWithIAMValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpCreateTokenWithIAM{}, middleware.After)
+}
+
+func addOpRegisterClientValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpRegisterClient{}, middleware.After)
+}
+
+func addOpStartDeviceAuthorizationValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpStartDeviceAuthorization{}, middleware.After)
+}
+
+func validateOpCreateTokenInput(v *CreateTokenInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "CreateTokenInput"}
+	if v.ClientId == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("ClientId"))
+	}
+	if v.ClientSecret == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("ClientSecret"))
+	}
+	if v.GrantType == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("GrantType"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateOpCreateTokenWithIAMInput(v *CreateTokenWithIAMInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "CreateTokenWithIAMInput"}
+	if v.ClientId == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("ClientId"))
+	}
+	if v.GrantType == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("GrantType"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateOpRegisterClientInput(v *RegisterClientInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "RegisterClientInput"}
+	if v.ClientName == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("ClientName"))
+	}
+	if v.ClientType == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("ClientType"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateOpStartDeviceAuthorizationInput(v *StartDeviceAuthorizationInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "StartDeviceAuthorizationInput"}
+	if v.ClientId == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("ClientId"))
+	}
+	if v.ClientSecret == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("ClientSecret"))
+	}
+	if v.StartUrl == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("StartUrl"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sts/CHANGELOG.md 🔗

@@ -0,0 +1,493 @@
+# v1.30.3 (2024-07-10.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.30.2 (2024-07-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.30.1 (2024-06-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.30.0 (2024-06-26)
+
+* **Feature**: Support list-of-string endpoint parameter.
+
+# v1.29.1 (2024-06-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.29.0 (2024-06-18)
+
+* **Feature**: Track usage of various AWS SDK features in user-agent string.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.28.13 (2024-06-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.28.12 (2024-06-07)
+
+* **Bug Fix**: Add clock skew correction on all service clients
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.28.11 (2024-06-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.28.10 (2024-05-23)
+
+* No change notes available for this release.
+
+# v1.28.9 (2024-05-16)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.28.8 (2024-05-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.28.7 (2024-05-08)
+
+* **Bug Fix**: GoDoc improvement
+
+# v1.28.6 (2024-03-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.28.5 (2024-03-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.28.4 (2024-03-07)
+
+* **Bug Fix**: Remove dependency on go-cmp.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.28.3 (2024-03-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.28.2 (2024-03-04)
+
+* **Bug Fix**: Update internal/presigned-url dependency for corrected API name.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.28.1 (2024-02-23)
+
+* **Bug Fix**: Move all common, SDK-side middleware stack ops into the service client module to prevent cross-module compatibility issues in the future.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.28.0 (2024-02-22)
+
+* **Feature**: Add middleware stack snapshot tests.
+
+# v1.27.2 (2024-02-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.27.1 (2024-02-20)
+
+* **Bug Fix**: When sourcing values for a service's `EndpointParameters`, the lack of a configured region (i.e. `options.Region == ""`) will now translate to a `nil` value for `EndpointParameters.Region` instead of a pointer to the empty string `""`. This will result in a much more explicit error when calling an operation instead of an obscure hostname lookup failure.
+
+# v1.27.0 (2024-02-13)
+
+* **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.7 (2024-01-04)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.6 (2023-12-20)
+
+* No change notes available for this release.
+
+# v1.26.5 (2023-12-08)
+
+* **Bug Fix**: Reinstate presence of default Retryer in functional options, but still respect max attempts set therein.
+
+# v1.26.4 (2023-12-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.3 (2023-12-06)
+
+* **Bug Fix**: Restore pre-refactor auth behavior where all operations could technically be performed anonymously.
+* **Bug Fix**: STS `AssumeRoleWithSAML` and `AssumeRoleWithWebIdentity` would incorrectly attempt to use SigV4 authentication.
+
+# v1.26.2 (2023-12-01)
+
+* **Bug Fix**: Correct wrapping of errors in authentication workflow.
+* **Bug Fix**: Correctly recognize cache-wrapped instances of AnonymousCredentials at client construction.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.1 (2023-11-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.26.0 (2023-11-29)
+
+* **Feature**: Expose Options() accessor on service clients.
+* **Documentation**: Documentation updates for AWS Security Token Service.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.6 (2023-11-28.2)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.5 (2023-11-28)
+
+* **Bug Fix**: Respect setting RetryMaxAttempts in functional options at client construction.
+
+# v1.25.4 (2023-11-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.3 (2023-11-17)
+
+* **Documentation**: API updates for the AWS Security Token Service
+
+# v1.25.2 (2023-11-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.1 (2023-11-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.25.0 (2023-11-01)
+
+* **Feature**: Adds support for configured endpoints via environment variables and the AWS shared configuration file.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.24.0 (2023-10-31)
+
+* **Feature**: **BREAKING CHANGE**: Bump minimum go version to 1.19 per the revised [go version support policy](https://aws.amazon.com/blogs/developer/aws-sdk-for-go-aligns-with-go-release-policy-on-supported-runtimes/).
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.23.2 (2023-10-12)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.23.1 (2023-10-06)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.23.0 (2023-10-02)
+
+* **Feature**: STS API updates for assumeRole
+
+# v1.22.0 (2023-09-18)
+
+* **Announcement**: [BREAKFIX] Change in MaxResults datatype from value to pointer type in cognito-sync service.
+* **Feature**: Adds several endpoint ruleset changes across all models: smaller rulesets, removed non-unique regional endpoints, fixes FIPS and DualStack endpoints, and make region not required in SDK::Endpoint. Additional breakfix to cognito-sync field.
+
+# v1.21.5 (2023-08-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.21.4 (2023-08-18)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.21.3 (2023-08-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.21.2 (2023-08-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.21.1 (2023-08-01)
+
+* No change notes available for this release.
+
+# v1.21.0 (2023-07-31)
+
+* **Feature**: Adds support for smithy-modeled endpoint resolution. A new rules-based endpoint resolution will be added to the SDK which will supercede and deprecate existing endpoint resolution. Specifically, EndpointResolver will be deprecated while BaseEndpoint and EndpointResolverV2 will take its place. For more information, please see the Endpoints section in our Developer Guide.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.1 (2023-07-28)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.20.0 (2023-07-25)
+
+* **Feature**: API updates for the AWS Security Token Service
+
+# v1.19.3 (2023-07-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.19.2 (2023-06-15)
+
+* No change notes available for this release.
+
+# v1.19.1 (2023-06-13)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.19.0 (2023-05-08)
+
+* **Feature**: Documentation updates for AWS Security Token Service.
+
+# v1.18.11 (2023-05-04)
+
+* No change notes available for this release.
+
+# v1.18.10 (2023-04-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.9 (2023-04-10)
+
+* No change notes available for this release.
+
+# v1.18.8 (2023-04-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.7 (2023-03-21)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.6 (2023-03-10)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.5 (2023-02-22)
+
+* **Bug Fix**: Prevent nil pointer dereference when retrieving error codes.
+
+# v1.18.4 (2023-02-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.18.3 (2023-02-03)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+* **Dependency Update**: Upgrade smithy to 1.27.2 and correct empty query list serialization.
+
+# v1.18.2 (2023-01-25)
+
+* **Documentation**: Doc only change to update wording in a key topic
+
+# v1.18.1 (2023-01-23)
+
+* No change notes available for this release.
+
+# v1.18.0 (2023-01-05)
+
+* **Feature**: Add `ErrorCodeOverride` field to all error structs (aws/smithy-go#401).
+
+# v1.17.7 (2022-12-15)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.6 (2022-12-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.5 (2022-11-22)
+
+* No change notes available for this release.
+
+# v1.17.4 (2022-11-17)
+
+* **Documentation**: Documentation updates for AWS Security Token Service.
+
+# v1.17.3 (2022-11-16)
+
+* No change notes available for this release.
+
+# v1.17.2 (2022-11-10)
+
+* No change notes available for this release.
+
+# v1.17.1 (2022-10-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.17.0 (2022-10-21)
+
+* **Feature**: Add presign functionality for sts:AssumeRole operation
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.19 (2022-09-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.18 (2022-09-14)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.17 (2022-09-02)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.16 (2022-08-31)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.15 (2022-08-30)
+
+* No change notes available for this release.
+
+# v1.16.14 (2022-08-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.13 (2022-08-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.12 (2022-08-09)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.11 (2022-08-08)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.10 (2022-08-01)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.9 (2022-07-05)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.8 (2022-06-29)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.7 (2022-06-07)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.6 (2022-05-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.5 (2022-05-16)
+
+* **Documentation**: Documentation updates for AWS Security Token Service.
+
+# v1.16.4 (2022-04-25)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.3 (2022-03-30)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.2 (2022-03-24)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.1 (2022-03-23)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.16.0 (2022-03-08)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Documentation**: Updated service client model to latest release.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.15.0 (2022-02-24)
+
+* **Feature**: API client updated
+* **Feature**: Adds RetryMaxAttempts and RetryMode to API client Options. This allows the API clients' default Retryer to be configured from the shared configuration files or environment variables. Adding a new Retry mode of `Adaptive`. `Adaptive` retry mode is an experimental mode, adding client rate limiting when throttled responses are received from an API. See [retry.AdaptiveMode](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws/retry#AdaptiveMode) for more details, and configuration options.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.14.0 (2022-01-14)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.13.0 (2022-01-07)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.12.0 (2021-12-21)
+
+* **Feature**: Updated to latest service endpoints
+
+# v1.11.1 (2021-12-02)
+
+* **Bug Fix**: Fixes a bug that prevented aws.EndpointResolverWithOptions from being used by the service client. ([#1514](https://github.com/aws/aws-sdk-go-v2/pull/1514))
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.11.0 (2021-11-30)
+
+* **Feature**: API client updated
+
+# v1.10.1 (2021-11-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.10.0 (2021-11-12)
+
+* **Feature**: Service clients now support custom endpoints that have an initial URI path defined.
+
+# v1.9.0 (2021-11-06)
+
+* **Feature**: The SDK now supports configuration of FIPS and DualStack endpoints using environment variables, shared configuration, or programmatically.
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.8.0 (2021-10-21)
+
+* **Feature**: API client updated
+* **Feature**: Updated to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.7.2 (2021-10-11)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.7.1 (2021-09-17)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.7.0 (2021-08-27)
+
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.2 (2021-08-19)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.1 (2021-08-04)
+
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version.
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.6.0 (2021-07-15)
+
+* **Feature**: The ErrorCode method on generated service error types has been corrected to match the API model.
+* **Documentation**: Updated service model to latest revision.
+* **Dependency Update**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.5.0 (2021-06-25)
+
+* **Feature**: API client updated
+* **Feature**: Updated `github.com/aws/smithy-go` to latest version
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.1 (2021-05-20)
+
+* **Dependency Update**: Updated to the latest SDK module versions
+
+# v1.4.0 (2021-05-14)
+
+* **Feature**: Constant has been added to modules to enable runtime version inspection for reporting.
+* **Dependency Update**: Updated to the latest SDK module versions
+

vendor/github.com/aws/aws-sdk-go-v2/service/sts/LICENSE.txt 🔗

@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_client.go

@@ -0,0 +1,779 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"fmt"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	"github.com/aws/aws-sdk-go-v2/aws/defaults"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/aws/protocol/query"
+	"github.com/aws/aws-sdk-go-v2/aws/retry"
+	"github.com/aws/aws-sdk-go-v2/aws/signer/v4"
+	awshttp "github.com/aws/aws-sdk-go-v2/aws/transport/http"
+	internalauth "github.com/aws/aws-sdk-go-v2/internal/auth"
+	internalauthsmithy "github.com/aws/aws-sdk-go-v2/internal/auth/smithy"
+	internalConfig "github.com/aws/aws-sdk-go-v2/internal/configsources"
+	internalmiddleware "github.com/aws/aws-sdk-go-v2/internal/middleware"
+	acceptencodingcust "github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding"
+	presignedurlcust "github.com/aws/aws-sdk-go-v2/service/internal/presigned-url"
+	smithy "github.com/aws/smithy-go"
+	smithyauth "github.com/aws/smithy-go/auth"
+	smithydocument "github.com/aws/smithy-go/document"
+	"github.com/aws/smithy-go/logging"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"net"
+	"net/http"
+	"sync/atomic"
+	"time"
+)
+
+const ServiceID = "STS"
+const ServiceAPIVersion = "2011-06-15"
+
+// Client provides the API client to make operations call for AWS Security Token
+// Service.
+type Client struct {
+	options Options
+
+	// Difference between the time reported by the server and the client
+	timeOffset *atomic.Int64
+}
+
+// New returns an initialized Client based on the functional options. Provide
+// additional functional options to further configure the behavior of the client,
+// such as changing the client's endpoint or adding custom middleware behavior.
+func New(options Options, optFns ...func(*Options)) *Client {
+	options = options.Copy()
+
+	resolveDefaultLogger(&options)
+
+	setResolvedDefaultsMode(&options)
+
+	resolveRetryer(&options)
+
+	resolveHTTPClient(&options)
+
+	resolveHTTPSignerV4(&options)
+
+	resolveEndpointResolverV2(&options)
+
+	resolveAuthSchemeResolver(&options)
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	finalizeRetryMaxAttempts(&options)
+
+	ignoreAnonymousAuth(&options)
+
+	wrapWithAnonymousAuth(&options)
+
+	resolveAuthSchemes(&options)
+
+	client := &Client{
+		options: options,
+	}
+
+	initializeTimeOffsetResolver(client)
+
+	return client
+}
+
+// Options returns a copy of the client configuration.
+//
+// Callers SHOULD NOT perform mutations on any inner structures within client
+// config. Config overrides should instead be made on a per-operation basis through
+// functional options.
+func (c *Client) Options() Options {
+	return c.options.Copy()
+}
+
+func (c *Client) invokeOperation(ctx context.Context, opID string, params interface{}, optFns []func(*Options), stackFns ...func(*middleware.Stack, Options) error) (result interface{}, metadata middleware.Metadata, err error) {
+	ctx = middleware.ClearStackValues(ctx)
+	stack := middleware.NewStack(opID, smithyhttp.NewStackRequest)
+	options := c.options.Copy()
+
+	for _, fn := range optFns {
+		fn(&options)
+	}
+
+	finalizeOperationRetryMaxAttempts(&options, *c)
+
+	finalizeClientEndpointResolverOptions(&options)
+
+	for _, fn := range stackFns {
+		if err := fn(stack, options); err != nil {
+			return nil, metadata, err
+		}
+	}
+
+	for _, fn := range options.APIOptions {
+		if err := fn(stack); err != nil {
+			return nil, metadata, err
+		}
+	}
+
+	handler := middleware.DecorateHandler(smithyhttp.NewClientHandler(options.HTTPClient), stack)
+	result, metadata, err = handler.Handle(ctx, params)
+	if err != nil {
+		err = &smithy.OperationError{
+			ServiceID:     ServiceID,
+			OperationName: opID,
+			Err:           err,
+		}
+	}
+	return result, metadata, err
+}
+
+type operationInputKey struct{}
+
+func setOperationInput(ctx context.Context, input interface{}) context.Context {
+	return middleware.WithStackValue(ctx, operationInputKey{}, input)
+}
+
+func getOperationInput(ctx context.Context) interface{} {
+	return middleware.GetStackValue(ctx, operationInputKey{})
+}
+
+type setOperationInputMiddleware struct {
+}
+
+func (*setOperationInputMiddleware) ID() string {
+	return "setOperationInput"
+}
+
+func (m *setOperationInputMiddleware) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	ctx = setOperationInput(ctx, in.Parameters)
+	return next.HandleSerialize(ctx, in)
+}
+
+func addProtocolFinalizerMiddlewares(stack *middleware.Stack, options Options, operation string) error {
+	if err := stack.Finalize.Add(&resolveAuthSchemeMiddleware{operation: operation, options: options}, middleware.Before); err != nil {
+		return fmt.Errorf("add ResolveAuthScheme: %w", err)
+	}
+	if err := stack.Finalize.Insert(&getIdentityMiddleware{options: options}, "ResolveAuthScheme", middleware.After); err != nil {
+		return fmt.Errorf("add GetIdentity: %v", err)
+	}
+	if err := stack.Finalize.Insert(&resolveEndpointV2Middleware{options: options}, "GetIdentity", middleware.After); err != nil {
+		return fmt.Errorf("add ResolveEndpointV2: %v", err)
+	}
+	if err := stack.Finalize.Insert(&signRequestMiddleware{}, "ResolveEndpointV2", middleware.After); err != nil {
+		return fmt.Errorf("add Signing: %w", err)
+	}
+	return nil
+}
+func resolveAuthSchemeResolver(options *Options) {
+	if options.AuthSchemeResolver == nil {
+		options.AuthSchemeResolver = &defaultAuthSchemeResolver{}
+	}
+}
+
+func resolveAuthSchemes(options *Options) {
+	if options.AuthSchemes == nil {
+		options.AuthSchemes = []smithyhttp.AuthScheme{
+			internalauth.NewHTTPAuthScheme("aws.auth#sigv4", &internalauthsmithy.V4SignerAdapter{
+				Signer:     options.HTTPSignerV4,
+				Logger:     options.Logger,
+				LogSigning: options.ClientLogMode.IsSigning(),
+			}),
+		}
+	}
+}
+
+type noSmithyDocumentSerde = smithydocument.NoSerde
+
+type legacyEndpointContextSetter struct {
+	LegacyResolver EndpointResolver
+}
+
+func (*legacyEndpointContextSetter) ID() string {
+	return "legacyEndpointContextSetter"
+}
+
+func (m *legacyEndpointContextSetter) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	if m.LegacyResolver != nil {
+		ctx = awsmiddleware.SetRequiresLegacyEndpoints(ctx, true)
+	}
+
+	return next.HandleInitialize(ctx, in)
+
+}
+func addlegacyEndpointContextSetter(stack *middleware.Stack, o Options) error {
+	return stack.Initialize.Add(&legacyEndpointContextSetter{
+		LegacyResolver: o.EndpointResolver,
+	}, middleware.Before)
+}
+
+func resolveDefaultLogger(o *Options) {
+	if o.Logger != nil {
+		return
+	}
+	o.Logger = logging.Nop{}
+}
+
+func addSetLoggerMiddleware(stack *middleware.Stack, o Options) error {
+	return middleware.AddSetLoggerMiddleware(stack, o.Logger)
+}
+
+func setResolvedDefaultsMode(o *Options) {
+	if len(o.resolvedDefaultsMode) > 0 {
+		return
+	}
+
+	var mode aws.DefaultsMode
+	mode.SetFromString(string(o.DefaultsMode))
+
+	if mode == aws.DefaultsModeAuto {
+		mode = defaults.ResolveDefaultsModeAuto(o.Region, o.RuntimeEnvironment)
+	}
+
+	o.resolvedDefaultsMode = mode
+}
+
+// NewFromConfig returns a new client from the provided config.
+func NewFromConfig(cfg aws.Config, optFns ...func(*Options)) *Client {
+	opts := Options{
+		Region:                cfg.Region,
+		DefaultsMode:          cfg.DefaultsMode,
+		RuntimeEnvironment:    cfg.RuntimeEnvironment,
+		HTTPClient:            cfg.HTTPClient,
+		Credentials:           cfg.Credentials,
+		APIOptions:            cfg.APIOptions,
+		Logger:                cfg.Logger,
+		ClientLogMode:         cfg.ClientLogMode,
+		AppID:                 cfg.AppID,
+		AccountIDEndpointMode: cfg.AccountIDEndpointMode,
+	}
+	resolveAWSRetryerProvider(cfg, &opts)
+	resolveAWSRetryMaxAttempts(cfg, &opts)
+	resolveAWSRetryMode(cfg, &opts)
+	resolveAWSEndpointResolver(cfg, &opts)
+	resolveUseDualStackEndpoint(cfg, &opts)
+	resolveUseFIPSEndpoint(cfg, &opts)
+	resolveBaseEndpoint(cfg, &opts)
+	return New(opts, optFns...)
+}
+
+func resolveHTTPClient(o *Options) {
+	var buildable *awshttp.BuildableClient
+
+	if o.HTTPClient != nil {
+		var ok bool
+		buildable, ok = o.HTTPClient.(*awshttp.BuildableClient)
+		if !ok {
+			return
+		}
+	} else {
+		buildable = awshttp.NewBuildableClient()
+	}
+
+	modeConfig, err := defaults.GetModeConfiguration(o.resolvedDefaultsMode)
+	if err == nil {
+		buildable = buildable.WithDialerOptions(func(dialer *net.Dialer) {
+			if dialerTimeout, ok := modeConfig.GetConnectTimeout(); ok {
+				dialer.Timeout = dialerTimeout
+			}
+		})
+
+		buildable = buildable.WithTransportOptions(func(transport *http.Transport) {
+			if tlsHandshakeTimeout, ok := modeConfig.GetTLSNegotiationTimeout(); ok {
+				transport.TLSHandshakeTimeout = tlsHandshakeTimeout
+			}
+		})
+	}
+
+	o.HTTPClient = buildable
+}
+
+func resolveRetryer(o *Options) {
+	if o.Retryer != nil {
+		return
+	}
+
+	if len(o.RetryMode) == 0 {
+		modeConfig, err := defaults.GetModeConfiguration(o.resolvedDefaultsMode)
+		if err == nil {
+			o.RetryMode = modeConfig.RetryMode
+		}
+	}
+	if len(o.RetryMode) == 0 {
+		o.RetryMode = aws.RetryModeStandard
+	}
+
+	var standardOptions []func(*retry.StandardOptions)
+	if v := o.RetryMaxAttempts; v != 0 {
+		standardOptions = append(standardOptions, func(so *retry.StandardOptions) {
+			so.MaxAttempts = v
+		})
+	}
+
+	switch o.RetryMode {
+	case aws.RetryModeAdaptive:
+		var adaptiveOptions []func(*retry.AdaptiveModeOptions)
+		if len(standardOptions) != 0 {
+			adaptiveOptions = append(adaptiveOptions, func(ao *retry.AdaptiveModeOptions) {
+				ao.StandardOptions = append(ao.StandardOptions, standardOptions...)
+			})
+		}
+		o.Retryer = retry.NewAdaptiveMode(adaptiveOptions...)
+
+	default:
+		o.Retryer = retry.NewStandard(standardOptions...)
+	}
+}
+
+func resolveAWSRetryerProvider(cfg aws.Config, o *Options) {
+	if cfg.Retryer == nil {
+		return
+	}
+	o.Retryer = cfg.Retryer()
+}
+
+func resolveAWSRetryMode(cfg aws.Config, o *Options) {
+	if len(cfg.RetryMode) == 0 {
+		return
+	}
+	o.RetryMode = cfg.RetryMode
+}
+func resolveAWSRetryMaxAttempts(cfg aws.Config, o *Options) {
+	if cfg.RetryMaxAttempts == 0 {
+		return
+	}
+	o.RetryMaxAttempts = cfg.RetryMaxAttempts
+}
+
+func finalizeRetryMaxAttempts(o *Options) {
+	if o.RetryMaxAttempts == 0 {
+		return
+	}
+
+	o.Retryer = retry.AddWithMaxAttempts(o.Retryer, o.RetryMaxAttempts)
+}
+
+func finalizeOperationRetryMaxAttempts(o *Options, client Client) {
+	if v := o.RetryMaxAttempts; v == 0 || v == client.options.RetryMaxAttempts {
+		return
+	}
+
+	o.Retryer = retry.AddWithMaxAttempts(o.Retryer, o.RetryMaxAttempts)
+}
+
+func resolveAWSEndpointResolver(cfg aws.Config, o *Options) {
+	if cfg.EndpointResolver == nil && cfg.EndpointResolverWithOptions == nil {
+		return
+	}
+	o.EndpointResolver = withEndpointResolver(cfg.EndpointResolver, cfg.EndpointResolverWithOptions)
+}
+
+func addClientUserAgent(stack *middleware.Stack, options Options) error {
+	ua, err := getOrAddRequestUserAgent(stack)
+	if err != nil {
+		return err
+	}
+
+	ua.AddSDKAgentKeyValue(awsmiddleware.APIMetadata, "sts", goModuleVersion)
+	if len(options.AppID) > 0 {
+		ua.AddSDKAgentKey(awsmiddleware.ApplicationIdentifier, options.AppID)
+	}
+
+	return nil
+}
+
+func getOrAddRequestUserAgent(stack *middleware.Stack) (*awsmiddleware.RequestUserAgent, error) {
+	id := (*awsmiddleware.RequestUserAgent)(nil).ID()
+	mw, ok := stack.Build.Get(id)
+	if !ok {
+		mw = awsmiddleware.NewRequestUserAgent()
+		if err := stack.Build.Add(mw, middleware.After); err != nil {
+			return nil, err
+		}
+	}
+
+	ua, ok := mw.(*awsmiddleware.RequestUserAgent)
+	if !ok {
+		return nil, fmt.Errorf("%T for %s middleware did not match expected type", mw, id)
+	}
+
+	return ua, nil
+}
+
+type HTTPSignerV4 interface {
+	SignHTTP(ctx context.Context, credentials aws.Credentials, r *http.Request, payloadHash string, service string, region string, signingTime time.Time, optFns ...func(*v4.SignerOptions)) error
+}
+
+func resolveHTTPSignerV4(o *Options) {
+	if o.HTTPSignerV4 != nil {
+		return
+	}
+	o.HTTPSignerV4 = newDefaultV4Signer(*o)
+}
+
+func newDefaultV4Signer(o Options) *v4.Signer {
+	return v4.NewSigner(func(so *v4.SignerOptions) {
+		so.Logger = o.Logger
+		so.LogSigning = o.ClientLogMode.IsSigning()
+	})
+}
+
+func addClientRequestID(stack *middleware.Stack) error {
+	return stack.Build.Add(&awsmiddleware.ClientRequestID{}, middleware.After)
+}
+
+func addComputeContentLength(stack *middleware.Stack) error {
+	return stack.Build.Add(&smithyhttp.ComputeContentLength{}, middleware.After)
+}
+
+func addRawResponseToMetadata(stack *middleware.Stack) error {
+	return stack.Deserialize.Add(&awsmiddleware.AddRawResponse{}, middleware.Before)
+}
+
+func addRecordResponseTiming(stack *middleware.Stack) error {
+	return stack.Deserialize.Add(&awsmiddleware.RecordResponseTiming{}, middleware.After)
+}
+func addStreamingEventsPayload(stack *middleware.Stack) error {
+	return stack.Finalize.Add(&v4.StreamingEventsPayload{}, middleware.Before)
+}
+
+func addUnsignedPayload(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&v4.UnsignedPayload{}, "ResolveEndpointV2", middleware.After)
+}
+
+func addComputePayloadSHA256(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&v4.ComputePayloadSHA256{}, "ResolveEndpointV2", middleware.After)
+}
+
+func addContentSHA256Header(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&v4.ContentSHA256Header{}, (*v4.ComputePayloadSHA256)(nil).ID(), middleware.After)
+}
+
+func addIsWaiterUserAgent(o *Options) {
+	o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+		ua, err := getOrAddRequestUserAgent(stack)
+		if err != nil {
+			return err
+		}
+
+		ua.AddUserAgentFeature(awsmiddleware.UserAgentFeatureWaiter)
+		return nil
+	})
+}
+
+func addIsPaginatorUserAgent(o *Options) {
+	o.APIOptions = append(o.APIOptions, func(stack *middleware.Stack) error {
+		ua, err := getOrAddRequestUserAgent(stack)
+		if err != nil {
+			return err
+		}
+
+		ua.AddUserAgentFeature(awsmiddleware.UserAgentFeaturePaginator)
+		return nil
+	})
+}
+
+func addRetry(stack *middleware.Stack, o Options) error {
+	attempt := retry.NewAttemptMiddleware(o.Retryer, smithyhttp.RequestCloner, func(m *retry.Attempt) {
+		m.LogAttempts = o.ClientLogMode.IsRetries()
+	})
+	if err := stack.Finalize.Insert(attempt, "Signing", middleware.Before); err != nil {
+		return err
+	}
+	if err := stack.Finalize.Insert(&retry.MetricsHeader{}, attempt.ID(), middleware.After); err != nil {
+		return err
+	}
+	return nil
+}
+
+// resolves dual-stack endpoint configuration
+func resolveUseDualStackEndpoint(cfg aws.Config, o *Options) error {
+	if len(cfg.ConfigSources) == 0 {
+		return nil
+	}
+	value, found, err := internalConfig.ResolveUseDualStackEndpoint(context.Background(), cfg.ConfigSources)
+	if err != nil {
+		return err
+	}
+	if found {
+		o.EndpointOptions.UseDualStackEndpoint = value
+	}
+	return nil
+}
+
+// resolves FIPS endpoint configuration
+func resolveUseFIPSEndpoint(cfg aws.Config, o *Options) error {
+	if len(cfg.ConfigSources) == 0 {
+		return nil
+	}
+	value, found, err := internalConfig.ResolveUseFIPSEndpoint(context.Background(), cfg.ConfigSources)
+	if err != nil {
+		return err
+	}
+	if found {
+		o.EndpointOptions.UseFIPSEndpoint = value
+	}
+	return nil
+}
+
+func resolveAccountID(identity smithyauth.Identity, mode aws.AccountIDEndpointMode) *string {
+	if mode == aws.AccountIDEndpointModeDisabled {
+		return nil
+	}
+
+	if ca, ok := identity.(*internalauthsmithy.CredentialsAdapter); ok && ca.Credentials.AccountID != "" {
+		return aws.String(ca.Credentials.AccountID)
+	}
+
+	return nil
+}
+
+func addTimeOffsetBuild(stack *middleware.Stack, c *Client) error {
+	mw := internalmiddleware.AddTimeOffsetMiddleware{Offset: c.timeOffset}
+	if err := stack.Build.Add(&mw, middleware.After); err != nil {
+		return err
+	}
+	return stack.Deserialize.Insert(&mw, "RecordResponseTiming", middleware.Before)
+}
+func initializeTimeOffsetResolver(c *Client) {
+	c.timeOffset = new(atomic.Int64)
+}
+
+func checkAccountID(identity smithyauth.Identity, mode aws.AccountIDEndpointMode) error {
+	switch mode {
+	case aws.AccountIDEndpointModeUnset:
+	case aws.AccountIDEndpointModePreferred:
+	case aws.AccountIDEndpointModeDisabled:
+	case aws.AccountIDEndpointModeRequired:
+		if ca, ok := identity.(*internalauthsmithy.CredentialsAdapter); !ok {
+			return fmt.Errorf("accountID is required but not set")
+		} else if ca.Credentials.AccountID == "" {
+			return fmt.Errorf("accountID is required but not set")
+		}
+	// default check in case invalid mode is configured through request config
+	default:
+		return fmt.Errorf("invalid accountID endpoint mode %s, must be preferred/required/disabled", mode)
+	}
+
+	return nil
+}
+
+func addUserAgentRetryMode(stack *middleware.Stack, options Options) error {
+	ua, err := getOrAddRequestUserAgent(stack)
+	if err != nil {
+		return err
+	}
+
+	switch options.Retryer.(type) {
+	case *retry.Standard:
+		ua.AddUserAgentFeature(awsmiddleware.UserAgentFeatureRetryModeStandard)
+	case *retry.AdaptiveMode:
+		ua.AddUserAgentFeature(awsmiddleware.UserAgentFeatureRetryModeAdaptive)
+	}
+	return nil
+}
+
+func addRecursionDetection(stack *middleware.Stack) error {
+	return stack.Build.Add(&awsmiddleware.RecursionDetection{}, middleware.After)
+}
+
+func addRequestIDRetrieverMiddleware(stack *middleware.Stack) error {
+	return stack.Deserialize.Insert(&awsmiddleware.RequestIDRetriever{}, "OperationDeserializer", middleware.Before)
+
+}
+
+func addResponseErrorMiddleware(stack *middleware.Stack) error {
+	return stack.Deserialize.Insert(&awshttp.ResponseErrorWrapper{}, "RequestIDRetriever", middleware.Before)
+
+}
+
+// HTTPPresignerV4 represents presigner interface used by presign url client
+type HTTPPresignerV4 interface {
+	PresignHTTP(
+		ctx context.Context, credentials aws.Credentials, r *http.Request,
+		payloadHash string, service string, region string, signingTime time.Time,
+		optFns ...func(*v4.SignerOptions),
+	) (url string, signedHeader http.Header, err error)
+}
+
+// PresignOptions represents the presign client options
+type PresignOptions struct {
+
+	// ClientOptions are list of functional options to mutate client options used by
+	// the presign client.
+	ClientOptions []func(*Options)
+
+	// Presigner is the presigner used by the presign url client
+	Presigner HTTPPresignerV4
+}
+
+func (o PresignOptions) copy() PresignOptions {
+	clientOptions := make([]func(*Options), len(o.ClientOptions))
+	copy(clientOptions, o.ClientOptions)
+	o.ClientOptions = clientOptions
+	return o
+}
+
+// WithPresignClientFromClientOptions is a helper utility to retrieve a function
+// that takes PresignOption as input
+func WithPresignClientFromClientOptions(optFns ...func(*Options)) func(*PresignOptions) {
+	return withPresignClientFromClientOptions(optFns).options
+}
+
+type withPresignClientFromClientOptions []func(*Options)
+
+func (w withPresignClientFromClientOptions) options(o *PresignOptions) {
+	o.ClientOptions = append(o.ClientOptions, w...)
+}
+
+// PresignClient represents the presign url client
+type PresignClient struct {
+	client  *Client
+	options PresignOptions
+}
+
+// NewPresignClient generates a presign client using provided API Client and
+// presign options
+func NewPresignClient(c *Client, optFns ...func(*PresignOptions)) *PresignClient {
+	var options PresignOptions
+	for _, fn := range optFns {
+		fn(&options)
+	}
+	if len(options.ClientOptions) != 0 {
+		c = New(c.options, options.ClientOptions...)
+	}
+
+	if options.Presigner == nil {
+		options.Presigner = newDefaultV4Signer(c.options)
+	}
+
+	return &PresignClient{
+		client:  c,
+		options: options,
+	}
+}
+
+func withNopHTTPClientAPIOption(o *Options) {
+	o.HTTPClient = smithyhttp.NopClient{}
+}
+
+type presignContextPolyfillMiddleware struct {
+}
+
+func (*presignContextPolyfillMiddleware) ID() string {
+	return "presignContextPolyfill"
+}
+
+func (m *presignContextPolyfillMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	rscheme := getResolvedAuthScheme(ctx)
+	if rscheme == nil {
+		return out, metadata, fmt.Errorf("no resolved auth scheme")
+	}
+
+	schemeID := rscheme.Scheme.SchemeID()
+
+	if schemeID == "aws.auth#sigv4" || schemeID == "com.amazonaws.s3#sigv4express" {
+		if sn, ok := smithyhttp.GetSigV4SigningName(&rscheme.SignerProperties); ok {
+			ctx = awsmiddleware.SetSigningName(ctx, sn)
+		}
+		if sr, ok := smithyhttp.GetSigV4SigningRegion(&rscheme.SignerProperties); ok {
+			ctx = awsmiddleware.SetSigningRegion(ctx, sr)
+		}
+	} else if schemeID == "aws.auth#sigv4a" {
+		if sn, ok := smithyhttp.GetSigV4ASigningName(&rscheme.SignerProperties); ok {
+			ctx = awsmiddleware.SetSigningName(ctx, sn)
+		}
+		if sr, ok := smithyhttp.GetSigV4ASigningRegions(&rscheme.SignerProperties); ok {
+			ctx = awsmiddleware.SetSigningRegion(ctx, sr[0])
+		}
+	}
+
+	return next.HandleFinalize(ctx, in)
+}
+
+type presignConverter PresignOptions
+
+func (c presignConverter) convertToPresignMiddleware(stack *middleware.Stack, options Options) (err error) {
+	if _, ok := stack.Finalize.Get((*acceptencodingcust.DisableGzip)(nil).ID()); ok {
+		stack.Finalize.Remove((*acceptencodingcust.DisableGzip)(nil).ID())
+	}
+	if _, ok := stack.Finalize.Get((*retry.Attempt)(nil).ID()); ok {
+		stack.Finalize.Remove((*retry.Attempt)(nil).ID())
+	}
+	if _, ok := stack.Finalize.Get((*retry.MetricsHeader)(nil).ID()); ok {
+		stack.Finalize.Remove((*retry.MetricsHeader)(nil).ID())
+	}
+	stack.Deserialize.Clear()
+	stack.Build.Remove((*awsmiddleware.ClientRequestID)(nil).ID())
+	stack.Build.Remove("UserAgent")
+	if err := stack.Finalize.Insert(&presignContextPolyfillMiddleware{}, "Signing", middleware.Before); err != nil {
+		return err
+	}
+
+	pmw := v4.NewPresignHTTPRequestMiddleware(v4.PresignHTTPRequestMiddlewareOptions{
+		CredentialsProvider: options.Credentials,
+		Presigner:           c.Presigner,
+		LogSigning:          options.ClientLogMode.IsSigning(),
+	})
+	if _, err := stack.Finalize.Swap("Signing", pmw); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddNoPayloadDefaultContentTypeRemover(stack); err != nil {
+		return err
+	}
+	// convert request to a GET request
+	err = query.AddAsGetRequestMiddleware(stack)
+	if err != nil {
+		return err
+	}
+	err = presignedurlcust.AddAsIsPresigningMiddleware(stack)
+	if err != nil {
+		return err
+	}
+	return nil
+}
+
+func addRequestResponseLogging(stack *middleware.Stack, o Options) error {
+	return stack.Deserialize.Add(&smithyhttp.RequestResponseLogger{
+		LogRequest:          o.ClientLogMode.IsRequest(),
+		LogRequestWithBody:  o.ClientLogMode.IsRequestWithBody(),
+		LogResponse:         o.ClientLogMode.IsResponse(),
+		LogResponseWithBody: o.ClientLogMode.IsResponseWithBody(),
+	}, middleware.After)
+}
+
+type disableHTTPSMiddleware struct {
+	DisableHTTPS bool
+}
+
+func (*disableHTTPSMiddleware) ID() string {
+	return "disableHTTPS"
+}
+
+func (m *disableHTTPSMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	if m.DisableHTTPS && !smithyhttp.GetHostnameImmutable(ctx) {
+		req.URL.Scheme = "http"
+	}
+
+	return next.HandleFinalize(ctx, in)
+}
+
+func addDisableHTTPSMiddleware(stack *middleware.Stack, o Options) error {
+	return stack.Finalize.Insert(&disableHTTPSMiddleware{
+		DisableHTTPS: o.EndpointOptions.DisableHTTPS,
+	}, "ResolveEndpointV2", middleware.After)
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_AssumeRole.go

@@ -0,0 +1,520 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/aws/signer/v4"
+	"github.com/aws/aws-sdk-go-v2/service/sts/types"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Returns a set of temporary security credentials that you can use to access
+// Amazon Web Services resources. These temporary credentials consist of an access
+// key ID, a secret access key, and a security token. Typically, you use AssumeRole
+// within your account or for cross-account access. For a comparison of AssumeRole
+// with other API operations that produce temporary credentials, see [Requesting Temporary Security Credentials]and [Comparing the Amazon Web Services STS API operations] in the
+// IAM User Guide.
+//
+// # Permissions
+//
+// The temporary security credentials created by AssumeRole can be used to make
+// API calls to any Amazon Web Services service with the following exception: You
+// cannot call the Amazon Web Services STS GetFederationToken or GetSessionToken
+// API operations.
+//
+// (Optional) You can pass inline or managed [session policies] to this operation. You can pass a
+// single JSON policy document to use as an inline session policy. You can also
+// specify up to 10 managed policy Amazon Resource Names (ARNs) to use as managed
+// session policies. The plaintext that you use for both inline and managed session
+// policies can't exceed 2,048 characters. Passing policies to this operation
+// returns new temporary credentials. The resulting session's permissions are the
+// intersection of the role's identity-based policy and the session policies. You
+// can use the role's temporary credentials in subsequent Amazon Web Services API
+// calls to access resources in the account that owns the role. You cannot use
+// session policies to grant more permissions than those allowed by the
+// identity-based policy of the role that is being assumed. For more information,
+// see [Session Policies]in the IAM User Guide.
+//
+// When you create a role, you create two policies: a role trust policy that
+// specifies who can assume the role, and a permissions policy that specifies what
+// can be done with the role. You specify the trusted principal that is allowed to
+// assume the role in the role trust policy.
+//
+// To assume a role from a different account, your Amazon Web Services account
+// must be trusted by the role. The trust relationship is defined in the role's
+// trust policy when the role is created. That trust policy states which accounts
+// are allowed to delegate that access to users in the account.
+//
+// A user who wants to access a role in a different account must also have
+// permissions that are delegated from the account administrator. The administrator
+// must attach a policy that allows the user to call AssumeRole for the ARN of the
+// role in the other account.
+//
+// To allow a user to assume a role in the same account, you can do either of the
+// following:
+//
+//   - Attach a policy to the user that allows the user to call AssumeRole (as long
+//     as the role's trust policy trusts the account).
+//
+//   - Add the user as a principal directly in the role's trust policy.
+//
+// You can do either because the role’s trust policy acts as an IAM resource-based
+// policy. When a resource-based policy grants access to a principal in the same
+// account, no additional identity-based policy is required. For more information
+// about trust policies and resource-based policies, see [IAM Policies]in the IAM User Guide.
+//
+// # Tags
+//
+// (Optional) You can pass tag key-value pairs to your session. These tags are
+// called session tags. For more information about session tags, see [Passing Session Tags in STS]in the IAM
+// User Guide.
+//
+// An administrator must grant you the permissions necessary to pass session tags.
+// The administrator can also create granular permissions to allow you to pass only
+// specific session tags. For more information, see [Tutorial: Using Tags for Attribute-Based Access Control]in the IAM User Guide.
+//
+// You can set the session tags as transitive. Transitive tags persist during role
+// chaining. For more information, see [Chaining Roles with Session Tags]in the IAM User Guide.
+//
+// # Using MFA with AssumeRole
+//
+// (Optional) You can include multi-factor authentication (MFA) information when
+// you call AssumeRole . This is useful for cross-account scenarios to ensure that
+// the user that assumes the role has been authenticated with an Amazon Web
+// Services MFA device. In that scenario, the trust policy of the role being
+// assumed includes a condition that tests for MFA authentication. If the caller
+// does not include valid MFA information, the request to assume the role is
+// denied. The condition in a trust policy that tests for MFA authentication might
+// look like the following example.
+//
+//	"Condition": {"Bool": {"aws:MultiFactorAuthPresent": true}}
+//
+// For more information, see [Configuring MFA-Protected API Access] in the IAM User Guide.
+//
+// To use MFA with AssumeRole , you pass values for the SerialNumber and TokenCode
+// parameters. The SerialNumber value identifies the user's hardware or virtual
+// MFA device. The TokenCode is the time-based one-time password (TOTP) that the
+// MFA device produces.
+//
+// [Configuring MFA-Protected API Access]: https://docs.aws.amazon.com/IAM/latest/UserGuide/MFAProtectedAPI.html
+// [Session Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+// [Passing Session Tags in STS]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html
+// [Chaining Roles with Session Tags]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_role-chaining
+// [Comparing the Amazon Web Services STS API operations]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#stsapi_comparison
+// [session policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+// [IAM Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
+// [Requesting Temporary Security Credentials]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html
+// [Tutorial: Using Tags for Attribute-Based Access Control]: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html
+func (c *Client) AssumeRole(ctx context.Context, params *AssumeRoleInput, optFns ...func(*Options)) (*AssumeRoleOutput, error) {
+	if params == nil {
+		params = &AssumeRoleInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "AssumeRole", params, optFns, c.addOperationAssumeRoleMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*AssumeRoleOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type AssumeRoleInput struct {
+
+	// The Amazon Resource Name (ARN) of the role to assume.
+	//
+	// This member is required.
+	RoleArn *string
+
+	// An identifier for the assumed role session.
+	//
+	// Use the role session name to uniquely identify a session when the same role is
+	// assumed by different principals or for different reasons. In cross-account
+	// scenarios, the role session name is visible to, and can be logged by the account
+	// that owns the role. The role session name is also used in the ARN of the assumed
+	// role principal. This means that subsequent cross-account API requests that use
+	// the temporary security credentials will expose the role session name to the
+	// external account in their CloudTrail logs.
+	//
+	// The regex used to validate this parameter is a string of characters consisting
+	// of upper- and lower-case alphanumeric characters with no spaces. You can also
+	// include underscores or any of the following characters: =,.@-
+	//
+	// This member is required.
+	RoleSessionName *string
+
+	// The duration, in seconds, of the role session. The value specified can range
+	// from 900 seconds (15 minutes) up to the maximum session duration set for the
+	// role. The maximum session duration setting can have a value from 1 hour to 12
+	// hours. If you specify a value higher than this setting or the administrator
+	// setting (whichever is lower), the operation fails. For example, if you specify a
+	// session duration of 12 hours, but your administrator set the maximum session
+	// duration to 6 hours, your operation fails.
+	//
+	// Role chaining limits your Amazon Web Services CLI or Amazon Web Services API
+	// role session to a maximum of one hour. When you use the AssumeRole API
+	// operation to assume a role, you can specify the duration of your role session
+	// with the DurationSeconds parameter. You can specify a parameter value of up to
+	// 43200 seconds (12 hours), depending on the maximum session duration setting for
+	// your role. However, if you assume a role using role chaining and provide a
+	// DurationSeconds parameter value greater than one hour, the operation fails. To
+	// learn how to view the maximum value for your role, see [View the Maximum Session Duration Setting for a Role]in the IAM User Guide.
+	//
+	// By default, the value is set to 3600 seconds.
+	//
+	// The DurationSeconds parameter is separate from the duration of a console
+	// session that you might request using the returned credentials. The request to
+	// the federation endpoint for a console sign-in token takes a SessionDuration
+	// parameter that specifies the maximum length of the console session. For more
+	// information, see [Creating a URL that Enables Federated Users to Access the Amazon Web Services Management Console]in the IAM User Guide.
+	//
+	// [View the Maximum Session Duration Setting for a Role]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session
+	// [Creating a URL that Enables Federated Users to Access the Amazon Web Services Management Console]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-custom-url.html
+	DurationSeconds *int32
+
+	// A unique identifier that might be required when you assume a role in another
+	// account. If the administrator of the account to which the role belongs provided
+	// you with an external ID, then provide that value in the ExternalId parameter.
+	// This value can be any string, such as a passphrase or account number. A
+	// cross-account role is usually set up to trust everyone in an account. Therefore,
+	// the administrator of the trusting account might send an external ID to the
+	// administrator of the trusted account. That way, only someone with the ID can
+	// assume the role, rather than everyone in the account. For more information about
+	// the external ID, see [How to Use an External ID When Granting Access to Your Amazon Web Services Resources to a Third Party]in the IAM User Guide.
+	//
+	// The regex used to validate this parameter is a string of characters consisting
+	// of upper- and lower-case alphanumeric characters with no spaces. You can also
+	// include underscores or any of the following characters: =,.@:/-
+	//
+	// [How to Use an External ID When Granting Access to Your Amazon Web Services Resources to a Third Party]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html
+	ExternalId *string
+
+	// An IAM policy in JSON format that you want to use as an inline session policy.
+	//
+	// This parameter is optional. Passing policies to this operation returns new
+	// temporary credentials. The resulting session's permissions are the intersection
+	// of the role's identity-based policy and the session policies. You can use the
+	// role's temporary credentials in subsequent Amazon Web Services API calls to
+	// access resources in the account that owns the role. You cannot use session
+	// policies to grant more permissions than those allowed by the identity-based
+	// policy of the role that is being assumed. For more information, see [Session Policies]in the IAM
+	// User Guide.
+	//
+	// The plaintext that you use for both inline and managed session policies can't
+	// exceed 2,048 characters. The JSON policy characters can be any ASCII character
+	// from the space character to the end of the valid character list (\u0020 through
+	// \u00FF). It can also include the tab (\u0009), linefeed (\u000A), and carriage
+	// return (\u000D) characters.
+	//
+	// An Amazon Web Services conversion compresses the passed inline session policy,
+	// managed policy ARNs, and session tags into a packed binary format that has a
+	// separate limit. Your request can fail for this limit even if your plaintext
+	// meets the other requirements. The PackedPolicySize response element indicates
+	// by percentage how close the policies and tags for your request are to the upper
+	// size limit.
+	//
+	// [Session Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+	Policy *string
+
+	// The Amazon Resource Names (ARNs) of the IAM managed policies that you want to
+	// use as managed session policies. The policies must exist in the same account as
+	// the role.
+	//
+	// This parameter is optional. You can provide up to 10 managed policy ARNs.
+	// However, the plaintext that you use for both inline and managed session policies
+	// can't exceed 2,048 characters. For more information about ARNs, see [Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces]in the
+	// Amazon Web Services General Reference.
+	//
+	// An Amazon Web Services conversion compresses the passed inline session policy,
+	// managed policy ARNs, and session tags into a packed binary format that has a
+	// separate limit. Your request can fail for this limit even if your plaintext
+	// meets the other requirements. The PackedPolicySize response element indicates
+	// by percentage how close the policies and tags for your request are to the upper
+	// size limit.
+	//
+	// Passing policies to this operation returns new temporary credentials. The
+	// resulting session's permissions are the intersection of the role's
+	// identity-based policy and the session policies. You can use the role's temporary
+	// credentials in subsequent Amazon Web Services API calls to access resources in
+	// the account that owns the role. You cannot use session policies to grant more
+	// permissions than those allowed by the identity-based policy of the role that is
+	// being assumed. For more information, see [Session Policies]in the IAM User Guide.
+	//
+	// [Session Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+	// [Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces]: https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
+	PolicyArns []types.PolicyDescriptorType
+
+	// A list of previously acquired trusted context assertions in the format of a
+	// JSON array. The trusted context assertion is signed and encrypted by Amazon Web
+	// Services STS.
+	//
+	// The following is an example of a ProvidedContext value that includes a single
+	// trusted context assertion and the ARN of the context provider from which the
+	// trusted context assertion was generated.
+	//
+	//     [{"ProviderArn":"arn:aws:iam::aws:contextProvider/IdentityCenter","ContextAssertion":"trusted-context-assertion"}]
+	ProvidedContexts []types.ProvidedContext
+
+	// The identification number of the MFA device that is associated with the user
+	// who is making the AssumeRole call. Specify this value if the trust policy of
+	// the role being assumed includes a condition that requires MFA authentication.
+	// The value is either the serial number for a hardware device (such as
+	// GAHT12345678 ) or an Amazon Resource Name (ARN) for a virtual device (such as
+	// arn:aws:iam::123456789012:mfa/user ).
+	//
+	// The regex used to validate this parameter is a string of characters consisting
+	// of upper- and lower-case alphanumeric characters with no spaces. You can also
+	// include underscores or any of the following characters: =,.@-
+	SerialNumber *string
+
+	// The source identity specified by the principal that is calling the AssumeRole
+	// operation.
+	//
+	// You can require users to specify a source identity when they assume a role. You
+	// do this by using the sts:SourceIdentity condition key in a role trust policy.
+	// You can use source identity information in CloudTrail logs to determine who took
+	// actions with a role. You can use the aws:SourceIdentity condition key to
+	// further control access to Amazon Web Services resources based on the value of
+	// source identity. For more information about using source identity, see [Monitor and control actions taken with assumed roles]in the
+	// IAM User Guide.
+	//
+	// The regex used to validate this parameter is a string of characters consisting
+	// of upper- and lower-case alphanumeric characters with no spaces. You can also
+	// include underscores or any of the following characters: =,.@-. You cannot use a
+	// value that begins with the text aws: . This prefix is reserved for Amazon Web
+	// Services internal use.
+	//
+	// [Monitor and control actions taken with assumed roles]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_monitor.html
+	SourceIdentity *string
+
+	// A list of session tags that you want to pass. Each session tag consists of a
+	// key name and an associated value. For more information about session tags, see [Tagging Amazon Web Services STS Sessions]
+	// in the IAM User Guide.
+	//
+	// This parameter is optional. You can pass up to 50 session tags. The plaintext
+	// session tag keys can’t exceed 128 characters, and the values can’t exceed 256
+	// characters. For these and additional limits, see [IAM and STS Character Limits]in the IAM User Guide.
+	//
+	// An Amazon Web Services conversion compresses the passed inline session policy,
+	// managed policy ARNs, and session tags into a packed binary format that has a
+	// separate limit. Your request can fail for this limit even if your plaintext
+	// meets the other requirements. The PackedPolicySize response element indicates
+	// by percentage how close the policies and tags for your request are to the upper
+	// size limit.
+	//
+	// You can pass a session tag with the same key as a tag that is already attached
+	// to the role. When you do, session tags override a role tag with the same key.
+	//
+	// Tag key–value pairs are not case sensitive, but case is preserved. This means
+	// that you cannot have separate Department and department tag keys. Assume that
+	// the role has the Department = Marketing tag and you pass the department =
+	// engineering session tag. Department and department are not saved as separate
+	// tags, and the session tag passed in the request takes precedence over the role
+	// tag.
+	//
+	// Additionally, if you used temporary credentials to perform this operation, the
+	// new session inherits any transitive session tags from the calling session. If
+	// you pass a session tag with the same key as an inherited tag, the operation
+	// fails. To view the inherited tags for a session, see the CloudTrail logs. For
+	// more information, see [Viewing Session Tags in CloudTrail]in the IAM User Guide.
+	//
+	// [Tagging Amazon Web Services STS Sessions]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html
+	// [IAM and STS Character Limits]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html#reference_iam-limits-entity-length
+	// [Viewing Session Tags in CloudTrail]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_ctlogs
+	Tags []types.Tag
+
+	// The value provided by the MFA device, if the trust policy of the role being
+	// assumed requires MFA. (In other words, if the policy includes a condition that
+	// tests for MFA). If the role being assumed requires MFA and if the TokenCode
+	// value is missing or expired, the AssumeRole call returns an "access denied"
+	// error.
+	//
+	// The format for this parameter, as described by its regex pattern, is a sequence
+	// of six numeric digits.
+	TokenCode *string
+
+	// A list of keys for session tags that you want to set as transitive. If you set
+	// a tag key as transitive, the corresponding key and value passes to subsequent
+	// sessions in a role chain. For more information, see [Chaining Roles with Session Tags]in the IAM User Guide.
+	//
+	// This parameter is optional. When you set session tags as transitive, the
+	// session policy and session tags packed binary limit is not affected.
+	//
+	// If you choose not to specify a transitive tag key, then no tags are passed from
+	// this session to any subsequent sessions.
+	//
+	// [Chaining Roles with Session Tags]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_role-chaining
+	TransitiveTagKeys []string
+
+	noSmithyDocumentSerde
+}
+
+// Contains the response to a successful AssumeRole request, including temporary Amazon Web
+// Services credentials that can be used to make Amazon Web Services requests.
+type AssumeRoleOutput struct {
+
+	// The Amazon Resource Name (ARN) and the assumed role ID, which are identifiers
+	// that you can use to refer to the resulting temporary security credentials. For
+	// example, you can reference these credentials as a principal in a resource-based
+	// policy by using the ARN or assumed role ID. The ARN and ID include the
+	// RoleSessionName that you specified when you called AssumeRole .
+	AssumedRoleUser *types.AssumedRoleUser
+
+	// The temporary security credentials, which include an access key ID, a secret
+	// access key, and a security (or session) token.
+	//
+	// The size of the security token that STS API operations return is not fixed. We
+	// strongly recommend that you make no assumptions about the maximum size.
+	Credentials *types.Credentials
+
+	// A percentage value that indicates the packed size of the session policies and
+	// session tags combined passed in the request. The request fails if the packed
+	// size is greater than 100 percent, which means the policies and tags exceeded the
+	// allowed space.
+	PackedPolicySize *int32
+
+	// The source identity specified by the principal that is calling the AssumeRole
+	// operation.
+	//
+	// You can require users to specify a source identity when they assume a role. You
+	// do this by using the sts:SourceIdentity condition key in a role trust policy.
+	// You can use source identity information in CloudTrail logs to determine who took
+	// actions with a role. You can use the aws:SourceIdentity condition key to
+	// further control access to Amazon Web Services resources based on the value of
+	// source identity. For more information about using source identity, see [Monitor and control actions taken with assumed roles]in the
+	// IAM User Guide.
+	//
+	// The regex used to validate this parameter is a string of characters consisting
+	// of upper- and lower-case alphanumeric characters with no spaces. You can also
+	// include underscores or any of the following characters: =,.@-
+	//
+	// [Monitor and control actions taken with assumed roles]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_monitor.html
+	SourceIdentity *string
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationAssumeRoleMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsAwsquery_serializeOpAssumeRole{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsAwsquery_deserializeOpAssumeRole{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "AssumeRole"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addComputePayloadSHA256(stack); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpAssumeRoleValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opAssumeRole(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opAssumeRole(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "AssumeRole",
+	}
+}
+
+// PresignAssumeRole is used to generate a presigned HTTP Request which contains
+// presigned URL, signed headers and HTTP method used.
+func (c *PresignClient) PresignAssumeRole(ctx context.Context, params *AssumeRoleInput, optFns ...func(*PresignOptions)) (*v4.PresignedHTTPRequest, error) {
+	if params == nil {
+		params = &AssumeRoleInput{}
+	}
+	options := c.options.copy()
+	for _, fn := range optFns {
+		fn(&options)
+	}
+	clientOptFns := append(options.ClientOptions, withNopHTTPClientAPIOption)
+
+	result, _, err := c.client.invokeOperation(ctx, "AssumeRole", params, clientOptFns,
+		c.client.addOperationAssumeRoleMiddlewares,
+		presignConverter(options).convertToPresignMiddleware,
+	)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*v4.PresignedHTTPRequest)
+	return out, nil
+}

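The `AssumeRoleInput` doc comments above spell out several client-visible constraints: the `[\w+=,.@-]` character class for `RoleSessionName`, the 900–43200 second window for `DurationSeconds` (default 3600), and the six-digit `TokenCode`. A hypothetical, self-contained sketch of those checks is below; the helper names and the 2–64 length bound on the session name are assumptions for illustration, and the SDK performs its own validation regardless.

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// Upper- and lower-case alphanumerics plus _=,.@- per the RoleSessionName docs.
	sessionNameRe = regexp.MustCompile(`^[A-Za-z0-9_=,.@-]{2,64}$`)
	// TokenCode is "a sequence of six numeric digits".
	tokenCodeRe = regexp.MustCompile(`^[0-9]{6}$`)
)

func validSessionName(s string) bool { return sessionNameRe.MatchString(s) }

// validDuration checks only the API-level bounds (900s to 43200s); the role's
// own maximum session duration (1 to 12 hours) is enforced server-side, and
// role chaining caps the session at one hour.
func validDuration(seconds int32) bool { return seconds >= 900 && seconds <= 43200 }

func validTokenCode(s string) bool { return tokenCodeRe.MatchString(s) }

func main() {
	fmt.Println(validSessionName("deploy@ci-2024")) // true
	fmt.Println(validDuration(3600))                // true: the documented default
	fmt.Println(validTokenCode("123456"))           // true
}
```

Validating these locally gives a faster failure than a round trip to STS, but it cannot replace the server-side checks, in particular the `PackedPolicySize` limit, which depends on a compressed binary encoding of the policies and tags that the client cannot compute.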
vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_AssumeRoleWithSAML.go

@@ -0,0 +1,436 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/service/sts/types"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Returns a set of temporary security credentials for users who have been
+// authenticated via a SAML authentication response. This operation provides a
+// mechanism for tying an enterprise identity store or directory to role-based
+// Amazon Web Services access without user-specific credentials or configuration.
+// For a comparison of AssumeRoleWithSAML with the other API operations that
+// produce temporary credentials, see [Requesting Temporary Security Credentials]and [Comparing the Amazon Web Services STS API operations] in the IAM User Guide.
+//
+// The temporary security credentials returned by this operation consist of an
+// access key ID, a secret access key, and a security token. Applications can use
+// these temporary security credentials to sign calls to Amazon Web Services
+// services.
+//
+// # Session Duration
+//
+// By default, the temporary security credentials created by AssumeRoleWithSAML
+// last for one hour. However, you can use the optional DurationSeconds parameter
+// to specify the duration of your session. Your role session lasts for the
+// duration that you specify, or until the time specified in the SAML
+// authentication response's SessionNotOnOrAfter value, whichever is shorter. You
+// can provide a DurationSeconds value from 900 seconds (15 minutes) up to the
+// maximum session duration setting for the role. This setting can have a value
+// from 1 hour to 12 hours. To learn how to view the maximum value for your role,
+// see [View the Maximum Session Duration Setting for a Role]in the IAM User Guide. The maximum session duration limit applies when you
+// use the AssumeRole* API operations or the assume-role* CLI commands. However,
+// the limit does not apply when you use those operations to create a console URL.
+// For more information, see [Using IAM Roles]in the IAM User Guide.
+//
+// [Role chaining]limits your CLI or Amazon Web Services API role session to a maximum of one
+// hour. When you use the AssumeRole API operation to assume a role, you can
+// specify the duration of your role session with the DurationSeconds parameter.
+// You can specify a parameter value of up to 43200 seconds (12 hours), depending
+// on the maximum session duration setting for your role. However, if you assume a
+// role using role chaining and provide a DurationSeconds parameter value greater
+// than one hour, the operation fails.
+//
+// # Permissions
+//
+// The temporary security credentials created by AssumeRoleWithSAML can be used to
+// make API calls to any Amazon Web Services service with the following exception:
+// you cannot call the STS GetFederationToken or GetSessionToken API operations.
+//
+// (Optional) You can pass inline or managed [session policies] to this operation. You can pass a
+// single JSON policy document to use as an inline session policy. You can also
+// specify up to 10 managed policy Amazon Resource Names (ARNs) to use as managed
+// session policies. The plaintext that you use for both inline and managed session
+// policies can't exceed 2,048 characters. Passing policies to this operation
+// returns new temporary credentials. The resulting session's permissions are the
+// intersection of the role's identity-based policy and the session policies. You
+// can use the role's temporary credentials in subsequent Amazon Web Services API
+// calls to access resources in the account that owns the role. You cannot use
+// session policies to grant more permissions than those allowed by the
+// identity-based policy of the role that is being assumed. For more information,
+// see [Session Policies]in the IAM User Guide.
+//
+// Calling AssumeRoleWithSAML does not require the use of Amazon Web Services
+// security credentials. The identity of the caller is validated by using keys in
+// the metadata document that is uploaded for the SAML provider entity for your
+// identity provider.
+//
+// Calling AssumeRoleWithSAML can result in an entry in your CloudTrail logs. The
+// entry includes the value in the NameID element of the SAML assertion. We
+// recommend that you use a NameIDType that is not associated with any personally
+// identifiable information (PII). For example, you could instead use the
+// persistent identifier ( urn:oasis:names:tc:SAML:2.0:nameid-format:persistent ).
+//
+// # Tags
+//
+// (Optional) You can configure your IdP to pass attributes into your SAML
+// assertion as session tags. Each session tag consists of a key name and an
+// associated value. For more information about session tags, see [Passing Session Tags in STS] in
+// the IAM User Guide.
+//
+// You can pass up to 50 session tags. The plaintext session tag keys can’t exceed
+// 128 characters and the values can’t exceed 256 characters. For these and
+// additional limits, see [IAM and STS Character Limits] in the IAM User Guide.
+//
+// An Amazon Web Services conversion compresses the passed inline session policy,
+// managed policy ARNs, and session tags into a packed binary format that has a
+// separate limit. Your request can fail for this limit even if your plaintext
+// meets the other requirements. The PackedPolicySize response element indicates
+// by percentage how close the policies and tags for your request are to the upper
+// size limit.
+//
+// You can pass a session tag with the same key as a tag that is attached to the
+// role. When you do, session tags override the role's tags with the same key.
+//
+// An administrator must grant you the permissions necessary to pass session tags.
+// The administrator can also create granular permissions to allow you to pass only
+// specific session tags. For more information, see [Tutorial: Using Tags for Attribute-Based Access Control]in the IAM User Guide.
+// specific session tags. For more information, see [Tutorial: Using Tags for Attribute-Based Access Control] in the IAM User Guide.
+//
+// You can set the session tags as transitive. Transitive tags persist during role
+// chaining. For more information, see [Chaining Roles with Session Tags] in the IAM User Guide.
+//
+// # SAML Configuration
+//
+// Before your application can call AssumeRoleWithSAML , you must configure your
+// SAML identity provider (IdP) to issue the claims required by Amazon Web
+// Services. Additionally, you must use Identity and Access Management (IAM) to
+// create a SAML provider entity in your Amazon Web Services account that
+// represents your identity provider. You must also create an IAM role that
+// specifies this SAML provider in its trust policy.
+//
+// For more information, see the following resources:
+//
+//   - [About SAML 2.0-based Federation] in the IAM User Guide.
+//
+//   - [Creating SAML Identity Providers] in the IAM User Guide.
+//
+//   - [Configuring a Relying Party and Claims] in the IAM User Guide.
+//
+//   - [Creating a Role for SAML 2.0 Federation] in the IAM User Guide.
+//
+// [View the Maximum Session Duration Setting for a Role]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session
+// [Creating a Role for SAML 2.0 Federation]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp_saml.html
+// [IAM and STS Character Limits]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html#reference_iam-limits-entity-length
+// [Comparing the Amazon Web Services STS API operations]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#stsapi_comparison
+// [Creating SAML Identity Providers]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml.html
+// [session policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+// [Requesting Temporary Security Credentials]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html
+// [Tutorial: Using Tags for Attribute-Based Access Control]: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html
+// [Configuring a Relying Party and Claims]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml_relying-party.html
+// [Role chaining]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-role-chaining
+// [Using IAM Roles]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html
+// [Session Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+// [Passing Session Tags in STS]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html
+// [About SAML 2.0-based Federation]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html
+// [Chaining Roles with Session Tags]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_role-chaining
+func (c *Client) AssumeRoleWithSAML(ctx context.Context, params *AssumeRoleWithSAMLInput, optFns ...func(*Options)) (*AssumeRoleWithSAMLOutput, error) {
+	if params == nil {
+		params = &AssumeRoleWithSAMLInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "AssumeRoleWithSAML", params, optFns, c.addOperationAssumeRoleWithSAMLMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*AssumeRoleWithSAMLOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type AssumeRoleWithSAMLInput struct {
+
+	// The Amazon Resource Name (ARN) of the SAML provider in IAM that describes the
+	// IdP.
+	//
+	// This member is required.
+	PrincipalArn *string
+
+	// The Amazon Resource Name (ARN) of the role that the caller is assuming.
+	//
+	// This member is required.
+	RoleArn *string
+
+	// The base64 encoded SAML authentication response provided by the IdP.
+	//
+	// For more information, see [Configuring a Relying Party and Adding Claims] in the IAM User Guide.
+	//
+	// [Configuring a Relying Party and Adding Claims]: https://docs.aws.amazon.com/IAM/latest/UserGuide/create-role-saml-IdP-tasks.html
+	//
+	// This member is required.
+	SAMLAssertion *string
+
+	// The duration, in seconds, of the role session. Your role session lasts for the
+	// duration that you specify for the DurationSeconds parameter, or until the time
+	// specified in the SAML authentication response's SessionNotOnOrAfter value,
+	// whichever is shorter. You can provide a DurationSeconds value from 900 seconds
+	// (15 minutes) up to the maximum session duration setting for the role. This
+	// setting can have a value from 1 hour to 12 hours. If you specify a value higher
+	// than this setting, the operation fails. For example, if you specify a session
+	// duration of 12 hours, but your administrator set the maximum session duration to
+	// 6 hours, your operation fails. To learn how to view the maximum value for your
+	// role, see [View the Maximum Session Duration Setting for a Role] in the IAM User Guide.
+	//
+	// By default, the value is set to 3600 seconds.
+	//
+	// The DurationSeconds parameter is separate from the duration of a console
+	// session that you might request using the returned credentials. The request to
+	// the federation endpoint for a console sign-in token takes a SessionDuration
+	// parameter that specifies the maximum length of the console session. For more
+	// information, see [Creating a URL that Enables Federated Users to Access the Amazon Web Services Management Console] in the IAM User Guide.
+	//
+	// [View the Maximum Session Duration Setting for a Role]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session
+	// [Creating a URL that Enables Federated Users to Access the Amazon Web Services Management Console]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-custom-url.html
+	DurationSeconds *int32
+
+	// An IAM policy in JSON format that you want to use as an inline session policy.
+	//
+	// This parameter is optional. Passing policies to this operation returns new
+	// temporary credentials. The resulting session's permissions are the intersection
+	// of the role's identity-based policy and the session policies. You can use the
+	// role's temporary credentials in subsequent Amazon Web Services API calls to
+	// access resources in the account that owns the role. You cannot use session
+	// policies to grant more permissions than those allowed by the identity-based
+	// policy of the role that is being assumed. For more information, see [Session Policies] in
+	// the IAM User Guide.
+	//
+	// The plaintext that you use for both inline and managed session policies can't
+	// exceed 2,048 characters. The JSON policy characters can be any ASCII character
+	// from the space character to the end of the valid character list (\u0020 through
+	// \u00FF). It can also include the tab (\u0009), linefeed (\u000A), and carriage
+	// return (\u000D) characters.
+	//
+	// An Amazon Web Services conversion compresses the passed inline session policy,
+	// managed policy ARNs, and session tags into a packed binary format that has a
+	// separate limit. Your request can fail for this limit even if your plaintext
+	// meets the other requirements. The PackedPolicySize response element indicates
+	// by percentage how close the policies and tags for your request are to the upper
+	// size limit.
+	//
+	// [Session Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+	Policy *string
+
+	// The Amazon Resource Names (ARNs) of the IAM managed policies that you want to
+	// use as managed session policies. The policies must exist in the same account as
+	// the role.
+	//
+	// This parameter is optional. You can provide up to 10 managed policy ARNs.
+	// However, the plaintext that you use for both inline and managed session policies
+	// can't exceed 2,048 characters. For more information about ARNs, see [Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces] in
+	// the Amazon Web Services General Reference.
+	//
+	// An Amazon Web Services conversion compresses the passed inline session policy,
+	// managed policy ARNs, and session tags into a packed binary format that has a
+	// separate limit. Your request can fail for this limit even if your plaintext
+	// meets the other requirements. The PackedPolicySize response element indicates
+	// by percentage how close the policies and tags for your request are to the upper
+	// size limit.
+	//
+	// Passing policies to this operation returns new temporary credentials. The
+	// resulting session's permissions are the intersection of the role's
+	// identity-based policy and the session policies. You can use the role's temporary
+	// credentials in subsequent Amazon Web Services API calls to access resources in
+	// the account that owns the role. You cannot use session policies to grant more
+	// permissions than those allowed by the identity-based policy of the role that is
+	// being assumed. For more information, see [Session Policies] in the IAM User Guide.
+	//
+	// [Session Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+	// [Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces]: https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
+	PolicyArns []types.PolicyDescriptorType
+
+	noSmithyDocumentSerde
+}
+
+// Contains the response to a successful AssumeRoleWithSAML request, including temporary Amazon Web
+// Services credentials that can be used to make Amazon Web Services requests.
+type AssumeRoleWithSAMLOutput struct {
+
+	// The identifiers for the temporary security credentials that the operation
+	// returns.
+	AssumedRoleUser *types.AssumedRoleUser
+
+	//  The value of the Recipient attribute of the SubjectConfirmationData element of
+	// the SAML assertion.
+	Audience *string
+
+	// The temporary security credentials, which include an access key ID, a secret
+	// access key, and a security (or session) token.
+	//
+	// The size of the security token that STS API operations return is not fixed. We
+	// strongly recommend that you make no assumptions about the maximum size.
+	Credentials *types.Credentials
+
+	// The value of the Issuer element of the SAML assertion.
+	Issuer *string
+
+	// A hash value based on the concatenation of the following:
+	//
+	//   - The Issuer response value.
+	//
+	//   - The Amazon Web Services account ID.
+	//
+	//   - The friendly name (the last part of the ARN) of the SAML provider in IAM.
+	//
+	// The combination of NameQualifier and Subject can be used to uniquely identify a
+	// user.
+	//
+	// The following pseudocode shows how the hash value is calculated:
+	//
+	//     BASE64 ( SHA1 ( "https://example.com/saml" + "123456789012" + "/MySAMLIdP" ) )
+	NameQualifier *string
+
+	// A percentage value that indicates the packed size of the session policies and
+	// session tags combined passed in the request. The request fails if the packed
+	// size is greater than 100 percent, which means the policies and tags exceeded the
+	// allowed space.
+	PackedPolicySize *int32
+
+	// The value in the SourceIdentity attribute in the SAML assertion.
+	//
+	// You can require users to set a source identity value when they assume a role.
+	// You do this by using the sts:SourceIdentity condition key in a role trust
+	// policy. That way, actions that are taken with the role are associated with that
+	// user. After the source identity is set, the value cannot be changed. It is
+	// present in the request for all actions that are taken by the role and persists
+	// across [chained role] sessions. You can configure your SAML identity provider to use an
+	// attribute associated with your users, like user name or email, as the source
+	// identity when calling AssumeRoleWithSAML . You do this by adding an attribute to
+	// the SAML assertion. For more information about using source identity, see [Monitor and control actions taken with assumed roles] in
+	// the IAM User Guide.
+	//
+	// The regex used to validate this parameter is a string of characters consisting
+	// of upper- and lower-case alphanumeric characters with no spaces. You can also
+	// include underscores or any of the following characters: =,.@-
+	//
+	// [chained role]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-role-chaining
+	// [Monitor and control actions taken with assumed roles]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_monitor.html
+	SourceIdentity *string
+
+	// The value of the NameID element in the Subject element of the SAML assertion.
+	Subject *string
+
+	//  The format of the name ID, as defined by the Format attribute in the NameID
+	// element of the SAML assertion. Typical examples of the format are transient or
+	// persistent .
+	//
+	// If the format includes the prefix urn:oasis:names:tc:SAML:2.0:nameid-format ,
+	// that prefix is removed. For example,
+	// urn:oasis:names:tc:SAML:2.0:nameid-format:transient is returned as transient .
+	// If the format includes any other prefix, the format is returned with no
+	// modifications.
+	SubjectType *string
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationAssumeRoleWithSAMLMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsAwsquery_serializeOpAssumeRoleWithSAML{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsAwsquery_deserializeOpAssumeRoleWithSAML{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "AssumeRoleWithSAML"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpAssumeRoleWithSAMLValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opAssumeRoleWithSAML(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opAssumeRoleWithSAML(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "AssumeRoleWithSAML",
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_AssumeRoleWithWebIdentity.go 🔗

@@ -0,0 +1,447 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/service/sts/types"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Returns a set of temporary security credentials for users who have been
+// authenticated in a mobile or web application with a web identity provider.
+// Example providers include the OAuth 2.0 providers Login with Amazon and
+// Facebook, or any OpenID Connect-compatible identity provider such as Google or [Amazon Cognito federated identities].
+//
+// For mobile applications, we recommend that you use Amazon Cognito. You can use
+// Amazon Cognito with the [Amazon Web Services SDK for iOS Developer Guide] and the [Amazon Web Services SDK for Android Developer Guide] to uniquely identify a user. You can also
+// supply the user with a consistent identity throughout the lifetime of an
+// application.
+//
+// To learn more about Amazon Cognito, see [Amazon Cognito identity pools] in Amazon Cognito Developer Guide.
+//
+// Calling AssumeRoleWithWebIdentity does not require the use of Amazon Web
+// Services security credentials. Therefore, you can distribute an application (for
+// example, on mobile devices) that requests temporary security credentials without
+// including long-term Amazon Web Services credentials in the application. You also
+// don't need to deploy server-based proxy services that use long-term Amazon Web
+// Services credentials. Instead, the identity of the caller is validated by using
+// a token from the web identity provider. For a comparison of
+// AssumeRoleWithWebIdentity with the other API operations that produce temporary
+// credentials, see [Requesting Temporary Security Credentials] and [Comparing the Amazon Web Services STS API operations] in the IAM User Guide.
+//
+// The temporary security credentials returned by this API consist of an access
+// key ID, a secret access key, and a security token. Applications can use these
+// temporary security credentials to sign calls to Amazon Web Services service API
+// operations.
+//
+// # Session Duration
+//
+// By default, the temporary security credentials created by
+// AssumeRoleWithWebIdentity last for one hour. However, you can use the optional
+// DurationSeconds parameter to specify the duration of your session. You can
+// provide a value from 900 seconds (15 minutes) up to the maximum session duration
+// setting for the role. This setting can have a value from 1 hour to 12 hours. To
+// learn how to view the maximum value for your role, see [View the Maximum Session Duration Setting for a Role] in the IAM User Guide.
+// The maximum session duration limit applies when you use the AssumeRole* API
+// operations or the assume-role* CLI commands. However the limit does not apply
+// when you use those operations to create a console URL. For more information, see
+// [Using IAM Roles] in the IAM User Guide.
+//
+// # Permissions
+//
+// The temporary security credentials created by AssumeRoleWithWebIdentity can be
+// used to make API calls to any Amazon Web Services service with the following
+// exception: you cannot call the STS GetFederationToken or GetSessionToken API
+// operations.
+//
+// (Optional) You can pass inline or managed [session policies] to this operation. You can pass a
+// single JSON policy document to use as an inline session policy. You can also
+// specify up to 10 managed policy Amazon Resource Names (ARNs) to use as managed
+// session policies. The plaintext that you use for both inline and managed session
+// policies can't exceed 2,048 characters. Passing policies to this operation
+// returns new temporary credentials. The resulting session's permissions are the
+// intersection of the role's identity-based policy and the session policies. You
+// can use the role's temporary credentials in subsequent Amazon Web Services API
+// calls to access resources in the account that owns the role. You cannot use
+// session policies to grant more permissions than those allowed by the
+// identity-based policy of the role that is being assumed. For more information,
+// see [Session Policies] in the IAM User Guide.
+//
+// # Tags
+//
+// (Optional) You can configure your IdP to pass attributes into your web identity
+// token as session tags. Each session tag consists of a key name and an associated
+// value. For more information about session tags, see [Passing Session Tags in STS] in the IAM User Guide.
+//
+// You can pass up to 50 session tags. The plaintext session tag keys can’t exceed
+// 128 characters and the values can’t exceed 256 characters. For these and
+// additional limits, see [IAM and STS Character Limits] in the IAM User Guide.
+//
+// An Amazon Web Services conversion compresses the passed inline session policy,
+// managed policy ARNs, and session tags into a packed binary format that has a
+// separate limit. Your request can fail for this limit even if your plaintext
+// meets the other requirements. The PackedPolicySize response element indicates
+// by percentage how close the policies and tags for your request are to the upper
+// size limit.
+//
+// You can pass a session tag with the same key as a tag that is attached to the
+// role. When you do, the session tag overrides the role tag with the same key.
+//
+// An administrator must grant you the permissions necessary to pass session tags.
+// The administrator can also create granular permissions to allow you to pass only
+// specific session tags. For more information, see [Tutorial: Using Tags for Attribute-Based Access Control] in the IAM User Guide.
+//
+// You can set the session tags as transitive. Transitive tags persist during role
+// chaining. For more information, see [Chaining Roles with Session Tags] in the IAM User Guide.
+//
+// # Identities
+//
+// Before your application can call AssumeRoleWithWebIdentity , you must have an
+// identity token from a supported identity provider and create a role that the
+// application can assume. The role that your application assumes must trust the
+// identity provider that is associated with the identity token. In other words,
+// the identity provider must be specified in the role's trust policy.
+//
+// Calling AssumeRoleWithWebIdentity can result in an entry in your CloudTrail
+// logs. The entry includes the [Subject] of the provided web identity token. We recommend
+// that you avoid using any personally identifiable information (PII) in this
+// field. For example, you could instead use a GUID or a pairwise identifier, as [suggested in the OIDC specification].
+//
+// For more information about how to use web identity federation and the
+// AssumeRoleWithWebIdentity API, see the following resources:
+//
+//   - [Using Web Identity Federation API Operations for Mobile Apps] and [Federation Through a Web-based Identity Provider].
+//
+//   - [Web Identity Federation Playground]. Walk through the process of
+//     authenticating through Login with Amazon, Facebook, or Google, getting
+//     temporary security credentials, and then using those credentials to make a
+//     request to Amazon Web Services.
+//
+//   - [Amazon Web Services SDK for iOS Developer Guide] and [Amazon Web Services SDK for Android Developer Guide]. These toolkits
+//     contain sample apps that show how to invoke the identity providers. The
+//     toolkits then show how to use the information from these providers to get and
+//     use temporary security credentials.
+//
+//   - [Web Identity Federation with Mobile Applications]. This article discusses
+//     web identity federation and shows an example of how to use web identity
+//     federation to get access to content in Amazon S3.
+//
+// [Amazon Web Services SDK for iOS Developer Guide]: http://aws.amazon.com/sdkforios/
+// [View the Maximum Session Duration Setting for a Role]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session
+// [Web Identity Federation Playground]: https://aws.amazon.com/blogs/aws/the-aws-web-identity-federation-playground/
+// [Amazon Web Services SDK for Android Developer Guide]: http://aws.amazon.com/sdkforandroid/
+// [IAM and STS Character Limits]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html#reference_iam-limits-entity-length
+// [Comparing the Amazon Web Services STS API operations]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#stsapi_comparison
+// [session policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+// [Requesting Temporary Security Credentials]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html
+// [Subject]: http://openid.net/specs/openid-connect-core-1_0.html#Claims
+// [Tutorial: Using Tags for Attribute-Based Access Control]: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html
+// [Amazon Cognito identity pools]: https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-identity.html
+// [Federation Through a Web-based Identity Provider]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#api_assumerolewithwebidentity
+// [Using IAM Roles]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html
+// [Session Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+// [Amazon Cognito federated identities]: https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-identity.html
+// [Passing Session Tags in STS]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html
+// [Chaining Roles with Session Tags]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html#id_session-tags_role-chaining
+// [Web Identity Federation with Mobile Applications]: http://aws.amazon.com/articles/web-identity-federation-with-mobile-applications
+// [Using Web Identity Federation API Operations for Mobile Apps]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc_manual.html
+// [suggested in the OIDC specification]: http://openid.net/specs/openid-connect-core-1_0.html#SubjectIDTypes
+func (c *Client) AssumeRoleWithWebIdentity(ctx context.Context, params *AssumeRoleWithWebIdentityInput, optFns ...func(*Options)) (*AssumeRoleWithWebIdentityOutput, error) {
+	if params == nil {
+		params = &AssumeRoleWithWebIdentityInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "AssumeRoleWithWebIdentity", params, optFns, c.addOperationAssumeRoleWithWebIdentityMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*AssumeRoleWithWebIdentityOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type AssumeRoleWithWebIdentityInput struct {
+
+	// The Amazon Resource Name (ARN) of the role that the caller is assuming.
+	//
+	// This member is required.
+	RoleArn *string
+
+	// An identifier for the assumed role session. Typically, you pass the name or
+	// identifier that is associated with the user who is using your application. That
+	// way, the temporary security credentials that your application will use are
+	// associated with that user. This session name is included as part of the ARN and
+	// assumed role ID in the AssumedRoleUser response element.
+	//
+	// The regex used to validate this parameter is a string of characters consisting
+	// of upper- and lower-case alphanumeric characters with no spaces. You can also
+	// include underscores or any of the following characters: =,.@-
+	//
+	// This member is required.
+	RoleSessionName *string
+
+	// The OAuth 2.0 access token or OpenID Connect ID token that is provided by the
+	// identity provider. Your application must get this token by authenticating the
+	// user who is using your application with a web identity provider before the
+	// application makes an AssumeRoleWithWebIdentity call. Only tokens with RSA
+	// algorithms (RS256) are supported.
+	//
+	// This member is required.
+	WebIdentityToken *string
+
+	// The duration, in seconds, of the role session. The value can range from 900
+	// seconds (15 minutes) up to the maximum session duration setting for the role.
+	// This setting can have a value from 1 hour to 12 hours. If you specify a value
+	// higher than this setting, the operation fails. For example, if you specify a
+	// session duration of 12 hours, but your administrator set the maximum session
+	// duration to 6 hours, your operation fails. To learn how to view the maximum
+	// value for your role, see [View the Maximum Session Duration Setting for a Role] in the IAM User Guide.
+	//
+	// By default, the value is set to 3600 seconds.
+	//
+	// The DurationSeconds parameter is separate from the duration of a console
+	// session that you might request using the returned credentials. The request to
+	// the federation endpoint for a console sign-in token takes a SessionDuration
+	// parameter that specifies the maximum length of the console session. For more
+	// information, see [Creating a URL that Enables Federated Users to Access the Amazon Web Services Management Console] in the IAM User Guide.
+	//
+	// [View the Maximum Session Duration Setting for a Role]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session
+	// [Creating a URL that Enables Federated Users to Access the Amazon Web Services Management Console]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-custom-url.html
+	DurationSeconds *int32
+
+	// An IAM policy in JSON format that you want to use as an inline session policy.
+	//
+	// This parameter is optional. Passing policies to this operation returns new
+	// temporary credentials. The resulting session's permissions are the intersection
+	// of the role's identity-based policy and the session policies. You can use the
+	// role's temporary credentials in subsequent Amazon Web Services API calls to
+	// access resources in the account that owns the role. You cannot use session
+	// policies to grant more permissions than those allowed by the identity-based
+	// policy of the role that is being assumed. For more information, see [Session Policies]in the IAM
+	// User Guide.
+	//
+	// The plaintext that you use for both inline and managed session policies can't
+	// exceed 2,048 characters. The JSON policy characters can be any ASCII character
+	// from the space character to the end of the valid character list (\u0020 through
+	// \u00FF). It can also include the tab (\u0009), linefeed (\u000A), and carriage
+	// return (\u000D) characters.
+	//
+	// An Amazon Web Services conversion compresses the passed inline session policy,
+	// managed policy ARNs, and session tags into a packed binary format that has a
+	// separate limit. Your request can fail for this limit even if your plaintext
+	// meets the other requirements. The PackedPolicySize response element indicates
+	// by percentage how close the policies and tags for your request are to the upper
+	// size limit.
+	//
+	// [Session Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+	Policy *string
+
+	// The Amazon Resource Names (ARNs) of the IAM managed policies that you want to
+	// use as managed session policies. The policies must exist in the same account as
+	// the role.
+	//
+	// This parameter is optional. You can provide up to 10 managed policy ARNs.
+	// However, the plaintext that you use for both inline and managed session policies
+	// can't exceed 2,048 characters. For more information about ARNs, see [Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces]in the
+	// Amazon Web Services General Reference.
+	//
+	// An Amazon Web Services conversion compresses the passed inline session policy,
+	// managed policy ARNs, and session tags into a packed binary format that has a
+	// separate limit. Your request can fail for this limit even if your plaintext
+	// meets the other requirements. The PackedPolicySize response element indicates
+	// by percentage how close the policies and tags for your request are to the upper
+	// size limit.
+	//
+	// Passing policies to this operation returns new temporary credentials. The
+	// resulting session's permissions are the intersection of the role's
+	// identity-based policy and the session policies. You can use the role's temporary
+	// credentials in subsequent Amazon Web Services API calls to access resources in
+	// the account that owns the role. You cannot use session policies to grant more
+	// permissions than those allowed by the identity-based policy of the role that is
+	// being assumed. For more information, see [Session Policies]in the IAM User Guide.
+	//
+	// [Session Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+	// [Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces]: https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
+	PolicyArns []types.PolicyDescriptorType
+
+	// The fully qualified host component of the domain name of the OAuth 2.0 identity
+	// provider. Do not specify this value for an OpenID Connect identity provider.
+	//
+	// Currently www.amazon.com and graph.facebook.com are the only supported identity
+	// providers for OAuth 2.0 access tokens. Do not include URL schemes and port
+	// numbers.
+	//
+	// Do not specify this value for OpenID Connect ID tokens.
+	ProviderId *string
+
+	noSmithyDocumentSerde
+}
+
+// Contains the response to a successful AssumeRoleWithWebIdentity request, including temporary Amazon Web
+// Services credentials that can be used to make Amazon Web Services requests.
+type AssumeRoleWithWebIdentityOutput struct {
+
+	// The Amazon Resource Name (ARN) and the assumed role ID, which are identifiers
+	// that you can use to refer to the resulting temporary security credentials. For
+	// example, you can reference these credentials as a principal in a resource-based
+	// policy by using the ARN or assumed role ID. The ARN and ID include the
+	// RoleSessionName that you specified when you called AssumeRole .
+	AssumedRoleUser *types.AssumedRoleUser
+
+	// The intended audience (also known as client ID) of the web identity token. This
+	// is traditionally the client identifier issued to the application that requested
+	// the web identity token.
+	Audience *string
+
+	// The temporary security credentials, which include an access key ID, a secret
+	// access key, and a security token.
+	//
+	// The size of the security token that STS API operations return is not fixed. We
+	// strongly recommend that you make no assumptions about the maximum size.
+	Credentials *types.Credentials
+
+	// A percentage value that indicates the packed size of the session policies and
+	// session tags combined passed in the request. The request fails if the packed
+	// size is greater than 100 percent, which means the policies and tags exceeded the
+	// allowed space.
+	PackedPolicySize *int32
+
+	//  The issuing authority of the web identity token presented. For OpenID Connect
+	// ID tokens, this contains the value of the iss field. For OAuth 2.0 access
+	// tokens, this contains the value of the ProviderId parameter that was passed in
+	// the AssumeRoleWithWebIdentity request.
+	Provider *string
+
+	// The value of the source identity that is returned in the JSON web token (JWT)
+	// from the identity provider.
+	//
+	// You can require users to set a source identity value when they assume a role.
+	// You do this by using the sts:SourceIdentity condition key in a role trust
+	// policy. That way, actions that are taken with the role are associated with that
+	// user. After the source identity is set, the value cannot be changed. It is
+	// present in the request for all actions that are taken by the role and persists
+	// across [chained role]sessions. You can configure your identity provider to use an attribute
+	// associated with your users, like user name or email, as the source identity when
+	// calling AssumeRoleWithWebIdentity . You do this by adding a claim to the JSON
+	// web token. To learn more about OIDC tokens and claims, see [Using Tokens with User Pools]in the Amazon
+	// Cognito Developer Guide. For more information about using source identity, see [Monitor and control actions taken with assumed roles]
+	// in the IAM User Guide.
+	//
+	// The regex used to validate this parameter is a string of characters consisting
+	// of upper- and lower-case alphanumeric characters with no spaces. You can also
+	// include underscores or any of the following characters: =,.@-
+	//
+	// [chained role]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts#iam-term-role-chaining
+	// [Monitor and control actions taken with assumed roles]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_monitor.html
+	// [Using Tokens with User Pools]: https://docs.aws.amazon.com/cognito/latest/developerguide/amazon-cognito-user-pools-using-tokens-with-identity-providers.html
+	SourceIdentity *string
+
+	// The unique user identifier that is returned by the identity provider. This
+	// identifier is associated with the WebIdentityToken that was submitted with the
+	// AssumeRoleWithWebIdentity call. The identifier is typically unique to the user
+	// and the application that acquired the WebIdentityToken (pairwise identifier).
+	// For OpenID Connect ID tokens, this field contains the value returned by the
+	// identity provider as the token's sub (Subject) claim.
+	SubjectFromWebIdentityToken *string
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationAssumeRoleWithWebIdentityMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsAwsquery_serializeOpAssumeRoleWithWebIdentity{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsAwsquery_deserializeOpAssumeRoleWithWebIdentity{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "AssumeRoleWithWebIdentity"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpAssumeRoleWithWebIdentityValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opAssumeRoleWithWebIdentity(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opAssumeRoleWithWebIdentity(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "AssumeRoleWithWebIdentity",
+	}
+}
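
Aside: the `DurationSeconds` rules documented above (a 900-second floor, a 3600-second default, and failure when the request exceeds the role's configured maximum) can be sketched with a small stdlib-only helper. The function name, error messages, and the 6-hour maximum below are illustrative, not part of the SDK; the real enforcement happens server-side in STS.

```go
package main

import (
	"errors"
	"fmt"
)

// resolveSessionDuration mirrors the documented DurationSeconds behavior of
// AssumeRoleWithWebIdentity: a nil value falls back to the 3600-second
// default, values below 900 seconds are rejected, and values above the
// role's administrator-configured maximum fail. Illustrative only.
func resolveSessionDuration(requested *int32, roleMax int32) (int32, error) {
	if requested == nil {
		return 3600, nil // documented default
	}
	if *requested < 900 {
		return 0, errors.New("session duration below the 900-second minimum")
	}
	if *requested > roleMax {
		return 0, errors.New("session duration exceeds the role's maximum session duration")
	}
	return *requested, nil
}

func ptr(v int32) *int32 { return &v }

func main() {
	roleMax := int32(21600) // e.g. an administrator-set 6-hour maximum
	d, _ := resolveSessionDuration(nil, roleMax)
	fmt.Println(d)

	if _, err := resolveSessionDuration(ptr(43200), roleMax); err != nil {
		fmt.Println("rejected:", err)
	}
}
```

This matches the example in the comment: requesting 12 hours against a 6-hour maximum fails rather than being silently clamped.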

vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_DecodeAuthorizationMessage.go 🔗

@@ -0,0 +1,177 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Decodes additional information about the authorization status of a request from
+// an encoded message returned in response to an Amazon Web Services request.
+//
+// For example, if a user is not authorized to perform an operation that he or she
+// has requested, the request returns a Client.UnauthorizedOperation response (an
+// HTTP 403 response). Some Amazon Web Services operations additionally return an
+// encoded message that can provide details about this authorization failure.
+//
+// Only certain Amazon Web Services operations return an encoded authorization
+// message. The documentation for an individual operation indicates whether that
+// operation returns an encoded message in addition to returning an HTTP code.
+//
+// The message is encoded because the details of the authorization status can
+// contain privileged information that the user who requested the operation should
+// not see. To decode an authorization status message, a user must be granted
+// permissions through an IAM [policy]to request the DecodeAuthorizationMessage (
+// sts:DecodeAuthorizationMessage ) action.
+//
+// The decoded message includes the following type of information:
+//
+//   - Whether the request was denied due to an explicit deny or due to the
+//     absence of an explicit allow. For more information, see [Determining Whether a Request is Allowed or Denied]in the IAM User
+//     Guide.
+//
+//   - The principal who made the request.
+//
+//   - The requested action.
+//
+//   - The requested resource.
+//
+//   - The values of condition keys in the context of the user's request.
+//
+// [Determining Whether a Request is Allowed or Denied]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#policy-eval-denyallow
+// [policy]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
+func (c *Client) DecodeAuthorizationMessage(ctx context.Context, params *DecodeAuthorizationMessageInput, optFns ...func(*Options)) (*DecodeAuthorizationMessageOutput, error) {
+	if params == nil {
+		params = &DecodeAuthorizationMessageInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "DecodeAuthorizationMessage", params, optFns, c.addOperationDecodeAuthorizationMessageMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*DecodeAuthorizationMessageOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type DecodeAuthorizationMessageInput struct {
+
+	// The encoded message that was returned with the response.
+	//
+	// This member is required.
+	EncodedMessage *string
+
+	noSmithyDocumentSerde
+}
+
+// A document that contains additional information about the authorization status
+// of a request from an encoded message that is returned in response to an Amazon
+// Web Services request.
+type DecodeAuthorizationMessageOutput struct {
+
+	// The API returns a response with the decoded message.
+	DecodedMessage *string
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationDecodeAuthorizationMessageMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsAwsquery_serializeOpDecodeAuthorizationMessage{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsAwsquery_deserializeOpDecodeAuthorizationMessage{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "DecodeAuthorizationMessage"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addComputePayloadSHA256(stack); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpDecodeAuthorizationMessageValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opDecodeAuthorizationMessage(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opDecodeAuthorizationMessage(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "DecodeAuthorizationMessage",
+	}
+}
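
Aside: as the operation comment above notes, a caller must be granted the `sts:DecodeAuthorizationMessage` action through an IAM policy before this call succeeds. A minimal identity-based policy granting it might look like the following (the `Sid` is illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDecodeAuthorizationMessage",
      "Effect": "Allow",
      "Action": "sts:DecodeAuthorizationMessage",
      "Resource": "*"
    }
  ]
}
```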

vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_GetAccessKeyInfo.go 🔗

@@ -0,0 +1,168 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Returns the account identifier for the specified access key ID.
+//
+// Access keys consist of two parts: an access key ID (for example,
+// AKIAIOSFODNN7EXAMPLE ) and a secret access key (for example,
+// wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY ). For more information about access
+// keys, see [Managing Access Keys for IAM Users]in the IAM User Guide.
+//
+// When you pass an access key ID to this operation, it returns the ID of the
+// Amazon Web Services account to which the keys belong. Access key IDs beginning
+// with AKIA are long-term credentials for an IAM user or the Amazon Web Services
+// account root user. Access key IDs beginning with ASIA are temporary credentials
+// that are created using STS operations. If the account in the response belongs to
+// you, you can sign in as the root user and review your root user access keys.
+// Then, you can pull a [credentials report]to learn which IAM user owns the keys. To learn who
+// requested the temporary credentials for an ASIA access key, view the STS events
+// in your [CloudTrail logs]in the IAM User Guide.
+//
+// This operation does not indicate the state of the access key. The key might be
+// active, inactive, or deleted. Active keys might not have permissions to perform
+// an operation. Providing a deleted access key might return an error that the key
+// doesn't exist.
+//
+// [credentials report]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html
+// [CloudTrail logs]: https://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html
+// [Managing Access Keys for IAM Users]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
+func (c *Client) GetAccessKeyInfo(ctx context.Context, params *GetAccessKeyInfoInput, optFns ...func(*Options)) (*GetAccessKeyInfoOutput, error) {
+	if params == nil {
+		params = &GetAccessKeyInfoInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "GetAccessKeyInfo", params, optFns, c.addOperationGetAccessKeyInfoMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*GetAccessKeyInfoOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type GetAccessKeyInfoInput struct {
+
+	// The identifier of an access key.
+	//
+	// This parameter allows (through its regex pattern) a string of characters that
+	// can consist of any upper- or lowercase letter or digit.
+	//
+	// This member is required.
+	AccessKeyId *string
+
+	noSmithyDocumentSerde
+}
+
+type GetAccessKeyInfoOutput struct {
+
+	// The number used to identify the Amazon Web Services account.
+	Account *string
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationGetAccessKeyInfoMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsAwsquery_serializeOpGetAccessKeyInfo{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsAwsquery_deserializeOpGetAccessKeyInfo{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "GetAccessKeyInfo"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addComputePayloadSHA256(stack); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpGetAccessKeyInfoValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opGetAccessKeyInfo(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opGetAccessKeyInfo(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "GetAccessKeyInfo",
+	}
+}
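
Aside: the `AKIA`/`ASIA` distinction described in the operation comment above can be sketched as a small classifier. The helper name is ours, not part of the SDK, and real code should rely on `GetAccessKeyInfo` itself for the account lookup rather than prefix inspection:

```go
package main

import (
	"fmt"
	"strings"
)

// credentialKind reports the documented meaning of an access key ID prefix:
// "AKIA" marks long-term credentials for an IAM user or the account root
// user, while "ASIA" marks temporary credentials created by STS operations.
// Anything else is unknown to this sketch. Illustrative only.
func credentialKind(accessKeyID string) string {
	switch {
	case strings.HasPrefix(accessKeyID, "AKIA"):
		return "long-term"
	case strings.HasPrefix(accessKeyID, "ASIA"):
		return "temporary"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(credentialKind("AKIAIOSFODNN7EXAMPLE")) // long-term
	fmt.Println(credentialKind("ASIAEXAMPLEKEYID1234")) // temporary
}
```

Note that, as the comment warns, neither prefix tells you whether the key is active, inactive, or deleted.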

vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_GetCallerIdentity.go 🔗

@@ -0,0 +1,180 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/aws/signer/v4"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Returns details about the IAM user or role whose credentials are used to call
+// the operation.
+//
+// No permissions are required to perform this operation. If an administrator
+// attaches a policy to your identity that explicitly denies access to the
+// sts:GetCallerIdentity action, you can still perform this operation. Permissions
+// are not required because the same information is returned when access is denied.
+// To view an example response, see [I Am Not Authorized to Perform: iam:DeleteVirtualMFADevice]in the IAM User Guide.
+//
+// [I Am Not Authorized to Perform: iam:DeleteVirtualMFADevice]: https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_access-denied-delete-mfa
+func (c *Client) GetCallerIdentity(ctx context.Context, params *GetCallerIdentityInput, optFns ...func(*Options)) (*GetCallerIdentityOutput, error) {
+	if params == nil {
+		params = &GetCallerIdentityInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "GetCallerIdentity", params, optFns, c.addOperationGetCallerIdentityMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*GetCallerIdentityOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type GetCallerIdentityInput struct {
+	noSmithyDocumentSerde
+}
+
+// Contains the response to a successful GetCallerIdentity request, including information about the
+// entity making the request.
+type GetCallerIdentityOutput struct {
+
+	// The Amazon Web Services account ID number of the account that owns or contains
+	// the calling entity.
+	Account *string
+
+	// The Amazon Web Services ARN associated with the calling entity.
+	Arn *string
+
+	// The unique identifier of the calling entity. The exact value depends on the
+	// type of entity that is making the call. The values returned are those listed in
+	// the aws:userid column in the [Principal table]found on the Policy Variables reference page in
+	// the IAM User Guide.
+	//
+	// [Principal table]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html#principaltable
+	UserId *string
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationGetCallerIdentityMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsAwsquery_serializeOpGetCallerIdentity{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsAwsquery_deserializeOpGetCallerIdentity{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "GetCallerIdentity"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addComputePayloadSHA256(stack); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opGetCallerIdentity(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opGetCallerIdentity(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "GetCallerIdentity",
+	}
+}
+
+// PresignGetCallerIdentity is used to generate a presigned HTTP Request which
+// contains presigned URL, signed headers and HTTP method used.
+func (c *PresignClient) PresignGetCallerIdentity(ctx context.Context, params *GetCallerIdentityInput, optFns ...func(*PresignOptions)) (*v4.PresignedHTTPRequest, error) {
+	if params == nil {
+		params = &GetCallerIdentityInput{}
+	}
+	options := c.options.copy()
+	for _, fn := range optFns {
+		fn(&options)
+	}
+	clientOptFns := append(options.ClientOptions, withNopHTTPClientAPIOption)
+
+	result, _, err := c.client.invokeOperation(ctx, "GetCallerIdentity", params, clientOptFns,
+		c.client.addOperationGetCallerIdentityMiddlewares,
+		presignConverter(options).convertToPresignMiddleware,
+	)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*v4.PresignedHTTPRequest)
+	return out, nil
+}
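
Aside: `PresignGetCallerIdentity` is typically used for caller-identity handshakes, where one party generates the presigned request and a verifier replays it against STS to learn who signed it. Before replaying, a verifier can cheaply check that the URL carries the SigV4 query-signing parameters; this stdlib-only sketch does that shape check only (the URL below is a placeholder, and this does not validate the signature itself):

```go
package main

import (
	"fmt"
	"net/url"
)

// checkPresignedURL verifies that a URL carries the query parameters that
// SigV4 query signing always includes. It is a pre-flight sanity check for
// a verifier service, not a signature validation. Illustrative only.
func checkPresignedURL(raw string) error {
	u, err := url.Parse(raw)
	if err != nil {
		return err
	}
	q := u.Query()
	for _, p := range []string{"X-Amz-Algorithm", "X-Amz-Credential", "X-Amz-Signature"} {
		if q.Get(p) == "" {
			return fmt.Errorf("missing query parameter %s", p)
		}
	}
	return nil
}

func main() {
	// Placeholder with the shape of a presigned GetCallerIdentity request.
	raw := "https://sts.amazonaws.com/?Action=GetCallerIdentity" +
		"&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=EXAMPLE&X-Amz-Signature=EXAMPLE"
	fmt.Println(checkPresignedURL(raw))
}
```

A verifier that is satisfied with the shape would then issue the HTTP request itself and inspect the `GetCallerIdentity` response for the `Account`, `Arn`, and `UserId` fields shown in the output struct above.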

vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_GetFederationToken.go 🔗

@@ -0,0 +1,381 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/service/sts/types"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Returns a set of temporary security credentials (consisting of an access key
+// ID, a secret access key, and a security token) for a user. A typical use is in a
+// proxy application that gets temporary security credentials on behalf of
+// distributed applications inside a corporate network.
+//
+// You must call the GetFederationToken operation using the long-term security
+// credentials of an IAM user. As a result, this call is appropriate in contexts
+// where those credentials can be safeguarded, usually in a server-based
+// application. For a comparison of GetFederationToken with the other API
+// operations that produce temporary credentials, see [Requesting Temporary Security Credentials]and [Comparing the Amazon Web Services STS API operations] in the IAM User Guide.
+//
+// Although it is possible to call GetFederationToken using the security
+// credentials of an Amazon Web Services account root user rather than an IAM user
+// that you create for the purpose of a proxy application, we do not recommend it.
+// For more information, see [Safeguard your root user credentials and don't use them for everyday tasks]in the IAM User Guide.
+//
+// You can create a mobile-based or browser-based app that can authenticate users
+// using a web identity provider like Login with Amazon, Facebook, Google, or an
+// OpenID Connect-compatible identity provider. In this case, we recommend that you
+// use [Amazon Cognito]or AssumeRoleWithWebIdentity . For more information, see [Federation Through a Web-based Identity Provider] in the IAM User
+// Guide.
+//
+// # Session duration
+//
+// The temporary credentials are valid for the specified duration, from 900
+// seconds (15 minutes) up to a maximum of 129,600 seconds (36 hours). The default
+// session duration is 43,200 seconds (12 hours). Temporary credentials obtained by
+// using the root user credentials have a maximum duration of 3,600 seconds (1
+// hour).
+//
+// # Permissions
+//
+// You can use the temporary credentials created by GetFederationToken in any
+// Amazon Web Services service with the following exceptions:
+//
+//   - You cannot call any IAM operations using the CLI or the Amazon Web Services
+//     API. This limitation does not apply to console sessions.
+//
+//   - You cannot call any STS operations except GetCallerIdentity .
+//
+// You can use temporary credentials for single sign-on (SSO) to the console.
+//
+// You must pass an inline or managed [session policy] to this operation. You can pass a single
+// JSON policy document to use as an inline session policy. You can also specify up
+// to 10 managed policy Amazon Resource Names (ARNs) to use as managed session
+// policies. The plaintext that you use for both inline and managed session
+// policies can't exceed 2,048 characters.
+//
+// Though the session policy parameters are optional, if you do not pass a policy,
+// then the resulting federated user session has no permissions. When you pass
+// session policies, the session permissions are the intersection of the IAM user
+// policies and the session policies that you pass. This gives you a way to further
+// restrict the permissions for a federated user. You cannot use session policies
+// to grant more permissions than those that are defined in the permissions policy
+// of the IAM user. For more information, see [Session Policies] in the IAM User Guide. For
+// information about using GetFederationToken to create temporary security
+// credentials, see [GetFederationToken—Federation Through a Custom Identity Broker].
+//
+// You can use the credentials to access a resource that has a resource-based
+// policy. If that policy specifically references the federated user session in the
+// Principal element of the policy, the session has the permissions allowed by the
+// policy. These permissions are granted in addition to the permissions granted by
+// the session policies.
+//
+// # Tags
+//
+// (Optional) You can pass tag key-value pairs to your session. These are called
+// session tags. For more information about session tags, see [Passing Session Tags in STS] in the IAM User
+// Guide.
+//
+// An administrator must grant you the permissions necessary to pass session tags.
+// The administrator can also create granular permissions to allow you to pass only
+// specific session tags. For more information, see [Tutorial: Using Tags for Attribute-Based Access Control] in the IAM User Guide.
+//
+// Tag key–value pairs are not case sensitive, but case is preserved. This means
+// that you cannot have separate Department and department tag keys. Assume that
+// the user that you are federating has the Department = Marketing tag and you
+// pass the department = engineering session tag. Department and department are
+// not saved as separate tags, and the session tag passed in the request takes
+// precedence over the user tag.
+//
+// [Federation Through a Web-based Identity Provider]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#api_assumerolewithwebidentity
+// [session policy]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+// [Amazon Cognito]: http://aws.amazon.com/cognito/
+// [Session Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+// [Passing Session Tags in STS]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html
+// [GetFederationToken—Federation Through a Custom Identity Broker]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#api_getfederationtoken
+// [Comparing the Amazon Web Services STS API operations]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#stsapi_comparison
+// [Safeguard your root user credentials and don't use them for everyday tasks]: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#lock-away-credentials
+// [Requesting Temporary Security Credentials]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html
+// [Tutorial: Using Tags for Attribute-Based Access Control]: https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html
+func (c *Client) GetFederationToken(ctx context.Context, params *GetFederationTokenInput, optFns ...func(*Options)) (*GetFederationTokenOutput, error) {
+	if params == nil {
+		params = &GetFederationTokenInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "GetFederationToken", params, optFns, c.addOperationGetFederationTokenMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*GetFederationTokenOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type GetFederationTokenInput struct {
+
+	// The name of the federated user. The name is used as an identifier for the
+	// temporary security credentials (such as Bob ). For example, you can reference
+	// the federated user name in a resource-based policy, such as in an Amazon S3
+	// bucket policy.
+	//
+	// The regex used to validate this parameter is a string of characters consisting
+	// of upper- and lower-case alphanumeric characters with no spaces. You can also
+	// include underscores or any of the following characters: =,.@-
+	//
+	// This member is required.
+	Name *string
+
+	// The duration, in seconds, that the session should last. Acceptable durations
+	// for federation sessions range from 900 seconds (15 minutes) to 129,600 seconds
+	// (36 hours), with 43,200 seconds (12 hours) as the default. Sessions obtained
+	// using root user credentials are restricted to a maximum of 3,600 seconds (one
+	// hour). If the specified duration is longer than one hour, the session obtained
+	// by using root user credentials defaults to one hour.
+	DurationSeconds *int32
+
+	// An IAM policy in JSON format that you want to use as an inline session policy.
+	//
+	// You must pass an inline or managed [session policy] to this operation. You can pass a single
+	// JSON policy document to use as an inline session policy. You can also specify up
+	// to 10 managed policy Amazon Resource Names (ARNs) to use as managed session
+	// policies.
+	//
+	// This parameter is optional. However, if you do not pass any session policies,
+	// then the resulting federated user session has no permissions.
+	//
+	// When you pass session policies, the session permissions are the intersection of
+	// the IAM user policies and the session policies that you pass. This gives you a
+	// way to further restrict the permissions for a federated user. You cannot use
+	// session policies to grant more permissions than those that are defined in the
+// permissions policy of the IAM user. For more information, see [Session Policies] in the IAM User
+	// Guide.
+	//
+	// The resulting credentials can be used to access a resource that has a
+	// resource-based policy. If that policy specifically references the federated user
+	// session in the Principal element of the policy, the session has the permissions
+	// allowed by the policy. These permissions are granted in addition to the
+	// permissions that are granted by the session policies.
+	//
+	// The plaintext that you use for both inline and managed session policies can't
+	// exceed 2,048 characters. The JSON policy characters can be any ASCII character
+	// from the space character to the end of the valid character list (\u0020 through
+	// \u00FF). It can also include the tab (\u0009), linefeed (\u000A), and carriage
+	// return (\u000D) characters.
+	//
+	// An Amazon Web Services conversion compresses the passed inline session policy,
+	// managed policy ARNs, and session tags into a packed binary format that has a
+	// separate limit. Your request can fail for this limit even if your plaintext
+	// meets the other requirements. The PackedPolicySize response element indicates
+	// by percentage how close the policies and tags for your request are to the upper
+	// size limit.
+	//
+	// [session policy]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+	// [Session Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+	Policy *string
+
+	// The Amazon Resource Names (ARNs) of the IAM managed policies that you want to
+	// use as a managed session policy. The policies must exist in the same account as
+	// the IAM user that is requesting federated access.
+	//
+	// You must pass an inline or managed [session policy] to this operation. You can pass a single
+	// JSON policy document to use as an inline session policy. You can also specify up
+	// to 10 managed policy Amazon Resource Names (ARNs) to use as managed session
+	// policies. The plaintext that you use for both inline and managed session
+	// policies can't exceed 2,048 characters. You can provide up to 10 managed policy
+// ARNs. For more information about ARNs, see [Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces] in the Amazon Web Services General
+	// Reference.
+	//
+	// This parameter is optional. However, if you do not pass any session policies,
+	// then the resulting federated user session has no permissions.
+	//
+	// When you pass session policies, the session permissions are the intersection of
+	// the IAM user policies and the session policies that you pass. This gives you a
+	// way to further restrict the permissions for a federated user. You cannot use
+	// session policies to grant more permissions than those that are defined in the
+// permissions policy of the IAM user. For more information, see [Session Policies] in the IAM User
+	// Guide.
+	//
+	// The resulting credentials can be used to access a resource that has a
+	// resource-based policy. If that policy specifically references the federated user
+	// session in the Principal element of the policy, the session has the permissions
+	// allowed by the policy. These permissions are granted in addition to the
+	// permissions that are granted by the session policies.
+	//
+	// An Amazon Web Services conversion compresses the passed inline session policy,
+	// managed policy ARNs, and session tags into a packed binary format that has a
+	// separate limit. Your request can fail for this limit even if your plaintext
+	// meets the other requirements. The PackedPolicySize response element indicates
+	// by percentage how close the policies and tags for your request are to the upper
+	// size limit.
+	//
+	// [session policy]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+	// [Session Policies]: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session
+	// [Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces]: https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
+	PolicyArns []types.PolicyDescriptorType
+
+	// A list of session tags. Each session tag consists of a key name and an
+// associated value. For more information about session tags, see [Passing Session Tags in STS] in the IAM User
+	// Guide.
+	//
+	// This parameter is optional. You can pass up to 50 session tags. The plaintext
+	// session tag keys can’t exceed 128 characters and the values can’t exceed 256
+// characters. For these and additional limits, see [IAM and STS Character Limits] in the IAM User Guide.
+	//
+	// An Amazon Web Services conversion compresses the passed inline session policy,
+	// managed policy ARNs, and session tags into a packed binary format that has a
+	// separate limit. Your request can fail for this limit even if your plaintext
+	// meets the other requirements. The PackedPolicySize response element indicates
+	// by percentage how close the policies and tags for your request are to the upper
+	// size limit.
+	//
+	// You can pass a session tag with the same key as a tag that is already attached
+	// to the user you are federating. When you do, session tags override a user tag
+	// with the same key.
+	//
+	// Tag key–value pairs are not case sensitive, but case is preserved. This means
+	// that you cannot have separate Department and department tag keys. Assume that
+	// the role has the Department = Marketing tag and you pass the department =
+	// engineering session tag. Department and department are not saved as separate
+	// tags, and the session tag passed in the request takes precedence over the role
+	// tag.
+	//
+	// [Passing Session Tags in STS]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html
+	// [IAM and STS Character Limits]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html#reference_iam-limits-entity-length
+	Tags []types.Tag
+
+	noSmithyDocumentSerde
+}
+
+// Contains the response to a successful GetFederationToken request, including temporary Amazon Web
+// Services credentials that can be used to make Amazon Web Services requests.
+type GetFederationTokenOutput struct {
+
+	// The temporary security credentials, which include an access key ID, a secret
+	// access key, and a security (or session) token.
+	//
+	// The size of the security token that STS API operations return is not fixed. We
+	// strongly recommend that you make no assumptions about the maximum size.
+	Credentials *types.Credentials
+
+	// Identifiers for the federated user associated with the credentials (such as
+	// arn:aws:sts::123456789012:federated-user/Bob or 123456789012:Bob ). You can use
+	// the federated user's ARN in your resource-based policies, such as an Amazon S3
+	// bucket policy.
+	FederatedUser *types.FederatedUser
+
+	// A percentage value that indicates the packed size of the session policies and
+	// session tags combined passed in the request. The request fails if the packed
+	// size is greater than 100 percent, which means the policies and tags exceeded the
+	// allowed space.
+	PackedPolicySize *int32
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationGetFederationTokenMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsAwsquery_serializeOpGetFederationToken{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsAwsquery_deserializeOpGetFederationToken{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "GetFederationToken"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addComputePayloadSHA256(stack); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = addOpGetFederationTokenValidationMiddleware(stack); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opGetFederationToken(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opGetFederationToken(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "GetFederationToken",
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sts/api_op_GetSessionToken.go 🔗

@@ -0,0 +1,227 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	"github.com/aws/aws-sdk-go-v2/service/sts/types"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+// Returns a set of temporary credentials for an Amazon Web Services account or
+// IAM user. The credentials consist of an access key ID, a secret access key, and
+// a security token. Typically, you use GetSessionToken if you want to use MFA to
+// protect programmatic calls to specific Amazon Web Services API operations like
+// Amazon EC2 StopInstances .
+//
+// MFA-enabled IAM users must call GetSessionToken and submit an MFA code that is
+// associated with their MFA device. Using the temporary security credentials that
+// the call returns, IAM users can then make programmatic calls to API operations
+// that require MFA authentication. An incorrect MFA code causes the API to return
+// an access denied error. For a comparison of GetSessionToken with the other API
+// operations that produce temporary credentials, see [Requesting Temporary Security Credentials] and [Comparing the Amazon Web Services STS API operations] in the IAM User Guide.
+//
+// No permissions are required for users to perform this operation. The purpose of
+// the sts:GetSessionToken operation is to authenticate the user using MFA. You
+// cannot use policies to control authentication operations. For more information,
+// see [Permissions for GetSessionToken] in the IAM User Guide.
+//
+// # Session Duration
+//
+// The GetSessionToken operation must be called by using the long-term Amazon Web
+// Services security credentials of an IAM user. Credentials that are created by
+// IAM users are valid for the duration that you specify. This duration can range
+// from 900 seconds (15 minutes) up to a maximum of 129,600 seconds (36 hours),
+// with a default of 43,200 seconds (12 hours). Credentials based on account
+// credentials can range from 900 seconds (15 minutes) up to 3,600 seconds (1
+// hour), with a default of 1 hour.
+//
+// # Permissions
+//
+// The temporary security credentials created by GetSessionToken can be used to
+// make API calls to any Amazon Web Services service with the following exceptions:
+//
+//   - You cannot call any IAM API operations unless MFA authentication
+//     information is included in the request.
+//
+//   - You cannot call any STS API except AssumeRole or GetCallerIdentity .
+//
+// The credentials that GetSessionToken returns are based on permissions
+// associated with the IAM user whose credentials were used to call the operation.
+// The temporary credentials have the same permissions as the IAM user.
+//
+// Although it is possible to call GetSessionToken using the security credentials
+// of an Amazon Web Services account root user rather than an IAM user, we do not
+// recommend it. If GetSessionToken is called using root user credentials, the
+// temporary credentials have root user permissions. For more information, see [Safeguard your root user credentials and don't use them for everyday tasks] in
+// the IAM User Guide.
+//
+// For more information about using GetSessionToken to create temporary
+// credentials, see [Temporary Credentials for Users in Untrusted Environments] in the IAM User Guide.
+//
+// [Permissions for GetSessionToken]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_control-access_getsessiontoken.html
+// [Comparing the Amazon Web Services STS API operations]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#stsapi_comparison
+// [Temporary Credentials for Users in Untrusted Environments]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html#api_getsessiontoken
+// [Safeguard your root user credentials and don't use them for everyday tasks]: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#lock-away-credentials
+// [Requesting Temporary Security Credentials]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html
+func (c *Client) GetSessionToken(ctx context.Context, params *GetSessionTokenInput, optFns ...func(*Options)) (*GetSessionTokenOutput, error) {
+	if params == nil {
+		params = &GetSessionTokenInput{}
+	}
+
+	result, metadata, err := c.invokeOperation(ctx, "GetSessionToken", params, optFns, c.addOperationGetSessionTokenMiddlewares)
+	if err != nil {
+		return nil, err
+	}
+
+	out := result.(*GetSessionTokenOutput)
+	out.ResultMetadata = metadata
+	return out, nil
+}
+
+type GetSessionTokenInput struct {
+
+	// The duration, in seconds, that the credentials should remain valid. Acceptable
+	// durations for IAM user sessions range from 900 seconds (15 minutes) to 129,600
+	// seconds (36 hours), with 43,200 seconds (12 hours) as the default. Sessions for
+	// Amazon Web Services account owners are restricted to a maximum of 3,600 seconds
+	// (one hour). If the duration is longer than one hour, the session for Amazon Web
+	// Services account owners defaults to one hour.
+	DurationSeconds *int32
+
+	// The identification number of the MFA device that is associated with the IAM
+	// user who is making the GetSessionToken call. Specify this value if the IAM user
+	// has a policy that requires MFA authentication. The value is either the serial
+	// number for a hardware device (such as GAHT12345678 ) or an Amazon Resource Name
+	// (ARN) for a virtual device (such as arn:aws:iam::123456789012:mfa/user ). You
+	// can find the device for an IAM user by going to the Amazon Web Services
+	// Management Console and viewing the user's security credentials.
+	//
+	// The regex used to validate this parameter is a string of characters consisting
+	// of upper- and lower-case alphanumeric characters with no spaces. You can also
+	// include underscores or any of the following characters: =,.@:/-
+	SerialNumber *string
+
+	// The value provided by the MFA device, if MFA is required. If any policy
+	// requires the IAM user to submit an MFA code, specify this value. If MFA
+	// authentication is required, the user must provide a code when requesting a set
+	// of temporary security credentials. A user who fails to provide the code receives
+	// an "access denied" response when requesting resources that require MFA
+	// authentication.
+	//
+	// The format for this parameter, as described by its regex pattern, is a sequence
+	// of six numeric digits.
+	TokenCode *string
+
+	noSmithyDocumentSerde
+}
+
+// Contains the response to a successful GetSessionToken request, including temporary Amazon Web
+// Services credentials that can be used to make Amazon Web Services requests.
+type GetSessionTokenOutput struct {
+
+	// The temporary security credentials, which include an access key ID, a secret
+	// access key, and a security (or session) token.
+	//
+	// The size of the security token that STS API operations return is not fixed. We
+	// strongly recommend that you make no assumptions about the maximum size.
+	Credentials *types.Credentials
+
+	// Metadata pertaining to the operation's result.
+	ResultMetadata middleware.Metadata
+
+	noSmithyDocumentSerde
+}
+
+func (c *Client) addOperationGetSessionTokenMiddlewares(stack *middleware.Stack, options Options) (err error) {
+	if err := stack.Serialize.Add(&setOperationInputMiddleware{}, middleware.After); err != nil {
+		return err
+	}
+	err = stack.Serialize.Add(&awsAwsquery_serializeOpGetSessionToken{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	err = stack.Deserialize.Add(&awsAwsquery_deserializeOpGetSessionToken{}, middleware.After)
+	if err != nil {
+		return err
+	}
+	if err := addProtocolFinalizerMiddlewares(stack, options, "GetSessionToken"); err != nil {
+		return fmt.Errorf("add protocol finalizers: %v", err)
+	}
+
+	if err = addlegacyEndpointContextSetter(stack, options); err != nil {
+		return err
+	}
+	if err = addSetLoggerMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addClientRequestID(stack); err != nil {
+		return err
+	}
+	if err = addComputeContentLength(stack); err != nil {
+		return err
+	}
+	if err = addResolveEndpointMiddleware(stack, options); err != nil {
+		return err
+	}
+	if err = addComputePayloadSHA256(stack); err != nil {
+		return err
+	}
+	if err = addRetry(stack, options); err != nil {
+		return err
+	}
+	if err = addRawResponseToMetadata(stack); err != nil {
+		return err
+	}
+	if err = addRecordResponseTiming(stack); err != nil {
+		return err
+	}
+	if err = addClientUserAgent(stack, options); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddErrorCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = smithyhttp.AddCloseResponseBodyMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addSetLegacyContextSigningOptionsMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addTimeOffsetBuild(stack, c); err != nil {
+		return err
+	}
+	if err = addUserAgentRetryMode(stack, options); err != nil {
+		return err
+	}
+	if err = stack.Initialize.Add(newServiceMetadataMiddleware_opGetSessionToken(options.Region), middleware.Before); err != nil {
+		return err
+	}
+	if err = addRecursionDetection(stack); err != nil {
+		return err
+	}
+	if err = addRequestIDRetrieverMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addResponseErrorMiddleware(stack); err != nil {
+		return err
+	}
+	if err = addRequestResponseLogging(stack, options); err != nil {
+		return err
+	}
+	if err = addDisableHTTPSMiddleware(stack, options); err != nil {
+		return err
+	}
+	return nil
+}
+
+func newServiceMetadataMiddleware_opGetSessionToken(region string) *awsmiddleware.RegisterServiceMetadata {
+	return &awsmiddleware.RegisterServiceMetadata{
+		Region:        region,
+		ServiceID:     ServiceID,
+		OperationName: "GetSessionToken",
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sts/auth.go 🔗

@@ -0,0 +1,296 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	smithy "github.com/aws/smithy-go"
+	smithyauth "github.com/aws/smithy-go/auth"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+)
+
+func bindAuthParamsRegion(_ interface{}, params *AuthResolverParameters, _ interface{}, options Options) {
+	params.Region = options.Region
+}
+
+type setLegacyContextSigningOptionsMiddleware struct {
+}
+
+func (*setLegacyContextSigningOptionsMiddleware) ID() string {
+	return "setLegacyContextSigningOptions"
+}
+
+func (m *setLegacyContextSigningOptionsMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	rscheme := getResolvedAuthScheme(ctx)
+	schemeID := rscheme.Scheme.SchemeID()
+
+	if sn := awsmiddleware.GetSigningName(ctx); sn != "" {
+		if schemeID == "aws.auth#sigv4" {
+			smithyhttp.SetSigV4SigningName(&rscheme.SignerProperties, sn)
+		} else if schemeID == "aws.auth#sigv4a" {
+			smithyhttp.SetSigV4ASigningName(&rscheme.SignerProperties, sn)
+		}
+	}
+
+	if sr := awsmiddleware.GetSigningRegion(ctx); sr != "" {
+		if schemeID == "aws.auth#sigv4" {
+			smithyhttp.SetSigV4SigningRegion(&rscheme.SignerProperties, sr)
+		} else if schemeID == "aws.auth#sigv4a" {
+			smithyhttp.SetSigV4ASigningRegions(&rscheme.SignerProperties, []string{sr})
+		}
+	}
+
+	return next.HandleFinalize(ctx, in)
+}
+
+func addSetLegacyContextSigningOptionsMiddleware(stack *middleware.Stack) error {
+	return stack.Finalize.Insert(&setLegacyContextSigningOptionsMiddleware{}, "Signing", middleware.Before)
+}
+
+type withAnonymous struct {
+	resolver AuthSchemeResolver
+}
+
+var _ AuthSchemeResolver = (*withAnonymous)(nil)
+
+func (v *withAnonymous) ResolveAuthSchemes(ctx context.Context, params *AuthResolverParameters) ([]*smithyauth.Option, error) {
+	opts, err := v.resolver.ResolveAuthSchemes(ctx, params)
+	if err != nil {
+		return nil, err
+	}
+
+	opts = append(opts, &smithyauth.Option{
+		SchemeID: smithyauth.SchemeIDAnonymous,
+	})
+	return opts, nil
+}
+
+func wrapWithAnonymousAuth(options *Options) {
+	if _, ok := options.AuthSchemeResolver.(*defaultAuthSchemeResolver); !ok {
+		return
+	}
+
+	options.AuthSchemeResolver = &withAnonymous{
+		resolver: options.AuthSchemeResolver,
+	}
+}
+
+// AuthResolverParameters contains the set of inputs necessary for auth scheme
+// resolution.
+type AuthResolverParameters struct {
+	// The name of the operation being invoked.
+	Operation string
+
+	// The region in which the operation is being invoked.
+	Region string
+}
+
+func bindAuthResolverParams(ctx context.Context, operation string, input interface{}, options Options) *AuthResolverParameters {
+	params := &AuthResolverParameters{
+		Operation: operation,
+	}
+
+	bindAuthParamsRegion(ctx, params, input, options)
+
+	return params
+}
+
+// AuthSchemeResolver returns a set of possible authentication options for an
+// operation.
+type AuthSchemeResolver interface {
+	ResolveAuthSchemes(context.Context, *AuthResolverParameters) ([]*smithyauth.Option, error)
+}
+
+type defaultAuthSchemeResolver struct{}
+
+var _ AuthSchemeResolver = (*defaultAuthSchemeResolver)(nil)
+
+func (*defaultAuthSchemeResolver) ResolveAuthSchemes(ctx context.Context, params *AuthResolverParameters) ([]*smithyauth.Option, error) {
+	if overrides, ok := operationAuthOptions[params.Operation]; ok {
+		return overrides(params), nil
+	}
+	return serviceAuthOptions(params), nil
+}
+
+var operationAuthOptions = map[string]func(*AuthResolverParameters) []*smithyauth.Option{
+	"AssumeRoleWithSAML": func(params *AuthResolverParameters) []*smithyauth.Option {
+		return []*smithyauth.Option{
+			{SchemeID: smithyauth.SchemeIDAnonymous},
+		}
+	},
+
+	"AssumeRoleWithWebIdentity": func(params *AuthResolverParameters) []*smithyauth.Option {
+		return []*smithyauth.Option{
+			{SchemeID: smithyauth.SchemeIDAnonymous},
+		}
+	},
+}
+
+func serviceAuthOptions(params *AuthResolverParameters) []*smithyauth.Option {
+	return []*smithyauth.Option{
+		{
+			SchemeID: smithyauth.SchemeIDSigV4,
+			SignerProperties: func() smithy.Properties {
+				var props smithy.Properties
+				smithyhttp.SetSigV4SigningName(&props, "sts")
+				smithyhttp.SetSigV4SigningRegion(&props, params.Region)
+				return props
+			}(),
+		},
+	}
+}
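The two functions above implement a lookup-with-fallback: `operationAuthOptions` pins AssumeRoleWithSAML and AssumeRoleWithWebIdentity to anonymous auth (they authenticate via the assertion/token in the request body, not SigV4), while every other operation falls back to the service-wide SigV4 default from `serviceAuthOptions`. The pattern reduces to a small sketch; scheme-ID strings are taken from the generated code, the helper `resolveScheme` is illustrative:

```go
package main

import "fmt"

// overrides holds per-operation auth schemes, mirroring operationAuthOptions:
// the SAML and WebIdentity assume-role calls are unsigned (anonymous).
var overrides = map[string]string{
	"AssumeRoleWithSAML":        "aws.auth#anonymous",
	"AssumeRoleWithWebIdentity": "aws.auth#anonymous",
}

// resolveScheme returns the per-operation override if present, otherwise
// the service default (SigV4), mirroring defaultAuthSchemeResolver.
func resolveScheme(operation string) string {
	if s, ok := overrides[operation]; ok {
		return s
	}
	return "aws.auth#sigv4" // service default
}

func main() {
	fmt.Println(resolveScheme("AssumeRoleWithWebIdentity")) // aws.auth#anonymous
	fmt.Println(resolveScheme("GetSessionToken"))           // aws.auth#sigv4
}
```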
+
+type resolveAuthSchemeMiddleware struct {
+	operation string
+	options   Options
+}
+
+func (*resolveAuthSchemeMiddleware) ID() string {
+	return "ResolveAuthScheme"
+}
+
+func (m *resolveAuthSchemeMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	params := bindAuthResolverParams(ctx, m.operation, getOperationInput(ctx), m.options)
+	options, err := m.options.AuthSchemeResolver.ResolveAuthSchemes(ctx, params)
+	if err != nil {
+		return out, metadata, fmt.Errorf("resolve auth scheme: %w", err)
+	}
+
+	scheme, ok := m.selectScheme(options)
+	if !ok {
+		return out, metadata, fmt.Errorf("could not select an auth scheme")
+	}
+
+	ctx = setResolvedAuthScheme(ctx, scheme)
+	return next.HandleFinalize(ctx, in)
+}
+
+func (m *resolveAuthSchemeMiddleware) selectScheme(options []*smithyauth.Option) (*resolvedAuthScheme, bool) {
+	for _, option := range options {
+		if option.SchemeID == smithyauth.SchemeIDAnonymous {
+			return newResolvedAuthScheme(smithyhttp.NewAnonymousScheme(), option), true
+		}
+
+		for _, scheme := range m.options.AuthSchemes {
+			if scheme.SchemeID() != option.SchemeID {
+				continue
+			}
+
+			if scheme.IdentityResolver(m.options) != nil {
+				return newResolvedAuthScheme(scheme, option), true
+			}
+		}
+	}
+
+	return nil, false
+}
+
+type resolvedAuthSchemeKey struct{}
+
+type resolvedAuthScheme struct {
+	Scheme             smithyhttp.AuthScheme
+	IdentityProperties smithy.Properties
+	SignerProperties   smithy.Properties
+}
+
+func newResolvedAuthScheme(scheme smithyhttp.AuthScheme, option *smithyauth.Option) *resolvedAuthScheme {
+	return &resolvedAuthScheme{
+		Scheme:             scheme,
+		IdentityProperties: option.IdentityProperties,
+		SignerProperties:   option.SignerProperties,
+	}
+}
+
+func setResolvedAuthScheme(ctx context.Context, scheme *resolvedAuthScheme) context.Context {
+	return middleware.WithStackValue(ctx, resolvedAuthSchemeKey{}, scheme)
+}
+
+func getResolvedAuthScheme(ctx context.Context) *resolvedAuthScheme {
+	v, _ := middleware.GetStackValue(ctx, resolvedAuthSchemeKey{}).(*resolvedAuthScheme)
+	return v
+}
+
+type getIdentityMiddleware struct {
+	options Options
+}
+
+func (*getIdentityMiddleware) ID() string {
+	return "GetIdentity"
+}
+
+func (m *getIdentityMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	rscheme := getResolvedAuthScheme(ctx)
+	if rscheme == nil {
+		return out, metadata, fmt.Errorf("no resolved auth scheme")
+	}
+
+	resolver := rscheme.Scheme.IdentityResolver(m.options)
+	if resolver == nil {
+		return out, metadata, fmt.Errorf("no identity resolver")
+	}
+
+	identity, err := resolver.GetIdentity(ctx, rscheme.IdentityProperties)
+	if err != nil {
+		return out, metadata, fmt.Errorf("get identity: %w", err)
+	}
+
+	ctx = setIdentity(ctx, identity)
+	return next.HandleFinalize(ctx, in)
+}
+
+type identityKey struct{}
+
+func setIdentity(ctx context.Context, identity smithyauth.Identity) context.Context {
+	return middleware.WithStackValue(ctx, identityKey{}, identity)
+}
+
+func getIdentity(ctx context.Context) smithyauth.Identity {
+	v, _ := middleware.GetStackValue(ctx, identityKey{}).(smithyauth.Identity)
+	return v
+}
+
+type signRequestMiddleware struct {
+}
+
+func (*signRequestMiddleware) ID() string {
+	return "Signing"
+}
+
+func (m *signRequestMiddleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unexpected transport type %T", in.Request)
+	}
+
+	rscheme := getResolvedAuthScheme(ctx)
+	if rscheme == nil {
+		return out, metadata, fmt.Errorf("no resolved auth scheme")
+	}
+
+	identity := getIdentity(ctx)
+	if identity == nil {
+		return out, metadata, fmt.Errorf("no identity")
+	}
+
+	signer := rscheme.Scheme.Signer()
+	if signer == nil {
+		return out, metadata, fmt.Errorf("no signer")
+	}
+
+	if err := signer.SignRequest(ctx, req, identity, rscheme.SignerProperties); err != nil {
+		return out, metadata, fmt.Errorf("sign request: %w", err)
+	}
+
+	return next.HandleFinalize(ctx, in)
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sts/deserializers.go

@@ -0,0 +1,2516 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"bytes"
+	"context"
+	"encoding/xml"
+	"fmt"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	awsxml "github.com/aws/aws-sdk-go-v2/aws/protocol/xml"
+	"github.com/aws/aws-sdk-go-v2/service/sts/types"
+	smithy "github.com/aws/smithy-go"
+	smithyxml "github.com/aws/smithy-go/encoding/xml"
+	smithyio "github.com/aws/smithy-go/io"
+	"github.com/aws/smithy-go/middleware"
+	"github.com/aws/smithy-go/ptr"
+	smithytime "github.com/aws/smithy-go/time"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"io"
+	"strconv"
+	"strings"
+	"time"
+)
+
+func deserializeS3Expires(v string) (*time.Time, error) {
+	t, err := smithytime.ParseHTTPDate(v)
+	if err != nil {
+		return nil, nil
+	}
+	return &t, nil
+}
+
+type awsAwsquery_deserializeOpAssumeRole struct {
+}
+
+func (*awsAwsquery_deserializeOpAssumeRole) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsAwsquery_deserializeOpAssumeRole) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsAwsquery_deserializeOpErrorAssumeRole(response, &metadata)
+	}
+	output := &AssumeRoleOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(response.Body, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return out, metadata, nil
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("AssumeRoleResult")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeOpDocumentAssumeRoleOutput(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	return out, metadata, err
+}
+
+func awsAwsquery_deserializeOpErrorAssumeRole(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	errorComponents, err := awsxml.GetErrorResponseComponents(errorBody, false)
+	if err != nil {
+		return err
+	}
+	if reqID := errorComponents.RequestID; len(reqID) != 0 {
+		awsmiddleware.SetRequestIDMetadata(metadata, reqID)
+	}
+	if len(errorComponents.Code) != 0 {
+		errorCode = errorComponents.Code
+	}
+	if len(errorComponents.Message) != 0 {
+		errorMessage = errorComponents.Message
+	}
+	errorBody.Seek(0, io.SeekStart)
+	switch {
+	case strings.EqualFold("ExpiredTokenException", errorCode):
+		return awsAwsquery_deserializeErrorExpiredTokenException(response, errorBody)
+
+	case strings.EqualFold("MalformedPolicyDocument", errorCode):
+		return awsAwsquery_deserializeErrorMalformedPolicyDocumentException(response, errorBody)
+
+	case strings.EqualFold("PackedPolicyTooLarge", errorCode):
+		return awsAwsquery_deserializeErrorPackedPolicyTooLargeException(response, errorBody)
+
+	case strings.EqualFold("RegionDisabledException", errorCode):
+		return awsAwsquery_deserializeErrorRegionDisabledException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+type awsAwsquery_deserializeOpAssumeRoleWithSAML struct {
+}
+
+func (*awsAwsquery_deserializeOpAssumeRoleWithSAML) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsAwsquery_deserializeOpAssumeRoleWithSAML) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsAwsquery_deserializeOpErrorAssumeRoleWithSAML(response, &metadata)
+	}
+	output := &AssumeRoleWithSAMLOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(response.Body, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return out, metadata, nil
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("AssumeRoleWithSAMLResult")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeOpDocumentAssumeRoleWithSAMLOutput(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	return out, metadata, err
+}
+
+func awsAwsquery_deserializeOpErrorAssumeRoleWithSAML(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	errorComponents, err := awsxml.GetErrorResponseComponents(errorBody, false)
+	if err != nil {
+		return err
+	}
+	if reqID := errorComponents.RequestID; len(reqID) != 0 {
+		awsmiddleware.SetRequestIDMetadata(metadata, reqID)
+	}
+	if len(errorComponents.Code) != 0 {
+		errorCode = errorComponents.Code
+	}
+	if len(errorComponents.Message) != 0 {
+		errorMessage = errorComponents.Message
+	}
+	errorBody.Seek(0, io.SeekStart)
+	switch {
+	case strings.EqualFold("ExpiredTokenException", errorCode):
+		return awsAwsquery_deserializeErrorExpiredTokenException(response, errorBody)
+
+	case strings.EqualFold("IDPRejectedClaim", errorCode):
+		return awsAwsquery_deserializeErrorIDPRejectedClaimException(response, errorBody)
+
+	case strings.EqualFold("InvalidIdentityToken", errorCode):
+		return awsAwsquery_deserializeErrorInvalidIdentityTokenException(response, errorBody)
+
+	case strings.EqualFold("MalformedPolicyDocument", errorCode):
+		return awsAwsquery_deserializeErrorMalformedPolicyDocumentException(response, errorBody)
+
+	case strings.EqualFold("PackedPolicyTooLarge", errorCode):
+		return awsAwsquery_deserializeErrorPackedPolicyTooLargeException(response, errorBody)
+
+	case strings.EqualFold("RegionDisabledException", errorCode):
+		return awsAwsquery_deserializeErrorRegionDisabledException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+type awsAwsquery_deserializeOpAssumeRoleWithWebIdentity struct {
+}
+
+func (*awsAwsquery_deserializeOpAssumeRoleWithWebIdentity) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsAwsquery_deserializeOpAssumeRoleWithWebIdentity) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsAwsquery_deserializeOpErrorAssumeRoleWithWebIdentity(response, &metadata)
+	}
+	output := &AssumeRoleWithWebIdentityOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(response.Body, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return out, metadata, nil
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("AssumeRoleWithWebIdentityResult")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeOpDocumentAssumeRoleWithWebIdentityOutput(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	return out, metadata, err
+}
+
+func awsAwsquery_deserializeOpErrorAssumeRoleWithWebIdentity(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	errorComponents, err := awsxml.GetErrorResponseComponents(errorBody, false)
+	if err != nil {
+		return err
+	}
+	if reqID := errorComponents.RequestID; len(reqID) != 0 {
+		awsmiddleware.SetRequestIDMetadata(metadata, reqID)
+	}
+	if len(errorComponents.Code) != 0 {
+		errorCode = errorComponents.Code
+	}
+	if len(errorComponents.Message) != 0 {
+		errorMessage = errorComponents.Message
+	}
+	errorBody.Seek(0, io.SeekStart)
+	switch {
+	case strings.EqualFold("ExpiredTokenException", errorCode):
+		return awsAwsquery_deserializeErrorExpiredTokenException(response, errorBody)
+
+	case strings.EqualFold("IDPCommunicationError", errorCode):
+		return awsAwsquery_deserializeErrorIDPCommunicationErrorException(response, errorBody)
+
+	case strings.EqualFold("IDPRejectedClaim", errorCode):
+		return awsAwsquery_deserializeErrorIDPRejectedClaimException(response, errorBody)
+
+	case strings.EqualFold("InvalidIdentityToken", errorCode):
+		return awsAwsquery_deserializeErrorInvalidIdentityTokenException(response, errorBody)
+
+	case strings.EqualFold("MalformedPolicyDocument", errorCode):
+		return awsAwsquery_deserializeErrorMalformedPolicyDocumentException(response, errorBody)
+
+	case strings.EqualFold("PackedPolicyTooLarge", errorCode):
+		return awsAwsquery_deserializeErrorPackedPolicyTooLargeException(response, errorBody)
+
+	case strings.EqualFold("RegionDisabledException", errorCode):
+		return awsAwsquery_deserializeErrorRegionDisabledException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+type awsAwsquery_deserializeOpDecodeAuthorizationMessage struct {
+}
+
+func (*awsAwsquery_deserializeOpDecodeAuthorizationMessage) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsAwsquery_deserializeOpDecodeAuthorizationMessage) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsAwsquery_deserializeOpErrorDecodeAuthorizationMessage(response, &metadata)
+	}
+	output := &DecodeAuthorizationMessageOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(response.Body, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return out, metadata, nil
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("DecodeAuthorizationMessageResult")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeOpDocumentDecodeAuthorizationMessageOutput(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	return out, metadata, err
+}
+
+func awsAwsquery_deserializeOpErrorDecodeAuthorizationMessage(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	errorComponents, err := awsxml.GetErrorResponseComponents(errorBody, false)
+	if err != nil {
+		return err
+	}
+	if reqID := errorComponents.RequestID; len(reqID) != 0 {
+		awsmiddleware.SetRequestIDMetadata(metadata, reqID)
+	}
+	if len(errorComponents.Code) != 0 {
+		errorCode = errorComponents.Code
+	}
+	if len(errorComponents.Message) != 0 {
+		errorMessage = errorComponents.Message
+	}
+	errorBody.Seek(0, io.SeekStart)
+	switch {
+	case strings.EqualFold("InvalidAuthorizationMessageException", errorCode):
+		return awsAwsquery_deserializeErrorInvalidAuthorizationMessageException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+type awsAwsquery_deserializeOpGetAccessKeyInfo struct {
+}
+
+func (*awsAwsquery_deserializeOpGetAccessKeyInfo) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsAwsquery_deserializeOpGetAccessKeyInfo) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsAwsquery_deserializeOpErrorGetAccessKeyInfo(response, &metadata)
+	}
+	output := &GetAccessKeyInfoOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(response.Body, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return out, metadata, nil
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("GetAccessKeyInfoResult")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeOpDocumentGetAccessKeyInfoOutput(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	return out, metadata, err
+}
+
+func awsAwsquery_deserializeOpErrorGetAccessKeyInfo(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	errorComponents, err := awsxml.GetErrorResponseComponents(errorBody, false)
+	if err != nil {
+		return err
+	}
+	if reqID := errorComponents.RequestID; len(reqID) != 0 {
+		awsmiddleware.SetRequestIDMetadata(metadata, reqID)
+	}
+	if len(errorComponents.Code) != 0 {
+		errorCode = errorComponents.Code
+	}
+	if len(errorComponents.Message) != 0 {
+		errorMessage = errorComponents.Message
+	}
+	errorBody.Seek(0, io.SeekStart)
+	switch {
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+type awsAwsquery_deserializeOpGetCallerIdentity struct {
+}
+
+func (*awsAwsquery_deserializeOpGetCallerIdentity) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsAwsquery_deserializeOpGetCallerIdentity) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsAwsquery_deserializeOpErrorGetCallerIdentity(response, &metadata)
+	}
+	output := &GetCallerIdentityOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(response.Body, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return out, metadata, nil
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("GetCallerIdentityResult")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeOpDocumentGetCallerIdentityOutput(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	return out, metadata, err
+}
+
+func awsAwsquery_deserializeOpErrorGetCallerIdentity(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	errorComponents, err := awsxml.GetErrorResponseComponents(errorBody, false)
+	if err != nil {
+		return err
+	}
+	if reqID := errorComponents.RequestID; len(reqID) != 0 {
+		awsmiddleware.SetRequestIDMetadata(metadata, reqID)
+	}
+	if len(errorComponents.Code) != 0 {
+		errorCode = errorComponents.Code
+	}
+	if len(errorComponents.Message) != 0 {
+		errorMessage = errorComponents.Message
+	}
+	errorBody.Seek(0, io.SeekStart)
+	switch {
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+type awsAwsquery_deserializeOpGetFederationToken struct {
+}
+
+func (*awsAwsquery_deserializeOpGetFederationToken) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsAwsquery_deserializeOpGetFederationToken) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsAwsquery_deserializeOpErrorGetFederationToken(response, &metadata)
+	}
+	output := &GetFederationTokenOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(response.Body, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return out, metadata, nil
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("GetFederationTokenResult")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeOpDocumentGetFederationTokenOutput(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	return out, metadata, err
+}
+
+func awsAwsquery_deserializeOpErrorGetFederationToken(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	errorComponents, err := awsxml.GetErrorResponseComponents(errorBody, false)
+	if err != nil {
+		return err
+	}
+	if reqID := errorComponents.RequestID; len(reqID) != 0 {
+		awsmiddleware.SetRequestIDMetadata(metadata, reqID)
+	}
+	if len(errorComponents.Code) != 0 {
+		errorCode = errorComponents.Code
+	}
+	if len(errorComponents.Message) != 0 {
+		errorMessage = errorComponents.Message
+	}
+	errorBody.Seek(0, io.SeekStart)
+	switch {
+	case strings.EqualFold("MalformedPolicyDocument", errorCode):
+		return awsAwsquery_deserializeErrorMalformedPolicyDocumentException(response, errorBody)
+
+	case strings.EqualFold("PackedPolicyTooLarge", errorCode):
+		return awsAwsquery_deserializeErrorPackedPolicyTooLargeException(response, errorBody)
+
+	case strings.EqualFold("RegionDisabledException", errorCode):
+		return awsAwsquery_deserializeErrorRegionDisabledException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+type awsAwsquery_deserializeOpGetSessionToken struct {
+}
+
+func (*awsAwsquery_deserializeOpGetSessionToken) ID() string {
+	return "OperationDeserializer"
+}
+
+func (m *awsAwsquery_deserializeOpGetSessionToken) HandleDeserialize(ctx context.Context, in middleware.DeserializeInput, next middleware.DeserializeHandler) (
+	out middleware.DeserializeOutput, metadata middleware.Metadata, err error,
+) {
+	out, metadata, err = next.HandleDeserialize(ctx, in)
+	if err != nil {
+		return out, metadata, err
+	}
+
+	response, ok := out.RawResponse.(*smithyhttp.Response)
+	if !ok {
+		return out, metadata, &smithy.DeserializationError{Err: fmt.Errorf("unknown transport type %T", out.RawResponse)}
+	}
+
+	if response.StatusCode < 200 || response.StatusCode >= 300 {
+		return out, metadata, awsAwsquery_deserializeOpErrorGetSessionToken(response, &metadata)
+	}
+	output := &GetSessionTokenOutput{}
+	out.Result = output
+
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(response.Body, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return out, metadata, nil
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return out, metadata, &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("GetSessionTokenResult")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeOpDocumentGetSessionTokenOutput(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		err = &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+		return out, metadata, err
+	}
+
+	return out, metadata, err
+}
+
+func awsAwsquery_deserializeOpErrorGetSessionToken(response *smithyhttp.Response, metadata *middleware.Metadata) error {
+	var errorBuffer bytes.Buffer
+	if _, err := io.Copy(&errorBuffer, response.Body); err != nil {
+		return &smithy.DeserializationError{Err: fmt.Errorf("failed to copy error response body, %w", err)}
+	}
+	errorBody := bytes.NewReader(errorBuffer.Bytes())
+
+	errorCode := "UnknownError"
+	errorMessage := errorCode
+
+	errorComponents, err := awsxml.GetErrorResponseComponents(errorBody, false)
+	if err != nil {
+		return err
+	}
+	if reqID := errorComponents.RequestID; len(reqID) != 0 {
+		awsmiddleware.SetRequestIDMetadata(metadata, reqID)
+	}
+	if len(errorComponents.Code) != 0 {
+		errorCode = errorComponents.Code
+	}
+	if len(errorComponents.Message) != 0 {
+		errorMessage = errorComponents.Message
+	}
+	errorBody.Seek(0, io.SeekStart)
+	switch {
+	case strings.EqualFold("RegionDisabledException", errorCode):
+		return awsAwsquery_deserializeErrorRegionDisabledException(response, errorBody)
+
+	default:
+		genericError := &smithy.GenericAPIError{
+			Code:    errorCode,
+			Message: errorMessage,
+		}
+		return genericError
+
+	}
+}
+
+func awsAwsquery_deserializeErrorExpiredTokenException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.ExpiredTokenException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(errorBody, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return output
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("Error")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeDocumentExpiredTokenException(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return output
+}
+
+func awsAwsquery_deserializeErrorIDPCommunicationErrorException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.IDPCommunicationErrorException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(errorBody, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return output
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("Error")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeDocumentIDPCommunicationErrorException(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return output
+}
+
+func awsAwsquery_deserializeErrorIDPRejectedClaimException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.IDPRejectedClaimException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(errorBody, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return output
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("Error")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeDocumentIDPRejectedClaimException(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return output
+}
+
+func awsAwsquery_deserializeErrorInvalidAuthorizationMessageException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.InvalidAuthorizationMessageException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(errorBody, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return output
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("Error")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeDocumentInvalidAuthorizationMessageException(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return output
+}
+
+func awsAwsquery_deserializeErrorInvalidIdentityTokenException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.InvalidIdentityTokenException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(errorBody, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return output
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("Error")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeDocumentInvalidIdentityTokenException(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return output
+}
+
+func awsAwsquery_deserializeErrorMalformedPolicyDocumentException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.MalformedPolicyDocumentException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(errorBody, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return output
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("Error")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeDocumentMalformedPolicyDocumentException(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return output
+}
+
+func awsAwsquery_deserializeErrorPackedPolicyTooLargeException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.PackedPolicyTooLargeException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(errorBody, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return output
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("Error")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeDocumentPackedPolicyTooLargeException(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return output
+}
+
+func awsAwsquery_deserializeErrorRegionDisabledException(response *smithyhttp.Response, errorBody *bytes.Reader) error {
+	output := &types.RegionDisabledException{}
+	var buff [1024]byte
+	ringBuffer := smithyio.NewRingBuffer(buff[:])
+	body := io.TeeReader(errorBody, ringBuffer)
+	rootDecoder := xml.NewDecoder(body)
+	t, err := smithyxml.FetchRootElement(rootDecoder)
+	if err == io.EOF {
+		return output
+	}
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder := smithyxml.WrapNodeDecoder(rootDecoder, t)
+	t, err = decoder.GetElement("Error")
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	decoder = smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+	err = awsAwsquery_deserializeDocumentRegionDisabledException(&output, decoder)
+	if err != nil {
+		var snapshot bytes.Buffer
+		io.Copy(&snapshot, ringBuffer)
+		return &smithy.DeserializationError{
+			Err:      fmt.Errorf("failed to decode response body, %w", err),
+			Snapshot: snapshot.Bytes(),
+		}
+	}
+
+	return output
+}
+
+func awsAwsquery_deserializeDocumentAssumedRoleUser(v **types.AssumedRoleUser, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *types.AssumedRoleUser
+	if *v == nil {
+		sv = &types.AssumedRoleUser{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("Arn", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Arn = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("AssumedRoleId", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.AssumedRoleId = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeDocumentCredentials(v **types.Credentials, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *types.Credentials
+	if *v == nil {
+		sv = &types.Credentials{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("AccessKeyId", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.AccessKeyId = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("Expiration", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				t, err := smithytime.ParseDateTime(xtv)
+				if err != nil {
+					return err
+				}
+				sv.Expiration = ptr.Time(t)
+			}
+
+		case strings.EqualFold("SecretAccessKey", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.SecretAccessKey = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("SessionToken", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.SessionToken = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeDocumentExpiredTokenException(v **types.ExpiredTokenException, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *types.ExpiredTokenException
+	if *v == nil {
+		sv = &types.ExpiredTokenException{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("message", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Message = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeDocumentFederatedUser(v **types.FederatedUser, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *types.FederatedUser
+	if *v == nil {
+		sv = &types.FederatedUser{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("Arn", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Arn = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("FederatedUserId", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.FederatedUserId = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeDocumentIDPCommunicationErrorException(v **types.IDPCommunicationErrorException, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *types.IDPCommunicationErrorException
+	if *v == nil {
+		sv = &types.IDPCommunicationErrorException{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("message", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Message = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeDocumentIDPRejectedClaimException(v **types.IDPRejectedClaimException, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *types.IDPRejectedClaimException
+	if *v == nil {
+		sv = &types.IDPRejectedClaimException{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("message", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Message = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeDocumentInvalidAuthorizationMessageException(v **types.InvalidAuthorizationMessageException, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *types.InvalidAuthorizationMessageException
+	if *v == nil {
+		sv = &types.InvalidAuthorizationMessageException{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("message", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Message = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeDocumentInvalidIdentityTokenException(v **types.InvalidIdentityTokenException, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *types.InvalidIdentityTokenException
+	if *v == nil {
+		sv = &types.InvalidIdentityTokenException{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("message", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Message = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeDocumentMalformedPolicyDocumentException(v **types.MalformedPolicyDocumentException, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *types.MalformedPolicyDocumentException
+	if *v == nil {
+		sv = &types.MalformedPolicyDocumentException{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("message", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Message = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeDocumentPackedPolicyTooLargeException(v **types.PackedPolicyTooLargeException, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *types.PackedPolicyTooLargeException
+	if *v == nil {
+		sv = &types.PackedPolicyTooLargeException{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("message", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Message = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeDocumentRegionDisabledException(v **types.RegionDisabledException, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *types.RegionDisabledException
+	if *v == nil {
+		sv = &types.RegionDisabledException{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("message", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Message = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeOpDocumentAssumeRoleOutput(v **AssumeRoleOutput, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *AssumeRoleOutput
+	if *v == nil {
+		sv = &AssumeRoleOutput{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("AssumedRoleUser", t.Name.Local):
+			nodeDecoder := smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+			if err := awsAwsquery_deserializeDocumentAssumedRoleUser(&sv.AssumedRoleUser, nodeDecoder); err != nil {
+				return err
+			}
+
+		case strings.EqualFold("Credentials", t.Name.Local):
+			nodeDecoder := smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+			if err := awsAwsquery_deserializeDocumentCredentials(&sv.Credentials, nodeDecoder); err != nil {
+				return err
+			}
+
+		case strings.EqualFold("PackedPolicySize", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				i64, err := strconv.ParseInt(xtv, 10, 64)
+				if err != nil {
+					return err
+				}
+				sv.PackedPolicySize = ptr.Int32(int32(i64))
+			}
+
+		case strings.EqualFold("SourceIdentity", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.SourceIdentity = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeOpDocumentAssumeRoleWithSAMLOutput(v **AssumeRoleWithSAMLOutput, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *AssumeRoleWithSAMLOutput
+	if *v == nil {
+		sv = &AssumeRoleWithSAMLOutput{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("AssumedRoleUser", t.Name.Local):
+			nodeDecoder := smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+			if err := awsAwsquery_deserializeDocumentAssumedRoleUser(&sv.AssumedRoleUser, nodeDecoder); err != nil {
+				return err
+			}
+
+		case strings.EqualFold("Audience", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Audience = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("Credentials", t.Name.Local):
+			nodeDecoder := smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+			if err := awsAwsquery_deserializeDocumentCredentials(&sv.Credentials, nodeDecoder); err != nil {
+				return err
+			}
+
+		case strings.EqualFold("Issuer", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Issuer = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("NameQualifier", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.NameQualifier = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("PackedPolicySize", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				i64, err := strconv.ParseInt(xtv, 10, 64)
+				if err != nil {
+					return err
+				}
+				sv.PackedPolicySize = ptr.Int32(int32(i64))
+			}
+
+		case strings.EqualFold("SourceIdentity", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.SourceIdentity = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("Subject", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Subject = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("SubjectType", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.SubjectType = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeOpDocumentAssumeRoleWithWebIdentityOutput(v **AssumeRoleWithWebIdentityOutput, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *AssumeRoleWithWebIdentityOutput
+	if *v == nil {
+		sv = &AssumeRoleWithWebIdentityOutput{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("AssumedRoleUser", t.Name.Local):
+			nodeDecoder := smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+			if err := awsAwsquery_deserializeDocumentAssumedRoleUser(&sv.AssumedRoleUser, nodeDecoder); err != nil {
+				return err
+			}
+
+		case strings.EqualFold("Audience", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Audience = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("Credentials", t.Name.Local):
+			nodeDecoder := smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+			if err := awsAwsquery_deserializeDocumentCredentials(&sv.Credentials, nodeDecoder); err != nil {
+				return err
+			}
+
+		case strings.EqualFold("PackedPolicySize", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				i64, err := strconv.ParseInt(xtv, 10, 64)
+				if err != nil {
+					return err
+				}
+				sv.PackedPolicySize = ptr.Int32(int32(i64))
+			}
+
+		case strings.EqualFold("Provider", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Provider = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("SourceIdentity", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.SourceIdentity = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("SubjectFromWebIdentityToken", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.SubjectFromWebIdentityToken = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeOpDocumentDecodeAuthorizationMessageOutput(v **DecodeAuthorizationMessageOutput, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *DecodeAuthorizationMessageOutput
+	if *v == nil {
+		sv = &DecodeAuthorizationMessageOutput{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("DecodedMessage", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.DecodedMessage = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeOpDocumentGetAccessKeyInfoOutput(v **GetAccessKeyInfoOutput, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *GetAccessKeyInfoOutput
+	if *v == nil {
+		sv = &GetAccessKeyInfoOutput{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("Account", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Account = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeOpDocumentGetCallerIdentityOutput(v **GetCallerIdentityOutput, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *GetCallerIdentityOutput
+	if *v == nil {
+		sv = &GetCallerIdentityOutput{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("Account", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Account = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("Arn", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.Arn = ptr.String(xtv)
+			}
+
+		case strings.EqualFold("UserId", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				sv.UserId = ptr.String(xtv)
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeOpDocumentGetFederationTokenOutput(v **GetFederationTokenOutput, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *GetFederationTokenOutput
+	if *v == nil {
+		sv = &GetFederationTokenOutput{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("Credentials", t.Name.Local):
+			nodeDecoder := smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+			if err := awsAwsquery_deserializeDocumentCredentials(&sv.Credentials, nodeDecoder); err != nil {
+				return err
+			}
+
+		case strings.EqualFold("FederatedUser", t.Name.Local):
+			nodeDecoder := smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+			if err := awsAwsquery_deserializeDocumentFederatedUser(&sv.FederatedUser, nodeDecoder); err != nil {
+				return err
+			}
+
+		case strings.EqualFold("PackedPolicySize", t.Name.Local):
+			val, err := decoder.Value()
+			if err != nil {
+				return err
+			}
+			if val == nil {
+				break
+			}
+			{
+				xtv := string(val)
+				i64, err := strconv.ParseInt(xtv, 10, 64)
+				if err != nil {
+					return err
+				}
+				sv.PackedPolicySize = ptr.Int32(int32(i64))
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}
+
+func awsAwsquery_deserializeOpDocumentGetSessionTokenOutput(v **GetSessionTokenOutput, decoder smithyxml.NodeDecoder) error {
+	if v == nil {
+		return fmt.Errorf("unexpected nil of type %T", v)
+	}
+	var sv *GetSessionTokenOutput
+	if *v == nil {
+		sv = &GetSessionTokenOutput{}
+	} else {
+		sv = *v
+	}
+
+	for {
+		t, done, err := decoder.Token()
+		if err != nil {
+			return err
+		}
+		if done {
+			break
+		}
+		originalDecoder := decoder
+		decoder = smithyxml.WrapNodeDecoder(originalDecoder.Decoder, t)
+		switch {
+		case strings.EqualFold("Credentials", t.Name.Local):
+			nodeDecoder := smithyxml.WrapNodeDecoder(decoder.Decoder, t)
+			if err := awsAwsquery_deserializeDocumentCredentials(&sv.Credentials, nodeDecoder); err != nil {
+				return err
+			}
+
+		default:
+			// Do nothing and ignore the unexpected tag element
+			err = decoder.Decoder.Skip()
+			if err != nil {
+				return err
+			}
+
+		}
+		decoder = originalDecoder
+	}
+	*v = sv
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sts/doc.go 🔗

@@ -0,0 +1,13 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+// Package sts provides the API client, operations, and parameter types for AWS
+// Security Token Service.
+//
+// # Security Token Service
+//
+// Security Token Service (STS) enables you to request temporary,
+// limited-privilege credentials for users. This guide provides descriptions of the
+// STS API. For more information about using this service, see [Temporary Security Credentials].
+//
+// [Temporary Security Credentials]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
+package sts

vendor/github.com/aws/aws-sdk-go-v2/service/sts/endpoints.go 🔗

@@ -0,0 +1,1130 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	internalConfig "github.com/aws/aws-sdk-go-v2/internal/configsources"
+	"github.com/aws/aws-sdk-go-v2/internal/endpoints"
+	"github.com/aws/aws-sdk-go-v2/internal/endpoints/awsrulesfn"
+	internalendpoints "github.com/aws/aws-sdk-go-v2/service/sts/internal/endpoints"
+	smithy "github.com/aws/smithy-go"
+	smithyauth "github.com/aws/smithy-go/auth"
+	smithyendpoints "github.com/aws/smithy-go/endpoints"
+	"github.com/aws/smithy-go/middleware"
+	"github.com/aws/smithy-go/ptr"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"net/http"
+	"net/url"
+	"os"
+	"strings"
+)
+
+// EndpointResolverOptions is the service endpoint resolver options
+type EndpointResolverOptions = internalendpoints.Options
+
+// EndpointResolver interface for resolving service endpoints.
+type EndpointResolver interface {
+	ResolveEndpoint(region string, options EndpointResolverOptions) (aws.Endpoint, error)
+}
+
+var _ EndpointResolver = &internalendpoints.Resolver{}
+
+// NewDefaultEndpointResolver constructs a new service endpoint resolver
+func NewDefaultEndpointResolver() *internalendpoints.Resolver {
+	return internalendpoints.New()
+}
+
+// EndpointResolverFunc is a helper utility that wraps a function so it satisfies
+// the EndpointResolver interface. This is useful when you want to add additional
+// endpoint resolving logic, or stub out specific endpoints with custom values.
+type EndpointResolverFunc func(region string, options EndpointResolverOptions) (aws.Endpoint, error)
+
+func (fn EndpointResolverFunc) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error) {
+	return fn(region, options)
+}
+
+// EndpointResolverFromURL returns an EndpointResolver configured using the
+// provided endpoint url. By default, the resolved endpoint resolver uses the
+// client region as signing region, and the endpoint source is set to
+// EndpointSourceCustom. You can provide functional options to configure endpoint
+// values for the resolved endpoint.
+func EndpointResolverFromURL(url string, optFns ...func(*aws.Endpoint)) EndpointResolver {
+	e := aws.Endpoint{URL: url, Source: aws.EndpointSourceCustom}
+	for _, fn := range optFns {
+		fn(&e)
+	}
+
+	return EndpointResolverFunc(
+		func(region string, options EndpointResolverOptions) (aws.Endpoint, error) {
+			if len(e.SigningRegion) == 0 {
+				e.SigningRegion = region
+			}
+			return e, nil
+		},
+	)
+}
+
+type ResolveEndpoint struct {
+	Resolver EndpointResolver
+	Options  EndpointResolverOptions
+}
+
+func (*ResolveEndpoint) ID() string {
+	return "ResolveEndpoint"
+}
+
+func (m *ResolveEndpoint) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	if !awsmiddleware.GetRequiresLegacyEndpoints(ctx) {
+		return next.HandleSerialize(ctx, in)
+	}
+
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	if m.Resolver == nil {
+		return out, metadata, fmt.Errorf("expected endpoint resolver to not be nil")
+	}
+
+	eo := m.Options
+	eo.Logger = middleware.GetLogger(ctx)
+
+	var endpoint aws.Endpoint
+	endpoint, err = m.Resolver.ResolveEndpoint(awsmiddleware.GetRegion(ctx), eo)
+	if err != nil {
+		nf := (&aws.EndpointNotFoundError{})
+		if errors.As(err, &nf) {
+			ctx = awsmiddleware.SetRequiresLegacyEndpoints(ctx, false)
+			return next.HandleSerialize(ctx, in)
+		}
+		return out, metadata, fmt.Errorf("failed to resolve service endpoint, %w", err)
+	}
+
+	req.URL, err = url.Parse(endpoint.URL)
+	if err != nil {
+		return out, metadata, fmt.Errorf("failed to parse endpoint URL: %w", err)
+	}
+
+	if len(awsmiddleware.GetSigningName(ctx)) == 0 {
+		signingName := endpoint.SigningName
+		if len(signingName) == 0 {
+			signingName = "sts"
+		}
+		ctx = awsmiddleware.SetSigningName(ctx, signingName)
+	}
+	ctx = awsmiddleware.SetEndpointSource(ctx, endpoint.Source)
+	ctx = smithyhttp.SetHostnameImmutable(ctx, endpoint.HostnameImmutable)
+	ctx = awsmiddleware.SetSigningRegion(ctx, endpoint.SigningRegion)
+	ctx = awsmiddleware.SetPartitionID(ctx, endpoint.PartitionID)
+	return next.HandleSerialize(ctx, in)
+}
+func addResolveEndpointMiddleware(stack *middleware.Stack, o Options) error {
+	return stack.Serialize.Insert(&ResolveEndpoint{
+		Resolver: o.EndpointResolver,
+		Options:  o.EndpointOptions,
+	}, "OperationSerializer", middleware.Before)
+}
+
+func removeResolveEndpointMiddleware(stack *middleware.Stack) error {
+	_, err := stack.Serialize.Remove((&ResolveEndpoint{}).ID())
+	return err
+}
+
+type wrappedEndpointResolver struct {
+	awsResolver aws.EndpointResolverWithOptions
+}
+
+func (w *wrappedEndpointResolver) ResolveEndpoint(region string, options EndpointResolverOptions) (endpoint aws.Endpoint, err error) {
+	return w.awsResolver.ResolveEndpoint(ServiceID, region, options)
+}
+
+type awsEndpointResolverAdaptor func(service, region string) (aws.Endpoint, error)
+
+func (a awsEndpointResolverAdaptor) ResolveEndpoint(service, region string, options ...interface{}) (aws.Endpoint, error) {
+	return a(service, region)
+}
+
+var _ aws.EndpointResolverWithOptions = awsEndpointResolverAdaptor(nil)
+
+// withEndpointResolver returns an aws.EndpointResolverWithOptions that first delegates endpoint resolution to the awsResolver.
+// If awsResolver returns aws.EndpointNotFoundError error, the v1 resolver middleware will swallow the error,
+// and set an appropriate context flag such that fallback will occur when EndpointResolverV2 is invoked
+// via its middleware.
+//
+// If another error (besides aws.EndpointNotFoundError) is returned, then that error will be propagated.
+func withEndpointResolver(awsResolver aws.EndpointResolver, awsResolverWithOptions aws.EndpointResolverWithOptions) EndpointResolver {
+	var resolver aws.EndpointResolverWithOptions
+
+	if awsResolverWithOptions != nil {
+		resolver = awsResolverWithOptions
+	} else if awsResolver != nil {
+		resolver = awsEndpointResolverAdaptor(awsResolver.ResolveEndpoint)
+	}
+
+	return &wrappedEndpointResolver{
+		awsResolver: resolver,
+	}
+}
+
+func finalizeClientEndpointResolverOptions(options *Options) {
+	options.EndpointOptions.LogDeprecated = options.ClientLogMode.IsDeprecatedUsage()
+
+	if len(options.EndpointOptions.ResolvedRegion) == 0 {
+		const fipsInfix = "-fips-"
+		const fipsPrefix = "fips-"
+		const fipsSuffix = "-fips"
+
+		if strings.Contains(options.Region, fipsInfix) ||
+			strings.Contains(options.Region, fipsPrefix) ||
+			strings.Contains(options.Region, fipsSuffix) {
+			options.EndpointOptions.ResolvedRegion = strings.ReplaceAll(strings.ReplaceAll(strings.ReplaceAll(
+				options.Region, fipsInfix, "-"), fipsPrefix, ""), fipsSuffix, "")
+			options.EndpointOptions.UseFIPSEndpoint = aws.FIPSEndpointStateEnabled
+		}
+	}
+
+}
+
+func resolveEndpointResolverV2(options *Options) {
+	if options.EndpointResolverV2 == nil {
+		options.EndpointResolverV2 = NewDefaultEndpointResolverV2()
+	}
+}
+
+func resolveBaseEndpoint(cfg aws.Config, o *Options) {
+	if cfg.BaseEndpoint != nil {
+		o.BaseEndpoint = cfg.BaseEndpoint
+	}
+
+	_, g := os.LookupEnv("AWS_ENDPOINT_URL")
+	_, s := os.LookupEnv("AWS_ENDPOINT_URL_STS")
+
+	if g && !s {
+		return
+	}
+
+	value, found, err := internalConfig.ResolveServiceBaseEndpoint(context.Background(), "STS", cfg.ConfigSources)
+	if found && err == nil {
+		o.BaseEndpoint = &value
+	}
+}
+
+func bindRegion(region string) *string {
+	if region == "" {
+		return nil
+	}
+	return aws.String(endpoints.MapFIPSRegion(region))
+}
+
+// EndpointParameters provides the parameters that influence how endpoints are
+// resolved.
+type EndpointParameters struct {
+	// The AWS region used to dispatch the request.
+	//
+	// Parameter is
+	// required.
+	//
+	// AWS::Region
+	Region *string
+
+	// When true, use the dual-stack endpoint. If the configured endpoint does not
+	// support dual-stack, dispatching the request MAY return an error.
+	//
+	// Defaults to
+	// false if no value is provided.
+	//
+	// AWS::UseDualStack
+	UseDualStack *bool
+
+	// When true, send this request to the FIPS-compliant regional endpoint. If the
+	// configured endpoint does not have a FIPS compliant endpoint, dispatching the
+	// request will return an error.
+	//
+	// Defaults to false if no value is
+	// provided.
+	//
+	// AWS::UseFIPS
+	UseFIPS *bool
+
+	// Override the endpoint used to send this request
+	//
+	// Parameter is
+	// required.
+	//
+	// SDK::Endpoint
+	Endpoint *string
+
+// Whether the global endpoint should be used, rather than the regional endpoint
+	// for us-east-1.
+	//
+	// Defaults to false if no value is
+	// provided.
+	//
+	// AWS::STS::UseGlobalEndpoint
+	UseGlobalEndpoint *bool
+}
+
+// ValidateRequired validates required parameters are set.
+func (p EndpointParameters) ValidateRequired() error {
+	if p.UseDualStack == nil {
+		return fmt.Errorf("parameter UseDualStack is required")
+	}
+
+	if p.UseFIPS == nil {
+		return fmt.Errorf("parameter UseFIPS is required")
+	}
+
+	if p.UseGlobalEndpoint == nil {
+		return fmt.Errorf("parameter UseGlobalEndpoint is required")
+	}
+
+	return nil
+}
+
+// WithDefaults returns a shallow copy of EndpointParameters with default values
+// applied to members where applicable.
+func (p EndpointParameters) WithDefaults() EndpointParameters {
+	if p.UseDualStack == nil {
+		p.UseDualStack = ptr.Bool(false)
+	}
+
+	if p.UseFIPS == nil {
+		p.UseFIPS = ptr.Bool(false)
+	}
+
+	if p.UseGlobalEndpoint == nil {
+		p.UseGlobalEndpoint = ptr.Bool(false)
+	}
+	return p
+}
+
+type stringSlice []string
+
+func (s stringSlice) Get(i int) *string {
+	if i < 0 || i >= len(s) {
+		return nil
+	}
+
+	v := s[i]
+	return &v
+}
+
+// EndpointResolverV2 provides the interface for resolving service endpoints.
+type EndpointResolverV2 interface {
+	// ResolveEndpoint attempts to resolve the endpoint with the provided options,
+	// returning the endpoint if found. Otherwise an error is returned.
+	ResolveEndpoint(ctx context.Context, params EndpointParameters) (
+		smithyendpoints.Endpoint, error,
+	)
+}
+
+// resolver provides the implementation for resolving endpoints.
+type resolver struct{}
+
+func NewDefaultEndpointResolverV2() EndpointResolverV2 {
+	return &resolver{}
+}
+
+// ResolveEndpoint attempts to resolve the endpoint with the provided options,
+// returning the endpoint if found. Otherwise an error is returned.
+func (r *resolver) ResolveEndpoint(
+	ctx context.Context, params EndpointParameters,
+) (
+	endpoint smithyendpoints.Endpoint, err error,
+) {
+	params = params.WithDefaults()
+	if err = params.ValidateRequired(); err != nil {
+		return endpoint, fmt.Errorf("endpoint parameters are not valid, %w", err)
+	}
+	_UseDualStack := *params.UseDualStack
+	_UseFIPS := *params.UseFIPS
+	_UseGlobalEndpoint := *params.UseGlobalEndpoint
+
+	if _UseGlobalEndpoint == true {
+		if !(params.Endpoint != nil) {
+			if exprVal := params.Region; exprVal != nil {
+				_Region := *exprVal
+				_ = _Region
+				if exprVal := awsrulesfn.GetPartition(_Region); exprVal != nil {
+					_PartitionResult := *exprVal
+					_ = _PartitionResult
+					if _UseFIPS == false {
+						if _UseDualStack == false {
+							if _Region == "ap-northeast-1" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "ap-south-1" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "ap-southeast-1" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "ap-southeast-2" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "aws-global" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "ca-central-1" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "eu-central-1" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "eu-north-1" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "eu-west-1" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "eu-west-2" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "eu-west-3" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "sa-east-1" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "us-east-1" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "us-east-2" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "us-west-1" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							if _Region == "us-west-2" {
+								uriString := "https://sts.amazonaws.com"
+
+								uri, err := url.Parse(uriString)
+								if err != nil {
+									return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+								}
+
+								return smithyendpoints.Endpoint{
+									URI:     *uri,
+									Headers: http.Header{},
+									Properties: func() smithy.Properties {
+										var out smithy.Properties
+										smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+											{
+												SchemeID: "aws.auth#sigv4",
+												SignerProperties: func() smithy.Properties {
+													var sp smithy.Properties
+													smithyhttp.SetSigV4SigningName(&sp, "sts")
+													smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+													smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+													return sp
+												}(),
+											},
+										})
+										return out
+									}(),
+								}, nil
+							}
+							uriString := func() string {
+								var out strings.Builder
+								out.WriteString("https://sts.")
+								out.WriteString(_Region)
+								out.WriteString(".")
+								out.WriteString(_PartitionResult.DnsSuffix)
+								return out.String()
+							}()
+
+							uri, err := url.Parse(uriString)
+							if err != nil {
+								return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+							}
+
+							return smithyendpoints.Endpoint{
+								URI:     *uri,
+								Headers: http.Header{},
+								Properties: func() smithy.Properties {
+									var out smithy.Properties
+									smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+										{
+											SchemeID: "aws.auth#sigv4",
+											SignerProperties: func() smithy.Properties {
+												var sp smithy.Properties
+												smithyhttp.SetSigV4SigningName(&sp, "sts")
+												smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+												smithyhttp.SetSigV4SigningRegion(&sp, _Region)
+												return sp
+											}(),
+										},
+									})
+									return out
+								}(),
+							}, nil
+						}
+					}
+				}
+			}
+		}
+	}
+	if exprVal := params.Endpoint; exprVal != nil {
+		_Endpoint := *exprVal
+		_ = _Endpoint
+		if _UseFIPS == true {
+			return endpoint, fmt.Errorf("endpoint rule error, %s", "Invalid Configuration: FIPS and custom endpoint are not supported")
+		}
+		if _UseDualStack == true {
+			return endpoint, fmt.Errorf("endpoint rule error, %s", "Invalid Configuration: Dualstack and custom endpoint are not supported")
+		}
+		uriString := _Endpoint
+
+		uri, err := url.Parse(uriString)
+		if err != nil {
+			return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+		}
+
+		return smithyendpoints.Endpoint{
+			URI:     *uri,
+			Headers: http.Header{},
+		}, nil
+	}
+	if exprVal := params.Region; exprVal != nil {
+		_Region := *exprVal
+		_ = _Region
+		if exprVal := awsrulesfn.GetPartition(_Region); exprVal != nil {
+			_PartitionResult := *exprVal
+			_ = _PartitionResult
+			if _UseFIPS == true {
+				if _UseDualStack == true {
+					if true == _PartitionResult.SupportsFIPS {
+						if true == _PartitionResult.SupportsDualStack {
+							uriString := func() string {
+								var out strings.Builder
+								out.WriteString("https://sts-fips.")
+								out.WriteString(_Region)
+								out.WriteString(".")
+								out.WriteString(_PartitionResult.DualStackDnsSuffix)
+								return out.String()
+							}()
+
+							uri, err := url.Parse(uriString)
+							if err != nil {
+								return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+							}
+
+							return smithyendpoints.Endpoint{
+								URI:     *uri,
+								Headers: http.Header{},
+							}, nil
+						}
+					}
+					return endpoint, fmt.Errorf("endpoint rule error, %s", "FIPS and DualStack are enabled, but this partition does not support one or both")
+				}
+			}
+			if _UseFIPS == true {
+				if _PartitionResult.SupportsFIPS == true {
+					if _PartitionResult.Name == "aws-us-gov" {
+						uriString := func() string {
+							var out strings.Builder
+							out.WriteString("https://sts.")
+							out.WriteString(_Region)
+							out.WriteString(".amazonaws.com")
+							return out.String()
+						}()
+
+						uri, err := url.Parse(uriString)
+						if err != nil {
+							return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+						}
+
+						return smithyendpoints.Endpoint{
+							URI:     *uri,
+							Headers: http.Header{},
+						}, nil
+					}
+					uriString := func() string {
+						var out strings.Builder
+						out.WriteString("https://sts-fips.")
+						out.WriteString(_Region)
+						out.WriteString(".")
+						out.WriteString(_PartitionResult.DnsSuffix)
+						return out.String()
+					}()
+
+					uri, err := url.Parse(uriString)
+					if err != nil {
+						return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+					}
+
+					return smithyendpoints.Endpoint{
+						URI:     *uri,
+						Headers: http.Header{},
+					}, nil
+				}
+				return endpoint, fmt.Errorf("endpoint rule error, %s", "FIPS is enabled but this partition does not support FIPS")
+			}
+			if _UseDualStack == true {
+				if true == _PartitionResult.SupportsDualStack {
+					uriString := func() string {
+						var out strings.Builder
+						out.WriteString("https://sts.")
+						out.WriteString(_Region)
+						out.WriteString(".")
+						out.WriteString(_PartitionResult.DualStackDnsSuffix)
+						return out.String()
+					}()
+
+					uri, err := url.Parse(uriString)
+					if err != nil {
+						return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+					}
+
+					return smithyendpoints.Endpoint{
+						URI:     *uri,
+						Headers: http.Header{},
+					}, nil
+				}
+				return endpoint, fmt.Errorf("endpoint rule error, %s", "DualStack is enabled but this partition does not support DualStack")
+			}
+			if _Region == "aws-global" {
+				uriString := "https://sts.amazonaws.com"
+
+				uri, err := url.Parse(uriString)
+				if err != nil {
+					return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+				}
+
+				return smithyendpoints.Endpoint{
+					URI:     *uri,
+					Headers: http.Header{},
+					Properties: func() smithy.Properties {
+						var out smithy.Properties
+						smithyauth.SetAuthOptions(&out, []*smithyauth.Option{
+							{
+								SchemeID: "aws.auth#sigv4",
+								SignerProperties: func() smithy.Properties {
+									var sp smithy.Properties
+									smithyhttp.SetSigV4SigningName(&sp, "sts")
+									smithyhttp.SetSigV4ASigningName(&sp, "sts")
+
+									smithyhttp.SetSigV4SigningRegion(&sp, "us-east-1")
+									return sp
+								}(),
+							},
+						})
+						return out
+					}(),
+				}, nil
+			}
+			uriString := func() string {
+				var out strings.Builder
+				out.WriteString("https://sts.")
+				out.WriteString(_Region)
+				out.WriteString(".")
+				out.WriteString(_PartitionResult.DnsSuffix)
+				return out.String()
+			}()
+
+			uri, err := url.Parse(uriString)
+			if err != nil {
+				return endpoint, fmt.Errorf("Failed to parse uri: %s", uriString)
+			}
+
+			return smithyendpoints.Endpoint{
+				URI:     *uri,
+				Headers: http.Header{},
+			}, nil
+		}
+		return endpoint, fmt.Errorf("Endpoint resolution failed. Invalid operation or environment input.")
+	}
+	return endpoint, fmt.Errorf("endpoint rule error, %s", "Invalid Configuration: Missing Region")
+}
+
+type endpointParamsBinder interface {
+	bindEndpointParams(*EndpointParameters)
+}
+
+func bindEndpointParams(ctx context.Context, input interface{}, options Options) *EndpointParameters {
+	params := &EndpointParameters{}
+
+	params.Region = bindRegion(options.Region)
+	params.UseDualStack = aws.Bool(options.EndpointOptions.UseDualStackEndpoint == aws.DualStackEndpointStateEnabled)
+	params.UseFIPS = aws.Bool(options.EndpointOptions.UseFIPSEndpoint == aws.FIPSEndpointStateEnabled)
+	params.Endpoint = options.BaseEndpoint
+
+	if b, ok := input.(endpointParamsBinder); ok {
+		b.bindEndpointParams(params)
+	}
+
+	return params
+}
+
+type resolveEndpointV2Middleware struct {
+	options Options
+}
+
+func (*resolveEndpointV2Middleware) ID() string {
+	return "ResolveEndpointV2"
+}
+
+func (m *resolveEndpointV2Middleware) HandleFinalize(ctx context.Context, in middleware.FinalizeInput, next middleware.FinalizeHandler) (
+	out middleware.FinalizeOutput, metadata middleware.Metadata, err error,
+) {
+	if awsmiddleware.GetRequiresLegacyEndpoints(ctx) {
+		return next.HandleFinalize(ctx, in)
+	}
+
+	if err := checkAccountID(getIdentity(ctx), m.options.AccountIDEndpointMode); err != nil {
+		return out, metadata, fmt.Errorf("invalid accountID set: %w", err)
+	}
+
+	req, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown transport type %T", in.Request)
+	}
+
+	if m.options.EndpointResolverV2 == nil {
+		return out, metadata, fmt.Errorf("expected endpoint resolver to not be nil")
+	}
+
+	params := bindEndpointParams(ctx, getOperationInput(ctx), m.options)
+	endpt, err := m.options.EndpointResolverV2.ResolveEndpoint(ctx, *params)
+	if err != nil {
+		return out, metadata, fmt.Errorf("failed to resolve service endpoint, %w", err)
+	}
+
+	if endpt.URI.RawPath == "" && req.URL.RawPath != "" {
+		endpt.URI.RawPath = endpt.URI.Path
+	}
+	req.URL.Scheme = endpt.URI.Scheme
+	req.URL.Host = endpt.URI.Host
+	req.URL.Path = smithyhttp.JoinPath(endpt.URI.Path, req.URL.Path)
+	req.URL.RawPath = smithyhttp.JoinPath(endpt.URI.RawPath, req.URL.RawPath)
+	for k := range endpt.Headers {
+		req.Header.Set(k, endpt.Headers.Get(k))
+	}
+
+	rscheme := getResolvedAuthScheme(ctx)
+	if rscheme == nil {
+		return out, metadata, fmt.Errorf("no resolved auth scheme")
+	}
+
+	opts, _ := smithyauth.GetAuthOptions(&endpt.Properties)
+	for _, o := range opts {
+		rscheme.SignerProperties.SetAll(&o.SignerProperties)
+	}
+
+	return next.HandleFinalize(ctx, in)
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sts/generated.json 🔗

@@ -0,0 +1,41 @@
+{
+    "dependencies": {
+        "github.com/aws/aws-sdk-go-v2": "v1.4.0",
+        "github.com/aws/aws-sdk-go-v2/internal/configsources": "v0.0.0-00010101000000-000000000000",
+        "github.com/aws/aws-sdk-go-v2/internal/endpoints/v2": "v2.0.0-00010101000000-000000000000",
+        "github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding": "v1.0.5",
+        "github.com/aws/aws-sdk-go-v2/service/internal/presigned-url": "v1.0.7",
+        "github.com/aws/smithy-go": "v1.4.0"
+    },
+    "files": [
+        "api_client.go",
+        "api_client_test.go",
+        "api_op_AssumeRole.go",
+        "api_op_AssumeRoleWithSAML.go",
+        "api_op_AssumeRoleWithWebIdentity.go",
+        "api_op_DecodeAuthorizationMessage.go",
+        "api_op_GetAccessKeyInfo.go",
+        "api_op_GetCallerIdentity.go",
+        "api_op_GetFederationToken.go",
+        "api_op_GetSessionToken.go",
+        "auth.go",
+        "deserializers.go",
+        "doc.go",
+        "endpoints.go",
+        "endpoints_config_test.go",
+        "endpoints_test.go",
+        "generated.json",
+        "internal/endpoints/endpoints.go",
+        "internal/endpoints/endpoints_test.go",
+        "options.go",
+        "protocol_test.go",
+        "serializers.go",
+        "snapshot_test.go",
+        "types/errors.go",
+        "types/types.go",
+        "validators.go"
+    ],
+    "go": "1.15",
+    "module": "github.com/aws/aws-sdk-go-v2/service/sts",
+    "unstable": false
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sts/internal/endpoints/endpoints.go 🔗

@@ -0,0 +1,512 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package endpoints
+
+import (
+	"github.com/aws/aws-sdk-go-v2/aws"
+	endpoints "github.com/aws/aws-sdk-go-v2/internal/endpoints/v2"
+	"github.com/aws/smithy-go/logging"
+	"regexp"
+)
+
+// Options is the endpoint resolver configuration options
+type Options struct {
+	// Logger is a logging implementation that log events should be sent to.
+	Logger logging.Logger
+
+	// LogDeprecated indicates that deprecated endpoints should be logged to the
+	// provided logger.
+	LogDeprecated bool
+
+	// ResolvedRegion is used to override the region to be resolved, rather then the
+	// using the value passed to the ResolveEndpoint method. This value is used by the
+	// SDK to translate regions like fips-us-east-1 or us-east-1-fips to an alternative
+	// name. You must not set this value directly in your application.
+	ResolvedRegion string
+
+	// DisableHTTPS informs the resolver to return an endpoint that does not use the
+	// HTTPS scheme.
+	DisableHTTPS bool
+
+	// UseDualStackEndpoint specifies the resolver must resolve a dual-stack endpoint.
+	UseDualStackEndpoint aws.DualStackEndpointState
+
+	// UseFIPSEndpoint specifies the resolver must resolve a FIPS endpoint.
+	UseFIPSEndpoint aws.FIPSEndpointState
+}
+
+func (o Options) GetResolvedRegion() string {
+	return o.ResolvedRegion
+}
+
+func (o Options) GetDisableHTTPS() bool {
+	return o.DisableHTTPS
+}
+
+func (o Options) GetUseDualStackEndpoint() aws.DualStackEndpointState {
+	return o.UseDualStackEndpoint
+}
+
+func (o Options) GetUseFIPSEndpoint() aws.FIPSEndpointState {
+	return o.UseFIPSEndpoint
+}
+
+func transformToSharedOptions(options Options) endpoints.Options {
+	return endpoints.Options{
+		Logger:               options.Logger,
+		LogDeprecated:        options.LogDeprecated,
+		ResolvedRegion:       options.ResolvedRegion,
+		DisableHTTPS:         options.DisableHTTPS,
+		UseDualStackEndpoint: options.UseDualStackEndpoint,
+		UseFIPSEndpoint:      options.UseFIPSEndpoint,
+	}
+}
+
+// Resolver STS endpoint resolver
+type Resolver struct {
+	partitions endpoints.Partitions
+}
+
+// ResolveEndpoint resolves the service endpoint for the given region and options
+func (r *Resolver) ResolveEndpoint(region string, options Options) (endpoint aws.Endpoint, err error) {
+	if len(region) == 0 {
+		return endpoint, &aws.MissingRegionError{}
+	}
+
+	opt := transformToSharedOptions(options)
+	return r.partitions.ResolveEndpoint(region, opt)
+}
+
+// New returns a new Resolver
+func New() *Resolver {
+	return &Resolver{
+		partitions: defaultPartitions,
+	}
+}
+
+var partitionRegexp = struct {
+	Aws      *regexp.Regexp
+	AwsCn    *regexp.Regexp
+	AwsIso   *regexp.Regexp
+	AwsIsoB  *regexp.Regexp
+	AwsIsoE  *regexp.Regexp
+	AwsIsoF  *regexp.Regexp
+	AwsUsGov *regexp.Regexp
+}{
+
+	Aws:      regexp.MustCompile("^(us|eu|ap|sa|ca|me|af|il)\\-\\w+\\-\\d+$"),
+	AwsCn:    regexp.MustCompile("^cn\\-\\w+\\-\\d+$"),
+	AwsIso:   regexp.MustCompile("^us\\-iso\\-\\w+\\-\\d+$"),
+	AwsIsoB:  regexp.MustCompile("^us\\-isob\\-\\w+\\-\\d+$"),
+	AwsIsoE:  regexp.MustCompile("^eu\\-isoe\\-\\w+\\-\\d+$"),
+	AwsIsoF:  regexp.MustCompile("^us\\-isof\\-\\w+\\-\\d+$"),
+	AwsUsGov: regexp.MustCompile("^us\\-gov\\-\\w+\\-\\d+$"),
+}
+
+var defaultPartitions = endpoints.Partitions{
+	{
+		ID: "aws",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.DualStackVariant,
+			}: {
+				Hostname:          "sts.{region}.api.aws",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "sts-fips.{region}.amazonaws.com",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+			}: {
+				Hostname:          "sts-fips.{region}.api.aws",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "sts.{region}.amazonaws.com",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.Aws,
+		IsRegionalized: true,
+		Endpoints: endpoints.Endpoints{
+			endpoints.EndpointKey{
+				Region: "af-south-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "ap-east-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "ap-northeast-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "ap-northeast-2",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "ap-northeast-3",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "ap-south-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "ap-south-2",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "ap-southeast-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "ap-southeast-2",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "ap-southeast-3",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "ap-southeast-4",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "aws-global",
+			}: endpoints.Endpoint{
+				Hostname: "sts.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-east-1",
+				},
+			},
+			endpoints.EndpointKey{
+				Region: "ca-central-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "ca-west-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "eu-central-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "eu-central-2",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "eu-north-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "eu-south-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "eu-south-2",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "eu-west-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "eu-west-2",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "eu-west-3",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "il-central-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "me-central-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "me-south-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "sa-east-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "us-east-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region:  "us-east-1",
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname: "sts-fips.us-east-1.amazonaws.com",
+			},
+			endpoints.EndpointKey{
+				Region: "us-east-1-fips",
+			}: endpoints.Endpoint{
+				Hostname: "sts-fips.us-east-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-east-1",
+				},
+				Deprecated: aws.TrueTernary,
+			},
+			endpoints.EndpointKey{
+				Region: "us-east-2",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region:  "us-east-2",
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname: "sts-fips.us-east-2.amazonaws.com",
+			},
+			endpoints.EndpointKey{
+				Region: "us-east-2-fips",
+			}: endpoints.Endpoint{
+				Hostname: "sts-fips.us-east-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-east-2",
+				},
+				Deprecated: aws.TrueTernary,
+			},
+			endpoints.EndpointKey{
+				Region: "us-west-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region:  "us-west-1",
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname: "sts-fips.us-west-1.amazonaws.com",
+			},
+			endpoints.EndpointKey{
+				Region: "us-west-1-fips",
+			}: endpoints.Endpoint{
+				Hostname: "sts-fips.us-west-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-west-1",
+				},
+				Deprecated: aws.TrueTernary,
+			},
+			endpoints.EndpointKey{
+				Region: "us-west-2",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region:  "us-west-2",
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname: "sts-fips.us-west-2.amazonaws.com",
+			},
+			endpoints.EndpointKey{
+				Region: "us-west-2-fips",
+			}: endpoints.Endpoint{
+				Hostname: "sts-fips.us-west-2.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-west-2",
+				},
+				Deprecated: aws.TrueTernary,
+			},
+		},
+	},
+	{
+		ID: "aws-cn",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.DualStackVariant,
+			}: {
+				Hostname:          "sts.{region}.api.amazonwebservices.com.cn",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "sts-fips.{region}.amazonaws.com.cn",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+			}: {
+				Hostname:          "sts-fips.{region}.api.amazonwebservices.com.cn",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "sts.{region}.amazonaws.com.cn",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsCn,
+		IsRegionalized: true,
+		Endpoints: endpoints.Endpoints{
+			endpoints.EndpointKey{
+				Region: "cn-north-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "cn-northwest-1",
+			}: endpoints.Endpoint{},
+		},
+	},
+	{
+		ID: "aws-iso",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "sts-fips.{region}.c2s.ic.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "sts.{region}.c2s.ic.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsIso,
+		IsRegionalized: true,
+		Endpoints: endpoints.Endpoints{
+			endpoints.EndpointKey{
+				Region: "us-iso-east-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region: "us-iso-west-1",
+			}: endpoints.Endpoint{},
+		},
+	},
+	{
+		ID: "aws-iso-b",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "sts-fips.{region}.sc2s.sgov.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "sts.{region}.sc2s.sgov.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsIsoB,
+		IsRegionalized: true,
+		Endpoints: endpoints.Endpoints{
+			endpoints.EndpointKey{
+				Region: "us-isob-east-1",
+			}: endpoints.Endpoint{},
+		},
+	},
+	{
+		ID: "aws-iso-e",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "sts-fips.{region}.cloud.adc-e.uk",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "sts.{region}.cloud.adc-e.uk",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsIsoE,
+		IsRegionalized: true,
+	},
+	{
+		ID: "aws-iso-f",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "sts-fips.{region}.csp.hci.ic.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "sts.{region}.csp.hci.ic.gov",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsIsoF,
+		IsRegionalized: true,
+	},
+	{
+		ID: "aws-us-gov",
+		Defaults: map[endpoints.DefaultKey]endpoints.Endpoint{
+			{
+				Variant: endpoints.DualStackVariant,
+			}: {
+				Hostname:          "sts.{region}.api.aws",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname:          "sts.{region}.amazonaws.com",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: endpoints.FIPSVariant | endpoints.DualStackVariant,
+			}: {
+				Hostname:          "sts-fips.{region}.api.aws",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+			{
+				Variant: 0,
+			}: {
+				Hostname:          "sts.{region}.amazonaws.com",
+				Protocols:         []string{"https"},
+				SignatureVersions: []string{"v4"},
+			},
+		},
+		RegionRegex:    partitionRegexp.AwsUsGov,
+		IsRegionalized: true,
+		Endpoints: endpoints.Endpoints{
+			endpoints.EndpointKey{
+				Region: "us-gov-east-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region:  "us-gov-east-1",
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname: "sts.us-gov-east-1.amazonaws.com",
+			},
+			endpoints.EndpointKey{
+				Region: "us-gov-east-1-fips",
+			}: endpoints.Endpoint{
+				Hostname: "sts.us-gov-east-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-gov-east-1",
+				},
+				Deprecated: aws.TrueTernary,
+			},
+			endpoints.EndpointKey{
+				Region: "us-gov-west-1",
+			}: endpoints.Endpoint{},
+			endpoints.EndpointKey{
+				Region:  "us-gov-west-1",
+				Variant: endpoints.FIPSVariant,
+			}: {
+				Hostname: "sts.us-gov-west-1.amazonaws.com",
+			},
+			endpoints.EndpointKey{
+				Region: "us-gov-west-1-fips",
+			}: endpoints.Endpoint{
+				Hostname: "sts.us-gov-west-1.amazonaws.com",
+				CredentialScope: endpoints.CredentialScope{
+					Region: "us-gov-west-1",
+				},
+				Deprecated: aws.TrueTernary,
+			},
+		},
+	},
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sts/options.go 🔗

@@ -0,0 +1,227 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"github.com/aws/aws-sdk-go-v2/aws"
+	awsmiddleware "github.com/aws/aws-sdk-go-v2/aws/middleware"
+	internalauthsmithy "github.com/aws/aws-sdk-go-v2/internal/auth/smithy"
+	smithyauth "github.com/aws/smithy-go/auth"
+	"github.com/aws/smithy-go/logging"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"net/http"
+)
+
+type HTTPClient interface {
+	Do(*http.Request) (*http.Response, error)
+}
+
+type Options struct {
+	// Set of options to modify how an operation is invoked. These apply to all
+	// operations invoked for this client. Use functional options on operation call to
+	// modify this list for per operation behavior.
+	APIOptions []func(*middleware.Stack) error
+
+	// Indicates how aws account ID is applied in endpoint2.0 routing
+	AccountIDEndpointMode aws.AccountIDEndpointMode
+
+	// The optional application specific identifier appended to the User-Agent header.
+	AppID string
+
+	// This endpoint will be given as input to an EndpointResolverV2. It is used for
+	// providing a custom base endpoint that is subject to modifications by the
+	// processing EndpointResolverV2.
+	BaseEndpoint *string
+
+	// Configures the events that will be sent to the configured logger.
+	ClientLogMode aws.ClientLogMode
+
+	// The credentials object to use when signing requests.
+	Credentials aws.CredentialsProvider
+
+	// The configuration DefaultsMode that the SDK should use when constructing the
+	// clients initial default settings.
+	DefaultsMode aws.DefaultsMode
+
+	// The endpoint options to be used when attempting to resolve an endpoint.
+	EndpointOptions EndpointResolverOptions
+
+	// The service endpoint resolver.
+	//
+	// Deprecated: Deprecated: EndpointResolver and WithEndpointResolver. Providing a
+	// value for this field will likely prevent you from using any endpoint-related
+	// service features released after the introduction of EndpointResolverV2 and
+	// BaseEndpoint.
+	//
+	// To migrate an EndpointResolver implementation that uses a custom endpoint, set
+	// the client option BaseEndpoint instead.
+	EndpointResolver EndpointResolver
+
+	// Resolves the endpoint used for a particular service operation. This should be
+	// used over the deprecated EndpointResolver.
+	EndpointResolverV2 EndpointResolverV2
+
+	// Signature Version 4 (SigV4) Signer
+	HTTPSignerV4 HTTPSignerV4
+
+	// The logger writer interface to write logging messages to.
+	Logger logging.Logger
+
+	// The region to send requests to. (Required)
+	Region string
+
+	// RetryMaxAttempts specifies the maximum number attempts an API client will call
+	// an operation that fails with a retryable error. A value of 0 is ignored, and
+	// will not be used to configure the API client created default retryer, or modify
+	// per operation call's retry max attempts.
+	//
+	// If specified in an operation call's functional options with a value that is
+	// different than the constructed client's Options, the Client's Retryer will be
+	// wrapped to use the operation's specific RetryMaxAttempts value.
+	RetryMaxAttempts int
+
+	// RetryMode specifies the retry mode the API client will be created with, if
+	// Retryer option is not also specified.
+	//
+	// When creating a new API Clients this member will only be used if the Retryer
+	// Options member is nil. This value will be ignored if Retryer is not nil.
+	//
+	// Currently does not support per operation call overrides, may in the future.
+	RetryMode aws.RetryMode
+
+	// Retryer guides how HTTP requests should be retried in case of recoverable
+	// failures. When nil the API client will use a default retryer. The kind of
+	// default retry created by the API client can be changed with the RetryMode
+	// option.
+	Retryer aws.Retryer
+
+	// The RuntimeEnvironment configuration, only populated if the DefaultsMode
+	// is set to DefaultsModeAuto and is initialized using
+	// config.LoadDefaultConfig. You should not populate this structure
+	// programmatically, or rely on the values here within your applications.
+	RuntimeEnvironment aws.RuntimeEnvironment
+
+	// The initial DefaultsMode used when the client options were constructed. If the
+	// DefaultsMode was set to aws.DefaultsModeAuto this will store what the resolved
+	// value was at that point in time.
+	//
+	// Currently does not support per operation call overrides, may in the future.
+	resolvedDefaultsMode aws.DefaultsMode
+
+	// The HTTP client to invoke API calls with. Defaults to client's default HTTP
+	// implementation if nil.
+	HTTPClient HTTPClient
+
+	// The auth scheme resolver which determines how to authenticate for each
+	// operation.
+	AuthSchemeResolver AuthSchemeResolver
+
+	// The list of auth schemes supported by the client.
+	AuthSchemes []smithyhttp.AuthScheme
+}
+
+// Copy creates a clone where the APIOptions list is deep copied.
+func (o Options) Copy() Options {
+	to := o
+	to.APIOptions = make([]func(*middleware.Stack) error, len(o.APIOptions))
+	copy(to.APIOptions, o.APIOptions)
+
+	return to
+}
+
+func (o Options) GetIdentityResolver(schemeID string) smithyauth.IdentityResolver {
+	if schemeID == "aws.auth#sigv4" {
+		return getSigV4IdentityResolver(o)
+	}
+	if schemeID == "smithy.api#noAuth" {
+		return &smithyauth.AnonymousIdentityResolver{}
+	}
+	return nil
+}
+
+// WithAPIOptions returns a functional option for setting the Client's APIOptions
+// option.
+func WithAPIOptions(optFns ...func(*middleware.Stack) error) func(*Options) {
+	return func(o *Options) {
+		o.APIOptions = append(o.APIOptions, optFns...)
+	}
+}
+
+// Deprecated: EndpointResolver and WithEndpointResolver. Providing a value for
+// this field will likely prevent you from using any endpoint-related service
+// features released after the introduction of EndpointResolverV2 and BaseEndpoint.
+//
+// To migrate an EndpointResolver implementation that uses a custom endpoint, set
+// the client option BaseEndpoint instead.
+func WithEndpointResolver(v EndpointResolver) func(*Options) {
+	return func(o *Options) {
+		o.EndpointResolver = v
+	}
+}
+
+// WithEndpointResolverV2 returns a functional option for setting the Client's
+// EndpointResolverV2 option.
+func WithEndpointResolverV2(v EndpointResolverV2) func(*Options) {
+	return func(o *Options) {
+		o.EndpointResolverV2 = v
+	}
+}
+
+func getSigV4IdentityResolver(o Options) smithyauth.IdentityResolver {
+	if o.Credentials != nil {
+		return &internalauthsmithy.CredentialsProviderAdapter{Provider: o.Credentials}
+	}
+	return nil
+}
+
+// WithSigV4SigningName applies an override to the authentication workflow to
+// use the given signing name for SigV4-authenticated operations.
+//
+// This is an advanced setting. The value here is FINAL, taking precedence over
+// the resolved signing name from both auth scheme resolution and endpoint
+// resolution.
+func WithSigV4SigningName(name string) func(*Options) {
+	fn := func(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+		out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+	) {
+		return next.HandleInitialize(awsmiddleware.SetSigningName(ctx, name), in)
+	}
+	return func(o *Options) {
+		o.APIOptions = append(o.APIOptions, func(s *middleware.Stack) error {
+			return s.Initialize.Add(
+				middleware.InitializeMiddlewareFunc("withSigV4SigningName", fn),
+				middleware.Before,
+			)
+		})
+	}
+}
+
+// WithSigV4SigningRegion applies an override to the authentication workflow to
+// use the given signing region for SigV4-authenticated operations.
+//
+// This is an advanced setting. The value here is FINAL, taking precedence over
+// the resolved signing region from both auth scheme resolution and endpoint
+// resolution.
+func WithSigV4SigningRegion(region string) func(*Options) {
+	fn := func(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+		out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+	) {
+		return next.HandleInitialize(awsmiddleware.SetSigningRegion(ctx, region), in)
+	}
+	return func(o *Options) {
+		o.APIOptions = append(o.APIOptions, func(s *middleware.Stack) error {
+			return s.Initialize.Add(
+				middleware.InitializeMiddlewareFunc("withSigV4SigningRegion", fn),
+				middleware.Before,
+			)
+		})
+	}
+}
+
+func ignoreAnonymousAuth(options *Options) {
+	if aws.IsCredentialsProvider(options.Credentials, (*aws.AnonymousCredentials)(nil)) {
+		options.Credentials = nil
+	}
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sts/serializers.go 🔗

@@ -0,0 +1,862 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"bytes"
+	"context"
+	"fmt"
+	"github.com/aws/aws-sdk-go-v2/aws/protocol/query"
+	"github.com/aws/aws-sdk-go-v2/service/sts/types"
+	smithy "github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/encoding/httpbinding"
+	"github.com/aws/smithy-go/middleware"
+	smithyhttp "github.com/aws/smithy-go/transport/http"
+	"path"
+)
+
+type awsAwsquery_serializeOpAssumeRole struct {
+}
+
+func (*awsAwsquery_serializeOpAssumeRole) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsAwsquery_serializeOpAssumeRole) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*AssumeRoleInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	operationPath := "/"
+	if len(request.Request.URL.Path) == 0 {
+		request.Request.URL.Path = operationPath
+	} else {
+		request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+		if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+			request.Request.URL.Path += "/"
+		}
+	}
+	request.Request.Method = "POST"
+	httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	httpBindingEncoder.SetHeader("Content-Type").String("application/x-www-form-urlencoded")
+
+	bodyWriter := bytes.NewBuffer(nil)
+	bodyEncoder := query.NewEncoder(bodyWriter)
+	body := bodyEncoder.Object()
+	body.Key("Action").String("AssumeRole")
+	body.Key("Version").String("2011-06-15")
+
+	if err := awsAwsquery_serializeOpDocumentAssumeRoleInput(input, bodyEncoder.Value); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	err = bodyEncoder.Encode()
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request, err = request.SetStream(bytes.NewReader(bodyWriter.Bytes())); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+
+type awsAwsquery_serializeOpAssumeRoleWithSAML struct {
+}
+
+func (*awsAwsquery_serializeOpAssumeRoleWithSAML) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsAwsquery_serializeOpAssumeRoleWithSAML) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*AssumeRoleWithSAMLInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	operationPath := "/"
+	if len(request.Request.URL.Path) == 0 {
+		request.Request.URL.Path = operationPath
+	} else {
+		request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+		if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+			request.Request.URL.Path += "/"
+		}
+	}
+	request.Request.Method = "POST"
+	httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	httpBindingEncoder.SetHeader("Content-Type").String("application/x-www-form-urlencoded")
+
+	bodyWriter := bytes.NewBuffer(nil)
+	bodyEncoder := query.NewEncoder(bodyWriter)
+	body := bodyEncoder.Object()
+	body.Key("Action").String("AssumeRoleWithSAML")
+	body.Key("Version").String("2011-06-15")
+
+	if err := awsAwsquery_serializeOpDocumentAssumeRoleWithSAMLInput(input, bodyEncoder.Value); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	err = bodyEncoder.Encode()
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request, err = request.SetStream(bytes.NewReader(bodyWriter.Bytes())); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+
+type awsAwsquery_serializeOpAssumeRoleWithWebIdentity struct {
+}
+
+func (*awsAwsquery_serializeOpAssumeRoleWithWebIdentity) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsAwsquery_serializeOpAssumeRoleWithWebIdentity) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*AssumeRoleWithWebIdentityInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	operationPath := "/"
+	if len(request.Request.URL.Path) == 0 {
+		request.Request.URL.Path = operationPath
+	} else {
+		request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+		if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+			request.Request.URL.Path += "/"
+		}
+	}
+	request.Request.Method = "POST"
+	httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	httpBindingEncoder.SetHeader("Content-Type").String("application/x-www-form-urlencoded")
+
+	bodyWriter := bytes.NewBuffer(nil)
+	bodyEncoder := query.NewEncoder(bodyWriter)
+	body := bodyEncoder.Object()
+	body.Key("Action").String("AssumeRoleWithWebIdentity")
+	body.Key("Version").String("2011-06-15")
+
+	if err := awsAwsquery_serializeOpDocumentAssumeRoleWithWebIdentityInput(input, bodyEncoder.Value); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	err = bodyEncoder.Encode()
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request, err = request.SetStream(bytes.NewReader(bodyWriter.Bytes())); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+
+type awsAwsquery_serializeOpDecodeAuthorizationMessage struct {
+}
+
+func (*awsAwsquery_serializeOpDecodeAuthorizationMessage) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsAwsquery_serializeOpDecodeAuthorizationMessage) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*DecodeAuthorizationMessageInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	operationPath := "/"
+	if len(request.Request.URL.Path) == 0 {
+		request.Request.URL.Path = operationPath
+	} else {
+		request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+		if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+			request.Request.URL.Path += "/"
+		}
+	}
+	request.Request.Method = "POST"
+	httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	httpBindingEncoder.SetHeader("Content-Type").String("application/x-www-form-urlencoded")
+
+	bodyWriter := bytes.NewBuffer(nil)
+	bodyEncoder := query.NewEncoder(bodyWriter)
+	body := bodyEncoder.Object()
+	body.Key("Action").String("DecodeAuthorizationMessage")
+	body.Key("Version").String("2011-06-15")
+
+	if err := awsAwsquery_serializeOpDocumentDecodeAuthorizationMessageInput(input, bodyEncoder.Value); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	err = bodyEncoder.Encode()
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request, err = request.SetStream(bytes.NewReader(bodyWriter.Bytes())); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+
+type awsAwsquery_serializeOpGetAccessKeyInfo struct {
+}
+
+func (*awsAwsquery_serializeOpGetAccessKeyInfo) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsAwsquery_serializeOpGetAccessKeyInfo) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*GetAccessKeyInfoInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	operationPath := "/"
+	if len(request.Request.URL.Path) == 0 {
+		request.Request.URL.Path = operationPath
+	} else {
+		request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+		if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+			request.Request.URL.Path += "/"
+		}
+	}
+	request.Request.Method = "POST"
+	httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	httpBindingEncoder.SetHeader("Content-Type").String("application/x-www-form-urlencoded")
+
+	bodyWriter := bytes.NewBuffer(nil)
+	bodyEncoder := query.NewEncoder(bodyWriter)
+	body := bodyEncoder.Object()
+	body.Key("Action").String("GetAccessKeyInfo")
+	body.Key("Version").String("2011-06-15")
+
+	if err := awsAwsquery_serializeOpDocumentGetAccessKeyInfoInput(input, bodyEncoder.Value); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	err = bodyEncoder.Encode()
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request, err = request.SetStream(bytes.NewReader(bodyWriter.Bytes())); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+
+type awsAwsquery_serializeOpGetCallerIdentity struct {
+}
+
+func (*awsAwsquery_serializeOpGetCallerIdentity) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsAwsquery_serializeOpGetCallerIdentity) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*GetCallerIdentityInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	operationPath := "/"
+	if len(request.Request.URL.Path) == 0 {
+		request.Request.URL.Path = operationPath
+	} else {
+		request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+		if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+			request.Request.URL.Path += "/"
+		}
+	}
+	request.Request.Method = "POST"
+	httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	httpBindingEncoder.SetHeader("Content-Type").String("application/x-www-form-urlencoded")
+
+	bodyWriter := bytes.NewBuffer(nil)
+	bodyEncoder := query.NewEncoder(bodyWriter)
+	body := bodyEncoder.Object()
+	body.Key("Action").String("GetCallerIdentity")
+	body.Key("Version").String("2011-06-15")
+
+	err = bodyEncoder.Encode()
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request, err = request.SetStream(bytes.NewReader(bodyWriter.Bytes())); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+
+type awsAwsquery_serializeOpGetFederationToken struct {
+}
+
+func (*awsAwsquery_serializeOpGetFederationToken) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsAwsquery_serializeOpGetFederationToken) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*GetFederationTokenInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	operationPath := "/"
+	if len(request.Request.URL.Path) == 0 {
+		request.Request.URL.Path = operationPath
+	} else {
+		request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+		if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+			request.Request.URL.Path += "/"
+		}
+	}
+	request.Request.Method = "POST"
+	httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	httpBindingEncoder.SetHeader("Content-Type").String("application/x-www-form-urlencoded")
+
+	bodyWriter := bytes.NewBuffer(nil)
+	bodyEncoder := query.NewEncoder(bodyWriter)
+	body := bodyEncoder.Object()
+	body.Key("Action").String("GetFederationToken")
+	body.Key("Version").String("2011-06-15")
+
+	if err := awsAwsquery_serializeOpDocumentGetFederationTokenInput(input, bodyEncoder.Value); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	err = bodyEncoder.Encode()
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request, err = request.SetStream(bytes.NewReader(bodyWriter.Bytes())); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+
+type awsAwsquery_serializeOpGetSessionToken struct {
+}
+
+func (*awsAwsquery_serializeOpGetSessionToken) ID() string {
+	return "OperationSerializer"
+}
+
+func (m *awsAwsquery_serializeOpGetSessionToken) HandleSerialize(ctx context.Context, in middleware.SerializeInput, next middleware.SerializeHandler) (
+	out middleware.SerializeOutput, metadata middleware.Metadata, err error,
+) {
+	request, ok := in.Request.(*smithyhttp.Request)
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown transport type %T", in.Request)}
+	}
+
+	input, ok := in.Parameters.(*GetSessionTokenInput)
+	_ = input
+	if !ok {
+		return out, metadata, &smithy.SerializationError{Err: fmt.Errorf("unknown input parameters type %T", in.Parameters)}
+	}
+
+	operationPath := "/"
+	if len(request.Request.URL.Path) == 0 {
+		request.Request.URL.Path = operationPath
+	} else {
+		request.Request.URL.Path = path.Join(request.Request.URL.Path, operationPath)
+		if request.Request.URL.Path != "/" && operationPath[len(operationPath)-1] == '/' {
+			request.Request.URL.Path += "/"
+		}
+	}
+	request.Request.Method = "POST"
+	httpBindingEncoder, err := httpbinding.NewEncoder(request.URL.Path, request.URL.RawQuery, request.Header)
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	httpBindingEncoder.SetHeader("Content-Type").String("application/x-www-form-urlencoded")
+
+	bodyWriter := bytes.NewBuffer(nil)
+	bodyEncoder := query.NewEncoder(bodyWriter)
+	body := bodyEncoder.Object()
+	body.Key("Action").String("GetSessionToken")
+	body.Key("Version").String("2011-06-15")
+
+	if err := awsAwsquery_serializeOpDocumentGetSessionTokenInput(input, bodyEncoder.Value); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	err = bodyEncoder.Encode()
+	if err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request, err = request.SetStream(bytes.NewReader(bodyWriter.Bytes())); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+
+	if request.Request, err = httpBindingEncoder.Encode(request.Request); err != nil {
+		return out, metadata, &smithy.SerializationError{Err: err}
+	}
+	in.Request = request
+
+	return next.HandleSerialize(ctx, in)
+}
+func awsAwsquery_serializeDocumentPolicyDescriptorListType(v []types.PolicyDescriptorType, value query.Value) error {
+	array := value.Array("member")
+
+	for i := range v {
+		av := array.Value()
+		if err := awsAwsquery_serializeDocumentPolicyDescriptorType(&v[i], av); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+func awsAwsquery_serializeDocumentPolicyDescriptorType(v *types.PolicyDescriptorType, value query.Value) error {
+	object := value.Object()
+	_ = object
+
+	if v.Arn != nil {
+		objectKey := object.Key("arn")
+		objectKey.String(*v.Arn)
+	}
+
+	return nil
+}
+
+func awsAwsquery_serializeDocumentProvidedContext(v *types.ProvidedContext, value query.Value) error {
+	object := value.Object()
+	_ = object
+
+	if v.ContextAssertion != nil {
+		objectKey := object.Key("ContextAssertion")
+		objectKey.String(*v.ContextAssertion)
+	}
+
+	if v.ProviderArn != nil {
+		objectKey := object.Key("ProviderArn")
+		objectKey.String(*v.ProviderArn)
+	}
+
+	return nil
+}
+
+func awsAwsquery_serializeDocumentProvidedContextsListType(v []types.ProvidedContext, value query.Value) error {
+	array := value.Array("member")
+
+	for i := range v {
+		av := array.Value()
+		if err := awsAwsquery_serializeDocumentProvidedContext(&v[i], av); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+func awsAwsquery_serializeDocumentTag(v *types.Tag, value query.Value) error {
+	object := value.Object()
+	_ = object
+
+	if v.Key != nil {
+		objectKey := object.Key("Key")
+		objectKey.String(*v.Key)
+	}
+
+	if v.Value != nil {
+		objectKey := object.Key("Value")
+		objectKey.String(*v.Value)
+	}
+
+	return nil
+}
+
+func awsAwsquery_serializeDocumentTagKeyListType(v []string, value query.Value) error {
+	array := value.Array("member")
+
+	for i := range v {
+		av := array.Value()
+		av.String(v[i])
+	}
+	return nil
+}
+
+func awsAwsquery_serializeDocumentTagListType(v []types.Tag, value query.Value) error {
+	array := value.Array("member")
+
+	for i := range v {
+		av := array.Value()
+		if err := awsAwsquery_serializeDocumentTag(&v[i], av); err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
+func awsAwsquery_serializeOpDocumentAssumeRoleInput(v *AssumeRoleInput, value query.Value) error {
+	object := value.Object()
+	_ = object
+
+	if v.DurationSeconds != nil {
+		objectKey := object.Key("DurationSeconds")
+		objectKey.Integer(*v.DurationSeconds)
+	}
+
+	if v.ExternalId != nil {
+		objectKey := object.Key("ExternalId")
+		objectKey.String(*v.ExternalId)
+	}
+
+	if v.Policy != nil {
+		objectKey := object.Key("Policy")
+		objectKey.String(*v.Policy)
+	}
+
+	if v.PolicyArns != nil {
+		objectKey := object.Key("PolicyArns")
+		if err := awsAwsquery_serializeDocumentPolicyDescriptorListType(v.PolicyArns, objectKey); err != nil {
+			return err
+		}
+	}
+
+	if v.ProvidedContexts != nil {
+		objectKey := object.Key("ProvidedContexts")
+		if err := awsAwsquery_serializeDocumentProvidedContextsListType(v.ProvidedContexts, objectKey); err != nil {
+			return err
+		}
+	}
+
+	if v.RoleArn != nil {
+		objectKey := object.Key("RoleArn")
+		objectKey.String(*v.RoleArn)
+	}
+
+	if v.RoleSessionName != nil {
+		objectKey := object.Key("RoleSessionName")
+		objectKey.String(*v.RoleSessionName)
+	}
+
+	if v.SerialNumber != nil {
+		objectKey := object.Key("SerialNumber")
+		objectKey.String(*v.SerialNumber)
+	}
+
+	if v.SourceIdentity != nil {
+		objectKey := object.Key("SourceIdentity")
+		objectKey.String(*v.SourceIdentity)
+	}
+
+	if v.Tags != nil {
+		objectKey := object.Key("Tags")
+		if err := awsAwsquery_serializeDocumentTagListType(v.Tags, objectKey); err != nil {
+			return err
+		}
+	}
+
+	if v.TokenCode != nil {
+		objectKey := object.Key("TokenCode")
+		objectKey.String(*v.TokenCode)
+	}
+
+	if v.TransitiveTagKeys != nil {
+		objectKey := object.Key("TransitiveTagKeys")
+		if err := awsAwsquery_serializeDocumentTagKeyListType(v.TransitiveTagKeys, objectKey); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func awsAwsquery_serializeOpDocumentAssumeRoleWithSAMLInput(v *AssumeRoleWithSAMLInput, value query.Value) error {
+	object := value.Object()
+	_ = object
+
+	if v.DurationSeconds != nil {
+		objectKey := object.Key("DurationSeconds")
+		objectKey.Integer(*v.DurationSeconds)
+	}
+
+	if v.Policy != nil {
+		objectKey := object.Key("Policy")
+		objectKey.String(*v.Policy)
+	}
+
+	if v.PolicyArns != nil {
+		objectKey := object.Key("PolicyArns")
+		if err := awsAwsquery_serializeDocumentPolicyDescriptorListType(v.PolicyArns, objectKey); err != nil {
+			return err
+		}
+	}
+
+	if v.PrincipalArn != nil {
+		objectKey := object.Key("PrincipalArn")
+		objectKey.String(*v.PrincipalArn)
+	}
+
+	if v.RoleArn != nil {
+		objectKey := object.Key("RoleArn")
+		objectKey.String(*v.RoleArn)
+	}
+
+	if v.SAMLAssertion != nil {
+		objectKey := object.Key("SAMLAssertion")
+		objectKey.String(*v.SAMLAssertion)
+	}
+
+	return nil
+}
+
+func awsAwsquery_serializeOpDocumentAssumeRoleWithWebIdentityInput(v *AssumeRoleWithWebIdentityInput, value query.Value) error {
+	object := value.Object()
+	_ = object
+
+	if v.DurationSeconds != nil {
+		objectKey := object.Key("DurationSeconds")
+		objectKey.Integer(*v.DurationSeconds)
+	}
+
+	if v.Policy != nil {
+		objectKey := object.Key("Policy")
+		objectKey.String(*v.Policy)
+	}
+
+	if v.PolicyArns != nil {
+		objectKey := object.Key("PolicyArns")
+		if err := awsAwsquery_serializeDocumentPolicyDescriptorListType(v.PolicyArns, objectKey); err != nil {
+			return err
+		}
+	}
+
+	if v.ProviderId != nil {
+		objectKey := object.Key("ProviderId")
+		objectKey.String(*v.ProviderId)
+	}
+
+	if v.RoleArn != nil {
+		objectKey := object.Key("RoleArn")
+		objectKey.String(*v.RoleArn)
+	}
+
+	if v.RoleSessionName != nil {
+		objectKey := object.Key("RoleSessionName")
+		objectKey.String(*v.RoleSessionName)
+	}
+
+	if v.WebIdentityToken != nil {
+		objectKey := object.Key("WebIdentityToken")
+		objectKey.String(*v.WebIdentityToken)
+	}
+
+	return nil
+}
+
+func awsAwsquery_serializeOpDocumentDecodeAuthorizationMessageInput(v *DecodeAuthorizationMessageInput, value query.Value) error {
+	object := value.Object()
+	_ = object
+
+	if v.EncodedMessage != nil {
+		objectKey := object.Key("EncodedMessage")
+		objectKey.String(*v.EncodedMessage)
+	}
+
+	return nil
+}
+
+func awsAwsquery_serializeOpDocumentGetAccessKeyInfoInput(v *GetAccessKeyInfoInput, value query.Value) error {
+	object := value.Object()
+	_ = object
+
+	if v.AccessKeyId != nil {
+		objectKey := object.Key("AccessKeyId")
+		objectKey.String(*v.AccessKeyId)
+	}
+
+	return nil
+}
+
+func awsAwsquery_serializeOpDocumentGetCallerIdentityInput(v *GetCallerIdentityInput, value query.Value) error {
+	object := value.Object()
+	_ = object
+
+	return nil
+}
+
+func awsAwsquery_serializeOpDocumentGetFederationTokenInput(v *GetFederationTokenInput, value query.Value) error {
+	object := value.Object()
+	_ = object
+
+	if v.DurationSeconds != nil {
+		objectKey := object.Key("DurationSeconds")
+		objectKey.Integer(*v.DurationSeconds)
+	}
+
+	if v.Name != nil {
+		objectKey := object.Key("Name")
+		objectKey.String(*v.Name)
+	}
+
+	if v.Policy != nil {
+		objectKey := object.Key("Policy")
+		objectKey.String(*v.Policy)
+	}
+
+	if v.PolicyArns != nil {
+		objectKey := object.Key("PolicyArns")
+		if err := awsAwsquery_serializeDocumentPolicyDescriptorListType(v.PolicyArns, objectKey); err != nil {
+			return err
+		}
+	}
+
+	if v.Tags != nil {
+		objectKey := object.Key("Tags")
+		if err := awsAwsquery_serializeDocumentTagListType(v.Tags, objectKey); err != nil {
+			return err
+		}
+	}
+
+	return nil
+}
+
+func awsAwsquery_serializeOpDocumentGetSessionTokenInput(v *GetSessionTokenInput, value query.Value) error {
+	object := value.Object()
+	_ = object
+
+	if v.DurationSeconds != nil {
+		objectKey := object.Key("DurationSeconds")
+		objectKey.Integer(*v.DurationSeconds)
+	}
+
+	if v.SerialNumber != nil {
+		objectKey := object.Key("SerialNumber")
+		objectKey.String(*v.SerialNumber)
+	}
+
+	if v.TokenCode != nil {
+		objectKey := object.Key("TokenCode")
+		objectKey.String(*v.TokenCode)
+	}
+
+	return nil
+}

vendor/github.com/aws/aws-sdk-go-v2/service/sts/types/errors.go 🔗

@@ -0,0 +1,248 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package types
+
+import (
+	"fmt"
+	smithy "github.com/aws/smithy-go"
+)
+
+// The web identity token that was passed is expired or is not valid. Get a new
+// identity token from the identity provider and then retry the request.
+type ExpiredTokenException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *ExpiredTokenException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *ExpiredTokenException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *ExpiredTokenException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "ExpiredTokenException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *ExpiredTokenException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// The request could not be fulfilled because the identity provider (IDP) that was
+// asked to verify the incoming identity token could not be reached. This is often
+// a transient error caused by network conditions. Retry the request a limited
+// number of times so that you don't exceed the request rate. If the error
+// persists, the identity provider might be down or not responding.
+type IDPCommunicationErrorException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *IDPCommunicationErrorException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *IDPCommunicationErrorException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *IDPCommunicationErrorException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "IDPCommunicationError"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *IDPCommunicationErrorException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// The identity provider (IdP) reported that authentication failed. This might be
+// because the claim is invalid.
+//
+// If this error is returned for the AssumeRoleWithWebIdentity operation, it can
+// also mean that the claim has expired or has been explicitly revoked.
+type IDPRejectedClaimException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *IDPRejectedClaimException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *IDPRejectedClaimException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *IDPRejectedClaimException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "IDPRejectedClaim"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *IDPRejectedClaimException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// The error returned if the message passed to DecodeAuthorizationMessage was
+// invalid. This can happen if the token contains invalid characters, such as
+// linebreaks.
+type InvalidAuthorizationMessageException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *InvalidAuthorizationMessageException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *InvalidAuthorizationMessageException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *InvalidAuthorizationMessageException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "InvalidAuthorizationMessageException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *InvalidAuthorizationMessageException) ErrorFault() smithy.ErrorFault {
+	return smithy.FaultClient
+}
+
+// The web identity token that was passed could not be validated by Amazon Web
+// Services. Get a new identity token from the identity provider and then retry the
+// request.
+type InvalidIdentityTokenException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *InvalidIdentityTokenException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *InvalidIdentityTokenException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *InvalidIdentityTokenException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "InvalidIdentityToken"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *InvalidIdentityTokenException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// The request was rejected because the policy document was malformed. The error
+// message describes the specific error.
+type MalformedPolicyDocumentException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *MalformedPolicyDocumentException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *MalformedPolicyDocumentException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *MalformedPolicyDocumentException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "MalformedPolicyDocument"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *MalformedPolicyDocumentException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// The request was rejected because the total packed size of the session policies
+// and session tags combined was too large. An Amazon Web Services conversion
+// compresses the session policy document, session policy ARNs, and session tags
+// into a packed binary format that has a separate limit. The error message
+// indicates by percentage how close the policies and tags are to the upper size
+// limit. For more information, see [Passing Session Tags in STS]in the IAM User Guide.
+//
+// You could receive this error even though you meet other defined session policy
+// and session tag limits. For more information, see [IAM and STS Entity Character Limits]in the IAM User Guide.
+//
+// [Passing Session Tags in STS]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html
+// [IAM and STS Entity Character Limits]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-quotas.html#reference_iam-limits-entity-length
+type PackedPolicyTooLargeException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *PackedPolicyTooLargeException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *PackedPolicyTooLargeException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *PackedPolicyTooLargeException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "PackedPolicyTooLarge"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *PackedPolicyTooLargeException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }
+
+// STS is not activated in the requested region for the account that is being
+// asked to generate credentials. The account administrator must use the IAM
+// console to activate STS in that region. For more information, see [Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region]in the IAM
+// User Guide.
+//
+// [Activating and Deactivating Amazon Web Services STS in an Amazon Web Services Region]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html
+type RegionDisabledException struct {
+	Message *string
+
+	ErrorCodeOverride *string
+
+	noSmithyDocumentSerde
+}
+
+func (e *RegionDisabledException) Error() string {
+	return fmt.Sprintf("%s: %s", e.ErrorCode(), e.ErrorMessage())
+}
+func (e *RegionDisabledException) ErrorMessage() string {
+	if e.Message == nil {
+		return ""
+	}
+	return *e.Message
+}
+func (e *RegionDisabledException) ErrorCode() string {
+	if e == nil || e.ErrorCodeOverride == nil {
+		return "RegionDisabledException"
+	}
+	return *e.ErrorCodeOverride
+}
+func (e *RegionDisabledException) ErrorFault() smithy.ErrorFault { return smithy.FaultClient }

vendor/github.com/aws/aws-sdk-go-v2/service/sts/types/types.go 🔗

@@ -0,0 +1,144 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package types
+
+import (
+	smithydocument "github.com/aws/smithy-go/document"
+	"time"
+)
+
+// The identifiers for the temporary security credentials that the operation
+// returns.
+type AssumedRoleUser struct {
+
+	// The ARN of the temporary security credentials that are returned from the AssumeRole
+	// action. For more information about ARNs and how to use them in policies, see [IAM Identifiers]in
+	// the IAM User Guide.
+	//
+	// [IAM Identifiers]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html
+	//
+	// This member is required.
+	Arn *string
+
+	// A unique identifier that contains the role ID and the role session name of the
+	// role that is being assumed. The role ID is generated by Amazon Web Services when
+	// the role is created.
+	//
+	// This member is required.
+	AssumedRoleId *string
+
+	noSmithyDocumentSerde
+}
+
+// Amazon Web Services credentials for API authentication.
+type Credentials struct {
+
+	// The access key ID that identifies the temporary security credentials.
+	//
+	// This member is required.
+	AccessKeyId *string
+
+	// The date on which the current credentials expire.
+	//
+	// This member is required.
+	Expiration *time.Time
+
+	// The secret access key that can be used to sign requests.
+	//
+	// This member is required.
+	SecretAccessKey *string
+
+	// The token that users must pass to the service API to use the temporary
+	// credentials.
+	//
+	// This member is required.
+	SessionToken *string
+
+	noSmithyDocumentSerde
+}
+
+// Identifiers for the federated user that is associated with the credentials.
+type FederatedUser struct {
+
+	// The ARN that specifies the federated user that is associated with the
+	// credentials. For more information about ARNs and how to use them in policies,
+	// see [IAM Identifiers]in the IAM User Guide.
+	//
+	// [IAM Identifiers]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html
+	//
+	// This member is required.
+	Arn *string
+
+	// The string that identifies the federated user associated with the credentials,
+	// similar to the unique ID of an IAM user.
+	//
+	// This member is required.
+	FederatedUserId *string
+
+	noSmithyDocumentSerde
+}
+
+// A reference to the IAM managed policy that is passed as a session policy for a
+// role session or a federated user session.
+type PolicyDescriptorType struct {
+
+	// The Amazon Resource Name (ARN) of the IAM managed policy to use as a session
+	// policy for the role. For more information about ARNs, see [Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces]in the Amazon Web
+	// Services General Reference.
+	//
+	// [Amazon Resource Names (ARNs) and Amazon Web Services Service Namespaces]: https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
+	Arn *string
+
+	noSmithyDocumentSerde
+}
+
+// Contains information about the provided context. This includes the signed and
+// encrypted trusted context assertion and the context provider ARN from which the
+// trusted context assertion was generated.
+type ProvidedContext struct {
+
+	// The signed and encrypted trusted context assertion generated by the context
+	// provider. The trusted context assertion is signed and encrypted by Amazon Web
+	// Services STS.
+	ContextAssertion *string
+
+	// The context provider ARN from which the trusted context assertion was generated.
+	ProviderArn *string
+
+	noSmithyDocumentSerde
+}
+
+// You can pass custom key-value pair attributes when you assume a role or
+// federate a user. These are called session tags. You can then use the session
+// tags to control access to resources. For more information, see [Tagging Amazon Web Services STS Sessions]in the IAM User
+// Guide.
+//
+// [Tagging Amazon Web Services STS Sessions]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_session-tags.html
+type Tag struct {
+
+	// The key for a session tag.
+	//
+	// You can pass up to 50 session tags. The plain text session tag keys can’t
+	// exceed 128 characters. For these and additional limits, see [IAM and STS Character Limits]in the IAM User
+	// Guide.
+	//
+	// [IAM and STS Character Limits]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html#reference_iam-limits-entity-length
+	//
+	// This member is required.
+	Key *string
+
+	// The value for a session tag.
+	//
+	// You can pass up to 50 session tags. The plain text session tag values can’t
+	// exceed 256 characters. For these and additional limits, see [IAM and STS Character Limits]in the IAM User
+	// Guide.
+	//
+	// [IAM and STS Character Limits]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html#reference_iam-limits-entity-length
+	//
+	// This member is required.
+	Value *string
+
+	noSmithyDocumentSerde
+}
+
+type noSmithyDocumentSerde = smithydocument.NoSerde

vendor/github.com/aws/aws-sdk-go-v2/service/sts/validators.go 🔗

@@ -0,0 +1,305 @@
+// Code generated by smithy-go-codegen DO NOT EDIT.
+
+package sts
+
+import (
+	"context"
+	"fmt"
+	"github.com/aws/aws-sdk-go-v2/service/sts/types"
+	smithy "github.com/aws/smithy-go"
+	"github.com/aws/smithy-go/middleware"
+)
+
+type validateOpAssumeRole struct {
+}
+
+func (*validateOpAssumeRole) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpAssumeRole) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*AssumeRoleInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpAssumeRoleInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+type validateOpAssumeRoleWithSAML struct {
+}
+
+func (*validateOpAssumeRoleWithSAML) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpAssumeRoleWithSAML) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*AssumeRoleWithSAMLInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpAssumeRoleWithSAMLInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+type validateOpAssumeRoleWithWebIdentity struct {
+}
+
+func (*validateOpAssumeRoleWithWebIdentity) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpAssumeRoleWithWebIdentity) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*AssumeRoleWithWebIdentityInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpAssumeRoleWithWebIdentityInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+type validateOpDecodeAuthorizationMessage struct {
+}
+
+func (*validateOpDecodeAuthorizationMessage) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpDecodeAuthorizationMessage) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*DecodeAuthorizationMessageInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpDecodeAuthorizationMessageInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+type validateOpGetAccessKeyInfo struct {
+}
+
+func (*validateOpGetAccessKeyInfo) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpGetAccessKeyInfo) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*GetAccessKeyInfoInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpGetAccessKeyInfoInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+type validateOpGetFederationToken struct {
+}
+
+func (*validateOpGetFederationToken) ID() string {
+	return "OperationInputValidation"
+}
+
+func (m *validateOpGetFederationToken) HandleInitialize(ctx context.Context, in middleware.InitializeInput, next middleware.InitializeHandler) (
+	out middleware.InitializeOutput, metadata middleware.Metadata, err error,
+) {
+	input, ok := in.Parameters.(*GetFederationTokenInput)
+	if !ok {
+		return out, metadata, fmt.Errorf("unknown input parameters type %T", in.Parameters)
+	}
+	if err := validateOpGetFederationTokenInput(input); err != nil {
+		return out, metadata, err
+	}
+	return next.HandleInitialize(ctx, in)
+}
+
+func addOpAssumeRoleValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpAssumeRole{}, middleware.After)
+}
+
+func addOpAssumeRoleWithSAMLValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpAssumeRoleWithSAML{}, middleware.After)
+}
+
+func addOpAssumeRoleWithWebIdentityValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpAssumeRoleWithWebIdentity{}, middleware.After)
+}
+
+func addOpDecodeAuthorizationMessageValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpDecodeAuthorizationMessage{}, middleware.After)
+}
+
+func addOpGetAccessKeyInfoValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpGetAccessKeyInfo{}, middleware.After)
+}
+
+func addOpGetFederationTokenValidationMiddleware(stack *middleware.Stack) error {
+	return stack.Initialize.Add(&validateOpGetFederationToken{}, middleware.After)
+}
+
+func validateTag(v *types.Tag) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "Tag"}
+	if v.Key == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("Key"))
+	}
+	if v.Value == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("Value"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateTagListType(v []types.Tag) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "TagListType"}
+	for i := range v {
+		if err := validateTag(&v[i]); err != nil {
+			invalidParams.AddNested(fmt.Sprintf("[%d]", i), err.(smithy.InvalidParamsError))
+		}
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateOpAssumeRoleInput(v *AssumeRoleInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "AssumeRoleInput"}
+	if v.RoleArn == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("RoleArn"))
+	}
+	if v.RoleSessionName == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("RoleSessionName"))
+	}
+	if v.Tags != nil {
+		if err := validateTagListType(v.Tags); err != nil {
+			invalidParams.AddNested("Tags", err.(smithy.InvalidParamsError))
+		}
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateOpAssumeRoleWithSAMLInput(v *AssumeRoleWithSAMLInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "AssumeRoleWithSAMLInput"}
+	if v.RoleArn == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("RoleArn"))
+	}
+	if v.PrincipalArn == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("PrincipalArn"))
+	}
+	if v.SAMLAssertion == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("SAMLAssertion"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateOpAssumeRoleWithWebIdentityInput(v *AssumeRoleWithWebIdentityInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "AssumeRoleWithWebIdentityInput"}
+	if v.RoleArn == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("RoleArn"))
+	}
+	if v.RoleSessionName == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("RoleSessionName"))
+	}
+	if v.WebIdentityToken == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("WebIdentityToken"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateOpDecodeAuthorizationMessageInput(v *DecodeAuthorizationMessageInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "DecodeAuthorizationMessageInput"}
+	if v.EncodedMessage == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("EncodedMessage"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateOpGetAccessKeyInfoInput(v *GetAccessKeyInfoInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "GetAccessKeyInfoInput"}
+	if v.AccessKeyId == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("AccessKeyId"))
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}
+
+func validateOpGetFederationTokenInput(v *GetFederationTokenInput) error {
+	if v == nil {
+		return nil
+	}
+	invalidParams := smithy.InvalidParamsError{Context: "GetFederationTokenInput"}
+	if v.Name == nil {
+		invalidParams.Add(smithy.NewErrParamRequired("Name"))
+	}
+	if v.Tags != nil {
+		if err := validateTagListType(v.Tags); err != nil {
+			invalidParams.AddNested("Tags", err.(smithy.InvalidParamsError))
+		}
+	}
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	} else {
+		return nil
+	}
+}

vendor/github.com/aws/smithy-go/.gitignore 🔗

@@ -0,0 +1,29 @@
+# Eclipse
+.classpath
+.project
+.settings/
+
+# Intellij
+.idea/
+*.iml
+*.iws
+
+# Mac
+.DS_Store
+
+# Maven
+target/
+**/dependency-reduced-pom.xml
+
+# Gradle
+/.gradle
+build/
+*/out/
+*/*/out/
+
+# VS Code
+bin/
+.vscode/
+
+# make
+c.out

vendor/github.com/aws/smithy-go/.travis.yml 🔗

@@ -0,0 +1,28 @@
+language: go
+sudo: true
+dist: bionic
+
+branches:
+  only:
+    - main
+
+os:
+  - linux
+  - osx
+  # Travis doesn't work with windows and Go tip
+  #- windows
+
+go:
+  - tip
+
+matrix:
+  allow_failures:
+    - go: tip
+
+before_install:
+  - if [ "$TRAVIS_OS_NAME" = "windows" ]; then choco install make; fi
+  - (cd /tmp/; go get golang.org/x/lint/golint)
+
+script:
+  - make go test -v ./...;
+

vendor/github.com/aws/smithy-go/CHANGELOG.md 🔗

@@ -0,0 +1,239 @@
+# Release (2024-06-27)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.20.3
+  * **Bug Fix**: Fix encoding/cbor test overflow on x86.
+
+# Release (2024-03-29)
+
+* No change notes available for this release.
+
+# Release (2024-02-21)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.20.1
+  * **Bug Fix**: Remove runtime dependency on go-cmp.
+
+# Release (2024-02-13)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.20.0
+  * **Feature**: Add codegen definition for sigv4a trait.
+  * **Feature**: Bump minimum Go version to 1.20 per our language support policy.
+
+# Release (2023-12-07)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.19.0
+  * **Feature**: Support modeled request compression.
+
+# Release (2023-11-30)
+
+* No change notes available for this release.
+
+# Release (2023-11-29)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.18.0
+  * **Feature**: Expose Options() method on generated service clients.
+
+# Release (2023-11-15)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.17.0
+  * **Feature**: Support identity/auth components of client reference architecture.
+
+# Release (2023-10-31)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.16.0
+  * **Feature**: **LANG**: Bump minimum go version to 1.19.
+
+# Release (2023-10-06)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.15.0
+  * **Feature**: Add `http.WithHeaderComment` middleware.
+
+# Release (2023-08-18)
+
+* No change notes available for this release.
+
+# Release (2023-08-07)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.14.1
+  * **Bug Fix**: Prevent duplicated error returns in EndpointResolverV2 default implementation.
+
+# Release (2023-07-31)
+
+## General Highlights
+* **Feature**: Adds support for smithy-modeled endpoint resolution.
+
+# Release (2022-12-02)
+
+* No change notes available for this release.
+
+# Release (2022-10-24)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.13.4
+  * **Bug Fix**: fixed document type checking for encoding nested types
+
+# Release (2022-09-14)
+
+* No change notes available for this release.
+
+# Release (v1.13.2)
+
+* No change notes available for this release.
+
+# Release (v1.13.1)
+
+* No change notes available for this release.
+
+# Release (v1.13.0)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.13.0
+  * **Feature**: Adds support for the Smithy httpBearerAuth authentication trait to smithy-go. This allows the SDK to support the bearer authentication flow for API operations decorated with httpBearerAuth. An API client will need to be provided with its own bearer.TokenProvider implementation or use the bearer.StaticTokenProvider implementation.
+
+# Release (v1.12.1)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.12.1
+  * **Bug Fix**: Fixes a bug where JSON object keys were not escaped.
+
+# Release (v1.12.0)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.12.0
+  * **Feature**: `transport/http`: Add utility for setting context metadata when operation serializer automatically assigns content-type default value.
+
+# Release (v1.11.3)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.11.3
+  * **Dependency Update**: Updates smithy-go unit test dependency go-cmp to 0.5.8.
+
+# Release (v1.11.2)
+
+* No change notes available for this release.
+
+# Release (v1.11.1)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.11.1
+  * **Bug Fix**: Updates the smithy-go HTTP Request to correctly handle building the request to an http.Request. Related to [aws/aws-sdk-go-v2#1583](https://github.com/aws/aws-sdk-go-v2/issues/1583)
+
+# Release (v1.11.0)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.11.0
+  * **Feature**: Updates deserialization of header list to supported quoted strings
+
+# Release (v1.10.0)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.10.0
+  * **Feature**: Add `ptr.Duration`, `ptr.ToDuration`, `ptr.DurationSlice`, `ptr.ToDurationSlice`, `ptr.DurationMap`, and `ptr.ToDurationMap` functions for the `time.Duration` type.
+
+# Release (v1.9.1)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.9.1
+  * **Documentation**: Fixes various typos in Go package documentation.
+
+# Release (v1.9.0)
+
+## Module Highlights
+* `github.com/aws/smithy-go`: v1.9.0
+  * **Feature**: sync: OnceErr, can be used to concurrently record a signal when an error has occurred.
+  * **Bug Fix**: `transport/http`: CloseResponseBody and ErrorCloseResponseBody middleware have been updated to ensure that the body is fully drained before closing.
+
+# Release v1.8.1
+
+### Smithy Go Module
+* **Bug Fix**: Fixed an issue that would cause the HTTP Content-Length to be set to 0 if the stream body was not set.
+  * Fixes [aws/aws-sdk-go-v2#1418](https://github.com/aws/aws-sdk-go-v2/issues/1418)
+
+# Release v1.8.0
+
+### Smithy Go Module
+
+* `time`: Add support for parsing additional DateTime timestamp format ([#324](https://github.com/aws/smithy-go/pull/324))
+  * Adds support for parsing DateTime timestamp formatted time similar to RFC 3339, but without the `Z` character, nor UTC offset.
+  * Fixes [#1387](https://github.com/aws/aws-sdk-go-v2/issues/1387)
+
+# Release v1.7.0
+
+### Smithy Go Module
+* `ptr`:  Handle error for deferred file close call ([#314](https://github.com/aws/smithy-go/pull/314))
+  * Handle error for defer close call
+* `middleware`: Add Clone to Metadata ([#318](https://github.com/aws/smithy-go/pull/318))
+  * Adds a new Clone method to the middleware Metadata type. This provides a shallow clone of the entries in the Metadata.
+* `document`: Add new package for document shape serialization support ([#310](https://github.com/aws/smithy-go/pull/310))
+
+### Codegen
+* Add Smithy Document Shape Support ([#310](https://github.com/aws/smithy-go/pull/310))
+  * Adds support for Smithy Document shapes and supporting types for protocols to implement support
+
+# Release v1.6.0 (2021-07-15)
+
+### Smithy Go Module
+* `encoding/httpbinding`: Support has been added for encoding `float32` and `float64` values that are `NaN`, `Infinity`, or `-Infinity`. ([#316](https://github.com/aws/smithy-go/pull/316))
+
+### Codegen
+* Adds support for handling `float32` and `float64` `NaN` values in HTTP Protocol Unit Tests. ([#316](https://github.com/aws/smithy-go/pull/316))
+* Adds support protocol generator implementations to override the error code string returned by `ErrorCode` methods on generated error types. ([#315](https://github.com/aws/smithy-go/pull/315))
+
+# Release v1.5.0 (2021-06-25)
+
+### Smithy Go module
+* `time`: Update time parsing to not be as strict for HTTPDate and DateTime ([#307](https://github.com/aws/smithy-go/pull/307))
+  * Fixes [#302](https://github.com/aws/smithy-go/issues/302) by changing time to UTC before formatting so no local offset time is lost.
+
+### Codegen
+* Adds support for integrating client members via plugins ([#301](https://github.com/aws/smithy-go/pull/301))
+* Fix serialization of enum types marked with payload trait ([#296](https://github.com/aws/smithy-go/pull/296))
+* Update generation of API client modules to include a manifest of files generated ([#283](https://github.com/aws/smithy-go/pull/283))
+* Update Group Java group ID for smithy-go generator ([#298](https://github.com/aws/smithy-go/pull/298))
+* Support the delegation of determining the errors that can occur for an operation ([#304](https://github.com/aws/smithy-go/pull/304))
+* Support for marking and documenting deprecated client config fields. ([#303](https://github.com/aws/smithy-go/pull/303))
+
+# Release v1.4.0 (2021-05-06)
+
+### Smithy Go module
+* `encoding/xml`: Fix escaping of Next Line and Line Start in XML Encoder ([#267](https://github.com/aws/smithy-go/pull/267))
+
+### Codegen
+* Add support for Smithy 1.7 ([#289](https://github.com/aws/smithy-go/pull/289))
+* Add support for httpQueryParams location
+* Add support for model renaming conflict resolution with service closure
+
+# Release v1.3.1 (2021-04-08)
+
+### Smithy Go module
+* `transport/http`: Loosen endpoint hostname validation to allow specifying port numbers. ([#279](https://github.com/aws/smithy-go/pull/279))
+* `io`: Fix RingBuffer panics due to out of bounds index. ([#282](https://github.com/aws/smithy-go/pull/282))
+
+# Release v1.3.0 (2021-04-01)
+
+### Smithy Go module
+* `transport/http`: Add utility to safely join string to url path, and url raw query.
+
+### Codegen
+* Update HttpBindingProtocolGenerator to use http/transport JoinPath and JoinQuery utility.
+
+# Release v1.2.0 (2021-03-12)
+
+### Smithy Go module
+* Fix support for parsing shortened year format in HTTP Date header.
+* Fix GitHub APIDiff action workflow to get gorelease tool correctly.
+* Fix codegen artifact unit test for Go 1.16
+
+### Codegen
+* Fix generating paginator nil parameter handling before usage.
+* Fix Serialize unboxed members decorated as required.
+* Add ability to define resolvers at both client construction and operation invocation.
+* Support for extending paginators with custom runtime trait

vendor/github.com/aws/smithy-go/CODE_OF_CONDUCT.md 🔗

@@ -0,0 +1,4 @@
+## Code of Conduct
+This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
+For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
+opensource-codeofconduct@amazon.com with any additional questions or comments.

vendor/github.com/aws/smithy-go/CONTRIBUTING.md 🔗

@@ -0,0 +1,59 @@
+# Contributing Guidelines
+
+Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
+documentation, we greatly value feedback and contributions from our community.
+
+Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
+information to effectively respond to your bug report or contribution.
+
+
+## Reporting Bugs/Feature Requests
+
+We welcome you to use the GitHub issue tracker to report bugs or suggest features.
+
+When filing an issue, please check existing open, or recently closed, issues to make sure somebody else hasn't already
+reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
+
+* A reproducible test case or series of steps
+* The version of our code being used
+* Any modifications you've made relevant to the bug
+* Anything unusual about your environment or deployment
+
+
+## Contributing via Pull Requests
+Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
+
+1. You are working against the latest source on the *main* branch.
+2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
+3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
+
+To send us a pull request, please:
+
+1. Fork the repository.
+2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
+3. Ensure local tests pass.
+4. Commit to your fork using clear commit messages.
+5. Send us a pull request, answering any default questions in the pull request interface.
+6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
+
+GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
+[creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
+
+
+## Finding contributions to work on
+Looking at the existing issues is a great way to find something to contribute to. As our projects use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any 'help wanted' issues is a great place to start.
+
+
+## Code of Conduct
+This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
+For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
+opensource-codeofconduct@amazon.com with any additional questions or comments.
+
+
+## Security issue notifications
+If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.
+
+
+## Licensing
+
+See the [LICENSE](LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.

vendor/github.com/aws/smithy-go/LICENSE 🔗

@@ -0,0 +1,175 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.

vendor/github.com/aws/smithy-go/Makefile 🔗

@@ -0,0 +1,102 @@
+PRE_RELEASE_VERSION ?=
+
+RELEASE_MANIFEST_FILE ?=
+RELEASE_CHGLOG_DESC_FILE ?=
+
+REPOTOOLS_VERSION ?= latest
+REPOTOOLS_MODULE = github.com/awslabs/aws-go-multi-module-repository-tools
+REPOTOOLS_CMD_CALCULATE_RELEASE = ${REPOTOOLS_MODULE}/cmd/calculaterelease@${REPOTOOLS_VERSION}
+REPOTOOLS_CMD_CALCULATE_RELEASE_ADDITIONAL_ARGS ?=
+REPOTOOLS_CMD_UPDATE_REQUIRES = ${REPOTOOLS_MODULE}/cmd/updaterequires@${REPOTOOLS_VERSION}
+REPOTOOLS_CMD_UPDATE_MODULE_METADATA = ${REPOTOOLS_MODULE}/cmd/updatemodulemeta@${REPOTOOLS_VERSION}
+REPOTOOLS_CMD_GENERATE_CHANGELOG = ${REPOTOOLS_MODULE}/cmd/generatechangelog@${REPOTOOLS_VERSION}
+REPOTOOLS_CMD_CHANGELOG = ${REPOTOOLS_MODULE}/cmd/changelog@${REPOTOOLS_VERSION}
+REPOTOOLS_CMD_TAG_RELEASE = ${REPOTOOLS_MODULE}/cmd/tagrelease@${REPOTOOLS_VERSION}
+REPOTOOLS_CMD_MODULE_VERSION = ${REPOTOOLS_MODULE}/cmd/moduleversion@${REPOTOOLS_VERSION}
+
+UNIT_TEST_TAGS=
+BUILD_TAGS=
+
+ifneq ($(PRE_RELEASE_VERSION),)
+	REPOTOOLS_CMD_CALCULATE_RELEASE_ADDITIONAL_ARGS += -preview=${PRE_RELEASE_VERSION}
+endif
+
+smithy-publish-local:
+	cd codegen && ./gradlew publishToMavenLocal
+
+smithy-build:
+	cd codegen && ./gradlew build
+
+smithy-clean:
+	cd codegen && ./gradlew clean
+
+##################
+# Linting/Verify #
+##################
+.PHONY: verify vet cover
+
+verify: vet
+
+vet:
+	go vet ${BUILD_TAGS} --all ./...
+
+cover:
+	go test ${BUILD_TAGS} -coverprofile c.out ./...
+	@cover=`go tool cover -func c.out | grep '^total:' | awk '{ print $$3+0 }'`; \
+		echo "total (statements): $$cover%";
+
+################
+# Unit Testing #
+################
+.PHONY: unit unit-race unit-test unit-race-test
+
+unit: verify
+	go vet ${BUILD_TAGS} --all ./... && \
+	go test ${BUILD_TAGS} ${RUN_NONE} ./... && \
+	go test -timeout=1m ${UNIT_TEST_TAGS} ./...
+
+unit-race: verify
+	go vet ${BUILD_TAGS} --all ./... && \
+	go test ${BUILD_TAGS} ${RUN_NONE} ./... && \
+	go test -timeout=1m ${UNIT_TEST_TAGS} -race -cpu=4 ./...
+
+unit-test: verify
+	go test -timeout=1m ${UNIT_TEST_TAGS} ./...
+
+unit-race-test: verify
+	go test -timeout=1m ${UNIT_TEST_TAGS} -race -cpu=4 ./...
+
+#####################
+#  Release Process  #
+#####################
+.PHONY: preview-release pre-release-validation release
+
+preview-release:
+	go run ${REPOTOOLS_CMD_CALCULATE_RELEASE} ${REPOTOOLS_CMD_CALCULATE_RELEASE_ADDITIONAL_ARGS}
+
+pre-release-validation:
+	@if [[ -z "${RELEASE_MANIFEST_FILE}" ]]; then \
+		echo "RELEASE_MANIFEST_FILE is required to specify the file to write the release manifest" && false; \
+	fi
+	@if [[ -z "${RELEASE_CHGLOG_DESC_FILE}" ]]; then \
+		echo "RELEASE_CHGLOG_DESC_FILE is required to specify the file to write the release notes" && false; \
+	fi
+
+release: pre-release-validation
+	go run ${REPOTOOLS_CMD_CALCULATE_RELEASE} -o ${RELEASE_MANIFEST_FILE} ${REPOTOOLS_CMD_CALCULATE_RELEASE_ADDITIONAL_ARGS}
+	go run ${REPOTOOLS_CMD_UPDATE_REQUIRES} -release ${RELEASE_MANIFEST_FILE}
+	go run ${REPOTOOLS_CMD_UPDATE_MODULE_METADATA} -release ${RELEASE_MANIFEST_FILE}
+	go run ${REPOTOOLS_CMD_GENERATE_CHANGELOG} -release ${RELEASE_MANIFEST_FILE} -o ${RELEASE_CHGLOG_DESC_FILE}
+	go run ${REPOTOOLS_CMD_CHANGELOG} rm -all
+	go run ${REPOTOOLS_CMD_TAG_RELEASE} -release ${RELEASE_MANIFEST_FILE}
+
+module-version:
+	@go run ${REPOTOOLS_CMD_MODULE_VERSION} .
+
+##############
+# Repo Tools #
+##############
+.PHONY: install-changelog
+
+install-changelog:
+	go install ${REPOTOOLS_MODULE}/cmd/changelog@${REPOTOOLS_VERSION}
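
The `cover` target above pipes `go tool cover -func` output through `grep` and `awk` to extract the total statement coverage as a bare number. As a hedged illustration (the `82.5%` figure is sample output, not from this repository), the extraction step can be reproduced on its own:

```shell
# Simulate the last line of `go tool cover -func c.out` and extract the
# percentage the same way the Makefile's `cover` target does:
# keep only the `total:` line, take field 3, and `+0` coerces "82.5%" to 82.5.
printf 'total:\t(statements)\t82.5%%\n' | grep '^total:' | awk '{ print $3+0 }'
```

The `$3+0` trick relies on awk's numeric coercion to strip the trailing `%` sign, so the target can print `total (statements): 82.5%` without extra string handling.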

vendor/github.com/aws/smithy-go/README.md 🔗

@@ -0,0 +1,27 @@
+## Smithy Go
+
+[![Go Build Status](https://github.com/aws/smithy-go/actions/workflows/go.yml/badge.svg?branch=main)](https://github.com/aws/smithy-go/actions/workflows/go.yml) [![Codegen Build Status](https://github.com/aws/smithy-go/actions/workflows/codegen.yml/badge.svg?branch=main)](https://github.com/aws/smithy-go/actions/workflows/codegen.yml)
+
+[Smithy](https://smithy.io/) code generators for Go.
+
+**WARNING: All interfaces are subject to change.**
+
+## Can I use this?
+
+To generate a usable Smithy client you must provide a [protocol definition](https://github.com/aws/smithy-go/blob/main/codegen/smithy-go-codegen/src/main/java/software/amazon/smithy/go/codegen/integration/ProtocolGenerator.java),
+such as [AWS restJson1](https://smithy.io/2.0/aws/protocols/aws-restjson1-protocol.html),
+which drives the generation of transport mechanisms and serialization/deserialization
+code ("serde").
+
+The code generator does not currently support any protocols out of the box,
+so the usability of this project on its own is currently limited.
+Support for all [AWS protocols](https://smithy.io/2.0/aws/protocols/index.html)
+exists in [aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2). We are
+tracking the movement of those out of the SDK into smithy-go in
+[#458](https://github.com/aws/smithy-go/issues/458), but there's currently no
+timeline for doing so.
+
+## License
+
+This project is licensed under the Apache-2.0 License.
+