Documentation Index

Fetch the complete documentation index at: https://docs.getmcp.com/llms.txt

Use this file to discover all available pages before exploring further.

The Model Context Protocol (MCP) is an open standard created by Anthropic. It defines how AI clients communicate with external tools and data sources using JSON-RPC 2.0 over HTTP. GetMCP implements the Streamable HTTP transport — the most widely supported variant. Any MCP-compatible client (Claude Desktop, Claude Code, Cursor, Windsurf, and others) can connect to a GetMCP server without any additional configuration.
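For orientation, the wire exchange is plain JSON-RPC 2.0. Below is a minimal sketch of the tools/list request a client sends and an abbreviated response shape; the tool name and input schema are illustrative examples, not output from a real GetMCP server:

```python
import json

# A minimal JSON-RPC 2.0 request an MCP client POSTs to list tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# An abbreviated shape the server might answer with (hypothetical tool):
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_customer",
                "description": "Fetch a customer record by ID.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"customer_id": {"type": "string"}},
                },
            }
        ]
    },
}

body = json.dumps(request)
print(body)
```

The client reads `result.tools` from the response and presents those tools to the model.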
Any REST API that accepts HTTP requests — your own backend API, a third-party service, or an internal microservice. Common examples:
  • Your own SaaS product API
  • Internal company APIs behind a private network (if WordPress can reach them)
  • WordPress REST API endpoints (including WooCommerce)
If it accepts an HTTP request and returns a response, GetMCP can wrap it as an MCP tool.
There are two approaches, depending on whether each user brings their own API key or all users share the same key.

Option A — User-supplied credentials (each user’s own key): The end user adds their API key to the headers block in their MCP client config file (e.g., claude_desktop_config.json). GetMCP forwards that header to the external API with every tool call. The key is never stored in GetMCP. See Authentication for a full walkthrough.

Option B — Fixed/shared credentials (one key for all users): Add the API key as a Custom Header in the tool’s Advanced Settings. This works for all common auth patterns:
  • Bearer token: Authorization: Bearer your_token
  • API key: X-API-Key: your_key
  • Query param: add ?api_key=your_key directly to the endpoint URL
  • Basic auth: Authorization: Basic base64(user:pass)
Fixed Custom Headers are encrypted at rest and never exposed to the AI client or in logs.
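The four patterns above all reduce to an HTTP header or query parameter. A small sketch of how each header value is constructed (the token and key values are placeholders, not real credentials); the only one that needs computation is Basic auth, which base64-encodes `user:pass`:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the Authorization value for HTTP Basic auth:
    'Basic ' followed by base64(user:pass)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# Placeholder values only; real keys belong in GetMCP's Custom Headers.
bearer_header = {"Authorization": "Bearer your_token"}   # Bearer token
api_key_header = {"X-API-Key": "your_key"}               # API-key header
basic_header = {"Authorization": basic_auth_header("user", "pass")}

print(basic_header["Authorization"])
```

The query-param pattern needs no header at all: the key is appended to the endpoint URL itself.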
When an AI client connects to your MCP server, it fetches the full list of tools via tools/list. For each tool it receives the name and description you wrote.

The AI reads your description and decides, based on the user’s message, whether and when to call the tool. Your tool description is the most important thing you configure: a vague description leads to missed or incorrect tool calls. Be specific about what the tool does, what it returns, and when it should be used.
The AI receives the HTTP response from your API, either as raw JSON or as formatted text depending on your Response Mapping configuration. You can use a dot-notation path to extract a specific field (e.g. data.results) or return the full response body. The AI reads this and uses it to construct its reply to the user.
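The dot-notation extraction can be pictured as walking the response one key at a time. A minimal sketch of the idea, assuming a hypothetical response body; this mirrors the behavior described above, not GetMCP's actual implementation:

```python
from typing import Any

def extract(payload: dict, path: str) -> Any:
    """Follow a dot-notation path like 'data.results' into a nested
    dict, returning the value at the end of the path."""
    value: Any = payload
    for key in path.split("."):
        value = value[key]
    return value

# Hypothetical API response body:
body = {"data": {"results": [{"id": 1}, {"id": 2}], "total": 2}}

print(extract(body, "data.results"))  # only this field reaches the AI
```

With no path configured, the full response body is returned instead.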
Yes — each tool maps to one API endpoint. Add as many tools as you need to a server. A typical setup might have get_customer, create_order, list_products, and send_invoice as separate tools, all in one server, all calling different endpoints of the same API.
Yes. You can create as many servers as you need. Each server gets its own unique URL. Use multiple servers to:
  • Separate tools by API domain (one server for Stripe, one for GitHub)
  • Give different user groups access to different tool sets
  • Maintain separate staging and production configurations
Define pagination parameters (page, offset, limit) as tool input parameters and map them to the query string or request body. The AI will call the tool multiple times with different values to retrieve additional pages. Most AI clients handle this naturally when the tool description explains the pagination behavior.
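The call pattern the AI follows is essentially a loop: request a page, and stop when a page comes back empty. A sketch with a stand-in function in place of a real paginated endpoint (the data and function name are invented for illustration):

```python
def list_products(page: int, limit: int = 2) -> list:
    """Stand-in for a paginated API endpoint (hypothetical data)."""
    items = ["alpha", "beta", "gamma", "delta", "epsilon"]
    start = (page - 1) * limit
    return items[start:start + limit]

# The AI client retrieves all pages by calling the tool repeatedly:
page, results = 1, []
while True:
    batch = list_products(page)
    if not batch:      # an empty page signals the end of the data
        break
    results.extend(batch)
    page += 1

print(results)
```

Describing this stop condition in the tool description is what lets the AI paginate on its own.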
Yes. Use {{parameter_name}} placeholders in your Endpoint URL and map those parameters to Path in the parameter mapping. For example:
https://api.example.com/users/{{user_id}}/orders
When the AI calls the tool with user_id: 42, GetMCP replaces {{user_id}} with 42 before making the request.
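The substitution step can be sketched in a few lines; this is an illustration of the placeholder behavior described above, not GetMCP's source:

```python
import re

def fill_placeholders(url: str, args: dict) -> str:
    """Replace each {{name}} placeholder in the URL with the
    corresponding argument value, as in the Path mapping above."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(args[m.group(1)]), url)

url = "https://api.example.com/users/{{user_id}}/orders"
print(fill_placeholders(url, {"user_id": 42}))
# https://api.example.com/users/42/orders
```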
When you create a server, GetMCP generates a unique URL like:
https://yoursite.com/mcp/{slug}
Anyone with this URL can connect — so treat it like a password. Share it only with the AI clients you trust. You can find the URL on the server detail page in the GetMCP admin.
Production credentials are the credentials you set in the MCP client configuration. They are used every time an AI client calls the tool in real usage.

Test credentials are configured at the server level in Server Settings > Test Auth. They are used only when you click Test on a tool inside the GetMCP admin panel, so you can test against a sandbox or staging environment without touching your production keys.

The two never mix: production calls always use the credentials configured in the MCP client, and admin tests always use the server-level test credentials.
GetMCP offers several import methods — choose whichever matches what you have:
  • cURL Import — paste a curl command (from API docs or browser DevTools → Network → Copy as cURL) and GetMCP extracts the URL, method, headers, and parameters automatically.
  • OpenAPI / Swagger Import — upload or paste an OpenAPI 3.x or Swagger 2.x spec and bulk-create tools from the defined endpoints.
  • Postman Collection Import — import a Postman collection JSON to generate tools from saved requests.
  • Template — install a pre-built tool collection for popular APIs (Stripe, GitHub, Slack, WooCommerce, and more) in one click.
  • From scratch — manually fill in the endpoint URL, method, and parameters for full control.
In all import cases, the authentication type is detected automatically, but you must enter your actual credentials manually after the import.
  • Tools — Execute actions by making HTTP requests to your API. This is what AI clients call when a user asks them to do something (fetch data, create a record, trigger an action).
  • Resources — Expose read-only content the AI can reference (documentation, configuration, static data). The AI reads these for context, not to take action.
  • Prompts — Reusable prompt templates with typed arguments. The AI retrieves these and uses them as structured instructions.
Most developers only need Tools.
No — in both credential approaches, the AI client never sees the raw API keys used for external APIs.
  • Fixed Custom Headers: stored encrypted in WordPress and injected server-side. The AI client only sees the response your API returns.
  • User-supplied headers: the user’s own key travels from their MCP client config to GetMCP in the HTTP request, then is forwarded directly to the external API. GetMCP never writes it to disk or the database.
In both cases, credentials are never included in the data returned to the AI client.
GetMCP returns the HTTP status code and error body to the AI client as part of the tool response. The AI will typically inform the user that the tool failed and why. You can configure Retry Count in Advanced Settings to automatically retry on transient failures before returning an error.
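The retry behavior amounts to re-issuing the request on transient (5xx) responses, with a pause between attempts, before surfacing the final result. A sketch under those assumptions; this is not GetMCP's actual implementation, and the simulated endpoint is invented for illustration:

```python
import time

def call_with_retry(make_request, retries: int = 2, backoff: float = 0.1):
    """Retry a request on transient 5xx responses, up to `retries`
    extra attempts, then return the last (status, body) pair."""
    for attempt in range(retries + 1):
        status, body = make_request()
        if status < 500:          # success or a client error: don't retry
            return status, body
        if attempt < retries:
            time.sleep(backoff * (2 ** attempt))  # simple exponential backoff
    return status, body           # final failure goes back to the AI client

# Simulated endpoint that fails once, then succeeds:
calls = iter([(503, "unavailable"), (200, "ok")])
print(call_with_retry(lambda: next(calls)))
# (200, 'ok')
```

Note that 4xx responses are returned immediately: retrying a bad request or a missing resource would not help.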
Yes. Every tool has a built-in Test panel in the admin. Enter argument values, click Run Test, and see the exact request sent to your API and the response returned. This lets you verify the tool works correctly before sharing the server URL with any AI client.
HTTPS is strongly recommended for production. API credentials are transmitted as HTTP headers — on HTTP they can be intercepted. Most AI clients also prefer or require HTTPS. For local development, HTTP works fine.