The Chattermill MCP Server allows AI agents (such as Claude Desktop, Cursor, and Codex) to securely access your customer feedback. Once configured, your teams can query Chattermill insights directly within the AI agent they already use - asking questions in natural language and receiving answers instantly.
This guide walks you through configuring your AI agent to access the Chattermill MCP Server.
What is Model Context Protocol (MCP)?
Model Context Protocol (MCP) is an open standard that allows AI agents to securely connect to external applications, agents, and platforms.
Without MCP, AI agents can only work with information you manually paste into a chat or with public data. MCP enables AI agents to query your business data directly, while respecting existing permissions and security controls.
For Chattermill users, this means customer feedback insights from Chattermill can be accessed directly inside the AI agents they already use, without switching platforms or building custom integrations.
Figure: How AI agents access Chattermill through the MCP Server.
Supported AI Agents
The Chattermill MCP Server implements the open MCP standard and can be used with any MCP-compatible AI agent.
We have tested compatibility with the AI agents listed under How to Configure The Chattermill MCP Server below.
In addition, custom agents built by your teams can access the Chattermill MCP Server, provided they implement the MCP protocol.
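For teams building their own agents, the request shape is small. Below is a minimal Python sketch of the JSON-RPC 2.0 message MCP defines for tool calls; the "query" argument name is an assumption for illustration, so check the actual tool schemas your agent discovers via the MCP tools/list method.

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 "tools/call" request body as defined by MCP."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(payload)

# Hypothetical call: the tool name comes from this guide, but the
# "query" parameter name is assumed, not confirmed by the server schema.
body = build_tool_call(1, "fetch_highlights", {"query": "Summarize complaints about delivery"})
```

A custom agent would POST this body to the Chattermill MCP endpoint over HTTPS along with its Authorization header.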
What The Chattermill MCP Server Does (Examples)
The Chattermill MCP Server allows AI agents to securely access and analyze your customer feedback data.
By exposing Chattermill’s analysis capabilities through MCP, the server enables AI agents to retrieve insights, summaries, metrics, and raw feedback using natural-language queries.
See how Product Teams can use the Chattermill MCP to pull customer feedback insights directly into their AI agents.
See how CX teams can use the Chattermill MCP to surface customer feedback insights directly in AI Agents like Claude.
The Chattermill MCP Tools and Capabilities
The Chattermill MCP Server exposes four MCP tools that AI agents can use to retrieve and analyze customer feedback data.
Each tool performs a specific function, such as generating summaries, retrieving feedback clusters, accessing quantitative metrics, or fetching individual feedback entries. AI agents select and call these tools as needed to answer questions and retrieve insights from Chattermill.
The Chattermill MCP Server currently exposes the following tools:
Insight Tools
fetch_highlights
Generates an AI-powered summary and analysis of customer feedback based on a natural-language query.
This tool is useful for retrieving high-level insights and summaries, such as identifying top customer issues or understanding key themes within a dataset.
Example use cases:
Summarize complaints about delivery
Identify key issues reported in a specific region or timeframe
fetch_observations
Fetches clusters of similar feedback grouped by theme, including representative customer snippets.
Observations are sorted by volume and provide a structured view of recurring customer issues. This tool is suitable for exploring patterns in feedback and understanding the drivers behind specific themes.
fetch_quantitative_insight
Retrieves quantitative metrics and structured data based on a natural-language query.
This tool returns data used to generate charts and reports in Chattermill, such as NPS trends, sentiment metrics, or feedback volume over time. It enables AI agents to answer quantitative questions and analyze trends in customer feedback.
fetch_raw_feedback
Fetches individual customer feedback entries based on a natural-language query.
This tool allows AI agents to retrieve verbatim customer responses for detailed analysis, validation, or investigation of specific issues.
Discovery and Search Tools
list_themes
Lists all available themes grouped by category for a project.
This tool is used for discovery - it returns the full taxonomy of categories and themes configured in a Chattermill project, along with their IDs. These IDs can then be used as filters in other tools such as fetch_observations or fetch_raw_feedback to scope queries to specific topics.
Example use cases:
Discover what feedback topics are tracked before running an analysis
Find the correct theme or category ID to use as a filter in a follow-up query
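To illustrate the discovery-then-filter workflow, here is a Python sketch that picks a theme ID out of a list_themes-style result. The response shape, category names, and IDs are invented for the example; the real schema may differ.

```python
# Hypothetical shape of a list_themes result, for illustration only.
themes_response = {
    "categories": [
        {"name": "Delivery", "themes": [
            {"id": 101, "name": "Late delivery"},
            {"id": 102, "name": "Damaged packaging"},
        ]},
        {"name": "Support", "themes": [
            {"id": 201, "name": "Slow response"},
        ]},
    ]
}

def find_theme_id(response, theme_name):
    """Scan categories for a theme by name (case-insensitive). The returned
    ID can then be passed as a filter to tools such as fetch_observations
    or fetch_raw_feedback."""
    for category in response["categories"]:
        for theme in category["themes"]:
            if theme["name"].lower() == theme_name.lower():
                return theme["id"]
    return None

theme_id = find_theme_id(themes_response, "late delivery")
```

An agent typically performs this lookup itself after calling list_themes, then scopes its follow-up query to the matching ID.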
list_dimensions
Lists all available dimensions (segments) with their possible values for a project.
Dimensions represent filterable attributes such as country, brand, product line, or data source. This tool shows what segmentation options are available and their known values, enabling AI agents to construct precise filters. For segments with many values, use search_segments to find specific matches.
Example use cases:
Discover which segments are available for filtering (e.g. country, channel, brand)
Check what values exist for a given segment before applying it as a filter
search_segments
Searches for matching values within a specific segment dimension.
This tool performs a case-insensitive partial match against segment values (e.g. searching "united" within the "country" segment to find "United Kingdom"). Use list_dimensions first to discover available segment names, then use this tool to find specific values within them.
Example use cases:
Find the exact segment value for "Germany" within a country dimension
Look up available brand names before filtering feedback by brand
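The matching behavior described above can be sketched in a few lines of Python: a case-insensitive partial match over a segment's values. The country list here is invented for the example; search_segments performs the real lookup server-side.

```python
def partial_match(values, term):
    """Return every value containing the search term, ignoring case."""
    needle = term.lower()
    return [v for v in values if needle in v.lower()]

# Example segment values (invented for illustration).
countries = ["United Kingdom", "United States", "Germany", "France"]
matches = partial_match(countries, "united")
```

Searching "united" returns both "United Kingdom" and "United States", which is why narrowing to an exact value before filtering is useful.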
search_observations
Searches clusters of similar feedback by keyword or theme ID, returning observation names and IDs sorted by volume.
Unlike fetch_observations, this tool accepts structured inputs directly without AI-based filter extraction - making it suitable when you already know the theme ID or exact search term. Returned observation IDs can be used as filters in fetch_raw_feedback to retrieve specific verbatim feedback.
Example use cases:
Find observations related to a specific keyword like "refund" or "login"
Get observation IDs for a known theme to use as filters in deeper analysis
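The chaining step described above can be sketched as follows. The search result shape, observation names, and the "observation_ids" argument name are assumptions for illustration; consult the schemas your agent discovers via tools/list.

```python
# Hypothetical search_observations result, sorted by volume server-side.
search_result = [
    {"id": 9001, "name": "Refund delays", "volume": 340},
    {"id": 9002, "name": "Refund denied", "volume": 120},
]

# Take the highest-volume observation and use its ID as a filter for a
# follow-up fetch_raw_feedback call (argument names are assumed).
top = max(search_result, key=lambda o: o["volume"])
raw_feedback_args = {"observation_ids": [top["id"]], "limit": 20}
```

The agent would pass `raw_feedback_args` to fetch_raw_feedback to pull the verbatim feedback behind the largest cluster.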
How The Chattermill MCP Server Works
At a high level, the integration works as follows:
A user asks a question in an MCP-compatible AI agent (such as Claude Desktop, Cursor, Codex, or a custom agent).
The AI agent analyzes the request and determines how to answer it, selecting from available tools, including the Chattermill MCP Server.
When customer feedback data is needed, the agent invokes the appropriate Chattermill MCP tool with a natural-language query.
The Chattermill MCP Server securely executes the request on Chattermill’s infrastructure and retrieves the relevant data.
Structured results are returned to the AI agent.
The agent may use these results directly or combine them with other tools and additional queries to generate a complete response.
This process allows the AI agent to retrieve accurate, structured insights from Chattermill while maintaining Chattermill’s existing data permissions and security controls.
Security and Permissions
The Chattermill MCP Server operates within Chattermill’s existing security and permission framework:
Chattermill roles and permissions apply
Users can only access data they already have permission to view
Users explicitly choose to connect their AI agent to Chattermill
All communication occurs over HTTPS
Data analysis runs on Chattermill infrastructure
The MCP Server does not grant access to any data beyond what a user can already view in Chattermill.
Example Prompts
The following example prompts can be used to confirm that your AI agent is correctly connected to the Chattermill MCP Server.
Summarize feedback
Example: Summarize what customers are saying about the checkout experience.
Explore themes and patterns
Example: Find feedback clusters related to a new feature.
Answer quantitative questions
Example: What was our NPS last year compared to the previous year?
Analyze trends over time
Example: Which customer issues increased in Germany this week versus last week?
How to Configure The Chattermill MCP Server
To use the Chattermill MCP Server, you will need:
A Chattermill API Key
You can generate an API key from Settings → API in the Chattermill application. This key is used to authenticate your AI agent.
An MCP-Compatible AI Agent
You will need an AI assistant or agent that supports MCP, such as:
Client | Description
Claude Desktop | Anthropic's desktop AI agent (macOS/Windows)
Claude Code | Anthropic's CLI AI agent for developers
Cursor | AI-powered code editor with MCP support
Codex | OpenAI’s command-line AI agent that supports MCP servers
Gemini CLI | Google’s command-line AI agent
Manus | AI agent with MCP support
OpenClaw | AI agent with MCP support
Custom Agents | Any custom AI agent built by your team
Configuration steps vary slightly depending on the agent you are using. Follow the steps below to connect the Chattermill MCP Server to your AI agent.
Authentication is performed using your Chattermill API key, which is passed as a bearer token when configuring your agent. This allows the Chattermill MCP Server to securely authenticate requests and ensures that access is limited to the data permitted by your Chattermill account.
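As a concrete illustration of the authenticated request shape, here is a minimal Python sketch using the standard library. It builds (but does not send) a POST to the Chattermill MCP endpoint carrying the API key as a bearer token; the JSON-RPC body is a tools/list request from the MCP spec.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # generate under Settings -> API in Chattermill

# Minimal MCP "tools/list" request body (JSON-RPC 2.0).
body = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "tools/list"}).encode()

request = urllib.request.Request(
    "https://app.chattermill.com/mcp",
    data=body,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; omitted here so the
# sketch stays side-effect free.
```

Your AI agent constructs the equivalent request for you once the configuration steps below are complete.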
Setup Instructions
Claude Desktop
Please note that MCP servers are supported in the Claude Desktop app, not in the web version of Claude.
Before configuring the Chattermill MCP Server, ensure Node.js (version 16 or later) is installed. This is required to run the MCP server using the npx command. You can download Node.js from: https://nodejs.org
To set up the Chattermill MCP in Claude Desktop, follow the steps below:
Download and install Claude Desktop from claude.ai/download
Open Claude Desktop and sign in
Open Settings → Developer
Click Edit Config
Replace the file contents with:
{
  "mcpServers": {
    "chattermill": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://app.chattermill.com/mcp",
        "--transport",
        "http",
        "--header",
        "Authorization: Bearer YOUR_API_KEY"
      ]
    }
  }
}
6. Replace YOUR_API_KEY with your Chattermill API key
7. Save the file
8. Fully quit and restart Claude Desktop
9. Confirm Chattermill appears in Search and Tools
10. Set the MCP server to Always Allow to avoid repeated permission prompts (optional)
Claude Code
To set up the Chattermill MCP in Claude Code, follow the steps below:
Download and install Claude Code
Once installed, run the following command to configure the Chattermill MCP Server:
claude mcp add chattermill https://app.chattermill.com/mcp \
  --transport http \
  --scope user \
  --header 'Authorization: Bearer YOUR_API_KEY'
3. Verify the server is connected:
claude mcp list
4. Start Claude Code:
claude
5. Run /mcp to confirm the server status
Cursor
To set up the Chattermill MCP in Cursor, follow the steps below:
Open Cursor
Go to Settings
Search for MCP
Click Edit in settings.json
Add the following configuration:
{
  "mcpServers": {
    "chattermill": {
      "url": "https://app.chattermill.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
6. Replace YOUR_API_KEY with your Chattermill API key
7. Save the file
8. Restart Cursor
Codex
To set up the Chattermill MCP in Codex, follow the steps below:
1. Open the Codex config file located at:
~/.codex/config.toml
If the file does not exist, create it.
2. Add the following configuration to the file:
[mcp_servers.chattermill]
url = "https://app.chattermill.com/mcp"
bearer_token_env_var = "CHATTERMILL_API_KEY"
3. Set the environment variable CHATTERMILL_API_KEY with your Chattermill API key.
Example for Mac/Linux:
export CHATTERMILL_API_KEY="YOUR_API_KEY"
Windows (PowerShell):
setx CHATTERMILL_API_KEY "YOUR_API_KEY"
Note: setx persists the variable for new sessions only, so open a new terminal before starting Codex.
Replace YOUR_API_KEY with your Chattermill API key.
4. Restart Codex so it loads the MCP configuration.
5. Once configured, Codex should be able to access the Chattermill MCP Server.
You can verify this by running:
/mcp
The Chattermill server should appear in the list of available MCP servers.
Gemini CLI
To set up the Chattermill MCP in Gemini CLI, follow the steps below:
Install Gemini CLI:
npm install -g @google/gemini-cli
2. Verify installation:
gemini --version
3. Add the MCP server:
gemini mcp add chattermill https://app.chattermill.com/mcp \
  --transport http \
  --scope user \
  --header "Authorization: Bearer YOUR_API_KEY"
4. Verify:
gemini mcp list
Manus
To set up the Chattermill MCP in Manus, follow the steps below:
Visit manus.im and sign in
Navigate to Settings → Connectors
Add a Custom MCP server
Configure:
Server name: Chattermill
Transport type: HTTP
Server URL: https://app.chattermill.com/mcp
Header:
Name: Authorization
Value: Bearer YOUR_API_KEY
Save the configuration
OpenClaw
To set up the Chattermill MCP in OpenClaw, follow the steps below:
1. Create the MCP configuration directory. If the configuration directory does not already exist, create it:
mkdir -p ~/.openclaw/workspace/config
2. Open or create the following file:
~/.openclaw/workspace/config/mcporter.json
3. Add the Chattermill MCP Server. Add the following configuration to the file. Replace YOUR_API_KEY with your Chattermill API key:
{
  "mcpServers": {
    "chattermill": {
      "baseUrl": "https://app.chattermill.com/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  },
  "imports": []
}
4. Restart OpenClaw so the new MCP configuration is loaded:
openclaw gateway restart
5. Run the following command to confirm the Chattermill MCP Server is registered:
mcporter list
To view the tools available from the Chattermill MCP Server:
mcporter list chattermill --schema
If the configuration is correct, Chattermill will appear in the list of available MCP servers.
Confirming the Configuration
Once configured, try a simple query in your AI agent, for example:
Summarize what customers are saying about the checkout experience.
If the MCP server is configured correctly, you should receive a response based on your Chattermill data.
FAQ and Troubleshooting
These answers cover common questions about configuring and using the Chattermill MCP Server. For agent-specific configuration steps, refer to your AI agent’s documentation.
Q: Do I need a paid AI agent to use the Chattermill MCP Server?
Not necessarily. Many MCP-compatible AI agents support MCP servers, including free and paid plans.
Availability of MCP features depends on the specific AI agent you are using. Refer to their documentation for details on MCP support and plan requirements.
Q: What data can the AI agent access through the Chattermill MCP Server?
The AI agent can only access data you already have permission to view in Chattermill.
This includes customer feedback data, such as Highlights (summaries), Observations (feedback clusters), quantitative insights (such as NPS and sentiment), and individual feedback entries.
The MCP integration respects existing Chattermill roles, permissions, and data access rules. It does not grant access to additional data.
Q: Where does data processing happen?
Data processing happens across both the AI agent and Chattermill’s infrastructure. The AI agent interprets the user’s request, selects the appropriate MCP tool, and processes the results returned by Chattermill.
When customer feedback data is required, the Chattermill MCP Server securely executes the request on Chattermill’s servers and returns structured results to the AI agent. This ensures that customer feedback data remains securely stored and processed within Chattermill’s infrastructure, while allowing the AI agent to retrieve and present insights.
Q: Can I use the Chattermill MCP Server across multiple Chattermill projects?
Yes. By default, the MCP Server uses the project you most recently visited in the Chattermill application.
If you work across multiple projects, you can specify a project ID in your prompt to control which project is queried.
Q: Do I need to host or run the Chattermill MCP Server myself?
No. The Chattermill MCP Server is fully hosted and managed by Chattermill. You only need to configure your AI agent to use it.
Q: Which AI agents are supported?
The Chattermill MCP Server works with any MCP-compatible AI agent.
This includes Claude Desktop, Cursor, Codex, Gemini CLI, Manus, and custom agents built by your team.
Q: Why don’t I see Chattermill insights in my AI agent?
Ensure the MCP configuration has been added correctly and that your API key is valid. After updating your configuration, restart your AI agent.
If the issue persists, refer to your AI agent’s documentation or contact Chattermill Support.
Q: Can I use the Chattermill MCP Server behind a corporate firewall?
Yes, provided your network allows HTTPS access to Chattermill’s API.
Some enterprise environments may require network or proxy configuration. Contact your IT team if access is restricted.
Q: Where can I find my Chattermill API key?
You can generate and manage API keys in your Chattermill account under Settings → API. API keys allow your AI agent to securely access Chattermill insights.
Q: Where can I get help if something isn’t working?
If you are experiencing issues, first verify your API key and MCP configuration. For agent-specific configuration questions, refer to your AI agent’s documentation.
If you need further assistance, contact Chattermill support.
Q: Will more agents be supported in the future?
As more AI agents adopt MCP, they will be able to access Chattermill through the MCP Server.
In some cases, MCP may only be available in certain versions of a product. For example, Gemini CLI supports MCP, while the consumer Gemini web app does not currently support MCP connections.
Refer to the documentation for your chosen AI agent to confirm MCP compatibility and requirements.
What’s Next?
The Chattermill MCP Server is the foundation for more advanced, agentic workflows. Additional capabilities and supported agents will be added over time.

