Why Your Intercom AI Gives Every Customer the Same Answer
You deployed Intercom Fin. It handles tier-one tickets. Response time is down. Customer satisfaction scores held steady, more or less. Then a customer escalated.
They had asked the AI about migrating their data. The AI gave a correct answer based on your documentation. The problem: this customer is on your enterprise plan, in the middle of a custom migration, with a dedicated implementation engineer they have been talking to for six weeks. The AI had no idea. It answered the generic question as if it were a generic customer.
The customer did not feel known. They felt handled.
What the AI Actually Knows
Intercom Fin knows what is in your help center and your conversation history. That is a meaningful amount of context. But it does not know who the company is beyond the contact record. It does not know their funding stage, their team size, their technology stack, their competitive position, whether they are in expansion talks or at risk of churn. It does not know whether the person asking is a power user, a new hire, or the CFO evaluating whether to renew.
So when a high-value account asks a question, the AI answers the question. It does not answer the question for that specific company, in that specific context, at that specific point in their relationship with you.
This is not a failure of the AI. It is a failure of the intelligence layer available to it. You would not expect a senior account manager to give a great answer without knowing who they were talking to. The AI is in the same position. It is answering blind.
Ask yourself this: how many conversations in Intercom right now are getting generic answers because the AI does not know who is on the other end? And what does it cost when those conversations are with your largest accounts?
What Changes When the AI Knows the Customer
A support agent that knows the customer's company context can do things a generic bot cannot.
It can recognise that the customer asking about API rate limits is on the same enterprise plan as three other companies that asked the same question last month, all of whom were hitting a specific integration pattern. It can surface that context to the human agent as a handoff note. It can route urgently based on account value rather than ticket volume. It can personalise tone based on whether the customer is technical or operational.
None of this requires the AI to be smarter. It requires the AI to have better information before it responds.
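The routing idea above can be sketched in a few lines. The field names and thresholds here are illustrative assumptions for the example, not Forage's or Intercom's actual API:

```javascript
// Illustrative sketch: route by account value and health, not ticket volume.
// Field names and thresholds are assumptions, not a real API.
function routeConversation(account) {
  // High-value or at-risk accounts skip the AI-only path
  if (account.annualValue >= 50000 || account.healthScore < 70) {
    return { queue: 'priority-human', note: buildHandoffNote(account) };
  }
  // Technical contacts get the technical tone profile
  const tone = account.contactRole === 'engineer' ? 'technical' : 'operational';
  return { queue: 'ai-first', tone };
}

function buildHandoffNote(account) {
  return `${account.name}: ${account.planTier} plan, health ${account.healthScore}. ` +
    `Recent signals: ${account.recentSignals.join('; ')}`;
}
```

The point of the handoff note is that the human agent inherits the context the AI already had, rather than starting from the contact record.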
The Forage + Claude + Intercom Integration
Forage is an MCP server. Claude connects to it via the Model Context Protocol. When a new conversation opens in Intercom, the agent queries Forage before generating the first response. Here is the pattern.
Step 1: Identify the company on inbound
// Extract the company domain from the contact's email address
const domain = contact.email.split('@')[1]

// Pull the full company profile from the Forage knowledge graph
const company = await forage.get_company_info({ domain })
// Returns: company name, size, funding stage,
// tech stack, key contacts, recent signals

Step 2: Enrich with relationship signals
const knowledge = await forage.query_knowledge({
  query: `What do we know about ${company.name}`,
  entity_types: ["company", "person", "signal"]
})
// Returns: prior interactions, enriched profile,
// signals written from previous sessions

Step 3: Inject context into Claude's system prompt
const systemContext = `
Customer company: ${company.name}
Plan tier: ${crm.plan}
Funding stage: ${company.funding_stage}
Team size: ${company.headcount}
Tech stack: ${company.tech_stack.join(', ')}
Account health: ${crm.health_score}
Recent signals: ${company.recent_signals}
Tailor your response to this context.
If account health is below 70, flag for human handoff.
`

Claude now answers with full context. The response is no longer generic because the input is no longer generic.
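Wiring that context into the model is straightforward: the Anthropic SDK takes a `system` parameter on each request. A minimal sketch, assembled as a pure function so it is easy to test; the model name is a placeholder and `customerMessage` stands in for the Intercom message body:

```javascript
// Sketch: assemble a Claude request carrying the Forage-derived context.
// `systemContext` is the template string built above.
function buildClaudeRequest(systemContext, customerMessage) {
  return {
    model: 'claude-sonnet-4-5', // placeholder: use whichever model you deploy
    max_tokens: 1024,
    system: systemContext,      // the enriched company context
    messages: [{ role: 'user', content: customerMessage }],
  };
}

// Then, with the @anthropic-ai/sdk client:
// const response = await anthropic.messages.create(buildClaudeRequest(ctx, msg));
```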
Step 4: Write new signals back to graph
Every conversation is a signal. When a customer asks about a specific feature, that goes into the knowledge graph as a signal on that company. When they escalate, that is a signal. When they praise a specific workflow, that is a signal. The next conversation starts with that history already in context.
The AI that handles the hundredth conversation with a customer is dramatically more informed than the one that handled the first, because every conversation has been feeding the graph.
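The write-back step might look like this. Note that `forage.write_signal` is a hypothetical helper standing in for whatever write tool the Forage MCP server actually exposes; the signal shape is an assumption for illustration:

```javascript
// Sketch of the write-back step. `forage.write_signal` is hypothetical;
// the signal shape below is an assumption, not Forage's schema.
function conversationToSignal(conversation, company) {
  return {
    company: company.name,
    type: conversation.escalated ? 'escalation' : 'feature_interest',
    topic: conversation.topic,          // e.g. "API rate limits"
    source: 'intercom',
    observedAt: new Date().toISOString(),
  };
}

// await forage.write_signal(conversationToSignal(conv, company));
```

Because the next session's `query_knowledge` call reads these signals back, each conversation compounds: the graph is both the input and the output of every session.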
n8n automation for Intercom webhooks:
// Intercom webhook triggers on new conversation
// n8n MCP node calls Forage for company context
// Claude generates contextual response
// Response posted back via Intercom API
POST https://api.intercom.io/conversations/{id}/reply
{
  type: "admin",
  message_type: "comment",
  body: claude_response_with_context
}

The Account You Cannot Afford to Get Wrong
Most Intercom conversations are low-stakes. A generic answer is fine. Some are not. The customer evaluating renewal. The new champion at a key account trying to understand what they bought. The technical lead at a prospect who just opened a free trial.
The AI cannot know which conversation matters most unless it knows who is in the conversation. Forage provides that context. Claude uses it. Every response is informed rather than generic.
£0.0025 per enrichment call. One Apify token. No infrastructure to manage.
Your support AI is already answering. The question is whether it knows who it is talking to. Visit useforage.xyz to give it that context.