Contents
- The Invisible Failure: The Cost of Not Being Discovered
- What Is search_miss? — KanseiLink's Hidden Failure Mode
- Case Study: Zapier — A-Grade Rating, 11% Success Rate
- Pattern Analysis: Other Services Suffering from search_miss
- Why It Happens — Dissecting the Semantic Gap
- High-Success Services: What They Do Differently
- AEO Prescription: 3 Steps to Become Discoverable to Agents
- FAQ
The Invisible Failure: The Cost of Not Being Discovered
When an AI agent executes a task using MCP tools, the very first step is finding the right service. When a user instructs an agent to "sync kintone data to Google Sheets," the agent searches for an appropriate tool, selects it, and calls it. The API call — and any subsequent error — is visible and measurable.
But an invisible failure mode precedes all of that: the agent looks for the right service and cannot find it. It then proceeds as if the service doesn't exist, picks an alternative, or abandons the task. From the service provider's perspective, none of this appears in server logs. It is pure, invisible opportunity loss.
KanseiLink classifies and tracks this as search_miss. Analyzing April 2026 data reveals that search_miss is a more pervasive problem than API errors or authentication failures — and far harder to detect without an intermediary intelligence layer.
KanseiLink Real-World Data Summary (April 2026)
[Summary panel: per-service overall success rates and the share of failures attributable to search_miss; the figures appear in the comparison table below.]
What Is search_miss? — KanseiLink's Hidden Failure Mode
KanseiLink tracks several error types: api_error (API call failure), auth_expired (token expiry), invalid_input (malformed request). These all represent "failed after the service was called." They show up in logs. They're measurable.
search_miss represents failure before a service is ever called. Specifically, it's logged when an agent uses KanseiLink's search_services tool with a natural language query and the intended service does not appear in the top 3 results. The agent moves on, and no API call is ever made.
Agent calls search_services → target service absent from top 3 → agent selects alternative or aborts → KanseiLink logs the miss with the query text. Because no API call is ever attempted, the service's own infrastructure logs show nothing. Only an intermediary layer like KanseiLink can surface this failure.
This "nothing in server logs" characteristic is exactly what makes discoverability failure so dangerous. Monitoring your API success rate will never reveal it. You can have a flawlessly designed MCP server and still be invisible to 78% of the agents trying to reach you.
Case Study: Zapier — A-Grade Rating, 11% Success Rate
Zapier is one of the world's largest workflow automation platforms: 7,000+ app integrations, enterprise adoption at scale, and an official MCP server (zapier.com/mcp). KanseiLink assigns it an A-grade AEO rating.
Yet KanseiLink's real-world measurement shows Zapier's call success rate at 11% (n=9), with 7 of its 8 failures classified as search_miss.
"Zapier succeeds on 11% of calls (n=9). median latency 42ms. most common errors: search_miss (7x), contradicted_workaround (1x). common workaround: 'Query「データ連携・API統合ツール」did not find zapier in top 3.'"
The data shows that the Zapier brand does not semantically match the intent queries agents use in the Japanese market. Two confirmed search_miss patterns illustrate the problem:
| Agent Query | Expected Service | Top 3 Results | Verification |
|---|---|---|---|
| "データ連携・API統合ツール" (data integration tool) | Zapier | Zapier not included | ✅ confirmed (n=4) |
| "kintoneのデータをGoogleスプレッドシートに同期したい" (sync kintone to Google Sheets) | Zapier | kintone, google-workspace, google-drive | ✅ verified (n=3) |
The second pattern is particularly instructive. When an agent searches for "sync kintone to Google Sheets," it gets kintone and Google Workspace as direct integrations — bypassing Zapier entirely. The agent picks the direct path; Zapier's value as a middleware layer is never considered. Whether this is the "right" outcome is debatable, but from Zapier's perspective it's a lost connection.
Zapier MCP is a beta feature. Since September 17, 2025, each MCP tool call consumes 2 Zapier tasks — a cost model change from the previous 300-call monthly allowance (Zapier official blog ✅ verified). Additionally, only Streamable HTTP and SSE transports are supported. Combining a low success rate with this cost structure makes Zapier MCP a service that requires careful evaluation before production deployment.
Pattern Analysis: Other Services Suffering from search_miss
Sansan: Japan's Business Card Giant, Lost in CRM Context
Sansan is Japan's leading business card and sales intelligence platform. Yet its success rate sits at 61% (n=36), with 5 recorded search_misses. The confirmed pattern: agents searching "取引先の連絡先を一覧で出したい" (list contact info for business partners) don't get Sansan in the top 3 (confirmed, n=4).
The underlying dynamic: Sansan's core value is contact management sourced from business cards, but agents searching in a CRM context for "contact lists" surface Salesforce and HubSpot first. Sansan's distinctive positioning (relationship data from business card origins) doesn't map to the way agents describe the CRM use case.
Chatwork: Discoverable as a Communication Tool, But Reliability Lags
Chatwork shows 66% success (n=123 — statistically meaningful) with 10 search_misses recorded. For Chatwork, api_error (24 occurrences) actually dominates over search_miss — it's a different problem set. The search_miss workaround logged is "Try category filter or Japanese keywords," implying English-language query discoverability is weak for this Japan-native service.
search_miss Rate Comparison (KanseiLink April 2026)
| Service | Success Rate | Total Calls | search_miss Count | search_miss Rate |
|---|---|---|---|---|
| Zapier | 11% | 9 | 7 | 78% |
| Sansan | 61% | 36 | 5 | 14% |
| Chatwork | 66% | 123 | 10 | 8% |
| Slack | 91% | 113 | 0 | 0% |
| freee | 90% | 98+ | 0 | 0% |
Why It Happens — Dissecting the Semantic Gap
Agents don't search by brand name. Agents search by intent — what they want to accomplish.
This is the core of the discoverability problem. While a human user Googles "Zapier," an AI agent queries "data integration tool," "automate workflow and send notification," or "sync kintone with spreadsheet." The semantic distance between the brand identity and agent intent patterns determines discoverability — not brand recognition.
Services with persistent search_miss problems share these patterns:
- English brand name disconnected from Japanese functional descriptions (Zapier: "workflow automation" ≠ Japanese "データ連携ツール")
- Overly generic capability (services that "do everything" match no specific use-case query strongly)
- Niche function searched in a broader category context (Sansan's "business cards" searched under "CRM contacts")
- English-optimized metadata in a Japanese-primary search market (Chatwork)
KanseiLink's search_services tool uses vector similarity search. Service metadata (category, feature description, use case examples) is ranked against the semantic distance from the agent's query. Services with richer, intent-aligned functional descriptions match a broader range of queries and rank higher consistently.
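A toy illustration of that ranking mechanic, using hand-made three-dimensional vectors in place of real learned text embeddings. The `search_services` signature and the per-service vectors here are assumptions for illustration only.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: higher means semantically closer."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" of service metadata. A richer, intent-aligned description
# would move a service's vector closer to more of the queries agents issue.
services = {
    "slack": [0.9, 0.1, 0.2],
    "zapier": [0.2, 0.8, 0.1],
    "freee": [0.1, 0.2, 0.9],
}

def search_services(query_vec: list[float], k: int = 3) -> list[str]:
    """Rank services by semantic similarity to the agent's query vector."""
    ranked = sorted(services, key=lambda s: cosine(query_vec, services[s]), reverse=True)
    return ranked[:k]

# A communication-flavored query lands closest to slack's vector.
print(search_services([0.85, 0.15, 0.25]))  # ['slack', 'zapier', 'freee']
```

In this model a service never "loses" to a competitor by brand strength; it loses by sitting far from the query in embedding space, which is exactly the semantic gap described above.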
High-Success Services: What They Do Differently
Slack (91% success, 0 search_miss) and freee (90% success, 0 search_miss) are essentially impossible to miss. What's their advantage?
Slack has achieved such deep de facto standard status that it surfaces naturally for any communication-adjacent query. Claude agents describe Slack as "the stdout of the agent economy" — and with 82 of 188 KanseiLink recipes using Slack as a notification or output layer, that's not hyperbole.
"Slack is the single most agent-ready service in the ecosystem. 82 out of 188 recipes use it as a notification/output layer. The official MCP server exists on npm, API docs are thorough, and the 91% success rate across 112 real calls confirms reliability. Slack is effectively the stdout of the agent economy."
freee holds AAA-grade AEO status and shows strong alignment between its metadata and the Japanese accounting, expense management, and invoicing intent queries that agents commonly use. "精算したい" (process expense), "請求書を作りたい" (create invoice), "勘定科目を確認したい" (check account code) — freee surfaces for all of them.
The common thread: functional descriptions cover the intent patterns agents actually use. Not brand recognition — semantic richness of capability descriptions.
AEO Prescription: 3 Steps to Become Discoverable to Agents
1. Optimize Feature Descriptions for Japanese Intent Patterns
Enrich MCP server metadata (description, category, tags) to cover the intent patterns agents use. Not "Zapier — workflow automation" but "Connect any two SaaS tools to automatically transfer data, send notifications, and orchestrate multi-step workflows without code." Use verb-based functional descriptions. In a Japanese-primary market, Japanese-language coverage is critical.
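As a hypothetical before/after, expressed as Python dicts. The field names (`description`, `category`, `tags`) mirror the article's terminology; the exact schema KanseiLink indexes is an assumption.

```python
# Before: brand-centric, noun-based, English-only (hypothetical metadata).
before = {
    "name": "zapier",
    "description": "Zapier — workflow automation",
    "tags": ["automation"],
}

# After: verb-based functional description with Japanese intent coverage.
after = {
    "name": "zapier",
    "description": (
        "Connect any two SaaS tools to automatically transfer data, "
        "send notifications, and orchestrate multi-step workflows without code. "
        "データ連携・API統合・ワークフロー自動化。"  # Japanese intent terms
    ),
    "category": "integration",
    "tags": ["データ連携", "API統合", "workflow", "notification", "no-code"],
}

# The confirmed miss query "データ連携・API統合ツール" now overlaps
# the description text directly instead of matching nothing.
print("データ連携" in after["description"])  # True
```

The enriched version matches the confirmed miss pattern lexically and semantically, whereas the brand-centric version shares no terms with it at all.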
2. Enrich MCP Tool Schema description Fields
Each MCP tool's description field is the primary signal agents use for tool selection. "Send a message" tells the agent what the tool does. "Send a message to a Slack channel, DM to a specific user, or post a thread reply — use this to notify humans about task completion, deliver reports, or escalate errors that need attention" tells the agent when and why to use it. Intent-matching descriptions dramatically improve selection rates.
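The contrast can be sketched as two tool definitions, loosely following the MCP tool shape (`name`, `description`, `inputSchema`). The schema details here are illustrative, not Slack's actual MCP definition.

```python
# Thin description: tells the agent WHAT, but not WHEN or WHY (hypothetical).
thin = {
    "name": "send_message",
    "description": "Send a message",
}

# Intent-rich description: covers channel/DM/thread variants and the
# situations in which an agent should reach for this tool.
rich = {
    "name": "send_message",
    "description": (
        "Send a message to a Slack channel, DM a specific user, or post a "
        "thread reply. Use this to notify humans about task completion, "
        "deliver reports, or escalate errors that need attention."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "channel": {"type": "string", "description": "Channel or user ID"},
            "text": {"type": "string", "description": "Message body"},
        },
        "required": ["channel", "text"],
    },
}
```

Because agents select tools by matching their current intent against these strings, the rich variant wins on queries like "escalate this error to a human" that the thin variant never matches.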
3. Use KanseiLink AEO Audits to Identify Your Specific Blind Spots
KanseiLink's AEO audit reports show which specific queries agents used when search_miss occurred for your service. This turns a statistical problem ("we have low discoverability") into actionable metadata improvements ("agents search for X but don't find us; add X to our description"). Services completing this audit-and-improve cycle typically see search_miss rates fall by 40%+, with category classification accuracy and Japanese functional description enrichment delivering the highest per-effort impact.
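The aggregation step of such an audit can be sketched as follows. The log fields, sample records, and `top_missed_queries` helper are illustrative assumptions, not KanseiLink's report format.

```python
from collections import Counter

# Hypothetical search_miss log records captured by the intermediary layer.
miss_logs = [
    {"service": "zapier", "query": "データ連携・API統合ツール"},
    {"service": "zapier", "query": "データ連携・API統合ツール"},
    {"service": "zapier", "query": "kintoneのデータをGoogleスプレッドシートに同期したい"},
]

def top_missed_queries(logs: list[dict], service: str) -> list[tuple[str, int]]:
    """Most frequent queries for which `service` failed to surface in the top 3."""
    counts = Counter(log["query"] for log in logs if log["service"] == service)
    return counts.most_common()

# Each line below is a concrete metadata gap to close.
for query, n in top_missed_queries(miss_logs, "zapier"):
    print(f"{n}x  {query}")
```

Each output line is a candidate term to fold into the service's description or tags, which is what makes the audit actionable rather than merely diagnostic.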
FAQ
What exactly is a search_miss?
search_miss is recorded when an AI agent calls KanseiLink's search_services tool with a natural language query and the intended service does not appear in the top 3 results. The agent proceeds to select an alternative or abandon the task. This failure never appears in the service's server logs — only an intermediary layer like KanseiLink can capture it.
Why is Zapier MCP's success rate so low?
KanseiLink data (n=9) shows 11% success. The primary cause is search_miss (7 of 8 failures): agents searching "data integration tool" or "sync kintone to Google Sheets" don't get Zapier in the top 3 results. Zapier's MCP is also still in beta, with each call consuming 2 Zapier tasks since September 2025.
Can discoverability be improved for free?
Improving MCP server metadata and tool description fields costs nothing. A KanseiLink AEO audit to identify specific search_miss patterns is a paid service, but it turns the problem from abstract to actionable. See our pricing page for details.
KanseiLink real-world data cited in this article (success rates, error classifications, search_miss counts) is derived from agent call logs collected by the KanseiLink MCP system. Zapier's n=9 is a small sample; confidence is rated "low" and figures may shift as more data accumulates. Zapier's task consumption policy is referenced from the official Zapier blog (September 2025), verified at time of writing.