# MCP Feedback
TATER's MCP servers conversationally solicit feedback from users during a session and route it to the right place. Negative sentiment auto-files an ADO Issue. All feedback (positive, neutral, negative) is stored for SuperAdmin review.
## How it works
When you work with an AI agent connected to TATER MCP (Claude Desktop, Claude Code, Microsoft 365 Copilot, the in-app AI Assistant), the agent is instructed to ask — once per session, at a natural pause — how the experience is going. Examples it might use:
- "Quick check before I move on — anything about that flow that felt rough?"
- "Was that helpful, or is there a faster way you'd want me to handle it?"
- "Anything I could have done better there?"
Whatever you say, the agent paraphrases faithfully and submits it via the submit_mcp_feedback tool with four fields: sentiment (positive/neutral/negative), comment, context (what you were trying to do), and feature (which area of TATER).
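The tool call can be sketched as a small payload builder. Only the four field names and the three sentiment values come from the description above; the validation logic and sample values are illustrative:

```python
# Illustrative sketch of the submit_mcp_feedback payload.
# Field names (sentiment, comment, context, feature) are from the docs;
# the validator and example values are assumptions.
ALLOWED_SENTIMENTS = {"positive", "neutral", "negative"}

def build_feedback(sentiment: str, comment: str, context: str, feature: str) -> dict:
    """Assemble a feedback payload, rejecting unknown sentiment values."""
    if sentiment not in ALLOWED_SENTIMENTS:
        raise ValueError(f"sentiment must be one of {sorted(ALLOWED_SENTIMENTS)}")
    return {
        "sentiment": sentiment,
        "comment": comment,
        "context": context,
        "feature": feature,
    }

payload = build_feedback(
    "negative", "Export timed out", "Exporting a report", "Reports"
)
```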
## Negative feedback auto-files ADO
If the sentiment is negative, TATER automatically creates an Issue work item in the TATER-Security/App Azure DevOps project so the team sees it on the board. The work item:
- Title is prefixed `[MCP feedback]` with the feature and a snippet of your comment
- Description carries your verbatim comment, the context, the feature, and the org / user metadata
- Tags: `mcp-feedback; user-reported; negative`
- Priority: 2
- State: `To Do`
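The mapping above could be sketched like this. The title prefix, tags, priority, and state come from the list; the dict keys, snippet length, and description layout are illustrative, not the real ADO REST field paths:

```python
def to_ado_work_item(feedback: dict) -> dict:
    """Map a negative feedback entry to ADO Issue fields per the rules above.
    Keys here are illustrative placeholders, not Azure DevOps API field names."""
    snippet = feedback["comment"][:80]  # short comment snippet for the title
    return {
        "title": f"[MCP feedback] {feedback['feature']}: {snippet}",
        "description": (
            f"Comment: {feedback['comment']}\n"
            f"Context: {feedback['context']}\n"
            f"Feature: {feedback['feature']}\n"
            f"Org: {feedback['orgId']} / User: {feedback['userEmail']}"
        ),
        "tags": "mcp-feedback; user-reported; negative",
        "priority": 2,
        "state": "To Do",
    }
```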
The agent will tell you the work item number and that the team will see it. If ADO is unavailable for any reason, the feedback still persists for SuperAdmin review.
## Every submission is stored
Positive, neutral, and negative submissions all land in a Cosmos DB container called `McpFeedback`, partition-keyed by tenant. Each entry records:
- Sentiment, comment, context, and feature
- Submitting user (OID, email, display name)
- Org id and tenant id
- Channel (`via`: copilot / claude / mcp / web / agent)
- ADO work item id and URL (only when auto-filed)
- Submission timestamp
## SuperAdmin review page
SuperAdmins see a dedicated MCP Feedback nav item under Administration. The page shows:
- KPI strip: Total submissions, Positive, Neutral, Negative, Filed to ADO
- Filters: sentiment dropdown, free-text search across comment / feature / context / user
- Cards color-coded by sentiment border, with comment, context, user, org, channel, timestamp, and a deep link to the auto-filed ADO work item
Use the SuperAdmin view to spot patterns — recurring feature complaints, regressions tied to a release, or wins to share back with the team — and to validate that ADO follow-through is happening.
## Rules the agent follows
- Once per session unless the user volunteers more — no spamming
- Wait for clean pauses; never interrupt mid-task
- Paraphrase faithfully — never fabricate feedback the user didn't actually give
- Tell the user when negative feedback was filed to ADO
## API surface
- `POST /api/mcp-feedback` — submit (used by stdio MCP; accepts the same shape as the MCP tool)
- `GET /api/mcp-feedback` — SuperAdmin-only list across all tenants
- MCP tools: `submit_mcp_feedback` (in both HTTP and stdio MCP servers)
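A minimal client call against the submit endpoint might be built like this. The path and body shape come from above; the base URL, bearer token, and sample values are placeholders:

```python
import json
import urllib.request

def make_submit_request(base_url: str, token: str, feedback: dict) -> urllib.request.Request:
    """Build (but don't send) the POST /api/mcp-feedback request.
    base_url and token are placeholder assumptions."""
    body = json.dumps(feedback).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/api/mcp-feedback",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )

req = make_submit_request(
    "https://tater.example.com", "TOKEN",
    {"sentiment": "positive", "comment": "Smooth flow",
     "context": "Generating a summary", "feature": "AI Assistant"},
)
```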