AI Compliance Analyst
A conversational AI co-worker built into TATER that reads your live scan data and can autonomously create risk acceptances, assign controls, document evidence, and trigger remediations — all from a chat window in the sidebar.
What Is the AI Compliance Analyst?
The AI Compliance Analyst is a full agentic AI assistant embedded directly in the TATER application, accessible from AI Assistant in the sidebar navigation. Unlike a general-purpose chatbot, it has direct access to your organization's compliance data through a set of purpose-built tools it can call autonomously.
You can ask it to do real compliance work, not just answer questions:
- "What's our overall compliance score right now, and which framework is performing worst?"
- "Find all Critical and High failing controls in Entra ID and assign them to John with a due date of next Friday."
- "We're failing the legacy authentication control but need it on for one service account. Create a risk acceptance."
- "Document that we verified DMARC enforcement manually in Exchange Admin Center yesterday."
- "What does the DMARC control require and can you trigger the automated fix?"
The analyst understands TATER's frameworks (CIS Microsoft 365 Foundations Benchmark, CISA SCuBA, DISA STIGs), product codes (ENT, EXO, DEF, SPO, SPT, PUR, PBI, PPL), and control status values (Pass, Fail, Manual Review, Skip, Override). It knows that Teams SPT_* controls often return Skip because those checks require delegated authentication unavailable in automated contexts, so it will not flag those as problems unless you ask.
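For reference, those status values and product codes form a small fixed vocabulary. The TypeScript sketch below is illustrative only (the type names are assumptions, not TATER source code), but it captures the values the analyst reasons over:

```typescript
// Illustrative sketch; type names are assumptions, not TATER source code.
type ControlStatus = "Pass" | "Fail" | "Manual Review" | "Skip" | "Override";

// ENT = Entra ID, EXO = Exchange Online, DEF = Defender, SPT = Teams;
// the remaining codes cover the other supported products.
type ProductCode = "ENT" | "EXO" | "DEF" | "SPO" | "SPT" | "PUR" | "PBI" | "PPL";
```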
The AI Analyst described here runs inside the TATER web app and is the best choice for day-to-day compliance work. If you prefer to work from Claude Desktop or claude.ai and want to connect your TATER data to those interfaces, see the Claude MCP Setup guide instead.
Setting Up the API Key
The AI Analyst requires an Anthropic API key. The key is stored encrypted at rest (AES-256-GCM) in your organization's settings.
Get an Anthropic API key
If you do not have one, sign up at console.anthropic.com and create an API key. Keys start with sk-ant-. Keep it in a safe place — you will only see it once in the Anthropic console.
Open Settings → Features
In the TATER application, navigate to Settings in the sidebar and select Features.
Find the AI Compliance Analyst card
Locate the AI Compliance Analyst card at the top of the Features page. Click the key icon or the field label to expand the API key entry form.
Enter your API key and save
Paste your Anthropic API key into the field and click Save Key. The key is encrypted immediately before storage. Only a masked version is shown after saving. The AI Analyst is active as soon as the key is saved — no page reload needed.
TATER resolves the Anthropic key in priority order: org-level key (Settings → Features for the current org) → tenant-level key (Settings → Features for the primary org) → ANTHROPIC_API_KEY environment variable on the Function App. The org-level key always wins if set. MSPs can configure a tenant-level key that applies across all client organizations that have not set their own key.
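The fallback chain is easy to picture in code. A minimal sketch of the documented resolution order, assuming hypothetical settings objects (these names are illustrative, not TATER's actual internals):

```typescript
// Hypothetical sketch of the documented key resolution order.
// orgSettings / tenantSettings are illustrative names, not TATER's API.
function resolveAnthropicKey(
  orgSettings: { anthropicKey?: string },
  tenantSettings: { anthropicKey?: string },
): string | undefined {
  return (
    orgSettings.anthropicKey ??      // 1. org-level key always wins if set
    tenantSettings.anthropicKey ??   // 2. tenant-level key (primary org)
    process.env.ANTHROPIC_API_KEY    // 3. Function App environment variable
  );
}
```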
The 9 Compliance Tools
The AI Analyst can call nine tools autonomously during a conversation. You never need to invoke these manually — the AI decides when and how to use them based on your request. Understanding what each tool does helps you frame requests effectively.
| Tool | What It Does | When the AI Uses It |
|---|---|---|
| `get_recent_scans` | Retrieves recent compliance scans (up to 10), including scan date, type, pass/fail counts, and compliance percentage. | Whenever you ask about current status, scores, or want context before taking action. |
| `get_scan_results` | Gets control-by-control results from a specific scan. Supports filtering by status (Fail, Manual, Pass) and severity (Critical, High, Medium, Low). Returns up to 80 controls per call. | When you ask which controls are failing, want to review a specific area, or before creating assignments or risk acceptances. |
| `get_overrides` | Retrieves existing risk acceptances (overrides) for the organization. Used to check whether a risk acceptance already exists before creating a duplicate. | Before creating any risk acceptance, to avoid duplicates and show you existing justifications. |
| `create_risk_acceptance` | Creates a formal risk acceptance (override) with a professional 50–150 word justification, expiry date, and optional compensating controls. Recorded in the Overrides container and the audit log. | When you explicitly request a risk acceptance, or when the AI has confirmed the action via `ask_user`. |
| `add_evidence_comment` | Adds an auditable evidence comment to a specific control. The comment is tagged with `source: 'ai-analyst'` in the audit trail and appears in the control's Evidence tab. | When you describe manual verification steps you have completed, or ask the AI to document something you tell it. |
| `get_people` | Lists team members in the People directory for the organization, including name, title, and email. Used to match names to People records for assignment. | Before creating assignments, to look up valid People records. |
| `assign_control` | Assigns a control to a person with a due date and optional notes. Creates an Assignment record visible on the Assignments page and in the assignee's task list. | When you ask the AI to delegate control remediation to a team member. |
| `ask_user` | Pauses the agentic loop and presents a question to you in the chat interface. The AI waits for your typed response before continuing. Used to confirm high-impact actions, gather context the AI does not have, or verify intent before bulk operations. | Before creating more than five risk acceptances at once, before remediating a Critical or High severity finding, or anytime the AI needs clarification. |
| `trigger_remediation` | Triggers an automated remediation script for a control that has a remediation available in the catalog. Requires the organization to have remediation enabled and a webhook URL configured. The AI always uses `ask_user` to confirm before triggering. | When you explicitly ask the AI to fix a control, and only after confirmation. |
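Under the hood, each of these is exposed to the model as an Anthropic tool definition: a name, a description, and a JSON Schema for the inputs. TATER's exact schemas are internal; the sketch below shows what a definition like `get_scan_results` could plausibly look like, with parameter names inferred from the table above:

```typescript
// Plausible shape of one tool definition in the Anthropic Messages API
// "tools" format. Parameter names are assumptions, not TATER's actual schema.
const getScanResults = {
  name: "get_scan_results",
  description:
    "Get control-by-control results from a specific scan, with optional " +
    "status and severity filters. Returns up to 80 controls per call.",
  input_schema: {
    type: "object",
    properties: {
      scanId: { type: "string" },
      status: { type: "string", enum: ["Fail", "Manual", "Pass"] },
      severity: { type: "string", enum: ["Critical", "High", "Medium", "Low"] },
    },
    required: ["scanId"],
  },
};
```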
The Agentic Loop
The AI Analyst does not respond in a single shot. It runs an agentic loop that can make up to 12 tool calls before returning a final answer. This allows it to chain multiple operations together seamlessly.
For example, when you ask it to assign all failing Entra ID controls to a team member, it will:
- Call `get_recent_scans` to find the latest scan
- Call `get_scan_results` filtered to `status=Fail` and `product=ENT`
- Call `get_people` to find the team member's People record
- Call `ask_user` to confirm: "I found 7 failing Entra ID controls. I'll assign them all to John Smith with a due date of next Friday. Shall I proceed?"
- After you confirm: call `assign_control` seven times, once per control
- Summarize what was done
While the loop runs, you see each tool call appear in the chat as a wrench icon followed by the tool name. Tool calls update to a checkmark when they complete successfully, or an X if they fail. The loop pauses visibly whenever ask_user is called — a question box appears in the chat and the AI waits for your response before continuing.
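Conceptually, the loop is a bounded tool-use cycle against the Anthropic Messages API. The sketch below is a simplified illustration, not TATER's implementation: error handling and the `ask_user` pause are omitted, `executeTool` is a hypothetical stand-in for TATER's internal dispatch, and the model name is an assumption.

```typescript
import Anthropic from "@anthropic-ai/sdk";

// Hypothetical stand-in for TATER's internal tool dispatch.
declare function executeTool(name: string, input: unknown): Promise<unknown>;

// Simplified sketch of a bounded tool-use loop (12-iteration cap, as documented).
async function runAgenticLoop(
  client: Anthropic,
  messages: Anthropic.Messages.MessageParam[],
  tools: Anthropic.Messages.Tool[],
) {
  for (let i = 0; i < 12; i++) {
    const response = await client.messages.create({
      model: "claude-sonnet-4-5", // model choice is an assumption
      max_tokens: 4096,
      messages,
      tools,
    });
    if (response.stop_reason !== "tool_use") return response; // final answer

    // Run each requested tool and feed the results back as tool_result blocks
    const results = await Promise.all(
      response.content
        .filter((b): b is Anthropic.Messages.ToolUseBlock => b.type === "tool_use")
        .map(async (b) => ({
          type: "tool_result" as const,
          tool_use_id: b.id,
          content: JSON.stringify(await executeTool(b.name, b.input)),
        })),
    );
    messages.push({ role: "assistant", content: response.content });
    messages.push({ role: "user", content: results });
  }
  throw new Error("Tool-call iteration limit (12) reached");
}
```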
The ask_user Pause Mechanism
The ask_user tool is the primary mechanism for keeping you in control. When the AI needs input or wants to confirm an action, it injects a question box directly into the chat thread. The agentic loop suspends completely until you respond.
Common situations that trigger ask_user:
- The AI is about to create more than five risk acceptances at once
- The AI has identified a Critical or High severity finding to remediate or override
- The AI needs context you have not provided (e.g., which team member to assign to, or the business reason for an exception)
- The AI is unsure whether a control's current state is intentional or a misconfiguration
- Bulk operations that affect many records at once
Type your response into the question box and press Enter (or click Submit). The AI incorporates your answer and continues the loop from where it paused.
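Mechanically, `ask_user` follows the same tool-use protocol as every other tool: the model emits an `ask_user` tool call, the loop surfaces the question in the chat instead of executing anything, and your typed answer goes back to the model as the tool result. A minimal sketch, assuming a hypothetical `promptUserInChat` helper and a `question` input field (both assumptions, not TATER's code):

```typescript
// Illustrative handling of an ask_user tool call inside the loop.
// promptUserInChat and the question field are assumptions, not TATER's code.
declare function promptUserInChat(question: string): Promise<string>;

async function handleAskUser(block: { id: string; input: { question: string } }) {
  const answer = await promptUserInChat(block.input.question); // loop suspends here
  return {
    type: "tool_result" as const,
    tool_use_id: block.id,
    content: answer, // the model resumes with your answer as the tool result
  };
}
```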
Example Conversations
Checking Compliance Status
You: What's our current compliance posture? Give me the headline numbers.
[AI calls get_recent_scans, then get_scan_results for the latest scan]
AI: Your most recent scan ran yesterday (April 5, 2026) and covered 312 controls. Here's the summary:
- Pass: 214 (69%)
- Fail: 63 (20%)
- Manual Review: 28 (9%)
- Skip: 7 (2%)
Your worst-performing area is Exchange Online (EXO) at 51% pass rate, driven largely by transport rule and SMTP AUTH controls. Entra ID (ENT) is performing best at 84%. The 63 failing controls include 8 Critical severity findings — would you like me to list those?
Assigning Failing Controls to a Team Member
You: Assign all failing Entra ID controls to Sarah Chen. Due date two weeks from today.
[AI calls get_scan_results filtered to product=ENT, status=Fail]
[AI calls get_people to find Sarah Chen's record]
[AI calls ask_user]
AI asks: I found 11 failing Entra ID controls and located Sarah Chen (Security Engineer) in your People directory. I'll assign all 11 with a due date of April 20, 2026. Shall I proceed?
You: Yes, go ahead.
[AI calls assign_control 11 times]
AI: Done. I've assigned all 11 failing Entra ID controls to Sarah Chen with a due date of April 20. The assignments are now visible on the Assignments page and in Sarah's task list. The controls include items like legacy authentication blocking, admin MFA enforcement, and idle session timeout configuration.
Creating a Risk Acceptance
You: We're failing the SMTP AUTH control (EXO_018) but we need it enabled for our fax-to-email gateway. Please create a risk acceptance.
[AI calls get_overrides to check for existing acceptance]
[AI calls ask_user]
AI asks: No existing risk acceptance found for EXO_018. I'll create one with the following details:
Justification: SMTP AUTH is enabled to support a legacy fax-to-email gateway that does not support modern authentication protocols. The service account used by the gateway is scoped to a single mailbox, monitored via Defender for Cloud Apps, and has no permissions beyond send/receive. Disabling SMTP AUTH globally would break this critical business workflow. Risk is accepted for 90 days pending vendor evaluation of a modern auth alternative.
Expiry: July 6, 2026. Sound right? Or would you like to adjust the justification or expiry?
You: That's good, create it.
[AI calls create_risk_acceptance]
AI: Risk acceptance created for EXO_018 (SMTP AUTH). It expires July 6, 2026. The control will now show as "Override" status in dashboards and reports. A record has been added to the audit log.
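For reference, the tool input behind a confirmation like this is compact. The sketch below shows plausible fields for the EXO_018 acceptance; the field names are assumptions based on what the tool records, and the values come from the conversation above:

```typescript
// Plausible create_risk_acceptance input for the example above.
// Field names are assumptions, not TATER's actual schema.
const riskAcceptanceInput = {
  controlId: "EXO_018",
  justification:
    "SMTP AUTH is enabled to support a legacy fax-to-email gateway that does " +
    "not support modern authentication protocols. [...] Risk is accepted for " +
    "90 days pending vendor evaluation of a modern auth alternative.",
  expiryDate: "2026-07-06",
  compensatingControls: [
    "Service account scoped to a single mailbox",
    "Monitored via Defender for Cloud Apps",
  ],
};
```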
Documenting Manual Evidence
You: Document that I manually verified MFA is enforced via Conditional Access Policy "Require MFA for All Users" — I checked this in Entra admin center this morning and it's configured correctly with zero exclusions.
[AI calls add_evidence_comment on the relevant MFA control]
AI: Evidence documented on ENT_001 (Require MFA for All Users). The comment reads: "Manual verification performed April 6, 2026. Confirmed Conditional Access policy 'Require MFA for All Users' is active in Entra admin center with zero user or group exclusions. Policy targets all users with no conditions that would allow bypass. State: Enabled." This comment is tagged Source: AI Analyst and is included in your audit trail.
Triggering an Automated Remediation
You: What does the DMARC control require and can you fix it?
[AI calls get_scan_results to find the DMARC control and its current status]
[AI calls ask_user]
AI asks: The DMARC control (DEF_SCUBA1.4) requires that DMARC is configured for all accepted domains with a policy of p=reject or p=quarantine, and that results are reported to a designated mailbox. Your current scan shows this as Failing. The automated remediation script (REM_DEF_DMARC.ps1) can configure DMARC via Exchange Online PowerShell. This is a High severity control. Should I trigger the remediation? The change will affect all accepted domains in your tenant.
You: Yes, go ahead.
[AI calls trigger_remediation]
AI: Remediation triggered. The Azure Automation runbook is running. You can monitor progress on the Remediation page — it polls every 5 seconds and times out after 10 minutes. Once complete, run a new scan to confirm the control passes.
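The Remediation page's polling cadence is simple to picture. Only the 5-second interval and 10-minute timeout are documented; the endpoint and status values in this sketch are assumptions:

```typescript
// Sketch of the documented polling cadence: every 5 seconds, 10-minute cap.
// The /api/remediation/status endpoint and status strings are assumptions.
async function pollRemediationStatus(jobId: string): Promise<string> {
  const deadline = Date.now() + 10 * 60 * 1000; // 10-minute timeout
  while (Date.now() < deadline) {
    const res = await fetch(`/api/remediation/status?jobId=${jobId}`);
    const { status } = await res.json();
    if (status !== "Running") return status; // e.g. "Completed" or "Failed"
    await new Promise((r) => setTimeout(r, 5_000)); // 5-second poll interval
  }
  return "TimedOut";
}
```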
Built-In Safety Rules
The AI Analyst operates under a set of safety rules baked into its system prompt. These rules protect your environment from unintended bulk changes and ensure high-risk actions always have a human checkpoint:
| Rule | Behavior |
|---|---|
| Bulk risk acceptances | The AI will not create more than 5 risk acceptances without first using ask_user to confirm intent and show you a summary of what it is about to create. |
| Critical / High severity overrides | Any risk acceptance or remediation for a Critical or High severity finding always triggers ask_user first, regardless of how explicitly you phrased the request. |
| Teams SPT_* controls | The AI knows Teams controls frequently return Skip status because Cs* cmdlets require delegated authentication. It will not flag these as problems or recommend remediations for them unless you specifically ask. |
| Audit trail tagging | Every record created by the AI Analyst (overrides, assignments, comments) is tagged with source: 'ai-analyst' in the audit log so you can distinguish AI-created records from human-created ones. |
| Remediation confirmation | The AI always uses ask_user before calling trigger_remediation, regardless of severity. Automated remediations make changes to your tenant and cannot be rolled back automatically. |
Audit Trail
All actions taken by the AI Analyst are recorded in TATER's activity audit log. Navigate to Administration → Activity Log to review them. You can filter by:
- Entity type: Select `override`, `assignment`, or `comment` to see only AI-created records of that type
- Date range: Narrow to the period when a conversation occurred
- Source: The `source` field on AI-created records is set to `ai-analyst`, distinguishing them from records created manually or by runbooks
Each audit entry includes the specific values written (e.g., the justification text, the assigned person, the expiry date), the timestamp, and the user account whose session was active when the AI ran the action.
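Put together, an AI-created audit entry carries enough to reconstruct the action. A plausible shape is sketched below; only the `source: 'ai-analyst'` tag is documented, and the other field names are assumptions:

```typescript
// Illustrative shape of an AI-created audit entry. Only the source tag is
// documented; the other field names are assumptions.
interface AiAuditEntry {
  entityType: "override" | "assignment" | "comment";
  source: "ai-analyst";
  timestamp: string;               // when the action ran
  actor: string;                   // user account whose session was active
  values: Record<string, unknown>; // e.g. justification text, assignee, expiry
}
```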
Tips and Limitations
Tips for Effective Use
- Be specific about scope. Saying "fix our Exchange issues" is broad — the AI will ask clarifying questions. "Create risk acceptances for all failing EXO controls with severity Medium or lower" gives it enough to work with directly.
- Specify timeframes for assignments. The AI can accept relative dates like "next Friday," "end of month," or "in two weeks" and converts them to absolute dates.
- Chain related requests. You can make complex compound requests in one message: "Find all Critical failing controls, assign them to the security team with a two-week deadline, and ask me before creating any risk acceptances." The AI handles the sequencing.
- Use it after each scan. The most productive time to run the AI Analyst is immediately after a new scan completes. Ask for a summary, then ask it to triage the new failures against existing overrides and assignments.
- The conversation is session-scoped. Conversation history is stored in the browser tab. Closing the tab or opening a new session starts a fresh conversation. The AI does not have memory of previous sessions.
Limitations
- 80-control result cap per call. When retrieving scan results, the AI fetches up to 80 controls per tool call. For very large scans with hundreds of failures, it may need to make multiple calls or ask you to narrow the scope by product code or severity.
- 12-iteration loop limit. The agentic loop runs at most 12 tool-call iterations per message. Very large bulk operations (e.g., assigning 50 controls individually) may hit this limit. Break large tasks into smaller segments if needed.
- 30-message conversation window. The API sends the last 30 messages as conversation context. Very long conversations may lose early context. Start a fresh session for a new topic.
- No read access to Settings or credentials. The AI cannot read your Anthropic API key, SIEM config, or webhook URLs. It also cannot modify organization settings.
- Remediation requires prior configuration. The `trigger_remediation` tool only works if your organization has remediation enabled and a webhook URL configured in Settings → Remediation. The AI will tell you if this is not set up.
- The AI does not run scans. It can read existing scan results but cannot initiate a new scan. Use the Scans page or the Azure Automation runbook schedule for that.
Troubleshooting
AI Analyst Not Responding / "Not Configured" Error
If the chat shows a message like "AI Analyst is not configured for this organization":
- Navigate to Settings → Features and check the AI Compliance Analyst card. Enter your Anthropic API key and save.
- If your organization shares a key configured at the tenant level, ask your SuperAdmin to verify the key is saved correctly.
- The key must be a valid Anthropic key starting with `sk-ant-`. Keys generated from the Anthropic API console are the only supported format.
Tool Calls Failing
If a tool call shows an X (failure) in the chat:
- get_recent_scans / get_scan_results: Ensure you have at least one scan imported. Navigate to the Scans page to verify.
- get_people: Your organization must have People records. Add team members on the People page before asking the AI to make assignments.
- create_risk_acceptance: You must have OrgAdmin or higher role. Auditors and Viewers cannot create overrides even through the AI.
- trigger_remediation: Remediation must be enabled for your organization (Settings → Remediation) and the control must have a matching script in the remediation catalog.
- add_evidence_comment: You need Auditor role or higher to add comments.
Slow or Incomplete Responses
The AI Analyst runs against the Anthropic API in real time. Response time depends on Anthropic API latency and the complexity of the tool chain being executed. If a response appears stalled:
- Wait up to 60 seconds for complex multi-step operations before assuming a failure. The tool call indicators in the chat update as each step completes.
- If the chat shows tool call indicators that never resolve, check your browser's network tab for a stalled request to `/api/ai/chat`. A timeout or network disruption may have interrupted the call. Refresh the page and retry.
- Large scan result sets slow down initial tool calls. Try narrowing the request: "only show me Critical severity EXO failures" rather than "all failures."
AI Took an Unexpected Action
If the AI created a record you did not intend:
- Navigate to the relevant page (Overrides, Assignments, or Comments) and delete the record manually.
- Check the audit log (Administration → Activity Log) to confirm exactly what was created and when. Filter by `source: ai-analyst`.
- In future conversations, be more specific about what you want confirmed before the AI acts. Phrases like "check with me before creating anything" or "don't take action yet, just tell me what you would do" put the AI in a planning-only mode.
To automate portal-based evidence collection (screenshots from Entra admin center, Exchange, Defender), see the Evidence Agent guide. To connect Claude Desktop or claude.ai to your TATER data instead, see Claude MCP Setup.