
AI escalation post-mortems: closing the feedback loop

Every autopilot escalation auto-generates a post-mortem with root cause, missing-article suggestion, and a guardrail change you can apply in one click.


When the AI agent escalates a conversation to a human, that's information. It means something in your knowledge base, your guardrails, or the customer's question itself was outside what the agent was confident about. If you don't capture that signal, you'll see the same escalation again next week.

Ochre captures it automatically. Every autopilot escalation generates a post-mortem the moment a human takes over.

What a post-mortem contains

Open /ai/escalations and you'll see one row per escalation, newest first. Click into a row and the post-mortem shows three sections (a sketch of the underlying shape follows this list):

  1. Root cause. A short LLM-generated explanation of why the agent stopped — "the question asked about EU VAT handling, and the closest article only covered US sales tax." The reasoning is grounded in the trace: the retrieval results, the model's draft, and the guardrail that fired (if any).
  2. Missing article suggestion. If the root cause is a knowledge gap, the post-mortem proposes the article that would have closed it — title, suggested category, and a draft body seeded from the conversation. Click Create draft and it lands in the KB as a draft for your editor to polish.
  3. Guardrail change. If the root cause is a policy gap, the post-mortem proposes a guardrail edit — a new "never quote a refund amount without checking Stripe" rule, or a tightened confidence threshold for billing topics. Click Apply and it's live.
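Under the hood you can think of each post-mortem as a small structured record. Here's a minimal TypeScript sketch of that shape; the type and field names are illustrative assumptions, not Ochre's actual schema:

```ts
// Hypothetical shape of a post-mortem record. Type and field names are
// illustrative assumptions, not Ochre's actual schema.
type RootCauseKind = "knowledge_gap" | "policy_gap" | "expected";

interface EscalationPostMortem {
  escalationId: string;
  takenOverAt: string; // when the human took over
  rootCause: {
    kind: RootCauseKind;
    summary: string; // short LLM-generated explanation of why the agent stopped
    trace: {
      retrievalResults: string[]; // articles the agent retrieved
      modelDraft: string; // the model's draft reply
      guardrailFired?: string; // the guardrail that fired, if any
    };
  };
  missingArticle?: {
    // present when kind === "knowledge_gap"
    title: string;
    suggestedCategory: string;
    draftBody: string; // seeded from the conversation
  };
  guardrailChange?: {
    // present when kind === "policy_gap"
    description: string; // e.g. a new rule or a tightened confidence threshold
  };
  appliedFix?: "article_draft" | "guardrail" | "marked_expected";
}
```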

Closing the loop in one click

The point isn't the post-mortem — it's the button. Reading why an escalation happened doesn't help if fixing it takes thirty minutes of context-switching. The page is built so the common case is one click: Create draft, Apply guardrail, or Mark as expected if the escalation was the right call (e.g. "customer asked to speak to a human" — no fix needed, just acknowledge).
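To make the one-click flow concrete, here's a hedged sketch of what each button conceptually does, reusing the post-mortem shape sketched above; resolvePostMortem, createKbDraft, and applyGuardrail are hypothetical names, not Ochre's API:

```ts
// Hypothetical one-click handlers, reusing the EscalationPostMortem
// sketch above. createKbDraft and applyGuardrail are illustrative
// names, not Ochre's API.
declare function createKbDraft(
  article: NonNullable<EscalationPostMortem["missingArticle"]>
): Promise<void>;
declare function applyGuardrail(
  change: NonNullable<EscalationPostMortem["guardrailChange"]>
): Promise<void>;

async function resolvePostMortem(pm: EscalationPostMortem): Promise<void> {
  switch (pm.rootCause.kind) {
    case "knowledge_gap":
      // "Create draft": the suggested article lands in the KB as a draft.
      if (pm.missingArticle) await createKbDraft(pm.missingArticle);
      pm.appliedFix = "article_draft";
      break;
    case "policy_gap":
      // "Apply": the proposed guardrail edit goes live.
      if (pm.guardrailChange) await applyGuardrail(pm.guardrailChange);
      pm.appliedFix = "guardrail";
      break;
    case "expected":
      // "Mark as expected": the escalation was the right call; just acknowledge.
      pm.appliedFix = "marked_expected";
      break;
  }
}
```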

Each applied fix is tagged on the post-mortem, so when you look back a month later you can see which escalations led to which articles or guardrails. That's the audit trail your AI program runs on.

Who can see this

The escalations page is staff-facing: owner, admin, and agent roles can read it; light agents can't. The post-mortems include verbatim conversation excerpts, so we treat them with the same access controls as the inbox itself.
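As a sketch of how that gate might look in code (the role names come from this article; the check itself is an assumption):

```ts
// Hypothetical access check for /ai/escalations. Role names mirror the
// article; the function itself is an illustrative assumption.
type Role = "owner" | "admin" | "agent" | "light_agent";

function canViewEscalations(role: Role): boolean {
  // Post-mortems quote conversations verbatim, so the gate matches the
  // inbox: everyone but light agents.
  return role !== "light_agent";
}
```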

A quick health check

If your escalation rate is dropping week-over-week and the post-mortems are turning into "expected" or "missing article fixed," your AI program is healthy. If escalations are flat and the post-mortems all say "knowledge gap" with no draft created, the loop isn't being closed. The page makes the difference visible at a glance.
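If you ever track this outside the page, the health check reduces to a small computation over post-mortem records. A hedged sketch, again reusing the hypothetical shape from above:

```ts
// Hypothetical weekly health check over post-mortem records, using the
// EscalationPostMortem sketch above. Field names and the exact signals
// are illustrative assumptions.
function loopHealth(
  thisWeek: EscalationPostMortem[],
  lastWeek: EscalationPostMortem[]
) {
  const escalationTrend = thisWeek.length - lastWeek.length; // negative is good: fewer escalations
  const closedLoopShare =
    thisWeek.filter((pm) => pm.appliedFix !== undefined).length /
    Math.max(thisWeek.length, 1);
  const openKnowledgeGaps = thisWeek.filter(
    (pm) => pm.rootCause.kind === "knowledge_gap" && pm.appliedFix === undefined
  ).length;

  // Healthy: escalationTrend trending down, closedLoopShare high.
  // Unhealthy: flat trend with openKnowledgeGaps piling up.
  return { escalationTrend, closedLoopShare, openKnowledgeGaps };
}
```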
