
Human-in-the-Loop Review

Automation handles the routine work, but pauses for a human to review, validate, or make a judgment call at critical decision points. The best of both worlds.

Visual Flow

[BPMN 2.0 diagram]

When to Use This Pattern

Use human-in-the-loop when:

  • AI or automation produces output that needs human validation before acting on it
  • Regulatory requirements mandate a human reviewer (financial decisions, medical, legal)
  • The automation handles 90% of cases correctly, but edge cases need expert judgment
  • You're building trust in a new automation and want humans to verify before going fully autonomous
Tip

This pattern is perfect for the transition from manual to automated. Start with human review on 100% of items, then gradually reduce to exception-only review as confidence builds.

How It Works

| Phase | Actor | Action |
|---|---|---|
| 1. Ingest | Automation | Receives input (document, request, data) |
| 2. Process | Automation | Applies rules, AI, calculations |
| 3. Classify | Automation | Determines: auto-approve or needs review? |
| 4. Review | Human | Reviews flagged items, corrects if needed |
| 5. Act | Automation | Executes the final action based on the human-validated decision |
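The five phases can be sketched as a simple pipeline. This is a minimal illustration, not the implementation of any specific platform; the stub functions and the confidence field are assumptions:

```python
def ingest(item):
    # 1. Ingest: receive the raw input (document, request, data)
    return {"input": item}

def process(data):
    # 2. Process: stand-in for rules/AI; pretend we extracted a value
    #    and attached a confidence score
    return {**data, "value": data["input"].upper(), "confidence": 0.9}

def needs_review(result, threshold=0.95):
    # 3. Classify: flag anything at or below the confidence threshold
    return result["confidence"] <= threshold

def human_review(result):
    # 4. Review: stand-in for a human task; here we just mark it validated
    return {**result, "validated_by": "reviewer"}

def act(result):
    # 5. Act: execute the final action on the validated decision
    return f"acted on {result['value']}"

def run_pipeline(item):
    data = ingest(item)
    result = process(data)
    if needs_review(result):
        result = human_review(result)
    return act(result)
```

Note that the human step only runs for flagged items; everything above the threshold flows straight through.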

Implementation Guide

Step 1: Define the Review Criteria

Determine which items need human review:

| Criterion | Auto-Process | Flag for Review |
|---|---|---|
| AI confidence score | > 95% | ≤ 95% |
| Dollar amount | < $1,000 | ≥ $1,000 |
| Customer type | Existing customer | New customer |
| Exception detected | No exceptions | Any exception |
| Random sample | — | 5% random for quality audit |

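The criteria table translates directly into a routing function. A minimal sketch, assuming the example thresholds above and illustrative field names:

```python
import random

def classify(item, sample_rate=0.05):
    """Route an item to "auto" or "review" per the criteria table.
    Thresholds and field names are illustrative, not prescriptive."""
    if item["confidence"] <= 0.95:
        return "review"                      # low AI confidence
    if item["amount"] >= 1000:
        return "review"                      # high dollar amount
    if item["customer_type"] == "new":
        return "review"                      # new customers always reviewed
    if item["exceptions"]:
        return "review"                      # any exception detected
    if random.random() < sample_rate:
        return "review"                      # random quality-audit sample
    return "auto"
```

Keeping the criteria in one function makes the thresholds easy to tune later (see Step 4).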
Step 2: Prepare the Review Interface

Give the human reviewer everything they need:

  • Original input (the document, form, email)
  • Automation's output (what the system decided/extracted)
  • Confidence indicators (why did the system flag this?)
  • Action buttons: Approve As-Is, Correct & Approve, Reject, Escalate
  • Correction fields where the reviewer can fix specific values
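That context can be bundled into a single review-task payload. The field names here are hypothetical, a sketch of the shape such a task might take:

```python
def build_review_task(original, extracted, flag_reasons):
    """Bundle everything the reviewer needs into one task payload
    (field names are illustrative)."""
    return {
        "original_input": original,        # the document, form, or email as received
        "automation_output": extracted,    # what the system decided or extracted
        "flag_reasons": flag_reasons,      # why this item was routed to review
        "actions": ["approve", "correct_and_approve", "reject", "escalate"],
        "corrections": {},                 # reviewer fills in fixed values here
    }
```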
Step 3: Act on the Review

Based on the reviewer's decision:

  • Approve As-Is → Continue with the automation's output
  • Correct & Approve → Use the reviewer's corrections, feed them back for ML training
  • Reject → Notify the requestor, provide rejection reason
  • Escalate → Route to a senior reviewer or specialist
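The four outcomes map to a simple dispatch. A sketch under the same illustrative naming as above; the downstream actions are placeholders:

```python
def log_training_example(before, after):
    pass  # stand-in for writing a labeled correction to a training store

def handle_decision(decision, output, corrections=None):
    """Route the reviewer's decision (outcomes are illustrative)."""
    if decision == "approve":
        return ("proceed", output)                  # continue with automation's output
    if decision == "correct_and_approve":
        fixed = {**output, **(corrections or {})}
        log_training_example(output, fixed)         # feed corrections back for ML
        return ("proceed", fixed)
    if decision == "reject":
        return ("notify_requestor", output)         # include the rejection reason
    if decision == "escalate":
        return ("route_to_senior_reviewer", output)
    raise ValueError(f"unknown decision: {decision}")
```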
Step 4: Feed Back for Learning

If you're using AI/ML:

  • Log the reviewer's corrections as training data
  • Track the correction rate over time
  • Use the corrections to improve the model's accuracy
  • Adjust the confidence threshold as the model improves
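The correction rate and threshold adjustment can be computed from the review log. The adjustment heuristic below is an assumption for illustration (loosen the threshold when corrections are rare, tighten it when they are frequent), not a standard formula:

```python
def correction_rate(reviews):
    """Fraction of reviewed items the human had to correct."""
    if not reviews:
        return 0.0
    return sum(1 for r in reviews if r["corrected"]) / len(reviews)

def adjust_threshold(threshold, rate, target=0.05, step=0.01):
    """Illustrative heuristic: nudge the auto-process confidence threshold
    toward a target correction rate."""
    if rate < target:
        return max(0.5, round(threshold - step, 4))   # model trustworthy: auto-process more
    if rate > 2 * target:
        return min(0.99, round(threshold + step, 4))  # too many corrections: review more
    return threshold
```

Review the thresholds with the team before changing them automatically; a regulated process may require the threshold itself to be under human control.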

Example: Invoice Data Extraction

  1. InBot receives invoice PDFs via email
  2. AI action extracts: vendor, invoice number, date, line items, total
  3. Classification: If confidence > 95% and amount < $1,000 → auto-process
  4. Human review: Low confidence or high-value invoices get a review task
  5. Reviewer sees the original invoice side-by-side with extracted data, corrects any errors
  6. Workflow creates the AP entry with validated data
  7. Corrections are logged to improve future extraction accuracy

Tips & Best Practices

  • Show, don't tell. Display the automation's work visually. For document extraction, highlight the relevant areas in the original document. For classification, show the reasoning.
  • Set SLAs on review tasks. Human review shouldn't become a bottleneck. Apply the Escalation with SLA Timeout pattern.
  • Track review metrics: average review time, correction rate, throughput. Use these to justify expanding automation coverage.
  • Make it easy to approve. If 85% of reviews result in "Approve As-Is," optimize the UI for that — one-click approve, with correction as a secondary action.
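The review metrics above can be computed from a log of completed review tasks. A minimal sketch with hypothetical field names:

```python
def review_metrics(tasks):
    """Compute average review time, correction rate, and throughput
    from completed review tasks (field names are illustrative)."""
    if not tasks:
        return {"avg_review_seconds": 0.0, "correction_rate": 0.0, "completed": 0}
    total_time = sum(t["finished_at"] - t["started_at"] for t in tasks)
    corrected = sum(1 for t in tasks if t["corrected"])
    return {
        "avg_review_seconds": total_time / len(tasks),
        "correction_rate": corrected / len(tasks),
        "completed": len(tasks),
    }
```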
