Insurance AI

AI governance for real insurance work.

AI is entering brokerage and insurance workflows faster than the controls around it. The useful question isn't whether AI is allowed. It's who owns the work when the output matters.

I care less about the model demo and more about what happens when AI touches intake, loss runs, policy language, submissions, meeting prep, claim summaries, client communication, or regulated decisions.

A policy statement is not a control. Real governance says who reviews, who logs, who corrects, who escalates, and who owns the call.

Where this tends to break.

What I look for.

Workflow type, output type, client exposure, regulated decision impact, source traceability, review points, escalation paths, logging, jurisdiction overlays, vendor contract position, and whether the team can explain the behavior without theater.

Proof of work.

I built the public Insurance AI Governance Framework because the conversation gets better when teams can test a specific workflow instead of debating AI in the abstract.

I also write about brokerage AI, responsible AI, and AI-native brokerage operating models because the useful opportunity is not replacing senior judgment. It's making that judgment easier to apply in the right places.

What to send.

Send 3-5 sentences on the workflow, the AI output, who would rely on it, what could go wrong, and what decision you need to make. I'll tell you what I would pressure-test first.