I care less about the model demo and more about what happens when AI touches intake, loss runs, policy language, submissions, meeting prep, claim summaries, client communication, or regulated decisions.
A policy statement is not a control. Real governance says who reviews, who logs, who corrects, who escalates, and who owns the call.
Where this tends to break.
- The use case is vague, so nobody knows what the AI output is allowed to influence.
- Human review exists in theory, but not in the workflow people actually use.
- Answers are hard to trace back to source documents when someone questions them.
- Logging is treated like an audit afterthought instead of part of the work (sketched below).
- Coverage, E&O, privacy, and vendor contract questions show up too late.
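To make the traceability and logging bullets concrete, here is a minimal sketch of what a per-output record could look like when logging is part of the workflow rather than an afterthought. This is illustrative only; the field names and the release rule are my assumptions, not the framework's schema or any team's actual implementation.

```python
# Hypothetical sketch: logging built into the work, not bolted on after.
# Field names and the release rule are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIOutputRecord:
    workflow: str                    # e.g. "claim summary", "submission triage"
    output_text: str                 # what the AI actually produced
    source_documents: list[str]      # IDs of the documents the answer draws on
    reviewer: str | None = None      # who signed off; None means not yet reviewed
    escalated_to: str | None = None  # set when the reviewer punts the call
    logged_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def is_releasable(self) -> bool:
        # Nothing reaches a client without a named human reviewer,
        # and nothing is releasable if it can't be traced to sources.
        return self.reviewer is not None and bool(self.source_documents)
```

The point of the shape, not the code: review, traceability, and escalation are fields someone has to fill in before the output moves, so the log writes itself as the work happens.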
What I look for.
- Workflow type and output type.
- Client exposure and regulated decision impact.
- Source traceability, review points, and escalation paths.
- Logging, jurisdiction overlays, and vendor contract position.
- Whether the team can explain the behavior without theater.
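If it helps to see those dimensions as something a team fills in rather than a list to nod at, here is one hypothetical shape for the intake. The keys mirror the list above; the example values are invented and the structure is my assumption, not a published checklist.

```python
# Hypothetical sketch: the dimensions above as an intake checklist.
# Keys mirror the list; example values are invented for illustration.
WORKFLOW_ASSESSMENT = {
    "workflow_type": "submission intake",
    "output_type": "draft risk summary",
    "client_exposure": "indirect",                  # does a client ever see it?
    "regulated_decision_impact": "none",            # does it touch a regulated call?
    "source_traceability": "per-paragraph citations",
    "review_points": ["producer review before send"],
    "escalation_path": "account executive, then compliance",
    "logging": "stored with the submission file",
    "jurisdiction_overlays": ["state surplus-lines rules"],
    "vendor_contract_position": "data use clause under review",
}

# Any dimension left empty is where the governance conversation starts.
gaps = [key for key, value in WORKFLOW_ASSESSMENT.items() if not value]
print("unanswered dimensions:", gaps or "none")
```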
Proof of work.
I built the public Insurance AI Governance Framework because the conversation gets better when teams can test a specific workflow instead of debating AI in the abstract.
I also write about brokerage AI, responsible AI, and AI-native brokerage operating models because the useful opportunity is not replacing senior judgment. It's making that judgment easier to apply in the right places.
What to send.
Send 3-5 sentences on the workflow, the AI output, who would rely on it, what could go wrong, and what decision you need to make. I'll tell you what I would pressure-test first.