Early in my career, I worked directly with companies designing commercial insurance and risk management programs. I later founded and operated an independent insurance agency, which I eventually sold. From there, I moved into brokerage leadership, independent consulting, and strategic advisory work - including roles within a top global insurance brokerage platform and in private equity-backed environments where insurance strategy and profitability are tightly linked.
I've also advised large and middle-market companies on risk, insurance strategy, and complex operational exposures, and I've written on industry topics including trucking insurance and risk management.
What pulled me toward AI was simple: I kept seeing how manual everything still is, especially on the commercial side. Smart people spending hours reading, sorting, summarizing, and piecing together information that's scattered across PDFs, emails, loss runs, and underwriting notes. Once language models got good enough to be useful, the opportunity felt obvious.
What I focus on
The actual work
Document handling, submission intake, research, knowledge retrieval, servicing support, renewal prep - the tasks that eat up the most time and have the most room for improvement. I care less about flashy demos and more about whether something holds up when the documents are inconsistent and the data is fragmented.
Making it stick
Everyone talks about the model. Almost nobody talks about what it takes to make it usable: who reviews the output, how it fits into existing processes, what happens when it's wrong, and how you'd know if it's actually helping. That's the part I find interesting.
Doing it responsibly
Insurance is regulated for good reason. Deploying AI here means you need real boundaries - who can use what, what gets logged, where a human has to make the call. Enthusiasm isn't a governance model.
How I work
I tend to think from the workflow backward. I don't start with the model or the technology. I start with the actual work: what decision is being made, what information is missing, where the process breaks down, what's repetitive versus what genuinely needs judgment.
I also prefer building things, even if they start rough. A small working tool usually teaches you more than a polished strategy deck. And I'm pretty practical about AI - if a system can't hold up in a messy real environment with inconsistent documents and fragmented data, it's not that interesting to me.
A few things I believe
AI isn't going to replace most of the important people in insurance any time soon. There's too much judgment, accountability, nuance, and regulation in the business. The real value is in making good people better and faster, not pretending the human layer goes away.
I also think a lot of people are looking in the wrong place. Everyone wants to talk about underwriting models, but insurance has been modeling things forever. The bigger opportunity is in the surrounding workflow - document analysis, knowledge retrieval, submission triage, internal decision support. That's where the practical value is.
I'd rather build something rough and learn from it than talk about a perfect system that doesn't exist yet.