Codifying how you use AI: a small business response to a shifting insurance market

Written by Peter Green | Apr 28, 2026 1:58:22 PM
Two stories are playing out in the insurance industry at the moment, and they pull in opposite directions.

On one side, several major insurers are quietly walking away from covering AI outputs. A recent CSO article details how some are excluding AI-related claims at renewal, others are declining to quote on AI workloads altogether, and the standard policy templates the rest of the industry relies on now give carriers a clean way to opt out entirely. The reasoning, broadly, is that AI outputs are too unpredictable to write policies around; the model takes a path to a result that the insurer can't audit.

On the other side, a new specialty market is being built. Reinsurers and Lloyd's syndicates have started underwriting AI-specific cover against things like hallucinations, model drift and inaccurate outputs; the insurance industry's affirmative answer to a risk it doesn't want to absorb in its existing products.

If that pattern sounds familiar, you probably remember when your broker first started talking about cyber insurance; that came out of tech-related risks being carved out of professional indemnity. We seem to be watching the same thing happen again, except this time it's AI being carved out of cyber and (in some places) out of general liability too.

So that's the big picture, but what does it mean for the small business owner?

The questions an underwriter may start asking

Probably nothing at your next renewal. But on the one after, or the one after that, the renewal paperwork might start to include questions like:

  • What AI tools do you use in your business?
  • What models are they based on?
  • How are they integrated with your other systems?
  • What data flows through them?
  • Where does human oversight sit?
  • What happens when the AI gets it wrong?

The CSO piece quoted an underwriter saying it's no longer a question of whether you use AI; it's a question of how you govern it. Governed AI, with bounded decision-making and an obvious rollback path, is more insurable. Experimental AI with no monitoring is going to be more expensive to cover, or won't be covered at all.

If you can answer those questions confidently, you're in a stronger position than most. If you can't, you've got a window to fix that before the underwriter forces the conversation.

The disclosure problem

Insurance works on disclosure of material facts. If you don't tell your insurer that you use AI, and something goes wrong because of an AI tool, "we didn't underwrite to that risk" is a reasonable basis to deny the claim. It's the same logic that applies to anything else you should have told them about: if you didn't, they didn't price for it, and they're not paying out on it.

That includes the AI tools your team uses without you knowing about them. Shadow IT was already a problem; shadow AI is a more aggressive version of the same thing, because the tools are easier to adopt, more likely to touch sensitive data, and harder to spot in your usual asset inventory. If someone in your business is pasting client information into a free chatbot to "help draft a quick response", that's now an undisclosed risk on your insurance policy.

Quick caveat before going further: I'm not an insurance broker and this isn't financial advice. Talk to your own broker about your specific cover. I had a conversation with mine while putting these thoughts together; he confirmed the direction of travel and is checking with our insurer about my own AI usage as part of it.

A practical response: codify what you're doing

Write down how you use AI in your business. Not as a marketing exercise; as an internal document. An AI code of conduct.

It doesn't need to be long. The goal is to be able to answer, on demand, what an underwriter (or a client doing security due diligence, or you yourself in six months) is going to ask. The questions break down into a few categories; there's a sketch of one way to capture the answers after the list:

  • What tools are sanctioned? Which AI platforms are you using, on what plans, with what training settings?
  • What are they used for? Drafting content, internal productivity, marketing imagery, code generation, customer-facing automation; not all of these carry the same risk.
  • Where do they touch your data? Read-only or write access? Connected to email, files or your CRM?
  • What's automated and what isn't? Anything that runs on a schedule without you in the room is a different risk profile from anything you're driving in real time.
  • What's never automated? Often the most important section, because it draws a clear line that you can point at when you're being asked whether AI is "out of control" in your business.
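
To make that concrete, here's the shape such a register could take as a small data structure. It's a sketch only; every field name and example value is invented for illustration, not taken from any insurer's form or any particular tool's terms.

```python
from dataclasses import dataclass, field

@dataclass
class AITool:
    """One entry in an internal AI tool register; all fields illustrative."""
    name: str                       # e.g. a chat assistant, an email 'polish' feature
    plan: str                       # which plan or tier, since terms differ by plan
    used_for: list[str]             # drafting, code generation, imagery...
    data_access: str                # "none", "read-only" or "read-write"
    connected_to: list[str] = field(default_factory=list)    # email, files, CRM
    runs_unattended: bool = False   # anything scheduled or autonomous
    never_used_for: list[str] = field(default_factory=list)  # the hard line

# The register is just a list of these; review it at renewal time and
# whenever a tool, or its terms, change.
register = [
    AITool(
        name="ExampleChatAssistant",  # hypothetical tool name
        plan="Team plan, no training on inputs",
        used_for=["drafting content", "internal productivity"],
        data_access="read-only",
        connected_to=["calendar"],
        runs_unattended=False,
        never_used_for=["client financial data", "hiring decisions"],
    ),
]
```

A plain table in a document does the same job; the point is that every question in the list above maps to a field you can answer on demand.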

I put together our own version this week, and the exercise surfaced things I'd half-noticed without really thinking through. The polish feature in Gmail on my Android phone, for instance; that's also AI, even though I think of it as just an email tweak. The act of writing the document forced me to give a clear answer to the autonomous-action question; the only thing that runs without me being in the loop is an inbound morning briefing, and even that is scoped to writing into a single Notion workspace I use for my own working notes (no client data, no shared content).

The other thing worth being explicit about, because it tends to come up when people start asking detailed questions: most AI platform connectors today let you control which actions a tool can take (read pages, create pages, and so on) but not which specific resources those actions can touch. So scoping happens at the prompt and task-design level, rather than at the platform level. That changes how you describe a control accurately; not "it's locked down to one board", but "it's instructed to write to one board, and the workspace it's in doesn't contain anything that would hurt if it went sideways". Different sentence, more accurate description, and the difference matters more than it sounds; if a malicious input ever convinced the agent to act outside its instructions, the platform-level permissions wouldn't stop it.
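
If it helps to see that gap concretely, here's a toy model of it; entirely hypothetical code, not any real connector's API.

```python
# The connector grants by *action*; the resource argument is accepted
# but never consulted. That is exactly the gap described above.
GRANTED_ACTIONS = {"read_page", "create_page"}

def connector_allows(action: str, resource: str) -> bool:
    # Scoping to one board lives in the prompt, not here.
    return action in GRANTED_ACTIONS

# The instruction says "only write to the briefing workspace", but the
# platform would permit a write anywhere the integration can see:
assert connector_allows("create_page", "briefing-workspace")  # intended
assert connector_allows("create_page", "client-workspace")    # also allowed
```

Which is why the honest description of the control is about blast radius (what's in the workspace) rather than enforcement.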

The exercise also pushed me to apply our existing data classification thinking to the AI question explicitly. The classification we use has four classes (Public, Internal, Sensitive, Restricted), each with handling rules; mapping those rules to AI tools rather than just to people and systems gave us a clear, repeatable answer to "can I put this into Claude?". The default consumer-grade plan covers Public and Internal data; Sensitive needs explicit per-use thought and pushes you toward commercial-terms plans; Restricted goes through a different path entirely, with self-hosted inference (AWS Bedrock or equivalent) for the cases that need it.
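
That mapping is simple enough to write down as a lookup. Here's a sketch; the four class names are ours, the tier labels are mine for illustration, and the per-use judgement for Sensitive data still sits with a human; code only captures the ceiling.

```python
# Which data classes each tier of tool is cleared to handle. The mapping
# reflects our own rules, not insurer or vendor guidance.
ALLOWED = {
    "consumer":    {"Public", "Internal"},
    "commercial":  {"Public", "Internal", "Sensitive"},  # still per-use judgement
    "self-hosted": {"Public", "Internal", "Sensitive", "Restricted"},
}

def can_i_put_this_in(tool_tier: str, data_class: str) -> bool:
    """A repeatable answer to "can I put this into <tool>?"."""
    return data_class in ALLOWED[tool_tier]

assert not can_i_put_this_in("consumer", "Sensitive")  # needs commercial terms
assert can_i_put_this_in("self-hosted", "Restricted")  # e.g. AWS Bedrock
```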

That document is now a living reference; it gets updated when I adopt a new tool, when a tool I use changes its terms, or when something happens that makes me question how I'm using one of them. It's version controlled, with a last-reviewed date at the top; the same pattern as an information security policy.
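
And if you want a nudge when the review date slips, a few lines of script will do it. This assumes the document opens with a line like "Last reviewed: 2026-04-28"; both that convention and the filename below are mine, not a standard.

```python
from datetime import date, timedelta
from pathlib import Path

def review_is_stale(path: str, max_age_days: int = 90) -> bool:
    # Expects the first line to read "Last reviewed: YYYY-MM-DD".
    first_line = Path(path).read_text().splitlines()[0]
    reviewed = date.fromisoformat(first_line.split(":", 1)[1].strip())
    return date.today() - reviewed > timedelta(days=max_age_days)

if review_is_stale("ai-code-of-conduct.md"):
    print("The AI code of conduct is overdue for review.")
```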

Doing this even if your insurer never asks

There are some good reasons to do this beyond the renewal form.

Writing it down is when you notice things. You find out which tools have crept into your day, how broadly they're plugged in, and where the gaps in your own understanding are. There's real value in being forced to articulate "the only fully autonomous AI task in my business is X, and here's exactly what it can and can't do."

It's also a hedge against the speed of change. AI tools are evolving fast, and the temptation to bolt on the latest agent or automation is strong. A written reference forces a small amount of friction; a "before I enable this, does it fit our principles?" check that helps you avoid drifting into something you wouldn't have signed up to deliberately.

If you'd like a hand thinking through what your own AI code of conduct should look like, or a sounding board on what tools to use and how to use them sensibly, that's the kind of conversation I have all the time.

Book a call. No obligation, just a practical chat.

This post was drafted with AI assistance and reviewed, edited and approved by Peter before publication, in keeping with the principles described above. Sources and further reading: CSO Online on carriers excluding AI cover; Insurance Business / Lockton Re report on widening AI coverage gaps; Hunton on the emergence of affirmative AI cover.