You say you have an AI governance program. Maybe you paid a consultancy six figures to write policies. Maybe your CISO mentioned it in a board deck. Maybe two engineers review the custom stuff and everyone assumes that counts.

Let’s find out.

Five questions. Honest answers only.

1. Do you know which AI tools have access to your company data — and who approved them?

Not a complete census of every employee’s browser tabs. But the tools that touch customer data, internal documents, code, or anything regulated — can you list them? Do you know who approved each one, or did they just… show up?

If your AI inventory is “whatever procurement has on file,” you’re missing the tools that matter most — the ones people adopted because they were useful and nobody asked permission.

2. When a new AI use case shows up, who decides yes or no?

Not “who would probably be involved.” Who specifically owns the decision? What’s the process? How long does it take?

If the answer involves a shrug, a Slack thread, or “it depends on who notices first” — you don’t have a program. You have a suggestion box.

3. What’s your policy on AI and sensitive data — and does anyone follow it?

Having a policy is step one. Communicating it is step two. Actually enforcing it is where most programs quietly fall apart.

Can a new employee find the policy without asking three people? Does it say anything specific enough to act on? Or is it a two-page document full of “employees should exercise good judgment” that nobody has read since it was signed?

4. Who’s reviewing the AI tools you didn’t build?

Your two engineers reviewing the custom model — great. But what about the 15 SaaS tools with “AI-powered” in their marketing copy that your teams adopted through normal procurement?

Vendor AI is still AI. It processes your data, makes decisions your customers see, and introduces risks your security team may never have evaluated. If your program only covers what engineering built, you’re watching the front door while the back wall is missing.

5. If an AI tool produced a wrong, biased, or harmful output tomorrow — what happens next?

Not philosophically. Operationally. Who gets the call? What’s the escalation path? Is there a playbook, or would everyone just improvise and hope legal picks up the phone?

Incident response for AI isn’t optional anymore. If you have an IR plan for a data breach but not for an AI failure, your program has a gap shaped exactly like the thing most likely to bite you.


Scoring

5 clear answers: You probably have a real program. Nice work.

3–4: You have the bones. Fill the gaps before something fills them for you.

0–2: You have a policy document, not a program. That’s okay — but let’s not call it governance.


No judgment. Most organizations are somewhere between 2 and 4. The point isn’t to have perfect answers — it’s to know which questions you can’t answer yet. That’s where the actual work starts.