Someone on your leadership team said the words: “We should probably have an AI policy.” Maybe it was after the third vendor demo this quarter. Maybe it was after someone mentioned they’d been running customer data through a free-tier AI tool. Either way, here you are.

The instinct is to Google “AI acceptable use policy template,” paste something into a doc, and call it done. I’m asking you not to do that.

A policy nobody reads is worse than no policy. It creates a false sense of governance while everything continues ungoverned.

Here’s how to write one that’s short enough to read, clear enough to follow, and durable enough to last more than six months.

Know What You’re Writing

A policy states intent. It says what the organization commits to, who’s accountable, and what the boundaries are. It should be stable — something you don’t rewrite every time you adopt a new tool.

A policy is not a procedure. It doesn’t describe how to submit a tool request, what form to fill out, or which Slack channel to post in. That’s operational detail. It changes frequently. It belongs somewhere else.

The moment you start writing procedures into your policy, it grows from 2 pages to 15 and nobody reads page 4 onward. Keep the policy at the what and why layer. Put the how somewhere else.

Frameworks That Inform the Drafting

You don’t need to cite frameworks in the policy itself, but they’ll make sure you’re covering the right ground.

NIST AI RMF — Govern function: The Govern function’s categories (GOVERN 1 through 6) map almost directly to what a policy needs to address: legal compliance, risk tolerance, roles, oversight, culture. Use them as your structural checklist.

ISO/IEC 42001 — Clause 5: The international standard for AI management systems. Clause 5 defines what an AI policy must contain: management commitment, objectives, roles, communication. Even if you never pursue certification, satisfying these requirements means your policy covers the essentials.

NIST AI 600-1 (Generative AI Profile): If your organization uses generative AI (and it almost certainly does), this companion document identifies risks specific to GenAI: confabulation (the profile’s term for what most people call hallucination), data leakage, and IP exposure. It’ll inform your prohibited uses and data protection sections.

EU AI Act risk tiers: Even if you’re not subject to EU regulation, the tiered model (Unacceptable → High → Limited → Minimal) is a clean mental model for classifying use cases. Borrow the concept.

The Sections You Need

Here’s what belongs in a 2–3 page AI policy.

1. Purpose and Scope

State why this policy exists and who it applies to. Define what you mean by “AI” — broadly enough to cover current and near-future tools, narrowly enough to be meaningful.

This policy establishes requirements for the responsible use, procurement, and oversight of artificial intelligence tools and systems across the organization. It applies to all employees, contractors, and third parties who use, develop, or procure AI capabilities in the course of business operations.

For purposes of this policy, “AI tools” includes machine learning models, generative AI services, AI-powered features within software platforms, and any system that uses automated reasoning or prediction to support or replace human decision-making.

Two paragraphs. Done.

2. Acceptable Use

This is the section people will actually read. Be direct.

Approved without review:

  • AI features embedded in already-approved enterprise software (spell check, search ranking, etc.)
  • Internal productivity tools on the organization’s approved list

Requires review and approval:

  • Any new AI tool not on the approved list
  • Use of AI tools with sensitive, regulated, or proprietary data
  • Customer-facing AI implementations
  • AI tools that integrate with internal systems or have write access to business data

Prohibited:

  • Using unapproved AI tools to process customer PII, financial data, or protected health information
  • Relying on AI output for employment, credit, insurance, or legal decisions without qualified human review
  • Misrepresenting AI-generated content as human-produced where disclosure is required
  • Using AI to generate or process content that violates law or organizational ethics standards

Vague policies like “use AI responsibly” give people nothing to act on. Be concrete.

3. Data Protection

State what data can and cannot be used with AI tools. If you have a data classification standard, reference it. If you don’t, this section does double duty.

No data classified as Confidential or Restricted may be input into AI tools unless the tool has been reviewed and approved for that data classification level. This includes customer personal data, financial records, trade secrets, and source code for proprietary systems.

Users are responsible for reviewing AI tool terms of service to understand data retention, model training, and sharing practices before use. When in doubt, do not input the data — consult the AI steering committee.

This is the single highest-risk area for most organizations. People will paste sensitive data into AI tools without thinking twice unless you explicitly tell them not to — and tell them what “sensitive” means.
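If you later want to enforce this rule in tooling rather than prose (a proxy, a browser plugin, a DLP hook), the underlying logic is just a lookup against your approved-tool list. Here’s a minimal sketch in Python; the tool names, classification levels, and `APPROVED_CEILING` mapping are all illustrative, not part of any framework:

```python
from enum import IntEnum


class DataClass(IntEnum):
    """Ordered classification levels: higher value = more sensitive."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3


# Illustrative registry: the highest classification each tool is
# approved to handle. In practice this lives in your AI tool registry.
APPROVED_CEILING = {
    "embedded-spell-check": DataClass.INTERNAL,
    "genai-coding-assistant": DataClass.INTERNAL,
}


def may_use(tool: str, data: DataClass) -> bool:
    """Apply the Section 3 rule: Confidential or Restricted data is
    blocked unless the tool was approved for that level. Unknown tools
    fail closed."""
    ceiling = APPROVED_CEILING.get(tool)
    if ceiling is None:
        return False  # not on the approved list at all
    return data <= ceiling
```

The important design choice is failing closed: a tool that isn’t in the registry gets no data, which lines up with the shadow AI rule in Section 5.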

4. Risk Classification

Keep it simple. Three or four tiers is enough.

| Tier | Criteria | Review Required |
| --- | --- | --- |
| Low | No sensitive data, no customer-facing output, no system integration | Self-service from approved list |
| Medium | Internal data involved, or AI-assisted decisions with human oversight | Steering committee review |
| High | Sensitive/regulated data, customer-facing, autonomous decisions, or deep integration | Full review with security, privacy, and legal |
| Prohibited | Violates law, regulation, or organizational values | Not approved under any circumstance |

For a more detailed risk model, the OWASP Top 10 for LLM Applications and the CSA AI Controls Matrix can flesh out what “high risk” looks like in practice.
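If your intake form asks yes/no questions, the tier assignment can even be automated as a first pass. A minimal sketch; the attribute names are invented for illustration and should mirror your own intake questions:

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    LOW = "Low"                # self-service from approved list
    MEDIUM = "Medium"          # steering committee review
    HIGH = "High"              # full review: security, privacy, legal
    PROHIBITED = "Prohibited"  # not approved under any circumstance


@dataclass
class UseCase:
    # Illustrative intake answers; rename to match your own form.
    violates_values: bool       # breaks law, regulation, or org values
    sensitive_data: bool        # regulated, confidential, or restricted data
    customer_facing: bool       # output reaches customers directly
    autonomous_decisions: bool  # decides without human oversight
    deep_integration: bool      # write access to internal systems
    internal_data: bool         # uses internal (non-public) data


def classify(uc: UseCase) -> Tier:
    """First-pass triage that mirrors the tier table above."""
    if uc.violates_values:
        return Tier.PROHIBITED
    if (uc.sensitive_data or uc.customer_facing
            or uc.autonomous_decisions or uc.deep_integration):
        return Tier.HIGH
    if uc.internal_data:
        return Tier.MEDIUM
    return Tier.LOW
```

Treat the output as a starting point for the steering committee, not a verdict; edge cases are what the review process is for.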

5. Procurement and Approval

Don’t put the process here. State the requirement and point to it.

All AI tools and services require review and approval through the organization’s AI intake process before use. This applies to new tools, new use cases for approved tools, and free-tier or trial usage. Shadow AI — the use of unapproved AI tools for business purposes — is a policy violation.

Refer to [AI Tool Intake Procedure] for submission and review requirements.

6. Roles and Responsibilities

Name functions, not people. People change; roles persist.

| Role | Responsibility |
| --- | --- |
| AI Steering Committee | Approve tools and use cases, set risk tolerance, review policy annually, adjudicate exceptions |
| Security | Evaluate tool risk, define technical guardrails, monitor for violations |
| Privacy / Legal | Assess data protection implications, review vendor terms, advise on regulatory requirements |
| Engineering | Implement approved tools, maintain integrations, apply technical controls |
| Tool Owners | Manage approved tools day-to-day, ensure configurations meet requirements |
| All Users | Follow this policy, use only approved tools, report concerns |

7. Monitoring and Compliance

The AI steering committee will maintain a registry of approved AI tools and conduct periodic reviews of AI tool usage. Violations are subject to the organization’s disciplinary process. Employees are encouraged to report potential violations or concerns without fear of retaliation.

8. Exceptions

Every policy needs a pressure valve. Make it risk-based and time-bound.

Exceptions may be granted by the AI steering committee when a legitimate business need exists and compensating controls adequately mitigate identified risks. All exceptions must be documented, time-bound (not to exceed 12 months), and reviewed for renewal.

9. Review Cycle

This policy will be reviewed annually by the AI steering committee, or sooner if triggered by material changes in AI technology, regulation, or organizational risk posture.

One sentence. That’s all you need.

Mistakes I Keep Seeing

Naming specific tools. The moment you write “ChatGPT” or “Copilot” into your policy, you’ve created a document that needs updating every time a product ships or rebrands. Write about capabilities and risk categories, not product names.

Writing it for lawyers. Legal should review it. Legal should not be the primary audience. Write so a department manager can read it over coffee and know what’s expected.

Trying to cover every scenario. You can’t. That’s what the review process is for. The policy draws bright lines and establishes a path for everything else.

Skipping data protection. See section 3 above. This is where the real risk lives for most organizations.

Making It Real

A policy document sitting in SharePoint is governance theater. Once you’ve drafted it:

  1. Get steering committee sign-off. They own it; they approve it.
  2. Communicate it plainly. Send a one-paragraph summary to all staff with a link to the full policy. Not the whole document.
  3. Build the intake process. The policy references an AI tool review procedure — that procedure needs to exist before the policy is enforceable.
  4. Start the registry. Even if it’s a spreadsheet. List every AI tool in use, who owns it, and its risk tier (one possible layout is sketched after this list).
  5. Revisit in 90 days. Your first version won’t be perfect. Plan to live with it, then improve it.
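To make step 4 concrete, here’s one possible starter layout, sketched as a short Python script that writes the registry CSV. The column names and sample rows are suggestions, not a standard:

```python
import csv

# Suggested starter columns; extend as your review process matures.
FIELDS = ["tool", "owner", "risk_tier", "approved_data_classes",
          "last_review", "status"]

rows = [
    # Illustrative entries only.
    {"tool": "Embedded spell check", "owner": "IT",
     "risk_tier": "Low", "approved_data_classes": "Public; Internal",
     "last_review": "2025-06-01", "status": "Approved"},
    {"tool": "GenAI coding assistant", "owner": "Engineering",
     "risk_tier": "Medium", "approved_data_classes": "Internal",
     "last_review": "2025-06-15", "status": "In review"},
]

with open("ai_tool_registry.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

The point isn’t the tooling; it’s that the registry exists, every tool has an owner, and each row records the risk tier the steering committee assigned.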

AI governance isn’t about saying no. It’s about making it easy for your team to say yes to the right things. A clear, short policy gives everyone a shared understanding of the boundaries — and frees them to move with confidence inside those boundaries.

Write the policy. Keep it short. Make it real.


If this is the right direction but you’d rather not go it alone — drafting the policy, standing up the review process, building the risk model — I do this work with small and mid-size businesses through Shelby Canyon. No pitch, just practical help getting governance stood up.

Get in touch →