Most small businesses already have an AI problem and do not know it. Someone in the team has pasted a client contract into ChatGPT to summarise it. A junior has dropped a customer list into a free AI tool to clean the formatting. A freelancer is using a model with an unclear data retention policy to draft replies to your suppliers. None of this is malicious. None of it is unusual. And almost none of it is covered by any document you have written down.

An AI usage policy fixes that. Not by banning AI — that ship has sailed for any SMB that wants to stay competitive — but by setting clear, simple rules for how your team uses it. A good policy protects client data, keeps you on the right side of regulators, and gives your team the confidence to use AI productively without constantly second-guessing whether they are about to do something they will regret.

This guide walks through the eight clauses every small business AI policy should cover, gives you example language you can adapt, and flags the mistakes that turn a well-intentioned policy into a document nobody follows. There is also a short template at the end you can copy and tailor in under an hour.

Why an AI policy matters more than people think

A 12-person business does not need the kind of 40-page policy a multinational bank produces. But the risks an AI policy addresses are not theoretical, and they hit small businesses harder per incident than large ones.

The first risk is data leakage. Most consumer AI tools can, by default, use your prompts to improve future models. That is fine for a recipe. It is a serious problem when an employee pastes in a client's medical history, a payroll spreadsheet, or a draft acquisition memo. Once that data has been processed by a third party under terms nobody read, you cannot quietly take it back.

The second risk is regulatory. The EU AI Act, the UK's incoming AI assurance regime, and existing GDPR obligations all require organisations to be able to explain how AI is being used in decisions that affect customers and employees. We covered the EU side in our EU AI Act guide for small businesses; an internal policy is what turns those obligations into something your team can actually follow.

The third risk is quality and reputation. AI tools confidently produce wrong answers. Without rules on when output must be reviewed by a human, sooner or later something hallucinated will go to a client, into a contract, or onto your website. The reputational damage from a single bad incident can outweigh years of efficiency gains.

The eight clauses every SMB AI policy should include

You do not need a corporate document. You need a one- to three-page policy that covers eight things clearly. Each clause below includes example wording you can adapt.

1. Scope and purpose

State who the policy applies to and why it exists. Keep it short. Including contractors and freelancers explicitly is important, because they are usually the ones using the most AI tools.

This policy applies to all employees, contractors, and freelancers working with [Company]. It sets out how AI tools may be used in the course of work, with the goal of using AI productively while protecting client data, regulatory compliance, and the quality of our work.

2. Approved tools list

This is the single most useful clause in the policy. List the AI tools the business has reviewed and approved, with the plan tier required for compliance. Anything not on the list requires written approval before use.

For a typical SMB in 2026, an approved list might include: Claude (Team plan), ChatGPT (Team or Enterprise plan), Microsoft Copilot for the Microsoft 365 tenant, Google Gemini for Workspace, and a transcription tool like Otter or Fireflies on the business plan. Free or personal tiers should generally not be approved for any work involving customer or financial data, because the data handling terms are different. Our Claude vs ChatGPT comparison covers the differences in plan tiers and where each one fits.
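If your team is at all technical, the approved list is easy to keep as data rather than a paragraph, so a quick check is unambiguous. The sketch below is illustrative only: the tool names mirror the examples above, but the plan-tier strings (and the Copilot entry) are assumptions you would replace with the tiers you actually reviewed.

```python
# Illustrative approved-tools register: tool -> set of compliant plan tiers.
# Tier names here are placeholders; use the tiers your own review approved.
APPROVED_TOOLS = {
    "claude": {"team", "enterprise"},
    "chatgpt": {"team", "enterprise"},
    "copilot": {"business"},      # via your Microsoft 365 tenant (assumed tier name)
    "gemini": {"workspace"},
    "otter": {"business"},
}

def is_approved(tool: str, plan: str) -> bool:
    """True only if the tool is on the list AND on a compliant plan tier.

    Anything not in the register fails by default, which matches the
    policy rule: unlisted tools need written approval before use.
    """
    return plan.lower() in APPROVED_TOOLS.get(tool.lower(), set())
```

The default-deny behaviour is the point: a tool missing from the register, or on a free/personal tier, fails the check, which is exactly the written-approval rule in the clause above.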

3. Data classification rules

Tell your team what they can and cannot put into an AI tool. The simplest workable system has three tiers.

Green — safe to use with any approved tool: public information, marketing copy, generic research questions, anonymised examples.

Amber — only with business-tier tools and only when necessary: internal documents, draft strategy, non-identifiable customer aggregates, financial summaries that do not reveal individual figures.

Red — never enter into any external AI tool: personally identifiable information, client lists, contracts in their entirety, source code for proprietary systems, board materials, any health, financial, or legal data tied to an identifiable person, anything covered by an NDA.

If a task requires Red data, it either does not get done with AI, or it gets done with an on-premises or fully private deployment that has been formally approved.
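The three tiers are simple enough to express as a lookup, which is useful if you ever want to embed the rules in an intranet page or onboarding checklist. This is a minimal sketch, not a complete classifier: the data categories are a handful of the examples above, and the tool-tier names are assumptions you would rename to match your approved list.

```python
# Illustrative mapping of data categories to the Green/Amber/Red tiers.
# Categories are examples only; extend with the specifics of your business.
TIER_OF = {
    "public_info": "green",
    "marketing_copy": "green",
    "internal_docs": "amber",
    "financial_summaries": "amber",
    "client_pii": "red",
    "full_contracts": "red",
}

# Which data tiers each class of tool may handle (tier names assumed).
ALLOWED = {
    "free": {"green"},
    "business": {"green", "amber"},
    "private_deployment": {"green", "amber", "red"},
}

def can_use(tool_tier: str, data_category: str) -> bool:
    """True if this category of data may go into a tool of this tier.

    Unknown categories default to Red, so anything unclassified is
    blocked from external tools until someone classifies it.
    """
    return TIER_OF.get(data_category, "red") in ALLOWED[tool_tier]
```

Defaulting unknown categories to Red mirrors the rule in the clause: if nobody has classified the data, it does not go into an external tool.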

4. Human review requirements

Specify when AI output must be checked by a person before it leaves the business. The default for most SMBs should be: any AI-generated content that goes to a client, regulator, supplier, or the public must be reviewed and edited by a named human, who is responsible for the accuracy of the final version.

List the categories where review is mandatory: external emails, proposals, contracts, invoices and financial statements, legal correspondence, anything published on the website or social channels, anything that informs a hiring or firing decision. Internal drafts, brainstorming, and personal productivity tasks generally do not need formal review — the person doing the work is the reviewer.

5. Disclosure to clients and customers

Decide your stance on disclosure and write it down. The two reasonable positions for an SMB are:

  • AI assistance is disclosed in any deliverable where it materially shaped the output (the more cautious, professional-services-friendly choice).
  • AI use is treated as a normal tool, like spellcheck, and not separately disclosed (the more practical choice for marketing and operational work).

Pick one. Apply it consistently. Some clients — particularly in legal, financial, and healthcare sectors — will require explicit disclosure in their contracts. Match or exceed those requirements; do not undercut them.

6. Prohibited uses

Spell out the things that are not allowed under any circumstances. A typical SMB list looks like this:

  • Using AI to generate content that misrepresents a person, including deepfakes, fake quotes, or impersonations.
  • Using AI to make a final hiring, firing, promotion, or compensation decision without human judgement.
  • Using AI to produce work that is then represented as the original product of a named individual without that individual's involvement.
  • Bypassing the approved tools list for convenience.
  • Sharing approved-tool logins between team members.
  • Connecting AI tools to live company systems (CRM, email, finance) without written approval.

7. Incident reporting

Make it easy and consequence-free to report mistakes. The clause should say something like: if you accidentally enter sensitive data into a non-approved tool, share output that turned out to contain errors, or notice anyone else doing so, report it to [named person] within 24 hours. Reporting in good faith will not result in disciplinary action; failing to report it might.

The 24-hour window matters because most AI providers offer a route to delete or restrict use of inadvertently submitted data, but only if you act quickly.

8. Review cadence and ownership

Name a single person who owns the policy and a date by which it will be reviewed. AI capabilities and regulations change fast enough that anything older than six months is probably out of date. A six-monthly review by a named owner, with sign-off from the founder or managing director, is the right cadence for most small businesses.

Not sure where you stand on AI risk and readiness?

Our free 3-minute AI Readiness Quiz scores your business across data, tooling, governance, and skills — and shows exactly where to focus first.

Take the Free Quiz →

A short, copy-and-adapt template

Use the structure below as a starting point. Replace the bracketed placeholders, drop the clauses that genuinely do not apply to your business, and have a lawyer review the final version if you operate in a regulated sector.

[Company] AI Usage Policy — v1.0, [Date]

1. Scope. Applies to all employees, contractors, and freelancers.
2. Approved tools. [List, with required plan tier]. Use of any other AI tool requires written approval from [name].
3. Data rules. Green data may be entered into approved tools. Amber data requires business-tier tools and a clear work need. Red data — including [list specific examples for your business] — must never be entered into any external AI tool.
4. Human review. All AI-assisted output sent to clients, regulators, suppliers, or the public must be reviewed by a named team member before it leaves the business.
5. Disclosure. [State your chosen position on client disclosure].
6. Prohibited uses. [List, including the items in the section above].
7. Incident reporting. Report any policy breach or accidental data exposure to [name] within 24 hours. Good-faith reporting will not be penalised.
8. Ownership and review. This policy is owned by [name] and will be reviewed every six months, next review [date]. Questions: [email].

Filled in, this comes to roughly 250 words. That is the right length. Anything longer and your team will not read it; anything shorter and it will not hold up if something goes wrong.

Common pitfalls when rolling out an AI policy

Even a well-drafted policy can fail in practice. The same handful of mistakes come up again and again in small businesses, and most of them overlap with the patterns we identified in our piece on the most common AI mistakes small businesses make.

Writing the policy and not telling anyone. A policy that lives in a folder on the founder's laptop changes nothing. Roll it out in a 30-minute team meeting and require everyone to acknowledge it in writing.

Banning everything. The lazy policy is "do not use AI for work". It will not be followed, and you will lose the productivity benefits while still carrying the risks, because people will use AI anyway and not tell you. A permissive but specific policy beats a restrictive but vague one.

Approving tools you have not actually checked. Before you put a tool on the approved list, read its data processing addendum, confirm where data is stored, and check whether your prompts are used for training by default. If the answer is "yes" and you cannot disable it, that tool does not belong on the list for anything beyond Green data.

Forgetting freelancers. Freelancers and agencies handling your work are usually the heaviest AI users. Your policy needs to apply to them — and the engagement contracts you sign with them need to reference it.

What to do this week

You do not need a project to do this. A pragmatic SMB rollout looks like this:

  • Day one: draft v1.0 of the policy using the template above — budget two hours.
  • Day two: run it past anyone in your team who already uses AI heavily, because they will spot the gaps.
  • Day three: send it to the team, hold a short walkthrough, and have everyone acknowledge it.
  • Day four: update your contractor and freelance agreements to reference it.

By the end of the week, you have moved from informal, ungoverned AI use to something defensible, repeatable, and easy to maintain.

That is the whole point of an AI policy for a small business. It is not a compliance theatre exercise. It is a way to keep doing the productive things your team is already doing with AI, while quietly closing off the categories of risk that could undo a year of progress in a single bad afternoon.

Build the rest of your AI strategy, not just the policy

An AI policy is the floor. To turn AI into a real competitive advantage, you need a strategy that connects tools, workflows, ROI, and governance. Our AI Strategy Kits give you the templates, calculators, and step-by-step plans to do exactly that.

See the AI Strategy Kits →    Take the Free Quiz →