Why your firm needs a written AI policy now

Every law firm with billable hours and a Wi-Fi connection has staff using AI tools today, whether the partners have approved it or not. The Clio Legal Trends Report and other recent industry surveys put adoption at roughly four out of five legal professionals, yet fewer than half of firms have a formal governance policy covering that use. In the average firm, AI is already in production and the policy is missing.

A written policy does three jobs at once: it tells staff which tools they can and cannot use, it documents the firm’s compliance posture under the ABA Model Rules and state bar guidance, and it shows your malpractice carrier that the firm took reasonable steps if something later goes wrong. Skipping the policy does not stop the AI use; it only removes the firm’s defense if a Mata v. Avianca-style sanction ever lands.

This guide is for the managing partner or office manager at a solo to mid-size firm (1 to 25 lawyers) who needs a working AI policy in the firm handbook this quarter. For the broader picture of how AI fits across all practice operations, see the pillar on AI for law firms. For the underlying ethics rules the policy maps to, see ABA rules on lawyers using AI.

The seven elements every law firm AI policy needs

The Clio template, the ABA’s 2026 responsible-use checklist, and a dozen vendor templates all converge on the same shortlist. Get these seven sections right and the rest is window dressing.

1. Scope and definitions

Define what counts as “AI” for the policy. Generative tools (ChatGPT, Claude, Gemini, Microsoft Copilot), legal-specific tools (CoCounsel, Spellbook, Clio Duo, Lexis+ AI), and the AI features inside your existing case management or document management software all fall in scope. State the policy applies to everyone working in the firm, including partners, associates, paralegals, contract attorneys, and outside consultants. Cover both work-issued accounts and any personal account a staff member uses for firm work.

2. Approved and prohibited tools

Two lists, kept current. The approved list names every tool the firm has reviewed and signed up for, with the account tier (Free, Plus, Team, Enterprise, Business Associate Agreement on file). The prohibited list either calls out specific tools the firm has reviewed and rejected, or sets a category rule (for example: no free-tier consumer chatbot for any document containing client identifiers). Keep this as an appendix so it can be updated without recirculating the full policy. Review the lists quarterly.

3. Confidentiality and the redact-first rule

Tie this section directly to ABA Model Rule 1.6 and the related sections of ABA Formal Opinion 512. The operative rule for most small firms: client identifiers, privileged communications, and sensitive case content do not get pasted into any tool that does not carry a vendor agreement with a no-training clause and (for any practice handling medical records) a signed Business Associate Agreement. Redaction before upload is the fallback when the tool is approved but the matter is sensitive.
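The redact-first step is manual in most small firms, but firms with technical staff sometimes script a first pass before a human read. The sketch below is a minimal, hypothetical illustration in Python: the patterns are examples, not a vetted identifier list, and regex catches only formatted identifiers (dates, account numbers, phone numbers), never names or free-text details. It can supplement, but never replace, attorney review of the redacted text.

```python
import re

# Hypothetical example patterns only. Regex redaction flags formatted
# identifiers but cannot catch client names or free-text case details,
# so a human read of the output is still required before upload.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{8,16}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "Client DOB 04/12/1987, acct 123456789012, call 555-867-5309."
print(redact(sample))
# -> Client DOB [DOB REDACTED], acct [ACCOUNT REDACTED], call [PHONE REDACTED].
```

Keeping the placeholders labeled (rather than deleting text outright) makes the reviewing attorney's job easier: the structure of the document survives, and anything the patterns missed stands out.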

4. Verification and human review

Every AI-generated work product gets reviewed by a lawyer competent in the subject before it goes to a client, a court, or opposing counsel. Citations get checked against the actual source. Numerical figures get reconciled with the underlying records. Drafts that will be filed get a second read by a second attorney when the stakes are high. This section maps to ABA Model Rule 1.1 (competence) and addresses the failure at the center of the Mata v. Avianca sanctions: unverified, fabricated citations.

5. Disclosure and client consent

Decide the firm’s default position on whether to tell clients when AI was used on their matter. Many state bars do not require general disclosure, but several require it for specific uses, and many clients appreciate being told. The conservative default: disclose in the engagement letter that the firm uses AI tools to support legal work, get written consent, and document the consent in the matter file. Re-confirm with the client before any use that involves a non-BAA tool or any unusually sensitive content. The Justia 50-state survey is a useful starting point for the rule in your jurisdiction.

6. Billing and AI-assisted time

Decide how AI-assisted work shows up on the bill. ABA Model Rule 1.5 governs reasonable fees, and recent state bar opinions are consistent: a lawyer cannot bill for time the AI saved as if a human did the work. The simplest position is to bill for actual lawyer time spent on the matter (drafting prompts, reviewing output, verifying, revising) and exclude the model’s processing time. If the firm uses fixed fees or flat fees, the policy should say so and remove the issue.

7. Training and supervision

Map this section to ABA Model Rules 5.1 (responsibilities of partners) and 5.3 (responsibilities regarding nonlawyer assistance). Set a cadence for AI training: an onboarding session for every new hire, a refresher for all staff once a year, and a written acknowledgment that each person has read the policy. Name the firm’s AI Lead (often the managing partner, sometimes the IT manager or office manager) and the escalation path when a staff member sees a problem.

An optional eighth section: incident response. What does the firm do when an AI tool outputs a fabricated citation that almost gets filed, or when a staff member pastes client information into a public chatbot? Define notification, containment, and after-action review steps. A small firm can keep this section short.

A copy-pasteable AI policy template

Paste the block below into Word, replace the bracketed fields with your firm’s specifics, and adjust the tool lists to match what your firm uses. The version that ends up in your handbook will need a lawyer’s read for jurisdiction-specific clauses. This is a starting draft, not legal advice.

[FIRM NAME] ARTIFICIAL INTELLIGENCE USE POLICY
Effective: [DATE]
Owner: [MANAGING PARTNER OR DESIGNATED AI LEAD]

1. Purpose

This policy governs the use of artificial intelligence tools by all personnel of [FIRM NAME], including partners, associates, paralegals, administrative staff, contract attorneys, and outside consultants performing work for the firm. The purpose is to (a) capture the productivity benefits of AI, (b) protect client information, (c) maintain compliance with applicable ABA Model Rules and the rules of professional conduct in [STATE], and (d) provide a clear process when questions or incidents arise.

2. Scope

This policy applies to all AI tools used in connection with firm work, including generative chat assistants (such as ChatGPT, Claude, Gemini, Microsoft Copilot), legal-specific AI products (such as CoCounsel, Spellbook, Lexis+ AI, Clio Duo), AI features embedded in case management or document management software, and any tool that processes firm or client content using a machine learning model. It applies to work-issued accounts and to any personal account used for firm work.

3. Approved tools

Personnel may use only the AI tools listed in Appendix A (Approved Tools), at the account tier specified. The AI Lead maintains Appendix A and reviews it quarterly. To request the addition of a new tool, submit the vendor name, intended use case, account tier, and the vendor's data handling terms to the AI Lead.

Use of tools not on the approved list is prohibited until the tool is reviewed and added.

4. Confidentiality and client information

Personnel shall not enter client identifiers, privileged communications, or sensitive case content into any tool that does not carry a written vendor agreement that includes a no-training clause covering firm inputs. For matters involving protected health information, the tool must carry a signed Business Associate Agreement before any unredacted record is processed.

When in doubt, redact first. Strip names, addresses, dates of birth, account numbers, financial identifiers, and any other detail that could identify the client or the matter before pasting content into the tool. Document the redaction step in the matter file.

5. Verification and human review

Every work product produced with AI assistance shall be reviewed by an attorney competent in the subject before delivery to a client, filing with a court, or transmission to opposing counsel. Verification includes:

- Confirming every cited case, statute, regulation, and other authority by retrieving the source independently
- Reconciling every numerical figure (dates, dollar amounts, page citations) against the underlying records
- A second-attorney review for any pleading, motion, or document filed in court

The reviewing attorney remains responsible for the final work product under ABA Model Rule 1.1 (competence).

6. Disclosure and client consent

The firm's engagement letter shall include a clause notifying the client that the firm uses AI tools to support legal work and requesting the client's written consent. The clause shall reference this policy by name.

For any use that would involve uploading unredacted client content to a tool not covered by a BAA or written vendor agreement, the responsible attorney shall obtain matter-specific written consent before proceeding and document the consent in the matter file.

7. Billing

The firm bills for actual attorney and staff time spent on a matter, including time spent drafting AI prompts, reviewing AI output, verifying source material, and revising drafts. The firm does not bill for AI processing time as if it were attorney time. Fixed-fee and flat-fee matters follow the engagement letter terms.

8. Training and supervision

Every new hire shall complete AI policy training within the first 30 days of employment. All personnel shall complete a refresher annually, including a signed acknowledgment that they have read and understood this policy. The AI Lead is responsible for the training program and for supervising compliance under ABA Model Rules 5.1 and 5.3.

9. Incident response

If any person covered by this policy identifies a fabricated citation in AI output before filing, an inadvertent disclosure of client information to a non-approved tool, or any other AI-related incident, that person shall:

a. Stop work on the affected output immediately
b. Notify the AI Lead within 24 hours
c. Preserve any logs or transcripts of the AI session
d. Cooperate with the after-action review

The AI Lead, in consultation with the managing partner, determines whether client notification, malpractice carrier notification, or bar reporting is required.

10. Policy review

The AI Lead reviews this policy at least annually and proposes revisions to the managing partner. Material updates to the approved tools list, regulatory changes, or significant incidents may trigger an off-cycle review.

Appendix A: Approved Tools

Tool | Account tier | Approved uses | Notes
[Tool 1] | [Tier] | [Uses] | [BAA on file: Y/N; no-training: Y/N]
[Tool 2] | [Tier] | [Uses] | [Notes]

Appendix B: Prohibited Uses

- Free-tier consumer AI tools for any content containing client identifiers
- Any tool that does not carry a written vendor agreement covering firm inputs
- AI-generated work product delivered to a client, court, or opposing counsel without attorney review
- AI-generated citations filed in court without independent verification
- Any use that violates the rules of professional conduct in [STATE]

Acknowledgment

I have read and understand the [FIRM NAME] Artificial Intelligence Use Policy.

Name: __________
Date: __________
Signature: __________

Rolling the policy out without a revolt

The hardest part of this work is not writing the policy. It is the moment a partner who has been pasting client emails into a free chatbot for six months reads the redact-first rule and feels personally targeted. Plan the rollout to head that off.

Start with a partner-only conversation a week before the all-hands. Walk the partners through the seven sections, the rationale, and the ABA citations. Get partner agreement on the approved tools list (the section that creates the most friction) before any staff sees the policy. If a partner wants a specific tool added, capture the request and run the vendor review.

At the all-hands, present the policy as a productivity enabler rather than a restriction. The point of the approved tools list is to make AI use easier, not harder. Show the staff how to find the approved list, how to submit a new-tool request, and who the AI Lead is. Hand out the acknowledgment form and collect signatures the same day. Set a clear date for the first round of training.

In the first 90 days after rollout, expect to add tools to the approved list rather than remove them. Partners and senior associates will surface use cases they have been working on quietly. Each new tool gets reviewed and added or rejected. The policy itself does not need to change for each addition.

For tool selection guidance when building Appendix A, see best AI tools for law firms.

Where the policy sits in the firm handbook

Most small firms keep this policy in the technology or operations section of the employee handbook, next to the existing IT acceptable-use policy and the data security policy. The three together cover what staff can install, what they can save and send, and what they can paste into a model. They are written to be read together.

Reference the AI policy in the engagement letter as the source of the disclosure clause. Reference it in the new-hire onboarding checklist. Reference it in the case management software’s documentation of AI features. The policy works when it shows up in the places staff already look.

Confirm any compliance position in this article with your state bar before relying on it for your firm. Bar guidance on AI evolves quickly and is not uniform across jurisdictions. This article is not legal advice.
