Clio’s Legal Trends Report and other recent industry surveys show that a majority of small and solo law firms are adopting AI. The American Bar Association issued its first formal ethics opinion on generative AI in July 2024. The largest firms have built or bought tools that automate work that used to fall on associates. The question for solo and small-firm lawyers in 2026 is no longer “should we use AI.” It’s “where do we start, and how do we not get burned?”

This guide is for the lawyer running a 1- to 25-person practice who has tried ChatGPT once or twice, suspects there's something there, and wants a working answer rather than a vendor pitch. I've spent years working with law firms, first as a project manager on their websites and now as an SEO consultant for legal clients, so the angle here is practitioner-level rather than tool-vendor or BigLaw-partner. I'll point at specific tools, but I don't sell them.

What’s covered: the five workflows where AI earns its keep in a small practice today, the tools to consider for each one, the risks the ABA wants you to think about, and a 30-day plan to actually start. Honest take throughout: AI is real for a specific set of tasks. It is not replacing your judgment, your client relationships, or your knowledge of local court practice. It is, in 2026, going to change how a competent attorney spends their week.

Five AI workflows that earn their keep at a small firm

These are the workflows where AI saves real hours today, ranked by where I see the biggest gap between effort and value at small practices.

1. Legal research with citation-grounded tools

Generic ChatGPT is not a legal research tool. It hallucinates case citations confidently, the failure mode that led to sanctions in Mata v. Avianca and a growing list of follow-on cases. The fix is not to avoid AI for research; it's to use a tool grounded in real case law. Three products are worth a look in 2026:

  • Lexis+ AI: deep integration with the Lexis case database. Strong for jurisdictions where Lexis has good coverage. Built-in citation verification.
  • Westlaw Precision AI: Thomson Reuters’ answer to Lexis+ AI. If your firm already pays Westlaw, the AI features are part of recent subscription tiers.
  • CoCounsel: also Thomson Reuters (it absorbed Casetext in 2023). Good for memo drafting from research, less so for raw case lookup.

Most solo and small firms can’t afford all three and don’t need to. The decision usually comes down to whatever case database you already pay for. If your subscription is up for renewal, ask the vendor what’s included in the AI tier and whether their citation verification flags hallucinations or just summarizes results.

Sample prompt for memo drafting once you have the cases:

Act as a legal research associate. Draft a research memo on [LEGAL ISSUE] in [JURISDICTION].
Use only the cases I provide below. Do not introduce new citations.

For each case I provide, identify: (1) the holding relevant to [LEGAL ISSUE], (2) the procedural posture, (3) any limiting language. Then synthesize the holdings into a position our firm can take.

If the cases conflict, say so. If a case does not address [LEGAL ISSUE], say so.

Cases:
[PASTE CASE 1 FULL TEXT]
[PASTE CASE 2 FULL TEXT]

You feed it real cases you’ve pulled from your research database. The model summarizes and synthesizes; it does not invent. This pattern works in ChatGPT, Claude, or any frontier model.

2. First-draft contracts, demand letters, and routine motions

Drafting is where AI saves the most time per dollar at a small firm. The trick is using AI to produce a first draft from your firm’s templates and your facts, then doing the lawyer work on top of that. The model is the associate that doesn’t exist yet at a 4-person firm.

Three tool categories to consider:

  • Spellbook: contract-specific. Lives inside Word. Drafts and redlines clauses, suggests fallback language, flags risk. Strong for transactional practices.
  • Frontier general models (Claude, ChatGPT, Gemini): better than you’d expect for demand letters, client letters, and first drafts of routine motions if you give them your firm’s template and your facts.
  • CoCounsel: legal-specific drafting that stays grounded in case law as it writes.

Sample prompt for a demand letter:

Act as a paralegal at our firm. Draft a demand letter using our firm's template, which I'm pasting below.

Facts:
[CLIENT NAME], represented by our firm, [BRIEF FACTS OF DAMAGES SUFFERED]. The opposing party is [OPPOSING PARTY]. We are seeking [DOLLAR AMOUNT] for [SPECIFIC DAMAGES].

Tone: firm and professional, not aggressive. We want them to settle, not litigate.

Our template:
[PASTE FIRM TEMPLATE]

Output the letter in full. Do not include placeholder brackets in your output; use the facts above. Flag any factual gaps you noticed in a separate "QUESTIONS" section after the letter.

The “QUESTIONS” section at the end is the move that makes this work. The model surfaces the things you forgot to tell it instead of inventing details. You answer the questions, run it again, edit, and you have a draft you can send to your partner for review in 30 minutes instead of 3 hours.

Confidentiality warning before you do this: redact client identifiers and case numbers if you’re using free-tier ChatGPT, Claude, or Gemini. Their default consumer terms allow inputs to be used for training in some configurations. For real client work, use the enterprise tiers (ChatGPT Team or Enterprise, Claude for Work, Microsoft Copilot for Business) or run a local model. ABA Formal Opinion 512 expects you to think about this before you paste anything.
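If you want a first line of defense before anything touches a consumer tool, a short script can scrub the obvious identifiers. This is a minimal sketch with made-up patterns, not a compliance tool: case-number formats vary by court, no regex catches every identifier, and a human still reviews before pasting.

```python
import re

# Hypothetical redaction sketch. The patterns below are illustrative only;
# adapt them to the identifier formats your jurisdiction actually uses.
PATTERNS = [
    (re.compile(r"\b\d{2}-?[A-Z]{2}-?\d{4,6}\b"), "[CASE NO.]"),      # example case-number format
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                  # social security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),          # email addresses
    (re.compile(r"\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"), "[PHONE]"),    # US phone numbers
]

def redact(text: str) -> str:
    """Replace matched identifiers with placeholders, in order."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Reach Jane at jane.doe@example.com or (555) 123-4567 re 24-CV-01234."))
# → Reach Jane at [EMAIL] or [PHONE] re [CASE NO.]
```

The placeholders also make it easy to spot, on review, exactly what was withheld from the model.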

3. Deposition and transcript summarization

If your practice involves depositions, AI summaries are the cleanest win on this list. A 200-page deposition transcript that used to take a paralegal 4 hours to summarize takes 10 minutes when you feed it to Claude or ChatGPT with the right prompt. Output quality is in the 80-90% range, which is plenty for a first pass that the attorney edits.

Tools: any frontier model handles transcript summarization well. Claude has the longest context window (good for one-shot 300-page transcripts). ChatGPT-5 is competitive. Gemini handles multi-document context cleanly. CoCounsel has a deposition-specific feature for firms that want a legal-trained pipeline.

Sample prompt:

Act as a senior paralegal. Summarize the attached deposition transcript.

Output structure:
1. WITNESS BACKGROUND (one paragraph)
2. KEY ADMISSIONS (numbered list, with page:line citations for each)
3. CONTRADICTIONS WITH PRIOR STATEMENTS (only if any are present in the transcript itself)
4. AREAS WHERE WITNESS WAS EVASIVE (only if the transcript shows refusals or non-answers; cite page:line)
5. OPEN QUESTIONS for follow-up depositions or trial cross

Do not invent admissions or contradictions. If a section has nothing to report, write "None observed."

Transcript:
[PASTE TRANSCRIPT]

The “do not invent” instruction matters. Without it, models will helpfully synthesize “contradictions” that aren’t there. With it, they tend to be honest.

4. Client intake screening and routine email drafting

Two related workflows that share a tool stack. Client intake screening: a prospect fills out an intake form (or sends an email), and AI helps you decide quickly whether the matter fits your firm’s practice areas, conflicts, and budget. Email drafting: AI handles the routine “thank you for reaching out,” “here’s our retainer agreement,” and “here’s what documents we need” outreach that eats your week.

Tools: Clio Duo (built into Clio’s case management platform) handles intake well if you’re already on Clio. MyCase IQ is the equivalent inside MyCase. For firms that don’t use a case management platform, a frontier model with a saved intake-screening prompt works fine.

Sample intake-screening prompt:

Act as a senior paralegal screening a prospective client.

Our firm practices: [LIST PRACTICE AREAS].
Our firm does NOT take: [LIST WHAT YOU EXCLUDE].
Our minimum case value: [DOLLAR AMOUNT or "no minimum"].
Conflicts to flag: [LIST any opposing parties from current matters].

Prospective client message:
[PASTE THE PROSPECT'S MESSAGE]

Output:
1. PRACTICE AREA FIT: yes/no/maybe (one sentence why)
2. CONFLICTS FLAGGED: yes/no (with names if yes)
3. RED FLAGS: any unusual elements (urgency, unrealistic expectations, prior counsel, statute of limitations issues)
4. SUGGESTED NEXT STEP: book consultation / decline / request more info
5. DRAFT INITIAL REPLY (3-5 sentences) the attorney can edit before sending

The screening is the value. The draft reply is gravy. Together they cut the time from “prospect sends email” to “attorney responds appropriately” from a day to about ten minutes.

5. Internal knowledge and firm-policy reference

The fifth workflow is the one that’s least visible from outside the firm and quietly the most valuable: internal knowledge. Your firm has a policy manual, a list of procedures, prior memos, brief banks, intake scripts. AI gives you a way to query that knowledge in plain English instead of grepping through a SharePoint folder.

Two ways to do this:

  • Lightweight: paste the relevant document into a frontier model and ask. Works for one-off questions. No setup. Confidentiality: depends on which tier you’re using.
  • Real: a retrieval-augmented setup where your firm documents live in a vector database and the model has access to all of them. Requires either a vendor (CoCounsel for legal-specific knowledge management, Glean or Notion AI for general firm knowledge) or a developer to set up. The payoff is significant for any firm with more than five lawyers and a real document base.
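To make the "real" option concrete, here is a toy sketch of the retrieval idea using nothing but the Python standard library: score each firm document against a question by word overlap and hand the best match to the model as context. Document names and contents are invented; production setups replace word counts with embeddings and a vector database.

```python
from collections import Counter
import math

def tokenize(text: str) -> Counter:
    # Bag-of-words with naive punctuation stripping.
    return Counter(w.lower().strip(".,?!") for w in text.split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_match(question: str, documents: dict[str, str]) -> str:
    # Return the name of the document most similar to the question.
    q = tokenize(question)
    return max(documents, key=lambda name: cosine(q, tokenize(documents[name])))

# Invented firm documents for illustration.
docs = {
    "billing-policy.txt": "Time entries are recorded in six-minute increments and billed monthly.",
    "intake-script.txt": "Ask the prospective client for opposing party names to run conflicts.",
}
print(best_match("How do we record billable time?", docs))  # → billing-policy.txt
```

The retrieved document then gets pasted into the lightweight prompt below, so the model answers from your policy rather than from its training data.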

Sample prompt for the lightweight version:

Act as a senior associate at our firm. I'm asking a question about firm procedure.

Our firm's relevant policy:
[PASTE POLICY DOCUMENT]

My question:
[YOUR QUESTION]

Answer in plain English. If the policy doesn't address my question, say so directly. Do not invent firm procedures the policy doesn't cover.

The “do not invent” instruction is the same theme. Models default to helpful confabulation; explicit anti-confabulation instructions get you back to honest output.

The risks the ABA wants you to think about

The American Bar Association’s Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on July 29, 2024. It is the first formal ethics opinion specifically on generative AI in legal practice. If you take one piece of homework from this guide, read that opinion before you roll out AI inside your firm. The full PDF is short and direct.

The opinion maps generative AI to four Model Rules of Professional Conduct that already exist:

Competence (Model Rule 1.1)

You have a duty of “legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation,” which the ABA reads to require understanding the benefits and risks of any technology you use to deliver legal services. In practice: you can’t just use a tool because someone told you to. You need to know what the tool does, where it’s likely to be wrong, and how you’ll catch it when it is.

Confidentiality (Model Rule 1.6)

Client information is confidential regardless of source. The ABA opinion expects lawyers using generative AI to consider whether the tool’s terms allow client data to be used for training, whether the data is encrypted in transit and at rest, who has access on the vendor side, and whether informed client consent is needed for some uses. Free-tier consumer AI products usually fail several of these tests. Enterprise tiers and locally-run models clear most of them, but you still need to read the actual terms for each tool.

Fees (Model Rule 1.5)

The opinion is specific: you can charge a client for the time you spend using AI on their matter, including the time spent reviewing AI output for accuracy. You generally cannot charge for the time you spent learning how to use the AI tool. If you bill on a flat fee and AI cuts your hours, the ABA does not require you to refund the difference, but rules in some states are stricter on this. Check your state.

Supervision (Model Rules 5.1 and 5.3)

Managing partners must establish clear policies on permissible AI use. Supervisory attorneys must train associates and staff on ethical and practical use, and must verify compliance. If your firm has staff using AI ad hoc with no written policy, you have a supervision exposure right now.

State guidance varies

Florida, California, New York, New Jersey, and Pennsylvania bars (among others) have issued opinions of their own that go further than the ABA in places. The Justia 50-state survey of AI and attorney ethics rules is the cleanest reference for what your specific bar has said. Read your state’s opinions before relying on the ABA national framing.

One last risk that's not in the opinion but should be on your radar: hallucinated case citations. Mata v. Avianca is the cautionary example: the attorneys were sanctioned for filing a brief citing cases that didn't exist, and there have been a half-dozen follow-on cases since. The fix is not to avoid AI; it's to verify every citation before it leaves your office.

Nothing here is legal advice. Confirm any of this with your state bar before you rely on it.

A 30-day adoption plan you can actually run

Most “AI adoption strategies” you’ll find are written for enterprise IT departments running six-month rollouts. Solo and small firms don’t operate that way. This plan is built for a firm of 1-25 lawyers that wants to test AI seriously in one calendar month without disrupting client work.

Week 1: audit and policy

Document who at the firm is already using AI and how. You probably have one or two associates running things through ChatGPT off their personal accounts. That’s a confidentiality exposure. Then write a one-page firm AI policy covering: which tools are approved, what client data can and can’t go into them, who reviews AI output before it leaves the firm, and how the firm pays for the tools. The policy doesn’t need to be perfect. A bad policy is better than no policy.

Week 2: pick one workflow, one tool, one pilot

From the five workflows above, pick the one that hits your firm hardest. For most small practices, that’s drafting (workflow 2) or deposition summaries (workflow 3). Pick one tool from the list and run a small pilot with one or two attorneys. Don’t try to roll out AI across the whole firm in week 2.

Week 3: measure and document

For the pilot workflow, track time saved, error rate, and client outcomes if visible. Note what the model got right, what it got wrong, and where it required heavy editing. The numbers won’t be perfect, but you’ll have a real basis for the week 4 decision instead of vibes.
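One concrete way to run the tally, assuming you log each pilot task as it happens (all task names and numbers below are illustrative):

```python
# Hypothetical week-3 pilot log: AI-assisted hours, your usual baseline
# hours, and whether the output needed a substantive correction.
pilot_log = [
    {"task": "demand letter", "ai_hours": 0.5, "baseline_hours": 3.0, "error": False},
    {"task": "depo summary",  "ai_hours": 0.3, "baseline_hours": 4.0, "error": True},
    {"task": "demand letter", "ai_hours": 0.6, "baseline_hours": 3.0, "error": False},
]

hours_saved = sum(e["baseline_hours"] - e["ai_hours"] for e in pilot_log)
error_rate = sum(e["error"] for e in pilot_log) / len(pilot_log)
print(f"Hours saved: {hours_saved:.1f}, error rate: {error_rate:.0%}")
# → Hours saved: 8.6, error rate: 33%
```

A spreadsheet works just as well; the point is that the week 4 decision runs on two numbers, not impressions.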

Week 4: scale, redirect, or stop

Three honest possible outcomes. Scale: the pilot worked, the firm should expand to the other attorneys and add a second workflow. Redirect: the workflow was wrong but the tool showed promise on a different task; pivot. Stop: the tool didn’t fit, the workflow didn’t fit, the data isn’t there. All three outcomes are useful. The point of a 30-day pilot is to find out, not to commit to a multi-year contract before you know whether it works.

One thing to skip in month one: building a custom AI assistant or hiring a consultant to “transform the firm.” Both can be legitimate later. Neither is a month-one move.

Business AI Workflows for Law Firms

Start here for an overview of business AI workflows for law firms. The articles below explore specific tools, workflows, prompts, and practice areas in more detail. New articles publish weekly, so some links may currently point to upcoming pages.


If you’re new to the site and want to start with one piece besides this one, read AI legal research tools. It’s the highest-leverage workflow for a small firm in 2026 and the article most likely to change how you spend a billable hour next week.