If you're a lawyer, you're probably feeling the pressure of AI moving into legal practice faster than the rules can keep up. The headlines shift weekly. State bars keep issuing new opinions. You want clear, practical guidance - not hype, not fear. This guide walks you through what's actually changing, how to verify AI output, and how to talk to clients about AI in their matters. Let's separate the real requirements from the noise.
Why AI Ethics for Lawyers Is No Longer Optional
State bars are no longer watching from the sidelines. As of early 2026, a majority of state bars have issued ethics opinions addressing AI use in legal work. These opinions don't just suggest best practices - many mandate them. The New York State Bar Association, for example, has guidance requiring lawyers to verify the accuracy of AI-generated legal conclusions before relying on them.
Clients are also asking questions. Corporate clients increasingly want to know whether AI is being used in their matters, and under what safeguards. If you're not disclosing this proactively, you risk running afoul of your duty of candor and your duty to keep clients reasonably informed.
The stakes are concrete. Lawyers in multiple jurisdictions have been sanctioned for submitting briefs with AI-fabricated citations - the now-infamous Mata v. Avianca matter being the earliest widely reported example. The pattern is the same each time: a lawyer trusted AI output without checking it. Courts have treated that as a breach of competence under Model Rule 1.1.
The takeaway is simple. You can't hand off your professional responsibility to a machine. For a closer look at the citation problem specifically, see our guide on hallucinated citations and how to catch them.
Three Core Ethical Duties Affected by AI
Three pillars of legal ethics now touch every AI decision you make: competence, confidentiality, and communication.
1. Competence - Model Rule 1.1. Competence now includes understanding the AI tools you use. If you use Lexis+ AI to draft a contract, you need to know its limits. Does it flag jurisdiction-specific rules? Does it reliably catch conflicts in precedent? If you can't answer those questions, you haven't met the standard.
2. Confidentiality - Model Rule 1.6. When you paste client information into a cloud-based AI tool, you're handing that data to a third party. Before uploading anything sensitive, confirm the vendor encrypts data in transit and at rest, doesn't train on your inputs, and offers a written business associate or data protection agreement. This topic has its own depth - our pillar on protecting client confidentiality with AI tools walks through vendor vetting in detail.
3. Communication - Model Rule 1.4. If you use AI to summarize depositions or predict case outcomes, you owe clients a clear explanation of what the tool can and can't do. Plain language, not jargon. A client who hears "the model is calibrated at 70% accuracy" needs context: what that number covers, and what it doesn't.
How to Verify AI-Generated Legal Content
Verification isn't optional. Here's a process that scales:
- Check every citation against primary sources. If an AI tool cites Smith v. Jones, pull the case in Westlaw or Lexis yourself. Confirm the case exists, the jurisdiction matches, and the holding is real.
- Read the opinion - don't trust the summary. AI summaries routinely omit dissents, procedural posture, or key facts that change the outcome. A two-paragraph summary of a thirty-page opinion will miss something. Read the parts you plan to rely on.
- Document your verification. If you're ever audited or challenged, you need a paper trail. A short note in your matter file works: "April 5, 2026 - Verified AI summary of State v. Brown against Westlaw. AI omitted dissent; adjusted brief accordingly."
This adds time, but it's manageable. Think of it as the legal equivalent of cite-checking a brief before it goes out - routine, necessary, and the thing that separates a clean filing from a sanctionable one.
Client Communication About AI Use
Clients have a right to know when AI is involved in their case. Three practices cover most situations:
Disclose AI use upfront. A single sentence in your engagement letter does the work: "This firm uses AI tools to support tasks like legal research and document drafting. All AI-assisted work is reviewed by a licensed attorney."
Offer an opt-out where appropriate. Some clients will push back, especially in sensitive matters. Respect that. Explain the trade-off - manual-only workflows may take longer or cost more - and let the client decide.
Explain limitations in plain English. Skip "the model lacks contextual reasoning." Say: "The AI may miss nuance in your matter. I review every suggestion before acting on it." Clients trust lawyers who are honest about limits.
This approach satisfies Model Rule 1.4 and builds trust. It also parallels what other regulated professionals are facing - see how K-12 teachers are handling similar disclosure expectations under district AI mandates.
Tools That Help With AI Ethics Compliance
A few specific tools make compliance easier rather than harder.
Lexis+ AI has a verification layer that flags citations it can't confirm in the Lexis database. If a citation comes back unconfirmed, you know to check it manually before it goes in a brief.
Spellbook logs AI input, output, and your edits as you draft. If your state bar or a client asks for proof of your process, you can export a timestamped trail of what the AI suggested and what you kept.
A simple compliance checklist for every matter:
- Did I verify every AI-generated citation against primary sources?
- Did I disclose AI use to the client?
- Did I document my verification steps?
That checklist, stapled to the front of your matter workflow, is often enough to catch problems before they reach a filing.
For a broader view of where AI fits across legal practice, start at our AI for lawyers hub.
What to Try Next
You've just read the rules, the verification process, the client conversation, and the tools. Here's one concrete step for this week: audit the AI tools you're already using. Do they let you verify output? Do they protect client data? If you can't answer yes to both, you have a gap worth closing.
A partner at a mid-size firm did this audit last month. She discovered that paralegals were using free ChatGPT for initial legal research - a practice that violated their bar's 2025 ethics opinion. Rather than panic, she moved those tasks to the firm's Lexis+ AI subscription, which flags unconfirmed citations and logs every query. Two days of auditing caught a liability before it could metastasize.
Then take it a step further. If your firm has a compliance officer, book a 20-minute meeting and walk through this piece together. If you are the compliance officer, share it with your team.
The rules won't pause while you catch up. A few hours of preparation now protects your clients, your license, and your practice.
Frequently Asked Questions
- What AI ethics rules apply to lawyers?
- Model Rules 1.1 (competence), 1.6 (confidentiality), and 1.4 (communication) all apply directly to AI use, alongside state-specific ethics opinions.
- How do I verify AI-generated legal content?
- Check every citation against Westlaw or Lexis, read full opinions rather than AI summaries, and log your verification steps in writing.
- Do I need client consent to use AI?
- Many state bars now require disclosure, and some require informed consent. Include an AI clause in your engagement letter to be safe.
- Can AI use lead to disciplinary action?
- Yes. Lawyers have been sanctioned for citing fabricated AI cases or failing to verify AI output. The duty of competence is non-delegable.
- What AI tools are bar-compliant?
- Tools like Lexis+ AI and Spellbook include verification and audit features. No tool is automatically compliant - your review process makes it so.
- How often do bar ethics rules change?
- Since 2023, most state bars have issued at least one AI-specific opinion. Expect updates annually. Check your state bar website quarterly.