First AI Class

How Lawyers Can Protect Client Confidentiality with AI in 2026

Reviewed by Stephen J. Ronan, MD



You've spent years building trust with your clients. When you tell them their information is safe, you mean it. So it's fair to wonder: could the same tool that drafts a contract in 30 seconds also accidentally leak a client's secret?

You're not alone in asking. Many lawyers are trying to figure out how to use AI without breaking confidentiality rules or inviting a bar complaint. This guide walks you through the real risks, practical safeguards, and specific tools that let you use AI without holding your breath.

Why Client Confidentiality Matters in AI Use

Attorney-client privilege isn't just a rule. It's the floor your entire client relationship stands on. ABA Model Rule 1.6 requires you to make reasonable efforts to prevent unauthorized disclosure of client information - and that duty follows you into every AI tool you open.

Confidentiality here is broader than documents. It covers facts you know, strategies you're considering, and even the way you phrase a prompt. Typing "draft a response arguing my client, Jane Smith, wasn't at the scene on March 3" into a public chatbot can put identifying facts into a system you don't control.

The ethical question is simple: are you acting as a gatekeeper, or are you handing client information to a vendor you've never vetted? The answer shapes everything that follows.

Common AI Risks to Client Confidentiality

Three risks come up again and again.

General-purpose tools that retain inputs. Free tiers of ChatGPT, Gemini, and Claude may use your inputs to improve the model unless you turn off training in settings or use an enterprise tier. That means client facts could end up influencing future outputs for other users.

Third-party data handling you haven't read. Many AI vendors process data across multiple jurisdictions with different privacy laws. If you haven't reviewed the data processing agreement, you don't know where the data goes, how long it's kept, or who on the vendor's side can access it.

Prompt injection and accidental exposure. If you paste a document containing hidden instructions - or work inside a tool connected to your email or document system - a malicious prompt can sometimes coax the model into surfacing information it shouldn't. This is a known class of vulnerability, not a hypothetical one.

The pattern across all three: assuming a tool is "safe" without checking is the gamble. For more on the related ethics risks, see our guide on hallucinated citations in legal AI.

How to Safely Use AI with Client Information

The good news: you can use AI without risking confidentiality. Three habits carry most of the weight.

Choose tools built for legal work. Platforms like Lexis+ AI, Harvey, and Thomson Reuters CoCounsel offer enterprise terms with no-training clauses, encryption, and access controls. For general drafting, ChatGPT Enterprise and Claude for Work also provide zero data retention options.

Write a one-page data-handling policy. It doesn't need to be long. Name your approved tools. Define what can be shared (public case law, redacted hypotheticals) and what can't (client names, financial details, medical records, case strategy). Require that files be deleted from any tool after use.

Train the people who touch client files. Hold a 30-minute session. Show paralegals and associates how to redact before pasting. Give them the rule you'd want them to remember at 5 p.m. on a Friday: if a client wouldn't want it public, don't put it into AI without checking first.
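The redaction habit above can even be semi-automated. Here is a minimal sketch of a pre-paste scrubbing script; the patterns, placeholder labels, and example names are illustrative assumptions, not a complete PII scrubber, and a human should still review anything before it goes into an AI tool.

```python
import re

# Illustrative patterns only - a real workflow would cover more
# identifier types and be reviewed by a human before anything is sent.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),  # US phone numbers
]

def redact(text: str, client_names: list[str]) -> str:
    """Replace known client names and common identifiers with placeholders."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for pattern, label in REDACTIONS:
        text = pattern.sub(label, text)
    return text

prompt = "Draft a response arguing Jane Smith (jane@example.com) wasn't at the scene."
print(redact(prompt, client_names=["Jane Smith"]))
# -> Draft a response arguing [CLIENT] ([EMAIL]) wasn't at the scene.
```

The point of a script like this isn't perfection; it's making the safe habit the default one, so identifying details are stripped before anyone has to remember the Friday-at-5 p.m. rule.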

What to Do If You Suspect a Breach

Even with precautions, things happen. Here's the order of operations.

First, stop using the tool. Don't keep working while you investigate.

Second, document what happened. What data was entered? When? By whom? Which tool? This record matters for both your malpractice carrier and any disclosure obligation.

Third, notify affected clients. Be direct about what occurred and what you're doing. Many jurisdictions require timely disclosure under Rule 1.4, and delay typically makes things worse.

Fourth, call a cybersecurity professional and your malpractice carrier. They can assess whether data was actually exposed - often it wasn't in the way you feared - and help with next steps.

Fifth, check with your state bar. Some states require reporting; others offer confidential ethics hotlines. Having this protocol written down before a crisis means you won't be improvising when it counts.

AI Tools Built for Legal Work

A few tools worth knowing by name.

Lexis+ AI runs on LexisNexis infrastructure with contractual commitments that customer data isn't used to train foundation models. It's designed for research, drafting, and summarization inside the Lexis ecosystem.

Harvey is used by a number of large firms and offers enterprise-grade data handling, including isolation of firm data and no-training guarantees.

Thomson Reuters CoCounsel (formerly Casetext) offers similar protections and integrates with Westlaw.

Spellbook focuses on contract drafting inside Microsoft Word with enterprise data controls.

None of these removes your ethical duty. Even with encryption and a no-retention clause, you remain responsible for what gets entered and how outputs are used. Think of these tools the way you'd think of a junior associate: capable and useful, but still needing your judgment on every client matter.

Lawyers aren't the only professionals juggling confidentiality with AI. Doctors face similar pressure around patient data in clinical notes - see how physicians are handling EHR burden with AI for a parallel approach from medicine.

State Bar Guidance on AI and Confidentiality

Model Rule 1.6 is the baseline, but guidance is filling in quickly.

ABA Formal Opinion 512 (July 2024) addressed generative AI directly. It confirmed that lawyers must understand the technology well enough to use it competently, protect client information, supervise AI output, and consider how AI affects billing.

State opinions vary. California, New York, Florida, New Jersey, and several other states have issued their own ethics opinions on AI use. Most converge on the same themes: reasonable due diligence on vendors, client consent when appropriate, and human review of AI output. For a broader view of how these rules are changing, see our overview of shifting bar ethics rules on AI.

To find your state's current position, check your bar's ethics opinions page and any AI task force reports. If your state hasn't spoken yet, ABA Opinion 512 is the safest baseline. You can find more persona-specific guidance on our AI for lawyers hub.

One Small Step to Start

You don't need to become an AI policy expert this week. The first step is an audit.

List the AI tools you or your staff currently use. For each one, find the data processing terms and answer three questions: Is my input used for training? Where is it stored? How long is it kept? If you can't get clear answers, that's your signal to either upgrade to an enterprise tier or stop using the tool for client work.

The goal isn't to avoid AI. It's to use it the way you'd use any other vendor: with a signed agreement, a clear policy, and human judgment on top. The next time you open a new tool, ask one question: does this respect the trust my clients put in me? If you can't answer yes with specifics, you already know what to do.

Frequently asked questions

Can I use ChatGPT for legal work with client information?
Not with identifying client details. Standard ChatGPT may retain inputs for training. Either redact everything client-specific or use a legal-specific tool with a no-retention agreement.
How do I know if an AI tool keeps client data private?
Read the data processing agreement. Look for zero data retention, no training on your inputs, encryption in transit and at rest, and a clear data residency policy.
What if an AI tool accidentally shares client info?
Stop using the tool, document what happened, notify affected clients per your jurisdiction's rules, and contact your malpractice carrier and state bar for guidance.
Are there AI tools specifically for lawyers?
Yes. Lexis+ AI, Harvey, Thomson Reuters CoCounsel, and Spellbook are built for legal work with stronger confidentiality controls than general chatbots.
Do I need to get client permission to use AI?
Some state bars now recommend or require disclosure, especially when client data is processed. Check your state's ethics opinion and consider updating engagement letters.
What if my state bar doesn't have AI guidance yet?
Follow ABA Formal Opinion 512 as a baseline. It addresses competence, confidentiality, supervision, and fees as they apply to generative AI.

Related reading