You are a lawyer who has heard the buzz about AI tools like ChatGPT or Harvey, and you feel uneasy using them. You have read the stories about fake cases, and you wonder if one misplaced citation could land you in front of a judge. That worry is reasonable. The legal profession runs on precision, and AI's tendency to invent citations feels like a direct threat to everything your training taught you to protect.
Here is the honest part: AI can and does invent plausible-sounding cases that do not exist. But you are not defenseless. This guide shows you how to catch hallucinated citations before they leave your office, which tools carry less risk, and what your state bar probably already expects of you. If you want the broader map of AI issues facing attorneys, start with our AI guide for lawyers.
Why AI Invents Cases That Do Not Exist
You have probably noticed that AI tools sound confident even when they are wrong. That is because large language models like ChatGPT or Gemini do not "know" facts the way you do. They predict the most statistically likely next word in a sequence. Ask one to name a New York case on contract disputes, and it may generate a case name, a reporter citation, and a holding that sound convincing but have no basis in reality.
Two things drive this. First, general-purpose chatbots were trained on broad internet text, not a verified corpus of current case law. Second, a confident tone is not the same as a correct citation. The model is not verifying anything. It is generating text that sounds verified.
The lesson is simple. Tone is a red herring. If you treat an AI's certainty as evidence of accuracy, you will eventually file something that embarrasses you.
Two Sanctions Cases You Should Know
Mata v. Avianca is the case every lawyer has heard about by now. A New York attorney used ChatGPT to draft a federal court brief and cited cases that did not exist. Opposing counsel could not find them in Westlaw. The judge sanctioned the attorneys and imposed a $5,000 penalty, and the order became required reading in CLE programs nationwide.
Park v. Kim followed a similar pattern in the Second Circuit. An attorney cited a nonexistent case in her reply brief. The court referred her to the grievance panel and made clear that confirming citations is a baseline professional duty, not an optional check.
The pattern across these rulings is consistent. Judges are not impressed by "the AI told me so." Your duty to verify does not shrink when a tool makes drafting faster. If anything, the ease of generating text raises the standard of care, because the cost of checking is low.
A 5-Minute Verification Workflow That Actually Works
You do not need to abandon AI tools. You need a habit. Here is a workflow that takes about five minutes per citation.
1. Pull every citation into Westlaw or Lexis before filing. Do not just search the case name. Paste the full citation (for example, Smith v. Doe, 123 F.3d 456 (5th Cir. 2001)) into the search bar. If nothing returns, the case is fake. If a case returns but the reporter volume or page is off, treat the whole citation as suspect.
2. Confirm the holding, not just the name. A real case can still be misrepresented. If the AI says EEOC v. Acme Corp. held for the employer but the actual opinion held for the employee, that is a misstatement of authority - which is often worse than a fake cite because it survives a superficial check.
3. Flag anything you cannot independently verify. If the AI cites a state case from a jurisdiction you do not practice in, cross-check with a local colleague or a bar-provided research service before relying on it. Write "VERIFY" next to any citation you have not personally pulled, and do not clear the flag until you have.
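If you draft in plain text, the flagging step above can even be semi-automated. The sketch below is purely illustrative and assumes a hypothetical helper, `build_verify_list`: it uses a simple pattern to pull reporter-style citations out of a draft and prefix each with "VERIFY". It covers only a few common federal reporters and is nowhere near a full Bluebook parser; the point is the habit, not the script.

```python
import re

# Illustrative sketch (hypothetical helper): find reporter-style
# citations like "123 F.3d 456" so each can be checked by hand in
# Westlaw or Lexis. Covers a handful of federal reporters only.
CITATION_PATTERN = re.compile(
    r"\b(\d{1,4})\s+"                                   # volume
    r"(U\.S\.|S\. Ct\.|F\. Supp\. \d?d?|F\.\d?d?)\s+"   # reporter
    r"(\d{1,4})\b"                                      # first page
)

def build_verify_list(draft_text: str) -> list[str]:
    """Return every citation found, prefixed with VERIFY."""
    return [
        f"VERIFY: {m.group(0)}"
        for m in CITATION_PATTERN.finditer(draft_text)
    ]

draft = "See Smith v. Doe, 123 F.3d 456 (5th Cir. 2001); cf. 410 U.S. 113."
for item in build_verify_list(draft):
    print(item)
```

A script like this only builds the checklist; clearing each VERIFY flag still means a human pulling the case and reading the holding.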
Five minutes per citation is cheap compared to a sanctions order or a disciplinary referral. Pair this habit with solid client confidentiality practices and you have covered the two biggest AI risks in practice today.
Tools Built for Law That Reduce Hallucination Risk
You might be wondering whether AI tools designed for lawyers are safer. The answer is yes, with a caveat: they still need your oversight.
Lexis+ AI and Westlaw Precision AI use retrieval-augmented generation, which means the AI searches the actual Lexis or Westlaw database first, then drafts an answer grounded in what it found. Citations link back to real cases you can open in one click. This dramatically cuts hallucination rates compared to ChatGPT, but it does not make them zero, and the summaries can still overstate a holding.
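To make the retrieval-first idea concrete, here is a heavily simplified sketch. The database and both function names are made up for illustration; real tools query licensed case-law databases and pass the retrieved text to a language model. What matters is the order of operations: search first, draft only from what the search returned.

```python
# Toy stand-in for a licensed case database, keyed by case name.
# The entries and matching logic are illustrative only.
CASE_DATABASE = {
    "Mata v. Avianca (S.D.N.Y. 2023)":
        "sanctions imposed for citing nonexistent AI-generated cases",
}

def retrieve(query: str) -> list[tuple[str, str]]:
    """Step 1: search the real database for relevant cases."""
    return [
        (cite, summary)
        for cite, summary in CASE_DATABASE.items()
        if any(word.lower() in summary.lower() for word in query.split())
    ]

def answer(query: str) -> str:
    """Step 2: draft only from what retrieval actually found."""
    hits = retrieve(query)
    if not hits:
        return "No supporting authority found; do not cite anything."
    # A real system would hand `hits` to the language model as context;
    # here we just show that every citation traces back to the database.
    return "\n".join(f"{summary.capitalize()}. See {cite}." for cite, summary in hits)

print(answer("sanctions for AI citations"))
```

Because the drafting step never sees anything outside the retrieved results, a fabricated citation has nowhere to come from - though, as the summaries show, the model can still paraphrase a real case inaccurately, which is why verification stays with you.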
Harvey is aimed at law firm workflows and integrates with document management. It is strong at drafting tasks - first-pass memos, clause comparisons, discovery review - but you still need to verify citations it surfaces.
Spellbook focuses on contract drafting rather than case research, so its hallucination risk profile is different. It flags unusual clauses against a contract corpus rather than inventing case names.
No tool removes your duty to verify. Think of these as competent junior associates: useful, fast, and still requiring a partner's review before anything goes out.
What Your State Bar Probably Already Requires
You have heard of ABA Model Rule 1.1, Comment 8 - the duty to stay competent in relevant technology. That now clearly includes AI. Most state bars have gone further with specific guidance, and the pace of new opinions has picked up sharply through 2025.
A few consistent threads across the guidance:
- Verify AI-generated content before filing. This is universal. No bar treats AI output as presumptively reliable.
- Watch confidentiality. Pasting client facts into a public chatbot can waive privilege or breach Rule 1.6 depending on the tool's terms. Enterprise tools with a zero-retention contract are a different story - but read the contract.
- Check your local disclosure rules. Some federal judges now require a certification about AI use in filings. State courts vary.
The rules are changing fast, which is why we track the evolving bar ethics landscape separately. And if you are curious how other regulated professions handle the same tension between AI speed and professional liability, our piece on diagnostic liability for doctors walks through a parallel problem in medicine.
What to Try Next
You now have a clear path. Use retrieval-augmented tools like Lexis+ AI or Westlaw Precision AI for research. Build the five-minute verification habit into every filing. Keep an eye on your state bar's guidance, because it will keep moving.
The goal is not to fear AI. It is to use it the way you would use a sharp new associate - with supervision, clear instructions, and a red pen on the way out the door. You have spent years mastering the law. Adding one more skill - spotting when the AI is bluffing - is well within reach.
If you want a quick gut-check on where you stand, take our two-minute quiz. It will tell you which AI skills to build first based on your practice area and comfort level.
Frequently asked questions
- Can a lawyer be sanctioned for citing an AI-hallucinated case?
- Yes. Judges have fined attorneys for filing briefs with fake AI-generated citations. The duty to verify does not change because a tool is labeled AI.
- How do I check if a case ChatGPT gave me is real?
- Paste the full citation into Westlaw or Lexis. If no result returns, the case is fabricated. Always confirm the holding matches, not just the name.
- Which AI tools are safest for legal research?
- Retrieval-augmented tools like Lexis+ AI, Westlaw Precision AI, and Harvey pull from real databases. They still require you to verify every citation.
- Do I have to tell the judge I used AI?
- It depends on your jurisdiction. Some federal judges now require an AI-use disclosure in filings. Check your court's standing orders before filing.
- Is it ethical to use ChatGPT for legal drafting?
- Drafting is fine. Citing unverified output is not. Your competence duty under Rule 1.1 and confidentiality duty under Rule 1.6 both still apply.
- What is retrieval-augmented generation for lawyers?
- It is an AI setup that searches a real legal database first, then writes based on the actual text it found. This reduces but does not eliminate hallucinations.