AI for Doctors: What's Real, What's Useful, Where to Start

Reviewed by Stephen J. Ronan, MD

A calm guide to AI for doctors. What's actually useful in clinical practice today, what to be cautious about, and where to begin without the hype.

If you trained before "AI" meant anything more than a chess program, the last two years have probably felt disorienting. Patients ask about ChatGPT. Your inbox fills with vendor pitches promising to fix burnout. Colleagues quietly use tools nobody has formally approved. You want to practice good medicine without falling behind - and without trusting software that might hallucinate a drug interaction. That's a reasonable place to stand. Let's walk through what's actually happening, without the noise.

The current reality in clinical practice

Most of what you read about AI in medicine falls into two camps: breathless predictions and worst-case warnings. Neither is very useful at 7 a.m. before rounds.

Here's the grounded version. A handful of AI tools have meaningfully entered day-to-day clinical work. Ambient scribes - software that listens to a visit and drafts a note - are the clearest example. Products like Abridge and Nuance DAX are in use at large health systems, including Kaiser Permanente, with physicians reporting measurable reductions in after-hours charting time. That's not a revolution. It's a keyboard you don't have to touch as much.

Imaging is the other area where AI has real traction. FDA-cleared algorithms now assist with mammography screening, stroke detection on CT, and diabetic retinopathy screening in primary care clinics. Most of these tools flag findings for a human radiologist or clinician to confirm. They don't replace the read - they prioritize the worklist and catch things a tired eye might miss.

Everything else - the predictive models, the clinical decision support, the patient-facing chatbots - lives on a spectrum from "genuinely helpful in specific workflows" to "slideware." The honest answer to "is AI changing medicine?" is: yes, narrowly, in ways you'd recognize as incremental rather than cinematic.

What's actually useful for a practicing doctor

Put the diagnostic AI headlines aside for a moment. The tools most likely to change your next month of work are the boring ones.

Documentation support. If your institution has rolled out an ambient scribe, join the pilot. If it hasn't, a general-purpose assistant like ChatGPT or Claude can help you draft patient education handouts, referral letters, and prior authorization appeals - as long as you never paste identifiable patient information into consumer tools. (For anything touching PHI, use an enterprise version your institution has covered under a business associate agreement, or BAA. This matters.)

Literature triage. Tools like OpenEvidence and Elicit summarize recent studies on a clinical question in minutes. You still read the primary source before changing practice. But getting to the right three papers faster is a genuine time saver when you're trying to answer a specific question between patients.

Differential and reasoning support. Some clinicians use general AI models as a sounding board for atypical presentations - not to get the answer, but to make sure they haven't anchored on one. Used that way, with skepticism, it's similar to curbsiding a thoughtful colleague who has read a lot but hasn't seen your patient.

Administrative drafting. Letters, appeals, committee memos, CME summaries, patient emails that need a gentler tone than you can muster at 9 p.m. This is unglamorous, and it's where most doctors recover the most time.

A reasonable rule of thumb: AI drafts, you decide. The model is a first-year resident with a very wide reading list and no clinical judgment. Treat its output accordingly.

And the cautions worth naming directly: current general-purpose models can fabricate citations, miscalculate doses, and sound confident when wrong. Data privacy is not a theoretical concern - it's a licensure and liability one. Bias in training data affects recommendations across populations your patients belong to. None of this means "don't use these tools." It means verify, and never paste what you wouldn't email.

Where to start without overwhelming yourself

You don't need a strategy. You need one small experiment this week.

Pick a single recurring task that drains you - the kind of thing you do twenty times a month and dread each time. Patient instruction sheets. Appeal letters. Summarizing a complex chart for a consult. Drafting a response to a frustrated message in the portal. Just one.

Then try one tool on one instance of that task. Read the output critically. Edit it. Notice what it got right and what you had to fix. Do this three or four times. You'll develop a real, personal sense of where AI helps you and where it wastes your time - which is more useful than any think piece, including this one.

After that, the questions get easier. Should you learn about prompting? Probably, a little. Should you understand how large language models work under the hood? Only as much as you understood how PACS worked - enough to use it well and spot when it's failing.

The doctors who will navigate this decade comfortably are not the ones who adopted every tool first. They're the ones who stayed curious, stayed skeptical, and kept their clinical judgment at the center. You already have the hardest part - years of training that tells you when something doesn't add up. That instinct is exactly what makes a good physician safe with these tools.

If you'd like a structured path from here, the rest of this site walks through specific tools, the ethical questions worth thinking about carefully, and short lessons designed for clinicians who have fifteen minutes, not fifteen hours. Start wherever your curiosity is strongest. The 2-minute quiz will point you to the right first step.