Why Some Clinicians Hate Using AI for Documentation
The promise of artificial intelligence in healthcare is simple: offload the burden of paperwork to AI scribes so clinicians can focus on patients. However, despite this compelling value proposition, a significant number of clinicians remain deeply skeptical, and some outright refuse to use them.
This resistance is not a fear of technology. It is rooted in experience‑based concerns about workflow disruption, clinical accuracy, and the current limitations of the tools being offered. Understanding this apprehension is not about dismissing critics; it is the essential first step in developing tools that actually solve clinical problems rather than creating new ones. This article explores the primary reasons behind clinician resistance to AI scribes and examines what this pushback means for the future of healthcare AI.
The Main Issue: When the Tool Fights the Workflow
For a busy clinician, any tool that does not slide effortlessly into the existing workflow is not a solution; it is another burden.
The "One More Click" Problem
In fast‑paced clinical environments, from emergency departments to busy primary care schedules, workflow momentum is critical. An AI scribe that adds even minimal friction to this process fails its primary purpose.
Clinicians report that poorly designed tools create these issues:
- Context switching: Requiring a clinician to open a separate application, remember to toggle it on and off, or switch between windows to review the output breaks clinical focus.
- Editing burden: If the AI output requires extensive editing to correct errors or match style, the clinician spends just as much time as they would have writing the note manually.
- Cognitive load: The tool becomes a task in itself, something to manage and monitor, rather than an invisible assistant that frees up mental space for patient care.
The consequences of poor integration also include:
- Time Loss: The 4-5 minutes the AI was supposed to save are consumed by manual transfer tasks.
- Error Introduction: A study published in the Journal of Primary Care & Community Health found that excessive data entry and transcription tasks are a major contributor to EHR-related burnout. Manual transfer also introduces:
  - Potential data loss during copy-paste.
  - Formatting errors that obscure clinical meaning.
  - The risk of pasting into the wrong patient chart.
- Downstream Effects: Poorer quality of care and reduced patient satisfaction.
The Training and Configuration Challenges
Beyond the mechanics of daily use, there is the upfront cost of setup. AI scribes require configuration to be useful across different specialties and personal styles.
Clinicians face these configuration challenges:
- Specialty Mismatch: A psychiatrist needs notes structured to capture mental status exams and therapeutic progress, which looks entirely different from an orthopedic surgeon's procedure-focused template or a family physician's chronic care management note.
- Template Training: Some AI tools require users to "teach" them preferred formats such as SOAP (Subjective, Objective, Assessment, Plan) or DAP (Data, Assessment, Plan), which takes time and repetition.
- Terminology Tuning: Specialized vocabulary, from psychiatric terminology to surgical instruments, must be learned by the model, and initial outputs often miss the mark.
When a clinician's schedule is already very busy, investing hours into training an AI that might still make mistakes is a loss.
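To make the configuration burden concrete, the template a clinician must "teach" an AI scribe can be thought of as a per-specialty structure of sections and hints. The sketch below is purely illustrative, assuming a hypothetical configuration format; it is not any vendor's actual schema.

```python
# Hypothetical sketch of a per-specialty note template an AI scribe
# might be configured with; section names follow the SOAP format
# mentioned above (Subjective, Objective, Assessment, Plan).
SOAP_TEMPLATE = {
    "specialty": "family_medicine",
    "format": "SOAP",
    "sections": [
        {"name": "Subjective", "hint": "Chief complaint and HPI, in the patient's own words"},
        {"name": "Objective", "hint": "Vitals, exam findings, lab and imaging results"},
        {"name": "Assessment", "hint": "Diagnoses with clinical reasoning"},
        {"name": "Plan", "hint": "Orders, medications, follow-up, patient education"},
    ],
}

def render_skeleton(template):
    """Produce the empty note skeleton the scribe would fill in."""
    return "\n".join(f"{s['name']}: [{s['hint']}]" for s in template["sections"])

print(render_skeleton(SOAP_TEMPLATE))
```

Every specialty, and often every clinician, needs a different version of this structure, which is exactly the upfront investment the bullets above describe.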
The Accuracy Inconsistency: It Sounds Right, But Is It Wrong?
The most unsettling failure mode of AI clinical notes is output that sounds perfectly plausible but is wrong.
Hallucinations
The term "hallucination" in AI refers to the model generating information that has no basis in the actual encounter. The AI invents a symptom the patient never mentioned, or records a physical exam finding that was never assessed.
The nature of these errors is what makes them so problematic:
- Plausibility: Hallucinations are not random nonsense; they are statistically likely phrases that fit the clinical context. An AI might add "Patient denies shortness of breath" to a note for an asthma follow-up, even if the clinician never asked, because that phrase commonly appears in similar notes.
- Detection Difficulty: Catching a hallucination requires the clinician to read the AI-generated note with the same vigilance they would apply to a note written by a new trainee. They must mentally cross-check every statement against their memory of the encounter.
- Cognitive Cost: The clinician spends mental effort verifying rather than documenting, which defeats the purpose of using an AI tool.
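The cross-checking burden described above can be partially automated by comparing each note sentence against the encounter transcript. The toy function below sketches one naive, word-overlap approach; real grounding checks use semantic matching, and every name here is illustrative rather than taken from any actual product.

```python
import re

def flag_ungrounded(note_sentences, transcript):
    """Naively flag note sentences sharing no content words with the transcript.

    A toy illustration of 'grounding' checks: the failure mode it targets
    is exactly the hallucination problem -- plausible text with no basis
    in the actual encounter.
    """
    transcript_words = set(re.findall(r"[a-z']+", transcript.lower()))
    # Very small stopword list so boilerplate phrasing doesn't count as grounding.
    stopwords = {"the", "a", "of", "and", "patient", "denies", "reports", "no"}
    flagged = []
    for sentence in note_sentences:
        words = set(re.findall(r"[a-z']+", sentence.lower())) - stopwords
        if words and not words & transcript_words:
            flagged.append(sentence)
    return flagged

transcript = "My asthma has been fine, the inhaler helps when I walk the dog."
note = [
    "Patient reports asthma well controlled with inhaler.",
    "Patient denies chest pain.",
]
print(flag_ungrounded(note, transcript))  # flags the chest pain sentence
```

Even this crude check shows why verification is hard: the flagged sentence is grammatical, clinically plausible, and exactly the kind of statement a tired reviewer would skim past.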
Loss of Clinical Nuance and Medical Necessity
AI‑generated notes often capture the what of a visit but miss the why, which is often what matters most for medical decision‑making and justifying reimbursement.
What AI Often Misses
Clinicians report that current AI scribes struggle to capture the following nuanced elements consistently:
- Specific Descriptions: The difference between a "dry, hacking cough" and a "productive cough with thick green sputum" carries diagnostic significance that generic language obscures.
- Patient Communication Cues: A patient's hesitancy when discussing a treatment plan, body language suggesting discomfort with a recommended procedure, or the subtle ways they express concerns; these non-verbal cues often inform clinical judgment but are invisible to audio-based AI.
- Medical Necessity Justification: The specific elements that support a Level 4 visit code over a Level 3, such as the complexity of data reviewed, the severity of presenting problems, or the time spent on counseling, are often the very details that AI summaries omit.
Trust and the "Black Box" Problem
Beneath the practical concerns about workflow and accuracy lies a deeper issue: trust.
Lack of Transparency in AI Decision-Making
Most AI documentation tools operate largely as a "black box." The clinician inputs a conversation, and the system outputs a note. What happens in between is opaque: how the model decides which information to include, how it prioritizes certain statements over others, and how it constructs the assessment and plan.
This lack of transparency conflicts with core clinical values:
- Accountability: A clinician is ultimately responsible for the content of the medical record. If they cannot understand how the AI arrived at a particular section of documentation, they cannot fully endorse its accuracy.
- Auditability: In the event of a malpractice claim or payer audit, the clinician must be able to defend every element of the note.
- Learning and Improvement: When an AI makes an error, the clinician cannot learn from it or adjust their behavior to prevent similar errors in the future, because they do not understand the underlying reason for the failure.
Fear of De-skilling and Loss of Voice
Another concern among experienced clinicians is the fear of professional de‑skilling over time. The process of synthesizing a patient encounter into a coherent narrative forces the clinician to organize their thoughts, identify key findings, and articulate a reasoned plan.
If AI handles this synthesis, some clinicians worry they will lose the skill required to do it themselves. This concern manifests in several ways:
- Loss of Clinical Reasoning: The discipline of writing the assessment and plan is, for many clinicians, where diagnostic reasoning is solidified. Outsourcing this step could weaken that cognitive process over time.
- Homogenization of Notes: Experienced clinicians develop a distinctive "voice" in their documentation. AI-generated notes tend toward a generic style that loses this personal voice.
- Loss of Ownership: A note that the clinician wrote themselves feels like their work product. A note that the AI generated and the clinician simply reviewed feels like “someone else's work” that the clinician is now responsible for.
What This Resistance Reveals About the Future
The skepticism clinicians express toward current AI documentation tools is not a rejection of the technology itself. It is a demand for tools that fit the clinical workflow.
The Market is Demanding "Ambient" Experience, Not Just Transcription
Clinicians need tools that understand clinical context.
The future belongs to AI clinical notes tools that:
- Operate silently in the background, requiring no active engagement from the clinician
- Understand not just what was said, but what it means in a clinical context
- Anticipate documentation needs based on the type of visit and the clinician's patterns
- Flag missing information or inconsistencies proactively, rather than passively recording
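The last point, proactive flagging, can be illustrated with a simple completeness check: given the visit type, warn about required documentation elements that the draft note never mentions. The visit types, required elements, and function below are hypothetical examples, not a real product's rule set.

```python
# Hypothetical sketch of proactive completeness checking: map each visit
# type to documentation elements a payer or guideline expects, then flag
# whichever elements are absent from the draft note.
REQUIRED_ELEMENTS = {
    "asthma_follow_up": ["peak flow", "inhaler technique", "exacerbations"],
    "annual_physical": ["vitals", "screening review", "immunizations"],
}

def missing_elements(visit_type, note_text):
    """Return required documentation elements absent from the note text."""
    required = REQUIRED_ELEMENTS.get(visit_type, [])
    text = note_text.lower()
    return [item for item in required if item not in text]

draft = "Peak flow improved. No exacerbations since last visit."
print(missing_elements("asthma_follow_up", draft))  # → ['inhaler technique']
```

The point is the posture, not the implementation: an ambient tool surfaces the gap before the clinician signs the note, rather than silently recording whatever was said.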
The Need for Specialized vs. Generalist Models
Clinicians in specialized fields report that generic AI tools fail to capture the specific language, workflows, and documentation requirements of their practice.
The AI scribe market is already responding with specialty-specific solutions:
- Psychiatry: Tools that understand therapy modalities, mental status exam components, and the narrative nature of psychotherapy notes
- Surgical Specialties: Tools that capture procedure details, intraoperative findings, and recovery protocols
- Primary Care: Tools that manage chronic condition documentation, preventive care tracking, and longitudinal care narratives
Conclusion
The resistance to AI scribes among clinicians is not a problem to be overcome; it is a signal worth heeding. It represents a skepticism born from real‑world experience with tools that promise much but don't deliver. This skepticism serves as a warning against deploying generic technology in environments where errors have real consequences for patients and clinicians alike. For those ready to explore a tool built with these clinical realities in mind, learn more about how AI clinical notes can be designed to work for you, not against you.
ABOUT THE AUTHOR
Dr. Eli Neimark
Licensed Medical Doctor
