AI in Clinical Practice: Who Is Liable When the Algorithm Gets It Wrong?

April 29, 2026 | Healthcare Professional

If you’re a nurse practitioner, PA, nurse, or therapist using an AI charting tool, you’re not alone. The American Medical Association’s 2026 Physician Survey on Augmented Intelligence reports that 81% of medical providers are now using AI in their practices, more than double the 38% reported in 2023. Tools like ambient listening scribes, AI-powered SOAP note generators, and diagnostic decision support systems are transforming how clinicians document care and manage their time. 

Most practitioners adopt these tools for the right reasons: less time charting, more time with patients, fewer documentation backlogs, and less burnout. But there’s a question that most AI vendors don’t address clearly, and most clinicians haven’t thought to ask: if an AI tool puts something inaccurate in your chart and that inaccuracy contributes to a patient harm claim, who is responsible? 

The short answer, as of right now, is you. The clinician whose name is on the chart is the clinician who bears responsibility for what's in it. While AI in clinical practice is new, the liability framework is not. How that framework applies to AI-assisted documentation is one of the most important risk management conversations happening in healthcare right now.

The Liability Still Falls on the Clinician 

There is no federal law that shifts malpractice liability from a clinician to an AI tool or its developer. Courts have historically held that physicians and other healthcare providers have a duty to independently apply the standard of care, regardless of what an algorithm recommended or what a tool produced. That legal framework applies whether the tool is a diagnostic aid, a charting assistant, or a clinical decision support system. 

What this means in practice: if an AI scribe listens to your patient encounter and generates a SOAP note that includes a clinical finding you didn’t actually observe, or omits a symptom the patient mentioned, and that note becomes the basis for a treatment decision that leads to harm, the liability rests with you. The AI vendor is not a licensed healthcare provider. They didn’t sign the chart. You did. 

This isn’t a theoretical risk. AI documentation tools can hallucinate clinical details, overstate exam findings, or insert standardized language that doesn’t match what actually happened during the encounter. As one legal analysis noted, even subtle inaccuracies (overstated exam findings or inflated time entries) can have consequences when those records are used in a malpractice claim or submitted for insurance reimbursement. 

Where the Risks Actually Show Up 

AI in clinical practice creates liability exposure in a few specific areas that practitioners should understand. None of these requires that the AI "made a mistake" in the traditional sense; each requires only that the clinician didn't catch it.

Documentation inaccuracies. AI charting tools generate notes based on ambient audio or typed prompts. They can misinterpret clinical language, add findings that weren’t observed, or omit relevant details. If you don’t review and edit the AI-generated note before signing it, the record may not accurately reflect the encounter. In a malpractice context, that documentation is what your defense rests on. 

Diagnostic decision support errors. Some AI tools analyze patient data and suggest potential diagnoses or flag risk factors. If you follow an AI recommendation without applying your own clinical judgment and the recommendation turns out to be wrong, the liability is yours. Courts have consistently required that clinicians independently apply the standard of care, regardless of whether an algorithm supported their decision.

HIPAA and data privacy risks. Not all AI tools handle patient data the same way. Public-facing AI platforms (like general-purpose chatbots) are not HIPAA-compliant. Inputting patient information into a tool that stores prompts and outputs in an unsecured environment creates a data breach risk. Even tools marketed as HIPAA-compliant may store data in ways that create compliance questions. Your cyber liability coverage is what protects you if a breach occurs, but the best protection is knowing how your AI tools handle patient data before you start using them. 

Billing and coding exposure. AI-generated documentation can inflate complexity codes or misrepresent the level of service provided. If those notes are submitted for reimbursement, the clinician is responsible for the accuracy of the claim. Under the False Claims Act, liability doesn’t require intentional fraud; “reckless disregard” for accuracy is sufficient. Signing an AI-generated note without reviewing it could meet that threshold. 

How to Use AI Tools Without Increasing Your Liability 

AI in clinical practice isn’t inherently risky. Used well, it reduces documentation burden, improves accuracy, and gives clinicians more time for patient care. CM&F’s own site has profiled how AI-powered charting is reducing burnout and how AI can improve emergency care diagnostics. The tools themselves aren’t the problem. The risk lives in how they’re adopted and overseen. 

A few practices that protect you: 

Review every AI-generated note before signing it. This is the single most important habit. If it’s in your chart with your signature, it’s your documentation. Read the note. Correct inaccuracies. Delete anything that wasn’t observed. Add anything that was missed. Treat AI output as a first draft, not a final product. 

Document how AI was used in the encounter. If an AI tool contributed to a clinical decision (a diagnostic suggestion, a risk flag, a treatment recommendation), note that in the record. Note whether you followed the recommendation or deviated from it, and why; a brief line such as "decision support flagged elevated risk; clinical exam did not support it; continued current plan" is enough to show your reasoning. This demonstrates that you exercised independent clinical judgment, which is the standard courts apply.

Verify that your AI tools are HIPAA-compliant. Confirm that the tool uses encrypted data storage, doesn’t retain patient information beyond what’s necessary, and has a signed Business Associate Agreement (BAA) with your practice. Do not use public-facing AI tools for any interaction involving patient data. 

Notify your insurance carrier. If you’re adding AI tools to your practice workflow, let your carrier know. Your risk profile may have changed, and your carrier should understand how AI fits into your documentation and clinical decision-making process. CM&F’s carrier-partner, MedPro Group, maintains specialty-level claims data that can help inform how AI adoption affects risk across different practice settings. 

Keep your own knowledge current. AI in healthcare is evolving rapidly, and the regulatory environment hasn't fully caught up. The FDA has authorized hundreds of AI-enabled devices for clinical use, but the liability framework for AI-assisted documentation and decision support is still developing. Staying informed protects you from adopting tools whose risk profile isn't yet well understood.

What Your Malpractice Policy Does and Doesn’t Cover 

Your professional liability policy covers claims arising from your professional services, including claims where AI-assisted documentation or decision-making played a role. If a patient alleges that an AI-generated charting error contributed to a misdiagnosis, and you are named in the claim, your malpractice coverage responds. 

What your malpractice policy does not do is shift liability to the AI vendor. That’s a separate legal question (product liability, contract law) that may or may not apply depending on the tool and the circumstances. From your perspective as the clinician, the relevant coverage is your own professional liability policy. 

CM&F policies include consent-to-settle rights, which means no one can settle an AI-related claim on your behalf without your approval. They include licensing board defense as a separate benefit, which matters because a board complaint related to documentation accuracy can arise independently of a civil claim. And they include telehealth coverage at no additional cost, which is relevant because many AI charting tools are used in virtual care settings where documentation challenges are amplified. 

If you’re adopting AI tools in your practice, the most important coverage question isn’t whether your policy covers AI. It’s whether your policy has the structural features (defense outside limits, consent-to-settle, licensing board defense, occurrence-based coverage) that make it effective when any complex claim is filed. Those features protect you regardless of whether the claim involves AI, a diagnostic error, a documentation gap, or a patient complaint. 

Key Takeaways 

AI documentation and decision-support tools are transforming how healthcare professionals work. Eighty-one percent of providers are already using AI in their practices, and adoption is accelerating. 

The liability for what’s in your chart still rests with you, regardless of whether an AI tool generated it. Review every AI-generated note before signing. Treat it as a first draft, not a finished record. 

Document how AI was used in clinical decisions. Courts apply the standard of independent clinical judgment, and your documentation should demonstrate that you exercised it. 

Verify that your AI tools are HIPAA-compliant, use encrypted storage, and have a signed Business Associate Agreement. Do not use public-facing AI for anything involving patient data. 

Notify your insurance carrier when you add AI tools to your workflow. Your risk profile may change, and your carrier should understand how AI fits into your practice.

Frequently Asked Questions

  • Does my malpractice insurance cover errors caused by AI charting tools?
    Yes. Your professional liability policy covers claims arising from your professional services, including situations where AI-assisted documentation or decision-making played a role. If a patient alleges that an AI-generated charting error contributed to a misdiagnosis or adverse outcome and you are named in the claim, your malpractice coverage responds. However, the liability rests with you as the clinician who signed the chart, not with the AI vendor.
  • Who is liable when AI makes an error in a medical chart?
    As of now, the clinician whose name is on the chart bears responsibility for its contents. There is no federal law that shifts malpractice liability from a healthcare provider to an AI tool or its developer. Courts have consistently held that clinicians must independently apply the standard of care, regardless of whether an algorithm supported their decision. This means reviewing and correcting every AI-generated note before signing it is essential.
  • How can healthcare professionals reduce liability risk when using AI tools?
    Review every AI-generated note before signing it and treat AI output as a first draft. Document how AI was used in clinical decisions, noting whether you followed or deviated from a recommendation and why. Verify that your AI tools are HIPAA-compliant with encrypted storage and a signed Business Associate Agreement. Notify your insurance carrier when you add AI tools to your workflow so your coverage reflects your updated risk profile.
 

