It’s no exaggeration to say the risks and opportunities of using AI tools in healthcare are life-or-death. At best, AI can help clinicians predict which patients need health screenings or care. At worst, AI without sufficient clinical oversight has harmed patients.
Chris Hemphill, founder of Modular Feedback, has spent their career in the healthcare technology space and worked in data science and artificial intelligence since 2018. In one of AI’s best use cases for improving care, Hemphill served on a team developing AI models that integrated into electronic health records (EHRs) to identify patients most likely to need cardiovascular care. Health systems used these models to expand outreach and preventive communications to patients in need.
From ambient scribes and marketing to predictive analytics and clinical decision support, AI can transform how you provide care. Still, the stakes are high for clinicians considering a new AI tool in 2026. Investing in the wrong AI tools for your practice or healthcare business can waste time and money. Worse, it can undermine care.
Hemphill shares some risks and opportunities for any clinicians considering investing in AI tools this year.
Why AI Tools Aren’t “Plug and Play” in Healthcare Settings
AI tools developed for other industries are often pitched as easily adaptable to healthcare settings. That couldn’t be further from the truth, Hemphill says: “A lot of AI with the biggest hype around it is not ready for healthcare just out of the box. You need tools that are fit and ready for production applications within healthcare.”
This requires adapting, testing and validating AI tools for clinical use, especially when patient care, outcomes or protected health information are involved.
As a clinician, you carry the risk. You, not the AI tool or vendor, are held accountable if an AI-generated output contributes to a mistake.
Before investing in a new AI tool, Hemphill recommends clinicians ask deeper questions than business owners in other industries would. Vendor claims like “100% accuracy” or “no hallucinations” don’t hold up in real clinical environments. What matters is understanding when tools fail, how they fail and how often.
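To put numbers on that, a practice could audit a tool against a sample of clinician-reviewed cases before relying on it. The sketch below is a minimal, hypothetical illustration in Python, not any vendor’s method or Hemphill’s; the sample data and the idea of pairing each AI flag with a clinician’s review are assumptions made for the example.

```python
# Minimal sketch (hypothetical data): estimate how often an AI tool's flags
# disagree with clinician judgment on a sample of reviewed cases.

# Each record pairs the tool's output with a clinician's review of the same case.
reviewed_cases = [
    # (ai_flagged_risk, clinician_confirmed_risk)
    (True, True),
    (True, False),   # false positive: tool flagged a case the clinician ruled out
    (False, True),   # false negative: tool missed a case the clinician caught
    (False, False),
    # ... in practice, use a few hundred reviewed cases
]

positives = sum(1 for _, truth in reviewed_cases if truth)
negatives = sum(1 for _, truth in reviewed_cases if not truth)
false_positives = sum(1 for ai, truth in reviewed_cases if ai and not truth)
false_negatives = sum(1 for ai, truth in reviewed_cases if not ai and truth)

print(f"Reviewed cases:      {len(reviewed_cases)}")
print(f"False positive rate: {false_positives / negatives:.1%}")  # wasted outreach
print(f"False negative rate: {false_negatives / positives:.1%}")  # missed patients
```

Even a small audit like this turns a vendor’s accuracy claim into something you can check against your own patients, and it shows which direction the errors lean.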
Why High-Stakes Clinical Care Requires Explainable and Validated AI
Black-box AI models may work in advertising or retail, but they create unacceptable risk in healthcare. “There are a lot of AI models where we don’t understand how they work and are told just to trust the process and the output. That doesn’t work when you have high-stakes decisions for things that could impact care,” says Hemphill.
When speaking to AI salespeople, Hemphill recommends insisting on a high degree of explainability. Request clear insight into:
- The degree of accuracy and the rate of errors
- How to recognize errors and what safeguards exist to catch them
- How well it integrates with your workflow, such as marketing or documentation processes
- Measures the vendor has taken to promote health equity and mitigate bias along lines of race, gender, sex and other protected characteristics
- Whether it will save time or add friction
“Whether you’re a big or small organization, we need to hold our vendors accountable. Ask about failure cases, how to determine when to use the system and how it works with your workflow,” says Hemphill.
Why Bias in Healthcare AI Is a Serious Clinical Risk
One of the most important, and often overlooked, AI risks is bias. Hemphill is an industry leader in mitigating bias in healthcare AI. They’ve seen firsthand how, without intentional design, machine learning models can perform well at the population level but sometimes yield poorer results for minorities and women. That can leave certain patients overlooked, misclassified or underserved.
“Vendors will report their overall accuracy numbers, but ask whether they’ve tested how results break down based on race, gender and other subgroups,” says Hemphill.
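As a rough illustration of the kind of subgroup check Hemphill describes, the sketch below breaks accuracy down by patient group instead of reporting one overall number. It assumes you can pair a model’s predictions with actual outcomes and a demographic field on your own reviewed data; the records and group labels here are hypothetical.

```python
from collections import defaultdict

# Minimal sketch (hypothetical data): break performance down by subgroup
# instead of trusting a single overall accuracy number.
records = [
    # (model_predicted_risk, actual_outcome, patient_group)
    (True,  True,  "group_a"),
    (True,  False, "group_a"),
    (False, True,  "group_b"),   # missed cases concentrated in one subgroup
    (False, False, "group_b"),
    # ... a real audit would use hundreds of cases per subgroup
]

by_group = defaultdict(lambda: {"correct": 0, "total": 0})
for predicted, actual, group in records:
    by_group[group]["total"] += 1
    if predicted == actual:
        by_group[group]["correct"] += 1

overall_correct = sum(g["correct"] for g in by_group.values())
overall_total = sum(g["total"] for g in by_group.values())
print(f"Overall accuracy: {overall_correct / overall_total:.1%}")

for group, stats in sorted(by_group.items()):
    rate = stats["correct"] / stats["total"]
    print(f"  {group}: {rate:.1%} accuracy ({stats['total']} cases)")
```

If one subgroup’s numbers lag well behind the overall figure, that is exactly the disparity Hemphill warns a single headline accuracy number can hide.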
How Clinicians Can Reduce Risk When AI Healthcare Startups Fail
Many AI healthcare companies are startups, and not all of them will survive. When a vendor folds, the time and money you invested are lost. That risk is worth weighing before you share protected health information (PHI), sign business associate agreements (BAAs) or integrate a tool into care delivery.
While no one can predict which companies will fail, clinicians can reduce risks:
- Speak with current users, not just handpicked references
- Seek out peers with similar practice models or patient populations
- Ask about service quality, responsiveness and follow-through
Most AI products have multiple competitors, which means you have leverage. If one vendor doesn’t meet your standards, another likely will.
How AI Creates Big Opportunities for Small Healthcare Practices
For smaller healthcare businesses, the opportunities to use AI to improve business processes are massive. This is because large organizations move slowly, while smaller practices and solo providers are more nimble and have more to gain. “There are companies whose original model was going after the big players, but once they saw their inbound sales data, they looked to smaller companies who are clamoring for these solutions,” says Hemphill.
In a smaller business, innovation can have a more direct and immediate impact on operations, patient experience and growth than it typically does at a large healthcare company.
As you consider how to integrate AI into your healthcare business in 2026, a few guardrails can set you up for success and lower your risks. To summarize, Hemphill recommends the following checklist before spending time or money on a new tool:
- Work with companies that have a proven track record in the healthcare space
- Ask how the tool protects security and compliance
- Demand transparency in how the AI works from vendors
- Choose tools that align with how you actually practice and give you less to do, not more
Used wisely without compromising safety or trust, AI isn’t just a technology shift. It’s an opportunity to improve efficiency, reduce burnout and deliver more personalized care.
Frequently Asked Questions
- Why is AI in healthcare considered high risk for clinicians?
AI tools used in healthcare directly impact patient care, clinical decisions, and protected health information (PHI). Clinicians, not AI vendors, are ultimately responsible if an AI-generated output contributes to an error. Without proper validation, oversight, and explainability, AI can introduce safety, compliance, and malpractice risks.
- What should clinicians look for before investing in an AI healthcare tool?
Clinicians should evaluate whether an AI tool is specifically designed and tested for healthcare use, not adapted from another industry. Key considerations include accuracy rates, known failure cases, explainability, workflow integration, bias mitigation, data security, and whether the tool reduces administrative burden rather than adding friction.
- How can bias in AI negatively affect patient outcomes?
Bias in healthcare AI can cause models to perform well for some populations while producing poorer results for women, minorities, or other underrepresented groups. If vendors only report overall accuracy, clinicians may miss disparities that lead to misclassification, delayed care, or underserved patients. Asking for subgroup performance data is essential to reducing clinical risk.
If you’re considering adopting new technology this year, CM&F Group can help you consider any risks and liabilities you should ask about first. Get a quote in minutes or talk with our team about aligning your protection with your business needs.