Artificial intelligence is rapidly reshaping healthcare, but the biggest challenge isn’t adopting the latest tools or standards — it’s whether health systems are ready to deploy AI responsibly. Readiness — defined by strong governance, the ability to scale and alignment with enterprise priorities — is what separates meaningful innovation from costly experimentation.
New interoperability protocols like the Model Context Protocol (MCP) hold real allure, promising a future where AI integrates seamlessly into hospital workflows, pulling in the right context from the right systems at the right time. Yet these advances will only prove valuable if organizations first build the internal discipline to use them safely and effectively.
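To make that concrete, here is a minimal sketch of what an MCP integration point can look like, written against the open-source `mcp` Python SDK’s FastMCP interface. The server name, the `get_patient_summary` tool and its stubbed data are hypothetical illustrations, not a production design.

```python
# Minimal sketch of an MCP server exposing one read-only EHR context tool.
# Assumes the open-source `mcp` Python SDK; the server name, tool and
# stubbed lookup below are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ehr-context")  # hypothetical server name

@mcp.tool()
def get_patient_summary(patient_id: str) -> str:
    """Return a brief care summary for the given patient ID."""
    # A real deployment would query the EHR behind access controls and
    # write every call to an audit log; this stub returns canned text.
    summaries = {"p-001": "68F, CHF admission, discharged on GDMT."}
    return summaries.get(patient_id, "No summary available.")

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP-capable assistant can connect
```

The plumbing is the easy part. The governance questions (which systems a tool may read, who can invoke it, how calls are logged) are what readiness is really about.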
In other words, readiness isn’t optional — it’s the foundation. Before rushing into the next big protocol, health systems must strengthen governance, ensure accuracy and establish safeguards that position them to capture AI’s true potential.
The governance imperative
AI governance is invoked so often in healthcare that you’d be forgiven for viewing it as just another buzzword. But it’s much more than that: true governance is what separates organizations that thrive in their use of AI from those that stumble. Without a strong framework in place, health systems open themselves up to bias, inaccuracies, security risks and wasted investments.
So, what does strong governance look like?
- A structured process for intake and oversight
Hospitals need a structured framework to evaluate AI use cases before they’re deployed, with large systems requiring the ability to delegate responsibility across sites. This isn’t about stifling innovation — it’s about ensuring that tools solve the right problems, tie to enterprise objectives and roll out with safety in mind.
- Clear cybersecurity and compliance guardrails
Formal policies around AI use are essential, not only to prevent “shadow IT” but also to manage vendor risk. Leverage established frameworks like those from the National Institute of Standards and Technology (NIST) and draw on organizations such as the Coalition for Health AI (CHAI) to build a strong foundation for keeping patient data safe. Frameworks for ethical AI also exist and are often embedded in standards like ISO 42001.
- Embed governance into culture
Governance isn’t just organizational policy — it’s a mindset, one in which culture and talent are essential components. Systems with role-based training, AI champions and ongoing enablement see far higher adoption and safer deployment.
- Algorithmic accuracy and bias monitoring
AI in healthcare cannot be a “set it and forget it” exercise. Regular audits, explainability thresholds and human-in-the-loop checks are essential safeguards (a minimal monitoring sketch follows this list).
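As one hedged illustration of what such monitoring can look like, the sketch below audits a deployed model’s accuracy by patient subgroup and flags any group that falls below a governance-defined floor or lags the overall rate. The thresholds, field names and toy records are hypothetical placeholders.

```python
# Sketch: a recurring subgroup-accuracy audit for a deployed model.
# Thresholds, field names and sample records are hypothetical placeholders
# that a governance council would set for its own context.
from collections import defaultdict

MIN_ACCURACY = 0.90        # hypothetical floor set by governance
MAX_GAP_VS_OVERALL = 0.05  # hypothetical allowed subgroup disparity

def audit(records):
    """records: iterable of dicts with 'group', 'prediction', 'label'."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["prediction"] == r["label"])
    overall = sum(hits.values()) / sum(totals.values())
    findings = []
    for g in totals:
        acc = hits[g] / totals[g]
        if acc < MIN_ACCURACY or overall - acc > MAX_GAP_VS_OVERALL:
            findings.append((g, round(acc, 3)))
    return overall, findings

# Example run with toy data; real audits would pull labeled outcomes
# from production logs on a fixed schedule.
sample = [
    {"group": "A", "prediction": 1, "label": 1},
    {"group": "A", "prediction": 0, "label": 0},
    {"group": "B", "prediction": 1, "label": 0},
    {"group": "B", "prediction": 1, "label": 1},
]
overall, flagged = audit(sample)
print(f"overall accuracy={overall:.2f}, flagged subgroups={flagged}")
```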
Speaking of accuracy…
It’s the Achilles’ heel of LLMs in healthcare. While LLMs can summarize vast troves of data or generate patient-facing communications, a single inaccuracy can have outsized consequences in clinical or operational settings.
How should you ensure better results?
- Start with high-quality, representative data
- If LLMs are trained or fine-tuned on incomplete or biased historical data, they will replicate those flaws — meaning that if you rely on the treatment patterns of the past, inequities included, you are simply automating bias.
- Organizations must invest in data curation and normalization before deploying LLMs, so that inputs represent the full patient population rather than perpetuating disparities (a simple representativeness check is sketched after this list).
- Establish a structured accuracy framework
- Keep humans in the loop
- It’s hard to overstate the necessity of human oversight in critical workflows, particularly clinical ones, both as a safeguard and as feedback on how models are working and maturing.
- This doesn’t mean clinicians manually review every AI output; rather, high-risk decisions should include checkpoints where staff validate or override AI-generated insights (see the checkpoint sketch at the end of this section). “AI should augment judgment, not replace it,” said Erik Swanson, Managing Director, Consulting. “The organizations that get this right will design workflows where humans validate and guide the model, especially when patient care is on the line.”
- Embed accuracy and monitor ROI
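As a hedged example of the data-curation step above, this sketch compares the demographic mix of a training dataset against the patient population actually served and flags under-represented groups. The group labels, counts and tolerance are hypothetical.

```python
# Sketch: flag patient subgroups that are under-represented in training data
# relative to the population actually served. All numbers are hypothetical.
TOLERANCE = 0.05  # hypothetical allowed shortfall in representation share

training_counts = {"group_a": 7200, "group_b": 2100, "group_c": 700}
population_counts = {"group_a": 60000, "group_b": 25000, "group_c": 15000}

def representation_gaps(train, population, tolerance):
    """Return groups whose training share trails their population share."""
    train_total = sum(train.values())
    pop_total = sum(population.values())
    gaps = {}
    for group, pop_n in population.items():
        pop_share = pop_n / pop_total
        train_share = train.get(group, 0) / train_total
        if pop_share - train_share > tolerance:
            gaps[group] = (train_share, pop_share)
    return gaps

for group, (train_share, pop_share) in representation_gaps(
        training_counts, population_counts, TOLERANCE).items():
    print(f"{group}: {train_share:.1%} of training data vs {pop_share:.1%} of patients")
```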
Rebhan outlined a five-step framework that organizations can adapt to mitigate bias across the entire AI development cycle.
This approach ensures accuracy isn’t a one-time check but a sustained practice.
Governance councils can define accuracy thresholds and establish third-party requirements and audit trails.
In addition to monitoring accuracy, establish KPIs to evaluate the impact of your AI initiatives. Many organizations find they’re not achieving ROI on their AI efforts, and understanding whether you’re hitting all desired outcomes — clinical, satisfaction, safety and financial — is critical.
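To illustrate the checkpoint idea referenced earlier, here is a minimal sketch that auto-applies only low-risk, high-confidence AI outputs and routes everything else to clinician review. The risk tiers, confidence floor and queue are hypothetical stand-ins for real worklists and governance-set thresholds.

```python
# Sketch: route AI outputs through a human checkpoint when risk is high or
# model confidence is low. Tiers, threshold and queue are hypothetical.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85  # hypothetical governance-set threshold
HIGH_RISK_TASKS = {"triage", "medication_recommendation"}

@dataclass
class AIOutput:
    task: str
    confidence: float
    content: str

review_queue = []  # stand-in for a real worklist in the EHR

def dispatch(output: AIOutput) -> str:
    """Auto-apply only low-risk, high-confidence outputs; queue the rest."""
    if output.task in HIGH_RISK_TASKS or output.confidence < CONFIDENCE_FLOOR:
        review_queue.append(output)  # clinician validates or overrides
        return "queued_for_human_review"
    return "auto_applied"

print(dispatch(AIOutput("visit_summary", 0.93, "Routine follow-up note.")))
print(dispatch(AIOutput("triage", 0.97, "Recommend ESI level 2.")))
print(f"{len(review_queue)} item(s) awaiting clinician review")
```

The same dispatch log can feed the KPIs described above: how often AI recommendations were accepted, overridden or never used is a direct input to ROI reporting.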
Building readiness for the future
The Vizient AI Maturity Assessment identifies five critical domains, in addition to governance, that must be strengthened before organizations can confidently scale AI.
Some of the best use cases come from crowdsourcing, Rebhan said. “When you open a tool up beyond IT and let frontline staff experiment in a safe sandbox, you discover real, high-impact opportunities you’d never identify otherwise.”
Take our mini AI Maturity Assessment to benchmark your current readiness across the six domains. In just a few minutes, you’ll gain a snapshot of your organization’s AI maturity and see where Vizient can help you advance from AI ambition to measurable performance.
Why readiness pays off
Data shows the payoff for maturity is substantial.
Yet the industry still has work to do. According to recent analyses, only 30% of AI pilots reach production and over one-third of health system leaders admit they lack an AI prioritization process. This execution gap underscores why readiness matters as much as technical innovation.
The ‘prove-it era’
In his 2025 Sg2 Digital Health Landscape webinar, Rebhan noted that healthcare has entered the “prove-it era” of AI. That means systems must demonstrate measurable results — including improved efficiency, better outcomes and reduced waste — not just experimentation. Without disciplined readiness, MCP and other interoperability protocols risk becoming expensive theoretical frameworks rather than transformative tools.
Looking ahead
Preparing for the future of AI in healthcare requires more than enthusiasm for new technology — it demands strong governance, accountability and a clear strategy for responsible implementation. True readiness comes from the right foundations: rigorous oversight; continuous monitoring of accuracy, bias and ROI; and maturity built across culture, technology and operations.