Kaufman Hall

AI’s next big test in healthcare: readiness first

As AI adoption accelerates, leaders must strengthen oversight, accuracy and maturity to ensure safe, scalable and sustainable results.
Key points

      Artificial intelligence is rapidly reshaping healthcare, but the biggest challenge isn’t adopting the latest tools or standards — it’s whether health systems are ready to deploy AI responsibly. Readiness — defined by strong governance, the ability to scale and alignment with enterprise priorities — is what separates meaningful innovation from costly experimentation.

      The allure of new interoperability protocols like the Model Context Protocol (MCP) is real, promising a future where AI seamlessly integrates into hospital workflows, pulling in the right context from the right systems at the right time. Yet these advances will only prove valuable if organizations first build the internal discipline to use them safely and effectively.

      The promise of interoperability and standardization is enormous, but we have to acknowledge that healthcare has unique requirements. Without governance in place, adoption could risk becoming chaotic rather than transformational.
      Robert Lord
      SVP, Data and Digital Research and Development

      In other words, readiness isn’t optional — it’s the foundation. Before rushing into the next big protocol, health systems must strengthen governance, ensure accuracy and establish safeguards that position them to capture AI’s true potential.

      The governance imperative

      The concept of AI governance is so omnipresent in healthcare, you’d be forgiven for viewing it as just another buzzword. But it’s so much more: True governance is what separates organizations that thrive in their use of AI from those that stumble. Without a strong framework in place, health systems open themselves up to bias, inaccuracies, security risks and wasted investments.

      There’s this urgency to do something, which makes organizations skip over problem definition and foundational components up front. That FOMO effect — ‘everyone else is doing this, so why aren’t we?’ — can lead to adoption without alignment.
      Andrew Rebhan
      Senior Intelligence Director

      So, what does strong governance look like?

      1. A structured process for intake and oversight
        Hospitals need a structured framework to evaluate AI use cases before they’re deployed, with large systems requiring the ability to delegate responsibility across sites. This isn’t about stifling innovation — it’s about ensuring that tools are solving the right problems, tied to enterprise objectives and rolled out with safety in mind.
      2. Clear cybersecurity and compliance guardrails
        There’s a strong need for formal policies around AI use, not only to prevent “shadow IT” but also to manage vendor risk. Leverage established frameworks like those from the National Institute of Standards and Technology (NIST) and draw from organizations such as the Coalition for Health AI (CHAI) to build a strong foundation for keeping patient data safe. Frameworks for ethical AI also exist and are often embedded in standards like ISO 42001.
      3. Embed governance into culture
        Governance isn’t just organizational policy — it’s a mindset, and one in which culture and talent serve as an essential component. Systems with role-based training, AI champions and ongoing enablement see far higher adoption and safer deployment.
      4. Algorithmic accuracy and bias monitoring
        AI in healthcare cannot be a “set it and forget it” exercise. Regular audits, explainability thresholds and human-in-the-loop checks are essential safeguards.

      Speaking of accuracy…

      It’s the Achilles’ heel of large language models (LLMs) in healthcare. While LLMs can summarize vast troves of data or generate patient-facing communications, a single inaccuracy can have outsized consequences in clinical or operational settings.

      How should you ensure better results?

      1. Start with high-quality, representative data
        • If LLMs are trained or fine-tuned on incomplete or biased historical data, they will replicate those flaws — meaning if you rely on the treatment patterns of the past (knowing the inequities that exist), you are simply automating bias.
        • Organizations must invest in data curation and normalization before deploying LLMs to ensure inputs represent the full patient population and avoid perpetuating disparities.
      2. Establish a structured accuracy framework
        Rebhan outlined a five-step framework that organizations can adapt to mitigate bias across the entire AI development cycle. This approach ensures accuracy isn’t a one-time check but a sustained practice.
      3. Keep humans in the loop
        • It’s hard to overstate the necessity of human oversight in critical workflows, particularly clinical ones, to provide essential feedback on how models are working and maturing.
        • This doesn’t mean clinicians manually review every AI output; rather, high-risk decisions should include checkpoints where staff validate or override AI-generated insights. “AI should augment judgment, not replace it,” said Erik Swanson, Managing Director, Consulting. “The organizations that get this right will design workflows where humans validate and guide the model, especially when patient care is on the line.”
      4. Embed accuracy and monitor ROI
        Governance councils can define accuracy thresholds and establish third-party requirements and audit trails.

        In addition to monitoring accuracy, be sure to establish KPIs to evaluate the impact of your AI initiatives. Many organizations find they’re not achieving ROI on AI efforts. Understanding if you’re hitting all desired outcomes — clinical, satisfaction, safety, financial, etc. — is critical.

      Building readiness for the future

      The Vizient AI Maturity Assessment identifies five critical domains in addition to governance that must be strengthened before organizations can confidently scale AI:

      Some of the best use cases come from crowdsourcing, Rebhan said. “When you open a tool up beyond IT and let frontline staff experiment in a safe sandbox, you discover real, high-impact opportunities you’d never identify otherwise.”

      Curious where your organization stands today?

      Take our mini AI Maturity Assessment to benchmark your current readiness across the six domains. In just a few minutes, you’ll gain a snapshot of your organization’s AI maturity and see where Vizient can help you advance from AI ambition to measurable performance.

      Why readiness pays off

      Data shows the payoff for maturity is substantial:

      Yet the industry still has work to do. According to recent analyses, only 30% of AI pilots reach production and over one-third of health system leaders admit they lack an AI prioritization process. This execution gap underscores why readiness matters as much as technical innovation.

      The ‘prove-it era’

      In his 2025 Sg2 Digital Health Landscape webinar, Rebhan noted that healthcare has entered the “prove-it era” of AI. That means systems must demonstrate measurable results — including improved efficiency, better outcomes and reduced waste — not just experimentation. Without disciplined readiness, MCP and other interoperability protocols risk becoming expensive theoretical frameworks rather than transformative tools.

      Looking ahead

      Preparing for the future of AI in healthcare requires more than enthusiasm for new technology — it demands strong governance, accountability and a clear strategy for responsible implementation. True readiness comes from establishing the right foundations: rigorous oversight, continuous monitoring for accuracy, bias and ROI, and building maturity across culture, technology and operations.

      The future of AI in healthcare isn’t just about adopting the latest innovations — it’s about proving value in a sustainable, trustworthy way. Organizations that invest in sound governance and readiness today will be the ones positioned to lead when the next wave of transformation arrives.
      Erik Swanson
      Managing Director, Consulting