From Pilot to Scale: Why AI success in health systems depends on culture and talent
Blog | April 21, 2026
Reading time: 13 mins
Editor's note: This is part three of a seven-part series that unpacks each pillar of the Vizient AI Maturity Assessment, sharing lessons from our work with leading health systems and practical steps to build maturity across every stage of the AI journey. Read part one to explore an overview of the AI maturity framework and its six pillars and part two to understand the strategic imperative for AI in health systems.
In our last article, we argued that AI maturity begins with a clearly defined strategy. Health systems that scale AI successfully align initiatives to enterprise priorities, assign ownership, and treat AI as a long-term capability rather than a series of pilots.
Across the health systems we work with, there’s a consistent pattern. Leaders can articulate where they want to go with AI. They can identify high-value use cases. And in many cases, they’ve even begun investing in solutions, both home-grown and from third-party vendors.
What often breaks down is not strategic intent, but the ability to operationalize those investments. AI maturity is not only a function of technical capability or data infrastructure—it’s a reflection of how an organization leads, engages its people, and operationalizes change. In healthcare, where complexity and risk are inherent, the success of AI is determined less by the sophistication of the tool and more by the strength of the leadership and cultural system into which it is introduced.
Research strongly reinforces this point. Across industries, AI success is driven far less by algorithms than by people and processes. BCG’s AI Radar Survey notes that 70% of AI’s impact comes from workforce adoption, workflow integration, and cultural alignment, compared to just 10% from the models themselves.
Healthcare reflects this dynamic even more strongly. A Mass General Brigham study on deploying AI agents in clinical practice notes that less than 20% of effort is spent on model development, while over 80% is consumed by implementation work—data integration, workflow redesign, and change management.
That’s why culture and talent matter so much. Culture in this context is not abstract. It is how work actually gets done: how it feels to operate in the environment, what behaviors are encouraged or tolerated, and how decisions are made day to day. These dynamics ultimately determine whether AI-enabled changes are adopted, adapted, or quietly worked around. In health systems, AI doesn’t sit neatly within a single function. It touches clinicians, nurses, revenue cycle teams, IT, compliance, and operations. It changes how decisions are made, how work gets done, and how accountability is defined.
Without the right culture and talent to support those changes, even high-performing AI solutions struggle to scale.
The second pillar of the Vizient AI Maturity Model focuses on culture and talent: building an organization that not only deploys AI but enables it within everyday work.
The culture gap between adoption and impact
Most health systems’ AI initiatives aren’t failing because the tools themselves don’t work. They're failing because the organization is not prepared to use them.
Similar themes are evident across systems:
- AI is often perceived as an IT-led initiative rather than a business capability.
- Clinicians may not trust or fully understand model outputs.
- Frontline teams are asked to adapt workflows without meaningful input or training.
- Managers are left to handle questions about job change without a clear message from leadership.
- Data scientists and operational leaders work in silos rather than in partnership.
Unsurprisingly, this leads to fragmented adoption, inconsistent usage, and limited return on investment.
A recent McKinsey survey found that only 21% of organizations report redesigning workflows alongside AI deployment, yet those that do are nearly three times more likely to achieve meaningful financial impact. The differentiator is not whether AI is deployed; it's whether organizations change how, and by whom, the work is done.
Those findings are just as relevant to healthcare. As the AMA’s 2026 physician survey put it, “Education and training hasn’t been extensive—respondents want more.” Even as 81% of physicians report awareness of or use of AI in a professional context, 27% say they've received no AI training from any source. Only 11% of those with training report receiving “a lot” of it, and 92% say they want more education and training.
These patterns point to a consistent conclusion. AI maturity requires more than access to tools. It requires behavioral change. It also requires an environment where people feel safe to engage with that change. For AI to be used effectively, team members must feel psychologically safe to question outputs, raise concerns, and suggest improvements. Without this, risk increases, errors go unchallenged, learning slows, and trust erodes.
Preliminary results from the AI maturity survey reinforce this point (see figure 1). Culture and Talent scored 2.28 out of 5 overall, with Team Structures (2.6) and Leadership & Messaging (2.6) ahead of Training & Enablement (2.1) and Workforce Planning (1.8). In practical terms, many organizations are starting to form cross-functional teams and send clearer messages about AI, but far fewer have embedded role-based training or built a structured plan for how roles and skills will change.
These shifts are what transform AI from a set of tools into an enterprise capability.
From experimentation to organizational capability
Early AI efforts in health systems are often driven by curiosity. A service line tests ambient documentation, finance explores forecasting, and IT deploys copilots. While valuable, these initiatives tend to evolve in isolation.
Scaling AI requires a shift from individual experimentation to institutional capability. This shift is not only structural but cultural. It requires moving from isolated efforts to a shared way of working, where leaders and teams collectively own both adoption and improvement. In healthcare, this mirrors a broader principle: every individual is responsible not only for doing the work but improving the work. Health systems that successfully make this transition tend to undergo three cultural shifts.
- Moving from skepticism to informed trust. Trust is a gating factor for adoption, particularly in clinical environments. In organizations with higher AI maturity, over half of business units report trust and readiness to use AI, compared to only 14% in less mature systems. This trust is not assumed; it is built through transparency, education, and experience. Clinicians and operators do not need to become data scientists, but they do need to understand the rigor with which models are developed, validated, and monitored. When staff understand what a tool is doing, what evidence supports it, and how it should be used in their workflow, they are far more likely to use it with confidence rather than hesitation.

  The takeaway: Investment in role-based education and clear communication drives stronger adoption. Trust is further strengthened when organizations are transparent about how AI works, where it performs well, and where limitations exist. Transparency turns AI from a “black box” into a shared learning tool and builds confidence across clinical and operational teams.

- Shifting from IT-led initiatives to business-owned outcomes. AI cannot remain confined to digital or analytics teams. Clinical and operational leaders must have shared accountability for adoption and results. This accountability extends beyond outcomes to ownership of how AI changes workflows, roles, and team dynamics. Leaders who visibly engage in this work, asking questions, reinforcing expectations, and connecting AI to operational priorities, create the conditions for sustained adoption. Research shows that organizations where senior leaders actively own AI initiatives are three times more likely to achieve measurable value.

  The takeaway: Embedding AI into system priorities and tying it to operational metrics enables the move from experimentation to execution.

- Evolving from reliance on individual champions to building system-wide capability. Many organizations begin with a small number of enthusiastic early adopters, and while these champions are critical, they are only the first piece of true organizational scale. Sustainable maturity requires standardized training, formal roles, and embedded support structures.

  The takeaway: Scaling AI requires making it part of how new employees are onboarded, how teams are trained, and how performance is measured. Critically, this evolution requires engaging people not just in training, but in the design and refinement of AI-enabled workflows. Those closest to the work are best positioned to identify friction, risk, and opportunity, making their involvement essential to scaling effectively.
Gartner’s 2025 case study on Vizient’s human-centric generative AI strategy describes how champions were identified across 15 roles and how empathy maps were used to surface the ways GenAI can affect work, responsibility, and schedules for each persona. The practical lesson for health systems is that champions should do more than advocate for AI. They should help translate the technology into role-specific realities that staff can understand, trust, and act on.
The case study also emphasizes that organizations should not dictate how roles will change but rather invite employees to shape the answer. That idea is directly relevant to health systems, where adoption depends on clinicians, operators, and frontline staff viewing AI as something built with them, not imposed on them. This distinction is especially important in healthcare, where credibility and adoption are tightly linked to professional autonomy and expertise. When clinicians and frontline teams help shape how AI is used, they are far more likely to trust and sustain it.
Talent strategy as a driver of AI maturity
AI maturity does not require every health system to build a large internal data science function. The more important challenge is deciding where capability must exist, how it should be distributed across the organization, and which roles need to be prepared to use, evaluate, and manage AI in practice.
Today, fewer than one-third of organizations have upskilled even a quarter of their workforce to effectively use AI. This capability gap is particularly relevant in healthcare, where adoption depends heavily on frontline users integrating AI into complex workflows across fragmented systems.
Leading organizations take a deliberate approach, including:
- Defining which capabilities must be owned internally (such as governance, evaluation, and model oversight) and where external partnerships can accelerate progress.
- Investing in role-based training for clinicians, operators, and administrative staff.
- Establishing clear ownership for AI performance after implementation, ensuring that solutions are not only deployed but actively managed.
- Building capability across four practical levers: cross-functional team structures, role-based training and enablement, workforce planning for changing roles, and leadership messaging that makes AI feel usable rather than threatening.
- Anchoring these efforts to a clear purpose. In healthcare, that purpose is explicit—delivering safer, more effective, and more compassionate care. AI adoption accelerates when people understand how it helps them achieve that purpose, rather than viewing it as an abstract efficiency initiative.
Our experience also highlights two practical enablers that are often missed. First, employees need room to experiment with role-specific measures of value rather than being handed one enterprise productivity metric too early. Gartner notes that Vizient used leading indicators such as reduced effort, increased speed, improved quality, and increased adoption to help different roles define what better looks like in practice. Second, employees need access to both peer learning and AI expertise. Vizient’s GenAI community of practice and internal AI product-line support created a repeatable structure for learning, collaboration, and safe experimentation.
Most importantly, mature organizations recognize that talent strategy is not about hiring more technical experts. It is about enabling decision-makers across the enterprise. Clinical, operational, and administrative leaders must be equipped to evaluate AI opportunities, ask informed questions, and manage the changes that come with adoption.
Without this layer of capability, AI remains dependent on a small group of specialists and struggles to scale. Equally important, mature organizations do not define success primarily as productivity. Instead, they focus on reducing unnecessary burden and creating capacity. The goal is not simply to do more work, but to enable people to do the right work: freeing time and attention for clinical judgment, patient interaction, and the human elements of care that cannot be automated.
Operationalizing culture through maturity progression
Culture is often described as intangible, but in practice it’s highly observable. It is reflected in whether teams feel supported to speak up, whether leaders respond constructively to concerns, and whether improvement is part of daily work or treated as an additional task. These are the conditions that determine whether AI becomes embedded in practice or remains peripheral.
In the Vizient AI Maturity Model, organizations progress through stages, from curious to engaged, enabled, fluent, and ultimately adaptive. Each stage reflects a deeper level of organizational capability across four dimensions: team structures, training and enablement, workforce planning, and leadership and messaging.
The progression outlined above matters because it gives leaders a practical roadmap. Moving from one level to the next is less about broad culture slogans and more about specific actions: naming champions, segmenting training by role, planning for skill shifts, and repeating a clear message that AI is there to support staff and improve work. For that message to resonate, it must be experienced, not just communicated. If AI introduces additional steps, ambiguity, or burden, it will be resisted. If it meaningfully reduces friction for patients, families, and care teams, it will be adopted and sustained.
Culture as a competitive advantage
As AI capabilities continue to expand, access to technology is becoming less of a differentiator. Most health systems will have access to similar tools, whether through EHR vendors, enterprise platforms, or external partners. As access to technology becomes more uniform, differentiation shifts to how effectively organizations integrate these tools into daily work. That integration is fundamentally a leadership and culture challenge.
Health systems that invest in culture and talent by building trust, enabling their workforce, and aligning leadership will move from pilot to scale faster and more consistently. They will integrate AI into workflows, realize measurable value, and adapt more quickly as technology evolves.
Ultimately, AI maturity is achieved not when tools are deployed, but when they are embedded into how the organization thinks, works, and improves every day. Organizations that focus on leadership, culture, and people as the foundation will not only scale AI more effectively—they will build stronger, more resilient systems where individuals are empowered to do their best work in service of patients and communities.
Those that do not will continue to experiment without achieving sustained impact.
Take our mini AI Maturity Assessment to benchmark your current readiness across the six domains. In just a few minutes, you’ll gain a snapshot of your organization’s AI maturity and see where Vizient can help you advance from AI ambition to measurable performance.