
Next in digital health: Defining the human advantage in the age of AI

As artificial intelligence transforms healthcare, leaders must redefine the human capabilities that technology can’t replace.

      Editor’s note: This article marks the launch of "Next in digital health," a new blog series dedicated to exploring the technologies reshaping the future of healthcare. In this series, Vizient Senior Intelligence Director Andrew Rebhan will take a closer look at the innovations driving transformation across the industry—from emerging digital tools and data-driven strategies to the evolving role of AI and beyond—offering insights into what’s ahead and what it means for healthcare stakeholders.

      When talking to healthcare leaders about AI, one topic almost always strikes a chord.

      In a business that’s increasingly AI-driven, what are the unique advantages that humans still bring to the table?

      This discussion can make people uncomfortable, even defensive. I've spoken to executives who've simply brushed off the topic, insisting that AI is nowhere near matching or outperforming people across tasks. Others have expressed caution about advocating for AI adoption, given their worries about widespread layoffs.

      But mostly, this topic has become a healthy debate, and a strategic priority for leaders who are trying to balance rapid investment in AI with thoughtful governance and change management.

      As AI models have grown in scale and compute, become more advanced in their reasoning, and turned ubiquitous in our daily lives, we've seen the evolution of what algorithms can do—and that progress has moved faster than many anticipated. AI already has an advantage when it comes to speed, availability, repetition, prediction, and scalability. This reality is driving timely discussions about how to adapt the workforce for the future. As the ground shifts beneath our feet, current roles will evolve or be redeployed, and entirely new roles will emerge.

      Years ago, as a simple thought exercise, I created a slide comparing human competencies to AI competencies. When I periodically revisit the slide, I find myself hesitating on the questions I initially raised:

      • Is AI’s ability to generate art, poetry, and movies a sign of creativity?
      • Is AI truly better at crafting empathetic responses?
      • As more people use AI for healthcare decisions, how does that impact trust?
      • As autonomous reasoning AI agents start to do more of the work, are individuals the sole source of judgment?

      I recently posed this topic to my colleagues within Vizient’s AI Community of Practice for feedback, and what followed was a flood of thoughtful ideas, challenges, and opinions. Below are the main themes that emerged and what they mean for leaders guiding their workforce through the next phase of AI adoption.

      Leadership in the AI era remains crucial

      The most practical thread in the feedback is how humans still own aspects of leadership related to stakeholder alignment, change management, shaping culture, and driving urgency. Healthcare leaders have a distinct role in providing clarity of strategic goals, mobilizing the workforce, and sustaining commitment to the organization’s mission over time.

      The implication: This leadership function helps overcome common barriers to AI adoption by easing resistance, building trust, and providing a “burning platform” for change.

      AI makes things; humans make meaning

      An intriguing idea from the feedback is the distinction between contributions based on producing concrete things (e.g., AI-generated drafts, summaries, code, forecasts) and contributions based on context and the continuity of interpersonal relationships in the real world.

      This distinction explains why traits such as speed and scalability now feel like AI-native competencies while trust and leadership feel more native to us. AI can generate an output to any prompt, but it cannot generate the conditions that make an output accepted, acted on, or believed. In organizations, what moves people to align with a decision is rooted in the legitimacy of who is driving the discourse. One colleague suggested that executives can serve as “guardians of legitimacy,” acting as stewards who coordinate the workforce in the face of increasing uncertainty or change.

      The implication: As AI expands its capabilities, the highest value human work will continue to shift away from pure production to focus more on the enabling factors of sustained impact, such as aligning stakeholders, evaluating tradeoffs and owning consequences.

      Empathy is more than a feeling

      The topic of empathy triggered another interesting debate. While studies have continued to highlight how chatbots have grown better at displaying empathy, there's a distinction to be made as to whether an AI is truly empathetic in any conscious way or whether it's merely simulating empathy linguistically. Several colleagues suggested that empathy is part of a broader human advantage in emotional intelligence—the ability to perceive, understand, and respond to emotions.

      Going further, there is a debate about authenticity: if a system consistently produces messages that patients perceive as more empathetic than a response from a busy clinician, is that acceptable? Or is AI’s ability to generate an empathetic response in an email or chat ultimately hollow if it doesn’t come from a person who genuinely cares?

      I think both instincts can be right, and that tension is productive. In many healthcare interactions, the immediate goal is a timely response or clarity, and a well-designed AI can provide that in a way that gives comfort or reduces anxiety. Embodied empathy, on the other hand, matters relationally and is built upon repeated interactions that provide evidence that someone is invested in you.

      The implication: We should use AI as a communication amplifier—a tool that can craft supportive language and allow us to express care at scale. However, humans must remain responsible for the message behind the words, the follow-through, and demonstrating genuine concern, especially in high-stakes interactions.

      Judgment requires accountability (and sometimes courage)

      Judgment refers to the ability to make decisions or come to conclusions after careful consideration. AI agents are arguably starting to make judgments as they reason step by step through information, perform internal deliberation, and then act toward a defined goal. That said, a point of contention is whether an AI can exercise true judgment if it ultimately isn’t responsible for the consequences of a decision. This also connects to lingering concerns about hallucinations and the need for individuals to apply domain knowledge and critical thinking to validate AI output.

      AI already informs decisions and in some cases recommends actions, but it doesn’t own legal, ethical, or social outcomes. When things go wrong because of AI, we don’t interrogate the model; we interrogate the people and governance systems that deployed it. It’s the human in the loop who is ultimately accountable for providing ethical oversight and justifying action. A colleague surfaced another concept that’s relevant here: moral courage, or the ability for leaders to make unpopular decisions and stand behind values while under pressure.

      The implication: If agentic AI expands the decision surface area, organizations must expand workforce accountability in parallel. This means having clearer decision rights, explicit escalation paths and leadership that’s comfortable signing their name to a decision.

      AI lacks perceptivity, intuition, and the ability to “read the room”

      Another strong theme that emerged is how humans constantly assess the surrounding environment. We deal with nuance, subtle cues, body language, tone shifts, and unstated concerns. There's also the hard-to-quantify but hard-to-deny "gut instinct" humans share, such as a clinician's ability to sense the unsaid in patient conversations and pose questions that unlock deeper issues.

      We can also detect when AI output doesn't fit the reality of the situation—whether from the hallucinations mentioned earlier, omissions, or inaccuracies—by recognizing edge cases or exceptions to the rule based on lived experience. This may explain why some physicians have described clinical diagnosis as part science, part art: combining objective evidence with more subjective skills rooted in expertise.

      AI is already impressive at tasks like pattern recognition, but it still struggles in situations where it has incomplete information, shifting goals, or local context that isn’t represented in a prompt or training dataset. When those gaps arise, perceptivity becomes the bridge between a plausible answer and a useful one.

      The implication: AI underscores the value of experience. As we rely more on AI in our daily lives, we also must leverage our acquired knowledge, observation skills, and awareness of our surroundings to better validate outputs, spot nonsense, and ask the right follow-up questions.

      Defining what’s creative, funny, or personable remains in our control

      We can’t deny that generative AI tools create impressive combinations of output in a matter of seconds, but several colleagues insisted we still edge out AI on creativity. Others added that a sense of humor is an unmistakably human trait, not because an AI can’t produce jokes on command or respond in a witty tone, but because we have a better sense of the timing, shared context, and social risk-taking that make humor land.

      There’s another human-centered factor that underlies matters of evaluating creativity or humor, and that’s having criteria. We still decide what “good” looks like, and even when AI creates novel output, it’s still evaluated through the human lens. It was also noted that individuals can better personalize messaging to an audience, adding unique touches to AI outputs that can sometimes feel generic.

      The implication: AI can surpass us with its ability to produce a seemingly never-ending flow of ideas and content, but the goal here shouldn’t be ideation volume. We should use AI to explore possibilities while humans set the aims, constraints, and quality benchmarks.

      Intent anchors everything

      Finally, we originate intent. We set aspirations, we seek purpose, we care about what the future should be, and we decide what’s worth protecting. AI can be assigned goals, but it doesn’t generate its own meaning the way we do (perhaps the singularity will change this, but we’ll cross that bridge when we get to it).

      The implication: The more powerful AI becomes, the more important it is to be explicit about our purpose, values and what it means to be human.

      What this means for healthcare leaders

      These themes help illuminate our advantages at a time when many feel vulnerable to AI’s rapid ascent. The near-term challenge is to continually reassess workflows, roles, and expectations so AI creates more space for the distinctly human capabilities that matter most. The organizations that do this well will be best positioned for long-term success as AI continues to reshape healthcare delivery and operations.

      Author
      Vizient Senior Director, Intelligence
      As a senior director on the Intelligence team, Andrew Rebhan leads thought leadership and content creation for digital health research at Sg2, a Vizient company. In this role, he keeps members up to date on the latest technology trends and how to plan for new, disruptive forces and innovation entering...