
The rise of the AI care broker: How AI-mediated care could reshape access and trust

Data, analytics and AI

      Your digital front door strategy may see a big shift soon, and AI is at the root of it. I’ve already started to see signs of this in executive conversations, where there’s less talk about portals or wayfinding and more discussion of the consequential questions forming upstream:

      • Where do patients go first to make sense of their needs?
      • Who helps them decide what to do next?
      • What happens if those decision points increasingly sit outside the health system?

      There are signals that AI tools could start to assemble a new layer of healthcare interactions, one that could mediate care decisions at scale and reshape demand for healthcare services. The shorthand I’m using for this is the “AI care broker”: a conversational interface that helps patients interpret, decide and eventually navigate what happens next.

      A few recent announcements by Big Tech firms are part of that signal. OpenAI has introduced its consumer-facing ChatGPT Health product, explicitly framing it as a dedicated health experience that can connect medical records and wellness apps for more grounded data analysis, operate in a separate space with enhanced privacy protections, and support patients in preparing for care conversations (while strategically emphasizing it is not intended for diagnosis or treatment). Firms like Samsung and Apple have also been embedding AI more deeply into wearables and phones to learn from patient-generated health data and provide personalized insights and wellness coaching.

      These tools aren’t materially redirecting patient volumes today, but as AI capabilities expand and more digital health companies incorporate the technology into their services, new strategic risks emerge that are worth planning for now.

      Why this feels like a real shift (not just another chatbot moment)

      Part of what’s driving these new investments in consumer-facing AI is that incumbents still haven’t figured out how to leverage existing digital touchpoints to tackle major patient pain points: the care experience remains fragmented, impersonal and hard to navigate. Industry players like OpenAI see an opening: Build an AI-powered interface that isn’t just a data repository, but a platform that can contextualize, summarize and increasingly facilitate action.

      This AI care broker isn’t there to replace a clinician, but to sit between a patient and a complex market, supporting them when they often feel stranded:

      • “What does this result mean?”
      • “Is this urgent?”
      • “Who’s the right person to contact?”
      • “What are my options and what will they cost?”

      Today, this brokering function is spread across a messy mix of call centers, patient portals, nurse lines, family caregivers and general internet searches. Tools like ChatGPT Health, on the other hand, can serve as a single interface that people turn to first to address their healthcare needs.

      Also consider that, apart from its consumer-facing product, OpenAI released an enterprise solution that leans hard into the operational realities health systems care about. Anthropic soon followed with a similar product, Claude for Healthcare, and we can expect other technology firms to follow suit. These offerings provide more potential scaffolding for AI brokering.

      What this could mean for healthcare providers

      If the first stop for healthcare becomes this external interface, that could have big implications for how access and trust are defined.

      The access implication: Distribution risk becomes real (and invisible)

      Healthcare has a long list of access challenges that heavily influence patient behavior, including cost, specialist availability and geographic limitations. If more patients start their care journey using an AI care broker, the broker will start to shape demand long before a patient arrives in your clinic.

      This may result in incremental changes at first (e.g., patients arriving with pre-formed expectations or more pointed questions), but over time the broker effect would grow as AI ingests more data and handles more tasks, turning those access factors into inputs for a recommendation engine. In a sense, this AI care broker turns a health system into a fulfillment layer, “calling up” services and seeing which provider can best complete a request, compressing brands into comparable attributes.

      The risk here is not that healthcare providers lose all patient relationships overnight, but that they start to lose ground as a primary entry point for a significant subset of patients. It also means that if an AI finds your system’s workflows confusing, opaque or unreliable, it will increasingly route patients to organizations with simpler scripts or toward the path of least resistance, quietly shifting the basis of competition.

      The trust implication: Trust becomes a bigger routing factor

      What this means for patient trust is also interesting. Healthcare organizations anchor trust in measures like reputation, clinical expertise, hands-on care or clinical outcomes. But trust isn’t uniform in healthcare and can shift based on the stakeholder and context. For example, patients may evaluate trust through a collection of experience criteria, such as:

      • Clarity (“This finally makes sense”)
      • Responsiveness (“Someone answered quickly”)
      • Non-judgment (“I can ask what I’m embarrassed to ask”)
      • Continuity (“I’m not starting over”)
      • Follow-through (“The next step actually happens”)

      These measures can be hard for healthcare providers to demonstrate when they aren’t directly engaging with patients. It’s precisely in these in-between moments that AI tools could gain more influence: they’re always available, have unlimited patience, can translate information so it’s easier to understand, have solid memory and are good at displaying empathy.

      This isn’t about guessing how many patients will put more trust in an AI model than in a clinician. It’s about understanding the mindset of a patient at the moment they think, “I need help. Who do I ask first?”

      What this ultimately means is that if an AI broker is the starting point, healthcare organizations may be evaluated on a different set of trust attributes, and trust becomes something they must make visible in ways a broker can recognize.

      What proactive executives can do now

      If the AI care broker becomes a reality, health systems must become broker-friendly in the parts of the patient journey that matter. Five moves are worth putting on a near-term agenda:

      1. Make access transaction-ready, not just digital.
        Think of this as the difference between merely offering online scheduling and ensuring scheduling can actually be completed. This means reducing the number of points where a patient (or an intermediary acting on their behalf) hits ambiguity, conflicting options, confusing pathways or dead ends. When an AI intermediary tries to route a patient, it will prefer the systems where the transaction completes reliably.
      2. Make cost legible enough to be recommended.
        We may never achieve perfect price transparency, but legible cost transparency can be a differentiator if it is consistent. In a broker-mediated world, organizations that provide more reliable price estimates, clarify pre-auth expectations or create fewer billing surprises become easier to recommend, defend and return to.
      3. Don’t let patients create the narrative on their own.
        Patients will increasingly arrive with AI-generated summaries, interpretations and guidance. You can ignore that reality — or you can shape it. An opportunity is to offer an “official” patient-friendly narrative that’s easier to trust (e.g., visit summaries that are readable, next-step plans that are specific, instructions that don’t require a translator). Over time, that becomes part of your “trust surface,” and gives clinicians a cleaner artifact to reconcile when external narratives show up.
      4. Make trust observable.
        If a broker is mediating the journey, it’s looking for signs that the next step will be easy, safe and coherent. Organizations can make trust easier to infer through design choices: greater transparency, clear escalation to humans, reliable loop-closure and clarity about why a recommended next step makes sense. These traits will help define which organizations brokers can confidently route to.
      5. Challenge leadership to update their digital front door perspective.
        In your next strategy huddle, end the discussion with the following questions to push executive thinking:
        • If an AI care broker routed 15-30% of new patient demand in the next two years, would it route to us or around us? Why?
        • Which service lines are more adaptable to this future? Which are more vulnerable?
        • What are the top three changes we could make this year that would make us easier to transact with without compromising clinical integrity?

      The AI care broker is not here yet, but we see the seeds of change — and the window to act can shrink quickly. Take steps today to reevaluate your digital front door strategy so that if a broker does emerge, the patient’s next best step reliably leads back to you.

      Author
      Vizient Senior Director, Intelligence
      As a senior director on the Intelligence team, Andrew Rebhan leads thought leadership and content creation for digital health research at Sg2, a Vizient company. In this role, he keeps members up to date on the latest technology trends and how to plan for new, disruptive forces and innovation entering...