Autonomous AI Doctors Will Fail Without One Missing Discipline: User Experience

Autonomous AI doctors and AI therapists are no longer science fiction. Models already pass medical exams, summarize charts, suggest diagnoses, and deliver structured mental health support. The technical capability is not the bottleneck anymore.

Yet if you walk into a clinic, visit an ER, or try to navigate mental health support, very little feels transformed by AI.

As Sebastian Caliri and Finn Kennedy argue in A Vision for Healthcare AI in America, the main brakes are regulation and reimbursement. Healthcare AI that actually delivers care is effectively illegal in the United States today, while administrative AI quietly scales in the background.

I think there is another hidden bottleneck that will decide who wins, once those legal walls begin to fall:

User experience.

The future of autonomous AI doctors and AI therapists will not be decided by raw model IQ. It will be decided by whether the experience feels safe, understandable, trustworthy, and emotionally supportive to real people who are scared, busy, confused, or in pain.

In other words, the missing discipline is not more model research. It is user experience design, tightly integrated with clinical expertise and product engineering.

AI is ready. Patients are not.

The 8VC piece lays out a clear taxonomy of healthcare AI, from level 0 administrative tools and level 1 assistive systems to level 2 supervised autonomous care and level 3 fully autonomous AI doctors.

Today, level 0 is already big business. AI scribes, revenue cycle automation, and scheduling agents generate over 1 billion dollars in annual revenue. They make the system, not the patient, feel more efficient.

Levels 1 through 3 are where the real transformation lives:

  • AI that can coach, educate, and guide patients between visits
  • AI that can adjust meds, titrate doses, or triage symptoms under supervision
  • AI that can autonomously handle low-risk care, refills, and basic urgent care

The 8VC team is right that regulation, licensing, and reimbursement need to change for those levels to become legal and investable.

Even if all of that were fixed tomorrow, most patients would still not know how to use an “AI doctor” safely, when to trust it, when to escalate, or how to fit it into their lives. They would bounce, churn, or quietly revert to the familiar frustration of phone queues and waiting rooms.

Technical capability without a designed experience is like handing a jet cockpit to someone who only wanted an easier bus ride.

Healthcare is emotional. The interface must be too.

The 8VC article focuses on macro forces: wage growth crushed by insurance premiums, thirty-day waits for appointments, burnout among physicians, and intergenerational unfairness as Millennials and Gen Z shoulder the cost of Medicare.

Underneath all of that is something you can only see at the individual level.

  • The person awake at three in the morning, spiraling about chest tightness
  • The parent wondering if their child’s fever is something they can watch at home
  • The worker refreshing bank apps because a surprise medical bill just arrived
  • The accident survivor who cannot sleep and turns to an AI therapist because every human appointment is booked for weeks

These people are not “users” in the normal tech sense. They are anxious, ashamed, overwhelmed, hopeful. Their emotional state is part of the interface.

That means an AI doctor or AI therapist can be clinically correct and still fail if the experience:

  • Uses language that feels cold or dismissive
  • Overwhelms with options instead of guiding
  • Fails to explain uncertainty and limitations
  • Makes it unclear when a human must step in
  • Loses continuity so the person feels like they start from zero every time

In mental health especially, tone and pacing are not cosmetic. They are part of the therapeutic outcome.

So when we talk about “autonomous AI doctors,” we are really talking about an entire experience layer around the model: the way it introduces itself, asks questions, remembers, follows up, escalates, and acknowledges human feelings.

That is user experience work.

Product and engineering cannot do this alone

The 8VC framework recognizes that “healthcare AI” is about doctoring, not just generic pattern recognition. It imagines AI coaches, advocates, navigators, and supervised autonomous systems that manage conditions like heart failure at home.

For those systems to work in the real world, you cannot just throw a model into a chat window and hope.

You need:

  • Clinicians who understand care pathways, edge cases, and risk
  • Therapists and behavioral psychologists who understand how people change
  • Product managers who design flows around real life, not idealized compliance
  • Engineers who can implement safety rails, memory, and integrations
  • UX researchers who sit with patients and watch where they hesitate or drop off

In our own work building an AI CBT coach, we have seen how much friction comes from tiny experience details:

  • If the AI asks “How can I help?” many people freeze. If it offers three structured options and one freeform one, they feel guided, not interrogated.
  • If it treats every session as a blank slate, users feel like it does not care. If it remembers patterns and reflects them back, they feel seen.
  • If safety messages feel like legal disclaimers, people click away. If safety is woven into the conversation with care and clarity, they stay and actually listen.
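As one concrete illustration of the first point, here is a minimal sketch, in TypeScript and with entirely hypothetical names, of what a guided session opener could look like: three structured choices plus one freeform path instead of a blank “How can I help?”

```typescript
// Minimal sketch of a session opener that guides rather than interrogates.
// All names here (SessionOption, SessionOpener, buildOpener) are hypothetical,
// not taken from any real product.

interface SessionOption {
  id: string;
  label: string;
}

interface SessionOpener {
  greeting: string;
  options: SessionOption[];
  freeformPrompt: string;
}

// Three structured choices plus one open-ended path, instead of an empty text box.
function buildOpener(userName: string, lastTopic?: string): SessionOpener {
  const options: SessionOption[] = [
    {
      id: "continue",
      label: lastTopic ? `Keep working on ${lastTopic}` : "Pick up where we left off",
    },
    { id: "new_worry", label: "Talk through something new that is on my mind" },
    { id: "exercise", label: "Do a short CBT exercise (about 5 minutes)" },
  ];

  return {
    greeting: `Hi ${userName}. Good to see you again.`,
    options,
    freeformPrompt: "Or just tell me in your own words what today feels like.",
  };
}
```

The specific options do not matter; what matters is that the structure does the guiding, so a distressed person never has to stare at an empty box.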

None of these problems are solvable with more GPUs. They are solved by putting clinicians and UX people in the same room as engineers, working on scripts, guardrails, and flows together.

The missing discipline in healthcare AI: User experience

Traditional healthcare has treated experience as a side effect. Patients tolerate confusing portals, waiting rooms, and rushed visits because there is no alternative.

Autonomous AI changes that power dynamic. If the experience feels unsafe, untrustworthy, or confusing, people can simply close the tab.

That means experience becomes part of clinical effectiveness.

Good healthcare AI experience design needs to answer questions like:

  • How does the system introduce what it can and cannot do, without panic or hype?
  • How does it build trust over time, not just in one impressive answer?
  • How does it show boundaries: “this is safe for AI, this requires a human now”?
  • How does it give agency back to the patient, in line with the 8VC theme of restoring liberty and dignity to people who are often sidelined in medical settings?

User experience in this context is not just nicer fonts and friendlier buttons. It is the discipline of designing how a person moves through their care journey with AI at their side:

What expectations we set.
What we remember.
What we surface and when.
How we respond when something goes wrong.

Safety scaffolding is part of the experience

The 8VC paper describes levels 2 and 3 as supervised and autonomous systems that can diagnose, prescribe, and triage within a defined scope of practice, under new regulatory frameworks and potentially even their own “AI NPIs.” 

That is the infrastructure side.

On the product side, you need a layered architecture that might look like this:

  • A language layer where the model generates responses
  • A clinical logic layer that constrains content to evidence-based protocols
  • A safety and memory layer that checks for risk, consistency, and escalation needs
  • An experience layer that decides how to present information, ask questions, and follow up in a human-friendly way
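To make that layering concrete, here is a minimal TypeScript sketch of how the four layers might hand off to each other. Every interface and function name in it is an assumption for illustration, not a reference to any real system.

```typescript
// Minimal sketch of the layered flow described above. All names are hypothetical.

interface DraftResponse { text: string }                              // language layer output
interface ClinicalCheck { withinProtocol: boolean; notes: string[] }  // clinical logic layer
interface SafetyCheck { riskDetected: boolean; escalate: boolean }    // safety and memory layer
interface PatientMessage {                                            // experience layer output
  text: string;
  nextStep?: string;
  humanHandoff?: boolean;
}

async function respondToPatient(
  userInput: string,
  generate: (input: string) => Promise<DraftResponse>,
  checkProtocol: (draft: DraftResponse) => ClinicalCheck,
  checkSafety: (input: string, draft: DraftResponse) => SafetyCheck,
  present: (draft: DraftResponse, clinical: ClinicalCheck, safety: SafetyCheck) => PatientMessage,
): Promise<PatientMessage> {
  // 1. Language layer: the model drafts a response.
  const draft = await generate(userInput);

  // 2. Clinical logic layer: constrain the draft to evidence-based protocols.
  const clinical = checkProtocol(draft);

  // 3. Safety and memory layer: check for risk, consistency, and escalation needs.
  const safety = checkSafety(userInput, draft);

  // 4. Experience layer: decide how to present, ask, and follow up in a human-friendly way.
  return present(draft, clinical, safety);
}
```

The experience layer sits last on purpose: it sees both the clinical constraints and the safety signals before deciding what the person actually reads.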

If an AI system detects worsening depression or signs of self-harm, it is not enough to have an internal trigger that says “escalate.” The experience layer must handle that moment with enormous care.

For example:

  • Acknowledge feelings explicitly
  • Explain why a recommendation is changing
  • Offer simple, concrete next steps
  • Clearly introduce human options: hotlines, live chat, in person care
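Here is a minimal sketch of what that moment could look like inside the experience layer, again with hypothetical names and wording, following the four steps above:

```typescript
// Minimal sketch of how an experience layer might turn an internal "escalate" signal
// into a careful, human-friendly message. Names and wording are hypothetical.

interface EscalationContext {
  userFirstName: string;
  reason: string;          // e.g. "your answers suggest things have felt heavier lately"
  humanOptions: string[];  // e.g. a hotline, live chat, in-person care
}

function composeEscalationMessage(ctx: EscalationContext): string[] {
  return [
    // 1. Acknowledge feelings explicitly.
    `${ctx.userFirstName}, thank you for being honest with me. It sounds like this has been really hard.`,
    // 2. Explain why the recommendation is changing.
    `I'm suggesting something different today because ${ctx.reason}, and I want you to have more support than I can give on my own.`,
    // 3. Offer simple, concrete next steps.
    `Here is what I'd suggest right now, one small step at a time. I'll stay with you through each of them.`,
    // 4. Clearly introduce human options.
    `You can reach a person directly through any of these: ${ctx.humanOptions.join(", ")}. Would you like me to open one of them for you?`,
  ];
}
```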

This is where user experience, clinical practice, and safety engineering intersect. The result is not merely “compliance.” It is a feeling of being looked after, even when the system is constrained.

From macro policy to micro moments

One of the strongest contributions of the 8VC article is its policy roadmap:

  • CMMI models for assistive AI reimbursement
  • Preemption of fragmented state disclosure laws
  • Reform of FDA approval benchmarks and PCCPs
  • State Medical AI Practice Acts to license AI systems
  • Social Security Act changes so Medicare can pay autonomous AI providers directly 

This roadmap is necessary. Without it, AI remains trapped at the administrative edge while patients and clinicians bear the same old burdens.

The risk is that we succeed at the macro level and fail at the micro level.

We might win the legislative fight, secure reimbursement, certify models, and still ship experiences that patients do not want, do not understand, or do not trust.

To close that gap, we need to treat the moments that matter in the product as seriously as we treat statutes and codes.

  • The moment a worried parent first opens the app
  • The moment an AI therapist suggests a safety plan
  • The moment a chronic patient decides whether to follow autonomous dosing advice
  • The moment a rural patient realizes they do not have to drive two hours for every minor concern

These are product and experience questions as much as policy and engineering questions.

The teams that will win

The companies that thrive in the world 8VC sketches will not be the ones with the most complex models. They will be the ones who bring the right disciplines together early and treat experience as core infrastructure, not a late layer of polish.

Those teams will:

  • Put clinicians and patients in product review meetings
  • Give UX and behavioral scientists real authority over flows and copy
  • Build feedback loops where actual outcomes and satisfaction data reshape the experience over time
  • Design for dignity and clarity, not just engagement and time on site

In other words, they will treat user experience as medicine.

Humane AI or nothing

Healthcare AI can ease physician burnout, bend the cost curve, and give working class and rural Americans access to care that was previously out of reach, as 8VC rightly emphasizes. 

But that future will not arrive automatically because models are smart enough or policies are modern enough.

It will arrive if, and only if, we build AI doctors and AI therapists that feel:

  • Understandable
  • Trustworthy
  • Emotionally attuned
  • Respectful of human limits and needs

That is the work of user experience, informed by clinical insight and powered by engineering.

Autonomous AI will not replace doctors and therapists. It will either become the most humane first line of support people have ever had, or it will sit on the shelf while the old system grinds on.

The missing discipline that will decide between those futures is not more model training. It is whether we take the experience of the patient seriously enough to design for it from day one.