Built for Nobody
Why the most sophisticated healthcare AI means nothing if real people can’t use it — and whose fault that actually is.
ThetaRho Team · May 2026 · 6 min read

There is a particular kind of software that gets built in healthcare. It has every feature someone asked for. It passed every compliance review. It survived the procurement process, the security audit, and the vendor evaluation. It got licensed, deployed, and announced in a press release.
And then nobody uses it.
Not because technology failed. Not because the clinical staff are resistant to change. But because somewhere between the requirements document and the real world, the system lost track of the person who would have to open it every morning.
This is not a new problem. But in the era of AI, it has become an existential one. Healthcare organizations are pouring resources into AI-powered tools built on data architectures that barely work, surfaced through interfaces that nobody designed for humans, solving problems that someone in a conference room decided were the important ones.
The debate over whether AI can match human doctors in diagnostic accuracy misses the point. Even superhuman capability means nothing if it can’t fit into real workflows used by real people under real time pressure.
The Question Nobody Is Asking
Healthcare technology discussions tend to organize themselves into two camps. The first believes that artificial general intelligence will transform medicine within a decade, rendering large portions of clinical work, and by extension people, obsolete. The second believes that current AI systems cannot be trusted because they hallucinate, lack explainability, and cannot be held accountable for errors.
Both camps are missing the question that determines whether any of this works:
What do the users actually need?
It sounds almost embarrassingly simple. But this question — relentlessly and honestly pursued — is the difference between software that gets used and software that gets shelved. It is why ServiceNow’s AI features generate over a billion dollars in annual revenue. They did not build the most sophisticated AI. They built AI that solved real problems for real people doing real work. The sophistication came later, as a product of understanding the user, not as a precondition for it.
Healthcare AI has largely failed to ask this question with the same seriousness. The result is a category of products that are technically impressive, clinically defensible, and practically abandoned.
What They’re Actually Working With
To understand what healthcare workers need from AI, you first need to understand what they are currently working with. The picture does not flatter the industry that is supposed to be transforming it.
Epic, the dominant EHR system in US healthcare, was designed to be comprehensive. It is. It contains almost everything about a patient somewhere within its architecture. The problem is the word “somewhere”. The system presents itself as a labyrinthine landscape of tabs — multiple layers, developed by separate teams, with no unified consideration for the cognitive load imposed on the person navigating them.
Dr. Ilana Yurkiewicz, a Stanford physician, wrote a book called “Fragmented” documenting precisely this phenomenon. Her key insight was pointed: the fragmentation problem is not primarily about data being siloed across organizations. It is about how difficult it is to find information about a single patient within a single organization. The data exists. The system just makes you earn it.
52 clicks to order Tylenol in Epic — not a technology limitation, a design failure
The fifty-two clicks to order Tylenol is not a technology limitation. It is what happens when a system is built feature-by-feature, over decades, by teams solving their own specific problems without coordinating around the person who must use the whole thing. Every dropdown, every confirmation dialog, every status field made sense to somebody at some point. The user is an afterthought accumulated from a thousand reasonable decisions.
The consequence is measurable. Two-thirds of the tasks healthcare workers perform in EHR systems are retrieval tasks — not clinical judgment, not treatment decisions, not patient interaction. Just trying to find information that already exists somewhere in the system. A gastroenterologist receives a message about blood in a patient’s stool and immediately needs to know: did this patient have an ulcer? A recent procedure? What medications are they on? That information is in the system. Getting to it is work.
Real Voices, Real Costs
When you talk to the people using these systems — not the people procuring them, not the people presenting them at conferences, but the people opening them at 6 a.m. before their first appointment — three themes emerge with uncomfortable consistency.
“I wish the information I need about patients came to me in the form that I want, as opposed to me having to go look for it.”
— General Surgeon, Kaiser Permanente
“Burnout is a top problem. My inbox is killing us.”
— Head of Obstetrics, major health system
“I get a message from a patient saying they have blood in their stools. Immediately I need to find out why — did they have an ulcer? A recent procedure? Those retrieval tasks are incredibly hard.”
— Gastroenterologist
What is striking about these statements is not their emotional register but their specificity. None of these clinicians are asking for AI to make diagnoses. They are asking for information to come to them in the format they need, when they need it, without requiring them to go look for it. That is a data problem dressed in a UX problem dressed in an infrastructure problem.
The burnout statistics make the stakes concrete. Physician burnout rates in the United States have exceeded fifty percent in recent surveys. Studies consistently identify EHR burden as a primary driver. The technology that was supposed to make medicine more efficient has, for many practitioners, made it more exhausting.
The Adoption Equation
There is a reason user focus matters beyond the intrinsic value of building things people can use. Users sit at the intersection of technology and economics in a way that is frequently underestimated by product teams.
For a healthcare organization to invest in new technology — to sign the contract, complete the integration, train the staff, absorb the disruption — the people using it need to say it makes their jobs better. Not measurably better in a controlled trial. Actually better, in the way that gets communicated up the chain and into the next budget cycle.
This is why the teams succeeding in healthcare AI are not building standalone applications that require new logins and new workflows. They are embedding AI directly into the systems healthcare workers already use, surfacing information within familiar interfaces, and automating workflows without requiring behavioral change as a prerequisite for value.
The friction of adoption is not a marketing problem. It is a design problem. And it starts long before the product ships.
The organizations that will transform healthcare are not the ones with the most sophisticated algorithms. They’re the ones asking the most basic question: What do users actually need? And then building solutions that genuinely answer it.
Built for Nobody: The Structural Diagnosis
The “built for nobody” problem in healthcare technology is not the result of negligence or indifference. It is structural. Healthcare software procurement favors the large, the compliant, the defensible. The decision-makers are rarely the end users. The timelines are long and the switching costs are enormous.
In this environment, the incentive is to build systems that pass the evaluation criteria — not systems that delight the people who must use them. You can win a procurement process without a single nurse or physician having tried the product in a real workflow. You can deploy enterprise software across a health system without ever asking a clinician what they actually need to get through their day.
The result is predictable: software optimized for the people who buy it rather than the people who use it. Features that make sense in a requirements document but create friction at the point of care. Workflows that reflect how the procurement team thinks care should work, not how it actually does.
AI does not automatically solve this. An AI system built on top of a poorly designed workflow just adds another layer of complexity to navigate. An AI that requires its own interface, its own login, its own mental model creates adoption drag that no amount of accuracy improvement can overcome.
The question is not whether the AI works. The question is whether it fits into the life of the person who must use it.
What Getting This Right Actually Requires
Solving the “built for nobody” problem requires changes in every layer of how healthcare AI gets built and deployed. At minimum, it requires:
- Genuine user research before requirements — not surveys sent to administrators, but direct observation of clinical workflows, with the people who do the work, in the environments where they do it.
- Embedding over standalone — solutions that live inside existing EHR systems rather than requiring new interfaces, new logins, and new habits.
- Proactive information delivery — relevant patient data surfaced before appointments begin, formatted for the clinical context, without requiring retrieval.
- Inbox triage and drafting — automated handling of routine patient communications that currently consume hours of clinical time that should be spent on patient care.
- Reduction of retrieval burden — if two-thirds of EHR work is retrieval, and retrieval is what AI is demonstrably good at, this is the most direct value proposition available.
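The “proactive information delivery” and “reduction of retrieval burden” items above are concrete enough to sketch. Here is a minimal illustration in Python, assuming a hypothetical in-memory chart record (a real system would query an EHR integration layer, such as a FHIR endpoint; none of these names are Epic’s). The point is that the retrieval the gastroenterologist described (ulcer history, recent procedures, medications) is assembled for the clinician before the encounter rather than clicked for during it.

```python
from dataclasses import dataclass

# Hypothetical in-memory chart. A production version would pull these
# fields from an EHR integration layer; this sketch only shows the
# "bring the information to the clinician" pattern.

@dataclass
class Chart:
    patient: str
    problems: list
    medications: list
    recent_procedures: list

def previsit_brief(chart: Chart, reason_for_visit: str) -> str:
    """Assemble the facts a clinician would otherwise click through
    multiple tabs to retrieve, as a single pre-appointment summary."""
    def fmt(items):
        return ", ".join(items) or "none recorded"
    lines = [
        f"Pre-visit brief: {chart.patient}",
        f"Reason for visit: {reason_for_visit}",
        f"Active problems: {fmt(chart.problems)}",
        f"Medications: {fmt(chart.medications)}",
        f"Recent procedures: {fmt(chart.recent_procedures)}",
    ]
    return "\n".join(lines)

chart = Chart(
    patient="J. Doe",
    problems=["gastric ulcer (2024)"],
    medications=["naproxen", "omeprazole"],
    recent_procedures=["colonoscopy (Mar 2026)"],
)
print(previsit_brief(chart, "blood in stool"))
```

The design choice worth noting is that nothing here requires a new interface: a brief like this can be surfaced inside the EHR the clinician already opens, which is the embedding principle from the list above.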
None of these are technically exotic. None require capabilities that do not exist today. What they require is a genuine commitment to building for the person who opens the system at 6 a.m., not just for the person signing the contract.
Healthcare AI will not transform medicine by being accurate. Accuracy is the floor. It will transform medicine by being usable — by fitting into real workflows, reducing real burden, and making the jobs of real people meaningfully better.
The technology exists. The data exists — though its quality is a separate and significant problem. What has been missing is the design discipline to build for the person who has to live with the result.
That discipline is not complicated. But it requires asking a question the industry has largely avoided:
Not “What can this AI do?” But “What does this person need?”
Next in the Clarity Protocol
Buried Alive in Data
Healthcare generates more data than any industry on earth — more than finance, more than manufacturing, more than defense. A single patient encounter can produce thousands of discrete data points. A hospital system generates petabytes a year.
And almost none of it gets used.
97% of healthcare data is never analyzed. The industry that most desperately needs intelligence is drowning in information it cannot access. In our next post, we'll unpack why — and what it would take to change that.
The Clarity Protocol is a series by ThetaRho AI on the infrastructure, data, and design realities of healthcare artificial intelligence. Honest conversations. Zero hype. Published at thetarho.ai.
