We Digitized Healthcare. We Just Didn’t Make It Any Better.
How the technology that was supposed to fix American healthcare became part of the problem
ThetaRho Team · April 2026 · 6 min read
In the last post, we laid out a number that’s hard to shake: the United States spends 17.5% of its GDP on healthcare — one and a half times what comparable nations spend — and still ranks near the bottom on access, equity, and outcomes. Great clinicians. Broken system around them.
The obvious question is: why hasn’t technology fixed this? We’ve had decades of investment, a generation of startups, and more venture capital flowing into health tech than almost any other sector on earth.
The answer starts with a story about the most ambitious digitization project in healthcare history — and what happened when it was declared a success.
The Trillion-Dollar Bet on Digital Records
HIPAA, passed in 1996, opened the door. The HITECH Act of 2009 kicked it wide open, committing roughly $36 billion in federal incentives to push healthcare providers onto Electronic Health Records. The logic was compelling and, at the time, hard to argue with: patient information was still trapped on paper, clinical decisions were being made with incomplete data, and the administrative waste from manual processes was staggering.
By 2015, the mission was declared complete. Over 96% of hospitals and the majority of physician practices had adopted EHR systems. America had digitized its healthcare infrastructure.
And then something unexpected happened: nothing got better.
Healthcare spending as a percentage of GDP continued to climb. Physician burnout accelerated. Administrative overhead kept expanding. The promised efficiency gains never materialized at scale. What went wrong?
Digitization Is Not Transformation
Here is the uncomfortable truth about the EHR rollout: we took a broken paper-based workflow and encoded it, exactly as it was, into software.
EHRs were not designed around the physician’s actual workflow. They were designed around billing codes, regulatory compliance, and documentation requirements that protect hospitals from liability. The incentive structure was never “help clinicians deliver better care.” It was “create a defensible, auditable record that satisfies payers and regulators.”
The result is a system that physicians describe with remarkable consistency: it’s like doing your actual job and then doing a second job documenting the first one.
A study published in the Annals of Internal Medicine found that for every hour physicians spent with patients, they spent nearly two hours on EHR documentation. A 2022 report found that physicians were spending 28% of their workday on direct patient care — and 49% on desk work, much of it EHR-related. The technology that was supposed to free up clinical time created new demands on it.
Built for Billing, Not for Healing
The deeper problem is architectural. EHRs are fundamentally billing and compliance systems with a clinical interface bolted on. The data they capture is organized around what needs to be documented for reimbursement, not what a physician needs to understand a patient’s condition.
Consider what actually happens when a doctor opens a patient chart:
The information they need — current medications, recent lab trends, active diagnoses, relevant history — is scattered across dozens of tabs, buried under years of notes, fragmented across visits and care settings. If the patient has seen multiple providers on different systems, the picture is even more incomplete. Lab results from a specialist visit might live in a different EHR that doesn’t talk to this one.
The physician doesn’t need more data. They need the right data, organized the right way, surfaced at the right moment. EHRs, as they were built, do none of these things reliably.
This isn’t a criticism of the engineers who built these systems. They were solving the problem they were asked to solve. The problem they were asked to solve was documentation and billing compliance. Clinical usability was secondary.
The Interoperability Mirage
One of the promises that accompanied EHR adoption was interoperability — the idea that once everything was digital, information would flow freely across the healthcare system. A patient’s records from their primary care physician would seamlessly reach the emergency department. Lab results from one hospital would be visible to a specialist at another.
The reality has been considerably less seamless.
Despite years of federal mandates, data standards, and interoperability regulations — including the 21st Century Cures Act’s information blocking provisions — patient data still routinely gets stuck at organizational boundaries. Health systems have been economically incentivized to retain patients within their networks, which means sharing data with competitors has never been in their interest, even when it would benefit the patient.
The FHIR standard (Fast Healthcare Interoperability Resources) has created genuine progress at the technical layer. But technical capability and organizational willingness are two different things. You can build a pipe. You cannot force anyone to open the valve.
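To make the "pipe" concrete: FHIR exposes clinical data as typed JSON resources with a standard REST API, so that, when a server actually opens the valve, pulling structured facts out is straightforward. Here is a deliberately minimal sketch that parses an illustrative FHIR R4 search Bundle of MedicationRequest resources (the sample data is invented for the example; a real server would return a Bundle like this from a query such as `GET [base]/MedicationRequest?patient=123&status=active`):

```python
import json

# A minimal, illustrative FHIR R4 searchset Bundle. Real Bundles carry far
# more metadata; only the fields this sketch reads are included.
bundle_json = """
{
  "resourceType": "Bundle",
  "type": "searchset",
  "entry": [
    {"resource": {"resourceType": "MedicationRequest",
                  "status": "active",
                  "medicationCodeableConcept": {"text": "Lisinopril 10 mg"}}},
    {"resource": {"resourceType": "MedicationRequest",
                  "status": "stopped",
                  "medicationCodeableConcept": {"text": "Metformin 500 mg"}}}
  ]
}
"""

def active_medications(bundle: dict) -> list[str]:
    """Collect the display text of every active MedicationRequest in a Bundle."""
    meds = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if (resource.get("resourceType") == "MedicationRequest"
                and resource.get("status") == "active"):
            meds.append(resource["medicationCodeableConcept"]["text"])
    return meds

print(active_medications(json.loads(bundle_json)))  # ['Lisinopril 10 mg']
```

The technical layer, in other words, is no longer the hard part. The hard part is that the query only works against data a competing health system has agreed to expose.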
The Spending Number That Should Have Been a Warning
Here is the data point that, in retrospect, should have prompted a reckoning: U.S. healthcare spending as a percentage of GDP was approximately 13.3% in 2000. By 2015 — the year EHR adoption hit 96% — it had climbed to 17.4%. By 2023, it was approaching 18%.
The most significant technology investment in healthcare history ran in parallel with the continued expansion of the very problem it was supposed to address.
This is not an argument that EHRs made things worse. There are genuine benefits: medication error reduction, improved care coordination within networks, better data availability for population health analytics. These are real.
But the original claim — that digitization would bend the cost curve, reduce administrative burden, and improve outcomes — has not been borne out at a system level. We got digital records. We didn’t get a better healthcare system.
Why This Matters for AI
If you’re wondering why this history matters for a series about AI in healthcare, the answer is this: we are about to repeat the same mistake.
The healthcare AI conversation today is dominated by tools that make the existing workflow faster. AI scribes that draft clinical notes faster. AI-powered search that surfaces information from EHRs faster. AI algorithms that flag billing codes faster.
These are useful tools. Some of them meaningfully reduce physician burden. But speed isn’t the problem. The workflow itself is the problem.
An AI system built on top of a fragmented, billing-optimized, interoperability-challenged EHR infrastructure inherits every limitation of that infrastructure. A retrieval system that can search a physician’s chart more quickly is still searching a chart designed for documentation, not for clinical decision-making. Garbage in, garbage out — just faster.
The false fix is doing the same thing again and calling it a solution because the tool is more sophisticated.
What Genuine Transformation Requires
There are AI applications in healthcare that are genuinely transformative. They share a common characteristic: they work at the data layer before the clinical interface. They don’t just retrieve existing records more efficiently — they restructure what information is available, how it’s organized, and what a physician can actually ask of it.
That means longitudinal data models that track a patient’s health trajectory rather than just documenting individual encounters. It means multi-source integration that brings together EHR records, lab systems, pharmacy data, and wearable inputs into a coherent picture. It means AI that answers questions a physician actually has — “what’s changed since this patient’s last visit?” — rather than questions that were useful for billing.
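As a deliberately simplified sketch of what answering "what's changed since this patient's last visit?" looks like at the data layer (the record structure, field names, and sample values here are all hypothetical, and a real system would merge these snapshots from EHR, lab, and pharmacy feeds):

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    """A patient's state at one visit, merged from multiple sources."""
    medications: set[str] = field(default_factory=set)
    diagnoses: set[str] = field(default_factory=set)
    labs: dict[str, float] = field(default_factory=dict)  # test name -> value

def whats_changed(prev: Snapshot, curr: Snapshot) -> dict[str, list[str]]:
    """Answer 'what's changed since the last visit?' as a structured diff."""
    return {
        "started": sorted(curr.medications - prev.medications),
        "stopped": sorted(prev.medications - curr.medications),
        "new_diagnoses": sorted(curr.diagnoses - prev.diagnoses),
        "lab_deltas": [
            f"{name}: {prev.labs[name]} -> {value}"
            for name, value in sorted(curr.labs.items())
            if name in prev.labs and prev.labs[name] != value
        ],
    }

last = Snapshot({"lisinopril"}, {"hypertension"}, {"HbA1c": 6.1})
now = Snapshot({"lisinopril", "metformin"},
               {"hypertension", "type 2 diabetes"},
               {"HbA1c": 7.4})
print(whats_changed(last, now))
```

The point of the sketch is the shape of the question, not the code: the answer requires a longitudinal model that can compare visits, which is exactly what an encounter-by-encounter billing record does not give you.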
This is harder to build than a faster documentation tool. It requires working with the actual complexity of healthcare data — the inconsistencies, the gaps, the competing standards, the organizational silos. But it’s the only version of healthcare AI that addresses the structural problem rather than adding a more efficient process on top of it.
The Honest Accounting
The EHR era gave us digital records. It did not give us usable clinical data. It gave us documentation. It did not give us intelligence. It gave us compliance. It did not give us care.
None of this means the effort was wasted. The data exists now. It didn’t before. That matters enormously. What we do with that data — how we structure it, how we make it accessible, how we build AI systems that can reason over it — is the question that determines whether the next chapter of healthcare technology is a genuine fix or just another false one.
The irony of the EHR era is that its greatest contribution may not be what it delivered but what it made possible: a foundation of digital clinical data that, if properly structured and made accessible, can finally enable the transformation that digitization alone never achieved.
Whether the healthcare AI industry builds on that foundation intelligently — or just builds faster interfaces to the same broken infrastructure — is what we’ll take up next.
This post is part of The Clarity Protocol, ThetaRho’s ongoing series on AI, clinical workflow, and healthcare data. The next piece looks at why healthcare AI keeps failing to reach physicians — and why the problem has less to do with the technology than with who was (and wasn’t) in the room when it was designed.
