The Real Opportunity for AI in Healthcare Isn’t Where You Think

Anne Quaadgras

At a recent HSI seminar, Shani Fargun, VP of Healthcare at StackAI, made a deliberately unglamorous case for the future of artificial intelligence in healthcare. Not predictive diagnostics. Not breakthrough clinical tools. Something far less exciting on the surface: administrative work, also known as "unsexy AI."

For years, the dominant narrative around AI in healthcare has focused on its potential to transform clinical decision-making. Yet, according to Shani, the most immediate and measurable value lies elsewhere entirely. The real opportunity, she argued, is in reducing the operational friction that defines modern healthcare systems.

And there is a lot of friction.

Despite decades of digitization, healthcare operations remain deeply fragmented. Data is spread across electronic health records, call centers, billing systems, and internal tools that rarely communicate with one another. Much of the actual work still relies on manual processes and unstructured information, such as emails, PDFs, and call transcripts. In one of the more striking reminders of how little has changed, the majority of U.S. hospitals still rely on fax machines to transmit medical records.

If that sounds like an industry ripe for automation, it is. But the reality is more complicated.

One of the more surprising points from the seminar was not that healthcare organizations are experimenting with AI, but how little they are getting out of it. While a large majority have launched AI initiatives, roughly 95% fail to deliver meaningful returns on investment or progress beyond the pilot stage. The issue, it turns out, is not a lack of technical capability. It is everything else.

Liability concerns remain unresolved. If an AI system makes a mistake, accountability is unclear, leading many organizations to keep humans firmly in the loop. Regulatory constraints further complicate deployment, particularly when sensitive patient data is involved. And perhaps most quietly, there is resistance from within. Staff who are asked to adopt AI tools often see them less as productivity enhancers and more as potential replacements.

Taken together, these barriers explain why so many promising pilots stall before they scale.

What distinguishes the approach described in the seminar is a shift away from broad, tool-driven experimentation toward a more disciplined, workflow-first strategy. Instead of asking what AI can do, organizations are encouraged to start by identifying where inefficiencies lie: high-volume, manual processes with clear ownership and measurable outcomes.

This is where agentic AI comes into play. Unlike generative tools that produce text or summaries, these systems are designed to take action. They operate within defined scopes, pull from specific data sources, and execute discrete tasks across systems. In practice, that might mean drafting appeal letters, validating billing documentation, summarizing patient histories, or responding to routine patient inquiries.
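To make "defined scope, specific data sources, discrete task" concrete, here is a minimal sketch of one such task, validating billing documentation against explicit rules. Everything in it is an illustrative assumption (the field names, the rules, the function name); it is not drawn from the seminar or from any particular vendor's product:

```python
# Hypothetical sketch of a narrowly scoped agentic task: validate a
# billing claim against explicit, auditable rules. All field names and
# rules here are invented for illustration.

REQUIRED_FIELDS = {"patient_id", "procedure_code",
                   "diagnosis_code", "date_of_service"}

def validate_claim(claim: dict) -> dict:
    """Check one claim record and return a structured result.

    The task has clear inputs (a claim record), clear rules (required
    fields), and clear outputs (a pass/fail flag plus what is missing),
    with a human-review flag to keep people in the loop on failures.
    """
    missing = sorted(REQUIRED_FIELDS - claim.keys())
    return {
        "claim_ok": not missing,
        "missing_fields": missing,
        "needs_human_review": bool(missing),  # human stays in the loop
    }
```

The point of the sketch is less the code than the shape: a task this narrow can be measured, monitored, and owned, which is exactly what the workflow-first strategy asks for before any model sophistication enters the picture.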

None of this is particularly glamorous. Which is precisely the point.

Another counterintuitive takeaway is that the least visible applications of AI often produce the greatest financial impact. Administrative workflows, long dismissed as back-office overhead, are now emerging as the highest-return use cases. One estimate cited in the seminar suggests that a payer serving three million members could increase annual income by hundreds of millions of dollars simply by improving automation and interoperability.

This emphasis on the “unsexy” stands in contrast to earlier industry efforts. High-profile initiatives, such as IBM’s Watson in healthcare, struggled in part because they focused on ambitious clinical applications rather than the more tractable operational problems beneath them.

There is also a practical lesson in how these systems are being implemented. The most effective deployments tend to start small. Rather than attempting to overhaul entire workflows, organizations begin with narrow, low-complexity tasks and expand incrementally. Success depends less on the sophistication of the model and more on clarity around inputs, rules, and outputs, along with consistent monitoring and evaluation.

If there is a broader takeaway, it is that the path to meaningful impact in healthcare AI may be more incremental and less visible than expected. The systems that succeed are unlikely to be the ones that attract the most attention. They will be the ones that quietly remove friction, one workflow at a time.

Not especially glamorous. But, as the seminar made clear, potentially far more effective.