At the HLTH 2025 conference, held at the Venetian Resort in Las Vegas, artificial intelligence (AI) dominated nearly every corner of the show floor. Startups pitched agentic-AI solutions with bold claims like “Your data. Our agents. Real outcomes,” and Big Tech giants such as OpenAI, Anthropic, Google, and Microsoft loomed large in both booths and panels. At the same time, a palpable sense of fatigue and skepticism grew among attendees as the hype began to feel repetitive, and many questioned the real-world readiness of these solutions.
Despite the buzz, the sheer volume of “AI solutions for everything” left many healthcare professionals unimpressed. One health-system executive, speaking anonymously, remarked: “Everyone is framing themselves as the most generic, enterprise-wide agentic AI solution. It makes me want to vomit.” The core complaint: many offerings lacked clear evidence of delivering meaningful results today rather than promises of future outcomes.
Investment activity at the conference told another side of the story: while digital-health funding remained robust — startups raised about $6.4 billion in the first half of 2025, with 62% going to AI companies — there was growing unease about market saturation, competitive pressure from established players, and whether the return on investment would materialize. Incumbents such as Epic Systems (which plans its own AI tools) and large AI platforms entering healthcare added to startups’ unease.
Amid the spectacle and hype, the conference also pointed to a shifting mindset: a clearer focus on responsible AI in healthcare. Panels highlighted the need for safety, validation, and trustworthiness — for example, organizations launching benchmarks for mental-health chatbots and validation labs for cardiovascular AI. The takeaway: as AI cements its place in healthcare, the field is moving from “can we do this?” to “how must we do this safely and effectively?”