A recent New York Times article explores how AI-powered chatbots are increasingly being used by people seeking medical information — but warns that these tools are not yet reliable substitutes for professional medical care. According to research cited by the NYT, many adults now turn to AI programs to ask about symptoms and health problems, yet studies show that the advice these chatbots provide can be inconsistent and sometimes inaccurate, especially when users don’t supply complete or precise details about their condition.
Researchers involved in a large randomized study found that while advanced AI models such as GPT-4o and Meta’s Llama 3 can in principle identify diseases with high accuracy when given complete information, real-world interactions fall short. In practice, people using these systems often misinterpret chatbot responses or fail to provide all relevant symptoms, leading to misdiagnoses or incorrect recommendations about next steps, such as whether to seek emergency care. Outcomes in these user-driven scenarios were no better than if people had relied on a simple internet search for health information.
The article also highlights broader concerns among physicians and health policy experts about the rapid spread of AI medical tools without sufficient oversight. Some AI-based health apps have even been removed from major app stores for making misleading claims about diagnoses or treatment advice, and doctors emphasize that context, clinical judgment, and patient history matter in ways that chatbots currently can’t fully capture.
Despite these limitations, the NYT notes that people are drawn to AI for health guidance because it’s quick, available around the clock, and often more conversational than traditional search engines — especially in health systems where access to care feels fragmented or rushed. The article suggests that the future may lie in integrating AI tools as assistants for trained clinicians rather than replacements for them, helping improve workflow and patient engagement without sidelining expert judgment.