Sometimes it’s unethical for a doctor not to use AI, ethicists argue

A new opinion piece in STAT argues that as artificial intelligence (AI) becomes demonstrably more accurate than human clinicians at certain tasks, declining to use it could itself be unethical in medical practice. Authors Morish Shah and Ami Bhatt point to evidence that some AI tools — for example, systems that interpret screening mammograms — have matched or outperformed expert radiologists in clinical studies, reducing both false negatives and false positives compared with human reads. This raises a moral question: when a technology clearly improves outcomes, how should medicine balance clinical judgment and innovation?

The article criticises simplistic arguments on both sides of the debate. Tech evangelists sometimes imply that algorithms can replace physicians outright, while sceptics insist that medicine always requires the “human touch.” According to the authors, neither framing captures reality: most current AI systems work best as assistive tools, augmenting clinician insight rather than replacing it, and ethical use should focus on combining the strengths of humans and machines.

A key example is mammography interpretation, where AI systems in some studies have detected breast cancer more reliably than individual experts. In such situations, withholding AI assistance — knowing that it may improve detection and reduce harm — may conflict with the physician’s duty to provide the best possible care. The authors argue that ethical practice increasingly requires physicians to understand when and how to deploy AI safely and effectively, especially for tasks where evidence shows it adds value.

The debate is nuanced, however. Ethical use depends on context, oversight, and ensuring that AI recommendations are interpreted by trained clinicians rather than blindly followed. Critics of unfettered AI use warn that hallucinations, bias, or misplaced trust can lead to misdiagnosis or harm when systems are applied outside validated settings. So while not using AI in some high-performing areas may be ethically questionable, responsible integration — with human oversight, patient consent, and a clear understanding of limitations — remains essential.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
