AI Assistants Make Widespread Errors About the News, New Research Shows

A recent study by the European Broadcasting Union (EBU) and the BBC reveals that leading AI assistants frequently misrepresent news content. The researchers analyzed 3,000 responses from platforms including ChatGPT, Copilot, Gemini, and Perplexity and found that 45% contained at least one significant error, while 81% exhibited some form of issue, such as outdated information or inaccuracies.

The study highlighted serious sourcing errors in a third of the responses, such as missing, misleading, or incorrect attribution. Notably, 72% of responses from Gemini, Google's AI assistant, had significant sourcing issues, compared with under 25% for each of the other assistants. Examples of inaccuracies included Gemini incorrectly describing changes to a law on disposable vapes and ChatGPT reporting Pope Francis as the current Pope several months after his death.

These findings raise concerns about the reliability of AI assistants as sources of news, especially as they increasingly replace traditional search engines for information. The EBU warns that widespread inaccuracies could undermine public trust in digital information sources and, in turn, discourage democratic participation.

The report calls for AI companies to be held accountable and to improve how their AI assistants respond to news-related queries. With AI assistants becoming more prevalent, ensuring their accuracy and reliability is crucial for maintaining informed public discourse.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
