Stanford AI Expert's Credibility Shattered by Fake AI Sources

A Minnesota federal judge has called the credibility of a Stanford AI expert into question after he submitted an expert declaration containing fake, AI-generated sources. The expert, Jeff Hancock, co-director of Stanford University's Cyber Policy Center, was hired by Minnesota Attorney General Keith Ellison to support the state's position in a lawsuit challenging its ban on AI-generated election content.

Hancock's declaration included citations to nonexistent academic articles, which the judge deemed "particularly troubling" given Hancock's expertise in AI misinformation. The judge excluded Hancock's testimony and refused to allow a corrected version, emphasizing the importance of verifying AI-generated content in legal submissions.

This incident highlights broader concerns about the reliability of AI tools in legal research. A recent study found that even purpose-built AI legal research tools can hallucinate, producing incorrect information, up to 34% of the time. The study's authors stress the need for transparent benchmarking and rigorous evaluation of AI tools to ensure their accuracy and reliability.
