Randomly Quoting Ray Bradbury Didn’t Save a Lawyer From Losing a Case Over AI Errors

A New York federal judge terminated a lawsuit after a lawyer's repeated misuse of AI in legal filings, despite the attorney's attempts to explain his errors with creative prose and literary references. The filings at issue contained numerous fake legal citations, the result of relying on AI tools to verify sources rather than checking them manually, prompting Judge Katherine Polk Failla to conclude that the attorney had failed to meet his professional obligations. The judge's opinion noted that portions of the filings featured "conspicuously florid prose," including an extended quote from Ray Bradbury's Fahrenheit 451 and metaphors about ancient libraries, but found these flourishes did nothing to excuse the underlying errors.

The attorney, Steven Feldman, admitted during a hearing that he used several AI programs (including Paxton AI, vLex's Vincent AI, and Google's NotebookLM) to check and cross-reference citations, claiming difficulty accessing legal databases and limited time. However, this reliance on AI let hallucinated or inaccurate citations slip into official court documents, a serious breach given that lawyers have a duty to verify citations before submission. Judge Failla expressed frustration at Feldman's inconsistent responses and found his literary explanations and claims about personal inspiration implausible, seeing them as attempts to obscure rather than address the misuse of AI.

The judge emphasized that while using AI to assist research isn't inherently wrong, leaving verification entirely to generative tools without careful checking is unacceptable in legal practice. Failla pointed out that lawyers must still know how to verify case law independently and cannot outsource this fundamental responsibility to technology. Because Feldman continued to submit faulty documents, failed to establish safeguards, and did not take responsibility after warnings, she ruled that terminating the case and entering default judgment was appropriate, a rare and severe sanction that underscores growing judicial intolerance for sloppy, over-reliant AI use in legal work.

This case reflects a broader trend in the legal profession where judges are increasingly critical of AI-generated errors, especially hallucinated citations that do not exist. In some prior incidents, attorneys faced fines or other sanctions for similar lapses when AI invented case law or precedent, but terminating a case outright signals rising stakes for lawyers who fail to properly supervise and verify AI outputs before submitting them to courts.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
