A prominent law firm, Morgan & Morgan, is warning its attorneys about the risks of relying on AI-generated case law after one of its lawyers cited fake cases in a lawsuit against Walmart. The lawyer, Rudwin Ayala, used ChatGPT to supplement his research, but the AI tool returned fictitious case citations, which he included in a court filing without verifying them.
The incident illustrates the consequences of relying on AI-generated information without verification. Ayala was removed from the case, and his supervisor, T. Michael Morgan, took it over. Morgan & Morgan's chief transformation officer, Yath Ithayakumar, subsequently warned the firm's attorneys that citing fake AI-generated cases could lead to disciplinary action, up to and including termination.
This is not an isolated incident. Lawyers in several other cases have cited nonexistent AI-generated cases, drawing court sanctions and reputational damage. AI can be a useful legal research aid, but tools like ChatGPT are known to "hallucinate" plausible-looking citations, so lawyers must verify every case an AI tool supplies before relying on it in a filing.