Courts are increasingly confronting a troubling new issue: AI-generated evidence, including deepfakes, is making its way into legal proceedings — and judges are warning of serious risks. While these artificially created videos, audio, or documents may seem convincing, their authenticity often can’t be taken for granted. Legal experts argue that existing evidentiary rules may not be sufficient to handle this new kind of manipulation.
One major concern is how to authenticate such AI-produced material. Generative AI can fabricate realistic-looking video or audio, making it hard to tell real from fake. The federal judiciary's evidence rules committee has discussed amending the Federal Rules of Evidence, in particular Rule 901, to explicitly address fabricated or altered digital evidence. Under one proposed amendment, a party challenging a piece of evidence as possibly AI-generated must first make a showing sufficient to support a finding that it was fabricated; the burden then shifts to the proponent to show that the evidence is more likely than not authentic.
Judges are also worried about prejudice and fairness. Even if AI-generated content is admitted, it could sway juries unfairly, especially if it is emotionally powerful or hard to distinguish from genuine evidence. Some legal scholars propose relying on existing rules, such as Rule 403, which lets courts exclude evidence whose probative value is substantially outweighed by the danger of unfair prejudice.
Finally, there is a call for proactive judicial safeguards. Experts recommend that judges and lawyers hold pretrial hearings dedicated to challenging the authenticity of AI-created evidence. They also suggest appointing neutral technical experts to evaluate suspected deepfakes and training judges to better understand the capabilities and limitations of generative AI.