A new NPR report highlights an innovative use of artificial intelligence to make U.S. Supreme Court opinions more accessible and engaging for the public. Traditionally, the Court’s oral arguments and decision announcements could be witnessed only by those inside the courtroom, with audio released months later. But a project called On The Docket is using AI to generate visual avatar videos of justices delivering their own words, paired with existing audio recordings. This represents a creative effort to bring the courtroom experience to viewers outside the Supreme Court building.
The technology builds on the long-running Oyez Project, a digital archive started in 1996 that collects audio of Supreme Court proceedings dating back decades. Since the COVID-19 pandemic, the Court has begun broadcasting oral arguments live, but bench announcements of decisions are still not made publicly available the same day. By syncing real audio with AI-generated visuals of the justices — created using photos and video of their public appearances — On The Docket aims to give people something closer to watching a Court session in action.
Project developers faced technical and ethical challenges. Initial AI outputs included unnatural movements or “uncanny” animations, and the team deliberately chose to cartoonize the avatars slightly and label the videos clearly as AI-generated so viewers understand what is real (the audio) and what is synthetic (the visuals). This approach reflects broader debates about how AI should be used responsibly in portraying public institutions.
The first avatar visuals include a rendition of Chief Justice John Roberts announcing a major 6-3 decision, with accompanying segments of Justice Sonia Sotomayor’s dissent. While the Supreme Court itself hasn’t officially endorsed this AI project — and has historically restricted audio and video access — proponents argue that such initiatives can help democratize understanding of judicial processes by making key opinions and dissents easier for the public to follow.