Generative AI has infiltrated nearly every corner of society. From our personal lives and schools to workplaces and even the court system, its impact on our day-to-day activities is inescapable.
Its ubiquity, less than four years after its public debut, isn’t surprising. The efficiency benefits are real: AI reduces the tedium of drafting and editing routine correspondence and documents, filling out complex forms, and analyzing and summarizing lengthy documents. It’s also a great tool for brainstorming, data analysis, and general research.
However, as with any technology, AI tools can be exploited by bad actors for harmful purposes. Because AI makes it so easy to manipulate or fabricate images and videos, there is a very real danger that fake media will be created and deployed with malicious intent. The implications are particularly dramatic when AI-generated deepfakes are submitted as evidence at trial.
Unfortunately, the impact of AI deepfakes on the court system isn’t hypothetical. It’s a very real issue that judges across the country grapple with every day. When deepfakes go undetected and are inadvertently entered into evidence, they have the potential to seriously degrade the integrity of our system of justice.
Examples of lawsuits involving generative AI
This has already occurred in a number of cases. Litigants and their lawyers have submitted AI-generated or AI-manipulated media, and judges have approached the issue in a variety of ways.
For example, at a recent Arizona sentencing, generative AI was used in an unusual way. The family of a road rage victim submitted an AI-generated video statement in which the deceased appeared to speak directly to the court. The statement, which was written by the victim’s sister, was admitted into evidence, and the judge factored it into the sentencing decision. The judge ultimately issued the maximum sentence, which exceeded the prosecutor’s recommendation.
Generative AI lawsuits have also touched New York this year. In one appellate argument before the New York State Supreme Court Appellate Division’s First Judicial Department, a litigant whose voice had been impaired by throat cancer attempted to present his case through a video of an AI-generated lawyer speaking on his behalf. The court quickly halted the attempt, requiring him to argue on his own. The episode raised significant questions about whether AI avatars could ever have a place in legal proceedings.
Similarly, in Washington State, AI-enhanced evidence took center stage in State v. Puloka. The defense sought to admit AI-enhanced video footage created from a bystander’s iPhone recording of the alleged crime. The judge rejected the submission, ruling that the manipulated video was unduly prejudicial and risked misleading the jury.
California courts have faced their own generative AI challenges. In a wrongful death case involving Tesla’s Autopilot system, the plaintiff’s family introduced a video of Elon Musk discussing the feature’s safety. Defense counsel objected, arguing that the video could be a deepfake. The judge, however, declined to exclude the evidence, ruling that authenticity concerns alone were insufficient to block it from consideration.
In another case in California’s Alameda County, a civil lawsuit was thrown out after plaintiffs submitted evidence later revealed to be AI-generated. The exhibits included deepfake witness videos, altered Ring camera footage, and fabricated text message screenshots. After analyzing the inconsistencies, which included mismatched lighting, unnatural speech patterns, and suspicious metadata, the judge dismissed the case with prejudice. The ruling highlighted that attempts to surreptitiously introduce AI-manipulated evidence will be met with the harshest sanctions.
The issue isn’t confined to video. In Florida, a judge wore a VR headset in a stand-your-ground case to view an AI-generated recreation of the alleged assault from the defendant’s perspective, effectively transporting the court into a simulated version of the crime scene. The experiment illustrates the evolving, and sometimes controversial, ways judges are allowing litigants to present their cases using emerging technologies.
Generative AI lawsuits have even impacted judicial campaigns. In Broward County, Florida, Judge Lauren Peffer faced an ethics complaint after allegedly circulating an AI-generated audio recording that falsely depicted her opponent making sexually explicit remarks. The case illustrates how generative AI can extend beyond the courtroom, influencing the very individuals who may preside over it.
Deepfakes and AI-generated evidence are cropping up in lawsuits across jurisdictions with increasing regularity. Courts are doing their best to keep up with the rapid influx of manipulated images and videos, but the frequency of evidence alteration will soon overwhelm the capabilities of an already underfunded and overextended court system.
Courts face a new era of truth distortion
Whether courts will find effective ways to address this growing problem remains to be seen. What is certain is that the trend will continue, and AI-generated deepfakes will only grow more sophisticated.
The impact of doctored videos extends far beyond the courtroom, blurring the line between reality and fiction. Will we weather this coming digital storm, or will it subsume popular culture and our shared understanding of what is real and what is not?
For now, the answer is uncertain. What we do know is that as generative AI lawsuits multiply, they will serve as a barometer for how our courts, and society at large, adapt to a world where truth itself is nearly impossible to confirm.
Want to understand how technology impacts the legal landscape beyond the courtroom? Explore our latest insights on AI in law to see how firms are adapting.
About the author
Nicole Black
Principal Legal Insight Strategist
MyCase and LawPay
Niki Black is an attorney, author, journalist, and Principal Legal Insight Strategist at AffiniPay. She regularly writes and speaks about the intersection of law and emerging technology. She is an ABA Legal Rebel and is listed on the Fastcase 50 and ABA LTRC Women in Legal Tech.