
By the time Jerome Dewald’s video began to play, the courtroom was already intrigued. Then a youthful man in a crisp sweater addressed five New York appellate judges in a polished, confident voice. “May it please the court,” he began. But he wasn’t a lawyer. In fact, he wasn’t even real.
The man was a computer-generated avatar created by Dewald himself — a 74-year-old entrepreneur representing himself in an employment lawsuit. Dewald, who had received permission to submit a video as part of his oral argument, opted not to appear on camera. Instead, he used artificial intelligence to generate a synthetic speaker.
Justice Sallie Manzanet-Daniels cut the performance short. “Ok, hold on,” she said. “Is that counsel for the case?” When Dewald admitted, “I generated that. That’s not a real person,” the courtroom atmosphere shifted. Visibly irritated, the judge ordered the video turned off. “It would have been nice to know that when you made your application,” she said. “I don’t appreciate being misled.”
The AI Lawyer
Dewald’s decision, he later explained, was born less of mischief than of desperation. Representing himself, he had struggled in the past to speak clearly under pressure. “My intent was never to deceive but rather to present my arguments in the most efficient manner possible,” he wrote in a letter of apology to the court. “However, I recognize that proper disclosure and transparency must always take precedence.”
Originally, Dewald said, he had tried to create an avatar that resembled him. But technical difficulties forced him to rely on a generic synthetic male — one that looked decades younger and far more composed. In an interview with the Associated Press, he described the aftermath simply: “The court was really upset about it. They chewed me up pretty good.”
The hearing, held on March 26 in the New York State Supreme Court Appellate Division’s First Judicial Department, might have lasted only minutes — but it instantly became part of a larger conversation about the use of AI in courtrooms. And Dewald wasn’t the first to face backlash for such choices.
A Pattern of AI Missteps in Law
In recent years, AI has repeatedly slipped into legal proceedings — not always smoothly. In 2023, two New York attorneys were fined $5,000 each after citing fake court cases generated by ChatGPT in legal briefs. The tool, intended to assist with legal research, had “hallucinated” entire decisions. The lawyers called it a “good faith mistake.”
Not long after, Michael Cohen, once Donald Trump’s personal attorney, supplied his lawyer with fictitious case citations generated by Google Bard (now Google Gemini), which made their way into a court filing. He, too, claimed ignorance of the AI’s creative tendencies.
Still, not all court uses of AI have been accidents. In Arizona, the state’s Supreme Court now deliberately employs two AI-generated avatars named “Daniel” and “Victoria” to help summarize rulings for the public. On the court’s website, the digital presenters say they are there “to share its news.”
This growing presence of AI in legal contexts raises pressing questions: Where do convenience and clarity end, and where does deception begin? Can non-lawyers be expected to know the risks of synthetic speech, or should courts offer better guidance?
Daniel Shin, an adjunct professor at William & Mary Law School and assistant director of research at the Center for Legal and Court Technology, sees Dewald’s act not as an outlier, but a warning. “From my perspective, it was inevitable,” he said. Unlike professional attorneys — bound by ethical rules and the threat of disbarment — self-represented litigants often navigate court procedures alone.
Speaking of AI tools, Shin told The New York Times: “They can still hallucinate — produce very compelling looking information.” What seems helpful on the screen can prove legally dangerous.
A Fine Line
In hindsight, Dewald’s intentions seem more misguided than malicious. He believed the avatar could deliver his argument more fluently than he could himself. During the actual hearing, after the video was cut short, he resumed his case with visible discomfort — pausing frequently, reading from his phone, and stammering through his words.
Dewald had recently attended a webinar on AI in law hosted by the American Bar Association. But the technology has moved faster than courtroom norms, and Dewald’s sense of innovation collided with a judiciary still working out what “acceptable” looks like in the age of synthetic speech.
His case remains pending. The argument — real or artificial — may yet find its way into a ruling. But the broader trial is already underway: one over how justice should look, sound, and act in the age of artificial intelligence.