“The appellant has submitted a video for his argument,” said Justice Sallie Manzanet-Daniels. “Ok. We will hear that video now.”
On the video screen appeared a smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater.
“May it please the court,” the man began. “I come here today a humble pro se before a panel of five distinguished justices.”
“Ok, hold on,” Manzanet-Daniels said. “Is that counsel for the case?”
“I generated that. That’s not a real person,” Dewald answered.
It was, in fact, an avatar generated by artificial intelligence. The judge was not pleased.
“It would have been nice to know that when you made your application. You did not tell me that sir,” Manzanet-Daniels said before yelling across the room for the video to be shut off.
“I don’t appreciate being misled,” she said before letting Dewald continue with his argument.
Dewald later penned an apology to the court, saying he hadn't intended any harm. He didn't have a lawyer representing him in the lawsuit, so he had to present his legal arguments himself. And he felt the avatar would be able to deliver the presentation without his own usual mumbling, stumbling and tripping over words.
In an interview with The Associated Press, Dewald said he applied to the court for permission to play a prerecorded video, then used a product created by a San Francisco tech company to create the avatar. Originally, he tried to generate a digital replica that looked like him, but he was unable to accomplish that before the hearing.
“The court was really upset about it,” Dewald conceded. “They chewed me up pretty good.”
Even real lawyers have gotten into trouble when their use of artificial intelligence went awry.
In June 2023, two attorneys and a law firm were each fined $5,000 by a federal judge in New York after they used an AI tool to do legal research, and as a result wound up citing fictitious legal cases made up by the chatbot. The firm involved said it had made a “good faith mistake” in failing to understand that artificial intelligence might make things up.
Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for President Donald Trump. Cohen took the blame, saying he didn’t realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.
Those were errors, but Arizona's Supreme Court last month intentionally began using two AI-generated avatars, similar to the one that Dewald used in New York, to summarize court rulings for the public.
On the court’s website, the avatars — who go by “Daniel” and “Victoria” — say they are there “to share its news.”
Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he wasn't surprised to learn of Dewald’s introduction of a fake person to argue an appeals case in a New York court.
“From my perspective, it was inevitable,” he said.
He said it was unlikely that a lawyer would do such a thing because of tradition and court rules and because they could be disbarred. But he said individuals who appear without a lawyer and request permission to address the court are usually not given instructions about the risks of using a synthetically produced video to present their case.
Dewald said he tries to keep up with technology, having recently listened to a webinar sponsored by the American Bar Association that discussed the use of AI in the legal world.
As for Dewald's case, it was still pending before the appeals court as of Thursday.