U.S. District Judge Henry T. Wingate of Mississippi and U.S. District Judge Julien Xavier Neals of New Jersey admitted their staff used ChatGPT and Perplexity to draft court orders.
The court orders contained many errors.
Senate Judiciary Committee Chairman Chuck Grassley (R-IA) addressed the situation earlier this month.
Wingate admitted that “the opinion that was docketed on July 20, 2025, was an early draft that had not gone through the standard review process.”
Wingate also said that he implemented new rules in his office, “including a plan whereby all draft opinions, orders, and memorandum decisions undergo a mandatory, independent review by a second law clerk before submission to” him.
An intern in Neals's chambers “acted without authorization, without disclosure, and contrary to not only chambers policy but also the relevant law school policy” when using ChatGPT “to perform legal research” for a case.
As in Wingate's chambers, the early draft should not have been docketed and did not go through the standard review process.
Neals said he does not allow anyone in his office to use AI “in the legal research for, or drafting of, opinions or orders.”
Neals also said he has made changes in his office. Instead of relying on oral instructions, he now has a written policy with guidance “for appropriate AI usage.”
“Honesty is always the best policy. I commend Judges Wingate and Neals for acknowledging their mistakes and I’m glad to hear they’re working to make sure this doesn’t happen again,” Grassley said in a statement. “Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law. The judicial branch needs to develop more decisive, meaningful and permanent AI policies and guidelines. We can’t allow laziness, apathy or overreliance on artificial assistance to upend the Judiciary’s commitment to integrity and factual accuracy. As always, my oversight will continue.”
AI has become a problem within the judicial world. The fabricated citations and quotations these tools produce are called “hallucinations.”
In September, a court ordered attorney Amir Mostafavi to “pay a $10,000 fine for filing a state court appeal full of fake quotations generated by the artificial intelligence tool ChatGPT.”
This case out of New York blew my mind. An attorney got caught submitting a summary judgment brief filled with “AI-hallucinated citations and quotations.”
But then the New York Supreme Court also found out that the attorney used AI to defend his use of AI! Man oh man:
Judge Cohen’s order is scathing. Some of the fake quotations “happened to be arguably correct statements of law,” he wrote, but he notes that the fact that they tripped into being correct makes them no less frivolous. “Indeed, when a fake case is used to support an uncontroversial statement of law, opposing counsel and courts—which rely on the candor and veracity of counsel—in many instances would have no reason to doubt that the case exists,” he wrote. “The proliferation of unvetted AI use thus creates the risk that a fake citation may make its way into a judicial decision, forcing courts to expend their limited time and resources to avoid such a result.” In short: Don’t waste this court’s time.
Researchers at Stanford University found that the use of AI by attorneys and their offices has grown (emphasis mine):
Large language models have a documented tendency to “hallucinate,” or make up false information. In one highly-publicized case, a New York lawyer faced sanctions for citing ChatGPT-invented fictional cases in a legal brief; many similar cases have since been reported. And our previous study of general-purpose chatbots found that they hallucinated between 58% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice. In his 2023 annual report on the judiciary, Chief Justice Roberts took note and warned lawyers of hallucinations.
Holy moly.