Judges Admit Staff Used AI in Error-Filled Court Orders

AI has been poisoning the judiciary…and its use keeps growing.

U.S. District Judge Henry T. Wingate of Mississippi and U.S. District Judge Julien Xavier Neals of New Jersey admitted that their staff used ChatGPT and Perplexity to draft court orders.

The court orders contained many errors.

Senate Judiciary Committee Chairman Chuck Grassley (R-IA) addressed the situation earlier this month.

Wingate admitted that “the opinion that was docketed on July 20, 2025, was an early draft that had not gone through the standard review process.”

Wingate also said that he implemented new rules in his office, “including a plan whereby all draft opinions, orders, and memorandum decisions undergo a mandatory, independent review by a second law clerk before submission to” him.

An intern in Neals’ chambers “acted without authorization, without disclosure, and contrary to not only chambers policy but also the relevant law school policy” when they used ChatGPT “to perform legal research” regarding a case.

As in Wingate’s case, the docketed opinion was an early draft that should not have been filed and had not gone through the standard review process.

Neals said he does not allow anyone in his office to use AI “in the legal research for, or drafting of, opinions or orders.”

Neals also said he has made changes in his office: instead of relying on oral instructions, his chambers now has a written policy with guidance “for appropriate AI usage.”

“Honesty is always the best policy. I commend Judges Wingate and Neals for acknowledging their mistakes and I’m glad to hear they’re working to make sure this doesn’t happen again,” Grassley said in a statement. “Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law. The judicial branch needs to develop more decisive, meaningful and permanent AI policies and guidelines. We can’t allow laziness, apathy or overreliance on artificial assistance to upend the Judiciary’s commitment to integrity and factual accuracy. As always, my oversight will continue.”

AI has become a problem within the judicial world. The fabricated citations, quotations, and facts these tools produce are called “hallucinations.”

In September, a court ordered attorney Amir Mostafavi to “pay a $10,000 fine for filing a state court appeal full of fake quotations generated by the artificial intelligence tool ChatGPT.”

This case out of New York blew my mind. An attorney got caught submitting a summary judgment brief filled with “AI-hallucinated citations and quotations.”

But then the New York Supreme Court also found out that the attorney used AI to defend his use of AI! Man oh man:

Judge Cohen’s order is scathing. Some of the fake quotations “happened to be arguably correct statements of law,” he wrote, but he notes that the fact that they tripped into being correct makes them no less frivolous. “Indeed, when a fake case is used to support an uncontroversial statement of law, opposing counsel and courts—which rely on the candor and veracity of counsel—in many instances would have no reason to doubt that the case exists,” he wrote. “The proliferation of unvetted AI use thus creates the risk that a fake citation may make its way into a judicial decision, forcing courts to expend their limited time and resources to avoid such a result.” In short: Don’t waste this court’s time.

Stanford University discovered that the use of AI by attorneys and their offices has grown:

Large language models have a documented tendency to “hallucinate,” or make up false information. In one highly-publicized case, a New York lawyer faced sanctions for citing ChatGPT-invented fictional cases in a legal brief; many similar cases have since been reported. And our previous study of general-purpose chatbots found that they hallucinated between 58% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice. In his 2023 annual report on the judiciary, Chief Justice Roberts took note and warned lawyers of hallucinations.

Holy moly.

Comments

A problem in the judicial world!? It’s a problem for the ENTIRE world.

Launch these data centers into orbit.

If an attorney is found to have used AI in their legal pleadings, motions, or anything else concerning a case, and they allow mistakes or so-called hallucinations to get through to the final draft, they should first of all be forced to refund all money to their client. Secondly, they should lose their law license.

Zero consequences

As always

I totally understand using AI to draft legal documents, but it’s crazy that the author (law clerk or whoever) doesn’t check each and every citation to make sure it exists, stands for what it says, and that any quote is accurate. In my day, we Shepardized every case to make sure it was still good law. This was after spending hours researching the case. If someone can “draft” it in minutes, they should have PLENTY of time to verify it. It’s mind-boggling that people don’t do that.

    sisyphus in reply to sisyphus. | October 24, 2025 at 3:29 pm

    Also, it isn’t that hard to write a prompt to protect against this.
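
    For example, something along these lines (a hypothetical instruction, not a guaranteed fix, since models can still hallucinate despite it): “Cite only authorities you can quote verbatim from the sources I provide. For each citation, include the quoted passage and where it appears. If you cannot verify that a case exists, mark it UNVERIFIED rather than citing it.” You still have to check everything yourself, but it may cut down on confident fabrication.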

    henrybowman in reply to sisyphus. | October 24, 2025 at 4:37 pm

    “check each and every citation to make sure it exists, stands for what it says, and any quote is accurate.”

    One of my most frequent uses of AI is “find me a short article online that discusses issue X.” I used it yesterday to chase down a quick link to the “King Solomon’s riddles” tale. I am bumfuddled by how hard I have to work to vet the crap the AI gives me. I have to vet 7 to 10 URLs before I find one even remotely suitable.

    “Here’s an article (URL) that discusses everything you want.”
    “That page is 404.”
    “Oh, I’m sorry, you’re right. Anyway, here’s another one and I’ve checked that it’s live.”
    “That one is a biography of somebody named SHABBETHAI ẒEBI B. MORDECAI and has nothing to do with my query. Try again.”
    “So it is! How did that happen? Anyway, here’s the first URL again, in hopes you forgot you already rejected it…”

    The pattern of “Here’s exactly what you want,” “No, it clearly isn’t,” “Ah, yes, obviously it isn’t!” gets sickening after a while. It’s like cross-examining Jon Lovitz.

    Concise in reply to sisyphus. | October 24, 2025 at 8:59 pm

    Well, they never learned to do that on paper, so why would they start caring now?

    Corky M in reply to sisyphus. | October 25, 2025 at 12:20 pm

    So why not just do the work up front, and then, if you wonder whether you missed something, run the AI. Stop outsourcing intelligence for information assimilation. The dumbing down of mankind through AI is a crime against humanity.

AI has value in finding things relevant to whatever you’re researching, but it has to be handled the same way you’d handle manual research in a library in the old days. And you most certainly cannot allow it to create a finished product from whole cloth, like RFK’s team did with the MAHA report.

When an entire industry dispenses with NATURAL intelligence, all the way up to the Supreme Court, they have to replace it with SOMETHING.

AI is a tool that damages brains – it removes the necessity to THINK. Humans are different. Does AI take into account unusual situations? Circumstances? If I were assigning or teaching anything that requires thinking, AI would not be allowed. However, would I be able to “catch” it? I don’t know.

Let me guess . . . when AI makes an error, it’s always in favor of the leftist-friendly outcome. Just a guess.

I love how I throw a question through the search engine about a particular command on Linux and ask specifically about Debbie and bass systems and it spits out a bunch of Pac-Man commands

    Ironclaw in reply to Ironclaw. | October 24, 2025 at 11:51 pm

    Wow, that’s supposed to be Debian, not Debbie and bass. I really hate AI, especially the very limited kind used for voice input.

When lawyers do this, judges sanction them, hard. And they don’t accept the excuse that an intern did it. These judges should be given the same penalty a lawyer would have gotten in their place.