“I am not a cat.”
You may remember this famous line from the early days of the pandemic. A lawyer appeared on Zoom as a cat because he couldn’t figure out how to turn off the filter. It was a minor comedic oops that did no harm and provided a good deal of amusement in a very stressful time.
We can contrast this with Mata v. Avianca, a 2023 case in the Southern District of New York, an oops that was neither minor nor comedic. In that case, lawyers submitted a brief filled with very convincing judicial opinions, which had the unfortunate drawback of being completely imaginary: ChatGPT had helpfully made up, a.k.a. hallucinated, answers to their questions. Similar situations occurred in 2025, when a partner at the firm Ellis George submitted a filing containing imaginary citations, and when lawyers for Anthropic submitted hallucinated citations in a copyright case.
Needless to say, the judges in these cases were not amused. Perhaps if the lawyers had been cats, they could have gotten away with the patented feline “I planned that” look. As it is, they were all sanctioned by the various courts involved. As none of the judges said, “Not cool, cats.”
At the same time, AI can sort through court cases at incredible speed, identify obscure precedents, and summarize massive amounts of information. It just may make mistakes. Very confident mistakes. The problem with AI in law, in a nutshell, is that the legal profession is built on trust and accuracy. Having an overly helpful AI make things up might play well on LA LawGPT, but it only erodes judicial trust in a real court. Making AI useful requires addressing the hallucination problem, because judges are only becoming less tolerant of AI fantasies.
Managing Hallucinations With SWIRL
Fortunately, SWIRL makes addressing hallucinations a manageable problem.
Hallucinations occur primarily when an AI is asked a question and draws the answer from its own internal knowledge. Like a college student taking a final exam, when it doesn’t know the answer, it writes something that sounds plausible. Unlike a college student, what the AI writes frequently sounds very convincing, even to someone knowledgeable on the topic.
SWIRL addresses hallucinations in two key ways:
First, SWIRL gathers information in response to a query by simultaneously searching every data source the user is authorized to access (e.g., iManage, M-Files, etc.). Because SWIRL piggybacks on existing security protocols, law firms can maintain ethical walls between lawyers working for different clients. SWIRL then feeds that information to the AI and instructs it to answer questions based only on the information provided. This technique, known as Grounded RAG, dramatically reduces the likelihood of hallucinations.
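For readers who want to see the shape of the pattern, here is a minimal, hypothetical sketch of Grounded RAG. It is not SWIRL’s actual code: the source names, the `search_source()` connector, and the prompt wording are all placeholders standing in for whatever connectors and permissions a real deployment provides.

```python
from concurrent.futures import ThreadPoolExecutor

def search_source(source: str, query: str, user: str) -> list[dict]:
    """Hypothetical connector: return passages this user is authorized to see."""
    # A real connector would call the repository's API and respect its ACLs.
    return [{"source": source, "url": f"https://{source}/doc/123", "text": "..."}]

def build_grounded_prompt(query: str, passages: list[dict]) -> str:
    """Assemble a prompt that tells the model to answer only from the passages."""
    context = "\n\n".join(
        f"[{i + 1}] ({p['url']})\n{p['text']}" for i, p in enumerate(passages)
    )
    return (
        "Answer the question using ONLY the numbered passages below. "
        "Cite passages by number. If the answer is not in the passages, say so.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

def grounded_query(query: str, user: str, sources: list[str]) -> str:
    # Query every authorized source in parallel, then ground the model's answer.
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda s: search_source(s, query, user), sources)
    passages = [p for batch in batches for p in batch]
    return build_grounded_prompt(query, passages)  # send this to the LLM of your choice

if __name__ == "__main__":
    print(grounded_query("Which precedents cover airline liability?", "jdoe",
                         ["imanage.example", "m-files.example"]))
```

The key design point is that the model is told to answer from the supplied passages and to say so when they don’t contain the answer, rather than improvising from its own memory.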
Second, SWIRL provides deep-linked citations in its response. If the AI claims a particular court case exists, there’s a hyperlink to the specific document from which that information was drawn. Lawyers can quickly check SWIRL’s work, much as they might check the work of a junior associate.
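And a correspondingly minimal sketch of deep-linked citations, again illustrative rather than SWIRL’s implementation: the passages list and URLs are hypothetical, and the idea is simply that every citation in the answer carries a link back to its source document.

```python
import re

# Passages in the same shape as the retrieval sketch above (hypothetical URLs).
passages = [
    {"url": "https://imanage.example/doc/123", "text": "..."},
    {"url": "https://m-files.example/doc/456", "text": "..."},
]

def link_citations(answer_text: str, passages: list[dict]) -> str:
    """Turn [n] markers in the model's answer into markdown links to sources."""
    def to_link(match: re.Match) -> str:
        idx = int(match.group(1)) - 1
        if 0 <= idx < len(passages):
            return f"[[{idx + 1}]]({passages[idx]['url']})"
        return match.group(0)  # leave references that can't be resolved untouched
    return re.sub(r"\[(\d+)\]", to_link, answer_text)

print(link_citations("The court dismissed the claim [2].", passages))
# -> The court dismissed the claim [[2]](https://m-files.example/doc/456).
```

Because each claim points at a specific document, verifying the answer takes a click, not a research project.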
AI can’t do your thinking for you (fortunately!). But with SWIRL, it can do a lot of the drudge work, so you have more time and energy to think.
Be a cool cat. Contact us for a demo or download our white papers.