A New York lawyer is facing a court hearing after his law firm used an AI tool, ChatGPT, for legal research. The judge overseeing the case described the situation as unprecedented after discovering that the lawyer's filing cited legal cases that did not exist.
During the court proceedings, the lawyer informed the court that he had been unaware of the AI tool's potential to provide false information.
While ChatGPT can generate original text, it comes with warnings that it may produce inaccurate content.
The original case involved a man who brought a lawsuit against an airline, alleging personal injury. In an attempt to support their argument and establish legal precedent, the plaintiff's legal team submitted a brief that cited several previous court cases.
However, it was later revealed that these cited cases were entirely fabricated.
After the initial filing, the airline's legal representatives reached out to the judge, expressing their inability to locate several of the referenced cases in the plaintiff's brief.
Judge Castel, in response, issued an order demanding an explanation from the man's legal team, as he identified that six of the submitted cases appeared to be fictitious judicial decisions, complete with fabricated quotes and false internal citations.
Subsequent filings unveiled that the legal research in question had not been conducted by Peter LoDuca, the lawyer representing the plaintiff, but rather by his colleague at the same law firm. The responsible attorney, Steven A. Schwartz, boasts over three decades of legal experience and had employed ChatGPT to search for similar past cases.
In his written statement, Steven A. Schwartz clarified that Peter LoDuca had not been involved in the research process and had no knowledge of how it had been carried out.
Schwartz stated that he "greatly regrets" having relied on the chatbot for legal research. He admitted that he had never used the AI tool for legal research before and was unaware that its output could be false.
Moving forward, Schwartz vowed never to use AI to "supplement" his legal research without absolute verification of its authenticity.
Attached to the filing were screenshots displaying a conversation between Schwartz and ChatGPT, shedding further light on the situation.
In one message, Schwartz asks, "Is Varghese a real case?" in reference to Varghese v. China Southern Airlines Co Ltd, one of the cases that other lawyers were unable to locate. ChatGPT responds affirmatively, prompting Schwartz to inquire about the source of the information.
After "double checking," ChatGPT once again assures Schwartz that the case is genuine and can be found in legal reference databases such as LexisNexis and Westlaw. It also asserts that the other cases it provided to Schwartz are authentic.
Both lawyers, who are affiliated with the law firm Levidow, Levidow & Oberman, have been instructed to present an explanation for their actions at a hearing scheduled for June 8, where potential disciplinary measures will be addressed.
Since its launch in November 2022, ChatGPT has been used by millions of people. It can answer queries in natural, human-like language and mimic various writing styles. However, its knowledge is drawn from internet data only up to 2021.
The incident has raised concerns regarding the potential risks associated with artificial intelligence (AI), including the dissemination of misinformation and the presence of biases.