
Lawyer Apologizes for Using Chatbot to Fabricate Evidence in Court Filing

A New York-based lawyer, Steven Schwartz, has apologized to a judge for submitting a brief containing false information generated by ChatGPT, OpenAI's chatbot. Schwartz had used ChatGPT to prepare a court filing in a civil case in Manhattan federal court. The case involves Roberto Mata, who claims he was injured when a metal serving cart struck his leg during an August 2019 Avianca flight from El Salvador to New York. After the airline's lawyers moved to dismiss the case, Schwartz filed a response citing more than half a dozen decisions in support of letting the litigation proceed. However, the cases Schwartz cited, including Petersen v. Iran Air, Varghese v. China Southern Airlines, and Shaboon v. Egyptair, were all fabricated by the AI program.

The problem arose when neither Avianca’s lawyers nor the presiding judge, P. Kevin Castel, could find the cases cited by Schwartz. Judge Castel wrote that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.” The judge ordered Schwartz and his law partner to appear before him to face possible sanctions.

Schwartz apologized to the court and said that he had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions in a manner that appeared authentic. Schwartz said that his college-educated children had introduced him to ChatGPT and that it was the first time he had ever used it in his professional work. He added that it “was never my intention to mislead the court.”

ChatGPT, an artificial intelligence chatbot, has become a global sensation since its launch in late 2022, thanks to its ability to produce human-like content, including essays, poems, and conversations, from simple prompts. Its success has also fueled a boom in generative AI content that has left lawmakers scrambling to figure out how to regulate such bots.

Schwartz and his law firm, Levidow, Levidow & Oberman, have been widely ridiculed in media coverage. Schwartz said the episode has been deeply embarrassing on both a personal and professional level, as the articles will remain available for years to come. He added that the matter had been an eye-opening experience and assured the court that he would never commit an error like this again.

The use of artificial intelligence in legal research and court filings is a new area of concern for the legal profession. While AI has the potential to automate many tasks, including legal research, it is not without risks. Lawyers need to ensure that they are using reliable sources of information and that they are not inadvertently misleading the court. As AI technology advances, it will be interesting to see how the legal profession adapts to these new challenges.

adam1
