
Lawyer Apologizes for Using Chatbot to Fabricate Evidence in Court Filing

A New York-based lawyer, Steven Schwartz, has apologized to a judge for submitting a brief containing false information generated by OpenAI's chatbot ChatGPT. Schwartz had used ChatGPT to prepare a court filing in a civil case being heard in Manhattan federal court. The case involves Roberto Mata, who claims he was injured when a metal serving cart struck his leg during an August 2019 flight from El Salvador to New York. After lawyers for the airline, Avianca, moved to dismiss the case, Schwartz filed a response citing more than half a dozen court decisions to argue that the litigation should proceed. However, the cases Schwartz cited, including Petersen v. Iran Air, Varghese v. China Southern Airlines, and Shaboon v. Egyptair, were all fabricated by the AI program.

The problem arose when neither Avianca’s lawyers nor the presiding judge, P. Kevin Castel, could find the cases cited by Schwartz. Judge Castel wrote that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.” The judge ordered Schwartz and his law partner to appear before him to face possible sanctions.

Schwartz apologized to the court and said he had no idea that ChatGPT was capable of fabricating entire case citations and judicial opinions in a manner that appeared authentic. He said his college-educated children had introduced him to ChatGPT and that this was the first time he had ever used it in his professional work, adding that it "was never my intention to mislead the court."

ChatGPT, an artificial intelligence chatbot, has become a global sensation since its launch late last year, thanks to its ability to produce human-like content, including essays, poems, and conversations, from simple prompts. However, it has also fueled a boom in generative AI content that has left lawmakers scrambling to figure out how to regulate such bots.

Schwartz and his law firm, Levidow, Levidow & Oberman, have been publicly ridiculed in media coverage of the episode. Schwartz said this has been deeply embarrassing on both a personal and professional level, as the articles will remain available for years to come. He added that the matter had been an eye-opening experience and that he could assure the court he would never make an error like this again.

The use of artificial intelligence in legal research and court filings is a new area of concern for the legal profession. While AI has the potential to automate many tasks, including legal research, it is not without risks: current chatbots can generate plausible-sounding but entirely fictitious citations. Lawyers must ensure that they rely on verified sources of information and independently confirm any authority before presenting it to a court. As AI technology advances, it remains to be seen how the legal profession will adapt to these new challenges.
