A New York-based lawyer, Steven Schwartz, has apologized to a judge for submitting a brief containing false information generated by ChatGPT, the OpenAI chatbot. Schwartz had used ChatGPT to prepare a court filing in a civil case being heard in Manhattan federal court. The case involves Roberto Mata, who claims he was injured when a metal serving cart struck his knee during an August 2019 flight from El Salvador to New York. After lawyers for the airline, Avianca, moved to dismiss the case, Schwartz filed a response citing more than half a dozen court decisions to support his argument that the litigation should proceed. However, the cases Schwartz cited, including Petersen v. Iran Air, Varghese v. China Southern Airlines, and Shaboon v. Egyptair, were all fabricated by the AI program.

The problem came to light when neither Avianca’s lawyers nor the presiding judge, P. Kevin Castel, could find the decisions Schwartz had cited. Judge Castel wrote that “six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.” The judge ordered Schwartz and his law partner to appear before him to face possible sanctions.

Schwartz apologized to the court and said he had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions in a manner that appeared authentic. He said his college-educated children had introduced him to ChatGPT and that this was the first time he had used it in his professional work, adding that it “was never my intention to mislead the court.”

ChatGPT, an artificial intelligence program, has become a global sensation since its launch late last year, thanks to its ability to produce human-like content, including essays, poems, and conversations, from simple prompts. It has also sparked a surge of generative AI content that has left lawmakers scrambling to figure out how to regulate such bots.

Schwartz and his law firm, Levidow, Levidow & Oberman, have been publicly ridiculed in media coverage of the episode. Schwartz said the experience had been deeply embarrassing on both a personal and professional level, as the articles will remain available for years to come. He added that the matter had been an eye-opening experience and assured the court that he would never make such an error again.

The use of artificial intelligence in legal research and court filings is an emerging area of concern for the legal profession. While AI has the potential to automate many tasks, including legal research, it is not without risks: as this case shows, tools like ChatGPT can produce plausible-sounding but entirely fictitious citations. Lawyers must ensure that they rely on verifiable sources and do not inadvertently mislead the court. As AI technology advances, it remains to be seen how the legal profession adapts to these new challenges.
