
Lawyer Admits Using ChatGPT To Draft Court Filing, Ends Up Submitting Fabricated Facts

A lawyer who relied on ChatGPT to draft a court filing for a man suing an airline is now all too familiar with the artificial intelligence tool’s flaws, including its tendency to fabricate facts.

Last year, Roberto Mata filed a lawsuit against the Colombian airline Avianca, alleging that a metal beverage trolley injured his knee during a flight to Kennedy International Airport in New York. When Avianca asked the Manhattan federal court to dismiss the lawsuit based on the statute of limitations, Mata’s attorney, Steven A. Schwartz, submitted a brief based on research conducted by ChatGPT, according to an affidavit filed by Schwartz of the law firm Levidow, Levidow & Oberman.

Lawyer Apologizes For Using ChatGPT That Produces ‘Bogus’ Citations

Although ChatGPT can be useful to professionals in a variety of fields, including the legal field, it has proven to be both limited and unreliable. In this instance, the AI fabricated nonexistent court cases and asserted that they were genuine.

Avianca’s attorneys uncovered the fabrications and informed the case’s judge, Kevin Castel of the Southern District of New York, that they were unable to locate the cases cited in Mata’s attorneys’ brief in any legal databases.

Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and Varghese v. China Southern Airlines were among the fictitious decisions.

Avianca’s attorney, Bart Banino of Condon & Forsyth, told CBS MoneyWatch, “When we didn’t recognize any of the cases in their opposition brief, we knew something was amiss,” adding that the firm surmised the brief had been produced by some kind of automated tool.

In an affidavit filed last week, Schwartz responded that he had “consulted” ChatGPT to “supplement” his legal research and that the AI tool had proven to be “an unreliable source.” He added that it was his first time using ChatGPT for work and that he “was unaware of the possibility that its content was false.”


Use Of Artificial Intelligence In Court


Schwartz claimed he even asked the AI to verify that the cases it cited were authentic, and ChatGPT confirmed that they were. Schwartz then asked the chatbot for its source.

ChatGPT’s response? “I apologize for the confusion earlier,” it replied, before asserting that the Varghese case could be found in the Westlaw and LexisNexis databases.

Judge Castel has scheduled a hearing for June 8 regarding the legal blunder and has ordered Schwartz and Levidow, Levidow & Oberman to show cause why they should not be sanctioned.
