Wednesday, September 27, 2023

Lawyer scammed by ChatGPT faces legal action — RT World News

A New York lawyer admits to using an AI model for research, insists he didn’t realize it could be lying

A legal brief filed by New York aviation attorney Steven Schwartz was found to be full of "false judicial decisions" and fake quotes generated by ChatGPT, an artificial intelligence language model, according to court records released last week.

Schwartz told the court in an affidavit filed Thursday that he had used ChatGPT for the first time for legal research while drafting the 10-page brief, which was intended to persuade Manhattan federal Judge P. Kevin Castel not to dismiss his case, explaining that he was "therefore unaware of the possibility that its contents could be false."

When asked, ChatGPT even assured Schwartz, a lawyer with 30 years of experience, that the six cases it cited in the legal filing were all real, he insisted. He declared that he "greatly regrets" having relied on the large language model and promised "never to do so in the future," at least not "without absolute verification of its authenticity."


Schwartz’s law firm, Levidow, Levidow & Oberman, is representing airline passenger Roberto Mata in a personal injury lawsuit against Avianca Airlines over an incident on a 2019 flight. When the airline responded by asking that the lawsuit be dismissed on the grounds that the statute of limitations had expired, Schwartz and his firm answered with the ChatGPT-assisted brief full of fabricated citations.

Avianca’s lawyers complained to the judge that the cases cited did not exist, and when Judge Castel ordered Mata’s lawyers to provide copies of the opinions in question, they did so, only for Avianca’s lawyers to counter that no such cases appeared in any real court docket or legal database.

Judge Castel responded earlier this month by ordering Schwartz and his colleagues to explain why they should not face disciplinary action for using a chatbot to write the legal brief. A hearing is scheduled for June 8.

In response, Schwartz insisted in the affidavit filed Thursday that he had conducted all of the legal research found in the dubious brief himself, and had merely used ChatGPT to "supplement" his own work. It was while "consulting" the AI model that he came across the false cases, and "ChatGPT assured the reliability of its content," he explained.

He even attached transcripts of his conversations with the chatbot, which apparently answered questions such as "Are the other cases you provided fake?" with "No, the other cases I have provided are real and can be found in reputable legal databases."

While ChatGPT’s responses to user queries often appear genuine, the large language model works as a probabilistic engine, generating strings of text based on statistical patterns in the vast corpus of text it was trained on, with no built-in notion of whether the result is true.
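That probabilistic behavior can be illustrated with a toy sketch. The following is not ChatGPT's actual architecture (which uses a neural network over tokens, not word counts); it is a deliberately tiny bigram model that picks each next word by sampling from words seen to follow the previous one in a small sample corpus, showing how fluent-looking text can be produced with no regard for factual accuracy:

```python
import random

# Tiny "training" corpus; the model only learns which word follows which.
corpus = "the court ruled the case the court dismissed the case".split()

# Count the continuations observed after each word.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_word(prev, rng):
    """Sample a statistically plausible continuation.

    Plausible means frequent in the corpus, not true or accurate.
    """
    return rng.choice(follows[prev])

# Generate a short string starting from "the".
rng = random.Random(0)
word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word, rng)
    out.append(word)

print(" ".join(out))
```

Every sentence such a model emits is grammatical-sounding recombination of its inputs; nothing in the sampling step checks whether a generated claim, or a cited court case, exists.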
