Kyaw_Myo_Oo
Contributor II

Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive

Dear all,

A new study finds disturbing and pervasive errors among three popular models on a wide range of legal tasks.

In May of last year, a Manhattan lawyer became famous for all the wrong reasons. He submitted a legal brief generated largely by ChatGPT, and the judge did not take kindly to the submission. Describing “an unprecedented circumstance,” the judge noted that the brief was littered with “bogus judicial decisions, bogus quotes, and bogus internal citations.” The story of the “ChatGPT lawyer” went viral as a New York Times story, prompting none other than Chief Justice John Roberts to lament the role of “hallucinations” of large language models (LLMs) in his annual report on the federal judiciary.

Yet how prevalent are such legal hallucinations, really?

https://law.stanford.edu/2024/01/11/hallucinating-law-legal-mistakes-with-large-language-models-are-...


Kyaw Myo Oo
Manager, CB BANK PCL
CCIE #58769 | PCNSE | CCSE | CISSP | PMP
3 Replies
Early_Adopter
Community Champion

We need a database of legal cases destroyed by litigants misusing AI…
denbesten
Community Champion

Nah, we need sanctions (fines; temporary license suspension; refund of the client's retainer and work on a pro bono basis) against the lawyer for presenting false evidence to the court. It doesn't matter how it was "researched"; the lawyer is the one attesting to the evidence and the one whose feet should be held to the fire. If the lawyer wishes to "subrogate" against the entity that did the "research," that is their business.

Early_Adopter
Community Champion

A malpractice suit or two, I'll wager.

Mind you, one doesn’t preclude the other…