Dear all,
A new study finds disturbing and pervasive errors among three popular models on a wide range of legal tasks.
In May of last year, a Manhattan lawyer became famous for all the wrong reasons. He submitted a legal brief generated largely by ChatGPT. And the judge did not take kindly to the submission. Describing an "unprecedented circumstance," the judge noted that the brief was littered with "bogus judicial decisions, bogus quotes and bogus internal citations." The story of the "ChatGPT lawyer" went viral as a New York Times story, prompting none other than Chief Justice John Roberts to lament the role of "hallucinations" by large language models (LLMs) in his annual report on the federal judiciary.
Yet how prevalent are such legal hallucinations, really?
https://law.stanford.edu/2024/01/11/hallucinating-law-legal-mistakes-with-large-language-models-are-...
Nah, we need sanctions (fines; temporary license suspension; refund of the client's retainer and work on a pro-bono basis) against the lawyer for presenting false evidence to the court. It doesn't matter how it was "researched"; the lawyer is the one attesting to the evidence and the one whose feet should be held to the fire. If the lawyer wishes to "subrogate" against the entity that did the "research," that is their business.