Although this may not be security-related, I found it interesting that a lawyer would use AI to deliver his closing argument. To me it just shows another flaw, or at least a caution, in using AI. I believe this highlights a risk to many organisations in the form of possible lawsuits.
Naturally, we're still in the nascent years of AI, and companies will use it to suit their own needs -- including an attorney with a vested interest in an AI system. None of this is a surprise.
But people hire legal representation, not pseudo-intelligent technical proxies. It seems Michel will be behind the legal 8-ball for years to come as a result of hiring Kenner.
But isn't this always the case with new technology: someone tests the bounds, regardless of the risks or of where the AI actually obtained its library of information -- which could, of course, be totally flawed, biased, or simply an incorrect source of knowledge?
So when will we actually get a grip on this and put "mad" LLMs to the real test in the laboratory, rather than letting them air their views in real scenarios? Or is it just a case of a lazy human being looking for a quick way to deal with his current caseload, not wanting to do real research to find real information backed by real evidence?
What happens when Threads starts selling all those juicy personal secrets it is collecting, collating, and analysing, waiting for the moment it can influence human beings or hold them to ransom?
We have already had one major case of mass influencing of human beings for political gain. How many times do we have to tolerate this before the judicial system stands up and puts appropriate controls in place?