Hi All,
Should we put this under Threats, Tech Talk, or even Privacy? It crosses a lot of discussions:
AI models, the subject of ongoing safety concerns about harmful and biased output, pose a risk beyond content emission. When wedded with tools that enable automated interaction with other systems, they can act on their own as malicious agents.
Computer scientists affiliated with the University of Illinois Urbana-Champaign (UIUC) have demonstrated this by weaponizing several large language models (LLMs) to compromise vulnerable websites without human guidance. Prior research suggests LLMs can be used, despite safety controls, to assist with the creation of malware.
https://www.theregister.com/2024/02/17/ai_models_weaponized/
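For anyone wondering what "wedding an LLM with tools" looks like in practice, here is a minimal sketch of the generic observe-decide-act agent loop the article is describing: the model is handed a goal and a set of callable tools, and the harness executes whatever tool the model picks and feeds the output back. To be clear, this is an illustrative pattern only, not the UIUC researchers' actual setup; the `call_llm` stub, the `fetch_page` tool, and the message format are all placeholder assumptions, with a benign fetch standing in for the browsing capabilities an agent framework would expose.

```python
import urllib.request

# --- Placeholder tool the agent is allowed to call. ---
# A benign "fetch page" stands in for whatever HTTP/browsing
# capability a real agent framework might expose; the name is illustrative.
def fetch_page(url: str) -> str:
    """Fetch a URL and return the first 2000 characters of the body."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read(2000).decode("utf-8", errors="replace")

TOOLS = {"fetch_page": fetch_page}

def call_llm(history: list[dict]) -> dict:
    """Stub for a chat-completion call. A real agent would send `history`
    to a model API and parse the model's chosen action; this placeholder
    just stops immediately so the sketch stays self-contained."""
    return {"action": "stop", "reason": "stub model: no real LLM attached"}

def run_agent(goal: str, max_steps: int = 5) -> None:
    """Generic agent loop: the model picks a tool, the harness runs it,
    and the tool's output is appended to the context for the next step."""
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        decision = call_llm(history)
        if decision["action"] == "stop":
            print("Agent stopped:", decision["reason"])
            return
        tool = TOOLS[decision["action"]]
        result = tool(**decision.get("args", {}))
        # Feed the tool output back so the model can plan its next move.
        history.append({"role": "tool", "content": result})
    print("Step budget exhausted.")

if __name__ == "__main__":
    run_agent("Summarise the headline at https://example.com")
```

The point of the pattern is that the loop, not the human, decides what to do next: once the model can trigger real side effects through its tools, the safety question shifts from "what text can it emit?" to "what actions can it take?"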
Regards
Caute_Cautim
Thank you for sharing this information with us, @Caute_cautim.