Hi All,
Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.
As generative AI systems like OpenAI's ChatGPT and Google's Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products. But as the tools are given more freedom, it also increases the potential ways they can be attacked.
Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers has created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn't been seen before,” says Ben Nassi, a Cornell Tech researcher behind the work.
https://www.wired.com/story/here-come-the-ai-worms/
Regards
Caute_Cautim
Here is another follow up:
AI worm infects users via AI-enabled email clients—the Morris II generative AI worm steals confidential data as it spreads.
Researchers successfully tested the Morris II worm using two methods and published their findings.
Regards
Caute_Cautim