https://securityboulevard.com/2024/03/researchers-give-birth-to-the-first-genai-worm/

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUE1lw6baMa8N36NwCXP51sMVYlUWeejn_jPJoEl73VskqY-5LnHfbg0gZro49IZr82OlJDGSLHProIYqlRWMve6sS903Bpc7aVUlzxHbu9qoyAjM_8Jo3wayYnNOxoCBqpmRWCSrwDteev_6hy-zXkmUCN4ElQFCopJP1f0wYwf3qDrO4yOXJyjyQ6l2Z/w400-h225/AI_cybersecurity_worm_that_infects_email_propag.jpg

<aside> 💡 AI Generated Summary

Researchers from Cornell Tech, the Israel Institute of Technology, and Intuit have created the first generation AI worm, named 'Morris II', that can steal data, propagate malware, and spread via email. The worm targets AI apps and AI-enabled email assistants, embedding adversarial data into malicious emails to manipulate systems, propagate messages, perform malicious activities, and exfiltrate sensitive data. The development highlights the need for understanding and mitigating cybersecurity risks associated with disruptive technologies like AI.

</aside>

It was bound to happen — researchers have created a 1st generation AI worm that can steal data, propagate malware, and spread via email.

Ben Nassi from Cornell Tech, Stav Cohen from the Israel Institute of Technology, and Ron Bitton from Intuit created the self-replicating worm and bestowed the name ‘Morris II’ after the notorious worm that infected systems in the 1980’s. Their creation targets AI apps and AI-enabled email assistants. They published a research paper and video showing methods to steal data and affect others email systems.

This worm basically embeds adversarial type data into malicious email that manipulates victim’s systems to propagate messages, perform malicious activity, and exfiltrates sensitive data.

Strategically, the crux of this evolving problem is based in the fact that the pursuit of more functionality and subsequent value of GenAI and LLM systems, they require more access and permissions to do things in the digital ecosystems they inhabit. So, they become an incredibly powerful tool for both good and also for bad if instructed by malicious parties.

So, take a breath! This is just the beginning!

We must all understand that we can seize the great benefits of disruptive technologies, like Artificial Intelligence, but we must also be responsible to proactively understand and mitigate the accompanying cybersecurity risks!