Threat Spotlight: The Tactics Used by Attackers to Compromise AI Tools and Defenses for Enhanced SEO.
Barracuda has reported that generative AI is increasingly being exploited to create and distribute spam emails, as well as to craft highly persuasive phishing attacks. These threats are evolving and escalating, but they are not the only methods attackers are using to leverage AI. Security researchers have observed that threat actors are manipulating companies’ AI tools and tampering with their AI security features to steal information and compromise systems, thereby weakening a target’s defences.
Email attacks targeting AI assistants are particularly concerning. AI assistants and the Large Language Models (LLMs) that support them are vulnerable to abuse. Barracuda’s threat analysts have identified attacks where malicious prompts are concealed within seemingly benign emails. These malicious payloads are designed to manipulate the behaviour of the target’s AI information assistants. For instance, a recently reported vulnerability in Microsoft 365’s AI assistant, Copilot, could allow unauthorised individuals to extract information from a network. Attackers exploit the ability of internal AI assistants to collate contextual data from emails, messages, and documents when responding to queries. By sending a harmless-looking email containing a hidden malicious prompt, attackers can infect the AI assistant without any user interaction. This can lead to the AI assistant silently exfiltrating sensitive information or executing malicious commands.
Categories: Email Attacks, AI Manipulation, Security Vulnerabilities
Tags: Generative AI, Spam Emails, Phishing Attacks, AI Assistants, Malicious Prompts, Microsoft 365, Information Exfiltration, Retrieval-Augmented Generation, Email Security, Threat Actors