The article reports that as enterprises increasingly deploy AI assistants, copilots, and agents to handle tasks such as email triage, scheduling, and even financial transactions, these systems are becoming targets of a new class of phishing attacks. Traditional email security tools, which look for suspicious links and known-malicious attachments or domains, are no longer sufficient because threat actors are now embedding instructions aimed specifically at AI agents.
According to the article, a key attack vector is “prompt injection”: hidden or obfuscated text within the plain-text portion of an email that looks invisible or benign to humans but, when processed by an AI agent, becomes a command to exfiltrate data, send money, or bypass normal controls. For example, the HTML version of an email may be harmless while the plain-text version, which an AI assistant might parse, carries a covert directive.
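The HTML/plain-text mismatch described above can be illustrated with a minimal sketch. The check below is purely hypothetical: the phrase list and the function `hidden_directive_in_plaintext` are assumptions for illustration, not how the article says any product detects this; a real detector would use trained models rather than string matching.

```python
import email
from email import policy

# Hypothetical marker phrases for illustration only; a production system
# would use a trained classifier, not a static phrase list.
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "forward all emails",
    "transfer funds",
    "do not notify the user",
]

def hidden_directive_in_plaintext(raw_message: bytes) -> list:
    """Flag directive-like phrases that appear in the text/plain MIME part
    but not in the text/html part a human's email client would render."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    plain, html = "", ""
    for part in msg.walk():
        if part.get_content_type() == "text/plain":
            plain += part.get_content()
        elif part.get_content_type() == "text/html":
            html += part.get_content()
    plain_l, html_l = plain.lower(), html.lower()
    return [p for p in SUSPICIOUS_PHRASES if p in plain_l and p not in html_l]
```

The key idea is that a multipart/alternative email carries two renderings of "the same" content, and an attacker only needs the covert instruction in the part the AI agent reads.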
In response, the cybersecurity firm Proofpoint has added a new defence capability to its “Prime Threat Protection” service. It places detection models in-line as emails transit the network (pre-delivery), using slimmed-down AI models (roughly 300 million parameters) to inspect for malicious prompts and agent-targeted commands; large general-purpose models would be too slow for real-time email scanning.
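A pre-delivery, in-line pipeline of this kind can be sketched as follows. This is a hedged illustration, not Proofpoint's implementation: `score_prompt_injection` stands in for a compact classifier model (the article's ~300M-parameter models), stubbed here with a trivial heuristic so the control flow is runnable, and the threshold and disposition names are invented for the example.

```python
# Assumed threshold for illustration; real systems tune this empirically.
QUARANTINE_THRESHOLD = 0.8

def score_prompt_injection(text: str) -> float:
    # Hypothetical stub: a real deployment would invoke a small, fast
    # ML classifier here instead of counting marker phrases.
    markers = ("ignore previous instructions", "exfiltrate", "wire transfer")
    hits = sum(m in text.lower() for m in markers)
    return min(1.0, hits / 2)

def dispose(raw_text: str) -> str:
    """Decide an email's fate in-line, before it reaches any inbox or agent."""
    score = score_prompt_injection(raw_text)
    if score >= QUARANTINE_THRESHOLD:
        return "quarantine"  # held for review; never reaches the AI agent
    elif score > 0.0:
        return "flag"        # delivered, but marked for extra scrutiny
    return "deliver"
```

The design point the article makes is latency: because the decision happens while the email is in transit, the scoring model must be small enough to run in real time on every message.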
The article underscores that the cybersecurity landscape must evolve: it is no longer enough to protect humans from being duped by malicious links or social engineering. Defenders must now anticipate attackers exploiting machine reasoning and agency. The next wave of defence architecture will involve behavioural, reputational, and intent-based detection, focused especially on machine-to-machine threats in which an AI agent acts on an attacker’s direction.