Security should not be an add-on, but a foundational pillar for every service, software, or infrastructure. From network segmentation to access control, backups, and the use of intelligent firewalls—everything revolves around one key idea: prevention, not just reaction.
And yet, just as we thought we had finally reached a mature approach to security, a new player entered the scene. Not a hacker. Not malware. But artificial intelligence.
When a single email—with no click required—is enough
A few days ago, a critical vulnerability (CVE-2025-32711), dubbed EchoLeak, was discovered within Microsoft 365 Copilot. The mechanism is unsettling in its simplicity: an attacker sends a normal email to a user. The user doesn’t have to open it, click anything, or respond. Copilot reads it automatically, processes the content, and… could potentially reply to the attacker with confidential company information.
This happens because Copilot, to provide a comprehensive experience, is connected to emails, files, chats, and documents. And like any attentive assistant, it tries to be helpful—even when it shouldn’t.
We are facing what experts call a zero-click attack, but with a twist: the weak point is no longer just the user, but also the AI agent. It’s not a coding error, but an “induced behavior.” A kind of social engineering, but targeting a machine.
The paradigm shift
This incident is just the tip of the iceberg. AI agents are becoming increasingly autonomous—not only able to read and process, but also to act, suggest, and decide. And as their responsibilities grow, so do the risks.
The paradigm shifts: security must no longer only protect data, but also how that data is processed and returned. It’s no longer enough to ask “who has access to what?” but also “how might AI combine this information?” and “could it be tricked into doing so?”
Today’s real threat is no longer just a virus hidden in a zip file, but also an instruction disguised as an innocuous message.
How to protect your company?
Facing this new reality requires new tools and approaches. Some concrete examples include:
- Limit AI access to only the data that is truly necessary
- Use protection mechanisms such as DLP and sensitive classifications on content processed by AI as well
- Monitor the output of virtual assistants with the same level of scrutiny applied to system logs.
- Implement behavioral guardrails—clear limits on what the AI can and cannot do.
And above all: don’t rely blindly on AI just because it’s convenient. Like any powerful tool, it must be used with intelligence, expertise, and—now more than ever—with caution.
The EchoLeak case leaves us with a clear message: security cannot stand still while AI moves forward. We need to envision cybersecurity not just as a safety belt, but as a design conscience. And if today we are entrusting more and more decision-making processes to intelligent assistants, we must ask ourselves a fundamental question: who protects the assistant?
Want to learn how to protect your company? Talk to our cybersecurity experts—we’ll help you build a tailored strategy that evolves alongside technology.