Virginia News Press


AI Is Resetting the Threat Curve for Phishing Attacks

May 13, 2026  Twila Rosenbaum

Artificial intelligence is fundamentally altering the cybersecurity threat landscape, and phishing attacks are undergoing a dramatic transformation. Once reliant on manual processes and generic templates, phishing has evolved into a continuous, autonomous operation that leverages AI at every stage. Eyal Benishti, CEO of IRONSCALES, a leader in AI-driven email security, explains that modern attacks are now highly contextual and can execute every stage, from reconnaissance to delivery, without any human involvement. In a recent interview, he highlighted how this shift is resetting the threat curve, making it imperative for organizations to rethink their defensive strategies.

Key Facts

  • AI-driven phishing attacks are now fully autonomous, operating across multiple channels including email, voice, and video.
  • Benishti terms this evolution "Phishing 3.0" — characterized by multi-step, multi-channel, and fully automated attack chains.
  • Unlike traditional threats that rely on known malicious indicators, these new attacks center on intent and behavioral manipulation.
  • Organizations must adopt AI agents to move from reactive defense to continuous threat anticipation.

The Evolution of Phishing: From 1.0 to 3.0

Phishing attacks have been a staple of cybercrime for decades. The early days, often called Phishing 1.0, involved mass emails with obvious spelling errors and generic requests for login credentials. Attackers relied on volume rather than sophistication, hoping that even a tiny percentage of recipients would fall for the ruse. With the advent of better awareness training and spam filters, these rudimentary attacks became less effective.

Phishing 2.0 emerged with the use of social engineering and targeted spear-phishing. Attackers began researching individuals within organizations, crafting personalized messages that mimicked trusted contacts or services. This approach significantly increased success rates, as victims were less likely to question an email that appeared to come from a colleague or a known vendor. However, these attacks still required considerable manual effort — researching targets, writing believable messages, and managing delivery.

Now, Phishing 3.0 represents a quantum leap. AI automates every step of the attack lifecycle. Machine learning algorithms scrape social media and corporate websites to build detailed profiles of potential victims. Natural language generation creates convincing emails, voice messages, and even video clips that mimic the speech patterns and appearance of real people. The attack can adjust its tactics in real time based on the victim's responses, all without human intervention. As Benishti notes, phishing can now be a voice or even a face on the screen, eroding trust across digital interactions.

How AI Powers the New Wave

AI enables attackers to operate at scale while maintaining high levels of personalization. Generative models like large language models (LLMs) can craft emails that are grammatically perfect and contextually relevant. For example, an attacker might use AI to analyze a company's internal communications from a breached account, then generate a message that mimics the writing style of the CEO, requesting an urgent wire transfer.

Deepfake technology adds another dimension. Voice cloning allows attackers to leave voicemails that sound exactly like a manager, instructing an employee to click a link or transfer funds. Video deepfakes can create realistic avatars that appear on video calls, further blurring the line between legitimate and malicious communications. Benishti emphasizes that these multimodal attacks exploit multiple channels simultaneously, making them harder to detect and more persuasive.

From Detection to Behavioral Manipulation

Traditional cybersecurity defenses rely on known threat indicators — malicious URLs, phishing kits, suspicious attachments. But AI-driven attacks rarely use these hallmarks. Instead, they focus on intent. As Benishti explains, the core question becomes not "Is this email malicious?" but "Can we make someone do something they're not supposed to do?" This shift from threat detection to behavioral manipulation fundamentally changes the defense paradigm.

To combat this, organizations must adopt the same technologies that attackers use. AI agents can monitor user behavior in real time, identifying anomalies that suggest a successful phishing attempt. For instance, if an employee who never processes payments suddenly initiates a wire transfer, an AI agent can flag the activity and block it until verified. These agents can also simulate attacks to train users, learning which social engineering tactics are most effective and adapting defenses accordingly.
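The kind of behavioral check described above can be sketched as a simple rule: compare an action against a user's historical baseline and hold anything novel for verification rather than letting it through. The sketch below is a minimal illustration of that idea; the class, function, and action names are all hypothetical, not part of any vendor's product.

```python
from dataclasses import dataclass, field

@dataclass
class UserBaseline:
    """Historical behavior observed for one user (hypothetical model)."""
    user: str
    usual_actions: set = field(default_factory=set)  # actions seen before

def review_action(baseline: UserBaseline, action: str) -> str:
    """Allow actions matching the baseline; hold novel ones for verification."""
    if action in baseline.usual_actions:
        return "allow"
    # Novel behavior, e.g. a first-ever wire transfer: hold it rather than
    # silently blocking, so a human can confirm before anything moves.
    return "hold-for-verification"

# An employee who never processes payments suddenly initiates a transfer
accountant = UserBaseline("j.doe", {"read_email", "edit_docs"})
print(review_action(accountant, "initiate_wire_transfer"))  # hold-for-verification
print(review_action(accountant, "read_email"))              # allow
```

A production system would score many signals (time of day, device, counterparty) rather than a single set-membership test, but the allow/hold decision structure is the same.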

Background on IRONSCALES and Eyal Benishti

IRONSCALES has established itself as a leader in AI-driven email security, focusing on detecting and mitigating advanced phishing threats. The company's platform uses machine learning to analyze email content, sender behavior, and user interactions, providing real-time protection against spear-phishing, business email compromise, and credential harvesting.

Eyal Benishti brings deep technical expertise to the role of CEO. With a background in computer science and mathematics, he has worked as a security researcher, reverse engineer, and malware analyst. His experience includes analyzing sophisticated cyber threats and developing countermeasures. Benishti is also a member of the Forbes Technology Council, where he shares insights on cybersecurity and AI. His leadership at IRONSCALES reflects a commitment to advancing defenses against modern social engineering and AI-driven threats.

The Multi-Channel Challenge

One of the most concerning aspects of Phishing 3.0 is its ability to operate across multiple communication channels simultaneously. An attack might start with a spear-phishing email, follow up with a phone call using a cloned voice, and then escalate to a fake video conference request. This multi-step, multi-channel approach makes it exceedingly difficult for users to maintain situational awareness. Trust is eroded not just in emails but in all digital interactions.

Benishti highlights that the attack surface now includes voice, video, and even messaging platforms like Slack or Teams. Attackers can use AI to monitor these channels for sensitive information, then craft attacks that seem to originate from within the organization. For example, an AI agent might observe a team discussing a deadline, then send a spoofed email that appears to come from a project manager asking for credentials to access a shared document.

Historical Context and Industry Trends

The rise of AI-driven phishing parallels broader trends in cybersecurity. The 2025 State of Malware report noted a significant increase in AI-generated malware and phishing kits available on dark web marketplaces. Deepfake technology, once expensive and accessible only to nation-states, is now cheap and widely available. This democratization of advanced cyber tools means even small criminal groups can launch sophisticated attacks.

Industry events like RSAC 2026 have highlighted the urgency of this issue. Security leaders are redefining defense strategies, focusing on AI-powered detection and response. The shift from reactive to proactive security is accelerating, with AI agents becoming central to threat anticipation. Enterprises are also investing in secure development practices, recognizing that AI-generated code can introduce vulnerabilities if not properly vetted.

What Organizations Should Do Now

To defend against Phishing 3.0, organizations must adopt a multi-layered approach. First, implement AI-based email security solutions that analyze behavioral patterns rather than just known indicators. Second, conduct regular training that simulates AI-driven attacks, helping employees recognize the subtleties of deepfakes and voice cloning. Third, deploy AI agents that monitor for anomalous user behavior and automatically block suspicious actions.

Additionally, companies should enforce strict verification protocols for sensitive actions, such as requiring multiple approvals for wire transfers or credential changes. Zero-trust architectures become critical when trust is no longer automatically granted to any communication channel. Finally, collaboration with industry peers and sharing threat intelligence can help identify emerging attack patterns before they become widespread.
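The multiple-approval protocol suggested above can be modeled as a quorum check: a sensitive action executes only after a required number of distinct approvers sign off. This is a sketch under stated assumptions; the action names, approver identities, and the two-person threshold are illustrative, not a prescribed policy.

```python
def is_authorized(action: str, approvals: set, required: int = 2) -> bool:
    """Sensitive actions proceed only with `required` distinct approvers."""
    sensitive = {"wire_transfer", "credential_change"}
    if action not in sensitive:
        return True  # routine actions need no quorum
    return len(approvals) >= required

# Under a two-person rule, one approval is not enough for a wire transfer
print(is_authorized("wire_transfer", {"cfo"}))                 # False
print(is_authorized("wire_transfer", {"cfo", "controller"}))   # True
print(is_authorized("read_report", set()))                     # True
```

Because approvals are a set of distinct identities, a single compromised account (or a single convincing deepfake call) cannot satisfy the quorum on its own, which is the point of the control.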

Benishti's warning about the resetting threat curve is not hyperbole. As AI continues to advance, the gap between attacker capabilities and defender readiness will only widen unless organizations invest in equivalent technologies. The era of relying solely on static defenses is over. Continuous adaptation, driven by AI agents, is the only way to stay ahead of attacks that learn and evolve in real time.


Source: Dark Reading

