ChatGPT in the Wrong Hands: How AI is Being Used in Cybercrime
Generative AI is reshaping enterprise cybersecurity as attackers target trust, behavior, and user access. Learn how AI-powered threats bypass static defenses and what CISOs must do to protect the human layer.


For decades, cybersecurity was built like a walled city. Firewalls, VPNs, and endpoint controls defined the edge, keeping threats out and trusting everything inside. But that perimeter no longer exists.
Cloud adoption, remote work, and personal device usage have pushed access far beyond enterprise boundaries. By 2025, an estimated 85% of business applications are SaaS-based, and nearly 70% of employees use their own devices for work. Risk has shifted from the perimeter to the human layer, where identity and behavior define the new attack surface.
Generative AI has accelerated this shift. Language models now enable attackers to craft believable messages, impersonate executives, and exploit trust at scale. These threats don’t live in malware payloads. They move across conversations, collaboration platforms, and authenticated sessions, exactly where traditional tools stop working.
What’s changed is not just the attacker’s toolkit; it’s the geometry of enterprise risk. Risk now moves with people, flowing through identities, behaviors, and real-time decisions. Every login, message, and interaction is a potential point of compromise.
In this blog, we explore how generative AI is reshaping enterprise threat models and why static controls fall short in a world where risk is real-time, identity-driven, and behavioral at its core. The organizations best positioned to defend themselves aren’t those with the tallest walls, but those with the clearest view into human-layer activity.

The Dark Side of Generative AI
Generative AI is now part of the enterprise workflow, but for cybercriminals, it’s a threat multiplier. Tools like ChatGPT, WormGPT, and FraudGPT are being used to automate personalized attacks, reduce the need for technical skill, and scale human-layer exploitation at unprecedented speed.
Attackers now craft phishing emails that evade detection with alarming precision. In recent benchmarks, AI-generated phishing messages had a 68% click-through rate, compared to just 14% for human-written messages (ETH Zurich, 2023). These emails mimic tone, structure, and workflows so well they often pass as internal communication. In fact, 81% of phishing campaigns now use generative AI to imitate authentic enterprise messages (Tessian, 2024).
Deepfake technology is also accelerating. Synthetic audio and video allow attackers to impersonate executives in real time, pressuring employees to transfer funds, disclose credentials, or approve sensitive actions. In one high-profile case, a finance employee transferred $25 million after attending a video call with deepfaked avatars of senior leadership (World Economic Forum, 2025).
Meanwhile, the dark web has seen a 200% surge in AI-enabled attack tooling. Threat actors are promoting generative platforms that produce polymorphic malware, with code that mutates every 90 minutes to avoid detection (Silicon Angle, 2025). These tools are distributed through Cybercrime-as-a-Service models, giving even low-skill actors the ability to launch attacks at enterprise scale.
This is more than an evolution of phishing. It’s a fundamental shift. Cybercriminals are no longer limited by time, language, or technical barriers. With generative AI, they can scale social engineering at a pace that traditional security tools cannot match.

New Threat Dynamics Every CISO Must Address
Generative AI is not just amplifying known threats. It is introducing new failure modes that exploit human behavior inside trusted systems. The most dangerous breaches are no longer launched from the outside. They are triggered by compromised sessions, synthetic identities, and behavior that appears legitimate until it's too late.
Below are key shifts every enterprise security leader must understand:
Faster Attack Velocity
AI has compressed the attack lifecycle. Threat actors can now generate, personalize, and deploy targeted campaigns faster than most security teams can triage alerts. Response time is no longer just a technical metric; it is a strategic differentiator.
Hyper-Personalization at Scale
AI enables attackers to mimic internal tone, timing, and workflows with uncanny accuracy. Messages appear legitimate, reference real business context, and often pass undetected through traditional filters. What used to take hours of manual effort can now be done in seconds.
Identity-Based Infiltration
Many threats now begin with a valid login. Once inside, attackers move laterally by hijacking trusted sessions, not by breaking through external defenses. This shift renders traditional credential validation insufficient. The focus must shift to detecting anomalous behavior across authenticated activity.
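To illustrate the behavioral approach, here is a minimal sketch of scoring valid logins against a per-user baseline. The telemetry fields (login hour, device, network ASN) and the thresholds are illustrative assumptions, not any particular product's schema:

```python
from collections import defaultdict
from statistics import mean, stdev

# Illustrative only: field names and weights are assumptions, not a real
# product schema. history[user] holds that user's past session events.
history = defaultdict(list)

def baseline(user):
    """Summarize a user's past sessions: typical hours, known devices/networks."""
    events = history[user]
    hours = [e["hour"] for e in events]
    return {
        "mean_hour": mean(hours),
        "std_hour": stdev(hours) if len(hours) > 1 else 3.0,
        "devices": {e["device"] for e in events},
        "asns": {e["asn"] for e in events},
    }

def score_login(user, event):
    """Anomaly score for a *valid* login: higher means more suspicious."""
    if not history[user]:
        return 5.0  # no baseline yet: treat as high risk, e.g. step-up auth
    b = baseline(user)
    score = 0.0
    # Unusual time of day relative to this user's own habits (simple z-score;
    # ignores midnight wraparound for brevity).
    z = abs(event["hour"] - b["mean_hour"]) / max(b["std_hour"], 0.5)
    score += min(z, 4.0)
    if event["device"] not in b["devices"]:
        score += 2.0  # never-before-seen device
    if event["asn"] not in b["asns"]:
        score += 2.0  # never-before-seen network origin
    history[user].append(event)
    return score
```

A 3 a.m. login from an unfamiliar device scores high even though the password was correct, which is exactly the gap that credential checks leave open.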
Erosion of Trust Signals
Common detection cues like device, location, and writing style can now be convincingly faked. AI allows adversaries to blend into enterprise environments by mimicking internal tone, user behavior, and business context, eroding the reliability of static detection rules.
Unmonitored Human Channels
Collaboration platforms like Slack, Zoom, and Teams have become essential to business operations, yet they remain blind spots for security. High-risk behaviors often unfold in these unmonitored spaces, giving attackers and compromised users room to escalate without detection.
Multi-Channel Threat Movement
AI-powered attacks move across email, chat, video, and voice with ease. Without unified visibility, signals get fragmented across tools and teams, making correlation and early detection difficult. The ability to detect and correlate risk across all channels is now foundational to modern defense.
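As a sketch of what cross-channel correlation can look like, the snippet below normalizes events from different feeds into one per-user timeline and flags users hit on two or more channels within a short window. The event schema and the 30-minute window are assumptions for illustration; real feeds (email gateway, chat audit logs, meeting records) each need their own normalizer:

```python
from datetime import datetime, timedelta

def normalize(raw_events):
    """Map heterogeneous feeds into one schema: time, user, channel, signal."""
    for e in raw_events:
        yield {
            "time": datetime.fromisoformat(e["ts"]),
            "user": e["user"],
            "channel": e["channel"],  # "email" | "chat" | "video" | "voice"
            "signal": e["signal"],    # e.g. "suspicious_link", "urgent_payment_request"
        }

def correlate(events, window=timedelta(minutes=30)):
    """Flag users who receive risky signals on 2+ channels inside the window."""
    recent = {}
    for e in sorted(events, key=lambda x: x["time"]):
        hits = recent.setdefault(e["user"], [])
        hits.append(e)
        # Keep only this user's events inside the sliding window.
        hits[:] = [h for h in hits if e["time"] - h["time"] <= window]
        channels = {h["channel"] for h in hits}
        if len(channels) >= 2:
            yield e["user"], sorted(channels)
```

A phishing email followed minutes later by a deepfaked "CEO" video call to the same employee stands out in this view, even though each event alone looks routine.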
Attackers are no longer exploiting systems. They are exploiting people. Without real-time behavioral insight, prevention becomes guesswork and response comes too late.

Where Enterprise Security Goes From Here
Enterprise security is at a strategic inflection point. Generative AI has not only accelerated existing threats, it has reshaped how they move across organizations. The most damaging breaches no longer target systems. They target people.
These threats emerge inside trusted workflows, spread through collaboration tools, and operate within authenticated sessions. Yet most security programs are still built to stop malicious code, not to understand human behavior. Static policies and awareness campaigns fail to detect real-time behavioral anomalies, especially those amplified by AI.
Modern defense requires continuous insight into how users behave and what access they exercise. It also means knowing when actions diverge from intent and how those signals can be exploited. Without this context, prevention breaks down and response comes too late.
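One hedged sketch of what "continuous" can mean in practice: a per-user risk score that compounds with new signals and decays as behavior stays clean. The signal weights and seven-day half-life below are illustrative parameters, not a reference model:

```python
import time

# Illustrative parameters, chosen for the example rather than any real model.
HALF_LIFE_SECONDS = 7 * 24 * 3600
WEIGHTS = {"clicked_phish": 3.0, "new_device_login": 1.5, "policy_override": 2.0}

class UserRisk:
    def __init__(self):
        self.score = 0.0
        self.updated = time.time()

    def observe(self, signal, now=None):
        """Decay the old score toward zero, then add weight for the new signal."""
        now = now if now is not None else time.time()
        elapsed = now - self.updated
        self.score *= 0.5 ** (elapsed / HALF_LIFE_SECONDS)  # exponential decay
        self.updated = now
        self.score += WEIGHTS.get(signal, 0.5)
        return self.score
```

A user who clicked a phish last quarter but has behaved cleanly since drifts back toward low risk, while repeated risky actions compound quickly; a static annual training score captures neither effect.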
Organizations that treat user risk as a one-time compliance task will fall behind. The ones that succeed will treat it as a dynamic challenge, driven by real-world data, behavioral context, and continuous visibility into how risk evolves across users and systems. Defending the modern enterprise is about tracking risk where it actually lies.
That’s why at Dune Security, we’ve built a platform designed for today’s human-layer threats. Our User Adaptive Risk Management solution replaces static training and legacy simulations with continuous, context-driven defenses. By adapting to each user’s behavior, access patterns, and risk profile, Dune helps prevent social engineering and mitigate threats before they escalate.
Never Miss a Human Risk Insight
Subscribe to the Dune Risk Brief: weekly trends, threat models, and strategies for enterprise CISOs.
