Inside the Rise of AI Voice Cloning, Deepfakes, and the Urgent Need for Human-Centric Cyber Resilience 

For years, the advice around phishing was straightforward: “Watch out for bad grammar, odd spellings, and suspicious links.” It worked. Until now. Generative AI has changed the game. Today’s scams are clean, convincing, and alarmingly personal. Emails sound just like your CEO. Deepfake videos mimic real meetings. Voice calls replicate trusted colleagues or family members with uncanny accuracy. 

What once seemed improbable is now the reality of cybersecurity in 2025. AI-driven phishing, smishing, and vishing attacks have surged by 1,265% since ChatGPT’s launch in November 2022. Deepfake fraud attempts have exploded, occurring approximately every five minutes in 2024. These sophisticated attacks hit hard: the global average cost of a data breach now stands at $4.88 million (around £3.8 million), with phishing among the top causes. 

But this isn’t just about money. It’s about trust, reputation, and the stability of your organisation. So, the question isn’t “Can we spot the scam?” It’s “How do we protect our people when the scam looks and sounds exactly like us?” The answer isn’t just more tech. It’s about changing how people think, question, and respond to digital interactions. Every day. 

The AI Attacker’s Playbook: What Organisations Are Up Against 

The old scams haven’t gone away, but today’s attackers are using AI to create far more convincing and targeted threats. These aren’t random attempts. They’re calculated, adaptive, and often uncomfortably personal. 

Hyper-Personalised Phishing: The Email That Knows You.

AI has taken phishing to a new level. Instead of sending generic messages to thousands, attackers can now craft emails that feel personal and relevant. They do this by pulling together publicly available information from company websites, LinkedIn profiles, social media, and even news articles. The result is a message that looks like it came from someone you know and sounds like they actually wrote it. It might reference a project you’re working on, name a known supplier, or mimic your CEO’s tone of voice. That’s what makes it so convincing, and so hard to spot. 

These scams often fall into two patterns: 

  • Executive impersonation, where a senior leader appears to request money or sensitive data. 
  • Contextual scams, which reference real events or conversations to make the message feel legitimate. 

Deepfake Deceptions: Seeing Isn’t Always Believing.

Deepfakes, the realistic videos and images generated by AI, are no longer just internet curiosities. They’ve become a serious tool for fraud. 

In one recent case, a finance officer was tricked into transferring millions during a video call; everyone else on the call was an AI-generated impostor. That wasn’t a one-off. In 2024, deepfake attacks were not just frequent but highly impactful, accounting for over 40% of all biometric fraud, according to Forbes. These videos are now so convincing that even trained eyes struggle to spot them, and the subtle tells that remain are fading fast. Criminals use deepfakes to fake meetings, issue false statements, or create images for blackmail or reputational damage. 

Executives and their families are often the targets, not because of technical gaps, but because of emotional ones. The aim is to create something that looks like proof, even when it isn’t.

Voice Cloning & Vishing: The Familiar Voice of Deceit.

If deepfakes are the visual trick, voice cloning is the audio version. Imagine getting a call from your CEO, a family member, or someone from IT, and the voice sounds exactly right. That’s the power of AI voice cloning, and it’s driving a sharp rise in vishing (voice phishing) attacks. Vishing incidents rose by 442% in 2024, and deepfake-enabled vishing surged by 1,600% in Q1 2025, propelled by how easy and effective voice cloning has become. Criminals often lift clips from social media or public videos, then use them to make urgent, emotional calls asking for money, credentials, or access. Because the voice sounds familiar, people are far more likely to trust it. That’s what makes these attacks so effective, and so difficult to detect. 

Why Our Old Cybersecurity Training Isn’t Enough Anymore.

For years, cybersecurity training has focused on spotting the usual red flags. That made sense when scams were clumsy and riddled with obvious mistakes. But AI has changed that, and the old rules don’t hold up in the same way. “Check for bad grammar” doesn’t help when AI writes better English than most people. “Hover over the link” is still useful, but AI can now create fake websites that look completely legitimate. These scams also play on psychology: they create urgency, mimic authority, and tap into emotions like fear or the instinct to help. That makes them harder to ignore, even for people who are usually cautious. And let’s be honest, most people are already overloaded. We can’t expect everyone to treat every message like a forensic investigation. It’s just not realistic. 


The New Frontier: Empowering Your People as a Human Firewall 

When digital threats become indistinguishable from the real thing, it is no longer enough to rely on instinct or old habits. Organisations need to rethink how they protect themselves. The answer is not just more technology; it is people. Every employee must become an active part of the defence, not just a passive recipient of warnings. 

  • Foster a Culture of Proactive Verification. Build a culture where people feel confident questioning unexpected or urgent requests, even if they appear to come from someone senior. If something feels off, it probably is. Put clear, consistent protocols in place: if someone gets an email asking for money, login details, or a click on a link, they should verify it through a separate channel. That could mean calling the person on a known number, sending a fresh email, or checking in via a trusted platform. The same goes for voice messages. If a call sounds urgent but odd, don’t act on it straight away; text the person back on a number you know is real. These small steps can stop even the most convincing AI-powered scams in their tracks, but only if the rules are followed without exception. 
  • Evolve Your Security Awareness Training. Traditional training is no longer enough. AI-driven scams are faster, smarter, and far more convincing than anything we’ve seen before. Training needs to keep up. That means moving away from once-a-year modules and towards short, regular sessions that reflect the latest threats. Start with realistic simulations. These should go beyond generic phishing tests and include deepfake videos, cloned voices, and highly personalised emails. The aim is to build instinct, so people recognise when something feels wrong, even if it looks right. But it’s not just about spotting the signs. People also need to understand why these scams work. AI exploits trust, urgency, and authority. Training should help staff recognise these tactics and respond with calm, clear steps: pause, verify, report. Most importantly, training must evolve as fast as the threats do. Keep it current, keep it relevant, and keep it going. 
  • Build a Strong Internal Reporting Culture. As AI-driven attacks become more convincing, reporting suspicious activity is more important than ever. People should feel safe flagging anything that seems off, even if it turns out to be nothing. AI scams are evolving quickly, and every report helps spot new tactics before they spread. Make it easy to report, and make it clear there’s no penalty for being cautious. The real risk is staying silent when something doesn’t feel right. 
  • Make Leaders the First Line of Defence and Advocacy. AI-powered scams often target senior leaders first. Their visibility, authority, and access make them prime candidates for impersonation. That means leaders need to be the most aware, the most trained, and the most cautious. They should understand how AI can be used against them, and how their behaviour sets the tone for everyone else. When leaders take cybersecurity seriously, others follow. When they question unexpected requests, verify instructions, and report suspicious activity, it sends a clear message: vigilance is part of the culture, not an inconvenience. 

Bolster Your Technical Safeguards 

People are your first line of defence, but technology still plays a vital role. The right tools can catch what even the sharpest eye might miss, especially when AI is used to mimic trusted behaviour. 

  • Advanced Email Security. Choose email systems that use AI to do more than just scan for dodgy links or known threats. The most effective tools analyse writing style, tone, and subtle anomalies in headers or sending patterns. This helps them detect phishing emails and impersonation attempts generated by AI, even when they look completely legitimate. A minimal sketch of one such header check appears after this list. 
  • Multi-Factor Authentication (MFA). MFA is essential. AI makes phishing more convincing and more successful. A second layer of protection, such as a code generated on a trusted device, can stop an attacker even if they have used AI to steal a password. The TOTP sketch after this list shows the idea. 
  • Endpoint Detection and Response (EDR). EDR tools use machine learning to understand what is normal on your systems. That means they can detect unusual or evasive behaviour, including malware created or adapted by AI to avoid detection. These tools also provide real-time alerts, automated responses, and visibility across all devices. The anomaly-detection sketch after this list illustrates the underlying principle. 
  • Emerging Detection Tools for Deepfakes and Voice Clones. AI-generated impersonations are becoming harder to spot. New tools are emerging that can detect the subtle inconsistencies in speech, video, and image data that AI often leaves behind. These technologies are still developing, but they are improving quickly and will be vital for verifying what is real and what is not. 
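To make the header-anomaly idea concrete, here is a minimal, hypothetical sketch in Python. The executive names, corporate domain, and sample addresses are illustrative assumptions, not any vendor’s detection logic; real products combine hundreds of such signals.

```python
# Hypothetical sketch: flag emails whose display name matches a known
# executive but whose address comes from an unexpected domain -- one of
# the simple header anomalies described above. Names, domain, and
# sample addresses are assumptions made up for illustration.
from email.utils import parseaddr

TRUSTED_DOMAIN = "example.com"               # assumed corporate domain
KNOWN_EXECUTIVES = {"Jane Doe", "Sam Lee"}   # assumed executive names

def looks_like_impersonation(from_header: str) -> bool:
    """Return True if the display name claims to be an executive
    but the sending domain is not the corporate one."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return display_name.strip() in KNOWN_EXECUTIVES and domain != TRUSTED_DOMAIN

# An AI-written email can have flawless prose, but the mismatch between
# claimed identity and sending domain still shows.
print(looks_like_impersonation('"Jane Doe" <jane.doe@gmail.com>'))    # True
print(looks_like_impersonation('"Jane Doe" <jane.doe@example.com>'))  # False
```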
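The “second layer” in the MFA bullet is often a time-based one-time password (TOTP). Below is a minimal sketch using the open-source pyotp library, assuming a throwaway secret rather than a real enrolment flow.

```python
# Minimal TOTP sketch using pyotp (pip install pyotp). The secret here
# is generated on the spot for illustration; in practice it is created
# once at enrolment, shown to the user as a QR code, and stored
# server-side. A password phished via an AI-written email fails without it.
import pyotp

secret = pyotp.random_base32()       # per-user shared secret (assumed)
totp = pyotp.TOTP(secret)

code_from_user = totp.now()          # stand-in for the code the user types
# verify() accepts the six-digit code only within its ~30-second window
print(totp.verify(code_from_user))   # True while the code is current
```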
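And to illustrate the “learn what is normal” principle behind EDR, here is a toy anomaly detector built on scikit-learn’s IsolationForest. The per-process features and values are invented for the example; commercial EDR telemetry is far richer than three numbers.

```python
# Illustrative sketch of behavioural baselining: train on "normal"
# per-process activity (CPU %, open network connections, files touched
# per minute -- all made-up features), then flag departures from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Assumed baseline: typical processes cluster around modest values.
normal_activity = rng.normal(loc=[5.0, 3.0, 10.0], scale=1.0, size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# A process suddenly maxing CPU, opening many connections, and touching
# hundreds of files looks nothing like the learned baseline.
suspicious = np.array([[40.0, 90.0, 300.0]])
print(model.predict(suspicious))  # [-1] -> flagged as anomalous
```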

The Way Forward  

AI is changing the threat landscape faster than most organisations can adapt. But it is also giving us new tools to fight back. The real advantage will not come from the latest bit of software. It will come from people who know how to question what they see, hear, and read, and who feel confident enough to act when something does not feel right. That shift will not happen through policy alone. It takes training, leadership, and a culture that values caution over convenience. The future of cybersecurity is not just technical. It is human. And it starts with how we prepare our people today. 

To learn more about how to navigate this evolving landscape, explore Dionach’s AI Cyber Security Governance Services. 
