
By Matthew Geyman

You trust your team. But in the age of deepfakes, trust alone isn’t enough. With AI-driven scams growing more sophisticated, your strongest defence is a vigilant culture. In this urgent piece, Matthew Geyman reveals why “trust, but verify” is more relevant than ever – and how to embed it into your organisation. 

When Gallagher Re’s CEO recently revealed that the global insurance broker had conducted a simulated $10 million deepfake scam internally to test employee vigilance, it was a striking reminder that we’re now locked in a cyber arms race – and culture is our first line of defence. 

We’ve moved beyond phishing emails and suspicious attachments. The age of AI-generated deepfakes is here, and it’s escalating with a speed that should give every organisation pause. In 2023, around 500,000 deepfake videos were circulating online. This year, that number is expected to surpass 8 million. Financial services firms – guardians of money, trust, and identity – have become prime targets. 

These synthetic frauds are no longer crude or cartoonish. We’re seeing real-time video deepfakes that can mimic voices, faces, and even behaviours with unsettling accuracy. For cybercriminals, the playbook is simple: impersonate a trusted executive and authorise a high-value transfer. For businesses, the question is far more complex: how do you train your people to tell the difference between what’s real and what’s an AI-generated illusion? 

An Evolving Threat That Starts with Human Error 

One of the most powerful defences against deepfakes isn’t just technological – it’s behavioural. The Gallagher Re simulation wasn’t a gimmick; it was an exercise in instilling a security-conscious mindset. The aim isn’t to make people paranoid, but confident enough to challenge something that doesn’t feel quite right. That’s a cultural shift. 

There’s an old-fashioned but highly effective control many organisations should revisit: the two-person rule. Before the days of multi-factor authentication (MFA), it was common practice that no single person could authorise a large transaction alone. It’s a principle worth resurrecting. Call it “MFA for humans” – one initiates the request, another approves it. Even AI can struggle to simulate an entire conversation chain with multiple people on the fly. 
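
To show how “MFA for humans” might look in practice, here is a minimal sketch of a dual-authorisation check. It assumes a hypothetical payments workflow; the names (TransferRequest, HIGH_VALUE_THRESHOLD and so on) are illustrative, not a real API.

```python
# Minimal sketch of a two-person rule for high-value transfers.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 10_000  # example figure; set per firm policy


@dataclass
class TransferRequest:
    initiator: str                  # the person who raised the request
    amount: float
    beneficiary: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The core of the control: no one can approve their own request.
        if approver == self.initiator:
            raise PermissionError("Initiator cannot approve their own transfer")
        self.approvals.add(approver)

    def is_authorised(self) -> bool:
        # Low-value transfers need one person; high-value transfers also
        # need at least one independent approver.
        if self.amount < HIGH_VALUE_THRESHOLD:
            return True
        return len(self.approvals) >= 1


# A deepfaked "CEO" can pressure one employee, but the payment still
# stalls until a second, independent human signs off.
req = TransferRequest(initiator="alice", amount=10_000_000, beneficiary="ACME Ltd")
assert not req.is_authorised()
req.approve("bob")  # an independent colleague verifies out-of-band
assert req.is_authorised()
```

The design point is simple: the control lives in the process, not in any one person’s judgement, so a convincing fake voice or face on a single call is never enough on its own to move money.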

Spotting the Uncanny: Training for Deepfake Detection 

Teaching teams how to detect deepfakes is another pillar of resilience. While the technology is evolving rapidly, there are still telltale signs: 

  • Ask the unexpected: In a video call, asking someone to wave their hand across their face at different speeds can reveal rendering artefacts. Current algorithms often struggle to keep up with motion.
  • Side profile checks: Prompt the person to turn their head quickly from side to side and up and down. Many deepfake systems can’t convincingly simulate a face from multiple angles.
  • Repeat the call: Suggest dropping and redialling. A genuine caller won’t object. A scammer relying on pre-rendered video might not return.

But most importantly, remove the stigma from calling out suspicions. Organisational culture needs to empower employees to say, “Something doesn’t feel right,” without fear of embarrassment or repercussions. The cost of silence is far greater. 

Innovation Must Be Matched with Collaboration 

At a national level, the work being done by the Accelerated Capability Environment (ACE), the Home Office, and partners like the Alan Turing Institute is showing real promise. Initiatives like the Deepfake Detection Challenge – where cross-sector teams created, tested, and benchmarked AI models against synthetic content – demonstrate the vital role of public-private partnerships. These efforts don’t just produce tools; they create repeatable methodologies and gold-standard datasets that are essential for keeping up with the pace of AI-enabled threats. 

And while much of this effort rightly targets areas like child exploitation and disinformation, the spillover benefits for fraud detection in financial services are significant. Better models, curated data, and forensic detection tools are the building blocks of tomorrow’s cyber defences. 

Culture is the Firewall 

Ultimately, technology alone won’t save us. Yes, we need better detection tools, smarter AI, and stronger controls – but without a security-first culture, it all crumbles. We need workplaces where curiosity and caution are not only allowed, but expected. 

In the fight against deepfakes, “Trust, but verify” is more than a Cold War relic. It’s a principle for the digital age – and one that every financial services firm should be embedding in its people, its processes, and its technology stack. 

The arms race is on. And it’s time we made sure every employee is trained, empowered, and ready to respond – not with fear, but with vigilance.

About the Author 

Matthew Geyman began his career in the London insurance market as an IT manager for an underwriting firm. He saw a gap in the market for innovative IT with integrity and in 1996 founded Intersys. The business began as a one-man IT department, with Matthew zipping around the City of London on a motorbike. Nearly 30 years on, Intersys has grown into an award-winning, security-focused Managed Service Provider with more than 40 staff and over 140 live clients, serving the UK and beyond.