Deepfake Fraud: The New Threat to Financial Services Firms

Deepfake attacks on financial firms surged 700% in 2025. Learn how AI-generated video and voice fraud works and the practical defences you need.

Nerdster Team

10 January 2026

In February 2024, a Hong Kong finance worker was tricked into transferring $25 million after a video call with what appeared to be their company’s CFO and several colleagues. Every person on the call was a deepfake. That incident marked a turning point, and since then, deepfake-enabled fraud targeting financial services has escalated dramatically.

Deloitte’s 2025 Financial Crime Report documented a 700% increase in deepfake-related fraud attempts against financial institutions compared to the previous year. This is not a theoretical risk. It is a current, active threat that demands practical defences.

How Deepfake Fraud Works in Practice

Video Call Impersonation

The Hong Kong case is the most publicised example, but the technique has become more accessible. Attackers use real-time face-swapping software combined with publicly available video footage of executives — from LinkedIn, YouTube presentations, or company websites — to create convincing live deepfakes during video calls.

The target is typically a finance team member or executive assistant with authority to initiate payments or share sensitive information. The call appears to come from a known colleague or senior executive, creating a sense of urgency and authority that overrides normal verification procedures.

Voice Cloning for Payment Authorisation

AI voice cloning now requires as little as three seconds of sample audio to create a convincing replica. Attackers source audio from earnings calls, conference presentations, podcast appearances, or even voicemail greetings. They then use the cloned voice to call finance teams and authorise urgent payments, often combining the voice clone with a spoofed phone number to match the executive’s known contact details.

Synthetic Identity for Account Opening

Beyond direct fraud, deepfake technology is being used to create entirely synthetic identities for account opening at financial institutions. AI-generated photos, combined with stolen personal data, can pass basic KYC checks, including video verification steps.

Document Forgery

Generative AI can produce convincing forgeries of invoices, contracts, and authorisation letters. When combined with a deepfake voice or video call that appears to confirm the document’s authenticity, the deception becomes extremely difficult to detect through normal business processes.

Why Financial Services Is the Primary Target

Financial services firms are disproportionately targeted for three reasons:

  1. High-value transactions are routine. A payment instruction for hundreds of thousands of pounds does not automatically trigger alarm bells in an environment where such transactions are normal.
  2. Speed is culturally valued. Trading desks and deal teams are conditioned to act quickly. Attackers exploit this culture of urgency.
  3. Public exposure of key personnel. Fund managers, partners, and C-suite executives at financial firms often have significant public profiles with abundant video and audio content available for AI training.

Practical Defences

Implement Out-of-Band Verification

The single most effective defence is requiring a separate communication channel to verify any payment instruction or sensitive request that arrives via video, voice, or email. If someone calls requesting a payment, verify by texting or calling back on a known number — not the number they called from.

Establish a firm policy: no single communication channel is sufficient to authorise a payment above a defined threshold. This should be written policy, not informal guidance.
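As a concrete illustration, the rule can be expressed in a few lines. This is a minimal Python sketch, assuming a simple data model and a hypothetical policy threshold; it is not a description of any particular payments platform.

```python
# Illustrative out-of-band verification rule. The threshold, channel
# names, and data model are assumptions for this sketch only.
from dataclasses import dataclass, field

OOB_THRESHOLD_GBP = 10_000  # assumed policy threshold

@dataclass
class PaymentInstruction:
    amount_gbp: float
    origin_channel: str  # e.g. "video_call", "email"
    confirmations: set[str] = field(default_factory=set)  # channels that confirmed

def may_proceed(p: PaymentInstruction) -> bool:
    """Above the threshold, require confirmation on at least one channel
    other than the one the request arrived on."""
    if p.amount_gbp < OOB_THRESHOLD_GBP:
        return True
    independent = p.confirmations - {p.origin_channel}
    return len(independent) >= 1

# A video-call request confirmed only on that same call is blocked;
# a callback to a known number on file unblocks it.
req = PaymentInstruction(250_000, "video_call", {"video_call"})
assert not may_proceed(req)
req.confirmations.add("callback_known_number")
assert may_proceed(req)
```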

Establish Code Words or Challenge Phrases

Some firms have implemented rotating code words that must be used in any conversation involving payment authorisation. This is a low-tech but effective control — a deepfake cannot know this week’s code word.
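One way to rotate the word without circulating it in advance is to derive it from a shared secret, in the spirit of TOTP (RFC 6238). The sketch below is illustrative only: the word list is deliberately short, and in practice the secret and word list would be distributed and stored securely out of band.

```python
# Minimal sketch of a weekly rotating code word derived from a shared
# secret, TOTP-style. Secret and word list are illustrative assumptions.
import hmac
import hashlib
import time

WORDS = ["harbour", "granite", "falcon", "meadow",
         "copper", "orchid", "lantern", "willow"]  # use a far larger list

def weekly_code_word(shared_secret: bytes, now: float | None = None) -> str:
    week = int((now or time.time()) // (7 * 24 * 3600))  # weeks since epoch
    digest = hmac.new(shared_secret, week.to_bytes(8, "big"),
                      hashlib.sha256).digest()
    return WORDS[int.from_bytes(digest[:4], "big") % len(WORDS)]

# Both parties holding the secret compute the same word for the week,
# so a caller can be challenged without the word ever being transmitted.
print(weekly_code_word(b"example-shared-secret"))
```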

Limit Public Audio and Video Exposure

Review how much executive audio and video is publicly available. Consider whether earnings call recordings need to remain permanently accessible, whether conference presentations should be gated, and whether executive LinkedIn profiles need video content.

This does not mean hiding from the public. It means making a conscious decision about the attack surface you are creating.

Deploy Deepfake Detection Tools

Several enterprise-grade deepfake detection tools have reached production maturity. These analyse video calls for artefacts such as inconsistent lighting, unnatural blinking patterns, audio-visual synchronisation issues, and compression anomalies. While not infallible, they add a valuable layer of detection.

Microsoft Teams and Zoom have both announced native deepfake detection features, though rollout timelines vary. Ask your IT provider about early access or third-party alternatives.

Train Your Team

Awareness training specifically addressing deepfake threats is essential. Your team needs to know:

  • That deepfake technology is now accessible and convincing
  • That even video calls cannot be implicitly trusted
  • The specific verification procedures they must follow
  • How to report a suspected deepfake attempt without embarrassment

The last point matters. People who fall for deepfake fraud are often experienced, intelligent professionals. Creating a blame-free reporting culture encourages early detection.

Review Payment Controls

Basic payment controls remain your strongest structural defence (a short illustration of dual authorisation follows the list):

  • Dual authorisation for all payments above a threshold
  • Segregation of duties between payment creation and approval
  • Callback verification to known numbers for new payees or changed bank details
  • Cooling-off periods for urgent or unusual payment requests
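To show how dual authorisation and segregation of duties fit together, here is a minimal Python sketch. The threshold, user names, and class design are illustrative assumptions rather than a real payments API.

```python
# Illustrative dual authorisation with segregation of duties.
APPROVAL_THRESHOLD_GBP = 10_000  # assumed policy threshold

class Payment:
    def __init__(self, amount_gbp: float, created_by: str):
        self.amount_gbp = amount_gbp
        self.created_by = created_by
        self.approvers: set[str] = set()

    def approve(self, user: str) -> None:
        # Segregation of duties: the creator can never approve.
        if user == self.created_by:
            raise PermissionError("creator cannot approve own payment")
        self.approvers.add(user)

    def releasable(self) -> bool:
        # Dual authorisation above the threshold, single below it.
        needed = 2 if self.amount_gbp >= APPROVAL_THRESHOLD_GBP else 1
        return len(self.approvers) >= needed

p = Payment(250_000, created_by="alice")
p.approve("bob")
assert not p.releasable()  # still needs a second, distinct approver
p.approve("carol")
assert p.releasable()
```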

The Regulatory Dimension

The FCA has flagged deepfake fraud as a priority area and expects regulated firms to incorporate AI-generated threats into their fraud risk assessments. Firms that suffer losses due to deepfake fraud may face regulatory scrutiny if they cannot demonstrate that reasonable preventive measures were in place.

Responding to a Deepfake Attempt

If you suspect a deepfake-enabled fraud attempt:

  1. Do not proceed with the requested action
  2. Verify the request through an independent channel
  3. Report the attempt to your IT security team immediately
  4. Preserve any recordings or evidence of the interaction
  5. Report to Action Fraud and, if you are FCA-regulated, consider whether a regulatory notification is required

How Nerdster Supports Financial Services Firms

We help hedge funds, private equity firms, and wealth managers implement layered defences against evolving AI threats, including deepfake detection tools, communication security, and staff awareness training. Our approach is practical and proportionate to the threat landscape your firm actually faces.

If you want an honest assessment of how your firm would hold up against a deepfake-enabled attack, contact Nerdster for a free IT assessment.

deepfake, AI threats, financial services, fraud

Ready to fix your IT?

Book a free 30-minute IT assessment. We'll review your setup, identify risks, and show you exactly what better IT looks like.