Deepfakes and the Changing Risk Landscape for Financial Institutions

4/27/2026
Read Time: 5 minutes

Deepfake incidents are rising globally. In 2024, an Indonesian financial institution reported fraudsters using AI-generated photos to bypass its digital Know-Your-Customer (KYC) process, successfully spoofing facial recognition systems. The Monetary Authority of Singapore (MAS) has released an information paper on cyber risks associated with deepfakes. This article outlines the key risk areas and practical mitigation strategies for financial institutions.


What Is a Deepfake?

Deepfakes leverage artificial intelligence to create convincing fake audio, video, images, and text. Instances of impersonation, falsified documents, and fraudulent transactions have already caused financial losses and reputational damage worldwide, especially for financial institutions.


How FIs Can Mitigate the Key Risk Areas

1. Compromising Biometric Security

Fraudsters use synthetic faces and forged documents to create false identities for money laundering and unauthorized transactions.

Mitigating measures:

  • Implement liveness detection in facial recognition systems
  • Verify authenticity of identification documents and detect tampering
  • Conduct regular vulnerability assessments with simulated deepfake attacks
  • Use strong encryption for biometric data
  • Deploy fingerprinting and watermarking to identify deepfakes
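To make the first measure concrete, the following is a minimal sketch of a challenge-response liveness check, one common liveness-detection pattern: the system issues a random action challenge (which a pre-recorded deepfake video is unlikely to satisfy) and rejects responses that arrive outside a freshness window. The challenge list, function names, and 10-second window are illustrative assumptions, not a production design.

```python
import secrets
import time

# Illustrative challenge set and freshness window (hypothetical values).
CHALLENGES = ["blink twice", "turn head left", "smile", "nod"]
MAX_RESPONSE_SECONDS = 10

def issue_challenge() -> dict:
    """Pick an unpredictable challenge and record when it was issued."""
    return {"action": secrets.choice(CHALLENGES), "issued_at": time.time()}

def verify_liveness(challenge: dict, observed_action: str, responded_at: float) -> bool:
    """Accept only the matching action performed within the freshness window."""
    fresh = (responded_at - challenge["issued_at"]) <= MAX_RESPONSE_SECONDS
    return fresh and observed_action == challenge["action"]
```

In a real KYC flow this gate would sit alongside, not replace, face matching and document-tamper checks.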

2. Social Engineering and Impersonation

Deepfakes create realistic fake videos or audio impersonating executives or colleagues, manipulating victims into transferring funds or sharing sensitive information.

Mitigating measures:

  • Conduct staff awareness campaigns and simulation exercises
  • Train staff to verify requests through separate, trusted channels
  • Deploy endpoint-based deepfake detection tools on corporate devices
  • Require additional verification and separation of duties for high-risk transactions
  • Implement multi-factor authentication for high-privilege accounts

3. Spreading False Information

Fraudsters fabricate executive statements or company news to impact investor confidence and trigger market fluctuations.

Mitigating measures:

  • Monitor digital channels for deepfake-based brand abuse and impersonation
  • Establish incident response protocols for reporting, investigation, and content takedown
  • Develop trusted channels to inform stakeholders of deepfake incidents
  • Collaborate with regulators and industry peers for sector-wide defense


Improving Deepfake Resilience

The growing sophistication of deepfakes demands continuous vigilance and adaptation. The Crowe Center for Cybersecurity helps organizations evaluate their specific deepfake exposure through realistic AI-based attack simulations, implement layered defensive measures, train people to recognize manipulation attempts, and respond rapidly when incidents occur.

Speak to our expert.
Crowe can provide specialized industry consulting services to help tackle the specific challenges you face.