Deepfakes and the Changing Risk Landscape for Finance Institutions

11/24/2025 | Read Time: 5 minutes
The surge in deepfake-related incidents has been affecting financial institutions (FIs) globally. In August 2024, an Indonesian financial institution reported fraudsters using AI-generated photos to defeat its digital KYC process for loan applications. Fraudsters used virtual camera technology to present deepfake photos as live inputs, successfully spoofing identities and tricking facial recognition systems.

The Monetary Authority of Singapore (MAS) recently released an information paper on "Cyber Risks Associated with Deepfakes." This article summarizes its key points to raise financial institutions' awareness of the latest threats, risks, potential impacts on the sector, and suggested mitigation strategies. The full publication is available from MAS.

What Is a Deepfake?

Deepfakes leverage artificial intelligence to create convincing fake audio, videos, images, and text. Instances of impersonation, falsified documents, and fraudulent transactions have already caused financial losses and reputational damage worldwide, especially for financial institutions.

How FIs Can Mitigate the Key Risk Areas

1. Compromising Biometric Security

Deepfakes can defeat facial recognition systems during customer onboarding and login. Fraudsters use synthetic faces and forged documents to create false identities for money laundering and unauthorized transactions.

FIs can consider the following mitigating measures:

  • Implement liveness detection in facial recognition systems
  • Verify authenticity of identification documents and detect tampering
  • Conduct regular vulnerability assessments with simulated deepfake attacks
  • Use strong encryption for biometric data
  • Deploy fingerprinting and watermarking to identify deepfakes
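To illustrate the first bullet, the sketch below shows one passive liveness signal: inter-frame motion. A static photo replayed through a virtual camera produces near-identical frames, while a live subject shows small natural frame-to-frame variation. The frame format and threshold here are illustrative assumptions, not a real vendor API; production liveness detection combines many such signals with active challenges.

```python
# Toy sketch of one passive liveness signal: inter-frame motion.
# Frames are small grayscale images represented as 2D lists of pixel values.
# The threshold is an illustrative assumption, not a calibrated value.

def mean_abs_diff(frame_a, frame_b):
    """Average absolute pixel difference between two same-sized grayscale frames."""
    total = sum(abs(a - b)
                for row_a, row_b in zip(frame_a, frame_b)
                for a, b in zip(row_a, row_b))
    pixels = len(frame_a) * len(frame_a[0])
    return total / pixels

def passes_motion_check(frames, min_motion=1.0):
    """Flag a capture as suspicious if consecutive frames barely change."""
    diffs = [mean_abs_diff(f1, f2) for f1, f2 in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs) >= min_motion

# A replayed static image: every frame identical, so the check fails.
static = [[[100, 100], [100, 100]]] * 5
print(passes_motion_check(static))  # False

# A live capture: small variation between frames, so the check passes.
live = [[[100 + 3 * i, 100], [100, 100 + 3 * i]] for i in range(5)]
print(passes_motion_check(live))  # True
```

Real systems pair signals like this with active challenges (blink, turn head) and document-tamper checks, since sophisticated deepfake pipelines can also synthesize motion.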

2. Social Engineering and Impersonation

Deepfakes create realistic fake videos or audio impersonating executives or colleagues. Attackers manipulate victims into transferring funds, granting access, or sharing sensitive information.

FIs can consider the following mitigating measures:

  • Conduct staff awareness campaigns and simulation exercises
  • Train staff to verify requests through separate, trusted channels
  • Deploy endpoint-based deepfake detection tools on corporate devices
  • Require additional verification and separation of duties for high-risk transactions
  • Implement multi-factor authentication for high-privilege accounts
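The verification and separation-of-duties bullets above can be sketched as a simple workflow: a high-value transfer request is held until a second person confirms it over a separate, trusted channel, and the requester cannot approve their own request. The class, names, and threshold below are illustrative assumptions, not a real payments API.

```python
# Minimal sketch of out-of-band confirmation with separation of duties
# for high-risk transfers. Threshold and names are illustrative only.

HIGH_RISK_THRESHOLD = 10_000  # assumed policy limit, in account currency

class TransferQueue:
    def __init__(self):
        self.pending = {}  # request_id -> (amount, requester)

    def request(self, request_id, amount, requester):
        """Record a transfer; high-risk requests must await confirmation."""
        if amount < HIGH_RISK_THRESHOLD:
            return "executed"
        self.pending[request_id] = (amount, requester)
        return "awaiting out-of-band confirmation"

    def confirm(self, request_id, confirmer):
        """Confirm over a separate channel; requester cannot self-approve."""
        amount, requester = self.pending[request_id]
        if confirmer == requester:
            raise PermissionError("separation of duties: requester cannot confirm")
        del self.pending[request_id]
        return "executed"

queue = TransferQueue()
print(queue.request("tx1", 500, "alice"))     # executed
print(queue.request("tx2", 50_000, "alice"))  # awaiting out-of-band confirmation
print(queue.confirm("tx2", "bob"))            # executed
```

The key design point is that confirmation arrives through a channel the attacker does not control (e.g. a callback to a known number), so a deepfaked voice or video on the original channel cannot complete the transfer alone.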

3. Spreading False Information

Deepfakes spread misinformation that impacts investor confidence and triggers market fluctuations. Fraudsters fabricate news about company performance or create fake executive statements.

FIs can consider the following mitigating measures:

  • Monitor digital channels for deepfake-based brand abuse and impersonation
  • Establish incident response protocols for reporting, investigation, and content takedown
  • Develop trusted channels to inform stakeholders of deepfake incidents
  • Collaborate with regulators and industry peers for sector-wide defense

Improving Deepfake Resilience

The growing sophistication of deepfakes demands continuous vigilance and adaptation of security measures. Crowe Center for Cybersecurity helps organizations evaluate specific deepfake threats through realistic AI-based attack simulations, implement layered defensive measures, train their people to recognize manipulation attempts, and build rapid response capabilities for when incidents occur.


Speak to our expert.
Crowe can provide specialized industry consulting services to help tackle the specific challenges you face.