AI Risk Management

A Must in the Era of Deepfakes

Clayton J. Mitchell
6/6/2025

AI is powering new cyberthreats, including deepfakes. Learn how smart AI risk and governance strategies can help protect your business and build trust.

AI is transforming how businesses operate, but it’s also redefining how they’re attacked. Among the most alarming threats are deepfakes: AI-generated audio, video, and images designed to deceive. As these technologies become more realistic and accessible, organizations face a growing risk of fraud, financial loss, data breaches, and reputational harm. 

Why deepfakes are a real threat 

Deepfakes come in a variety of forms, and each can erode the trust and confidence that organizations and their stakeholders depend on.

  • Voice impersonation scams. Bad actors use cloned voices to trick employees into transferring funds or disclosing sensitive information.
  • Fake videos and media. Threat actors use deepfakes to manipulate public perception, impersonate leadership, or create false narratives. This approach includes whale phishing, which targets high-level individuals in an organization to gain access to funds or sensitive information.
  • Erosion of trust. As synthetic media becomes harder to identify, public confidence in authentic communications declines.

To stay ahead, organizations need more than traditional cybersecurity. They need comprehensive AI governance and risk management strategies that supplement the preventive and detective controls already in place.

What effective AI governance looks like

At Crowe, our proven framework helps organizations manage AI responsibly and securely through proactive steps, such as:

  • Assessment. Review current data, privacy, IT, and security policies to identify vulnerabilities, including exposure to emerging threats such as deepfakes.
  • Policy development. Create clear standards for risk management, including policies that underpin the procedures and controls used to mitigate AI-enabled attacks. Threats that were once theoretical, such as a voicemail that convincingly mimics a CEO's voice and asks the recipient to take urgent action, are now real.
  • Education and awareness. Train employees on AI risks and threats to reduce unintentional exposure.
  • Oversight. Establish ongoing monitoring and testing to support governance and to identify and mitigate risks in a timely manner.

AI-powered threats require AI-powered security

As attackers use AI in more sophisticated ways, organizations must meet them with equally advanced defenses, such as:

  • Anomaly detection. AI tools can identify irregular behavior that signals fraud or intrusion, as illustrated in the sketch after this list.
  • Automated response. Smart systems can respond in real time to contain or neutralize threats.
  • Scalability. AI-driven security grows with the organization and adapts to evolving risks.
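
To make the anomaly detection point concrete, here is a minimal sketch, not a production control. It assumes a Python environment with scikit-learn, a hypothetical two-feature view of wire-transfer requests (amount and hour of day), and an IsolationForest model chosen purely for illustration; real deployments would draw on far richer behavioral signals and carefully tuned thresholds.

```python
# Minimal sketch: flag wire-transfer requests that deviate from historical behavior.
# Assumes scikit-learn is installed; features, data, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical historical wire-transfer features: [amount_usd, hour_of_day]
history = np.array([
    [1200, 10], [980, 11], [1500, 14], [2100, 9],
    [1050, 15], [1750, 13], [900, 10], [1300, 16],
])

# Fit an unsupervised model on normal transaction behavior.
model = IsolationForest(contamination=0.1, random_state=0)
model.fit(history)

# A new request: large amount, outside business hours -- the kind of transfer
# a cloned "CEO voicemail" might try to push through.
new_request = np.array([[250000, 23]])
flag = model.predict(new_request)  # -1 = anomalous, 1 = consistent with history

if flag[0] == -1:
    print("Anomalous request: route to out-of-band verification before payment.")
else:
    print("Request is consistent with historical activity.")
```

In practice, a flagged request would feed the escalation and out-of-band verification steps described below rather than trigger an automated block on its own.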

Still, technology alone isn't enough. Even as deepfake threats grow more sophisticated, manual controls remain a critical layer of defense, especially in high-risk industries. AI tools can assist with detection, but organizations also need trained employees who can identify and stop manipulation in real time.

Critical manual safeguards include:

  • Out-of-band verification. Confirm sensitive requests using a separate, secure communication method, such as calling back on a known number or verifying in person.
  • Standardized escalation procedures. Flag and escalate unusual executive messages to risk and security teams.
  • Employee training. Teach staff to recognize deepfake red flags and tactics.
  • Manual media review. Carefully inspect sensitive videos for visual and audio manipulation signs.
  • Executive media restrictions. Limit the public availability of long, high-quality recordings of executives to reduce voice and video cloning risk.

These low-tech, high-trust measures help build resilience against AI-powered impersonation and ensure that human judgment remains a frontline defense in digital communications.

Build resilience, not just compliance

Strong AI governance doesn’t just keep organizations compliant; it builds resilience and stakeholder trust. By aligning governance with enterprise risk management and a strong control environment, you can create a unified defense against current and future threats.

In short:

  • Deepfakes and AI-driven attacks are here to stay.
  • Governance, cybersecurity, and risk management must evolve in tandem.
  • Proactive, policy-driven frameworks powered by technology and employees are the key to staying protected and competitive.

Contact our AI governance team

If you suspect vulnerabilities in your AI risk management approach, our team specializes in helping companies build robust, future-ready AI governance, and we can help your organization do the same.

Contact us today

Clayton J. Mitchell
Principal, AI Governance

Paul Elggren
Managing Director, Internal Audit Consulting

Bo Qiu
Principal, Consulting