Read Time: 5 minutes
Financial institutions are increasingly adopting Generative AI (GenAI) to improve how they operate and serve customers. In July 2024, the Monetary Authority of Singapore (MAS) published an information paper outlining the key cyber threats arising from GenAI adoption. This article summarizes those risks and what organizations can do to address them.
Deepfakes and AI-Enabled Social Engineering
GenAI makes impersonation significantly more convincing. Fraudsters can generate realistic voice, video, and text to carry out phishing attacks, impersonate executives, or bypass identity verification. Traditional fraud and identity controls were not designed to detect synthetic media, so these attacks often slip through.
Organizations should consider strengthening controls across a few areas:
- Liveness detection in biometric authentication to counter synthetic faces
- Multi-factor authentication for high-risk roles and transactions
- Clear verification procedures before acting on sensitive financial requests
- Deepfake scenarios included in incident response planning
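The verification step above can be sketched in code. This is a minimal, hypothetical example assuming a directory of verified contacts and an illustrative payment threshold; the names and values are placeholders, not a real system:

```python
# Hypothetical sketch: out-of-band verification before acting on a
# sensitive payment request. The contact directory and threshold below
# are illustrative placeholders only.
VERIFIED_CONTACTS = {"cfo@example.com": "+65-0000-0000"}  # placeholder data

def requires_callback(amount: float, threshold: float = 10_000.0) -> bool:
    """High-value requests must be confirmed out of band."""
    return amount >= threshold

def approve_request(requester: str, amount: float,
                    callback_confirmed: bool) -> bool:
    # Reject requesters who are not in the verified directory outright.
    if requester not in VERIFIED_CONTACTS:
        return False
    # Below the threshold, standard controls apply.
    if not requires_callback(amount):
        return True
    # Above it, require confirmation over a separately verified channel,
    # which a deepfaked voice or video call cannot satisfy on its own.
    return callback_confirmed
```

The point of the design is that approval for high-value requests never rests on the apparent identity of the requester alone, which is exactly the signal deepfakes are good at forging.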
AI-Enhanced Malware
GenAI has lowered the barrier to building sophisticated malware. Threat actors, including those without deep technical expertise, can now generate malicious code more quickly and at lower cost. Some malware uses GenAI to continuously mutate its own code, making signature-based detection tools less effective.
Basic cyber hygiene remains important, but it needs to be paired with more adaptive defences:
- Layered security controls so a single bypass does not expose everything
- Behaviour-based and anomaly detection to catch threats that look unfamiliar
- AI-assisted log monitoring integrated with threat intelligence for faster identification of suspicious activity
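As a simple illustration of behaviour-based detection, the sketch below flags hours whose event counts deviate sharply from the historical baseline, rather than matching known signatures. The data and threshold are made up for illustration; production anomaly detection is considerably richer:

```python
# Illustrative sketch only: flag hours whose login-failure counts deviate
# sharply from the baseline of the series. No signatures are involved,
# so mutated malware that changes its code but not its behaviour can
# still surface here.
from statistics import mean, stdev

def anomalous_hours(counts: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of hourly counts more than `threshold` standard
    deviations above the mean of the series."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > threshold]

baseline = [4, 6, 5, 7, 5, 6, 4, 90]  # sudden spike in the final hour
print(anomalous_hours(baseline))  # → [7]
```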
Data Leakage and AI Deployment Risks
When employees use public GenAI tools, there is a real risk that they submit confidential information, often without realizing it. Attackers can also use prompt injection or jailbreak techniques to extract sensitive data from AI systems. Third-party or open-source models can introduce supply chain vulnerabilities that are not immediately visible.
Managing these risks requires:

- Clear policies on what data can and cannot be entered into GenAI tools
- Data loss prevention controls designed for AI environments
- Security built into GenAI development from the start
- Proper due diligence on any external model before use
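A data loss prevention control for AI environments can be as simple as screening outbound prompts before they reach a public GenAI tool. The sketch below is hypothetical: real DLP products use far richer detection, and the two regexes (a card-number pattern and a Singapore NRIC pattern) are illustrative placeholders:

```python
# Hypothetical sketch: screen outbound prompts for obvious sensitive
# patterns before they are sent to a public GenAI tool. The patterns
# here are illustrative, not exhaustive.
import re

SENSITIVE_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
    "nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore NRIC format
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

hits = screen_prompt("Summarise the account for customer S1234567D")
# If any pattern matched, block or redact the prompt before submission.
```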
Model Integrity and Output Manipulation
GenAI models themselves can be targeted. If training data is manipulated or unauthorized changes are made to a foundation model, the outputs become unreliable. This type of compromise is often difficult to detect until damage has already occurred. AI governance also needs to sit within the broader enterprise risk management framework.
Key controls to have in place:
- Strict access controls over training data and foundation models, with a maker-checker process for any changes
- Continuous monitoring for unusual model behaviour or performance drift
- Human review for outputs that feed into critical decisions
- Contingency measures for GenAI solutions included in business continuity planning
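Continuous monitoring for performance drift can start from something very simple: comparing a model's recent output scores against a reference window. The check below is a crude, assumed sketch (a relative shift in the mean), not a production drift metric such as PSI or KL divergence:

```python
# Illustrative sketch: flag drift when a model's recent score
# distribution moves away from a reference window. The tolerance and
# sample data are assumptions for illustration.
from statistics import mean

def drift_detected(reference: list[float], recent: list[float],
                   tolerance: float = 0.15) -> bool:
    """Flag drift when the mean of recent scores shifts more than
    `tolerance` (relative) away from the reference mean."""
    ref_mu = mean(reference)
    return abs(mean(recent) - ref_mu) / abs(ref_mu) > tolerance

ref = [0.82, 0.79, 0.81, 0.80]   # scores from the validated baseline
rec = [0.61, 0.58, 0.63, 0.60]   # a sustained drop should trigger review
drift_detected(ref, rec)
```

In practice such a check would feed the human-review and contingency steps above: a drift alert routes the model to review rather than letting degraded outputs continue to drive critical decisions.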
Moving Forward
Crowe Indonesia works with financial institutions on AI risk and governance assessments, cybersecurity reviews for AI-enabled environments, data and model governance, and third-party AI risk evaluation. This enables organizations to adopt GenAI with confidence while maintaining security, compliance, and operational resilience.