Cybercriminals use AI’s ability to process massive data sets, automate tasks, and generate humanlike text and speech to scale and refine their attacks. Some of the key threats include:
- Generative AI (GenAI) for malicious code and vulnerability research. Threat actors can use GenAI applications to automate and optimize cyberattacks. AI-powered tools can analyze software vulnerabilities, generate sophisticated malware, and even automate penetration testing techniques. In its January 2025 report, the Google Threat Intelligence Group (GTIG) found that threat actors from China, Iran, North Korea, and Russia have experimented with generative AI tools to weaponize cyber operations. Some of their activities included using AI to write convincing phishing emails, debug malicious scripts, and automate reconnaissance.
- AI-powered phishing and social engineering. Phishing remains one of the most effective cyberattack methods, and AI has made it even more dangerous. Attackers can now use AI to craft well-written, error-free phishing emails that mimic human communication patterns. These messages can be tailored to specific individuals or organizations, which makes them difficult to distinguish from legitimate correspondence. In its January report, the GTIG described how cybercriminals use AI models to improve multilingual phishing attacks. The attackers craft messages in multiple languages, including English, Hebrew, and Farsi, to target organizations across different regions without relying on human translators.
- Deepfake technology for fraud and impersonation. Cybercriminals use deepfake technology, which relies on AI to generate hyperrealistic images, audio, and video, to execute fraud, disinformation, and social engineering attacks. With deepfakes, cybercriminals can impersonate executives, spread misinformation, and manipulate financial transactions. In 2024, scammers used AI-powered deepfake video conferencing tools to impersonate a company’s chief financial officer in Hong Kong. The attackers tricked an employee into transferring $25 million to fraudulent accounts, illustrating the high stakes of deepfake-enabled financial fraud.
- AI-generated malware and polymorphic attacks. AI has made malware more adaptable. Attackers use AI to create polymorphic malware that continually modifies its code to evade traditional security detection methods. AI-powered malware can self-optimize, which makes it extremely difficult to track and neutralize. BlackMamba, a proof-of-concept AI-generated polymorphic keylogger, dynamically rewrites its code during execution to evade endpoint detection and response tools.
- AI-enabled vishing attacks. Cybercriminals exploit voice synthesis technology to create highly realistic voice phishing (vishing) scams. AI-generated voices can impersonate senior executives, financial officers, or family members, increasing the success rate of fraud attempts. Attackers also use AI-powered voice-cloning tools to disguise their accents and launch targeted vishing attacks. A growing number of businesses have reported cases in which employees received fraudulent calls mimicking their chief executive officer’s voice and instructing them to process unauthorized payments.
AI-powered security strategies
Organizations must respond to AI-powered cyberattacks with equally advanced, AI-powered security strategies of their own. Following are several AI-specific defensive measures that can help organizations mitigate risks.
- Deploy AI-powered cybersecurity solutions
Just as cybercriminals use AI offensively, organizations must use GenAI models and AI-driven security tools to detect and mitigate threats in real time. Leading solutions include Darktrace’s Enterprise Immune System, which uses AI to learn normal patterns of behavior and detect anomalies, and CrowdStrike’s Falcon® platform, which uses AI for real-time threat detection. These AI-powered tools can analyze large data sets, recognize patterns, and identify anomalies that might indicate a cyberattack.
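To make the idea concrete, the following minimal Python sketch shows the kind of unsupervised anomaly detection these platforms build on, using scikit-learn’s IsolationForest over synthetic login-event features. The features, data volumes, and contamination rate are illustrative assumptions, not any vendor’s actual model.

```python
# Minimal sketch: unsupervised anomaly detection over login events,
# illustrating the pattern-learning approach AI security tools apply at scale.
# Feature choices and thresholds here are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" events: [hour_of_day, MB_transferred, failed_logins]
normal = np.column_stack([
    rng.normal(13, 2, 1000),     # activity clustered around business hours
    rng.normal(50, 15, 1000),    # typical data-transfer volume
    rng.poisson(0.2, 1000),      # occasional failed logins
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new events; a prediction of -1 marks a likely anomaly
new_events = np.array([
    [14, 48, 0],     # ordinary afternoon session
    [3, 900, 11],    # 3 a.m. bulk transfer after repeated failed logins
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{event} -> {status}")
```

In production, commercial tools apply far richer models to streaming telemetry, but the underlying workflow is the same: learn what normal looks like, then surface deviations for analysts to investigate.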
- Invest in deepfake and synthetic media detection
As deepfake technology becomes more sophisticated, organizations should adopt AI-powered tools that analyze subtle inconsistencies in video and audio recordings, detect manipulated content, and prevent fraudulent transactions or impersonation scams. Microsoft™ Video Authenticator can analyze visual artifacts, audio inconsistencies, and behavioral patterns to detect synthetic content. Additionally, Sensity’s threat intelligence platform specifically monitors for deepfake threats across the internet.
- Strengthen phishing awareness and implement real-world verification
Phishing simulations should incorporate AI-generated attacks to prepare employees for evolving threats. Additionally, companies should enforce strict verification protocols, such as multifactor authentication and manual confirmations, to prevent unauthorized transactions. Reverting to low-tech verification methods is a simple yet effective countermeasure against AI-powered social engineering attacks. For instance, if an employee receives a call from an executive requesting a wire transfer, instead of relying solely on digital communication, the employee should physically walk to the executive’s office or call a known phone number to verify the request. Sometimes, the best defense against cutting-edge techniques is simply going old school.
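As a simple illustration of such a verification protocol, the hypothetical sketch below gates high-value transfers on an out-of-band callback to a number held in an internal directory, never on contact details supplied with the request itself. The directory contents, threshold, and field names are invented for illustration.

```python
# Minimal sketch of an out-of-band verification rule for high-risk requests.
# The directory, threshold, and approval flow are hypothetical examples.
from dataclasses import dataclass

# Known-good contact channels maintained by the organization, NOT taken
# from the incoming message or call.
TRUSTED_DIRECTORY = {
    "cfo@example.com": "+1-555-0100",
}
HIGH_RISK_THRESHOLD = 10_000  # transfers above this amount require a callback

@dataclass
class TransferRequest:
    requester: str
    amount: float
    callback_confirmed: bool = False  # set True only after calling the directory number

def may_process(req: TransferRequest) -> bool:
    """Allow the transfer only if it is low risk or was re-confirmed
    via the number on file (never the one supplied by the requester)."""
    if req.amount < HIGH_RISK_THRESHOLD:
        return True
    if req.requester not in TRUSTED_DIRECTORY:
        return False  # unknown requester: escalate, never pay
    return req.callback_confirmed

# A convincing AI-generated email or voice call alone is not enough:
print(may_process(TransferRequest("cfo@example.com", 250_000)))                          # False
print(may_process(TransferRequest("cfo@example.com", 250_000, callback_confirmed=True))) # True
```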
- Collaborate with AI providers and cybersecurity communities
Organizations should work closely with AI vendors to confirm ethical AI use and request security features such as watermarking AI-generated content or integrating traceability measures. Additionally, sharing threat intelligence with industry peers can help strengthen collective security strategies and defenses.
- Employ AI-driven behavioral analytics for insider threat detection
Organizations can implement AI-driven behavioral analytics tools, such as Exabeam, to create a comprehensive user behavior baseline. These platforms use machine learning algorithms to analyze vast amounts of data, including login times, access patterns, and file interactions, to establish what constitutes normal behavior for each user. For example, if an employee typically accesses sensitive financial data during business hours but suddenly attempts to access it late at night from a different geographic location, the system can trigger alerts for further investigation. By implementing these tools, organizations can detect potential insider threats and reduce false positives through continual learning and adaptation of the AI models. This proactive monitoring allows security teams to intervene before significant damage occurs, such as data breaches or financial fraud.
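The sketch below illustrates the baselining idea in miniature, assuming a log of (user, login hour, country) events. The field names, two-standard-deviation rule, and new-location check are illustrative assumptions, not how any specific product scores behavior.

```python
# Minimal sketch of a per-user behavioral baseline over login events.
# Field names and thresholds are illustrative assumptions only.
from collections import defaultdict
from statistics import mean, stdev

history = [
    ("alice", 9, "US"), ("alice", 10, "US"), ("alice", 14, "US"),
    ("alice", 11, "US"), ("alice", 16, "US"), ("alice", 9, "US"),
]

# Build each user's baseline: typical login hours and previously seen locations
hours, places = defaultdict(list), defaultdict(set)
for user, hour, country in history:
    hours[user].append(hour)
    places[user].add(country)

def is_suspicious(user, hour, country):
    """Flag logins far outside the user's usual hours or from a new location."""
    mu, sigma = mean(hours[user]), stdev(hours[user])
    unusual_time = abs(hour - mu) > 2 * max(sigma, 1)   # more than two standard deviations
    new_location = country not in places[user]
    return unusual_time or new_location

print(is_suspicious("alice", 10, "US"))   # False: fits the established baseline
print(is_suspicious("alice", 2, "RO"))    # True: 2 a.m. login from a new country
```

Commercial user and entity behavior analytics platforms extend this idea across many more signals and continually retrain the baseline, which is what drives down false positives over time.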
- Use AI for automated incident response and threat hunting
To enhance incident response capabilities, organizations can deploy AI-powered platforms that integrate with existing security information and event management systems, automate incident response workflows by orchestrating actions from predefined playbooks, and reduce response times during critical incidents. For instance, if a potential breach is detected, the system can automatically isolate affected endpoints, notify relevant personnel, and initiate forensic analysis. Additionally, AI-driven threat-hunting solutions, such as Darktrace or CrowdStrike, use advanced algorithms to scan networks for indicators of compromise associated with AI-generated attacks, including polymorphic malware or adversarial AI tactics. By employing these technologies, security teams can proactively identify and neutralize threats and achieve a more robust security posture.
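The following sketch shows what playbook-driven automation can look like in miniature, assuming alerts arrive as simple dictionaries from a SIEM. The alert types, playbook steps, and helper functions are hypothetical stand-ins for real endpoint isolation, notification, and forensics integrations.

```python
# Minimal sketch of playbook-driven incident response automation.
# Alert formats, playbooks, and integrations below are hypothetical examples.
def isolate_endpoint(host):
    print(f"[EDR] isolating {host}")

def notify_team(channel, message):
    print(f"[notify:{channel}] {message}")

def start_forensics(host):
    print(f"[forensics] capturing memory and disk image of {host}")

PLAYBOOKS = {
    # alert type -> ordered response steps
    "ransomware_behavior": [
        lambda a: isolate_endpoint(a["host"]),
        lambda a: notify_team("soc-critical", f"Ransomware behavior on {a['host']}"),
        lambda a: start_forensics(a["host"]),
    ],
    "phishing_click": [
        lambda a: notify_team("soc", f"{a['user']} clicked a reported phishing link"),
    ],
}

def handle_alert(alert):
    """Run the matching playbook; unknown alert types fall back to human triage."""
    default = [lambda a: notify_team("soc", f"Triage needed: {a}")]
    for step in PLAYBOOKS.get(alert["type"], default):
        step(alert)

handle_alert({"type": "ransomware_behavior", "host": "finance-laptop-07"})
```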
- Deploy adversarial AI to test defenses
Organizations can adopt adversarial AI techniques through platforms such as AIShield to rigorously test their security frameworks. These tools simulate sophisticated AI-driven cyberattacks, allowing security teams to evaluate their defenses against realistic scenarios. For example, by conducting red team exercises that mimic the tactics of AI-powered adversaries, organizations can uncover vulnerabilities in their security architecture, such as gaps in endpoint protection or weaknesses in network segmentation. By continually iterating on their defenses based on these simulations, organizations can refine security protocols and strengthen their overall resilience against evolving cyberthreats.
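One concrete form such testing can take is generating adversarial examples against a detection model. The sketch below applies a fast gradient sign method (FGSM)-style perturbation to a toy logistic-regression email classifier written in NumPy; the weights, features, and perturbation size are illustrative assumptions, and real red team tooling would target production models instead.

```python
# Minimal sketch of adversarial robustness testing (FGSM-style) against a toy
# "malicious email" classifier. Weights, features, and epsilon are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy detector: features = [urgency score, link count, sender-reputation score]
w = np.array([1.8, 1.2, -2.0])   # assumed learned weights
b = -0.5

def predict(x):
    return sigmoid(w @ x + b)     # probability the email is malicious

x = np.array([0.9, 0.8, 0.1])    # a clearly malicious sample
y = 1.0                          # true label
print(f"clean score: {predict(x):.2f}")

# FGSM: nudge features in the direction that increases the classifier's loss,
# i.e., the direction most likely to slip the sample past detection.
grad_x = (predict(x) - y) * w     # d(cross-entropy)/dx for logistic regression
eps = 0.35
x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
print(f"adversarial score: {predict(x_adv):.2f}")
# A large score drop from a small input change signals a robustness gap to fix.
```

Running a test like this against a staging copy of a detection model, and retraining on the adversarial samples it produces, is one practical way to close the gaps such exercises reveal.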
Investing in AI-powered defenses
AI is fundamentally reshaping business and threat landscapes. While AI can provide incredible benefits, it also gives cybercriminals the tools to launch more sophisticated and scalable attacks. From AI-generated phishing campaigns to deepfake impersonation fraud and self-evolving malware, the risks are escalating.
To stay ahead, organizations must invest in AI-driven defenses, enhance security awareness training, and, in some cases, revert to simple, low-tech verification methods. In any organization, cybersecurity is an ongoing effort. As cybercriminals increasingly use AI, staying proactive is the only way to maintain an advantage.