Last Updated on April 1, 2026
AI is transforming cybersecurity both by powering new defensive tools and by massively accelerating the speed, scale, and targeting precision of cyber threats. Defenders need to upshift their automated detection and response capabilities, as well as train new human critical thinking skills to spot the incredibly convincing fakes coming our way.
What is AI changing about cyberattacks? What important element is staying the same? And how can individuals and organizations prepare for AI-driven threats? This article shares key points and insights.
Key takeaways
- AI and machine learning (ML) can radically accelerate the speed and frequency of cyberattacks as well as improve their adaptability and effectiveness.
- AI deepfakes lower the effort and skill level required to create highly realistic bogus content and highly convincing social engineering attacks.
- AI-powered autonomous malware can learn how to bypass controls and avoid detection while it seeks vulnerabilities.
- While AI has intensified attack vectors, AI-powered attacks most commonly target traditional classes of vulnerabilities like misconfigured cloud assets, unpatched software, insecure IoT devices, weak authentication, etc.
- People at all levels within organizations, as well as individuals in society at large, need to expand their awareness of the new kinds of compelling scams that deepfake content makes possible, learn how to judge whether something they read, see, or hear is real or AI-generated, and consider the manipulative intent that may lie behind it.
- Best practices for organizations to reduce risk from AI-powered threats are largely the same as always, such as applying zero trust principles in areas like network segmentation and least privilege access, encrypting sensitive data, implementing multifactor authentication (MFA), and patching third-party software.
How is AI transforming cyber threats?
AI greatly elevates the risk level associated with cyber threats in multiple ways:
- Leveraging automation and ML, AI-powered attacks can unfold much faster than humans can react, giving defenders little time to catch up.
- AI lowers the effort and skill level required to create highly convincing, personalized social engineering attacks (e.g., phishing emails, deepfaked audio and/or video, pretexting scenarios) at scale. By impersonating trusted sources like executives or coworkers with incredible realism, these scams are better able to bypass traditional spam filters and to fool even trained and wary human users.
- AI-powered autonomous malware can adapt to security controls to avoid signature detection and to identify and exploit vulnerabilities far faster than ever before.
- AI has become shockingly adept at finding exploitable vulnerabilities in source code or binary files, a double-edged sword that attackers increasingly wield in software supply chain attacks.
- AI’s ability to write code, gather public data, and create emails and other assets has “democratized” cybercrime so that less skilled hackers can now launch sophisticated attacks.
But while the weapons in their arsenal are more powerful than ever, the hackers’ goals remain familiar: extortion, financial fraud, credential theft, unauthorized system access, data exfiltration, physical security breaches, corporate espionage, etc.
Mike Armistead, CEO at Pulse Security AI, Inc., points out, “As a defender, we traditionally think of attackers as needing a lot of time or patience. But now they can do things at scales we haven’t really thought of or imagined. They can use a new kind of infrastructure to do ordinary old attacks and get at vulnerabilities you would think were solved long ago, like improper input validation or misconfigured servers. Those kinds of things are still very real.”
In fact, longstanding vulnerabilities in a company’s attack surface may be riskier than ever because the time and cost to find and exploit them is now much less thanks to AI.
“We’re in an arms race as always,” Mike continues. “The attackers are being innovative right now, and I think the defenders are a little bit behind, especially when it comes to preventive measures that reduce the blast radius of anything they’re trying to do.”
Defenders need to improve their critical thinking
AI deepfakes open up possibilities for new kinds of social engineering scams, such as pretexting. In these "long con" attacks, cybercriminals use AI tools to create fake content that builds trust over time and then exploits it. Common AI-generated personas in pretexting attacks include IT staff, customer service agents, executives, customers, vendors, law enforcement, or government officials.
The following steps illustrate a sophisticated pretexting scam that targeted (and almost fooled) a senior cybersecurity executive:
- The executive was contacted on LinkedIn by someone asking for expert input on a new cybersecurity product. The initiating party mentioned as references two people whom the executive knew.
- At the executive’s request, the “conversation” moved to email and went on for about two weeks.
- Finally, the initiator emailed the executive a PDF that was supposedly about the new product.
- Fortunately, the executive stopped short of opening the document and instead forwarded it to a technical expert, who identified it as malicious.
As this example shows, even cybersecurity professionals may not be thinking about these rapidly evolving risks and the financial, reputational, and even career damage they can do.
Mike Armistead emphasizes that what is required for defense is a new level of critical thinking. “You can’t believe the first thing that you see anymore,” says Mike. “We have to educate ourselves in this new world.”
How can individuals defend against pretexting and other deepfake-powered scams?
While a lot of the new deepfake content out there is just clickbait on social media sites, more serious spear phishing and “whaling” ploys built on highly realistic deepfake content are increasingly aimed at us.
Individuals and organizations need to take proactive steps to reduce the chances of falling victim to AI-powered social engineering attacks. Strategies to reduce your personal viability as a target include:
- Educate yourself on the evolving anatomy and growing sophistication of deepfake scams.
- Verify the identity of the requester before sharing sensitive data. Often, this involves using publicly available contact information not provided by the potential attacker.
- Always be vigilant and skeptical about unsolicited contact and requests for sensitive data, especially when accompanied by a sense of urgency or pressure.
- Minimize the personal information you share online, such as on social media. This increases the level of effort required to attack you.
How can organizations combat AI-powered cyber threats?
Steps organizations can take to reduce cyber risk from deepfakes and other AI-powered attacks include:
- Invest in security awareness training on the latest AI-driven threats.
- Invest in advanced AI defensive tools, such as agentic AI for rapid alert analysis, autonomous threat hunting, and/or automated remediation.
- Implement MFA to reduce the utility of compromised login credentials.
- Secure your internal systems and reduce risks and vulnerabilities with patching, network segmentation, enforcing least privilege access, and other proven techniques.
- Establish and enforce clear organizational policies and guidelines for responsible AI use.
- Invest in AI-centric cybersecurity skills through training/education, hiring, and/or outsourcing to a trusted partner.
- Encrypt sensitive data at rest and in transit.
- Use role-based access control (RBAC) to limit access to sensitive data.
- Manage vendor risk, enforce strong vendor security, and carefully validate any requests vendors make involving sensitive data.
- Use AI-powered automation to monitor the IT environment for suspicious activity.
- Participate in collaborative defense and information sharing (threat intelligence, best practices) with peer organizations and security vendors to build awareness and resilience against evolving attacks.
What’s next?
For more guidance on this topic, listen to Episode 158 of The Virtual CISO Podcast with guest Mike Armistead, CEO at Pulse Security AI, Inc.