Let’s be honest… the cybersecurity training program you rolled out two years ago is already outdated. The threat landscape has shifted dramatically, and attackers are now wielding artificial intelligence as a weapon, creating synthetic voices, fabricating video evidence, and exploiting trust in ways that were once the stuff of science fiction.
If your organization’s security awareness training still focuses primarily on suspicious email links and password hygiene, it’s time for a serious upgrade. The statistics are alarming: deepfake-enabled voice phishing (vishing) attacks increased by 1,600% in early 2025, while QR code phishing surged by more than 500% over the same period (CrowdStrike, 2025; Barracuda Networks, 2024). These aren’t incremental changes; they represent a fundamental shift in how adversaries target your people.
This article breaks down the five emerging threats your cybersecurity training must address in 2026, along with practical recommendations for organizations of all sizes, including specific guidance for K-12 schools and higher education institutions.
1. Deepfake Video Impersonation
What it is: Deepfakes use artificial intelligence to create highly realistic synthetic video content, enabling attackers to impersonate executives, colleagues, or trusted figures in video calls and recorded messages.
Why it matters now: Deepfakes now account for more than 30% of corporate impersonation attacks, according to recent industry analysis (Recorded Future, 2025). These aren’t grainy, obviously fake videos anymore. Modern deepfakes can convincingly replicate facial expressions, lip movements, and even background environments in real time during video conferencing.

Real-world impact: Imagine receiving a video call from your superintendent or CFO requesting an urgent wire transfer. The face matches, the voice sounds right, and the request seems plausible. This scenario has already played out across multiple organizations, resulting in significant financial losses and reputational damage.
Training takeaway: Staff members should be trained to verify unusual requests through secondary channels, even when the request appears to come via video. Establishing code words or callback procedures for financial transactions provides an essential layer of protection.
2. Voice Cloning and Vishing Attacks
What it is: Voice cloning technology enables attackers to replicate an individual’s voice using just a few seconds of audio. Combined with traditional vishing (voice phishing) techniques, this creates extraordinarily convincing social engineering attacks.
Why it matters now: The 1,600% increase in deepfake vishing attacks during early 2025 represents one of the most dramatic threat escalations in recent memory (CrowdStrike, 2025). Attackers can now clone voices from publicly available sources such as conference presentations, YouTube videos, podcast appearances, or even voicemail greetings.
Real-world impact: In one widely reported incident, a multinational corporation lost $25 million after an employee received what appeared to be a legitimate phone call from the company’s CFO authorizing an urgent transfer (CNN Business, 2024). The voice was synthetic, generated using AI-powered cloning technology.
Training takeaway: Organizations should implement voice verification protocols for sensitive transactions. This includes establishing predetermined verification questions, requiring callback confirmation through known numbers, and cultivating healthy skepticism around any urgent financial requests, regardless of how familiar the caller sounds.
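The callback rule described above can be made concrete. The sketch below, in Python, shows the core idea: a number supplied by the caller is never trusted, and confirmation always goes to a pre-registered number from an internal directory. The directory contents and function names are hypothetical, illustrating the procedure rather than any specific product.

```python
# Illustrative callback-verification rule: ignore any number the caller
# provides and call back only a number already on file internally.
# All names and numbers here are hypothetical examples.

# Internal directory of independently verified contact numbers.
KNOWN_CONTACTS = {
    "cfo@example.org": "+1-555-0100",
    "superintendent@example.org": "+1-555-0101",
}

def callback_number(requester_email: str, caller_supplied_number: str):
    """Return the number to call back, or None if verification must fail.

    The caller-supplied number is deliberately ignored: callbacks go only
    to the directory entry on file for the requester.
    """
    verified = KNOWN_CONTACTS.get(requester_email.lower())
    if verified is None:
        return None  # unknown requester: escalate, do not proceed
    return verified

# The caller claims to be the CFO and offers a "direct line" -- we ignore it.
print(callback_number("CFO@example.org", "+1-555-9999"))      # +1-555-0100
print(callback_number("unknown@example.org", "+1-555-9999"))  # None
```

The design choice worth noting is that the caller-supplied number never enters the decision at all; a cloned voice cannot talk its way around a lookup it does not control.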
3. QR Code Phishing (Quishing)
What it is: Quishing is a social engineering attack where adversaries embed malicious URLs within QR codes, redirecting unsuspecting users to credential-harvesting sites or malware downloads when scanned.
Why it matters now: QR code usage exploded during the pandemic and has remained prevalent in everyday interactions: restaurant menus, event check-ins, parking payments, and corporate communications. Attackers have noticed. Research indicates that over half a million phishing emails containing QR codes embedded in PDF attachments were detected between mid-June and mid-September 2024 alone (Barracuda Networks, 2024).

How it works: The attack typically unfolds in three stages:
- Redirection: Attackers create QR codes containing URLs that redirect through legitimate websites or URL shorteners to mask the true destination
- Human verification: Sophisticated attacks employ tools like Cloudflare Turnstile to evade automated security scanning
- Credential harvesting: Victims arrive at convincing fake login pages, often with their email address pre-populated to establish false legitimacy
The detection challenge: Traditional email security filters struggle with quishing because the malicious URL is embedded within an image rather than appearing as a clickable text link. Furthermore, employees often scan QR codes using personal mobile devices that lack enterprise security protections.
Training takeaway: Staff should be trained to treat QR codes with the same suspicion as unknown links. Before scanning, employees should consider: Who placed this code here? Does the context make sense? After scanning, they should examine the URL carefully before entering any credentials.
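The post-scan URL checks recommended above can be sketched in a few lines of standard-library Python. The shortener list and heuristics below are illustrative samples, not an exhaustive detection rule; real filtering would draw on maintained threat intelligence.

```python
# A minimal sketch of post-scan URL checks for a QR-decoded link, using
# only the Python standard library. The shortener list and heuristics
# are illustrative, not exhaustive.
from urllib.parse import urlparse, parse_qs

# Common URL shorteners that can mask a QR code's true destination (sample).
SHORTENER_DOMAINS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl", "ow.ly"}

def quishing_red_flags(url: str):
    """Return a list of reasons to distrust a URL decoded from a QR code."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        flags.append("not served over HTTPS")
    if host in SHORTENER_DOMAINS:
        flags.append("URL shortener masks the real destination")
    query = parse_qs(parsed.query)
    if any(k.lower() in ("email", "login", "user") for k in query):
        flags.append("email/login pre-populated in the URL")
    return flags

print(quishing_red_flags("http://bit.ly/x?email=me@example.org"))
# ['not served over HTTPS', 'URL shortener masks the real destination',
#  'email/login pre-populated in the URL']
```

Note how the third check mirrors the credential-harvesting stage described earlier: a pre-populated email address in the query string is a deliberate legitimacy trick, not a convenience.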
4. AI-Enhanced Business Email Compromise (BEC)
What it is: Business Email Compromise attacks have existed for years, but generative AI has supercharged their effectiveness. Attackers now use large language models to craft grammatically perfect, contextually appropriate phishing emails that mimic individual writing styles.
Why it matters now: Traditional red flags such as awkward phrasing, spelling errors, and generic greetings are increasingly absent from modern phishing attempts. AI tools can analyze a target’s communication patterns from publicly available sources and generate messages that feel authentic.
Real-world impact: Educational institutions have proven particularly vulnerable to BEC attacks targeting payroll systems, vendor payments, and student financial aid disbursements. A single successful attack can compromise thousands of records and trigger significant compliance violations under FERPA and other regulatory frameworks.
Training takeaway: Organizations should emphasize verification procedures over visual inspection. If an email requests sensitive information or financial action, employees should confirm the request through a separate communication channel, regardless of how legitimate the message appears.
5. Synthetic Identity Fraud in Hiring and Access
What it is: Synthetic identity fraud combines real and fabricated personal information to create entirely fictional personas. In the employment context, attackers use these synthetic identities, often supported by AI-generated photos, fake LinkedIn profiles, and fabricated credentials, to gain insider access to organizations.

Why it matters now: Remote hiring practices have significantly expanded the attack surface. Organizations conducting virtual interviews may interact with candidates whose entire professional identity has been manufactured using generative AI tools.
Real-world impact: Once inside an organization, synthetic employees can access sensitive systems, exfiltrate data, or establish persistent backdoors for future attacks. Institutions with distributed governance structures and extensive contractor relationships face elevated risk.
Training takeaway: HR teams and hiring managers should be trained to verify candidate identities through multiple channels. This includes conducting reference checks via independently verified contact information, utilizing identity verification services, and remaining vigilant for inconsistencies during the interview process.
What This Means for Your Organization
Organizations across every sector now face a converged social-engineering threat model: attackers systematically exploit trust through synthetic media, QR-driven redirection, and highly polished AI-assisted communications. Sensitive data, from customer and employee information to financial records, operational plans, intellectual property, regulated content, and authentication artifacts, is a primary objective, while decentralization, hybrid work patterns, and complex vendor ecosystems continue to widen the practical attack surface.
Specific recommendations for leaders and staff across any organization:
- Implement explicit verification and escalation pathways for sensitive requests (e.g., payments, credential resets, data exports, access provisioning), with dual-authorization and out-of-band confirmation procedures applied where feasible, consistent with internal control expectations and audit requirements
- Train all staff, not just IT or security functions, on emerging threat techniques, because reception, operations, finance, HR, facilities, and executive support roles are routinely placed in the adversary’s initial targeting chain
- Harden third-party and vendor verification procedures by requiring validated contact routes, documented change-control for banking and invoice updates, and contractual security expectations aligned to widely adopted guidance (e.g., NIST SP 800-161) for supply chain risk management
- Embed threat awareness into existing organizational rhythms (e.g., onboarding, quarterly compliance cycles, leadership meetings, and operational standups) so security behaviors are reinforced as operational discipline rather than episodic compliance activity
- Align training outcomes to organizational mission and risk tolerance so that the “why” is consistently connected to continuity, safety, customer trust, and regulatory obligations, rather than being presented as a purely technical initiative
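The dual-authorization and out-of-band confirmation controls in the first recommendation can be expressed as a simple gate: a sensitive request is released only when two distinct approvers have signed off and the request has been confirmed through a second channel. The sketch below is a minimal illustration with hypothetical field names, not a workflow product.

```python
# Illustrative dual-authorization gate: two *distinct* approvers plus
# out-of-band confirmation before a sensitive request can execute.
# Class and field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SensitiveRequest:
    description: str
    approvers: set = field(default_factory=set)
    confirmed_out_of_band: bool = False

    def approve(self, approver: str) -> None:
        self.approvers.add(approver)

    def ready_to_execute(self) -> bool:
        # Sets deduplicate, so one person approving twice still counts once.
        return len(self.approvers) >= 2 and self.confirmed_out_of_band

req = SensitiveRequest("Wire transfer: vendor invoice update")
req.approve("alice")
req.approve("alice")             # duplicate approval does not count twice
print(req.ready_to_execute())    # False: only one distinct approver
req.approve("bob")
req.confirmed_out_of_band = True # e.g., callback to a known number
print(req.ready_to_execute())    # True
```

Using a set for approvers is the point of the example: it makes "two people" mean two distinct people, which is exactly the control a deepfake caller is trying to collapse into one.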
For additional perspective on a mission-aligned approach, see our post on why protecting our schools is a mission, not just a job.
Building a Training Program That Actually Works
Addressing these five threats requires more than an annual compliance module. Effective security awareness programs in 2026 should incorporate:
- Continuous micro-learning that reinforces concepts throughout the year
- Role-specific training that addresses the unique risks faced by different departments and roles
- Simulated attacks that test employee responses to emerging threat vectors, including voice and QR-based scenarios
- Clear reporting channels that encourage employees to flag suspicious activity without fear of embarrassment
- Regular updates that reflect the rapidly evolving threat landscape
The organizations that will successfully navigate this new era of AI-enabled attacks are those that treat security awareness as an ongoing conversation rather than an annual checkbox.
Moving Forward
The threats outlined in this article are not theoretical; they are actively being deployed against organizations across every sector. However, with appropriate training and verification procedures, your team can become your strongest line of defense rather than your greatest vulnerability.
If your organization is ready to implement a mission-driven approach to security awareness training, one that aligns protection with your organizational purpose, we’d welcome the conversation. Contact us now!
References:
- Barracuda Networks. (2024). Email Threat Trends Report.
- CNN Business. (2024). Company loses $25 million after employee deceived by deepfake CFO.
- CrowdStrike. (2025). Global Threat Report.
- Recorded Future. (2025). Annual Threat Intelligence Analysis.