Cybersecurity training has come a long way. Most organisations now run phishing simulations, require password hygiene modules, and have policies in place for suspicious emails. That is a solid foundation. The problem is that the threat landscape has moved on, and most training programmes have not moved with it.
AI has not simply made old attacks faster or more frequent. It has introduced entirely new categories of threat that require a fundamentally different kind of awareness. The gap between what attackers can do and what employees are trained to recognise is widening, and that gap is where breaches happen.
The standard security awareness curriculum was built around a specific mental model: an attacker sends a suspicious link, an employee clicks it, and the organisation is compromised. Train employees to spot the link, and you reduce the risk.
That model is increasingly inadequate. Today's AI-powered attacks do not always announce themselves with a suspicious email. They manipulate the tools your organisation already trusts. They impersonate colleagues on video calls. They corrupt the data your leadership relies on to make decisions.
According to a2026 threat report, the average time it takes an attacker to move laterally through a network after gaining access has dropped to just 29 minutes, with the fastest recorded case clocking in at 27 seconds. Spotting a suspicious link is no longer the primary skill your employees need.
What makes this particularly difficult for organisations is that the new generation of attacks is designed to look completely normal. There is no obviously suspicious behaviour to flag, no warning from an email filter, and no moment where an employee instinctively feels that something is wrong.
In 2024, a finance employee at engineering firm Arup wastricked into transferring $25 millionto fraudsters. The attack did not involve a phishing email. It involved a deepfake video call in which the attacker convincingly impersonated senior colleagues, including the CFO. The employee had no reason to doubt what they were seeing.
This is not an isolated case. It represents a broader shift in how malicious actors operate, withconvincing deepfake attacksbecoming increasingly common. Synthetic media, including convincing audio and video of people who do not exist or people who do, is now accessible to attackers at scale. The same technology that powers legitimate video production tools is being used to manufacture trust in corporate settings.
Then there is data poisoning. Attackers can subtly corrupt the data feeding your AI tools, skewing the outputs your teams use for financial, operational, or strategic decisions. Unlike a ransomware attack, there is no alarm. The damage looks like a series of bad calls, and by the time anyone traces the source, the consequences are already in motion.
Prompt injection takes this further. If your organisation uses AI-powered tools, attackers can embed hidden instructions into the content those tools process, directing the AI to act against your interests without anyone noticing. Your own systems become the vector.
Source: International Business Times UK