Threats from malicious cyber activity are likely to increase as nation-states, financially motivated criminals, and novices increasingly incorporate artificial intelligence into their routines, the UK’s top intelligence agency said.
The assessment, from the UK’s Government Communications Headquarters, predicted ransomware will be the biggest threat to get a boost from AI over the next two years. AI will lower barriers to entry, a change that will bring a surge of new entrants into this form of crime. More experienced threat actors—such as nation-states, the commercial firms that serve them, and financially motivated crime groups—will likely also benefit, as AI allows them to identify vulnerabilities and bypass security defenses more efficiently.
“The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term,” Lindy Cameron, CEO of the GCHQ’s National Cyber Security Centre, said. Cameron and other UK intelligence officials said that their country must ramp up defenses to counter the growing threat.
The assessment, which was published Wednesday, focused on the effect AI is likely to have in the next two years. The chances of AI increasing the volume and impact of cyber attacks in that timeframe were described as “almost certain,” the GCHQ’s highest confidence rating. Other, more-specific predictions listed as almost certain were:
- AI improving capabilities in reconnaissance and social engineering, making them more effective and harder to detect
- More impactful attacks against the UK as threat actors use AI to analyze exfiltrated data faster and more effectively, and use it to train AI models
- Beyond the two-year threshold, commoditization of AI-enabled tools improving the capabilities of financially motivated and state actors
- Ransomware criminals and other types of threat actors who are already using AI continuing to do so in 2025 and beyond
The area of biggest impact from AI, Wednesday’s assessment said, would be in social engineering, particularly for less-skilled actors.
“Generative AI (GenAI) can already be used to enable convincing interaction with victims, including the creation of lure documents, without the translation, spelling and grammatical mistakes that often reveal phishing,” intelligence officials wrote. “This will highly likely increase over the next two years as models evolve and uptake increases.”
The assessment added: “To 2025, GenAI and large language models (LLMs) will make it difficult for everyone, regardless of their level of cyber security understanding, to assess whether an email or password reset request is genuine, or to identify phishing, spoofing or social engineering attempts.”
Some caveats apply
Security researcher Marcus Hutchins said parts of the assessment overstated the benefit AI would provide to people pursuing malicious cyber activity. Among the exaggerations, he said, is AI removing barriers to entry for novices.
“I believe the best phishing lures will always be the ones written by a human,” he said in an interview. “I don’t think AI will enable better lures, but better scale. Instead of a single perfect phishing lure, you might be able to output several hundred decent ones in the same time. AI is very good at quantity, but these models still struggle a lot when it comes to quality.”
Another way AI might help improve phishing and other social engineering lures is by digesting huge amounts of internal data obtained in previous breaches. By training a large language model on the data of a specific target, attackers can create lures that refer to particular characteristics of a target, such as the specific suppliers the target uses, to make the pretext seem more convincing.
A thread on Mastodon provided a broader view of reactions to the assessment from security experts.
The “key judgements” of the assessment were:
- Artificial intelligence (AI) will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years. However, the impact on the cyber threat will be uneven (see table 1).
- The threat to 2025 comes from evolution and enhancement of existing tactics, techniques and procedures (TTPs).
- All types of cyber threat actor – state and non-state, skilled and less skilled – are already using AI, to varying degrees.
- AI provides capability uplift in reconnaissance and social engineering, almost certainly making both more effective, efficient, and harder to detect.
- More sophisticated uses of AI in cyber operations are highly likely to be restricted to threat actors with access to quality training data, significant expertise (in both AI and cyber), and resources. More advanced uses are unlikely to be realised before 2025.
- AI will almost certainly make cyber attacks against the UK more impactful because threat actors will be able to analyse exfiltrated data faster and more effectively, and use it to train AI models.
- AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years.
- Moving towards 2025 and beyond, commoditisation of AI-enabled capability in criminal and commercial markets will almost certainly make improved capability available to cyber crime and state actors.
The assessment included the following table summarizing the various benefits, or “uplifts,” from AI in the next two years and how they applied to specific types of threat actors:
| | Highly capable state threat actors | Capable state actors, commercial companies selling to states, organised cyber crime groups | Less-skilled hackers-for-hire, opportunistic cyber criminals, hacktivists |
| --- | --- | --- | --- |
| Intent | High | High | Opportunistic |
| Capability | Highly skilled in AI and cyber, well resourced | Skilled in cyber, some resource constraints | Novice cyber skills, limited resource |
| Reconnaissance | Moderate uplift | Moderate uplift | Uplift |
| Social engineering, phishing, passwords | Uplift | Uplift | Significant uplift (from low base) |
| Tools (malware, exploits) | Realistic possibility of uplift | Minimal uplift | Moderate uplift (from low base) |
| Lateral movement | Minimal uplift | Minimal uplift | No uplift |
| Exfiltration | Uplift | Uplift | Uplift |
| Implications | Best placed to harness AI’s potential in advanced cyber operations against networks, for example use in advanced malware generation. | Most capability uplift in reconnaissance, social engineering and exfiltration. Will proliferate AI-enabled tools to novice cyber actors. | Lower barrier to entry to effective and scalable access operations – increasing volume of successful compromise of devices and accounts. |
Key (lowest to highest): Minimal uplift → Moderate uplift → Uplift → Significant uplift
Wednesday’s assessment, titled The near-term impact of AI on the cyber threat, came two weeks after NSA Cybersecurity Director Rob Joyce said the intelligence agency is predicting AI will help threat actors to develop more convincing phishing documents, according to NBC News. Already, he said, the NSA has seen cybercriminals and hackers who work for foreign intelligence agencies using various chatbots to appear as native English speakers.