Today's attacks still require several humans behind the keyboard making guesses about the sorts of methods that will be most effective in their target network — and it's the human element that often allows defenders to neutralise attacks.
Even attackers' resources are finite. If they find a way — any way — to scale up their attacks, they will take it. Adversaries think like enterprises: How can I make my hackers more efficient? How can I attack even more targets? How can I achieve more results with fewer resources?
AI has already achieved breakthroughs in multiple fields, including cybersecurity, autonomous vehicles, healthcare, voice assistants and many others. It only makes sense that attackers will turn to AI to reap the same benefits: to understand context, scale up operations, make attribution and detection harder, and increase their profitability.
We can expect Offensive AI to be used throughout the attack lifecycle. This can involve several different use cases, including:
- Using natural language processing to understand written language
- Crafting contextualised spear-phishing emails at scale
- Using image classification to speed up the exfiltration of sensitive documents once an environment is compromised
Offensive AI will make detecting attacks more difficult. For example, an agent using some form of decision-making engine might not even require command-and-control (C2) traffic to move laterally.
Eliminating the need for C2 traffic drastically reduces the detection surface of existing malware. In addition, existing open-source research and projects can be leveraged to augment every phase of the attack lifecycle.
This means that the speed, scale, and contextualisation of attacks are expected to grow dramatically. Traditional security controls already struggle to detect attacks that have never been seen before in the wild — be it malware without known signatures, new C2 domains or individualised spear-phishing emails. There is no chance that traditional tools will cope with future attacks as this becomes the norm.
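The weakness of signature-based detection can be illustrated with a minimal sketch (the hashes, payloads and blocklist below are hypothetical, assuming a simple hash-based signature database):

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known malware samples.
known_signatures = {
    hashlib.sha256(b"malicious_payload_v1").hexdigest(),
}

def is_flagged(sample: bytes) -> bool:
    """Flag a sample only if its exact hash matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in known_signatures

# The previously seen sample is caught...
print(is_flagged(b"malicious_payload_v1"))  # True
# ...but even a trivial mutation yields a brand-new hash and slips through.
print(is_flagged(b"malicious_payload_v2"))  # False
```

The sketch shows why exact-match signatures cannot keep pace with attacks that are individualised per target: every variant is, by definition, never-before-seen.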
Autonomous Response is necessary not only because humans cannot keep up with today's threat climate, but also because these machine-speed attacks are fast approaching.
Hundreds of organisations are already using Autonomous Response to thwart machine-speed attacks, new strains of ransomware, insider threats, previously unknown tactics, techniques and procedures, and many other threats.
It allows human responders to take stock and strategise from behind the front line. A new age in cyber-defence is dawning, and the effect of AI on this battleground has already proved to be fundamental.