Bridging the gap to AI security
Contrary to popular belief, well-coordinated artificial intelligence (AI) attacks on a corporate network are a possibility.
Experiments have shown AI's uncanny ability to coordinate cyberattacks, and recent studies predict that these attacks will become a reality sooner rather than later.
The machine learning (ML) algorithms that power AI can learn and adapt to any situation on the fly, and this fact isn't lost on today's administrators and tech professionals.
Many are scrambling to understand and find ways to counter this dynamic and unpredictable future foe. Deployment of solutions such as observability tools is a good start, but businesses will need additional firepower to fight future AI incursions.
The threat of AI lies in its digital nature.
Consider for a moment what a human hacker needs to breach a corporation's digital defences.
For human hackers, it's a matter of resources and logistics—their knowledge, team size, energy levels, equipment, and funding dictate the length and severity of the attacks they can conduct.
Such considerations don't apply to an AI that can deploy automated algorithms to probe, analyse, and exploit system vulnerabilities over an extended timeframe without tiring, and in a far more cost-effective manner.
This makes them the very definition of an Advanced Persistent Threat (APT).
Unlike their human counterparts, they can simultaneously perform operations such as social engineering, network scanning, cataloguing zero-day vulnerability exploits, password attacks, and DDoS cover attacks to mask their tracks.
They are also not constrained by resources or the need to compromise, and can operate around the clock without pausing, turning detection and mitigation of an AI's attack into a race for network admins.
Using the automation capabilities of AI, with its inexhaustible motivation and patience, even an attacker with limited access to machines and a constrained budget can work slowly over weeks or months to identify vulnerabilities and get in unnoticed.
And this extended timeframe adds to the complexity of threat detection; SysAdmins can't spend weeks watching systems and waiting for that one packet in a million or—more likely—a billion to occur.
Fighting AI with AI
There is also the possibility of leveraging the capabilities of AI to turn the tables.
If IT professionals, along with their trusted vendors, take the initiative, security professionals will soon be able to rely on AI to counter AI-led cyberattacks and protect our networks and applications.
And though many network administrators are concerned that machines will replace human enterprise security experts, that's actually unlikely.
The management demands of increasingly complex systems are likely to absorb roughly the same resources that automating day-to-day management frees up.
Admins are already gaining breakthrough insight via modern monitoring and observability platforms, even without AI.
New, easier-to-deploy tools mean metrics and events are finally being observed through IT systems' continuous telemetry, rather than through sporadic scans, dashboard-watching, or periodic reports.
In many ways, these new application performance management (APM) approaches actually drive the effectiveness of algorithms, rather than AI driving new data platforms.
Data always comes first, then learning.
IT telemetry-enabled AI will allow network security experts to perform some amazing tricks.
Consider a spear-phishing campaign that is sending well-crafted fake superannuation account emails to employees' spouses.
A recipient helpfully forwards one to their spouse, a senior network security administrator, bringing it inside the firewall.
Now its payload, potentially exploiting a zero-day vulnerability, has a chance to compromise an admin workstation.
It's challenging for a network administrator trying to prevent such an attack to deal with all these factors using traditional rules and techniques.
However, anomaly detection algorithms based on data from millions of email messages can make quick work of finding even one "perfectly" crafted spear-phishing attack.
Sharp security administrators will be able to run comparison data sets to help differentiate normal from exceptional email by employee type, using trust and interest scoring to flag messages for additional screening.
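As an illustrative sketch only (the features, weights, and threshold below are hypothetical, not any specific product's method), a statistical baseline over per-message features can surface the kind of outlier described above:

```python
from statistics import mean, stdev

# Hypothetical per-message features: (link_count, urgency_words, new_sender).
# In practice these would be extracted from millions of real messages.
baseline = [
    (1, 0, 0), (0, 1, 0), (2, 0, 0), (1, 1, 0),
    (0, 0, 0), (1, 0, 1), (2, 1, 0), (0, 0, 0),
]

def score(msg):
    links, urgency, new_sender = msg
    # Naive hand-picked weights; a real system would learn these from data.
    return links + 2 * urgency + 3 * new_sender

scores = [score(m) for m in baseline]
# Flag anything more than three standard deviations above the mean.
threshold = mean(scores) + 3 * stdev(scores)

def is_suspicious(msg):
    return score(msg) > threshold

# A "perfectly" crafted spear-phish: several links, urgent wording,
# previously unseen sender.
print(is_suspicious((4, 3, 1)))
```

Even this toy baseline flags the crafted message while passing ordinary mail; production systems replace the hand-built score with models trained on large labelled corpora.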
Bridging the gap to AI security
The first step in AI for the enterprise is maturing past the perception that AI is too difficult, too expensive, or not valuable enough for IT.
Fortunately, acquiring knowledge about machine and deep learning and how to apply algorithms to very large data sets is becoming much easier.
Until recently, trainable, neural net-based security products were only available to specific industries, and these tended to be highly complex and expensive.
As with any emerging technology, however, complexity and cost will decrease over time.
Vendors are already recognising the need to simplify solutions.
Microsoft, Google, and especially AWS have realised that machine learning tools are too complex for the average admin.
In the meantime, the IT environment can benefit from the new telemetry being generated, or from broad observability of the network environment, even before an AI/ML element is added.
Security configurations, topology, monitoring skills, and events and change management requirements are all critical focus areas.
It doesn't matter if the processes are manual, hybrid, or fully automated.
Today, network administrators should begin focusing efforts on incorporating visibility into all systems, so they will be ready to attach layers of intelligent security applications on top of them.
At the same time, administrators should clearly define security posture objectives and begin applying data science to ensure that the information from security-related events is well understood.
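A minimal sketch of what that looks like in practice, assuming a hypothetical event format and an arbitrary posture threshold: a posture objective is expressed as data, and security events are summarised against it.

```python
from collections import Counter

# Hypothetical security events as (source_host, event_type) pairs; in a real
# deployment these would stream in from the telemetry pipeline.
events = [
    ("web01", "login_fail"), ("web01", "login_ok"),
    ("web02", "login_fail"), ("web02", "login_fail"),
    ("web02", "login_fail"), ("db01", "login_ok"),
]

# A posture objective expressed as data: at most this many failed logins
# per host per reporting window (threshold chosen for illustration).
MAX_FAILED_LOGINS = 2

# Summarise raw events into a per-host baseline.
failures = Counter(host for host, kind in events if kind == "login_fail")

def hosts_violating_posture():
    return sorted(h for h, n in failures.items() if n > MAX_FAILED_LOGINS)

print(hosts_violating_posture())
```

The point is not the toy threshold but the pattern: once events are collected and summarised this way, an intelligent security layer can be attached on top without reworking the plumbing.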
AI is set to become nearly as widespread as today's smart devices.
But AI is just a tool, albeit a powerful one—and thanks to machine learning, its power will grow to suit the motivations of its user.
This places a responsibility on all players in tech to steer the technology toward future good; vendors, administrators, and IT professionals alike need to work together to find ways and create algorithms to counter the nefarious uses of AI.