As artificial intelligence gains traction, organisations are beginning to find enterprise use cases for the technology.
At the same time, cybercriminals are also finding ways to weaponise the technology to execute more sophisticated attacks at a higher volume.
A report by the Electronic Frontier Foundation identified several characteristics of AI that lend themselves well to use in cyberattacks: its scalability, the ease with which its algorithms can be distributed, and its ability to exceed human capabilities.
Using AI to amplify cyber threats

AI-enabled cyberattacks can expand existing threats, introduce new threats, and alter the typical character of threats, allowing attacks to be more versatile, effective, and targeted.
As an example, AI could make email attacks such as spear phishing far more automated – it could even eliminate the need for the attacker to speak the same language as the target.
It could also alter malware's behaviour so that it becomes impossible for humans to counter it manually.
In response, security experts have turned to the same technology that makes these attacks so effective in order to combat them.
Artificial intelligence can reduce human effort and lower the time needed to detect, respond to, and recover from cybersecurity incidents.
One organisation focusing on this area is LogRhythm, with its AI-enabled security operations offering, CloudAI.
The integration of artificial intelligence with its security operations centre was aimed at realising the following outcomes:
- Discerning patterns of known threats on a global, geographic, vertical, and company basis, and detecting every observed instance in real time going forward (see the sketch after this list).
- Discerning, with security and risk relevance, the behavioural shifts across the IT environment that accurately predict and detect an active or emerging threat.
- Enabling organisations to perform effective threat hunting – augmenting what is today an art few can effectively perform or afford.
- Eliminating false negatives with a very low false-positive rate, and presenting alarms with real-time risk context to support detecting and neutralising threats early in the kill chain.
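To make the first of these outcomes concrete, the sketch below shows one very simplified way that incoming telemetry could be matched against indicators of known threats. The indicator values, event fields, and function names are hypothetical illustrations, not LogRhythm's implementation or data.

```python
# Illustrative sketch only: matching incoming events against known-threat indicators.
# Indicator values and event fields are placeholders, not real threat intelligence.
KNOWN_BAD_HASHES = {"0123456789abcdef0123456789abcdef"}      # placeholder file-hash indicator
KNOWN_BAD_DOMAINS = {"malicious-update.example.com"}          # placeholder domain indicator

def match_known_threat(event):
    """Return a reason string if the event matches a known indicator, else None."""
    if event.get("file_hash") in KNOWN_BAD_HASHES:
        return "known malicious file hash"
    if event.get("dns_query") in KNOWN_BAD_DOMAINS:
        return "known malicious domain"
    return None

events = [
    {"host": "ws-041", "file_hash": "0123456789abcdef0123456789abcdef"},
    {"host": "ws-042", "dns_query": "intranet.example.com"},
]

for event in events:
    reason = match_known_threat(event)
    if reason:
        print(f"ALERT {event['host']}: {reason}")
```

In practice, the "patterns" a system like this learns go well beyond static indicator lists, but the same idea applies: once a threat pattern is known, every subsequent instance of it can be flagged as the telemetry arrives.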
User and entity behaviour analytics (UEBA) is a perfect application for machine learning as long as the necessary security context is available for understanding the significance of each anomaly.
Gartner defines UEBA as profiling and anomaly detection based on a range of analytics approaches, usually using a combination of basic analytics methods.
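As a rough illustration of what profiling and anomaly detection can look like, the sketch below baselines a single hypothetical behaviour (each user's daily outbound data volume) and flags large deviations from that user's own history. Real UEBA products profile many more behaviours and entities; the data, threshold, and field names here are assumptions made purely for the example.

```python
# A minimal, illustrative sketch of UEBA-style profiling and anomaly detection.
# The data, threshold, and behaviour profiled are hypothetical.
from statistics import mean, stdev

# Hypothetical per-user history of daily outbound data volume (MB).
history = {
    "alice": [120, 135, 110, 140, 125, 130, 118],
    "bob":   [40, 55, 38, 60, 45, 50, 42],
}

def is_anomalous(user, observed_mb, z_threshold=3.0):
    """Flag an observation that deviates strongly from the user's own baseline."""
    baseline = history[user]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed_mb != mu
    z_score = (observed_mb - mu) / sigma
    return abs(z_score) > z_threshold

# Example: Bob suddenly moves far more data than his baseline suggests.
print(is_anomalous("alice", 128))  # False: within Alice's normal range
print(is_anomalous("bob", 900))    # True: large deviation from Bob's baseline
```

The security context mentioned above is what turns such an anomaly into something actionable: the same spike in data volume means something very different for a backup server than for a finance analyst's laptop.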
Machine learning can make UEBA considerably more effective for the following reasons:
• Machine learning can handle the volumes of data to be analysed and the environment to be understood. This includes being able to incorporate many types of data sets, from network traffic patterns and application data to records of user authentication attempts and user access to sensitive data.
• Machine learning-driven UEBA is well suited for identifying "qualified" threats—those that are legitimate and require action. Machine learning can take many more factors into consideration than humans can when looking at potential threats, and it can do so in near real time.
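Drawing on both points above, the following hedged sketch shows how several telemetry types might be combined into features and scored by an off-the-shelf anomaly model as events arrive. scikit-learn's IsolationForest is used purely as an illustration; the feature set and thresholds are assumptions, not a description of any vendor's pipeline.

```python
# Hedged sketch: scoring mixed telemetry with an off-the-shelf anomaly model.
# Feature choices are hypothetical; a production UEBA pipeline uses far richer context.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [failed_logins, mb_transferred, sensitive_files_accessed, off_hours_flag]
baseline_events = np.array([
    [0, 120, 2, 0],
    [1, 135, 1, 0],
    [0, 110, 3, 0],
    [0, 140, 2, 0],
    [1, 125, 2, 0],
])

model = IsolationForest(contamination=0.1, random_state=42).fit(baseline_events)

# New events arriving in near real time: one routine, one suspicious.
new_events = np.array([
    [1, 130, 2, 0],    # resembles normal behaviour
    [9, 900, 40, 1],   # many failures, large transfer, off hours
])
print(model.predict(new_events))  # 1 = normal, -1 = anomalous
```

The value of the model is not the scoring itself but the scale: it can weigh many such factors across every user and entity continuously, surfacing only the events that merit an analyst's attention.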
UEBA is only one example of the possible applications of artificial intelligence in cybersecurity.
The same characteristics that make it an efficient tool for executing cyberattacks – scalability, versatility, and efficiency at processing large amounts of data in a short time – also make it effective for threat detection, threat response, and defence against insider threats.
See how you can detect and stop an insider threat with LogRhythm CloudAI and its UEBA capabilities.