SecurityBrief Asia
Asia's leading source of cybersecurity and cyber-attack news

Artificial Intelligence: Out of the futurists’ lab, into the real world of networks and cybersecurity

By Shannon Williams
Fri 13 Jan 2017

Artificial Intelligence to the rescue! AI is widely seen by scientists, vendors and even enterprise IT professionals as the next step in cyber defense. It’s a perfect match. Cyberattacks are coming faster than humans can respond – and are morphing into new dangers that traditional anti-malware software can’t always detect. That’s where AI’s self-learning capabilities excel: they can respond at the speed of light (or at least, at the rate of high-speed networks and modern microprocessors) in an adaptive, effective fashion.

We recently spoke to three cybersecurity experts to get their take on the state of AI for next-generation network defense – and their views about the future: Kathryn Hume, President, Fast Forward Labs; Stuart McClure, Author, Inventor, and CEO, Cylance; and Paul Jackson, Principal Analyst, Digital Media, Ovum.

Kathryn Hume explained that artificial intelligence algorithms always start with particular use cases and particular data sets “from which we elicit general algorithms that then may or may not be able to be applied to different use cases, but both the opportunity and the complexity of this space lies within that transition from particular to general.”

For example, she cites the well-known IBM Watson computer that won on the game show Jeopardy! by focusing on a specific data set. “We've seen Google DeepMind build AlphaGo, which is a tool using a technique called reinforcement learning, a set of artificial intelligence algorithms that put in position a system of rewards to train systems to excel at a particular task.” In that case, AlphaGo developed and used a set of algorithms to beat Lee Sedol, the leading Go champion, in March 2016.
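The reward-driven training Ms. Hume describes can be sketched in a few lines of code. The toy below is tabular Q-learning on a five-state corridor, nothing like AlphaGo's actual system; all names, parameters and the environment itself are purely illustrative.

```python
import random

# A minimal sketch of reinforcement learning: an agent in a toy
# five-state corridor earns a reward only by reaching the rightmost
# state, and learns from that reward signal alone which moves to make.
N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = (-1, +1)    # move left or move right

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q[state][action] estimates the long-run reward of each move.
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            # Explore occasionally; otherwise exploit the best-known move.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[state][i])
            nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward the observed
            # reward plus the discounted value of the next state.
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q

q = train()
# After training, the greedy policy from every non-goal state is
# "move right" (action index 1) - learned purely from the reward.
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(N_STATES - 1)]
```

No one tells the agent the right answer; the system of rewards shapes its behavior, which is the essence of the technique behind AlphaGo.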

From Fun and Games to Data and Networks

Beating humans at trivia and at board games is one thing; it’s great for building awareness of AI and for exciting the popular press, but it doesn’t apply directly to enterprise computing. Neither do other applications of AI that we encounter as consumers, such as Facebook image recognition or textual analysis of Twitter posts to learn about users’ political preferences. How about protecting computer networks from attackers and malware? It’s all the same problem, said Ms. Hume: Studying huge amounts of training data to learn patterns – and then rapidly searching for those same patterns in real-world situations.

Cylance’s Stuart McClure picked up the narrative, explaining that for software that monitors the data stream – such as network traffic or email attachments – the goal is to quickly determine if the traffic or file is safe or malicious. That requires a lot of deep learning to see patterns – and the ability to evaluate new data quickly, to see if it meets the patterns of safe or malicious.

Mr. McClure used the analogy of watching a house to determine if a person walking nearby is a burglar. “Some cybersecurity platforms cannot determine if something is bad unless they’ve seen it before and know what it will do. It's sort of like saying, well I can't tell if this person walking up to my house is going to burglarize it until they actually break in and steal something. Right? That’s not very effective.”

By contrast, there’s a better, more effective way, said Mr. McClure, which is to learn patterns – and not trust anything suspicious. “If you were to watch the video cameras from every home around the world, for every burglar that ever walked up to a house and burglarized it you'd create patterns in your mind. You would create connections between how they were dressed, how they approached the house, how they interfaced with the locks. You would figure it out pretty quickly if they were good or bad. So when a new person came up to your house you'd think, uh this person matches the pattern for a burglar. I'm not going to trust them. I'm going to watch them the whole time.”

The Cybersecurity AI Breakthrough

Mr. McClure applied that reasoning to cybersecurity where, in the old, pre-AI model, an anti-malware company needed thousands of analysts to write rules and virus signatures, by studying malware that evaded previous rules and signatures. “That’s not scalable,” he said, and can’t reach the 99% success threshold needed for effective protection. “We can't possibly scale like that with thousands of analysts writing rules and signatures. The threats come out way too fast.”

That’s when Mr. McClure, through Cylance, had the breakthrough: Instead of studying the latest malware to write new rules and signatures – and therefore, detect it only after it successfully attacked someone – why not use artificial intelligence?

“That's what we've been able to do,” said Mr. McClure. “We talk about two parts of AI quite a bit - supervised and unsupervised learning. There are two parts to what we do. The first part is we automatically look for features that are going to be potentially indicative of good or bad.” That’s not just a few features, by the way - not even just a hundred features. “Now if I told you we have over five million features that are indicatively defined as malicious or safe you probably wouldn't believe me. Right? Five million? That's insane.”

The first part is to use software to look for features that might indicate malicious intent in a file. The second part? A supervised human judgment of whether sample files are actually malicious or not. “We collect as many files as humanly possible. Then we extract as many features as we possibly can that we've already mapped or learned are potentially useful. Then we transform those. We then train the AI using neural networks about what is going to cluster to good and what is going to cluster to bad. Then we classify it. If it's bad we block it. If it's good we allow it. It's that simple.”
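The collect-extract-train-classify loop Mr. McClure outlines can be illustrated with a deliberately tiny stand-in: two hand-picked features and a nearest-centroid rule, rather than Cylance's millions of features and neural networks. All sample data and feature choices below are invented for illustration.

```python
import math
from collections import Counter

# Toy version of the pipeline: extract features from files, train on
# labeled examples, then classify new files as safe or malicious.
def features(data: bytes):
    """Extract [length, byte-entropy] from a file's raw bytes.
    High entropy is a crude (illustrative) hint of packed malware."""
    n = len(data)
    counts = Counter(data)
    entropy = -sum(c / n * math.log2(c / n) for c in counts.values())
    return [float(n), entropy]

def train(samples):
    """samples: list of (bytes, label) pairs, label 'safe' or 'malicious'.
    Each class is summarized by the mean of its feature vectors."""
    grouped = {}
    for data, label in samples:
        grouped.setdefault(label, []).append(features(data))
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in grouped.items()}

def classify(model, data: bytes):
    """Assign the label whose centroid is closest in feature space."""
    f = features(data)
    return min(model, key=lambda label:
               sum((a - b) ** 2 for a, b in zip(f, model[label])))

# "Collect as many files as possible" - here, four made-up samples:
# readable text labeled safe, high-entropy byte soup labeled malicious.
model = train([
    (b"hello world, plain text here", "safe"),
    (b"another ordinary document body", "safe"),
    (bytes(range(256)), "malicious"),
    (bytes((i * 37) % 256 for i in range(300)), "malicious"),
])
```

New files are then classified against the learned clusters: if it lands near “bad” it is blocked, if near “good” it is allowed, mirroring the if-bad-block, if-good-allow decision he describes.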

Ovum’s Paul Jackson observed that while AI has been around for decades, both in the lab and in commercial products, there have been many rapid advancements recently. “To a lot of us, practical AI seems to have really come to the forefront over the last 12 or 15 months, but why now?”

Fast Forward’s Ms. Hume agreed with that point: many techniques such as neural networks and deep learning have been around since the 1990s, and in some cases AI goes back to the 1940s. But there were some problems, she said, and some tools that didn’t yet exist. “There wasn't a lot of data to work with. We didn't have the big data era - I use the term big data to refer to storing and processing data, not doing stuff with it. So 10 years ago it became really cheap to store a lot of data, keep it up in the cloud and then do stuff with it.”

Indeed, when it came to practical pattern recognition, she continued, “Around 2011 was when Google had a first coup using artificial neural networks to automatically identify cats in videos across the Internet. Computers needed to figure out that there was something about cats that made them similar, and could cluster together all these patterns. Then the supervised part was humans coming in and saying, oh yeah that thing you see that looks kind of like a blob of something, this amoeba thing, that's called a cat. And that one isn’t a cat.”
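The two-stage process Ms. Hume describes, unsupervised clustering followed by a human naming each cluster, can be sketched with a toy k-means on made-up 2-D feature vectors. Real image features from video frames are far beyond a sketch like this; the data and labels here are purely illustrative.

```python
import random

# Unsupervised step: k-means groups similar points with no labels at all.
def kmeans(points, k=2, iters=20, seed=1):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c:
                    sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Move each center to the mean of its group.
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two obvious blobs of "features": one near (0, 0), one near (10, 10).
points = [[0.1, 0.2], [0.0, -0.1], [0.3, 0.1],
          [9.8, 10.1], [10.2, 9.9], [10.0, 10.3]]
centers, groups = kmeans(points)

# Supervised step: a human inspects each cluster and names it -
# "oh yeah, that blob is called a cat, and that one isn't."
labels = {i: ("cat" if c[0] > 5 else "not-cat")
          for i, c in enumerate(centers)}
```

The machine discovers that the items are similar; the human supplies the word for what they are, which is exactly the division of labor in the cat example.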

The Rise of the GPU and Big Data

Another factor, Ms. Hume said: the rise of graphics processing unit (GPU) chips that excelled at pattern recognition processing. “Some kid playing video games realized that the structure of GPUs built to process images was pretty good at matrix multiplication, which just so happens to be the type of math that's powering these deep learning algorithms. So they said, the gaming industry is huge but gosh this other thing might be a lot bigger if we can actually apply these things to enterprise artificial intelligence needs, and this lets us train those neural networks faster.”
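The link between GPUs and deep learning that Ms. Hume describes comes down to one operation: a neural-network layer is, at its core, a matrix multiply. The pure-Python sketch below computes a single dense-layer forward pass; it is exactly this multiply that a GPU spreads across thousands of cores in parallel. The weights and inputs are arbitrary illustrative values.

```python
# A dense neural-network layer is y = relu(W @ x + b), and the W @ x
# part is ordinary matrix multiplication - the workload GPUs excel at.
def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p), row by column."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def dense_layer(weights, bias, x):
    """One forward pass: multiply, add bias, apply relu nonlinearity."""
    wx = matmul(weights, [[v] for v in x])  # x as a column vector
    return [max(0.0, wx[i][0] + bias[i]) for i in range(len(bias))]

# Two inputs feeding three hidden units (made-up weights).
W = [[0.5, -0.2],
     [0.1, 0.8],
     [-0.3, 0.4]]
b = [0.0, 0.1, -0.05]
y = dense_layer(W, b, [1.0, 2.0])
```

Training a deep network is little more than running enormous numbers of these multiplies, which is why hardware built to push pixels turned out to be ideal for training neural networks.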

“Another factor in AI’s rapid rise is the data,” added Ms. Hume. “It takes a neural network probably 50,000 examples in order to gain that ability to recognize things. So you can imagine if we're going to go through all of the types of objects we might want to identify to build a recognition system we need a lot of training examples. So that data has also propelled the transition.”

Cylance’s Mr. McClure cited a fourth breakthrough technology: Cloud computing. “We never could have started this company and done what we've done without the cloud, without Amazon Web Services in particular. Two or three years ago, it would literally take about six months to build a malware detection model. Today our models take about a day and a half to build. But we have to spin up over 10,000 CPUs to do that in a day and a half. Without that flexible compute fabric there's no way we could be doing what we're doing. It's just that simple.”

The Perfect Place to Apply Artificial Intelligence

Ovum’s Mr. Jackson observed that “we are increasingly facing many more sophisticated types of attack, and endpoint protection is a key goal of cybersecurity systems. This type of security seems to be one of those areas where AI is particularly well suited, because trained tools can perform far better than people.”

Cylance’s Mr. McClure agreed that cybersecurity is the perfect place to apply AI and machine learning. “Quite honestly I don't know why it hasn't been done before! That seems pretty easy, straightforward. That would be a natural assumption to apply.”

He continued by citing three core ways that attackers manage to penetrate systems, all of which can be blocked or mitigated through the use of AI:

“First: Denial of Service, which starves the resources of the target. So you starve memory, you starve network bandwidth, you starve a CPU or a disk or something and the system falls down. It breaks.

“Second: Execution based attacks, which is what Cylance protects against. An attacker gets something sent to you or gets you to click on something that executes something in memory to do malicious things on your computer.

“Third: Authentication based attacks. Being able to steal your password and pretend to be you on your computer when you're not there, or bypassing authentication or brute forcing your password or any of those things. 

“AI can be applied to all three of those areas in a very meaningful way, you just need the data.”

How about the Rise of the Machines?

Mr. Jackson looked into the future, and was playfully concerned about what he might see. “We have talked about unsupervised and supervised learning. There is a whole realm of fear around wholly unsupervised AI, a sort of ghost in the machine, like the Terminator’s Skynet. The growth of AI is discussed a lot in the press - are those worries unfounded? Realistic? Is dangerous AI something we have to keep an eye on?”

Fast Forward’s Ms. Hume was not completely reassuring. “The thing to be concerned about in the near term is supervised learning, not unsupervised learning. That’s not because computers are dangerous but because people are dangerous. Why? There are all sorts of things that we do as people in society. We leave traces of that in our data.”

And, she continued, supervised learning requires human input, and that input may not always be benign, or particularly thoughtful. “We train systems based upon the decisions that humans have made in the past. So let's take an example of using algorithms to try to automatically hire somebody into your company or recruit students to your school or even give a loan for a credit application. If we try to automate that, the systems aren't that smart. They go out and they look in data sets. If in the past a specific university tended to recruit a certain type of candidate, the system will make future decisions based on that data. If the university tended to recruit relatively wealthy white males, the AI will build a model based on those past decisions.”

That can lead to perpetuating those decisions – without any specific intent to do so, Ms. Hume continued. “We go into the system and we say here is a model for the type of candidate we're looking for. These are the decisions that humans have made in the past. The algorithm will then learn to find candidates that look like those, basing its decisions upon what the humans did. The result? The AI algorithm comes back and says, ‘here is a pool of 95 per cent rich white males that we suggest you recruit to your school, precisely because if we think about a normal distribution this is where the bulk of the features tend to lie.’”
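Ms. Hume's recruiting example can be reduced to a few lines: a model that simply learns the historical admission rate for each profile will reproduce whatever skew those past decisions contained. The groups, numbers and attribute names below are entirely invented to make the mechanism visible.

```python
from collections import Counter

# A "model" that learns P(admit | profile) by counting past decisions -
# the simplest possible supervised learner, and one that faithfully
# inherits any bias present in its training data.
def train(history):
    """history: list of (profile, admitted) pairs; profile is a tuple."""
    admits, totals = Counter(), Counter()
    for profile, admitted in history:
        totals[profile] += 1
        admits[profile] += int(admitted)
    return {p: admits[p] / totals[p] for p in totals}

def recommend(model, profile, threshold=0.5):
    """Recommend a candidate if past admit rate clears the threshold."""
    return model.get(profile, 0.0) >= threshold

# Invented history: equally qualified candidates, but past decisions
# skewed heavily toward one group regardless of merit.
history = (
    [(("group_a", "qualified"), True)] * 90 +
    [(("group_a", "qualified"), False)] * 10 +
    [(("group_b", "qualified"), True)] * 20 +
    [(("group_b", "qualified"), False)] * 80
)
model = train(history)
```

Ask the trained model about two equally qualified candidates and it recommends one and rejects the other, purely because that is what the humans did before: the algorithm propagates the bias rather than correcting it.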

Ms. Hume concluded, “If we relegate our decisions to the algorithms they tend to propagate and amplify the stupid decisions we as humans have made. It's not about systems being stupid or intelligent, it's about our mixing together the corporate values with social values. We as data scientists may take an ethical position with regards to potentially having to hack the AI-learned algorithm so that we can create the future that we want, instead of one that perpetuates our biases from the past.”

Look Out, Ransomware, Here Comes AI

Cylance’s Mr. McClure closed the conversation with an example of using AI algorithms to classify and defend against one of this year’s biggest challenges: Ransomware. It’s a numbers game, he said – the more effective AI is in blocking ransomware, the less attractive such attacks will be.

“We are seeing effective defenses against ransomware today,” he said. “With the AI technology that we have installed on over three million end points, we already have the ability to have all of that technology truly detect malware and get to the ninety-ninth percentile of protection, and that includes about 350,000 to 400,000 new attack variants that come out every day.”

As advanced AI-based malware detection tools deepen their market penetration, Mr. McClure added, cybercriminals will see that “all their new fancy attacks are no longer bypassing the security systems they are targeting. They are now getting caught. They're getting prevented. So there will be a natural desperation motivating the attacker to proliferate even more attacks.”

Unfortunately for the attacker, that won’t work, said Mr. McClure. “When attackers realize that doesn't work, they will get more sophisticated and spend a lot of money on trying to bypass the AI. I don't mind them bypassing us - I would actually love it because every single attempt to bypass helps us to make the AI model smarter.”
