Report: The murky world of malicious AI
Mon, 24th Jun 2019

Artificial intelligence (AI) is pervasive in everyday technologies – from biometrics to speech recognition to machine learning in almost every industry. While organisations should be looking at ways to enhance their business with these technologies, they should also keep a close eye on how others could use AI for malicious purposes.

A new report from cybersecurity firm Malwarebytes, titled When artificial intelligence goes awry: separating science fiction from fact, explains how the technology could be used for malicious purposes such as trickery and cyber attacks.

“With rapid adoption of AI in technology—especially as cybersecurity organisations run to incorporate AI and ML into their security infrastructure—there also becomes an undeniable chance for cybercriminals to use the weaknesses in currently-adopted AI against security vendors and users,” the report says.

“Once threat actors figure out what a security program is looking for, they can come up with clever solutions that help them avoid detection, keeping their own malicious files under the radar. For example, malware authors could subvert AI-enhanced security platforms in order to trick detections into incorrectly identifying threats, damaging the vendor's reputation in the market. Threat actors could also dirty the sample for machine learning, flagging legitimate packages as malware, and training the platform to churn out false positives.”
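To make the poisoning scenario concrete, here is a minimal sketch (not from the report – the feature set, labels and figures are all synthetic) showing how flipping a fraction of ‘benign’ training labels teaches a toy detector to flag legitimate files as malware:

# Minimal sketch of training-data poisoning: flipping "benign" labels to
# "malware" teaches the model to produce false positives. Purely illustrative;
# the feature set and threat model here are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "file feature" dataset: label 0 = benign, 1 = malware.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def false_positive_rate(model):
    preds = model.predict(X_test)
    benign = y_test == 0
    return (preds[benign] == 1).mean()

# Clean baseline.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned run: an attacker flips 30% of benign training labels to "malware".
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
benign_idx = np.where(y_train == 0)[0]
flip = rng.choice(benign_idx, size=int(0.3 * len(benign_idx)), replace=False)
y_poisoned[flip] = 1
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print(f"clean FPR:    {false_positive_rate(clean):.3f}")
print(f"poisoned FPR: {false_positive_rate(poisoned):.3f}")

Even this crude label-flipping attack visibly inflates the false positive rate on held-out data; real-world poisoning tends to be subtler, perturbing the samples themselves rather than their labels.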

That's not the only worry – threat actors could also outsmart technologies such as CAPTCHA, which was designed to help people prove they are human. The report describes solving CAPTCHAs as ‘trivial’ for machine learning.
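That claim is easy to demonstrate: a stock classifier reads distorted glyphs with near-human accuracy. The sketch below is an illustration only – it uses scikit-learn's small digits dataset as a stand-in for segmented CAPTCHA characters, whereas real solvers typically run convolutional networks over full CAPTCHA images:

# Why text CAPTCHAs are "trivial" for ML: even a stock classifier reads
# distorted glyphs with high accuracy. The 8x8 digits set is a toy stand-in
# for segmented CAPTCHA characters; real solvers use CNNs, but the point holds.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 1,797 images of handwritten digits, 8x8 pixels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = SVC(gamma=0.001).fit(X_train, y_train)
print(f"character accuracy: {model.score(X_test, y_test):.1%}")  # ~99%

Near-perfect per-character accuracy from a few lines of off-the-shelf code is exactly why distorted text no longer reliably separates humans from machines.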

The report also touches on how deepfakes – synthetic images, video or audio in which one person's face or voice is grafted onto somebody else's – can be used to create ‘incredibly convincing’ spear phishing attacks.

“Imagine getting a video call from your boss telling you she needs you to wire cash to an account for a business trip that the company will later reimburse. DeepFakes could be used in incredibly convincing spear phishing attacks that users would be hard-pressed to identify as false,” the report says.

AI has also appeared in malware, including several Trojans, and in a proof-of-concept attack tool called DeepLocker, developed by IBM.

DeepLocker is stealthy malware that masquerades as video conferencing software. It lies dormant until it finds a system that meets its target conditions, and only then deploys its payload. IBM security experts say the malicious code is hard to find and almost impossible to reverse-engineer.

“Malware designed with these specifications could infect many machines without being detected, and then be deployed on target machines according to the threat actor's command,” the report says.
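The ‘almost impossible to reverse-engineer’ claim rests on how the trigger is built. Rather than a readable if-statement, the payload is encrypted and the decryption key is derived from attributes of the intended target – IBM's demonstration reportedly keyed it to the output of a face recognition model. Below is a toy sketch of that pattern; every name is hypothetical and the ‘payload’ is a harmless string:

# Toy illustration of DeepLocker's "target-keyed" trigger: the payload is
# encrypted, and the key is derived from an attribute only the target machine
# exhibits. A reverse engineer sees a hash comparison, not the condition.
# Everything here is hypothetical; the "payload" is a harmless string.
import hashlib

def derive_key(target_attribute: str) -> bytes:
    # In the real concept a neural network's output (e.g. on a camera frame)
    # feeds this step; here a plain string stands in for that attribute.
    return hashlib.sha256(target_attribute.encode()).digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

SECRET_ATTRIBUTE = "alice@victim-corp"          # attacker-side only
LOCKED = xor_bytes(b"harmless demo payload", derive_key(SECRET_ATTRIBUTE))
FINGERPRINT = hashlib.sha256(derive_key(SECRET_ATTRIBUTE)).hexdigest()

def maybe_unlock(observed_attribute: str) -> None:
    key = derive_key(observed_attribute)
    # The binary only ships LOCKED and FINGERPRINT; without the right input,
    # analysts cannot recover the key or the condition that produces it.
    if hashlib.sha256(key).hexdigest() == FINGERPRINT:
        print(xor_bytes(LOCKED, key).decode())   # "deploys" only on target

maybe_unlock("bob@other-corp")        # nothing happens
maybe_unlock("alice@victim-corp")     # prints the demo payload

An analyst who pulls this apart sees only a hash comparison and an encrypted blob; the condition that unlocks the payload never appears in the code, which is what makes the technique so difficult to dissect.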

So what's the answer to preventing malicious AI? Malwarebytes warns that, left unchecked, AI and big data could ‘annihilate what little privacy we have left’ – but the report does offer guidance for vendors and the organisations that buy from them.

Malwarebytes says cybersecurity vendors should look at how they can develop AI and machine learning capabilities with their own security in mind.

“Closing any loopholes, especially for training systems to correctly identify threats, should be a top priority. But protecting the security program alone isn't enough. The technology should also not open up new attack vectors that could potentially be used against customers, and it should be well-tested before being implemented,” the report says.

Organisations should also conduct due diligence on security vendors and ask questions about how those vendors use AI.

“Organisations should look to vendors who aren't burying their heads in the sand when it comes to AI—both its benefits and potential for negative consequences. Which companies are using AI? How are they using it? Do they have plans to protect it from abuse?

“Users should favour organisations that are implementing the shiny new tech with deliberate consideration of its widespread impact and how it aids in strengthening security, not serving as a loophole through which criminals can gain access.”