
Musk says AI 'more dangerous than nukes' - expert stays optimistic

13 Mar 2018

Speaking at South by Southwest (SXSW) on Sunday, entrepreneur Elon Musk revealed his startling beliefs about artificial intelligence (AI).

AI ‘scares the hell’ out of Musk, who believes that in the wrong hands it could be ‘more dangerous than nukes’.

Musk is known for his exploits that stretch the boundaries of regulations, but he says AI is one area where he is willing to make an exception.

"This is a case where you have a very serious danger to the public, therefore there needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely -- this is extremely important," Musk says.

"Some AI experts think they know more than they do and they think they're smarter than they are ... this tends to plague smart people, they define themselves by their intelligence and they don't like the idea that a machine can be way smarter than them so they just discount the idea, which is fundamentally flawed. I'm very close to the cutting edge in AI and it scares the hell out of me."

Musk used the example of AlphaGo and its successor AlphaGo Zero, the AI-powered board-game players, to illustrate the rate of improvement, which, according to Musk, no one predicted.

Over six to nine months, AlphaGo went from being unable to beat a relatively good Go player to beating both current and former world champions. AlphaGo Zero then beat AlphaGo 100-0.

"The rate of improvement is really dramatic, but we have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that's the single biggest existential crisis that we face, and the most pressing one," Musk says.

"The danger of AI is much greater than the danger of nuclear warheads, by a lot and nobody would suggest that we allow anyone to just build nuclear warheads if they want -- that would be insane. Mark my words: AI is far more dangerous than nukes, by far, so why do we have no regulatory oversight, this is insane."

Musk believes it’s crucial that a regulatory framework is put in place for the creation of digital super intelligence before further innovation and advances are made.

High-Tech Bridge CEO Ilia Kolochenko says the term AI is both amorphous and ubiquitous, with many people unwittingly applying it to a wide range of unrelated topics and technologies.

“We are still far from Strong AI, capable of replacing humans in many different areas without continuous and thus expensive training,” says Kolochenko.

“On the other hand, Machine Learning (ML) technologies have proven their efficiency and capacity to outperform humans in many precise, albeit limited, tasks such as Go or even chess playing.”

Kolochenko says financial institutions currently use ML algorithms to better score mortgage or leasing customers, insurers use ML to forecast customers' susceptibility to diseases, and law enforcement agencies have started using ML and big data to forecast crimes and better plan patrol routes – but he sees no danger to humanity from AI.
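The credit-scoring use case Kolochenko describes is, in practice, a narrow supervised-learning task. As a purely illustrative sketch (not drawn from the article; the feature names, synthetic data, and model choice here are all assumptions), a lender might train a simple classifier along these lines:

```python
# Illustrative sketch of ML-based applicant scoring, of the "precise, albeit limited"
# kind Kolochenko describes. All features and labels below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical applicant features: income (k$), debt-to-income ratio, credit history (years).
income = rng.normal(60, 20, n)
dti = rng.uniform(0.05, 0.6, n)
history = rng.uniform(0, 25, n)
X = np.column_stack([income, dti, history])

# Synthetic "defaulted" label: higher debt ratio and lower income raise the risk.
logit = -2.0 + 4.0 * dti - 0.02 * income - 0.03 * history
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit a plain logistic-regression scorer on the training split.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score held-out applicants: probability of default, which a lender might threshold.
scores = model.predict_proba(X_test)[:, 1]
print("Test AUC:", round(roc_auc_score(y_test, scores), 3))
```

The point of the sketch is simply that such systems optimise one narrow, well-defined objective – the kind of bounded task current ML handles well, rather than the general intelligence Musk warns about.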

“Nonetheless, there is no risk to humanity from AI, other than unemployment in particular sectors that can be fully automated by machines,” says Kolochenko.

“I, however, remain optimistic: humans will still be able to concentrate their efforts on something more creative and valuable for society.”
