Tesla's Elon Musk and Stephen Hawking are among those calling for regulation to curb the development of lethal autonomous weapons systems and to counter the threat of cyberterrorism

Scare stories have always held a particular sexiness for the media, and AI is no exception. Essentially, they take two forms: one, that a weaponised AI – particularly if harnessed by unsavoury regimes or terrorists – will unleash a wave of death and destruction; and two, that the technology itself will rapidly evolve into a “super-intelligence” far beyond the scope of humans to understand, let alone control. Ultimately, it might even decide that we are just a messy nuisance to be disposed of.

It would be tempting to dismiss both scenarios as the product of a febrile media, if it weren’t for the fact that some of the people who might be expected to be AI’s biggest cheerleaders are among the Cassandras. Take Tesla’s Elon Musk and Stephen Hawking. They joined other scientists in an open letter warning of the dangers of so-called LAWS (lethal autonomous weapons systems), widely seen as an early application of AI. Russian manufacturer Kalashnikov has already announced it is developing a series of AI “combat modules” (killer robots, basically), and China and the US are working on their own LAWS programmes too.

“Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group,” wrote Musk et al. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.” Brave talk, but the history of previous attempts to ban weapons of mass destruction hardly fills one with confidence that such a sanction could be agreed, let alone enforced.

And if the killer bots don’t get you, the super-intelligence might. Musk characterises the quest to develop it as “summoning the demon”, while Hawking warns bluntly that “the development of [AI] could spell the end of the human race … It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Although if that happens, we shouldn’t take it personally. “The real risk with AI,” Hawking goes on, “isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.”

AI-enabled cyberterrorism is regarded as a major threat (credit: Sergey Nivens/Shutterstock Inc)

Less apocalyptic, but no less alarming, scenarios abound: cyberterrorists hacking AI-enabled driverless trucks to send them crashing into crowds (and with no human at the wheel, there would be no one to stop with a well-aimed shot). It need not even be a high-tech hack: researchers in the US demonstrated that applying small stickers to a stop sign could fool a driverless car into interpreting it as a 45mph speed-limit sign.

With such storm clouds on the horizon, it’s small wonder that many of those closest to AI have been at the forefront of calls to bubble-wrap it in standards and regulations (see Taming the AI tiger).

But not every tech supremo has signed up to the nightmare. Facebook’s Mark Zuckerberg declares himself “really optimistic” about the potential of AI to make the world a better place, and says that those who constantly conjure up visions of doomsday are “really negative, and in some ways, pretty irresponsible.”

Peet van Biljon, head of innovation at McKinsey, also thinks the apocalypse has been overstated: “We’re really not on the cusp of super-intelligent killer robots taking over the world,” he says, adding that there is no inevitability about ceding control to the machine. “We are making these things. They are our creatures. We are the authors of whatever happens.” He suggests substituting “extended” for “artificial” intelligence, to emphasise the point that this is about augmenting human capabilities, not replacing them. Despite its apparent sophistication, says Van Biljon, AI is still at a relatively primitive stage: “We’ve already developed the brute [computing] power of a mouse brain, but we cannot simulate a mouse – we cannot make AI that’s as smart as a mouse.”

Those who predict disaster are getting ahead of reality, he suggests. “Everybody watched Star Trek, and I like it too, but people can extrapolate too much.”

Main image credit: Pavel Chagochkin/Shutterstock Inc.

 

This is part of our in-depth briefing on AI. See also:
 

Can we turn AI into a force for good?

How AI and robotics can transform CSR

Comment: 'We can't leave Silicon Valley to solve AI's ethical issues'

Machine learning: how firms from Danone to Sodexo are integrating AI

First, do no harm: regulators and tech industry scramble to tame the AI tiger

AI explainer: why machines have an edge

'With AI polluters will have nowhere to hide'

Rise of the sewbots: Asian factory workers feel chill winds of automation

'Our problem with automation is a labour shortage, not surplus'

