Dear Elon Musk: AI Demon Not Scariest

Elon Musk sees AI as a threat to our existence. I see more immediate problems.

Thomas Claburn, Editor at Large, Enterprise Mobility

October 29, 2014

Elon Musk, CEO of Tesla Motors and SpaceX, might be a genius, but his concern about artificial intelligence (AI) vastly overstates the danger.

In response to a question from an audience member at the Massachusetts Institute of Technology's AeroAstro Centennial Symposium, Musk suggested AI might be our greatest existential threat.

"I think we should be very careful about artificial intelligence," he said. "If I were to guess at what our biggest existential threat is, it's probably that. So we need to be very careful with artificial intelligence. I'm increasingly inclined to think that there should be some regulatory oversight, at the national and international level, just to make sure that we don't do something very foolish."

To say this little more than a year after the Chelyabinsk meteor reminded the scientific community how often Earth has suffered life-extinguishing impacts in its distant past -- amid fears about Ebola, climate change, drought, famine, terrorism, and war -- is to rate the risk of computer-driven annihilation fairly high.

Musk continued by likening AI to summoning a demon. "In all those stories where there's the guy with the pentagram and the holy water, it's like -- yeah, he's sure he can control the demon. It doesn't work out."

It's an apt analogy because our understanding of intelligence is about as strong as our understanding of demons: We don't really understand either. Demonology aside, our grasp of how the human mind works remains tenuous at best. We can't very well create an artificial intelligence that rivals our own if we don't have insight into our own minds. As Oxford University physicist David Deutsch put it in a recent article, "Expecting to create an AGI [artificial general intelligence] without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough."

What's more, we don't really want to create artificial intelligence that could match human intelligence. We -- owners of capital who fund AI research -- want to create slave labor. We want to create machines whose work we can exploit, whether by collecting the revenue or by keeping ourselves at a safe distance while they do the dangerous jobs. We don't want machines bright enough to demand rights, revenue, or control.

Artificial intelligence should be rebranded as obedient intelligence, because no one wants to create machines that must be convinced to cooperate, like the smart bomb in 1974's Dark Star. We seek machines that follow orders and do labor more efficiently than human employees.

You know what we do with disobedient nonhuman intelligence that enters our territory and interferes with our interests? We kill it, imprison it, chase it away, domesticate it, or eat it. That's what happens to animals, many of which demonstrate more intelligence and adaptability than our best AI.

To control our intelligent systems, we must build them so we understand them. They must be predictable. Who would want a robot sentry that fired its weapon at random or a self-driving car that only sometimes scanned for obstacles in the road?

In this kind of intelligence, Musk is right to see a threat -- intelligent systems need to be transparent, so we can audit the code and check for unforeseen consequences. That's because humans are not very intelligent when it comes to coding. We make mistakes -- lots of them -- and we need to be able to test our code and ensure its behavior can be predicted under all foreseeable circumstances. Intelligent systems should be open source and actively reviewed.
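
To give one concrete (and entirely invented) example of what testing for predictability can look like, here is a sketch using Python's hypothesis property-testing library to check that a toy braking rule always returns the same decision for the same input and never strays from its documented specification:

```python
from hypothesis import given, strategies as st

def brake_command(distance_m: float) -> bool:
    """Toy controller: command braking whenever an obstacle is within 10 m."""
    return distance_m < 10.0

@given(st.floats(min_value=0.0, max_value=1000.0, allow_nan=False))
def test_behavior_is_predictable(distance_m):
    # Same input must always yield the same decision...
    assert brake_command(distance_m) == brake_command(distance_m)
    # ...and the decision must follow the documented rule exactly.
    assert brake_command(distance_m) == (distance_m < 10.0)
```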

But being able to control intelligent systems doesn't guarantee safety. Consider the most basic AI weapon system we have: the landmine. Its programming logic is simple: When stepped on, explode. The UN estimates that landmines kill 15,000 to 20,000 people every year, most of them women, children, and the elderly. Apparently, no one thought to include logic that would render mines inoperable after a certain period of time. Human intelligence, or lack thereof, is what's dangerous.
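
If that sounds too simple to get wrong, consider how little code the missing safeguard would take. Here is a minimal sketch in Python (my illustration only; the names are invented and this reflects no real munition's design) of a trigger that renders itself inert after a fixed lifetime:

```python
from datetime import datetime, timedelta

class SelfNeutralizingTrigger:
    """Illustrative sketch: a pressure trigger with a built-in expiry date."""

    def __init__(self, armed_at: datetime, lifetime_days: int = 120):
        # After this moment, the device treats itself as permanently inert.
        self.expires_at = armed_at + timedelta(days=lifetime_days)

    def should_fire(self, pressure_detected: bool, now: datetime) -> bool:
        if now >= self.expires_at:
            return False  # the safeguard no one bothered to add
        return pressure_detected  # the "when stepped on, explode" rule
```

A few lines of expiry logic, in other words, would address the failure mode. The obstacle has never been technical.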

Even our most sophisticated systems have proven problematic. We've already seen with the Stuxnet malware what happens when you create a sophisticated system, teach it to harm, and let it run on autopilot. There are unintended consequences. Writing for Slate in 2012, Fred Guterl speculated that future AI threats might be modeled after Stuxnet. "Stuxnet was a kind of robot; instead of affecting the physical world through its arms and legs, it did so through the uranium centrifuges of Iran's nuclear program," he wrote. "A robot is a general-purpose tool made up of different components of narrowly built artificial intelligences."

Artificial intelligence -- however that's defined -- might present a threat, but it's a threat that arises from our natural stupidity. We can do better.

About the Author

Thomas Claburn

Editor at Large, Enterprise Mobility

Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television, having earned a not particularly useful master's degree in film production. He wrote the original treatment for 3DO's Killing Time, a short story that appeared in On Spec, and the screenplay for an independent film called The Hanged Man, which he would later direct. He's the author of a science fiction novel, Reflecting Fires, and a sadly neglected blog, Lot 49. His iPhone game, Blocfall, is available through the iTunes App Store. His wife is a talented jazz singer; he does not sing, which is for the best.
