Super-intelligent robots deserve some concern, but really we should be paying more attention to the people and processes involved in building our machines.

Thomas Claburn, Editor at Large, Enterprise Mobility

July 3, 2015

(Image: CSA-Printstock/iStockphoto)

At the end of June, a group of computer scientists gathered at the Information Technology and Innovation Foundation in Washington, D.C., to debate whether super-intelligent computers are really a threat to humanity.

The discussion followed reports, a few days earlier, that two self-driving cars had almost collided, according to Reuters. Near-misses on a road aren't normally news, but when a Google self-driving car comes close to a Delphi self-driving car and prompts it to change course, that gets coverage.

To hear Google tell it, the two automated cars performed as they should have. "The headline here is that two self-driving cars did what they were supposed to do in an ordinary everyday driving scenario," a Google spokesperson told Ars Technica.

Ostensibly benevolent artificial intelligence, in rudimentary form, is already here, but we don't trust it. Two cars driven by AI navigated around each other without incident -- that gets characterized as a near-miss. No wonder technical luminaries who muse about the future worry that ongoing advances in AI have the potential to threaten humanity. Bill Gates, Stephen Hawking, and Elon Musk have suggested as much.

The panelists at the ITIF event more or less agreed that it could take anywhere from 5 to 150 years before the emergence of super-human intelligence. But really, no one knows. Humans have a bad track record for predicting such things.

But before our machines achieve brilliance, we will need half a dozen technological breakthroughs comparable to the development of nuclear weapons, according to Stuart Russell, an AI professor at UC Berkeley.

Russell took issue with the construction of the question, "Are super-intelligent computers really a threat to humanity?"

AI, said Russell, is "not like [the weather]. We choose what it's going to be. So whether or not AI is a threat to the human race depends on whether or not we make it a threat to the human race."

Problem solved. Computer researchers can simply follow Google's example: Don't be evil.

However, Russell didn't sound convinced that we could simply do the right thing. "At the moment, there is not nearly enough work on making sure that [AI] isn't a threat to the human race," he said.

Ronald Arkin, a computing professor at Georgia Tech, suggested humanity has more immediate concerns. "I'm glad people are worrying about super-intelligence, don't get me wrong," he said. "But there are many, many threats on the path to super-intelligence."

Arkin pointed to lethal autonomous weapon systems, an ongoing challenge confronted by military planners, policymakers, and people around the world.

What's more, robots without much intelligence can be deadly, as an unfortunate Volkswagen contractor in Germany discovered the day before the ITIF talk. The 21-year-old technician was installing an industrial robot with a co-worker when the robot struck and crushed him, according to The Financial Times. He was working inside a safety cage meant to keep people at a safe distance from the machine.

An investigation into the accident has begun. But the cause isn't likely to be malevolent machine intelligence. Human error would be a safer bet. And that's really something to worry about.
