Robot Villains: What They Teach Us
This year's National Robotics Week, April 4-12, occurred as warnings against "killer robots" reached a fever pitch. Robots, however, are but tools -- tools that do their masters' bidding, as they have been programmed to do. Here are three examples of the lessons we can learn from fictional robot villains to prevent the apocalyptic future that Neo-Luddites fear.
![](https://eu-images.contentstack.com/v3/assets/blt69509c9116440be8/blt557998ecbcf7677e/64cb5826574c368eeae91c36/HV-1.jpg?width=700&auto=webp&quality=80&disable=upscale)
Beware the killer robots!
So goes the warning from figures across academia, nonprofit advocacy, and the technology industry. Elon Musk, Stephen Hawking, and Andrew McAfee have all spoken out, in some form or another, against the ongoing evolution of artificial intelligence.
As AI becomes increasingly intelligent (indeed, one program "passed" the Turing Test last year), computers and robots are becoming increasingly autonomous.
This is frightening to some.
Advocates released a report this past week urging the United Nations to move towards a wholesale international ban "on the development, production and use of fully autonomous weapons."
Titled "Mind the Gap -- The Lack of Accountability for Killer Robots," the report further advocates that criminal culpability and/or civil liability be assigned to those who would create or program any autonomous device or other AI that can kill. It is jointly published by Human Rights Watch, a New York-based nonprofit advocacy group, and Harvard Law School’s International Human Rights Clinic. Human Rights Watch is notable for serving as one of several non-governmental organizations on the Steering Committee of The Campaign to Stop Killer Robots.
For National Robotics Week, we at InformationWeek decided that the concern about killer robots merits a closer look. Earlier this week we took some time to rebut anti-robotics fearmongers' most relied-upon arguments against AI. Now we hope to put a lens to the notion of "killer robots" and to talk more precisely about what should really concern society -- leaving the scary rhetoric aside.
The best way to do this, we determined, was to examine examples of villainous killer robots in science fiction. On the following pages, we take a look (in reverse chronological order) at robot villains from the minds of three of the world's most prolific science fiction writers. We highlight what exactly made these fictional robots so villainous and deadly. In so doing, we offer lessons to prevent such robotic lethality from ever seeing the light of day in real life.
What do you think about killer robots? Should they be banned? What lessons do you draw from our examples? Did we miss one of your favorite robot villains? Let us know your thoughts in the comments section below.
The third installment of Douglas Adams's Hitchhiker's Guide series, Life, the Universe and Everything, describes the cheerful planet of Krikkit -- whose mild-mannered inhabitants want little more than to sing, dance, and destroy the rest of the universe.
The Krikkiters are manipulated by Hactar, a supercomputer that envelops their planet in darkness as a dust cloud. At Hactar's prompting, they send a massive battle fleet and a squadron of lethal robots out into the universe to utterly obliterate it.
Why does Hactar want to destroy the universe? Because it was commanded to do so by its programmers about 20 billion years ago.
In the early days of the Galaxy, the Silastic Armorfiends of Striterax -- a nasty, destructive, warmongering people -- had ordered Hactar to build for them an "Ultimate Weapon." They wanted to quickly and efficaciously prevail in their wars with the Strenuous Garfighters of Stug and the Strangulous Stilettans of Jajazikstak.
"What do you mean," asked Hactar, "by Ultimate?"
To which the Silastic Armorfiends of Striterax said, "Read a bloody dictionary[.]"
The Silastic Armorfiends of Striterax pulverized Hactar after the supercomputer dared to second-guess them by introducing a debilitating flaw into its truly ultimate, universe-destroying weapon. When Hactar later reassembled itself, it set out to complete the task it was instructed to do -- properly this time.
Computers and robots will only do what we program them to do. The Silastic Armorfiends of Striterax, blinded by hawkish aggression, failed to appreciate the natural consequence of their directive: that they too would be destroyed by an Ultimate Weapon. A computer will have only the discretion of its programmers. Any code we program into artificially intelligent robots should be carefully triple-checked for unintended, undesirable consequences.
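The Armorfiends' mistake can be sketched in a few lines of toy Python (every name here is illustrative, not from the book): a machine carries out its literal directive, and anything its programmers forget to specify is simply not part of its judgment.

```python
# Toy sketch: a machine follows its literal instructions, not its
# makers' unstated intent. All names are hypothetical illustrations.

def ultimate_weapon(targets, exclusions=frozenset()):
    """Destroy everything in `targets` except what is explicitly excluded.

    The machine exercises no discretion of its own: anything the
    programmers neglect to exclude is fair game.
    """
    return {t for t in targets if t not in exclusions}

universe = {"Stug", "Jajazikstak", "Striterax", "Krikkit"}

# The literal order -- "Ultimate" means everything, makers included.
destroyed = ultimate_weapon(universe)
assert "Striterax" in destroyed  # the weapon turns on its creators

# The constraint their survival depended on, but nobody wrote down.
destroyed_safely = ultimate_weapon(universe, exclusions={"Striterax"})
assert "Striterax" not in destroyed_safely
```

The point of the sketch is that the safe behavior exists only when it is explicitly encoded; the machine never infers it.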
"The City" is a short story in Ray Bradbury's 1951 anthology The Illustrated Man. The story tells of astronauts from Earth who have landed on the outskirts of an empty city on a faraway planet. The city, however, is no ordinary city. It is a massive seeing, hearing, smelling, tasting, feeling, speaking robot. After Earthlings had destroyed an entire alien race -- the Taollans -- with war and disease more than 20,000 years prior, the surviving Taollans built the city to lie in wait for the Earthlings' return. When they do return, the city slays them, dissects them, replaces their organs with robot parts, and sends them back whence they came -- to drop a biological weapon on their home planet.
The name of the city is that of its purpose: Revenge.
The Taollans -- left weakened by war, slavery, and infection, dying from a leprosy-like disease -- had but one goal left in their dark, embittered hearts after Earthlings left their planet: vengeance. They knew that one day their Earthling oppressors would return, and so the robotic city was designed, waiting millennia until its programmed goal could come to fruition.
The city, despite its malicious programming, does not blindly lash out. When two separate groups of explorers arrive before the Earthlings' return, they are carefully observed and compared against what the city knows Earthlings to be. Upon finding that they are not of Earth, the city does not harm them. It has no reason to do so.
Robots don't have true hearts or minds, but they will ultimately be guided by what lies in the hearts and minds of men.
The very term "robot" comes to us by way of a 1920 Czech play by Karel Čapek, Rossum's Universal Robots. In the play, a company has created "robots" from a nascent strain of organic matter it has discovered. It has designed them to perform all manner of work -- drastically bringing down the cost of goods. Harry Domin, the general manager of the company, dreams of a world without poverty or blight, in which robots perform all the labor while humans partake of all the food, clothing, and other goods that they could ever want.
Things come to a head when one of the robot designers -- at the request of Domin's wife (herself a robots' rights activist) -- makes psychological adjustments to the robots to make them more like humans. The result is a revolution in which the robots are hell-bent on enslaving and killing all humanity.
In a few weeks, Hollywood gives us another robot run amok in Avengers: Age of Ultron. Ultron is a "mistake" made by a genius creator, Tony Stark. No doubt the robot will exhibit more of our human flaws than any flaws of robots. It is, after all, our flaws that fiction details each time a robot tries to destroy us. Our creations are only as good as their creators. To expect more isn't fair to robots, or people.
The good news is that we get to heed the warnings of our own fiction before we make a huge mistake with robots. As we celebrate them for National Robotics Week, let's celebrate not only our successes, but also the good sense and caution of the people who have built them so far. Fiction, and our robots, have a lot to teach us about being human.