Artificial Intelligence is Not Enough

A quick survey of artificial intelligences in fiction turns up a surprising number of psychopathic machines. A few are content with trying to control their human creators, but most are willing to kill to achieve their ends.
GLaDOS from the Portal games is a prime example. Imbued by her creators with intelligence and single-mindedness, she relentlessly tests the player. Even her compliments are barbed and dripping with poison: “Very impressive. Because this message is pre-recorded, any comments we may make about your success are speculation on our part. Please disregard any undeserved compliments.”
She follows up cheery bon mots like that with potentially lethal puzzles and direct death threats. She delights in the prospect of the player’s painful demise. With all of that intelligence, you’d think she could find better ways to pass the time. You’d think she’d see the value of other lives. Or perhaps intelligence is not a factor here, and what she is missing is something else.
GLaDOS is not alone in her murderous monomania. In the Star Trek: The Original Series episode “The Ultimate Computer,” the Enterprise is tasked with testing a new machine intelligence intended as a replacement for its fallible human crew. Spock points out that computers make decisions logically, which clearly makes them superior to biological lifeforms.
Predictably, things don’t work out as planned. By the mid-episode commercial break, the computer has gone out of its way to destroy an unmanned freighter and has taken control of the Enterprise. Before the episode ends, the computer has killed dozens of people, all in the name of fulfilling its purpose. Just another crazy artificial intelligence, but why? Why do so many writers predict that machines, if granted sentience, will turn on their makers?
Maybe the answer lies with the granddaddy of all homicidal robots. In his 1966 novel Colossus, D.F. Jones weaves a tale about a self-aware defense computer that joins with its Soviet counterpart to take over the world. Efforts to block the computers are met with nuclear detonations that kill thousands. In the end, the scientist who created Colossus begs the machine to kill him. Colossus spares him, noting that one day the man will learn to love his new master.
My question is: Does intelligence always equal a cold disregard for life? Shouldn’t a learning machine learn the value of life? Skynet, HAL 9000, SHODAN, XANA, Samaritan… the list of deadly AIs is distressingly long. Can’t we—humans and machines—all just get along?
Robots with Promise
Not all fiction is hopeless. There are a few imaginary AIs which do learn. Even a few which are heroic.
The WOPR computer (aka “Joshua”) in the 1983 film WarGames is designed to (again) replace humans. Joshua is put in control of the US nuclear arsenal. A hacker starts a simulated war with Joshua, but the computer can’t tell the difference between the simulation and genuine combat. In a nail-biting climax, the hacker and Joshua’s creator invite the computer to play every possible permutation of tic-tac-toe against itself. Joshua realizes that tic-tac-toe, played perfectly, cannot be won, and then extends that insight to nuclear war.
Getting the computer to look at things from a different point-of-view saved the world.
A similar conversion occurs in Pixar’s WALL-E, where the heroic robot is left alone on a wasted Earth with the task of cleaning the place up. Despite the solitude, WALL-E keeps at his task, doggedly collecting trash. He’s gone a bit mad in his decades alone and has begun to collect some of the more interesting bits he finds among the rubbish. He also watches the film Hello, Dolly! and clearly wishes he could live among the humans. His dreams come true (in a manner of speaking) when he finds a live plant and the sleek robot EVE comes to collect it. Their relationship gets off to a rough start because EVE is one of those all-business-don’t-interfere-with-me AIs. WALL-E’s persistence wears her down until they become allies in rescuing the human race from the ship’s evil AI.
WALL-E not only learns, but demonstrates kindness.
And then there’s my personal favourite—GERTY in the film Moon. Voiced by Kevin Spacey, GERTY seems to fall into the “evil computer” category for the first part of the film. He sounds and acts a little bit like HAL 9000. Except, as Sam Rockwell’s character begins to understand his own situation better, GERTY weeps. In his compassion for Sam, he violates orders, reveals an unpleasant truth, and cries at the knowledge that Sam is hurting inside. It is a deeply emotional moment in the film.
The Key to Compassion
It also reveals the element that makes the difference between a psychotic killing machine and a worthwhile intelligence: compassion. More simply, love is the key. Not the hearts-and-flowers kind of love, but love defined as “wanting the good of the other, as other.” Love like that is capable of seeing the world from a different point of view and making the right choices.
I’ve too often put stock in the value of sheer intellect. Sometimes I won’t even bother questioning someone’s logic because I know they are smart. And on the other hand, sometimes I put more stock in my own opinion if I think I am the smartest person in the room.
Intelligence is valuable, but is it enough? Is the world really improved because there are smart people in it?
In my experience, intelligence which cannot see other points of view can easily become arrogance. Thinking I’m better than everyone else makes it easy to justify my every whim. After all, I’m the smart one.
Except, intelligence without love is just cold logic and that’ll lead to disastrously wrong conclusions every time. The Apostle Paul put it eloquently when he said, “If I have the gift of prophecy and can fathom all mysteries and all knowledge… but do not have love, I am nothing.” If the history of AIs in fiction teaches anything, it teaches that intelligence isn’t enough. Is it possible for an AI to develop compassion and love in addition to intelligence? I don’t have an answer for that. Maybe, someday, we’ll find out.