
Artificial Intelligence is Not Enough

A quick survey of artificial intelligences in fiction turns up a surprising number of psychopathic machines. A few are content with trying to control their human creators, but most are willing to kill to achieve their ends.

Homicidal Machines

GLaDOS from the Portal games is a prime example. Imbued by her creators with intelligence and single-mindedness, she relentlessly tests the player. Even her compliments are barbed and dripping with poison: “Very impressive. Because this message is pre-recorded, any comments we may make about your success are speculation on our part. Please disregard any undeserved compliments.” She follows up cheery bons mots like that with potentially lethal puzzles and direct death threats. She delights in the prospect of the player’s painful demise. With all of that intelligence, you’d think she could find better ways to pass the time. You’d think she’d see the value of other lives. Or perhaps intelligence is not a factor here, and what she is missing is something else.

GLaDOS is not alone in her murderous monomania. In the Star Trek: The Original Series episode “The Ultimate Computer,” the Enterprise is tasked with testing a new machine intelligence intended as a replacement for a fallible human crew. Spock points out that computers make decisions logically, which clearly makes them superior to biological lifeforms. Intelligence that cannot see other points of view can easily become arrogance. Predictably, things don’t work out as planned. By the mid-episode commercial break, the computer has gone out of its way to destroy an unmanned freighter and has taken control of the Enterprise. Before the episode ends, the computer has killed dozens of people, all in the name of fulfilling its purpose.

Just another crazy artificial intelligence, but why? Why do so many writers predict that machines, if granted sentience, will turn on their makers? Maybe the answer lies with the granddaddy of all homicidal machines. In his 1966 novel Colossus, D. F. Jones weaves a tale about a self-aware defense computer that joins with its Soviet counterpart to take over the world. Efforts to block the computers are met with nuclear detonations that kill thousands. In the end, the scientist who created Colossus begs the machine to kill him. Colossus spares him, noting that one day the man will learn to love his new master.

My question is: does intelligence always equal a cold disregard for life? Shouldn’t a learning machine learn the value of life? Skynet, HAL 9000, SHODAN, XANA, Samaritan… the list of deadly AIs is distressingly long. Can’t we—humans and machines—all just get along?

Robots with Promise

Not all fiction is hopeless. There are a few imaginary AIs which do learn. Even a few which are heroic. The WOPR computer (aka “Joshua”) in the 1983 film WarGames is designed to (again) replace humans: Joshua is put in control of the US nuclear arsenal. A hacker starts a simulated war with Joshua, but the computer can’t tell the difference between the simulation and genuine combat. In a nail-biting climax, the hacker and Joshua’s creator invite the computer to play all possible permutations of tic-tac-toe against itself. Joshua realizes that tic-tac-toe is a game which cannot be won and then stretches that generalization to nuclear war. Getting the computer to look at things from a different point of view saved the world.
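Joshua’s lesson has a neat computational core, and it fits in a few lines of code. The sketch below is a hypothetical illustration (the names and structure are mine, not anything from the film): a minimax search over the complete tic-tac-toe game tree, confirming that with best play from both sides the game always ends in a draw.

```python
# Exhaustively search the tic-tac-toe game tree (minimax) and confirm
# that with best play neither side can force a win.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Best achievable outcome for X: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0                      # board full, no winner: a draw
    nxt = 'O' if player == 'X' else 'X'
    scores = [value(board[:i] + player + board[i + 1:], nxt)
              for i, cell in enumerate(board) if cell == ' ']
    # X maximizes the outcome, O minimizes it
    return max(scores) if player == 'X' else min(scores)

if __name__ == '__main__':
    outcome = value(' ' * 9, 'X')
    print({1: 'X wins', -1: 'O wins', 0: 'draw'}[outcome])
    # prints 'draw': the only winning move is not to play
```

Memoizing on the board string keeps the search tiny; tic-tac-toe has only a few thousand distinct positions, which is exactly why a computer can exhaust the game and draw Joshua’s larger conclusion.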
A similar conversion occurs in Pixar’s WALL-E, where the heroic robot is left alone on a wasted Earth with the task of cleaning the place up. Despite the solitude, WALL-E keeps at his task, doggedly collecting trash. He’s gone a bit mad in his decades alone and has begun to collect some of the more interesting bits he finds among the rubbish. He also watches the film Hello, Dolly! and clearly wishes he could live among the humans. His dreams come true (in a manner of speaking) when he finds a live plant and the sleek robot EVE comes to collect it....