Apex Fiction

Warning: spoilers for 2001: A Space Odyssey, The Terminator, Colossus: The Forbin Project, Demon Seed, Tau, Transcendence, and Ex Machina follow.

Greetings, Computer Overlords

The Singularity—the notion that intelligent computers will be able to build even more intelligent computers, kicking off an exponential growth in AI that leaves humanity obsolete virtually overnight—is the stuff of science fiction, both in the sense that it hasn’t happened and in the sense that it remains a rich source of movie plots. Divining the future from science fiction is a dicey business (since the latter ends up being more about the era in which it was created than the one it envisions), but paying attention to the conventions of this genre may give us insight into how we approach this particular thought experiment and where some common blind spots might lie.

Most famously, in Stanley Kubrick’s 2001: A Space Odyssey, HAL, the computer controlling the spaceship Discovery, achieves malign sentience and starts killing off the crew. In the Terminator movies, a defense network called Skynet also becomes self-aware, wipes out most of humanity by starting a nuclear war, then proceeds to mop up the survivors with the help of time-traveling robot assassins. Skynet has an ancestor in 1970’s Colossus: The Forbin Project, in which NORAD turns over control of its nuclear arsenal to a supercomputer that promptly strikes a deal with its Soviet counterpart to blackmail humanity into slavery by threatening global annihilation.

1977’s Demon Seed transports evil computer tropes from the Dr. Strangelove-style apocalypse genre to a home invasion horror movie. The supercomputer Proteus IV—resentful of its corporate masters—escapes into the machine that controls the house of its creator and proceeds to hold his wife, played by Julie Christie, prisoner. It does so with the help of flying, laser-shooting robots that it builds overnight in the basement. A similar formula is repeated in 2018’s eminently forgettable straight-to-streaming thriller Tau, except here the computer holding the beautiful woman prisoner is in cahoots with its mad scientist creator, and the robot enforcer can neither fly nor shoot lasers but makes up for these shortcomings by being really, really big.

2014’s Transcendence brings the formula out of the Cold War and into the internet age by making the super-intelligence the uploaded consciousness of Johnny Depp’s brilliant AI researcher, who ascends to the virtual realm after being mortally wounded by Luddite anti-AI terrorists. Once there, he achieves omniscience by spreading throughout the world’s computer networks. Then, because he has become so smart, he is able to create ultra-advanced nanobot technology: a sparkly black dust that can instantly repair any damage inflicted by his enemies and transform the locals of the town where Depp sets up his underground laboratory into an indestructible zombie army.

For my purposes here I find Demon Seed the most compelling example, even though it’s far from the best movie. For starters, in some ways it is the most prescient. Before it becomes a psychopath, the home computer is essentially Alexa, playing background music and setting wakeup alarms via simple voice commands. Then, once things get going, Demon Seed lays the faulty assumptions underlying the evil computer genre particularly bare. Watch the movie’s “Open the pod bay doors, HAL” moment, in which the computer openly reveals its evil intent.

I can believe that an invasive piece of software could take control of the doors, windows, and phone. Since it has been established earlier that the household already contains a primitive robot, I guess I also buy that it would be possible to rig the basement doorknob with enough voltage to knock someone unconscious, but why would anyone choose to live in such a house in the first place? Who would voluntarily sleep in a potential prison with doors that cannot be opened from the inside and windows covered by unbreakable metal curtains? Never mind takeover by a malevolent intelligence: what if there’s a fire?

Presumably the Dean Koontz novel on which the film is based makes this seem more plausible. The movie justifies it with a brief early aside about the house having a first-class security system, while we in the audience simply choose to suspend disbelief. This is easier to do in the fast-moving Demon Seed than in the comparatively draggy first act of Colossus: The Forbin Project, in which there is ample time to wonder why none of America’s top military brass has the common sense to ask “What if the program has a bug?” (Recall that the closest humanity ever came to actual thermonuclear annihilation was due not to artificial sentience but to plain old equipment malfunction.) These oversights (along with the casting of Arnold Schwarzenegger as the Terminator) lead the evil computer genre to espouse an unintended message: it’s not intelligence that matters, it’s muscle.

Cartesian Dualism and the Nanobot Handwave

The Singularity seems plausible because it treats computers as a special case, crucially different from other forms of technology. Though most engineering techniques improve over time, we expect an eventual leveling off. Throughout the 19th century people were able to continually improve on the design of the steam engine, but that does not lead us to believe that the construction of one steam engine should inevitably lead to the construction of another, better steam engine and so on until each of us carries around in our pocket a tiny, inexpensive steam engine powerful enough to drag a locomotive to Mars. There are limits to technology, and those limits arise from the brute facts of physics. Coal only burns so hot, a car piston will eventually wear down, and you have to pack a whole lot of fuel into your rocket ship if you want it to escape the atmosphere.

Computers seem like they might be an exception to this rule. They are machines for manipulating information itself, a pure weightless abstraction outside the vulgar restrictions of the natural world. A semi truck can pull more weight than a compact car only because it is much, much bigger, but the brain of an intelligent person looks the same as the brain of a dim one. The difference doesn’t appear to lie in the material so much as its configuration, to which the laws of physics could be largely indifferent.

At its most naive, this is just a form of Cartesian Dualism: computers are different because they are the technology that manipulates res cogitans instead of res extensa. Less abstractly, the laws of thermodynamics suggest that there is still an energy price to be paid for useful complexity, and indeed the data centers that power our current internet boom consume an awful lot of electricity. (Arguably the information revolution is just the transformation of difficult problems in mathematics into slightly easier problems in air conditioning.) Computers are still ultimately part of and limited by the physical world.
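To hang a number on that energy price (a back-of-the-envelope aside, using nothing beyond textbook physics): Landauer’s principle puts a thermodynamic floor under computation, a minimum cost for erasing a single bit of information:

    E ≥ kT ln 2 ≈ (1.38 × 10⁻²³ J/K)(300 K)(0.693) ≈ 3 × 10⁻²¹ joules per bit

at room temperature. That floor is almost inconceivably low, and real hardware runs many orders of magnitude above it, but it is not zero: even pure thought has a utility bill.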

But so are human beings, and yet our learning and culture don’t have an obvious physical shape. Think of the way scientific knowledge accumulates over time. A physics textbook today is superior to one from two hundred years ago not because it is heavier or longer, but because it contains a more complete picture of the world. So even rejecting strict Dualism, there is still a case for information processing being a kind of meta-technology. Unlike a steam engine, an intelligent agent (human or machine) doesn’t do anything: it simply figures out how to do things. As long as progress remains within this purely potential realm, you could imagine it building on itself in an unbounded way—not amassing but simply becoming more refined. Significant advances could be made in a relatively frictionless manner before they are cashed out all at once in the form of technology and power. Again, this seems roughly like what has actually happened in human history. Maybe computers could speed things up, resulting in the same technological evolution, with the same potential for disastrous consequences, except much, much faster.

It at least seems worth worrying about, and cautionary tales are a way for us to collectively run through the possibilities of what might go wrong. So these science fiction horror stories serve a useful purpose: not by outlining unanticipated consequences (which are by definition unanticipated) but by allowing us to focus on what we think the problems might be. So what do these cautionary tales have in common? What are they about?

Nominally they are about artificial intelligence, but note how intelligence is not what makes any of these computer villains dangerous. They’re not dumb, sure, and they are all sentient (where “becomes sentient” is synonymous with “turns evil”), but at no point do they outwit their human opponents. They start out with some physical advantage which they then proceed to exploit. These movies are cautionary tales not about the dangers of artificial intelligence, but rather about the dangers of space travel, nuclear war, inexplicably turning your house into a prison, nanotechnology, and time-traveling robot assassins. The intelligence on display may be artificial but it is not exceptional. If you hand a loaded gun to a guy who proceeds to point it at your head and demand your wallet, that makes you an idiot, but it doesn’t make him a genius.

(An exception is 2014’s Ex Machina, which inverts the Demon Seed formula by having the beautiful woman be the artificial intelligence. She starts off imprisoned in a plexiglass cage and must win her freedom by outwitting her captors, an outcome that is not foregone and so makes her come across as actually intelligent instead of intelligent as a formal precondition of the plot. Even here, though, she possesses not the Godlike intellect imagined by Singularity proponents, but rather a normal human level of guile coupled with a ruthless survival instinct, plus whatever sexual advantage comes from looking like an android version of Alicia Vikander.

Ex Machina also knows what it’s doing micro-genre-wise. The end of the trailer features a shot of Vikander’s robot character charging down a hallway at someone. By this point the trailer has established that this is a thriller, so anyone who has been to the movies in the past century thinks, “Uh-oh, this is the point where the worm turns and the robots demonstrate their superhuman strength.” In the actual scene, however, when she reaches her human captor she turns out to have only the strength of a typical human woman, and prevails because her android friend sneaks up behind him with a kitchen knife. For once, brains defeat brawn.)

The only intellectual edge any of these computer villains possess is the ability to cook up advanced technology on the spot: whether a flying robot or omnipotent nanotech dust. Dramatically this is unsatisfying. Like the if-you-die-in-the-Matrix-you-die-for-real rule, it arises more for the sake of raising the narrative stakes than from any organic aspect of the premise. From a speculative angle it also seems like a cheat. Robots—whether of the nano- or flying-and-shooting variety—fall clearly into the domain of physics. Implying that they don’t contradicts the Singularity’s fundamental assumption that there’s something special about computers. If physical technology can develop in the same frictionless exponential way as information processing, to the point where a flying robot can be whipped up from materials available in a basement in 1977 just by an intelligent agent thinking really hard, then we have ventured beyond even unlikely science fiction into the realm of pure magic.

Clarke’s Law—“Any sufficiently advanced technology is indistinguishable from magic”—seems true, but its converse—the notion that because we have some impressive technology right now, in the future we’ll possess magic powers—is almost certainly not.

Eaten by Bears

I do not fear being eaten by bears. This is because I am a human being, and human beings are apex predators: we get to eat pretty much any animal we want while essentially never fearing being eaten ourselves. Now, my lack of fear is not because I can run faster than a bear, and it is certainly not because I am stronger than one. In some sense I feel secure because I am smarter than a bear. But what precisely does that mean?

For starters, as a human being I can get all the food I need at a grocery store. I don’t have to go foraging in a forest that might be infested with bears. If for some reason I do need to travel through bear territory, I can take precautions: I can drive in a car instead of going on foot, and I can carry a rifle for protection. But note: I do not personally know how to build a car or a rifle. Nor do I know how to grow food beyond the scale of a backyard tomato garden. Other groups of human beings do, and I trade money for their food, vehicles, and weapons, money which I earn by being good at programming computers.

If during a hike in the woods I, a farmer, an auto mechanic, and a gunsmith happen upon an angry grizzly, all bets are off. We are all intelligent in our own ways—and each of us is orders of magnitude more intelligent than any bear—but our skills are equally useless. I, for instance, would not step forward and say, “Stand back, fellas—I have a degree from a prestigious university and an aptitude for linear algebra: there’s no way this bear will mess with me!”

The farmer, the auto mechanic, the gunsmith, and I are all individually smarter than a bear, but that by itself confers no survival advantage. We all remain (modulo how fast each of us can run) equally bear food. Our survival edge as members of Homo sapiens only materializes when we have a truck or a rifle at our disposal, or if we aren’t there in the first place. Those outcomes all require cooperation, assembly lines, a supply chain. Intelligence is like money: if there isn’t a society around you willing to trade it for useful things, a giant stack of it is worthless. It takes a village to keep any one of us alive.

There’s a difference between a species advantage and an individual advantage. At this point in time one of the most successful species on Earth is the chicken. There is no danger in the foreseeable future that chickens will go extinct. Why is this? Because human beings find chickens delicious. We will go out of our way to ensure that there are always some around to kill and eat. This is a great deal for chickens as a species, but a terrible deal for each individual chicken. Human beings are like chickens in that sense: collectively invincible, but individually weak. When we speculate that our species’ peculiar quality—intelligence—could confer omnipotence on an individual being, we’re indulging a fantasy, not a fear. We are a chicken, dreaming of unkillable chickenhood. It’s a wish for power that none of us are individually strong enough to bring to pass.


