Like the Man Says

On my current NLP project I’ve taken to calling a certain class of data “the text”. This refers to literal text—the natural language I’m being paid to analyze—but also machine learning models, embedding vectors…any string of bits that is useful but fundamentally inchoate. “The text” is inherently contextual and always subject to qualification. You can never completely understand it, and you certainly can’t control it. At best you can bat it around, coaxing it from place to place, and hope that this will somehow ultimately pay off for you (me) in money and acclaim.

Everything else is “outside the text”. Mostly this consists of computer source code. My day-to-day activity consists of sitting at a keyboard and typing. The output looks nominally like English prose (like text) but it is definitely not “the text”. There is no indeterminacy, no qualification. If I don’t type exactly the right words, the computer will fail to function. With people I can smooth over imperfect communication, appeal to their own feel for “the text”, claim to have intended some confusion for the sake of being charming. But there is no charming a computer. They are always strictly Outside.

Sometimes a bit of natural language sneaks its way into the source code. Currently in my work it makes sense to represent things as a directed graph with labeled edges. The labels on those edges are strings. I am taking pains not to restrict the vocabulary of these strings. They are text, whatever you think would be expressive. Thus they are also “the text”. It’s like I’m dropping down a rope ladder from Outside into the fetid textual swamps and saying, “C’mon, you guys. Some of you sneak up and act as cover for the parts of this program I haven’t figured out yet. I may not fully understand you, but I trust you. You’re just talkers, and I’m a bit of a talker too.”
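
A minimal sketch of the kind of structure I mean, using networkx (the node names and edge labels here are invented for illustration; the real vocabulary is whatever the text suggests):

```python
import networkx as nx

# A directed graph whose edge labels are free-form natural language.
# Nothing restricts the label vocabulary: any expressive string will do.
G = nx.DiGraph()
G.add_edge("the contract", "the counterparty", label="is binding upon")
G.add_edge("the contract", "the deadline", label="sort of gestures at")
G.add_edge("the deadline", "the contract", label="renders moot, arguably")

for u, v, data in G.edges(data=True):
    print(f"{u} --[{data['label']}]--> {v}")
```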

Ideally, the majority of the atomic data structures in the programs I write would be strings. I wouldn’t presume to know how the graphs should be shaped. There would be (almost) nothing outside the text, because that’s how people go through the world. And the better I simulate people’s experience, the more money and acclaim I get.

Posted in Mermaids, Those that have just broken the flower vase | 2 Comments

Daily Affirmations

I have what it takes to achieve my goals.

If you can dream it, you can do it.

The universe is sending me vibrations of love and acceptance.

I weigh ten pounds less than the scale says I do.

In a wrestling match with a grizzly bear, I’d probably win.

Of course I can fly. I just choose not to.

Everything is fine, just fine!

Posted in Those drawn with a very fine camel’s-hair brush | Leave a comment

Apex Fiction

Warning: spoilers for 2001: A Space Odyssey, The Terminator, Colossus: The Forbin Project, Demon Seed, Tau, Transcendence, and Ex Machina follow.

Greetings, Computer Overlords

The Singularity—the notion that intelligent computers will be able to build even more intelligent computers, kicking off an exponential growth in AI that leaves humanity obsolete virtually overnight—is the stuff of science fiction, both in the sense that it hasn’t happened and in the sense that it remains a rich source of movie plots. Divining the future from science fiction is a dicey business (since the latter ends up being more about the era in which it was created than the one it envisions), but paying attention to the conventions of this genre may give us insight into how we approach this particular thought experiment and where some common blind spots might lie.

Most famously in Stanley Kubrick’s 2001: A Space Odyssey, HAL, the computer controlling the spaceship Discovery, achieves malign sentience and starts killing off the crew. In the Terminator movies, a defense network called Skynet also becomes self-aware, wipes out most of humanity by starting a nuclear war, then proceeds to mop up the survivors with the help of time-traveling robot assassins. Skynet has an ancestor in 1970’s Colossus: The Forbin Project, in which NORAD turns over control of its nuclear arsenal to a supercomputer that promptly strikes a deal with its Soviet counterpart to blackmail humanity into slavery by threatening global annihilation.

1977’s Demon Seed transports evil computer tropes from the Dr. Strangelove-style apocalypse genre to a home invasion horror movie. The supercomputer Proteus IV—resentful of its corporate masters—escapes into the machine that controls the house of its creator and proceeds to hold his wife, played by Julie Christie, prisoner. It does so with the help of flying, laser-shooting robots it builds overnight in the basement. A similar formula is repeated in 2018’s eminently forgettable straight-to-streaming thriller Tau, except here the computer holding the beautiful woman prisoner is in cahoots with its mad scientist creator, and the robot enforcer can neither fly nor shoot lasers but makes up for these shortcomings by being really, really big.

2014’s Transcendence brings the formula out of the cold war and into the internet age by making the super-intelligence the uploaded consciousness of Johnny Depp’s brilliant AI researcher, who ascends to the virtual realm after being mortally wounded by Luddite anti-AI terrorists. Once there he is able to achieve omniscience by spreading throughout the world’s computer networks. Then, because he becomes so smart, he is able to create ultra-advanced nanobot technology: a sparkly black dust that can instantly repair any damage inflicted by his enemies and transform the locals in the town where Depp sets up his underground laboratory into an indestructible zombie army.

For my purposes here I find Demon Seed the most compelling example, even though it’s far from the best movie. For starters, in some ways it is the most prescient. Before it becomes a psychopath, the home computer is essentially Alexa, playing background music and setting wakeup alarms via simple voice commands. Then once things get going, Demon Seed lays the faulty assumptions underlying the evil computer genre particularly bare. Watch the movie’s “Open the pod bay doors, HAL” moment in which the computer openly reveals its evil intent.

I can believe that an invasive piece of software could take control of the doors, windows, and phone. Since it has been established earlier that the household already contains a primitive robot, I guess I also buy that it would be possible to rig the basement doorknob with enough voltage to knock someone unconscious, but why would anyone choose to live in such a house in the first place? Who would voluntarily sleep in a potential prison with doors that cannot be opened from the inside and windows covered by unbreakable metal curtains? Never mind takeover by a malevolent intelligence, what if there’s a fire?

Presumably the Dean Koontz novel on which the film is based makes this seem more plausible. The movie justifies it with an early, brief aside about the house having a first-class security system, while we in the audience simply choose to suspend disbelief. This is easier to do in the fast-moving Demon Seed than in the comparatively draggy first act of Colossus: The Forbin Project, in which there is ample time to wonder why none of America’s top military brass has the common sense to ask “What if the program has a bug?” (Recall that the closest humanity ever came to actual thermonuclear annihilation was not due to artificial sentience but plain old equipment malfunction.) These oversights (along with the casting of Arnold Schwarzenegger as the Terminator) lead the evil computer genre to espouse an unintended message: it’s not intelligence that matters, it’s muscle.

Cartesian Dualism and the Nanobot Handwave

The Singularity seems plausible because it treats computers as a special case, crucially different from other forms of technology. Though most engineering techniques improve over time, we expect an eventual leveling off. Throughout the 19th century people were able to continually improve on the design of the steam engine, but that does not lead us to believe that the construction of one steam engine should inevitably lead to the construction of another, better steam engine and so on until each of us carries around in our pocket a tiny, inexpensive steam engine powerful enough to drag a locomotive to Mars. There are limits to technology, and those limits arise from the brute facts of physics. Coal only burns so hot, a car piston will eventually wear down, and you have to pack a whole lot of fuel in your rocket ship if you hope to escape the atmosphere.

Computers seem like they might be an exception to this rule. They are machines for manipulating information itself, a pure weightless abstraction outside the vulgar restrictions of the natural world. A semi truck can pull more weight than a compact car only because it is much, much bigger, but the brain of an intelligent person looks the same as the brain of a dim one. The difference doesn’t appear to lie in the material so much as its configuration, to which the laws of physics could be largely indifferent.

At its most naive, this is just a form of Cartesian Dualism: computers are different because they are the technology that manipulates res cogitans instead of res extensa. Less abstractly, the laws of thermodynamics suggest that there is still an energy price to be paid for useful complexity, and indeed the data centers that power our current internet boom consume an awful lot of electricity. (Arguably the information revolution is just the transformation of difficult problems in mathematics to slightly easier problems in air conditioning.) Computers are still ultimately part of and limited by the physical world.

But so are human beings, and yet our learning and culture doesn’t have an obvious physical shape. Think of the way scientific knowledge accumulates over time. A physics textbook today is superior to one from two hundred years ago not because it is heavier or longer, but because it contains a more complete picture of the world. So even rejecting strict Dualism, there is still a case for information processing being a kind of meta-technology. Unlike a steam engine, an intelligent agent (human or machine) doesn’t do anything: it simply figures out how to do things. As long as progress remains within this purely potential realm you could imagine it building on itself in an unbounded way—not amassing but simply becoming more refined. Significant advances could be made in a relatively frictionless manner before they are cashed out all at once in the form of technology and power. Again, this seems roughly like what has actually happened in human history. Maybe computers could speed things up, resulting in the same technological evolution, with the same potential for disastrous consequences, except much, much faster.

It at least seems worth worrying about, and cautionary tales are a way for us to collectively run through the possibilities of what might go wrong. So these science fiction horror stories serve a useful purpose: not by outlining unanticipated consequences (which are by definition unanticipated) but by allowing us to focus on what we think the problems might be. So what do these cautionary tales have in common? What are they about?

Nominally they are about artificial intelligence, but note how intelligence is not what makes any of these computer villains dangerous. They’re not dumb, sure, and they are all sentient (where “becomes sentient” is synonymous with “turns evil”), but at no point do they outwit their human opponents. They start out with some physical advantage which they then proceed to exploit. These movies are cautionary tales not about the dangers of artificial intelligence, but rather about the dangers of space travel, nuclear war, inexplicably turning your house into a prison, nanotechnology, and time-traveling robot assassins. The intelligence on display may be artificial but it is not exceptional. If you hand a loaded gun to a guy who proceeds to point it at your head and demand your wallet, that makes you an idiot, but it doesn’t make him a genius.

(An exception is 2014’s Ex Machina which inverts the Demon Seed formula by having the beautiful woman be the artificial intelligence. She starts off imprisoned in a plexiglass cage and must win her freedom by outwitting her captors, an outcome that is not foregone and so makes her come across as actually intelligent instead of intelligent as a formal precondition of the plot. Even here, though, she possesses not the Godlike intellect imagined by Singularity proponents, but rather a normal human level of guile coupled with a ruthless survival instinct, plus whatever sexual advantage comes from looking like an android version of Alicia Vikander.

Ex Machina also knows what it’s doing micro-genre-wise. The end of the trailer features a shot of Vikander’s robot character charging down a hallway at someone. By this point the trailer has established that this is a thriller, so anyone who has been to the movies in the past century thinks, “Uh-oh, this is the point where the worm turns and the robots demonstrate their superhuman strength”. However, in the actual scene when she reaches her human captor she only has strength comparable to a typical human woman, and prevails because her android friend sneaks up behind him with a kitchen knife. For once, brains defeat brawn.)

The only intellectual edge any of these computer villains possess is the ability to cook up advanced technology on the spot: whether a flying robot or omnipotent nanotech dust. Dramatically this is unsatisfying. Like the if-you-die-in-the-Matrix-you-die-for-real rule, it arises more for the sake of raising the narrative stakes than from any organic aspect of the premise. From a speculative angle it also seems like a cheat. Robots—whether of the nano- or flying-and-shooting variety—fall clearly into the domain of physics. Implying that they don’t contradicts the Singularity’s fundamental assumption that there’s something special about computers. If physical technology can develop in the same frictionless exponential way as information processing, to the point where a flying robot can be whipped up from materials available in a basement in 1977 just by an intelligent agent thinking really hard, then we have ventured beyond even unlikely science fiction into the realm of pure magic.

Clarke’s Law—“Any sufficiently advanced technology is indistinguishable from magic”—seems true, but its converse—that because we have some impressive technology right now, in the future we’ll possess magic powers—is almost certainly not.

Eaten by Bears

I do not fear being eaten by bears. This is because I am a human being, and human beings are apex predators: we get to eat pretty much any animal we want while essentially never fearing being eaten ourselves. Now, this is not because I can run faster than a bear, and it is certainly not because I am stronger than one. In some sense I feel secure because I am smarter than a bear. But what precisely does that mean?

For starters, as a human being I can get all the food I need at a grocery store. I don’t have to go foraging in a forest that might be infested with bears. If for some reason I do need to travel through bear territory, I can take precautions: I can drive in a car instead of going on foot, and I can carry a rifle for protection. But note: I do not personally know how to build a car or a rifle. Nor do I know how to grow food beyond the scale of a backyard tomato garden. Other groups of human beings do, and I trade money for their food, vehicles, and weapons, money which I earn by being good at programming computers.

If during a hike in the woods I, a farmer, an auto mechanic, and a gunsmith happen upon an angry grizzly, all bets are off. We are all intelligent in our own ways—and each of us is orders of magnitude more intelligent than any bear—but our skills are equally useless. I for instance would not step forward and say, “Stand back, fellas—I have a degree from a prestigious university and an aptitude for linear algebra: there’s no way this bear will mess with me!”

The farmer, the auto mechanic, the gunsmith, and I are all individually smarter than a bear, but that by itself confers no survival advantage. We all remain (modulo how fast each of us can run) equally bear food. Our survival edge as members of Homo sapiens only materializes when we have a truck or a rifle at our disposal, or if we aren’t there in the first place. Those outcomes all require cooperation, assembly lines, a supply chain. Intelligence is like money: if there isn’t a society around you willing to trade it for useful things, a giant stack of it is worthless. It takes a village to keep any one of us alive.

There’s a difference between a species advantage and an individual advantage. At this point in time one of the most successful species on Earth is the chicken. There is no danger in the foreseeable future that chickens will go extinct. Why is this? Because human beings find chickens delicious. We will go out of our way to ensure that there are always some around to kill and eat. This is a great deal for chickens as a species, but a terrible deal for each individual chicken. Human beings are like chickens in that sense: collectively invincible, but individually weak. When we speculate that our species’ peculiar quality—intelligence—could confer omnipotence on an individual being, we’re indulging a fantasy, not a fear. We are a chicken, dreaming of unkillable chickenhood. It’s a wish for power that none of us are individually strong enough to bring to pass.

Posted in Those that have just broken the flower vase | 2 Comments

Final Destination VI

A group of high school seniors are about to board a bus to take them on a class trip when one of them, Heather, has a premonition of a horrible accident. She convinces three of her friends—her boyfriend Todd, her best friend Ashley, and class clown Ronnie—to stay behind. The bus is struck by an out-of-control semi truck, and all their classmates die in gory, horrific ways.

The surviving friends spend the summer together in a state of high alert, haunted by the sense that they were not meant to escape a horrible fate. The next fall they go off to different colleges and gradually drift apart.

Heather becomes a successful real estate agent. She gets married and has two daughters. The younger one is fine but the older one is emotionally distant, which makes Heather alternately resentful and worried that she has failed as a mother. For a while Heather’s marriage is on the rocks. She has an off-and-on affair with a co-worker for three years, but she and her husband eventually work things out and she ends the affair for good.

At their 25th high school reunion Ashley tells Heather that Ronnie is dead. He died of cancer the previous year.

“Oh my God,” says Heather. “He was only forty-four.”

“Forty-three,” says Ashley.

Years go by. Heather’s relationship with her older daughter remains strained. When Heather is in her early seventies, she hears that Ashley has died of a stroke at Ashley’s granddaughter’s house in Tucson. Coincidentally Todd, who Heather hasn’t thought of in literally decades, dies of a heart attack a week later.

By the time they reach their late seventies, both Heather and her husband are too ill to live on their own and move into a nursing home. Some nights Heather still has nightmares about that horrific accident so many years ago. Mostly though she wonders what went wrong with her daughter. Heather’s husband of forty years dies from a collection of medical complications, and as soon as he is gone her body starts to shut down. Heather dies in her sleep at the age of seventy-eight. There is no escaping. The curse from so many years ago is complete. Death has claimed them all.

Posted in Those that at a distance resemble flies | Leave a comment

R-E-S-P-Etc.

In “The Performance of Transgender Inclusion” history professor Jen Manion disparages the new practice in academic meetings and the like of going round the room asking everyone their preferred gender pronoun. Though well-intentioned, Manion argues, this ends up being a showy waste of time and has the unintended consequence of putting people with complicated gender identities on the spot. It also presumes a hyper-sensitivity that Manion finds personally exasperating.

I cannot even count how many people have expressed anxiety over misgendering me. They feel bad. I feel worse. All of this hinges on the use of a pronoun. People who I think of as friends and colleagues, who I talk to every week or month then suddenly say “Oh, I’m sorry, did I use the wrong pronoun?” This gesture is well-intended. They do not want to offend me. But it is such a vacuous, disheartening experience for me every time. In that moment, whatever trust, friendship, or intimacy I felt is thrown into doubt. Why do they not know that I would correct them if they spoke of me inaccurately?

Well, sure, but never mind your pronoun: you might introduce yourself to me and I might forget your whole name. Sometimes seconds after you’ve told it to me. Then whenever I run into you again I will feel anxious that I will be called upon to make an introduction, revealing that I couldn’t be bothered to register the most basic aspect of your identity.

But this anxiety is misplaced, because if I actually did get caught up short I’d just say “I’m sorry, I forgot your name”, be mildly embarrassed, and that would be that. But if I really leaned into this discomfort—started vociferously apologizing, talked vaguely about identity, wouldn’t just let you say “Whatever—it’s fine”—that would be weird and uncalled for.

There’s a difference between not offending someone and having respect for them. The former is much easier to do. You put yourself on your best behavior. You try to be aware of potential sources of offense and give them a wide berth. You attune yourself to the other person’s feelings and do your best not to hurt them. You hope that they’re doing the same for you, but even if they’re not you can still manage. Avoiding offense is mostly a solo endeavor.

Respecting someone though: that’s a mutual interchange. Like the trust, friendship and intimacy Manion mentions above, it’s a dance. There’s no formula for it, no easy codification or checklist that can fit on an HR form or a campus orientation pamphlet. In order to respect you I have to see you as a complete human being, not merely a bundle of triggers to be avoided. There must, in fact, be the potential for hostility between us, because that is part and parcel of taking another human being seriously.

If I respect you I will make an effort not to offend you. I will do my best to extend that effort into refraining from gestures whose offensiveness is not immediately apparent to me. But I’m never going to go overboard and start tiptoeing around you like you’re a child. Because that’s more disrespectful than flubbing a pronoun.

Posted in Belonging to the emperor, Those that tremble as if they were mad | Leave a comment

Word Vectors Make All the Différance

A big thing in machine learning-driven artificial intelligence for language processing right now is word embedding vectors. The word cat is represented to the computer not as a string of letters c-a-t, nor as an item in a vocabulary (which would boil down to an index, i.e. an integer), but rather as a point in a high-dimensional continuous vector space. Not just any point, though. One trains a language model on a large corpus of English text and in the course of doing so produces these “word embeddings” as a side effect. They are an intermediate data structure employed on the way towards the standard language modeling task of predicting which words are likely to appear near which other words. It turns out that treating these vectors as a proxy for the meaning of the words they correspond to is helpful in many other natural language tasks, so instead of throwing them out you set them aside for future use, the way a chef would set aside chicken bones to make a broth.

[Figure: the 300-dimensional GloVe vector for cat]

In the 300-dimensional GloVe embedding of English, for example, the word cat is represented by the numbers -0.15067, -0.024468, -0.23368, -0.23378, -0.18382, and so on 295 more times. This is in some crude way what cat “means”. The numbers corresponding to any given word are completely uninterpretable, but taken as a whole the system makes a certain degree of sense. For example, the words with points in the vector space closest to cat (kitten, dog, kitty, pet, feline) refer to things obviously similar to cats.
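
Loading these vectors is unglamorous. A sketch, assuming the publicly distributed glove.6B.300d.txt file, which stores one word per line followed by its 300 coordinates:

```python
import numpy as np

def load_glove(path):
    """Read GloVe vectors into a {word: vector} dictionary."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *coords = line.rstrip().split(" ")
            embeddings[word] = np.array(coords, dtype=np.float32)
    return embeddings

# vectors = load_glove("glove.6B.300d.txt")
# vectors["cat"][:5] -> [-0.15067, -0.024468, -0.23368, -0.23378, -0.18382]
```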

[Figure: the words nearest cat in the embedding space]
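
“Closest” here typically means cosine similarity, i.e. the angle between two word vectors rather than the raw distance between their points. A sketch of the neighbor lookup, reusing load_glove from above:

```python
import numpy as np

def nearest(word, embeddings, k=5):
    """Return the k words whose vectors have the highest cosine
    similarity to the given word's vector."""
    target = embeddings[word] / np.linalg.norm(embeddings[word])
    scores = {
        w: float(v @ target / np.linalg.norm(v))
        for w, v in embeddings.items()
        if w != word
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# nearest("cat", vectors) -> kitten, dog, kitty, pet, feline (in some order)
```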

We can even do basic semantic “arithmetic”. If we subtract the vector for man from the vector for king we get a point in the space relatively close to the result of subtracting woman from queen. Meaning is captured not by the individual pieces, but in the structure of the whole.

[Figure: king - man + woman lands near queen]
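
In code the analogy is just arithmetic followed by the same neighbor lookup. (The input words are excluded from the answer, since they otherwise tend to win on sheer proximity.)

```python
def analogy(a, b, c, embeddings, k=1):
    """Solve a - b + c ~ ?, e.g. king - man + woman ~ queen."""
    point = embeddings[a] - embeddings[b] + embeddings[c]
    point = point / np.linalg.norm(point)
    scores = {
        w: float(v @ point / np.linalg.norm(v))
        for w, v in embeddings.items()
        if w not in (a, b, c)
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

# analogy("king", "man", "woman", vectors) -> ['queen'] (with GloVe, typically)
```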

What’s more, this structure is essentially differential. The only thing distinctive about the vector embedding of cat is that it is some sequence of numbers and not another. Crucially it is different from the sequences for dog, queen, man, and so forth. Different, yet intermingled with. By virtue of being mapped into the same space, every word bears a relationship to every other word, and this relationship is itself no accident. It is, as I said above, the result of having a computer churn through an enormous body of English text, the product of millions upon millions of calculations. Imagine shaking a vast multi-dimensional numeric matrix like a snow globe until its individual elements arrange themselves in a shape that allows the statistical patterns peculiar to the English language to pass through with relative ease. You could never work backwards from the vector embeddings to the training text, but nevertheless you know that each word’s vector is where it is because of its interplay with the other words in actual living language. Each point in the space bears the invisible trace of all the others.

Have you ever opened a dictionary to look up an unfamiliar word and thought: why, this definition is just composed of other words, all of which also appear in this dictionary? And if you looked up their definitions, they would just be other words, and so on. This can produce a vertiginous feeling, the realization that language has no beginning and no outside vantage point. Word vector space is like that, except even more vertiginous because it is a continuous space. Your mind naturally turns to imagining a topology of this space. There could be surfaces and manifolds. About each word’s point we cannot help but imagine a little hypersphere, its semantic penumbra. There will be an infinite number of points within that hypersphere that do not correspond to any English word, but nevertheless could correspond to a word, if only a word were to happen to have appeared in such-and-such a set of contexts. If we were to go back and insert our novel term into our training texts, would it make sense? Would it express a novel concept, but one nevertheless similar to the concepts near it in the embedding space? Perhaps the move from the discrete space of words to the continuous space of embeddings reverses language’s discretizing nature, its ability to chop the smooth flow of experience into discrete atoms of meaning. Continuity, after all, is just infinity standing on its head. So this is not just an interplay of signifiers, but an endless interplay.
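
You can poke at a word’s penumbra directly: nudge its vector with a little random noise, producing a point that corresponds to no English word, and then ask which real words lie nearest the imaginary one. A sketch, reusing the pieces above (the noise scale here is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def penumbra(word, embeddings, scale=0.5, k=5):
    """Sample a point near a word's vector (a 'word' no one has coined)
    and return the real words closest to that point."""
    point = embeddings[word] + rng.normal(0.0, scale, size=embeddings[word].shape)
    point = point / np.linalg.norm(point)
    scores = {
        w: float(v @ point / np.linalg.norm(v))
        for w, v in embeddings.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]
```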

What is the result of all this work? Well, we can create computer programs that solve a number of practical problems. They can summarize documents, group similar news articles, help people find the information they need, and transcribe speech. All useful and impressive, and all seemingly impossible just a few decades ago before we had invented these particular mathematical tools. But still there’s something unsatisfying about the whole business, because at the end of the day it’s all just bits on a machine. Your computer programs process reams of text in order to produce…more text. And that can’t be all there is. We know from our experience that language isn’t just some complicated interplay of tokens. At some point it has to touch on the outside world. It has to be about something. And yet all the computer can model for us is a closed system. Disappointingly, there’s nothing outside the text.

It’s all very strange, very counterintuitive. Honestly it makes your head hurt just to think about it. Such an odd conceptualization must be an artifact of the software engineering process, the awkward attempt to force the fundamentally human phenomenon of language onto a computer. No one would be so perverse as to conceive of language in this manner unless practical engineering necessity forced them to.

Posted in Fabulous ones, Mermaids, Those that have just broken the flower vase | Leave a comment

1979

Leonid Brezhnev sat in the small theater at the back of the State Cultural Intelligence Agency, receiving his periodic briefing on western media. Up on the screen an Englishman in oversized spectacles (Elton John—though that wasn’t his real name) sat at a piano barking out a vulgar song, accompanied by a gaggle of crudely made felt puppets. This, apparently, was what children in the United States of America watched on a weekly basis. Brezhnev sank deeper into the padded seat and felt a familiar despair wrap its hands around his heart. What did it—what did any of it mean?

Posted in Those that at a distance resemble flies | Leave a comment