Here the term “language-game” is meant to bring into prominence the fact that the speaking of language is part of an activity, or of a form of life.
Review the multiplicity of language-games in the following examples, and in others:
Giving orders, and obeying them—
Describing the appearance of an object, or giving its measurements—
Constructing an object from a description (a drawing)—
Reporting an event—
Speculating about an event—
Forming and testing a hypothesis—
Presenting the results of an experiment in tables and diagrams—
Making up a story; and reading it—
Making a joke; telling it—
Solving a problem in practical arithmetic—
Translating from one language into another—
Asking, thanking, cursing, greeting, praying.
–Ludwig Wittgenstein, Philosophical Investigations
Jeopardy! and John Henry
Consider another kind of language game. A quizmaster reads a clue. For example, “William Wilkinson’s ‘An Account of the Principalities of Wallachia and Moldavia’ inspired this author’s most famous novel.” You then try to formulate a question that would evoke that clue as an answer. For example, “Who is Bram Stoker?” You are competing against two other players. Whoever formulates a correct question first wins a certain amount of money. The person with the most money after a set number of questions wins the game. The clues are drawn from general knowledge—history, geography, science, culture—and often involve wordplay: rhymes and puns and whatnot. To be good at this game it helps to have an encyclopedic knowledge of trivia, quick recall, and a corny sense of humor.
In 2011 a computer program built by IBM research called Watson1 appeared on the game show Jeopardy! and defeated its human opponents, among them Ken Jennings, the reigning champion. The contest was structured like a regular all-human competition: Watson played by the same rules as the people. The clue above about Bram Stoker is the final one with which Watson secured its victory.
It is astounding that a computer defeated a person in this arena. Machines are good at certain things (storing and retrieving vast quantities of data, working without a pause for years at a time) and humans are good at other things (synthesizing, inferring, catching jokes). Jeopardy! would seem to favor the latter strengths. Furthermore, the standard assumption in the field of artificial intelligence is that the humans will always be smarter than the computers—our job as programmers is merely to make the computers smart enough to be useful. So Watson’s victory on Jeopardy! was a tremendous upset. It is not a case of the steam engine defeating John Henry. Rather it is as if, by some miracle, a mass-produced dining room set turned out to be of higher quality than one hand-built by a carpenter.
In his book Philosophical Investigations, published decades earlier, the philosopher Ludwig Wittgenstein claimed that speech is a kind of game people play. He talked about games to emphasize the fact that speech exists not just for the purpose of transmitting information, but also as an end in itself. (What is the purpose of playing a game of chess, after all, other than to play a game of chess?) Later in the same book, Wittgenstein tries to come up with a clearly articulated definition of the concept of a game, but after considering the varied activities that might fall under that heading (chess, athletics, jumprope, political contests, war) gives up. So in using the game metaphor he is also emphasizing how varied the act of speaking is. It is foolish to try to reduce language to some formula. Talking is simply one of the many things people do.
Wittgenstein knew nothing about computers. His notion of language games was a metaphor, but it was prescient in this instance because Watson literally played a language game and won. So we may want to keep him in the back of our minds in case there is other guidance he might offer.
A Russian Language Game
You can simulate what it’s like to be Watson right now. Type the phrase “William Wilkinson’s ‘An Account of the Principalities of Wallachia and Moldavia’ inspired this author’s most famous novel -IBM -Watson” into your favorite search engine.2 This brings up multiple links to articles about Bram Stoker and Dracula. It is easy for you to skim them and infer that the correct answer is “Bram Stoker”. (Sorry–“Who is Bram Stoker?”) So easy that it almost seems like cheating. But there’s a crucial difference between you and Watson: you understand English. Watson does not.
So in the interest of realism let’s play a different language game. The rules of this game are that I give you a clue in Russian and you have to return the correct answer in Russian. (I’ll permit you to skip the whole phrase-it-as-a-question business.) You are allowed to use a search engine, but you are not allowed to understand Russian. So now when you blindly perform the search “‘Рассказ о княжествах Валахии и Молдавии’ Уильяма Уилкинсона вдохновил самый известный роман этого автора” it brings back links to documents that are incomprehensible to you. What do you do next? How do you win?
Let’s make the rules of the game a little more forgiving. You are not allowed to understand Russian, but you are allowed to recognize Cyrillic characters, distinguish words, and tell when two phrases look similar or different. (Basically you are allowed to understand Russian as well as someone who doesn’t actually understand Russian.) If you carefully combed through the top hits returned by the search engine, you might notice the phrases Дракула and Брэма Стокера showing up repeatedly. Perhaps one of these is the answer.
In order to choose one term over the other it would be helpful to know the relationship between them. Knowing this offhand would be considered understanding Russian; however, if you did web searches on these two terms, you might discover that they often occur near each other in the same documents. Furthermore you might notice that these documents often also contain the terms автора and роман, which appear in the question as well. You wouldn’t know what these words meant either, but their presence still might pique your interest. If you had enough examples of similar clues and answers about authors and novels3, you might be able to recognize typical word patterns that would lead you to guess the correct answer, “Брэма Стокера”.
Брэма Стокера: “Bram Stoker”
In order to discern these patterns, you’d have to make careful tallies of Cyrillic word shapes across many thousands of documents and analyze them with sophisticated mathematical techniques capable of teasing out the subtle correlations between them. It would be much too tedious a job for a human being. You’d need a computer.
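To make the idea concrete, here is a toy sketch of that tallying. Everything in it is hypothetical: the four “documents” stand in for search-engine hits, and the score is a bare co-occurrence count rather than the sophisticated statistics a real system would use.

```python
from collections import Counter

# Toy corpus standing in for search-engine hits (hypothetical documents).
documents = [
    "дракула роман брэма стокера автора",
    "брэма стокера автора роман дракула вампир",
    "валахия молдавия княжество история",
    "роман автора дракула готический",
]

clue_terms = {"автора", "роман"}  # terms shared with the clue

# Count how often each other word appears in a document alongside
# at least one of the clue's terms.
cooccurrence = Counter()
for doc in documents:
    words = set(doc.split())
    if clue_terms & words:
        for w in words - clue_terms:
            cooccurrence[w] += 1

# The candidates that pattern most strongly with the clue's words
# float to the top, without our understanding a word of Russian.
print(cooccurrence.most_common(3))
```

Even on this tiny corpus, дракула and the two halves of Брэма Стокера end up with the highest tallies, which is the (crude) sense in which blind counting can point at an answer.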
At Play in the World
Watson plays something very similar to the Russian language game. It analyzes the clue it is given, extracting relevant terms that are then used to perform a query of a general knowledge database. Candidate entity names returned by the query are ranked by a machine learning model trained on a long history of Jeopardy! questions and answers, and the highest scoring one is proposed as the answer.4 There is also knowledge about the grammar of English and basic ontology baked into the program, but for the most part Watson is just recognizing patterns of words.
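As a rough illustration of that pipeline, and nothing more, here is how the extract-query-rank loop might look. The stopword list, the knowledge base, and the overlap score below are all invented for this sketch; Watson’s real ranker is a trained machine learning model over a far richer feature set.

```python
# Function words to drop when extracting query terms (invented list).
STOPWORDS = {"this", "the", "a", "an", "of", "and", "most", "s"}

def extract_terms(clue):
    """Pull candidate query terms out of a clue, dropping function words."""
    return [w for w in clue.lower().replace("'", " ").split()
            if w not in STOPWORDS and w.isalpha()]

# Toy knowledge base: entity -> terms associated with it (made up here;
# in reality this would be a large database of general knowledge).
KNOWLEDGE_BASE = {
    "Bram Stoker": {"wilkinson", "wallachia", "moldavia", "author",
                    "novel", "dracula"},
    "Mary Shelley": {"author", "novel", "frankenstein", "geneva"},
    "Vlad III": {"wallachia", "prince", "impaler"},
}

def rank_candidates(clue):
    """Score each candidate entity by overlap with the clue's terms.
    A real system would rank with a model trained on past Q&A pairs."""
    terms = set(extract_terms(clue))
    scores = {entity: len(terms & assoc)
              for entity, assoc in KNOWLEDGE_BASE.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

clue = ("William Wilkinson's 'An Account of the Principalities of "
        "Wallachia and Moldavia' inspired this author's most famous novel")
best, score = rank_candidates(clue)[0]
print(f"Who is {best}?")  # prints: Who is Bram Stoker?
```

The point of the sketch is the shape of the computation, not its content: terms go out, candidates come back, and the one whose word patterns best match the clue is couched as a question.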
This naturally raises the question, are we, the humans, playing the same game? Is our understanding of English, Russian or what-have-you ultimately just unconscious, statistically driven pattern recognition? This is a contemporary rephrasing of the question of how to distinguish between appropriately conditioned behavior and true comprehension, which has a long philosophical history outside of software. Officially, engineers like myself are agnostic on this issue.5 We are only concerned with getting the appropriate responses and don’t care what underlies them. As a practical matter though, there remains a vast gulf between the variety of games a human and a computer program—even a world-class program like Watson—are able to play.
Consider another world-class computer program—the Google web search engine. To find the answer to the Bram Stoker clue, I typed it verbatim into Google, which instantly provided relevant results. It turns out that this is often the case: Google is an excellent way to cheat at Jeopardy! However, the Google search engine could not have gone on TV and beaten Ken Jennings because it returns links to documents and relies on a human being to make sense of them. That is not sufficient for Jeopardy! There you must return the name of a specific entity couched in the form of a question, and as the Russian language example above demonstrates, going that final mile is harder than it looks. Google can’t win at Jeopardy! because it’s not playing by the rules of that game.
In fact the rules of this particular game impose all manner of non-obvious constraints. Both clues and answers must be concise. “What effect has the character Dracula had on film and literature?” is a valid question to ask, but no good for Jeopardy! because you could fill books answering it. Answers must not be a matter of opinion (“This gothic tale about a bloodsucking count is the greatest novel of the 19th century”) and clues must not contain incorrect presuppositions (“This male English author wrote the novel Frankenstein”). The convention of answering in the form “What is–?” “Who is–?” restricts answers to being well-defined entities, while the quiz show format disallows all manner of discourse. The designers of Watson knew their system would never have to write a poem, crack a joke, comfort a grieving widow, or maunder on about the weather. The set of utterances you don’t have to handle is as vast as the set of ones you do.
Navigating open but still constrained domains is where the field of artificial intelligence stands at the moment. We know how to play particular games. Given a task—find a webpage, recommend a movie, transcribe a spoken utterance, win at Jeopardy!–and a large set of exemplars of how human beings have successfully performed that task before, we can find a way to train a machine to imitate them. Usually not surpass6, but at least emulate to some reasonable degree. It’s not easy—machine learning is still as much an art as a science—but for the foreseeable future there is a clear way forward that lies in making incremental progress by winning incrementally different games.
Computer programmers have an instinct towards generalization, and so naturally wonder if there is an approach that could subsume the current piecemeal state of the art. You’d want there to be a single human game—call it reason, rationality, intelligence—that we could learn to play just once, and have particular tasks fall out as special cases. This was the dream of both an earlier generation of artificial intelligence researchers and an earlier incarnation of Wittgenstein, who in the Tractatus Logico-Philosophicus proposed a sort of gnomic form of predicate calculus as the definitive solution to all outstanding philosophical problems. Wittgenstein later renounced this position, and his description of the multiplicity of language games in the quote above reflects his later view that a description of human experience will never be able to fully abstract beyond the particulars. Currently, engineers are okay with attacking sets of particular problems and undecided as to whether these may someday be unified. Wittgenstein, however, warns that any attempt at this unification is a fool’s errand. There is no master template from which all reasoning derives. Instead it’s just games, games, games all the way down.
1 Usually I find these disclaimers superfluous, but since my job title is “Watson NLP Developer” I suppose I should state for the record that the opinions expressed on this blog are entirely my own and do not reflect those of my employer.
2 The “-IBM” and “-Watson” are necessary for the sake of fairness, because all the web pages that contain this phrase verbatim discuss Watson’s victory on Jeopardy!
3 Of course you wouldn’t know they were about authors and novels. You’d just know that certain Russian words tended to pattern with certain other Russian words in certain ways.
4 The system is described in detail in the May-June 2012 edition of the IBM Journal of Research and Development.
5 Though personally I have to say that it sure doesn’t feel that way to me.
6 It is interesting that the other great publicity coup IBM has scored in the past twenty years was Deep Blue’s defeat of the chess grandmaster Garry Kasparov. I wonder whether, had Wittgenstein lived later in the 20th century, he would have defined a game as “something a computer could eventually defeat a human at”.