Monday, March 14, 2011

Language and computing

The "On Language" feature in the New York Times ended on 25 February 2011 with a column entitled "The Future Tense." I wouldn't bother mentioning it except for columnist Ben Zimmer's offhand remarks about "Watson," the IBM supercomputer that competed on the game show "Jeopardy!"
Watson’s trouncing of the “Jeopardy!” champs Brad Rutter and Ken Jennings doesn’t mean that language processing has advanced to the point of language comprehension. The best-guess techniques Watson used never approached any deep understanding of the semantic content in the “Jeopardy!” clues. Instead, Watson crunched terabytes of data to figure out statistically likely responses to clues based in part on which words appear most often with other words in the texts it has stored.
I'm no expert in natural-language processing or artificial intelligence, but even I know that Zimmer is talking out of something other than his mouth.

We simply don't know how the human brain "comprehends." Odds are, though, that comprehension is rooted in processing massive amounts of data. If so, "deep understanding of semantic content" amounts to our brains matching patterns in new input against patterns stored from earlier input. In other words, our brains probably do something rather similar to what Watson does, albeit via massively parallel processing we do not yet fully understand.
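For what it's worth, here is a toy sketch of the kind of co-occurrence statistics Zimmer describes. It is entirely my own invention, not IBM's actual pipeline, and the tiny made-up "corpus" stands in for Watson's terabytes of text. Candidate answers are ranked purely by how often they appear alongside the clue's words, with no semantic understanding anywhere in sight.

from collections import Counter
from itertools import combinations

# Hypothetical miniature corpus (a stand-in for terabytes of stored text).
corpus = [
    "george washington was the first president of the united states",
    "washington crossed the delaware river in 1776",
    "abraham lincoln delivered the gettysburg address",
    "lincoln was president during the civil war",
]

# Count how often each pair of words appears together in a sentence.
cooccur = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1

def score(candidate, clue):
    """Sum co-occurrence counts between the candidate and each clue word."""
    total = 0
    for w in clue.lower().split():
        pair = tuple(sorted((candidate.lower(), w)))
        total += cooccur[pair]
    return total

clue = "first president of the united states"
for candidate in ["washington", "lincoln"]:
    print(candidate, score(candidate, clue))
# "washington" outscores "lincoln" purely on co-occurrence counts.

Crude as it is, the example makes the point: statistically likely responses can fall out of nothing more than counting which words keep company with which, and it is far from obvious that what our brains do is different in kind rather than in scale.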

Zimmer made a mistake I thought had been consigned to the garbage heap of history in the eighteenth century: he assumed that humans were qualitatively different from, and superior to, the rest of the natural world. We are neither. Nor should we bewail that fact.
