
Contending Views on Language and the Brain

Robert C. Berwick is co-director of the Massachusetts Institute of Technology's Center for Biological and Computational Learning and MIT professor of computer science. His most recent book, "Cartesian Computation," will be published by MIT Press this fall.

I

Nearly 160 years ago, Charles Darwin wrote that "He who understands baboon would do more toward metaphysics than Locke." He tried. In his prescient two-volume "The Descent of Man and Selection in Relation to Sex" (1871), written 12 years after "Origin of Species," Darwin found that "the large size of the brain in man, in comparison with that of the lower animals, relatively to the size of their bodies, may be attributed in chief part . . . to the early use of some simple form of language, that wonderful engine which affixes signs to all sorts of objects and qualities, and excites trains of thought which would never arise from the mere impression of the senses." Language, then, produces a powerful stimulus for the brain or, as Darwin put it: "[T]he continued use of language will have reacted on the brain, and produced an inherited effect; and this again will have reacted on the improvement of language."

Sound familiar? It should. It's the exact model of a modern major evolutionary "Just So" story. As usual, Darwin's batting average bested those of most of our current players. Not only did Darwin get our "out of Africa" origin (probably) right, but he also spotted most of the currently debated evolutionary ingredients accounting for language: a ballooning brain expanded by an evolutionary "arms race"; and symbols that increased our ancestors' behavioral repertoire, thereby pushing the brain to keep pace and spurring further language development in an upward-spiraling direction--what biologists now dub "co-evolution."

What fueled this escalation? For Darwin, it was sexual selection: the same process of female choice or male competition that leads to ever-gaudier peacock feathers. Smooth-talking potential mates won the day. In a moment of extended Victorian reverie, Darwin speculates in "Descent" that our ancestors' "love songs" were the precursors to human language. Caruso lives!


It is testament to Darwin's triumph that this story of how speech divides us from the rest of creation seems so familiar. Whether you're talking about Desmond Morris and Robert Ardrey's 1960s work on the "naked ape," Edward O. Wilson's 1970s sociobiological speculations linking language to hunting-plan competence and scratch-my-back-I'll-scratch-yours reciprocal altruism or Harry Jerison's brain expansion calculations, it's all rejuggled Darwin.

Even this decade's revivals--such as William H. Calvin's and Christopher Wills' "runaway brain," Michael Corballis' "lopsided ape" or psychologist Steven Pinker's language-as-Machiavellian offshoot (to win friends and influence people)--are just filigree adorning the same basic tale about brain-culture co-evolution. But it is when potboiler novels such as John Darnton's "Neanderthal" mix talk of telepathy with sophisticated byplay concerning whether the hyoid bone in early man's throat was properly placed to hit Caruso's modern vowel tones that you realize the science writers have some catching up to do.

Luckily for us, Terrence Deacon's informative book makes a remarkable contribution. Most authors go awry by equating language development with communication. Darwin did. Our brains ballooned, bigger brains meant more smarts and, with more smarts, the wit for language. That much has dominated the quasi-theological discussion of language origins from before Aquinas to Hobbes' dictum "Homo rationale quia orationale" (man thinks because he speaks) and beyond. But this equation gets it wrong. As the linguist Derek Bickerton has noted, confusing "language" with "communication" is like confusing a car--the machinery itself--with the act of driving. Not so with Deacon. He dodges this misleading thought, along with many others. Gossip replaced grooming? Since animal communication systems can be so sophisticated, there's no obvious adaptive reason our ancestors ever needed anything more for hunting plans or "gossip." Bigger brains and machinery for speech?


Deacon argues that three, and only three, wildly divergent animal groups do all of the following: babble from birth, imitate the sounds of others and have (mostly) voluntary control of their breathing--humans; songbirds and parrots; and marine mammals such as seals, whales and dolphins. Breath control is essential to speech: We speak only while exhaling (try saying this sentence while breathing in). Deacon observes that breath intake is yet another way that human speech differs from primate vocal calls, which, like their human analogues of sobbing or laughter, are largely involuntary, contagious and can involve rapid intakes of breath. Yet being a literal bird-brain suffices for human-like breath control, and even children who are otherwise severely impaired, with IQs as much as 50% below normal, still acquire language in the usual way.

Indeed, our brain size has apparently decreased by 10% or more over the past 35,000 years. What happened? Deacon begins by asking a provocative question: Why are there no simple languages? All human languages, from ancient Akkadian to the Zulu spoken in South Africa today, are equally complex. To illustrate how such complexity exists for humans alone, Deacon turns to the animal kingdom and the tireless efforts of Sue Savage-Rumbaugh, who trained primates in the use of simple language tools. Savage-Rumbaugh and her colleagues worked with the most "promising" primates--a pair of chimpanzees named Austin and Sherman--to associate arbitrarily crafted lexigrams like "blue triangle" and "blue square" with command sequences like "want banana" and "like banana." The chimps mastered this easily, but this, as Deacon notes, is rote activity. Even trained pigeons pass muster here. It's the next step up the abstract cliff that's the impasse: the relationship between "square" and "triangle." That's a tall order--perhaps a tiny bit easier than mastering Microsoft Word commands.

It's so hard, in fact, that it took a few years for Savage-Rumbaugh to figure out that the only way to proceed was by doing what every dog owner knows: Wait until the incorrect behavior appears--the dog soils the carpet--and extinguish that behavior until only the correct combinations remain. Literally thousands of incorrect trial combinations proved necessary, with one prominent exception: an infant bonobo, or pygmy chimpanzee, named Kanzi, who clung to his mother as she was being trained and was never explicitly taught. Yet this is part of Deacon's story. He believes that the younger, more flexible brains of infants like Kanzi may be what enables them to learn something like language--and humans are known for their exceptionally long developmental period.


Deacon argues that people have at least one attribute that other animals don't: a perception of abstract concepts beyond the surface of things, upon which human society is based. To juxtapose two of Deacon's most pointed examples, what makes us peculiarly human is our ability to understand the difference, say, between my finger and the wedding band that wraps around it. My finger's nearly (always) right in front of my nose--my perceptual stimulus matches "finger" in the here and now. The wedding band's crucially different. To be sure, there's the glint of gold that can be perceived, but it's mere scrim masking a huge social, contractual and religious machinery that's not right in front of my nose. And this symbolism, Deacon asserts, other animals simply never grasp. Or, to put it another way, animals may mate or bond but never get married--reciprocal altruism be damned--and, perhaps like some people, other animals don't have the foggiest idea what a marriage means.

More broadly, they can't learn the higher-order rules that govern any relationship that's not directly in the here and now. To scale symbolic heights, Deacon claims that a special learning ability is needed: word-to-word and word-to-symbol learning beyond Darwin's argument for mere stimulus association, sparked by a prefrontal cortical expansion, new "wiring" and delayed plasticity, all as the tinder for language. Languages became easy to learn because the brain is more resistant to change than language--that is, biological evolutionary change runs a thousand times slower than linguistic change, so it must be that language has been made learnable because of the exacting demands of brain development, not the other way around. It's no accident, then, that the Book of Genesis has Adam naming all the animals. Anyone with a young daughter also could vouch for that predilection: Just consider a child's pointing finger from about age 1 and the incessant question, "What's that?" for object after object. Apes never start.

II

"The Symbolic Species" retells Darwin's story but does so from a new perspective, buttressed by the immensely more sophisticated contributions of modern neuroscience and molecular biology. Anyone who's trekked a thousand miles to eat whale blubber, to get a peek at whether an obscure whale brain's convolutions differ from ours, deserves a certain measure of respect. One must be versed in the tricks of so many trades that Deacon's expertise reads like a university course catalog: physiological mechanics (to understand how something the size of a basketball folds up when stuffed into a space the size of a large grapefruit), embryology and the transplantation of one animal's growing nerve cells into another's (in order to understand brain development), molecular genetics, neurophysiology, archeology and so on.

Here lie both the strengths and the weaknesses of Deacon's heroic effort "to arrange all the questions in the right order." Deacon divides his efforts into three areas: language (words and symbols); the neurological underpinnings down to brain circuits and how we speak (which is Deacon's own professional stock-in-trade as a neuroscience researcher); and the coordinated "arms race" between culture and language, specifically, the demand-driven symbolic need inherent in ceremonies. On this score, Deacon bats a bit better than 1-for-3; he raps almost every brain-science curveball dead-on while upping his average by dodging the language-as-communication pitches and unraveling the real details of bonobo language training. A respectable average: It's enough to judge Deacon the first .400 hitter here since Darwin. Indeed, he pulls together the most impressive collection of neurophysiological, anatomical and brain evolution evidence about language to date.

Regrettably, only Deacon's investigation of the brain is satisfactorily treated. Language and the co-evolutionary ramparts are barely occupied. If one could pick out a single snare that catches Deacon, it's the same one that's plagued psychology from the start. Take a cognitive ability such as vision, which doesn't carry the same intertwined confusion between thought and language: When I see a glass of iced tea on a table, what enables me to see? What's the balance between the information contributed by the external world and the internal constructions and computations carried out by my brain? Do I see the glass on a table because objects are "out there" in the world or because my brain has computed them? We know part, probably most, of the answer: It's the mind/brain that imposes coherence on the external world, not the other way around. To be sure, the world contributes some of the necessary constraint, or else we'd be in hallucinatory Never-Never Land, but it's a credit allocation problem. The external world contributes the reflected light, the raw data, but the brain throws most of that raw data away and does the hard job of computing, piece by piece, the answer to what I see.

Consider a computer analogy: a personal computer program that balances your checkbook. Like the light streaming into the eye, what you type into the program--dates, check numbers, amounts and ATM debits; balance, payments and interest--is just a stream of numbers. It's up to the program to interpret it--to decide which numbers are which: what that blasted $ means, what the difference is between the numbers before the decimal point and after, how to subtract the debits and add the payments, and so forth. So there must be a great deal of knowledge in the program, not just about the decimal number system but, deeper still, about the laws of arithmetic. That information is in the program, not in the world, and it's pretty clear that although both the data and the program are necessary to arrive at the final answer, it's the program that contains the lion's share of the information and deserves most of the credit. The program does not have to extract much else from the outside world; perhaps when it was installed, one had to set a few switches depending on the country, in order to know that dates stream in day-month-year in Europe and month-day-year in Los Angeles, or that $120,583.23 is written as 732 743,42 francs in Paris.
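For readers who like their analogies concrete, here is a minimal sketch in Python--my own toy, not anything from Deacon's book--of where the knowledge lives. The "raw data" is just strings; everything interpretive (what "$" means, which entries are debits, which date order a region uses) sits inside the program:

```python
from datetime import datetime

# The few "switches set at install time": date order differs by region.
DATE_FORMAT = {"US": "%m/%d/%Y", "EU": "%d/%m/%Y"}

def parse_entry(raw_date, raw_amount, kind, region="US"):
    """Interpret one typed-in register line; the knowledge is all in here."""
    date = datetime.strptime(raw_date, DATE_FORMAT[region])
    # The program, not the world, knows what "$" and "," mean.
    amount = float(raw_amount.replace("$", "").replace(",", ""))
    return date, (-amount if kind == "debit" else amount)

def balance(entries, region="US"):
    """Sum the signed amounts: the laws of arithmetic are built in, not typed in."""
    return sum(parse_entry(d, a, k, region)[1] for d, a, k in entries)

print(balance([("01/15/1997", "$1,200.00", "payment"),
               ("01/16/1997", "$49.95", "debit")]))  # about 1150.05
```

The typed-in strings carry almost none of the information; strip out the parsing knowledge and the arithmetic, and the "data" is an uninterpretable stream.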


So now we have a litmus test for whether it's the external world or internal computation that gets the credit for a (computational) ability. We look and see how the knowledge is carved up between the two and how much new information besides the raw data the program needs in order to get its work done. (Of course, the programmer who puts the knowledge into the program counts too, maybe more--the obvious analogy here being evolution--but as we shall see, here too the points go to what could be called the "internalist perspective.")

In short, wire up a different organism--run the computation a fly's eye-brain does, for example--and the edges and objects we see dissolve, replaced only by vague approaching and receding blobs. Different software, different end result. Over evolutionary time, there's no doubt that we have adapted to some regularities of the physical world and not to others: We detect light waves of certain wavelengths--certain colors--and not others, and those regularities have been incorporated into our "wetware."

What about language? Our experience vastly underdetermines what our brains compute, so the external world does not--cannot--provide the information we need to speak and understand. When I speak, there are no silent pauses between words, nothing like the blanks in printed text; that's partly why it's so hard to build computers that can analyze spoken language. The mind constructs these pauses just as surely as it constructs the edges of the drinking glass sitting before me, the separation of glass-on-table into glass and table, and everything else.

To borrow an even more potent example from Noam Chomsky, consider how we fill in the interpretation of missing phrases in sentences, just as we fill in the details of the curves of a glass. When we say "John ate an apple," it's the apple that gets eaten. Now suppose we omit the object: "John ate" means "John ate something or other." We might suppose that the computational rule, picked up from example after example like this one, is to replace the missing phrase with "something or other." Now consider "John is too stubborn to talk to Bill," which means that John is stubborn and will not talk to Bill. Suppose we drop the object of "talk to," just as we dropped "apple," giving: "John is too stubborn to talk to." Applying the same computational rule as before--whether by analogy or, perhaps, by learning from the regularities of the external world--we conclude that the resulting sentence means that John is stubborn and will not talk to someone or other. But that's precisely the wrong meaning! Rather, the sentence means that other people won't talk to John because he's stubborn. Now it's John that's like the apple, the person being talked to or not, while other people are doing the talking--exactly the reverse of what the sentence meant before. So much for learning rules by analogy or by "regularities in the external world."
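To see just how mechanical--and how brittle--such an analogy-based rule is, here is a toy illustration (mine, purely for exposition, not a serious parser): the rule fills in "something or other" uniformly, getting "John ate" right and "John is too stubborn to talk to" exactly wrong:

```python
def analogy_rule(sentence):
    """Naive rule 'learned' from examples like 'John ate (an apple)':
    fill in a missing object with 'something or other'."""
    return sentence + " something or other"

print(analogy_rule("John ate"))
# -> 'John ate something or other'  (correct)

print(analogy_rule("John is too stubborn to talk to"))
# -> 'John is too stubborn to talk to something or other'  (wrong!)
# The sentence actually means that people won't talk to John:
# John is the missing object of 'talk to', not its subject.
```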

In short, just like the check-balancing program, and just like recognizing objects, somehow we, even as children, come to this complex and subtle set of computational principles without corresponding relevant experience of the "external world" or its regularities--what's called the "poverty of the stimulus." It's true for the way that words are put together to make sentences, syntax, and it's just as true for the "meanings" of words themselves. As Chomsky notes, when we talk about Los Angeles, "we can be talking about a location, people who sometimes live there, the air above (but not too high), buildings, institutions, etc. in various combinations. A single occurrence of the term can serve all these functions simultaneously, as when I say that [Los Angeles] is so unhappy, ugly and polluted that it should be destroyed and rebuilt 100 miles away. No object in the world could have this collection of properties." Deacon understands this much, of course, as the finger/wedding band example makes clear. The difference, again, comes down to credit assignment: What we know about words and about syntax, the ways that words can be put together, so far transcends the raw data that it cannot be learning, symbolic or otherwise, that turns the trick. And so, without a firm grip on what knowledge of language comes to, how can one say what exactly evolved?

III

What linguists have discovered over the past few decades is that all human languages get built on a single underlying "chassis"--as if General Motors made a single automobile engine and selected a range of bumper styles, front grills, body panels, fenders and so forth to assemble apparently different-looking cars--Buicks, Pontiacs, Chevrolets. For language, the analogy is nearly exact: Part of the difference between English and Japanese is that the "bumper style" for English is "ate an apple," verb and then object, while Japanese selects the only other possible choice, "apple ate" (ringo tabeta in Japanese), object then verb. By "setting" perhaps 20 or so such "parameters" we fill out the space of possible human language "body styles," or syntaxes, just as we can assemble many, many different chemical compounds by combining a handful of atomic elements. Obviously, a child cannot know in advance whether it will be born in Tokyo or Los Angeles, so as far as we know this choice is made by the child listening to the language of its caretakers. Children set these switches swiftly and surely--according to current research, by age 2 or 3, for the most part. So, for language, the answer to our litmus test seems clear: The external world need provide only the raw data to answer 20 or so yes/no questions for the child to acquire English, Japanese, German or any other human language, while the rest of the "program" is inside the child--just like our checkbook example, just like vision, just like almost every other comparable cognitive competence that's not explicitly taught.
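A minimal sketch of the switch-setting picture, with illustrative names only (this is my toy, not a real acquisition model): one binary parameter--does the verb precede its object?--is all the raw data needs to answer, while the rest of the machinery is fixed in advance:

```python
def set_head_direction(examples):
    """One binary switch: does the verb precede its object?
    Each example is a (verb_position, object_position) pair."""
    votes = sum(+1 if verb_pos < obj_pos else -1
                for verb_pos, obj_pos in examples)
    return "verb-object (English-like)" if votes > 0 else "object-verb (Japanese-like)"

english_input = [(2, 3)]   # "John ate[2] an apple[3]"
japanese_input = [(3, 2)]  # "John ringo[2] tabeta[3]": object, then verb

print(set_head_direction(english_input))   # -> verb-object (English-like)
print(set_head_direction(japanese_input))  # -> object-verb (Japanese-like)
```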


To be sure, children must learn from experience that "apple" corresponds to the sound "apple" in English and the sound "ringo" in Japanese. That's arbitrary, and so it must be learned. But beyond this, little seems certain even for words, where, as we have seen, our knowledge vastly transcends any ordinary sense of external world experience, and besides, unlike the basic scaffolding for language, word learning extends over a lifetime (or at least until one has to take SATs).

In this way, language syntax comes into the life of each person, allowing us to make "infinite use of finite means" in the familiar sense that sentences can be arbitrarily long, recursive and totally novel. It is this "engine" that lets us talk about things displaced from the here and now, about our beliefs as opposed to the beliefs of others--"I think that if he hid behind that rock, then tomorrow I'll be able to catch the antelope instead"--a world inaccessible to primates and any other species, as far as we know.

Deacon's stance runs counter to this "internalist" perspective, relying instead on humans' increased and longer-term developmental plasticity and learning abilities to acquire language--the claim that language has "co-evolved" so as to be easy to learn from parents and the external world. As with many other evolution-savvy writers, he appeals to what's known as the "Baldwin effect" as a means to incorporate, over evolutionary time, informational regularities about the environment, while reducing the actual (and presumably limited and expensive) knowledge burden that must be carried by genes. The idea is that natural selection can off-load information into the external environment so long as an organism can learn it; the better the learner, therefore, the more information can be moved to that side of the fence, saving scarce gene space for information that can't be predicted. For Deacon, language serves as a perfect example where almost everything has been off-loaded into learning.

But even casual examination shows this conclusion to be false. What should we expect to see if language had been molded by optimal Baldwinian information off-loading? Answer: Exactly the information that cannot be determined by experience ought to be given to us as a jump-start for language, and exactly the information that can be provided by external experience would not be. To the extent that we know anything about language acquisition, this seems to be exactly right: Children are labile along a narrow set of linguistic dimensions, from possible sound systems to syntax and beyond, exactly for the range of parameters they could not possibly know in advance of experience. All the rest--and it is a great deal compared to the mere 20 yes/no questions for syntax--is fixed, and it completely confirms the syntactic, "internalist" perspective; indeed, it even seems "evolutionarily optimal" (if such a statement about optimality can properly be made at all).

Deacon, following the lead of much recent popular work, appeals to "neural networks" as a candidate for a general learning system to work the Baldwinian magic. One can admire such optimism, but sadly, such networks can't do the job--not yet, perhaps never. For example, given sentences like "John saw Mary" and "John saw the guy," people easily conclude that one can say "Mary saw the guy"--but networks cannot. Networks need carefully spoon-fed regularities from the external world, a self-fulfilling machinery for Deacon's theory. Alas, language acquisition does not work that way.
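A toy contrast makes the point (again my illustration, not Deacon's or anyone's actual network): a rote pattern-memorizer never licenses the unseen "Mary saw the guy," while a learner equipped in advance with the category "noun phrase" generalizes immediately from the same two examples:

```python
TRAINING = {"John saw Mary", "John saw the guy"}

def memorizer_accepts(sentence):
    """A rote learner: accepts only what it has literally seen."""
    return sentence in TRAINING

NOUN_PHRASES = {"John", "Mary", "the guy"}  # the built-in category

def categorical_accepts(sentence):
    """A category-equipped learner: accepts anything of the form NP saw NP."""
    return any(sentence == f"{a} saw {b}"
               for a in NOUN_PHRASES for b in NOUN_PHRASES)

print(memorizer_accepts("Mary saw the guy"))    # False: no generalization
print(categorical_accepts("Mary saw the guy"))  # True: the category does the work
```

The second learner succeeds only because the category was supplied in advance--which is exactly the internalist's point about where the information resides.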

Even the evidence about language change and co-adaptation runs counter to what Deacon requires. Languages can evolve vastly faster than the brain but only within a limited range permitted by the underlying chassis: object-verb (as in Old English) can change to verb-object (as in Modern English) but there’s no going from English to, say, Vulcan. General Motors can’t produce tricycles, only four-wheel variations.


IV

If all this is so, then one can take a giant step beyond Deacon's brilliant starting-gate question: Why are there no simple languages today? And we arrive at a far stronger speculation: because there never were any simple languages. Not ever. Certainly, as far back as we know of writing, languages were just as complex as today's--say, 6,000 years, or roughly 240 generations, and only eight times that takes us back to Cro-Magnon days.

But there's more. More trenchantly and ironically still, there's a strong sense in which Deacon's enterprise, explaining the origin of human language, cannot at present succeed. The problem is the way evolutionary inference works. Yet the irony is that if there's any approach that has a hope of recovering the "lost world" that led to language, then, thanks to one of the more startling evolutionary discoveries of the last 10 years, Deacon's may be the only one to get us there.

The dilemma is this: Evolutionary theories rely on comparison, inferring historical causes from effects we can observe now--deducing, for example, from the frequency differences in sickle cell anemia and the gene change associated with it in certain African populations, as compared with, say, North American populations, a possible cause: resistance to (one type of) malaria that the anemia ameliorates. So, for evolutionists, true novelties or new traits in a single lineage like language, called autapomorphies, pose the greatest challenge. For instance, we can much more easily explain why we have hair by standard comparative reasoning: one, all primates have hair; two, hair is a shared character derived from early mammals; three, early mammals had hair--by comparison with functional analogues in other lineages (such as feathers)--because it kept them warm. The key point is that we can bring natural selection into the picture only because it gains explanatory power via comparison; similar circumstances result in similar functional responses. That's why Darwin was so successful in predicting that our line came out of Africa; he simply noted our similarity to primates and that most primates, especially those most externally like us, were located there. But when a unique trait appears in a single evolutionary line, comparison becomes impossible. If we have only one example, then we lose the ability to sort out which features are historical contingencies, which are developmental or intrinsic constraints, and which were truly selected for. That's why Darwin never talked about the origin of species in "Origin of Species." It's the toughest nut of all to crack.

Worse still, trying to account for a cognitive difference like language only raises the bar, because the effect might actually be invisible, or nearly so, while the chain between cause and effect is likely to be unknown, for practical or historically contingent reasons. The causal chasm between molecules bouncing into one another and synapses firing on one end and words springing from one's lips on the other looms large. We don't have even one complete causal chain showing how a difference in physical properties of the brain leads to a particular cognitive ability--say, to recognize objects like snakes and snails and puppy dog tails in the world, just as plain objects, let alone symbols (we can get as far as edges--just).

To be sure, there are enticing clues: In yet another nugget from his rich and stimulating book, Deacon notes that gaining voluntary control of the rib muscles that govern breathing required more neurons, and so we should see an enlargement of the chest portion of the spinal cord. If this idea is right, that's solid physical evidence, and what's oft interred within bones becomes grist for the paleontologist: We can actually go and see whether earlier fossils of humankind show a similar spinal enlargement. The evidence is hard to come by, but one specimen--a Homo erectus boy--has been examined. He does not show a modern, human-like spine.

But our dilemma is even worse than this. Due to the luck of the historical draw, people happen to be just about the worst possible candidates for comparative evolutionary study. This cannot be emphasized enough: Unlike most animals, even most mammals, we have no close cousins, and only two cousins at all, chimps and gorillas, and only three in the superfamily above that (orangutan, gibbon, siamang). As the evolutionary biologist Richard Lewontin says, this "evolutionary space is too sparsely populated to be able to connect the points sensibly." Too bad we're not lemurs, with a dozen related lemur species at hand.


Now we like to believe we have close living relatives: Chimps, gorillas and other primates seem to look like us. And it's commonplace to cite the classic Mary-Claire King and Alan Wilson (1975) study finding that human and chimp DNA is "99 percent identical." On this authority we must be nearly kissing cousins. But taken out of context, that figure neglects the punch line that King and Wilson themselves emphasized: that small differences in the underlying DNA (the genotype) can and do lead to huge differences in external shape (the phenotype, or form that shows). It's not surprising: Just last year, the most careful redating pegged the last common ancestor of Homo sapiens and chimps at about double what we previously thought, about 14 million years ago. Since chimps and people both stand 14 million years away from that common point, nearly 30 million years of evolutionary time separate us from chimps: 14 million years back from chimps to the common ancestor and then another 14 million years back down from that branch point to us. World enough and time. To put all those millenniums in perspective, consider that this is about the same evolutionary time separating giraffes and deer; no one would imagine that a scientist could figure out how Rudolph the Reindeer talks by looking at a giraffe.

So without a comparative crutch, one is logically driven to invent one: In order to resolve the obvious discontinuity between humans and other primates, one must first adduce some similar trait between the two and assert that something primates do is like, perhaps just like, what humans do--be it language, communication or gossip. The result is a proliferation of stories in every possible order: terrestrialism, bipedalism, encephalization/language, civilization; or bipedalism, terrestrialism, civilization, language; or language, bipedalism, terrestrialism, then civilization, and so forth. None of these often imaginative reconstructions seems more compelling than any other.

Even so, Deacon's research, involving everything from understanding the molecular-biological details of nerve growth to knocking out single genes to pinpointing the intricate regulatory machinery that builds brains, holds out possibly the best hope we have to understand the mystery and miracle of language and how it is that we became creatures able to walk the walk, and talk the talk.
