Science in Review: The Linguistic Match Game

Is this where your client is waiting? (Photo of Grand Central Terminal by Daniel Wehner)

You’ve got a meeting tomorrow in Manhattan with a very important client. People manage this sort of thing all the time, right? Easy as pie. Only suppose you can’t communicate with your client: not now, and not tomorrow until you meet her face to face. You don’t have a time or a location for your meeting. You know this, your client knows this, and you both know that the other knows. Where and when will you be?

Scientists have used this scenario to study how humans think. At first glance, the problem sounds nigh impossible. If two people were to randomly throw darts at a map of Manhattan, and then (metaphorically) throw two more darts at a 24-hour clock, the odds that they would both pick the same time and place are discouraging. Yet subjects in these experiments do sometimes match, because it turns out they’ve got loaded darts.
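To get a feel for just how discouraging those odds would be without the loaded darts, here is a back-of-the-envelope sketch. The counts of candidate spots and times are my own illustrative assumptions, not figures from the experiments:

```python
# Back-of-the-envelope odds of two people matching by pure chance.
# The numbers (10,000 candidate spots, minute-level precision) are
# illustrative assumptions, not figures from the actual studies.
spots = 10_000          # rough count of distinct meeting spots in Manhattan
minutes = 24 * 60       # distinct meeting times at one-minute precision

p_same_spot = 1 / spots
p_same_time = 1 / minutes
p_match = p_same_spot * p_same_time

print(f"P(same spot) = {p_same_spot:.6f}")
print(f"P(same time) = {p_same_time:.6f}")
print(f"P(both)      = {p_match:.2e}")
```

Under these assumptions, the chance of a blind match is roughly seven in a hundred million. Shared culture has to do nearly all of the work.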

The most common answer to this meeting challenge is noon at Grand Central Terminal. Was that your answer? If it wasn’t, that’s OK, I’m sure your client will understand. Still, maybe you followed a similar line of reasoning. You need to pick a time and place that someone else will pick without coordination. So 7:23am is out, because it’s too specific and arbitrary. You might try the start or end of the work day, but not everyone keeps the same schedule. Lunch at noon is probably the most common schedule feature, at least for modern urbanites, so noon is your best bet for a time someone else would pick. Similarly, you’ll need a location that many people would know and could easily reach from anywhere. Grand Central Terminal is a major, widely known transportation hub. My first instinct was the Empire State Building, but the point is not that everyone will agree. The point is that you can quickly constrain the options so that agreement is plausible.

I came across this experiment in the book Creating Language: Integrating Evolution, Acquisition, and Processing by Morten H. Christiansen and Nick Chater. The material is a little outside my usual sphere, and more technical than a general audience would probably prefer. Still, I found the ideas fascinating enough to make the effort.

Everything you know about language is wrong. Well, maybe not you specifically, and certainly not me because linguistics and cognitive science aren’t my fields; I didn’t know enough in the first place to even be wrong. The authors are challenging the conventional view that our brains have a dedicated language center from birth which provides us with some innate ability to process grammar. This view is meant to explain why kids can pick up language so easily just by listening, without formal instruction on grammar or vocabulary or really anything. Christiansen and Chater cite Noam Chomsky as the key figure in promoting this view. I’ve come across this idea of a dedicated language center, but I largely have to take these authors’ word for it that they are representing Chomsky’s position accurately.

How did these adorable moppets learn to talk? Science is still figuring it out. (Photo of two toddlers by Jodi Walsh)

Christiansen and Chater argue instead that we learn languages so easily because languages themselves have evolved to be easy to learn. The brain needn’t have a special, genetically-encoded structure specifically tuned to handling language; in fact, they argue such a structure could not have evolved. Instead, language adapted to cognitive faculties we already had, which may in turn have been further tuned for language. They also draw connections between the evolution of language and the acquisition of language (the process by which each individual learns language), domains which are usually kept separate. They think acquisition of language by the human species tells us something about the acquisition of language by individual humans, and vice versa. Likewise, our ongoing language processing, how we make sense of what we hear and how we create our own spoken utterances, can tell us something about acquisition. We are basically always learning language; we simply make smaller and smaller refinements over time because we’ve already learned so much.

I found the narrative in Creating Language compelling, although as I’ve said I’m not well-versed in the alternative accounts. I may simply be making an aesthetic judgment; making a continuum out of various facets of language is more appealing to me than keeping them as distinct and unrelated processes. Still, whether their unification project pans out or not, I found one idea particularly useful: two types of learning called C-induction and N-induction. The Manhattan meeting problem was used to illustrate the difference. If your client were at some randomly chosen spot and you had to find her, that would be N (for natural) induction, because you have to learn an objective fact about the natural world. That problem is very hard. The actual problem as stated is one of C (for cultural) induction, which makes it easier. You can think about where you would go in order to be found, and assume your client would go to the same place, because your shared cultural experience would bias you both toward the same times and places.
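The difference between the two kinds of induction can be seen in a toy simulation. This is my own sketch, not something from the book: two agents independently pick a meeting spot, either uniformly at random (N-induction-like, no shared bias) or weighted by a shared salience bias (C-induction-like), and we count how often they happen to match.

```python
import random

def match_rate(weights, trials=100_000, seed=0):
    """Fraction of trials in which two agents independently pick the same option."""
    rng = random.Random(seed)
    options = range(len(weights))
    hits = sum(
        rng.choices(options, weights)[0] == rng.choices(options, weights)[0]
        for _ in range(trials)
    )
    return hits / trials

# Twenty candidate meeting spots.
uniform = [1] * 20                    # no shared bias: like N-induction
biased = [50, 20, 10] + [1] * 17      # shared salience bias: like C-induction

print(f"uniform picks match ~{match_rate(uniform):.1%} of the time")
print(f"biased picks match  ~{match_rate(biased):.1%} of the time")
```

With uniform picks the agents match about one time in twenty; with a strong shared bias toward a few salient spots, the match rate climbs to roughly a third, even though the agents never communicate. Same problem, same options, very different odds.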

Christiansen and Chater apply these categories to their language discussion. They say the conventional approach frames language learning as N-induction, making it difficult because we have to learn an abstract and arbitrary syntax and grammar. Instead, they think we should treat language learning as an easier C-induction challenge. We are (often subconsciously) guessing what other humans would mean by a given sequence of noises, and we have the advantage of assuming those other humans’ brains are biased in the same way ours are. A variety of evidence is presented, such as the observation that nouns and verbs tend to sound different. You can make up a word and ask people to guess whether it is a noun or a verb, and they will agree far more often than the 50/50 split you might expect at first. You can even predict which one they’ll pick by analyzing the sound differences between nouns and verbs in their language.

While I found the treatment of language rewarding in and of itself, I was excited to reflect on what other aspects of life are C-induction or N-induction. If C(ultural)-induction is aptly named, many social activities should probably fall into that category. We align ourselves with friends and groups because we agree on various topics, shared interests which really also represent shared biases. I don’t see any harm in that per se, but perhaps we would do well to remember that finding people with whom we agree is a comparatively easy task, and so success may not signify much. Imagine if we made our meeting problem even easier by simply saying that our client is whoever we happen to find at our chosen spot. By contrast, theology should be a more challenging N-induction problem, because we have to learn about a God who does not share our enculturation. Perhaps we can even use that to discern something about our beliefs. If our theology always aligns with our intuitions, should we be wary that we simply believe in a religion that has evolved to conform to our cognitive biases rather than representing truth about the world?


Andy Walsh

Andy has worn many hats in his life. He knows this is a dreadfully clichéd notion, but since it is also literally true he uses it anyway. Among his current metaphorical hats: husband of one wife, father of two elementary school students, reader of science fiction and science fact, enthusiast of contemporary symphonic music, and chief science officer. Previous metaphorical hats include: comp bio postdoc, molecular biology grad student, InterVarsity chapter president (that one came with a literal hat), music store clerk, house painter, and mosquito trapper. Among his more unique literal hats: British bobby, captain's hats (of varying levels of authenticity) of several specific vessels, a deerstalker from 221B Baker St, and a railroad engineer's cap. His monthly Science in Review is drawn from his weekly Science Corner posts -- Wednesdays, 8am (Eastern) on the Emerging Scholars Network Blog.

