MIT claims to have found a “language universal” that ties all languages together.

All languages, the authors say, self-organize in such a way that related concepts stay as close together as possible within a sentence, making it easier to piece together the overall meaning.
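The "closeness" the authors describe can be made concrete as dependency length: link each word to the word it depends on, then sum the distances between linked pairs, so that shorter totals mean related words sit closer together. Here is a minimal Python sketch of that measure; the sentence, its parse, and the word positions are invented for illustration and are not the paper's actual data or methodology:

    # Toy illustration of "dependency length": how far apart linked
    # words sit in a sentence. Invented example; not the paper's method.

    def total_dependency_length(links):
        # links: (head_position, dependent_position) pairs, 0-indexed
        # in sentence order.
        return sum(abs(head - dep) for head, dep in links)

    # "John threw out the trash"
    #  John(0) threw(1) out(2) the(3) trash(4)
    close_order = [(1, 0), (1, 2), (1, 4), (4, 3)]

    # "John threw the trash out" -- same relations, but "out" has
    # moved away from "threw":
    #  John(0) threw(1) the(2) trash(3) out(4)
    far_order = [(1, 0), (1, 4), (1, 3), (3, 2)]

    print(total_dependency_length(close_order))  # 1 + 1 + 3 + 1 = 6
    print(total_dependency_length(far_order))    # 1 + 3 + 2 + 1 = 7

On this measure the first word order keeps related words slightly closer together, which is the kind of preference the authors report showing up across languages.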

Language takes an astonishing variety of forms across the world—to such a huge extent that a long-standing debate rages around the question of whether all languages have even a single property in common. Well, there’s a new candidate for the elusive title of “language universal,” according to a paper in this week’s issue of PNAS.

Language universals are a big deal because they shed light on heavy questions about human cognition. The most famous proponent of the idea of language universals is Noam Chomsky, who suggested a “universal grammar” that underlies all languages. Finding a property that occurs in every single language would suggest that some element of language is genetically predetermined and perhaps that there is specific brain architecture dedicated to language.

However, other researchers argue that there are vanishingly few candidates for a true language universal. They say that there is enormous diversity at every possible level of linguistic structure from the sentence right down to the individual sounds we make with our mouths (that’s without including sign languages).

There are widespread tendencies across languages, they concede, but they argue that these patterns are just a signal that languages find common solutions to common problems. Without finding a true universal, it’s difficult to make the case that language is a specific cognitive package rather than a more general result of the remarkable capabilities of the human brain.

Top Reddit comment:

There was an early natural-language processing model (probably in the 1970s) which was called the Sausage Machine, because it broke sentences down into chunks of about six or seven words, as a way of simulating the problems that humans have with processing long-distance dependencies.

It didn't last very long, because all the other linguists found examples of sentences that humans could cope with, but which -- according to the Sausage Machine model -- shouldn't have been parseable. They then published their examples in papers with names like "Is the Sausage Machine a load of baloney?" and "Can the Sausage Machine cut the mustard?" And yes, those were really the titles.
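For what it's worth, the chunking idea is easy to sketch. Below is a hypothetical Python illustration: the six-word window comes from the comment above, everything else is invented, and the real model (usually credited to Frazier and Fodor, 1978) was considerably more elaborate:

    # Hypothetical sketch of the Sausage Machine's core idea: a first
    # parsing pass that only sees ~6 words at a time cannot connect
    # words that fall into different windows. Invented illustration,
    # not the actual 1970s model.

    def chunk(words, size=6):
        # Break the sentence into consecutive windows of `size` words.
        return [words[i:i + size] for i in range(0, len(words), size)]

    def same_chunk(i, j, size=6):
        # The chunk-local pass can only link word positions i and j
        # if both fall inside the same window.
        return i // size == j // size

    sentence = "the rat the cat the dog chased bit ate the cheese".split()
    print(chunk(sentence))
    # "rat" (position 1) is the subject of "ate" (position 8), but they
    # land in different chunks, so a chunk-local pass cannot connect
    # them -- mirroring the trouble humans actually have with this
    # doubly center-embedded sentence.
    print(same_chunk(1, 8))  # False

In this framing, the counterexamples in those papers were sentences whose dependencies cross a window boundary yet give readers no trouble at all.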

Very interesting. I am also fascinated by the similarity of many words across languages.

Word similarities make me think about what is universal to all humans. 
