Thursday, February 21, 2013

"Do you know Lat Pack?"



Charlotte: Why do they switch the r's and the l's here? 
Bob: Uh... for yuks. You know? Just to mix it up. 
Bob: They have to amuse themselves, 'cause we're not making them laugh. 

Ok, so it's time for me to address a myth that gets on my multilingual neuroscientist nerves, one that's illustrated perfectly in the little bit of dialogue above, from Lost in Translation. In our little American melting pot, we're confronted with the accents of non-native English speakers every day. Now, I'm not really going to argue that non-native speakers' near misses of American English pronunciation shouldn't be the object of ridicule ('cause seriously, you try speaking Spanish/Chinese/Korean/Italian and see how fast you get made fun of. By the Chinese in particular. We're a judgey group), but I do feel it's important to give some context for those near misses.

All humans are born with the ability to understand and produce any language...well...ever. As an infant, you have a full vocabulary of phonemes, which are the smallest component sounds of all speech. Phonemes cover everything from the slight difference between an English "m" and "b" (same lip position, but the "m" sends the air through your nose) to the massive differences between vowel-heavy languages like Japanese and the consonant-heavy southern African click languages. For example, in Chinese, we have different melodic tones for all our words. Mandarin Chinese has 4 different tones, known traditionally as yinping, yangping, shang, and qu, and they each carry a different musical pitch and emphasis (by the way, Cantonese Chinese has 9. I'm sure we're all great at singing). Each of these melodic tones is a different phoneme. As you grow and hear more and more of the same phonemes around you, you lose the ability to hear others. This is all part of a regular "pruning" process your brain goes through - why keep things around that aren't necessary? The pruning takes place pretty early - infants start losing their responses to other-language phonemes at around 6 months of age.

Once your brain specializes in a language, those phonemes get reinforced. Each time you hear a hard English "b", your brain is reminded of what "b" sounds like. Same goes for the rounded English "r" and the lilting English "l". You start to expect those phonemes. So when you hear a lilting Japanese "r", and a rounded Japanese "l", they deviate so much from your understanding of r-ness and l-ness that you think they must be switched. Truth is, they're both just far closer to neutral than you expect. 


[Image: a spectrum placing the American "r", Japanese "r", Japanese "l", and American "l" along a single line]
*Note: this is not scientific in any way, just a visual representation of my point.
Now, while it looks like they're not that far apart, one thing this graphic doesn't capture is how reinforced your native phonemes are. Because your understanding of the hard American "r" is so reinforced that you expect a certain kind of "r"-ness, the distance between the Japanese "r" and the American "r" becomes far more pronounced. That reinforcement and expectation end up putting the lilting Japanese "r" closer to the American "l" on the spectrum. So, rather than looking at this graphic as a continuous flat plane, imagine the American "r" and "l" as valleys, or vortexes, that pull nearby sounds toward themselves. A native speaker has to travel farther to get from the American "r" to the Japanese "r" than from the Japanese "r" to the American "l". The same thing happens with the rounded Japanese "l", which you hear as being closer to the American "r". 
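If you like toy models, the valley idea can even be sketched in a few lines of code. Fair warning: every number here (where the sounds sit on the spectrum, how strong the "pull" is) is invented purely for illustration, not measured from any real listener.

```python
# Toy sketch of the "valley" metaphor: a listener's native sound categories
# act like attractors that pull nearby sounds toward themselves.
# All positions and the pull strength are made-up illustrative numbers.

def perceive(x, native=(0.0, 1.0), pull=0.6):
    """Map a sound's raw position on a hypothetical r-to-l spectrum
    (0.0 = American "r", 1.0 = American "l") to where a native English
    listener "hears" it. pull=0 means no warping at all; pull=1 means
    every sound snaps exactly onto the nearest native category."""
    nearest = min(native, key=lambda n: abs(x - n))
    return x + pull * (nearest - x)

# Hypothetical placements: both Japanese sounds sit near the middle of the
# spectrum, each leaning slightly the "wrong" way for English expectations.
japanese_r = 0.55   # lilting, slightly l-ish
japanese_l = 0.45   # rounded, slightly r-ish

print(round(perceive(japanese_r), 2))  # 0.82 -> heard as close to American "l"
print(round(perceive(japanese_l), 2))  # 0.18 -> heard as close to American "r"
```

A raw lean of just 0.05 past the midpoint gets amplified into a strong pull toward the "other" category, which is exactly the "they must be switched!" illusion.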

So, basically, it's not them. It's you.