This time, I take up a paper by Patricia K. Kuhl entitled "Early Language Learning and Literacy: Neuroscience Implications for Education," which appeared in Mind, Brain, and Education (Vol. 5, No. 3, pp. 128-142, September 2011). I offer a commentary on this article along with my own thoughts.
In the earliest stages, infants convey their feelings to others only through crying and, later, babbling; they cannot be expected to command the infinitely rich expressive range of language.
It is all the more surprising, then, that less than a year later they can understand several dozen words and even speak some, though not many.
No one can deny that language is the highest of the human brain's higher functions, and yet human infants learn it easily without being taught.
The mechanisms of infants' early language acquisition and the development of the related brain functions have long attracted developmental psychologists. Recently, brain scientists have joined them in trying to solve the mystery of language acquisition.
The author of the paper, Dr. Patricia K. Kuhl, is an American developmental psychologist whose research has taught us much about infant language learning. This paper takes up the mystery of infant language learning on the basis of her own long-running studies. It is one of the most thought-provoking articles on child development that I have read in recent years, and my comments will be offered in two parts.
Dr. Kuhl's paper is composed of three parts. Part 1 discusses infants' language acquisition strategies based on studies of phonetic learning, her main research theme, and offers her analysis. This is the main part of this review. Part 2 discusses features of bilingual studies and the brain functions of bilingual people based on findings in Part 1. Part 3 further addresses the relationships between infants' language learning environment and their later language development.
Infants crack the speech code
In Part 1, Dr. Kuhl uses the expression "cracking the speech code" to describe the mysterious capacity of infants to acquire language.
Infants come into the world without previous experience. Everything they see, hear, and touch is unknown. How do infants decipher, in an unknown world, the relation of human sounds (voices) to things, their attributes (color, size, etc.), or actions (eating, walking)? The task resembles cracking a code or deciphering an unknown script. Egyptian hieroglyphs, long undeciphered, were finally decoded thanks to the accidental discovery of the Rosetta Stone, which provided a parallel text in Greek. Infants have nothing as convenient as a Rosetta Stone for understanding the world.
To solve this mystery, Dr. Kuhl's laboratory has long studied phonetic learning in infants. It is said that there are about 600 consonants and 200 vowels across the world's languages. Of course, not all of these appear in any one language; English, for example, has about 40 speech sounds (phonemes). The first task for infants in decoding a conversation is to distinguish among these 40 or so sounds. Many people may think that telling different sounds apart is not so difficult. It is, however, a tough task, because the boundary between two sounds differs from language to language, and nobody tells infants where the difference lies.
Infants who hear Japanese need only distinguish the boundaries of five vowels: a, i, u, e, o. English, however, has nearly 30 vowels (estimates vary), and without distinguishing their boundaries one cannot understand English. Japanese infants who have learned the knack of distinguishing "a" from "e" still cannot distinguish English sounds, because four English vowels (æ, ʌ, ɑ, ə) all correspond to the Japanese "a." These four vowels are, of course, different sounds, and when Japanese speakers first learn English at school, we have difficulty telling them apart. For native English speakers, however, they are four clearly distinct sounds. To Japanese ears, "apple," "up," "art," and "earth" all begin with the same "a," yet these four "a"s have clearly different pronunciations in English. Japanese infants group them into a single phonetic category, "a," while American or British infants sort them into more categories. Of course, even Japanese infants, if brought up in America or England, will become able to distinguish the four "a"s.
Through learning, infants lose the ability to distinguish all sounds
Dr. Kuhl first tried to identify when infants become able to distinguish the vowels of their mother tongue. She used an artificial nipple that flipped an electric switch and played a sound from a speaker when an infant sucked on it. Infants soon notice the causal relationship between their action and the sound and keep sucking to produce it. The longer they keep sucking, however, the more bored they become, and the sucking rate decreases. It is known that when the sound changes, infants' interest revives and they begin to suck strongly again. She took advantage of this tendency.
For example, the nipple was set to play the sound "a," "a," "a" each time the infant sucked. After sucking for a while, the infants lost interest. When the sound was changed to "e," the infants sucked strongly again, indicating that they could hear the difference between "a" and "e." Likewise, infants raised in America or England suck strongly when "æ" changes to "ʌ," but Japanese infants do not, since they cannot distinguish the difference.
Using this method, Dr. Kuhl studied the development of listening skills, specifically the capacity to distinguish "r" from "l," in both American and Japanese infants. Her findings were astonishing: Japanese infants older than 10 months, like Japanese adults, could not distinguish "r" from "l," whereas American infants distinguished the two easily, and so did Japanese infants of 6 to 8 months.
This pattern in infants' ability to distinguish sounds is found in languages other than English and Japanese. Spanish has a consonant intermediate between the English sounds "b" and "p." Even American infants, who clearly distinguish "r" and "l," cannot distinguish this intermediate consonant after 10 months of age. Like Japanese infants, however, American infants of six to eight months can distinguish it.
Based on these results, Dr. Kuhl describes the process by which infants acquire the ability to distinguish the phonemes of their mother tongue as follows. First, human infants come into the world able to distinguish all phonemes, that is, all 200 vowels and 600 consonants found in human languages. However, while listening to the people around them talk, they come to "ignore" differences in sound that are unnecessary in their mother tongue and become sensitive to the differences in pronunciation that are critical in it. In other words, they discover what delineates the sounds, where their boundaries lie, by listening to the speech around them in daily life. Dr. Kuhl explains that human infants have a "computational" capacity and learn "statistically" from the sounds they hear.
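The idea of statistical, distributional learning can be illustrated with a toy simulation. This is my own sketch, not code from the paper: the one-dimensional "acoustic continuum," the token counts, and the specific distributions are all illustrative assumptions. Tokens drawn from a bimodal distribution, as heard in a language that contrasts two sounds in that region, support two categories; tokens drawn from a unimodal distribution collapse into one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D acoustic continuum (e.g., a formant value, arbitrary units).
# A language that contrasts two sounds yields a bimodal token distribution;
# a language with one category in that region yields a unimodal one.
bimodal = np.concatenate([rng.normal(-2, 0.7, 500), rng.normal(2, 0.7, 500)])
unimodal = rng.normal(0, 1.0, 1000)

def loglik_1(x):
    """Log-likelihood of a single-Gaussian (one-category) model."""
    mu, sd = x.mean(), x.std()
    return np.sum(-0.5 * ((x - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi)))

def loglik_2(x, iters=50):
    """Log-likelihood of a two-Gaussian (two-category) mixture fit by a short EM run."""
    mu = np.array([x.min(), x.max()])
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])

    def dens():
        # Per-token density under each category: shape (n_tokens, 2)
        return w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    for _ in range(iters):
        resp = dens()
        resp /= resp.sum(axis=1, keepdims=True)        # E-step: responsibilities
        n = resp.sum(axis=0)
        w = n / len(x)                                 # M-step: category weights
        mu = (resp * x[:, None]).sum(axis=0) / n       # M-step: category means
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n)
        sd = np.maximum(sd, 1e-3)                      # guard against collapse
    return np.log(dens().sum(axis=1)).sum()

def n_categories(x):
    """Choose 1 vs. 2 categories by BIC (lower is better); 2 params vs. 5 params."""
    bic1 = 2 * np.log(len(x)) - 2 * loglik_1(x)
    bic2 = 5 * np.log(len(x)) - 2 * loglik_2(x)
    return 1 if bic1 <= bic2 else 2

print(n_categories(bimodal))   # 2: two phonetic categories emerge from the input
print(n_categories(unimodal))  # 1: the same region collapses into one category
```

The "learner" here simply keeps the number of sound categories that best explains the tokens it has heard. Real infant learning is of course far richer, but the logic Kuhl describes is the same: category boundaries are inferred from the statistics of the surrounding speech, with no explicit teaching.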
Japanese adults who struggle to distinguish the English "r" and "l" may wish that the sensitivity to all pronunciations that we had in early infancy had remained. However, in other studies Dr. Kuhl has shown that infants who stay sensitive to the sounds of other languages for longer are slower to acquire their mother tongue, and she stresses that this process of seemingly becoming insensitive is itself important.
Social interaction is needed for computational abilities to work
Why do infants listen so keenly to the speech of the people around them? Adults who have tried to improve their skills in a language such as English by eagerly listening to tapes and CDs, only to fail, may want to pick up some tips from infants. What allows infants to listen so attentively that their brains can analyze sounds statistically, like computers? Dr. Kuhl investigated this question in another experiment and uncovered the secret.
At 9 months, the age at which speech perception becomes language-specific, American infants were exposed to Mandarin in two ways. For the first group, a Chinese "tutor" came to the nursery school and read books aloud in Mandarin over 12 sessions. For the second group, the same tutor's reading was recorded on videotape, and the children watched it on a TV monitor for the same amount of time. Afterwards, the infants' ability to distinguish Mandarin sounds was measured in both groups and compared using behavioral observation and brain-wave measurements.
The results were also astonishing. The percentage of American infants able to distinguish Mandarin sounds was 65% in the group with a live tutor and 55% in the group that watched the reading on a TV monitor, a statistically significant difference.
What does this result mean? Even though there were only 12 reading sessions, "live" contact with a tutor switched on the "computer" in the infants' brains. In other words, real social contact with other speakers is essential for learning language.
Next time, I will comment on the secret of bilingualism and on how infants' language learning affects their later language ability, again in conjunction with Dr. Kuhl's article.
- Early Language Learning and Literacy: Neuroscience Implications for Education by Patricia K. Kuhl from MIND, BRAIN AND EDUCATION, 10 August 2011. © 2011 the Author. Journal Compilation © 2011 International Mind, Brain, and Education Society and Blackwell Publishing, Inc. Republished with permission of John Wiley and Sons, Inc.