Friday, November 27, 2020

THE ART OF THE BRAIN

What do painters and neuroscientists have in common? Both share a strong interest in uncovering the laws by which the brain paints a picture of reality. Though painters’ techniques clearly differ from the scientific method, by understanding and playing with neurological and perceptual rules, painters have intuitively revealed secrets of the visual brain.


Engaging in art can give you insights into how visual perception arises. Leonardo da Vinci is a prime example of a painter and scientist who studied gradual changes in light to understand the visual perception of form and depth. Indeed, in a lecture in 1871 Helmholtz emphasised that artists should be viewed as investigators whose numerous observations and experiments produce work that, through its vividness and accuracy, manifests a series of pertinent facts that physiologists cannot ignore. According to Harvard psychologist Patrick Cavanagh, artists are in effect neuroscientists: they understand the basic approach the visual brain uses to see the world and can exploit it on the canvas in ways that remain invisible to the viewer.

These observations strike a chord once we accept that painting is essentially seeing: mentally and physically playing with colours, contours and contrasts. When you stand in front of one of the many still lifes in the Borghese Gallery in Rome, the artist in you will mentally decompose the painting and recreate it in order to understand the painter’s use of perspective, colour and light.


Art is thus a distinctive way of viewing the world, one that differs from the scientific method. By understanding how art engages us at different levels, such as emotion, cognition and intellect, we gain information that, when integrated with the scientific perspective, builds a richer understanding of reality. By training your eye to dissect reality through art, the creative process itself offers insights into how visual perception is constructed. To get you started on your first artwork, have a look at various close-ups of different structures of the nervous system and let them focus your eye on the true beauty of the brain!


Thus, while the relationship between neuroscience and art can be studied in terms of how the brain perceives art, i.e. how we hear sounds, view colours and feel rhythm, it is equally useful to consider the brain through the lens of art. When the brain is studied during the making of art, the artwork presents itself as a reflection of the inner neural scenery of one’s subjective experience. This branch of the interdisciplinary field that Semir Zeki termed neuroaesthetics corresponds to John Onians’ idea of ‘Neuroarthistory’. It can be used to better understand artistic aspects such as style, as well as more complex questions such as the origin of art. By observing your own consciousness first-hand while engaging in artistic activities, you can experience how your nervous system gives rise to art.

While we are on the topic of art and the brain, is it true that music can improve our thought processes? Previous research has debunked the myth of the ‘Mozart effect’, which first surfaced in a 1993 Nature report claiming that listening to Mozart for 10 minutes enhanced participants’ performance on spatial tasks. Yet recent evidence suggests that music training does shape cognitive development in the developing brain. According to a study at Northwestern University in Chicago, learning to play music helps children develop a neurophysiological distinction between similar speech sounds, which in turn supports their literacy. Active engagement and participation in creating music, specifically, predicted the strength of neural processing after music lessons. Moreover, when coupled with consistent attendance of the music training, active class participation led to larger gains in speech processing and reading scores.


As an adult, could you still expect benefits if you take up an instrument late in life? Luckily, the answer is yes! A study at the University of South Florida on the impact of piano training in adults aged 60 to 85 showed that, after 6 months of piano lessons, recipients improved in verbal fluency, planning, memory and the speed at which they processed information compared with adults who received no lessons.

Whether you have never played an instrument or have not played in decades, the power of music undeniably reaches beyond our cognitive and emotional development. As the Lebanese poet Kahlil Gibran said, ‘Music is the language of the spirit. It opens the secret of life bringing peace, abolishing strife’.

Moreover, creating art provides ample mental health benefits, such as relieving stress and easing symptoms of depression and anxiety, and it can serve as a vehicle for alleviating the burden of chronic disease. Engagement with art can also stimulate creative thinking and increase brain connectivity and plasticity. Apart from painting and music, other ways to engage with art include movement-based creative expression and expressive writing. Writing poetry, specifically, encourages emotional and intellectual growth because it involves the symbolic representation of experience through language. Engaging in writing poetry therefore lets the inner poet in you express how your brain makes you feel.


There are endless creative possibilities for expressing yourself. Creativity is the language of our soul, so it is not confined to any one human language: you can express yourself creatively in whichever language you choose. In that sense, poetry is a feeling that you can put into words using multiple colours and flavours. Each colour or flavour is like a different language, with its own deep rhythm, rich sound, vivid tone, exotic feel and fine shades of meaning that allow your soul to express itself in a unique way. Reading a poem in English will therefore feel different from reading a poem in Italian, and it will naturally leave a different sensation on your mind, or feeling on your soul so to speak. On that note, why not have a go at writing your first poem!?

Thursday, October 1, 2020

Celebrate National Poetry Day with Multilingual Neuropoetry!

Check out 'Multilingual Neuropoetry' as today is National Poetry Day!


 

As today is National Poetry Day, I am happy to announce that the fully revised content of my book 'Multilingual Neuropoetry' is now available on Amazon, Barnes & Noble etc., and also at lulu.com at a discount that expires at the end of this week. The book has many new highlights for you to enjoy! So why not grab a warm cup of tea and let neuropoetry entertain your mind and heart on this special day?

So what's new? The full version of 'Multilingual Neuropoetry' now features a chapter on neuroscience and a chapter on poetry. The neuroscience chapter gives you the basics and familiarises you with the essential anatomy of the brain, while the poetry chapter shows you how to form different types of poems, with examples to illustrate each type. Reading these two chapters is not essential for understanding the neuropoems in the book, but it is worth skimming through them if you would like some background while reading.

What's more, the book now includes 40 neuropoems, each accompanied by two pages of extensive notes explaining the story underlying the poem, helping you understand an aspect of neuroscience you may not have known about before and showing how it relates to you. By writing in the notes about the ideas that inspired me to pen a particular neuropoem, I aim to inspire others to express themselves creatively through poetry. So why not have a go and find out for yourself how you can link poetry and neuroscience on this special day?

The book also has a more detailed reference list, glossary and index to give you as much information as you need if you want to learn more about a topic or read for yourself the articles that inspired a neuropoem. Although it took me quite some time to revise the content, I had great fun rediscovering my book and seeing it through the eyes of the editors and reviewers who helped me make it more reader-friendly. I would therefore like to thank every one of you who got involved in refining the book with your valuable comments and feedback. Check out the video below on 'Multilingual Neuropoetry' for more; it is part 1 of an interview I did earlier this year to promote the idea of neuropoetry during Brain Awareness Week!

Now that my book is finished, I have to say that my neuropoetry journey was quite a ride: from penning my first few neuropoems about 4 years ago to writing my last few about 4 months ago, each neuropoem was a revelation. I hope you enjoy reading the end product as much as I enjoyed creating it! I wish you a Happy National Poetry Day!

Tuesday, September 29, 2020

Brain processes underlying the morphological decomposition of derived words

Derivational morphology

How do we process words? Words are built out of a root or stem, i.e. a base, and one or more affixes, i.e. infixes, prefixes or suffixes. The field concerned with how lexical representations such as words are created by combining roots and affixes into polymorphemic words is known as derivational morphology. Polymorphemic means that a word consists of at least two morphemes, where a morpheme is a meaningful linguistic unit that cannot be broken down into smaller meaningful units. For example, the word ‘polymorphemic’ consists of three such units: the prefix ‘poly’, the base ‘morpheme’ and the suffix ‘ic’. The word can therefore be divided into three meaningful linguistic units, while units such as ‘poly’ or ‘ic’ cannot be decomposed any further.
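To make the idea of segmenting a word into its morphemes concrete, here is a minimal Python sketch that strips a prefix and a suffix from a word using a toy affix inventory. The affix lists and the greedy strategy are illustrative assumptions only; real morphological parsers are considerably more sophisticated.

```python
# A minimal sketch of naive affix stripping, using a toy (hypothetical) affix inventory.
# It only illustrates the idea that a polymorphemic word can be segmented into
# prefix + base + suffix; it is not a real morphological parser.

PREFIXES = ["poly", "un", "re"]          # toy prefix inventory (assumption)
SUFFIXES = ["ic", "ing", "ity", "ment"]  # toy suffix inventory (assumption)

def segment(word):
    """Return a (prefix, base, suffix) triple, using greedy affix stripping."""
    prefix = next((p for p in PREFIXES if word.startswith(p)), "")
    rest = word[len(prefix):]
    suffix = next((s for s in SUFFIXES if rest.endswith(s)), "")
    base = rest[:len(rest) - len(suffix)] if suffix else rest
    return prefix, base, suffix

if __name__ == "__main__":
    print(segment("polymorphemic"))  # ('poly', 'morphem', 'ic')
```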

To go from the morphological basic units to the derived words, a certain number of derivational steps is needed, and this number varies from word to word. For example, the derived form ‘nationality’ derives from ‘national’, which in turn derives from the noun ‘nation’. Two steps are therefore needed to move from ‘nation’ to ‘nationality’. This can be expressed formally as: nationality < national < nation-N. In addition to such two-step forms, there are one-step forms where only one step is required to go from the basic unit to the derived form. For example, ‘development’ is derived from the verb ‘develop’, which can be expressed as: development < develop-V. Another one-step word is ‘soaking’, which is morphologically derived from the verb ‘soak’: soaking < soak-V. For one-step words, the verb root is thus the basic form.

Similar to ‘soaking’, ‘eating’ also consists of a verb root and the ‘ing’ form. In this sense, ‘eating’ and ‘soaking’ share the same surface structure ‘ing’ while having different verb roots. In both cases, only one step is needed to go from the basic form, the verb root (‘eat’ or ‘soak’), to the derived word (‘eating’ or ‘soaking’), which makes them both one-step words.
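The step counting described above can be pictured as walking back through a chain of bases. The following minimal Python sketch does this over a toy derivation table; the entries are purely illustrative and do not represent an actual lexicon.

```python
# A minimal sketch, assuming a toy derivation table in which each derived form points
# to its immediate base (illustrative entries only, not a real lexicon).

DERIVES_FROM = {
    "nationality": "national",   # nationality < national
    "national": "nation",        # national < nation-N
    "development": "develop",    # development < develop-V
    "soaking": "soak",           # soaking < soak-V
    "eating": "eat",             # eating < eat-V
}

def derivation_steps(word):
    """Count how many derivational steps separate a word from its basic form."""
    steps = 0
    while word in DERIVES_FROM:
        word = DERIVES_FROM[word]
        steps += 1
    return steps, word

print(derivation_steps("nationality"))   # (2, 'nation')  -> a two-step word
print(derivation_steps("development"))   # (1, 'develop') -> a one-step word
```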

Zero-derivation

When one looks at two-step words such as ‘bridging’, it is clear that on the surface both one-step and two-step forms share the same structure, the ending ‘-ing’. ‘Bridging’ is a two-step word because it derives from the verb ‘bridge’, which in turn derives from the noun ‘bridge’. This can be expressed as: bridging < bridge-V < bridge-N. Here one can see that for such two-step words the verb root (bridge-V) is zero-derived from a basic noun (bridge-N).

Morphological derivation in which the derivational step is not overtly marked, as in this case (bridge-V from bridge-N), is called ‘zero-derivation’ (Aronoff, 1980).

 

Examples of Zero Derivations

    Noun            Verb
    the bridge      to bridge
    the knot        to knot
    the skate       to skate
    the bike        to bike
    the work        to work

 

Zero-derivation is a word class alternation: there are semantically related pairs of homophonous forms that differ in part of speech. ‘A bridge’ and ‘to bridge’, or ‘a knot’ and ‘to knot’, sound the same but are used in different contexts within a sentence; ‘bridge’ and ‘knot’ are used as nouns, while ‘to bridge’ and ‘to knot’ are used as verbs.

Some theories argue that ‘to knot’ is covertly derived from ‘knot’, in the same way that ‘development’ is derived from ‘develop’; in such a covert derivational relationship, the derived form is morphologically more complex than the base. Other theories argue that such pairs are two forms of a single lexeme that has no inherent word class. In an integrated approach, the grammar differentiates between the distinct forms of zero-related pairs on the basis of their underlying morphological relationships.
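To make the contrast between these two views a little more tangible, here is a minimal Python sketch of how zero-related pairs might be encoded under each one. The data structures are illustrative assumptions and do not reproduce the formal machinery of any particular theory.

```python
# A minimal sketch contrasting two ways one might encode zero-related pairs such as
# 'the bridge' / 'to bridge' (illustrative data structures only).

# View 1: covert derivation - the verb is derived from the noun by a zero affix,
# so the verb entry records a base and an (empty) affix.
covert_derivation = {
    "bridge-V": {"base": "bridge-N", "affix": ""},   # zero-derived: bridge-V < bridge-N
    "knot-V":   {"base": "knot-N",   "affix": ""},
}

# View 2: single lexeme - one underspecified entry listing both word classes,
# with the class fixed only by the sentence context.
single_lexeme = {
    "bridge": {"classes": ["N", "V"]},
    "knot":   {"classes": ["N", "V"]},
}

print(covert_derivation["bridge-V"]["base"])   # 'bridge-N'
print(single_lexeme["bridge"]["classes"])      # ['N', 'V']
```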

Processing morphologically complex derivations

Morphologically complex derivations are decomposed automatically as we process them mentally (Marslen-Wilson et al., 1994). How are such derived words processed in the brain? Decomposing morphologically complex derivations has been observed to increase brain activity (Gold & Rastle, 2007), and complex derivations elicit more activity than simple ones (Pliatsikas et al., 2014). Accordingly, more brain activity was reported for one-step and two-step forms than for simple forms within areas implicated in morphological processing, such as the left inferior frontal gyrus (LIFG). Similarly, two-step forms elicited more activity than one-step forms: activity in the LIFG was greater for two-step than for one-step nouns, accompanied by heightened activity in occipital regions and bilateral superior temporal regions.

What tasks have been used to investigate morphological processing? One approach is a single-word presentation paradigm with a lexical decision task: derived forms are presented one at a time, and participants decide via a button press whether the string they see on the screen is a word or not. This allows performance on two-step and one-step derived forms, as well as on novel derivations, to be compared. One study, for example, used an auditory lexical decision task in which legal novel derivations and already existing derivations were presented in Finnish (Leminen et al., 2010). The elicited responses were measured electrophysiologically, and an N400 effect, which signifies the subsequent mapping of lexical form onto meaning, was reported in both cases. This was taken as evidence for the effective parsing of novel derivations.
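For readers who would like to see the logic of such a task spelled out, here is a minimal console-based Python sketch of a visual lexical decision block. The stimulus list is invented for illustration, and real experiments use dedicated presentation software with precise timing and counterbalancing; this only demonstrates the word/nonword decision and response-time logic.

```python
# A minimal sketch of a lexical decision block (illustrative stimuli, console timing only).

import random
import time

STIMULI = [
    ("development", True),   # existing two-step derivation (word)
    ("soaking", True),       # existing one-step derivation (word)
    ("flurbment", False),    # made-up nonword for illustration
]

def run_block(stimuli):
    random.shuffle(stimuli)
    results = []
    for item, is_word in stimuli:
        print(f"\n{item}")
        start = time.monotonic()
        response = input("Word (w) or nonword (n)? ").strip().lower()
        rt = time.monotonic() - start
        correct = (response == "w") == is_word
        results.append({"item": item, "correct": correct, "rt_s": round(rt, 3)})
    return results

if __name__ == "__main__":
    for trial in run_block(STIMULI):
        print(trial)
```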

Another task used to study morphological processing is the masked priming lexical decision task, in which the first word of each pair (the prime) is preceded by a row of masking symbols, presented very briefly, and immediately followed by the second word (the target), to which a word/nonword lexical decision is made. Such a design makes it possible to compare the processing of real morphological pairs (e.g. cleaner-clean), non-morphological pairs (e.g. planet-plan) and pseudo-morphological pairs containing morpheme-like chunks (e.g. proper-prop) (Rastle et al., 2004). Morphological conditions such as semantically transparent derivations (development-develop) or identical words (table-table) can also be compared with form priming (e.g. scandal-scan) to establish morphological effects.
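The trial structure of such a masked priming experiment can be summarised as a small specification, as in the Python sketch below. The durations and condition labels are illustrative assumptions, not the parameters of any particular study.

```python
# A minimal sketch of how masked priming trials and their conditions might be specified
# (durations and labels are illustrative assumptions only).

from dataclasses import dataclass

@dataclass
class MaskedPrimingTrial:
    prime: str
    target: str
    condition: str
    mask_ms: int = 500    # forward mask, e.g. a row of symbols (assumed duration)
    prime_ms: int = 50    # prime shown too briefly for conscious report (assumed duration)
    # the target then stays on screen until the word/nonword decision

trials = [
    MaskedPrimingTrial("cleaner", "clean", "morphological"),
    MaskedPrimingTrial("planet",  "plan",  "non-morphological"),
    MaskedPrimingTrial("proper",  "prop",  "pseudo-morphological"),
    MaskedPrimingTrial("scandal", "scan",  "form"),
    MaskedPrimingTrial("table",   "table", "identity"),
]

for t in trials:
    print(f"{t.condition:22s} mask {t.mask_ms} ms -> prime '{t.prime}' {t.prime_ms} ms -> target '{t.target}'")
```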

In visual masked priming, morphologically related word pairs elicit either an attenuation of the N250 component or an attenuation of both the N250 and the N400 (Morris et al., 2008). Research comparing the priming of form-related, pseudo-derived and morphologically related word pairs revealed more priming by morphologically related pairs in the N250 and N400 latency ranges than by form-related and pseudo-derived pairs (Morris et al., 2007). The N250 and N400 effects both reflect the time course of complex-word processing, specifically the early stages of lexical processing.

Insights from research into morphological decomposition

Research on morphological processing using such paradigms shows that we access the morphological units of derived words during word recognition. The question that has been the focus of recent work is when exactly each unit is accessed and processed. Distinct units are thought to be processed and segmented at different phases of word recognition, and two processes that have been linked to the morphological decomposition of derived words over the course of word recognition are orthographic and semantic processes.

Research in this area has dealt, among other things, with the question of whether morphological units such as derivational affixes are just a by-product of statistically recurring orthographic chunks (the morpho-orthographic perspective), the outcome of a semantic analysis that affects the earliest stages of word recognition (the morpho-semantic perspective), or whether morphological units emerge at a point at which both orthographic and semantic knowledge are used at the same time. In the latter case, the processing of morphological units would actively follow both morpho-orthographic and morpho-semantic routes.

In terms of the observed response and activation patterns, the majority of studies on morphological processing support the decompositional dual-route perspective (Leminen et al., 2019). The location and latency of the morphological effects, however, vary considerably depending on the linguistic variables under investigation and on the paradigm used in each study. Because the results regarding the location of morphological effects conflict across studies, it has been suggested that the processing of derivationally complex words involves a network of brain areas, which includes regions specific to the modality in which stimuli are presented as well as the main language-related fronto-parietal areas.

It is therefore clear that in order to better understand derivational processing, future empirical research needs to use more uniform paradigms and stimulus characteristics that allow comparisons across studies investigating this subject in different languages. Only then can a thorough picture emerge of how derived words are processed at the temporal and spatial level, and of the precise neural processes underlying morphological decomposition.

 

References

Aronoff, M. (1980). Contextuals. Language, 56(4), 744–758.

Gold, B. T., & Rastle, K. (2007). Neural correlates of morphological decomposition during visual word recognition. Journal of Cognitive Neuroscience, 19(12), 1983–1993.

 

Leminen, A., Leminen, M. M., & Krause, C. M. (2010). Time course of the neural processing of spoken derived words: An event-related potential study. Neuroreport, 21, 948–952.

Leminen, A., Smolka, E., Duñabeitia, J. A., & Pliatsikas, C. (2019). Morphological processing in the brain: The good (inflection), the bad (derivation) and the ugly (compounding). Cortex, 116, 4–44.

Marslen-Wilson, W. D., Tyler, L. K., Waksler, R., & Older, L. (1994). Morphology and meaning in the English mental lexicon. Psychological Review, 101(1), 3–33.

 

Morris, J., Frank, T., Grainger, J., & Holcomb, P. J. (2007). Semantic transparency and masked morphological priming: An ERP investigation. Psychophysiology, 44, 506–521.

 

Morris, J., Grainger, J., & Holcomb, P. J. (2008). An electrophysiological investigation of early effects of masked morphological priming. Language and Cognitive Processes, 23, 1021–1056.

 

Pliatsikas, C., Wheeldon, L., Lahiri, A., & Hansen, P. C. (2014). Processing of zero-derived words in English: An fMRI investigation. Neuropsychologia, 53, 47–53.

 

Rastle, K., Davis, M. H., & New, B. (2004). The broth in my brother's brothel: Morpho-orthographic segmentation in visual word recognition. Psychonomic Bulletin & Review, 11, 1090–1098.

Monday, June 29, 2020

Speech perception abilities in early and late bilinguals



Among early L2 learners we can distinguish between sequential and simultaneous bilinguals. Early L2 learners can be contrasted with late L2 learners, who learned their second language relatively late in life. How do early and late L2 learners compare with native speakers in terms of L2 proficiency? In quiet, L2 speakers perform as well as native speakers on speech perception tasks (e.g. Nabelek & Donahue, 1984), whereas in background noise their speech perception in the second language is more affected than in the first language (Florentine, 1985a, b; Takata & Nabelek, 1990). This effect has been suggested to relate to listeners’ age (Bergman, 1980), the period of L2 learning (Florentine, 1985a, b), and the environmental conditions under which listening occurs (Takata & Nabelek, 1990).
Research by Florentine (1985b) showed that exposure to L2 from infancy onwards enabled two L2 listeners to perform as well as L1 speakers on speech perception tasks in increasing noise. By contrast, L2 listeners who had been exposed to L2 only after puberty did not perform at the level of L1 listeners of American English even after massive exposure. Moreover, these L2 listeners did not make use of contextual cues, in contrast to L1 listeners. These data are interpreted as indicating a sensitive period after which learning a second language negatively affects L2 listeners’ perception of L2 in noise. Thus, in noise, L2 speakers’ performance on speech perception tasks has been shown to depend on the age at which they acquired L2 (Florentine, 1985b; Mayo et al., 1997).
In speech perception tasks with noise, early L2 learners performed better and benefitted more from sentence-level contextual information than late but highly proficient L2 learners. However, early L2 learners’ ability to perceive L2 in noise has been suggested to be reduced relative to that of native listeners, owing to interference from L1 experience (Mayo et al., 1997). It has therefore been argued that early L2 learners’ better performance reflects the age at which L2 was acquired rather than the average length of L2 exposure. Consequently, if L2 learning does not begin in early childhood, L2 listeners will have difficulty perceiving L2 in noise even with extensive exposure. This is illustrated by early L2 learners showing higher noise-tolerance levels than late L2 learners, while L1 English listeners showed higher noise-tolerance levels than early L2 learners of English (Mayo et al., 1997). L1 listeners have thus been claimed to recover quickly from noise-induced disruption because of their linguistic knowledge of established L1 categories (Bradlow & Alexander, 2007), whereas late L2 learners cannot recover their noise-disrupted speech perception as quickly because their more limited linguistic knowledge of L2 slows their recovery (Bradlow & Alexander, 2007).
Does this mean that late L2 learners cannot attain nativelike L2 proficiency? Luckily, findings of nativelike attainment in late L2 learners and of non-nativelike attainment in early L2 learners suggest the answer is no (Birdsong, 1999, 2006; Bongaerts, 1999). For example, a study in which early, child learners of English with Vietnamese as their native language were tested on a grammaticality judgement task showed that they did not reach native English speakers’ level of performance: even these early learners had lasting grammatical accents, suggesting that some early L2 learners do not attain nativelike proficiency at all (McDonald, 2000). Early exposure to a second language therefore does not guarantee nativelike proficiency. In another study, native English speakers rated English speech samples for accent that had been produced by L1 Dutch speakers who started learning English at about the age of 12 (Bongaerts, 1999). About half of these proficient learners were mistaken for native speakers of English, suggesting that late L2 learners can attain nativelike proficiency in pronunciation. This possibility of successful late second-language learning was further supported by studies that replicated the finding with learners of languages not closely related to their L1, such as proficient Dutch L1 learners of French (Bongaerts, 1999) and late learners of Hebrew from various native-language backgrounds (Abu-Rabia & Kehat, 2004). Thus, an early start is not a prerequisite for acquiring unaccented L2 speech.
Physiological evidence provides additional support. In one study, a group of adults trained on an artificial language showed a pattern of brain activation during the processing of that language similar to the one observed when native speakers process a natural language (Friederici et al., 2002). In particular, the early negativity and late positivity (P600) elicited by syntactic violations point to swift automatic parsing followed by slower repair and reanalysis, the same processes observed when native speakers process syntactic violations. This finding suggests that early and late learners of a language draw on the same brain mechanisms when processing that language. These results are supported by other ERP studies showing that fluent L2 users evoke ERP patterns largely like those of native speakers (Hahne & Friederici, 2001), whereas differences in ERP patterns emerge when native and non-proficient speakers are compared (Ojima et al., 2005). Similarly, brain imaging studies found the same brain areas activated when fluent L2 speakers and native speakers process a language (Perani et al., 1998), whereas activation in different regions was observed when native and non-proficient speakers were compared (Dehaene et al., 1997).
Can you become bilingual?
The observation that our ability to learn the speech of different languages remains functional over the course of life is good news for all of us who have recently thought of taking up a second language. No matter what your age is, or which language you are considering learning, as long as you create the right learning environment for yourself, it is possible with effort. For example, I learnt my second language after learning my first, so I am a sequential bilingual. The fact that the two languages are rhythmically different and do not share the same sound system helped me learn both more efficiently, and to this day I am still picking up bits and pieces in both that help me refine my communication skills. The learning therefore never stops! Thus, if you train yourself intensively in perceiving and producing the sounds of the second language, stay motivated and enthusiastic about sounding nativelike, and get massive L2 input, it will be possible for you to achieve your aim of becoming a fluent L2 speaker.

References
 
Abu-Rabia, S., & Kehat, S. (2004). The critical period for second language pronunciation: Is there such a thing? Ten case studies of late starters who attained a native-like Hebrew accent. Educational Psychology, 24, 77-98.

Bergman, M. (1980). Aging and the perception of speech. Baltimore: University Park Press.

Birdsong, D. (1999). Introduction: Whys and why nots of the critical period hypothesis for second language acquisition. In D. Birdsong (Ed.), Second language acquisition and the critical period hypothesis (pp. 1-22). Mahwah, NJ: Erlbaum.

Birdsong, D. (2006). Age and second language acquisition and processing: A selective overview. Language Learning, 56, 9-48.

Bongaerts, T. (1999). Ultimate attainment in L2 pronunciation: The case of very advanced late L2 learners. In D. Birdsong (Ed.), Second language acquisition and the critical period hypothesis (pp. 133-149). Mahwah, NJ: Erlbaum.
Dehaene, S., Dupoux, E., Mehler, J., Cohen, L., Paulesu, E., Perani, D., et al. (1997). Anatomical variability in the cortical representation of first and second language. NeuroReport, 8, 3809-3815.

Florentine, M. (1985a). Non-native listeners’ perception of American-English in noise. Proceedings of Inter-Noise ’85, 1021-1024.

Florentine, M. (1985b). Speech perception in noise by fluent, non-native listeners. Proceedings of the Acoustical Society of Japan. H-85-16.

Hahne, A., & Friederici, A. D. (2001). Processing a second language: Late learners’ comprehension mechanisms as revealed by event-related brain potentials. Bilingualism: Language and Cognition, 4, 123-141.
 
Mayo, L. H., Florentine, M., & Buus, S. (1997). Age of second-language acquisition and perception of speech in noise. Journal of Speech, Language & Hearing Research, 40, 686–693.


McDonald, J.L. (2000). Grammaticality judgements in a second language: Influences of age of acquisition and native language. Applied Psycholinguistics, 21, 395-423.


Nabelek, A. K., & Donahue, A. M. (1984). Perception of consonants in reverberation by native and non-native listeners. Journal of the Acoustical Society of America, 75, 632–634.

Ojima, S., Nakata, H., & Kakigi, R. (2005). An ERP study of second language learning after childhood: Effects of proficiency. Journal of Cognitive Neuroscience, 17, 1212-1228.

Perani, D., Paulesu, E., Sebastian-Galles, N., Dupoux, E., Dehaene, S., Bettinardi, V., et al. (1998). The bilingual brain: Proficiency and age of acquisition of the second language. Brain, 121, 1841-1852.

Takata, Y., & Nabelek, A. K. (1990). English consonant recognition in noise and in reverberation by Japanese and American listeners. Journal of the Acoustical Society of America, 88, 663–666.
  
 

Language learning in bilinguals’ early development



One fact about humankind is that a large portion of the human beings on this earth speak more than one language. As mentioned before, our speech perception abilities undergo a reorganisation towards the end of the first year of life, in that our ability to differentiate between non-native contrasts decreases. The question therefore arises as to what impact a bilingual context has on this typical phonetic development. Does the impact vary depending on whether an infant learns the second language at the same time as the first language or shortly after it?

Bilinguals who were exposed to their first (L1) and second (L2) languages from birth are known as simultaneous bilinguals (Meisel, 1989). They differ from bilinguals who learned their second language once the lexical and phonological knowledge of their first language had already been partially established; those bilinguals are known as sequential bilinguals. While sequential bilingualism is considered to involve transfer from the first language onto the second, simultaneous bilingualism is marked by both languages evolving relatively autonomously from one another (de Houwer, 2005; Meisel, 1989, 2001).

Since children will have lost their sensitivity to non-native speech sounds by the end of the first year of life, in sequential bilingualism children exposed to the second language need to regain their sensitivity to non-native phonetic sounds, including sounds that are contrastive in the second language. For example, a child with Japanese as L1 who is now learning English as L2 would need to restore the perceptual difference between the speech sounds /r/ and /l/, which, in contrast to English, belong to the same phonetic category in Japanese (Goto, 1971; Miyawaki et al., 1975). This way, the child will be able to differentiate between /l/ and /r/ and understand that, for example, ‘lead’ and ‘read’ are English words that differ in meaning.

What does the situation look like for simultaneous bilinguals, who have grown up in a bilingual language environment from birth? Previous research showed that infants are able to create and learn phonetic categories and contrasts because they are sensitive to how the phonetic values of speech sounds are statistically distributed in a language (Maye, Werker and Gerken, 2002). What, then, is the impact of this sensitivity to the statistical distribution of speech sounds on infants who grow up in a bilingual context? If continued exposure to the contrasts of both languages is what matters, then by the age of 8 months simultaneous bilingual infants should have created two phonetic categories. However, a distributional overlap between two contrastive sounds in one language and an acoustically intermediate sound in the other language may give rise to a single extended phonetic category in simultaneous bilingual infants that includes all three speech sounds (Bosch & Sebastian-Galles, 2003a, b). The simultaneous bilingual infant would then have difficulty discriminating between these sounds.

This would imply that, compared with monolingual infants’ perceptual abilities, simultaneous bilingual infants’ ability to create language-specific contrastive categories is delayed by this cross-language distributional overlap of speech sounds. Researchers predicted that how frequently the relevant speech sounds occur in both languages may counteract the impact of this delay on bilingual infants’ discrimination ability (Sundara et al., 2008). They compared monolingual English, monolingual French and bilingual French-English infants’ ability to discriminate English and French instances of /d/ (Sundara et al., 2008). They found that, owing to the high frequency of the specific speech sounds tested, bilingual 10- to 12-month-olds, like monolingual English infants, were able to differentiate between French and English exemplars of /d/ despite the overlapping distributions of French and English /d/ (Sundara et al., 2008). Thus, apart from confirming previous evidence that statistical distributional learning assists infants in language learning (Saffran, 2003), the results suggest that whether bilinguals follow a different developmental trajectory from matched monolingual controls for a particular phonetic contrast depends on how frequently speech sounds from the relevant phonetic categories occur in actual speech, and on their cross-language distributional overlap.
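To illustrate the logic of distributional learning, here is a minimal Python sketch (assuming numpy and scikit-learn are available) that simulates two kinds of phonetic input along an acoustic dimension and asks whether one or two Gaussian categories fit the data better. The simulated values and parameters are invented for illustration and are not taken from the studies cited above.

```python
# A minimal sketch of distributional phonetic category learning on simulated
# voice-onset-time-like values (illustrative numbers only).

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Bimodal input: two clearly separated sound categories, as in a contrastive language.
bimodal = np.concatenate([rng.normal(10, 5, 500), rng.normal(60, 5, 500)]).reshape(-1, 1)

# Overlapping input: a second cluster sits close by, blurring the contrast
# (loosely analogous to the cross-language overlap discussed above).
overlapping = np.concatenate([rng.normal(10, 15, 500), rng.normal(35, 15, 500)]).reshape(-1, 1)

def preferred_number_of_categories(samples):
    """Compare 1- vs 2-component Gaussian mixtures by BIC (lower is better)."""
    bics = {k: GaussianMixture(n_components=k, random_state=0).fit(samples).bic(samples)
            for k in (1, 2)}
    return min(bics, key=bics.get), bics

print(preferred_number_of_categories(bimodal))      # typically prefers 2 categories
print(preferred_number_of_categories(overlapping))  # may prefer a single broad category
```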

References


Bosch, L., & Sebastian-Galles, N. (2003a). Language experience and the perception of a voicing contrast in fricatives: Infant and adult data. In M. J. Sole, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (pp. 1987-1990). Barcelona: Causal Productions.
 

De Houwer, A. (2005).  Early bilingual acquisition: Focus on morphosyntax and the Separate Development Hypothesis. In J.F. Kroll  & A. M. B. de Groot (Eds.), Handbook of bilingualism: Psycholinguistic approaches (pp.30-48). New York: Oxford University Press.

Goto, H. (1971). Auditory perception by normal Japanese adults of the sounds L and R. Neuropsychologia, 9, 317-323.

Maye, J., Werker, J. F., & Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82, B101–B111.
 
Meisel, J. (1989). Early differentiation of languages in bilingual children. In K. Hyltenstam & L. Obler (Eds.), Bilingualism across the lifespan. Aspects of acquisition, maturity and loss (pp. 13-40). Cambridge, UK: Cambridge University Press.



Meisel, J. M. (2001). The simultaneous acquisition of two first languages: Early differentiation and subsequent development of grammars. In J. Cenoz & F. Genesee (Eds.), Trends in bilingual acquisition (pp. 11-41). Amsterdam/Philadelphia: John Benjamins.



Miyawaki, K., Strange, W., Verbrugge, R., & Liberman, A. M. (1975). An effect of linguistic experience: The discrimination of [r] and [l] by native speakers of Japanese and English. Perception & Psychophysics, 18, 331-340.

Saffran, J.R. (2003). Statistical language learning: Mechanisms and constraints. Current Directions in Psychological Science, 12, 110-114.

Sundara, M., Polka, L., & Molnar, M. (2008). Development of coronal stop perception: Bilingual infants keep pace with their monolingual peers. Cognition, 108, 232–242.
     
 


The Expanded Natural History of Song Discography, A Global Corpus of Vocal Music

The article 'The Expanded Natural History of Song Discography, A Global Corpus of Vocal Music' has now been published in OpenMind an...