Werker, JF (Janet F)
Prenatal exposure to antidepressants and depressed maternal mood alter trajectory of infant speech perception.
Department of Pediatrics, University of British Columbia, Vancouver, BC, Canada V6H 3V4.
Language acquisition reflects a complex interplay between biology and early experience. Psychotropic medication exposure has been shown to alter neural plasticity and shift sensitive periods in perceptual development. Notably, serotonin reuptake inhibitors (SRIs) are antidepressant agents increasingly prescribed to manage antenatal mood disorders, and depressed maternal mood per se during pregnancy also impacts infant behavior, raising concerns about the long-term consequences of such developmental exposure. We studied whether infants' language development is altered by prenatal exposure to SRIs and whether such effects differ from those of exposure to maternal mood disturbances alone. Infants of mothers who were non-SRI-treated with little or no depression (control), depressed but non-SRI-treated (depressed-only), or depressed and treated with an SRI (SRI-exposed) were studied at 36 wk gestation (while still in utero) on a consonant and vowel discrimination task, and at 6 and 10 mo of age on nonnative speech and visual language discrimination tasks. Whereas the control infants responded as expected (success at 6 mo and failure at 10 mo), the SRI-exposed infants failed to discriminate the language differences at either age, and the depressed-only infants succeeded at 10 mo instead of 6 mo. Fetuses at 36 wk gestation in the control condition also performed as expected, responding to the vowel but not the consonant contrast, whereas the SRI-exposed fetuses showed accelerated perceptual development, discriminating both vowels and consonants. Thus, prenatal depressed maternal mood and SRI exposure were found to shift developmental milestones bidirectionally on infant speech perception tasks.
Psychol Sci. 2012 Jul 18. PMID: 22810164
Center for Brain and Cognition, Department of Technology, Universitat Pompeu Fabra.
The origins of the bilingual advantage in various cognitive tasks are largely unknown. We tested the hypothesis that bilinguals' early capacities to track their native languages separately and learn about the properties of each may be at the origin of such differences. Spanish-Catalan bilingual and Spanish or Catalan monolingual infants watched silent video recordings of French-English bilingual speakers and were tested on their ability to discern when the language changed from French to English or vice versa. The infants' performance was compared with that of previously tested French-English bilingual and English monolingual infants. Although all groups of monolingual infants failed to detect the change between English and French, both groups of bilingual infants succeeded. These findings reveal that bilingual experience can modulate the attentional system even without explicit training or feedback. They provide a basis for explaining the ontogeny of the general cognitive advantages of bilinguals.
Most cited papers:
Department of Brain and Cognitive Sciences, Meliora Hall, University of Rochester, Rochester, NY 14627, USA. firstname.lastname@example.org
For nearly two decades it has been known that infants' perception of speech sounds is affected by native language input during the first year of life. However, definitive evidence of a mechanism to explain these developmental changes in speech perception has remained elusive. The present study provides the first evidence for such a mechanism, showing that the statistical distribution of phonetic variation in the speech signal influences whether 6- and 8-month-old infants discriminate a pair of speech sounds. We familiarized infants with speech sounds from a phonetic continuum, exhibiting either a bimodal or unimodal frequency distribution. During the test phase, only infants in the bimodal condition discriminated tokens from the endpoints of the continuum. These results demonstrate that infants are sensitive to the statistical distribution of speech sounds in the input language, and that this sensitivity influences speech perception.
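To make the familiarization manipulation concrete, here is a minimal sketch (in Python) of bimodal versus unimodal presentation frequencies along a hypothetical 8-step phonetic continuum; the number of steps, the frequency weights, and the trial count are illustrative assumptions rather than the study's actual stimulus parameters.

import numpy as np

STEPS = np.arange(1, 9)  # hypothetical 8-step phonetic continuum (token 1 ... token 8)

# Relative presentation frequencies per continuum step (illustrative weights only).
BIMODAL = np.array([1, 4, 2, 1, 1, 2, 4, 1], dtype=float)   # two peaks, one near each endpoint
UNIMODAL = np.array([1, 2, 3, 4, 4, 3, 2, 1], dtype=float)  # a single central peak

def familiarization_sequence(weights, n_trials=64, seed=0):
    """Sample a familiarization sequence in which each continuum step
    appears in proportion to its frequency weight."""
    rng = np.random.default_rng(seed)
    return rng.choice(STEPS, size=n_trials, p=weights / weights.sum())

for condition, weights in (("bimodal", BIMODAL), ("unimodal", UNIMODAL)):
    sequence = familiarization_sequence(weights)
    counts = {int(step): int((sequence == step).sum()) for step in STEPS}
    print(condition, counts)

On the distributional-learning account summarized above, only the bimodal sequence provides evidence for two categories, so only infants familiarized with it would be expected to discriminate the endpoint tokens at test.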
Speech perception as a window for understanding plasticity and commitment in language systems of the brain.
Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC V6T 1Z4, Canada. email@example.com
In this article, we provide a critical review of the literature on speech perception and phonological processing in infancy, and in populations with different experiential histories, as a window to understanding how the notion of critical periods might apply to the acquisition of one part of language: the sound system. We begin by suggesting the use of the term "optimal period" because (a) both the onset (opening) and offset (closing) of openness to experience are variable rather than absolute and (b) phonological acquisition involves the emergence of a series of nested capabilities, each with its own sensitive period and each best explained at one of several different levels of specificity. In support, we cite evidence suggesting that to fully understand plasticity and commitment in phonological acquisition, it is necessary to consider not only the biological and experiential factors which may contribute to the onset and the offset of openness to experience but also how the sequentially developing parts of phonology constrain and direct development. In summary, we propose a nested, cascading model wherein biology, experience, and functional use each contribute.
Whitney M Weikum, Athena Vouloumanos, Jordi Navarra, Salvador Soto-Faraco, Núria Sebastián-Gallés, Janet F Werker
University of British Columbia, Vancouver, BC V6T 1Z4, Canada. firstname.lastname@example.org
This study shows that 4- and 6-month-old infants can discriminate languages (English from French) just from viewing silently presented articulations. By the age of 8 months, only bilingual (French-English) infants succeed at this task. These findings reveal a surprisingly early preparedness for visual language discrimination and highlight infants' selectivity for retaining only necessary perceptual sensitivities.
Department of Psychology, University of British Columbia, Canada. email@example.com
Do young infants treat speech as a special signal, compared with structurally similar non-speech sounds? We presented 2- to 7-month-old infants with nonsense speech sounds and complex non-speech analogues. The non-speech analogues retain many of the spectral and temporal properties of the speech signal, including the pitch contour information which is known to be salient to young listeners, and thus provide a stringent test for a potential listening bias for speech. Our results show that infants as young as 2 months of age listened longer to speech sounds. This listening selectivity indicates that early-functioning biases direct infants' attention to speech, granting speech a special status in relation to other sounds.
Dept. of Psychology, University of British Columbia, Vancouver, Canada. firstname.lastname@example.org
Several recent studies from our laboratory have shown that 14-month-old infants have difficulty learning to associate two phonetically similar new words to two different objects when tested in the Switch task. Because the infants can discriminate the same phonetic detail that they fail to use in the associative word-learning situation, we have argued that this word-learning failure results from a processing overload. Here we explore how infants perform in the Switch task with already known minimally different words. The experiment involved the same phonetic difference as used in our earlier word-learning studies. Following habituation to two familiar minimal pair object-label combinations (ball and doll), infants of 14 months looked longer to a violation in the object-label pairing (e.g., label 'ball' paired with object doll) than to an appropriate pairing. These results using well known words are consistent with the pattern of data recently obtained by Swingley and Aslin (2002) in which it was found that infants of 14 months look longer to the correct object when the accompanying well known word is spoken correctly rather than mispronounced. We discuss how these results are compatible with the limited resource explanation originally offered by Stager and Werker (1997).
Language experience and the organization of brain activity to phonetically similar words: ERP evidence from 14- and 20-month-olds.
Department of Psychology, Emory University, 532 Kilgo Circle, Atlanta, GA 30322, USA. email@example.com
The ability to discriminate phonetically similar speech sounds is evident quite early in development. However, inexperienced word learners do not always use this information in processing word meanings [Stager & Werker (1997). Nature, 388, 381-382]. The present study used event-related potentials (ERPs) to examine developmental changes from 14 to 20 months in brain activity important in processing phonetic detail in the context of meaningful words. ERPs were compared for three types of words: words whose meanings were known by the child (e.g., "bear"), nonsense words that differed from the known words by an initial phoneme (e.g., "gare"), and nonsense words that differed from the known words by more than one phoneme (e.g., "kobe"). The results supported the behavioral findings suggesting that inexperienced word learners do not use information about phonetic detail when processing word meanings: for the 14-month-olds, ERPs to known words (e.g., "bear") differed from ERPs to phonetically dissimilar nonsense words (e.g., "kobe") but did not differ from ERPs to phonetically similar nonsense words (e.g., "gare"), suggesting that known words and similar mispronunciations were processed as the same word. In contrast, for experienced word learners (i.e., 20-month-olds), ERPs to known words (e.g., "bear") differed from those to both types of nonsense words ("gare" and "kobe"). Changes in the lateral distribution of ERP differences to known and unknown (nonce) words between 14 and 20 months replicated previous findings. The findings suggested that vocabulary development is an important factor in the organization of neural systems linked to processing phonetic detail within the context of word comprehension.
Department of Psychology, McGill University, Canada. firstname.lastname@example.org
The nature and origin of the human capacity for acquiring language is not yet fully understood. Here we uncover early roots of this capacity by demonstrating that humans are born with a preference for listening to speech. Human neonates adjusted their high-amplitude sucking to preferentially listen to speech, compared with complex non-speech analogues that controlled for critical spectral and temporal parameters of speech. These results support the hypothesis that human infants begin language acquisition with a bias for listening to speech. The implications of these results for language and communication development are discussed. For a commentary on this article, see Rosen and Iverson (2007).
Department of Psychology, The University of British Columbia, 2136 West Mall, Vancouver, BC, Canada V6T 1Z4. email@example.com
Across the first year of life, infants show decreased sensitivity to phonetic differences not used in the native language [Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7, 49-63]. In an artificial language learning manipulation, Maye, Werker, and Gerken [Maye, J., Werker, J. F., & Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3), B101-B111] found that infants change their speech sound categories as a function of the distributional properties of the input. For such a distributional learning mechanism to be functional, however, it is essential that the input speech contain distributional cues to support such perceptual learning. To test this, we recorded Japanese and English mothers teaching words to their infants. Acoustic analyses revealed language-specific differences in the distributions of the cues used by mothers (or cues present in the input) to distinguish the vowels. The robust availability of these cues in maternal speech adds support to the hypothesis that distributional learning is an important mechanism whereby infants establish native language phonetic categories.
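As a purely illustrative companion to the distributional analysis described above, the following Python sketch compares how a single hypothetical acoustic cue (vowel duration in ms) is distributed across two sets of maternal productions; the choice of cue and all values are synthetic assumptions, not measurements from the study.

import statistics

# Hypothetical vowel-duration measurements (ms) for tokens of one vowel contrast,
# pooled across each group of mothers. All values are synthetic.
GROUP_A = [95, 102, 110, 98, 180, 190, 205, 188]    # tokens falling into two duration clusters
GROUP_B = [120, 135, 128, 142, 150, 138, 131, 145]  # tokens forming a single cluster

def two_cluster_spread(values):
    """Crude two-category check: mean within-cluster spread after splitting the
    ordered tokens in half; a large drop relative to the pooled spread suggests
    a bimodal (two-category) cue distribution."""
    ordered = sorted(values)
    half = len(ordered) // 2
    low, high = ordered[:half], ordered[half:]
    return (statistics.pstdev(low) + statistics.pstdev(high)) / 2

for label, values in (("Group A mothers", GROUP_A), ("Group B mothers", GROUP_B)):
    pooled = statistics.pstdev(values)
    within = two_cluster_spread(values)
    print(f"{label}: pooled sd = {pooled:.1f} ms, mean within-cluster sd = {within:.1f} ms")

In this toy comparison, Group A's pooled spread drops sharply once its tokens are split into two clusters, the kind of distributional signature that could support learning of a two-category contrast, whereas Group B's spread is already small and changes comparatively little.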
Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver BC, V6T 1Z4, Canada. firstname.lastname@example.org
By their first birthday, infants can understand many spoken words. Research in cognitive development has long focused on the conceptual changes that accompany word learning, but learning new words also entails perceptual sophistication. Several developmental steps are required as infants learn to segment, identify and represent the phonetic forms of spoken words, and map those word forms to different concepts. We review recent research on how infants' perceptual systems unfold in the service of word learning, from initial sensitivity for speech to the learning of language-specific sound patterns. Building on a recent theoretical framework and emerging new methodologies, we show how speech perception is crucial for word learning, and suggest that it bootstraps the development of a separate but parallel phonological system that links sound to meaning.
Department of Psychology, University of British Columbia, British Columbia, Vancouver, Canada. email@example.com
Six experiments tested young infants' sensitivity to vowel and gender information in dynamic faces and voices. Infants were presented with side-by-side displays of two faces articulating the vowels /a/ or /i/ in synchrony. The heard voice matched the gender of one face in some studies and the vowel of one face in other studies and, in some studies, vowel and gender were placed in conflict. Infants of age 4.5 months showed no evidence of matching face and voice on the basis of gender, but were able to ignore irrelevant gender information and match on the basis of the vowel. Robust evidence of the ability to match on the basis of gender was not evident until 8 months of age. This set of findings suggests that, when identical stimuli are used, gender matching does not emerge until a later age than does phonetic matching. Results are discussed in relation to key theories of intermodal development.