Babies can discriminate most speech sounds soon after birth, and by age 1 they have become language-specific listeners. But researchers are still trying to understand how infants work out which sound dimensions of their language are contrastive — a term linguists use for differences between speech sounds that can change the meaning of a word. For example, in English, [b] and [d] are contrastive, because changing the [b] in “ball” to a [d] makes it a different word, “doll.”
A recent paper in the Proceedings of the National Academy of Sciences (PNAS) by two computational linguists affiliated with the University of Maryland offers new insight into this question, which is central to understanding how children learn the sounds of their native language.
Their research shows that an infant’s ability to interpret sound differences as either contrastive or non-contrastive may come from the contexts in which the different sounds occur.
For a long time, researchers assumed there would be clear-cut differences in how contrastive sounds, such as short and long vowels in Japanese, are pronounced. However, although the two vowel lengths are distinct in careful laboratory speech, they are often much more ambiguous in natural speech.
“This is one of the first computational accounts of phonological learning shown to work on naturalistic data, suggesting that children could eventually learn which sound dimensions of their language are contrastive.”
Kasia Hitczenko, lead author of the paper
Hitczenko graduated from the University of Maryland in 2019 with a Ph.D. in linguistics. She is currently a postdoctoral researcher in the Laboratory of Cognitive Sciences and Psycholinguistics at the École Normale Supérieure in Paris.
Hitczenko’s work shows that children can distinguish contrastive sounds using contextual cues, such as neighboring sounds. Her team tested this theory in two case studies, each using a different definition of context, by comparing data from Japanese, Dutch, and French.
The researchers grouped speech by the contexts in which it occurred and produced plots summarizing the vowel durations found in each context. In Japanese, these vowel-duration distributions differed markedly across contexts, because some contexts contained more short vowels while others contained more long vowels. In French, the vowel-duration distributions were similar across all contexts.
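The intuition can be illustrated with a toy simulation. This is only a sketch, not the authors’ actual model: the duration values, the context setup, and the use of a simple difference in mean durations as the summary statistic are all invented for illustration. A “Japanese-like” language has contexts that favor short versus long vowels, so its per-context duration distributions diverge; a “French-like” language, where duration is not contrastive, shows similar distributions in every context.

```python
import random
import statistics

def sample_durations(short_frac, n=2000, seed=0):
    """Sample vowel durations (ms) from a mix of 'short' (~80 ms)
    and 'long' (~160 ms) vowels; short_frac sets the mixture weight."""
    rng = random.Random(seed)
    return [
        rng.gauss(80, 15) if rng.random() < short_frac else rng.gauss(160, 25)
        for _ in range(n)
    ]

def mean_gap(context_a, context_b):
    """Absolute difference in mean duration between two contexts --
    a crude summary of how much the distributions differ."""
    return abs(statistics.mean(context_a) - statistics.mean(context_b))

# "Japanese-like": contexts differ in how often short vs. long vowels occur.
jp_gap = mean_gap(sample_durations(0.8, seed=1), sample_durations(0.2, seed=2))

# "French-like": duration is not contrastive, so the contexts look alike.
fr_gap = mean_gap(sample_durations(0.5, seed=3), sample_durations(0.5, seed=4))

print(f"Japanese-like gap between contexts: {jp_gap:.1f} ms")  # large
print(f"French-like gap between contexts:  {fr_gap:.1f} ms")   # near zero
```

A learner tracking such distributions could, in principle, infer that duration is contrastive when the per-context distributions differ and non-contrastive when they do not, which is the pattern the study reports.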
The paper’s second author is Naomi Feldman, a professor of linguistics with a joint appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS).
Feldman adds that the cue they studied is present across many languages, so their result may generalize to other contrasts.
The recently published research extends Hitczenko’s Ph.D. thesis, which examined how context can be used for phonological learning and the perception of natural speech.
Feldman was Hitczenko’s academic advisor at Maryland, where they conducted much of this research in the Computational Linguistics and Information Processing Laboratory, which is supported by UMIACS.
Hitczenko, K., & Feldman, N. H. (2022). Naturalistic speech supports distributional learning across contexts. PNAS. https://doi.org/10.1073/pnas.2123230119