We experience music and speech differently from what has long been assumed. This is the conclusion of a study conducted by researchers at Linköping University, Sweden, and Oregon Health & Science University, USA. The results, published in Science Advances, may make it possible to design better cochlear implants.
We are social creatures. The sound of other people’s voices is important to us, and our hearing is geared toward detecting and distinguishing voices and speech. Sound that reaches the outer ear is transmitted via the eardrum to the spiral-shaped inner ear, known as the cochlea. The cochlea houses the sensory cells of hearing, the outer and inner hair cells. Sound waves cause the hair-like bundles on the inner hair cells to bend, sending a signal through the auditory nerve to the brain, which interprets the sound we hear.
For the past 100 years, it has been assumed that each sensory cell has its own “optimal frequency” (frequency is the number of sound waves per second, measured in hertz, Hz), at which the hair cell responds most strongly. On this view, a sensory cell with an optimal frequency of 1,000 Hz responds more weakly to sounds of slightly lower or higher frequency, and all parts of the cochlea are assumed to work in the same way. The research team has now discovered that this is not the case for the sensory cells that process sound below 1,000 Hz, which counts as low-frequency sound. The vowel sounds of human speech lie in this region.
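The classical “optimal frequency” idea can be illustrated with a toy model (not taken from the study itself): each cell’s response is sketched here as a bell curve in log-frequency around its best frequency, with a made-up bandwidth parameter chosen purely for illustration.

```python
import math

def tuning_response(freq_hz, best_freq_hz, bandwidth_octaves=0.5):
    """Toy model of classical frequency tuning: the response falls off
    as a Gaussian in log-frequency (octaves) around the cell's
    optimal (best) frequency. Bandwidth value is illustrative only."""
    distance_octaves = math.log2(freq_hz / best_freq_hz)
    return math.exp(-0.5 * (distance_octaves / bandwidth_octaves) ** 2)

# A cell tuned to 1,000 Hz responds maximally at exactly 1,000 Hz...
print(tuning_response(1000, 1000))  # 1.0
# ...and more weakly to slightly lower or higher frequencies.
print(tuning_response(800, 1000) < 1.0)   # True
print(tuning_response(1200, 1000) < 1.0)  # True
```

This is the picture the new findings complicate for the low-frequency part of the cochlea, where many cells respond together rather than each sticking to its own narrow band.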
“Our study shows that many cells in the inner ear react synchronously to low-frequency sound. We think this makes low-frequency sounds easier to perceive than they would otherwise be, since the brain receives information from many sensory cells at the same time,” says Anders Fridberger, professor in the Department of Biomedical and Clinical Sciences at Linköping University.
The researchers believe that this construction makes the hearing system more robust: if some sensory cells are damaged, many others remain that can still send nerve impulses to the brain.
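Why pooling signals from many cells at once could help is a general statistical point, sketched here as a toy simulation (my illustration, not the study’s model): averaging many independently noisy reports of the same weak signal gives a much steadier estimate than relying on a single cell.

```python
import random
import statistics

random.seed(0)  # reproducible toy simulation

def pooled_report(signal, n_cells, noise_sd=1.0):
    """Toy model: each cell reports the same weak signal plus its own
    independent noise; the brain is modeled as averaging the reports."""
    reports = [signal + random.gauss(0, noise_sd) for _ in range(n_cells)]
    return statistics.mean(reports)

# Repeat the "experiment" many times with 1 cell vs. 100 cells.
trials_1 = [pooled_report(0.2, 1) for _ in range(1000)]
trials_100 = [pooled_report(0.2, 100) for _ in range(1000)]

# The pooled estimate varies far less from trial to trial,
# so the weak signal is much easier to pick out.
print(statistics.stdev(trials_1))    # ~1.0
print(statistics.stdev(trials_100))  # ~0.1
```

The same redundancy explains the robustness point: losing some of the 100 cells degrades the average only gradually.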
It’s not just the vowel sounds in human speech that fall into the low-frequency region: many of the sounds that make up music lie here, too. Middle C on a piano, for example, has a frequency of 262 Hz.
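The 262 Hz figure for Middle C follows from standard equal-temperament tuning, where each semitone multiplies the frequency by the twelfth root of 2 relative to the A4 = 440 Hz reference:

```python
# Equal-temperament pitch: f = 440 * 2^(n/12),
# where n is the number of semitones above (or below) A4 = 440 Hz.
def note_freq(semitones_from_a4):
    return 440.0 * 2 ** (semitones_from_a4 / 12)

# Middle C (C4) lies 9 semitones below A4.
print(note_freq(-9))         # 261.625...
print(round(note_freq(-9)))  # 262
```

So Middle C, like most of the fundamental frequencies in music, falls well inside the sub-1,000 Hz region the study examined.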
These findings may ultimately be important for people with severe hearing impairment. The most successful treatment currently available in such cases is a cochlear implant, in which electrodes are placed in the cochlea.
“The design of current cochlear implants is based on the assumption that each electrode should stimulate only the nerves that handle certain frequencies, in a way that attempts to mimic what was previously thought about how our hearing system works. We suggest changing the method of stimulation at low frequencies to one that more closely resembles natural stimulation, which should improve the user’s hearing experience,” says Anders Fridberger.
The researchers now plan to examine how to apply their new knowledge in practice. One of these projects concerns new ways to stimulate the low-frequency parts of the cochlea.
These results come from experiments on the cochleae of guinea pigs, whose hearing in the low-frequency region is similar to that of humans. The work was funded by the US National Institutes of Health and the Swedish Research Council.