However, even now, after 150 years of development, the sound we hear from even a sophisticated sound system falls far short of what we hear when we are physically present at a live music performance. At such an event, we are in a natural sound field and can readily perceive that the sounds of different instruments come from different locations, even when the sound field is crisscrossed with mixed sound from multiple instruments. There's a reason people pay so much to hear live music: it's more enjoyable, more exciting, and generates a greater emotional impact.

Today, researchers, companies, and entrepreneurs, including us, are finally getting close to recorded audio that truly re-creates a natural sound field. The group includes large companies, such as Apple and Sony, as well as smaller firms, such as Creative. Netflix recently disclosed a partnership with Sennheiser under which the network has begun using a new system, Ambeo 2-Channel Spatial Audio, to heighten the sonic realism of such TV shows as “Stranger Things” and “The Witcher.”

There are now at least six different approaches to producing highly realistic audio. We use the term “soundstage” to distinguish our work from other audio formats, such as the ones referred to as spatial audio or immersive audio. These can represent sound with more spatial effect than ordinary stereo, but they do not usually include the detailed sound-source location cues that are needed to reproduce a truly convincing sound field.

We believe that soundstage is the future of music recording and reproduction. But before such a sweeping revolution can occur, it will be necessary to overcome an enormous obstacle: that of conveniently and inexpensively converting the countless hours of existing recordings, regardless of whether they are mono, stereo, or multichannel surround sound (5.1, 7.1, and so on). No one knows exactly how many songs have been recorded, but according to the entertainment-metadata concern Gracenote, more than 200 million recorded songs are available now on planet Earth. Given that the average duration of a song is about 3 minutes, this is the equivalent of about 1,100 years of music.
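
That 1,100-year figure is easy to verify with a quick back-of-the-envelope check:

```python
songs = 200_000_000        # recorded songs available, per Gracenote
minutes_per_song = 3       # average song duration

total_minutes = songs * minutes_per_song
years = total_minutes / (60 * 24 * 365)
print(f"{years:,.0f} years of music")   # ~1,141 years, i.e. roughly 1,100
```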

After a recording has been separated into its component tracks, the next step is to remix them into a soundstage recording. This is accomplished by a soundstage signal processor. This soundstage processor performs a complex computational function to generate the output signals that drive the speakers and produce the soundstage audio. The inputs to the generator include the isolated tracks, the physical locations of the speakers, and the desired locations of the listener and sound sources in the re-created sound field. The outputs of the soundstage processor are multichannel signals, one for each channel, to drive the multiple speakers.
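
To make that data flow concrete, here is a minimal sketch in Python of the processor's inputs and outputs. The names and the trivial distance-based mixing are our own illustration (real acoustic modeling and the listener's position are omitted), not the actual 3D Soundstage algorithm:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class SourceTrack:
    audio: np.ndarray   # mono samples of one isolated instrument or voice
    position: tuple     # desired (x, y, z) location in the sound field, meters

def render_soundstage(tracks, speaker_positions):
    """Mix isolated tracks into one output signal per speaker.

    A stand-in for the real computation: an actual soundstage processor
    models sound-wave propagation, interference, and psychoacoustics,
    whereas here each track is merely weighted by its distance to each
    speaker, to show the shape of the inputs and outputs.
    """
    n_samples = max(len(t.audio) for t in tracks)
    outputs = np.zeros((len(speaker_positions), n_samples))
    for t in tracks:
        for ch, spk in enumerate(speaker_positions):
            dist = np.linalg.norm(np.subtract(t.position, spk))
            gain = 1.0 / (1.0 + dist)   # crude distance-based weighting
            outputs[ch, : len(t.audio)] += gain * t.audio
    return outputs  # one channel per speaker
```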

The sound field can be in a physical space, if it is generated by speakers, or in a virtual space, if it is generated by headphones or earphones. The function performed within the soundstage processor is based on computational acoustics and psychoacoustics, and it takes into account sound-wave propagation and interference in the desired sound field, as well as the head-related transfer functions (HRTFs) of the listener.

For example, if the listener is going to use earphones, the generator selects a set of HRTFs based on the configuration of the desired sound-source locations, and then uses the selected HRTFs to filter the isolated sound-source tracks. Finally, the soundstage processor combines all the HRTF outputs to generate the left and right tracks for the earphones. If the music is going to be played back on speakers, at least two are needed, but the more speakers there are, the better the sound field. The number of sound sources in the re-created sound field can be more or less than the number of speakers.
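
In the earphone case, the core operation is ordinary filtering: each isolated track is convolved with the pair of head-related impulse responses (the time-domain form of HRTFs) chosen for its desired direction, and the per-ear results are summed. A minimal sketch, assuming you already have one HRIR pair per source from a measured HRTF set:

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_mix(tracks, hrir_pairs):
    """Render isolated mono tracks into a stereo (left, right) pair.

    tracks:     list of 1-D numpy arrays, one per isolated sound source
    hrir_pairs: list of (hrir_left, hrir_right) arrays, one pair per
                source, selected for that source's desired direction
    """
    left = np.zeros(0)
    right = np.zeros(0)
    for audio, (h_l, h_r) in zip(tracks, hrir_pairs):
        l = fftconvolve(audio, h_l)   # filter the track for the left ear
        r = fftconvolve(audio, h_r)   # filter the track for the right ear
        n = max(len(left), len(l))
        left = np.pad(left, (0, n - len(left))) + np.pad(l, (0, n - len(l)))
        n = max(len(right), len(r))
        right = np.pad(right, (0, n - len(right))) + np.pad(r, (0, n - len(r)))
    return left, right
```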

We released our first soundstage app, for the iPhone, in 2020. It lets listeners configure, listen to, and save soundstage music in real time; the processing causes no discernible time delay. The app, called 3D Musica, converts stereo music from a listener's personal music library, the cloud, or even streaming music to soundstage in real time. (For karaoke, the app can remove vocals, or take out any isolated instrument.)

Earlier this year, we opened a Web portal, 3dsoundstage.com, that provides all the features of the 3D Musica app in the cloud, plus an application programming interface (API) making the features available to streaming-music providers and even to users of any popular Web browser. Anyone can now listen to music in soundstage on essentially any device.
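
Purely as an illustration of how a streaming provider might drive such an API, here is a hypothetical request; the endpoint path and parameter names below are invented for this sketch and are not the documented 3D Soundstage API:

```python
import requests

# Hypothetical endpoint and parameters -- for illustration only;
# consult 3dsoundstage.com for the actual API.
with open("song.wav", "rb") as f:
    resp = requests.post(
        "https://3dsoundstage.com/api/v1/convert",   # invented path
        files={"audio": f},
        data={"output": "binaural"},
    )
resp.raise_for_status()
with open("song_soundstage.wav", "wb") as out:
    out.write(resp.content)
```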

When sound travels to your ears, the unique characteristics of your head (its physical shape, the shape of your outer and inner ears, and even the shape of your nasal cavities) alter the audio spectrum of the original sound.

We've also developed separate versions of the 3D Soundstage software for vehicles and for home audio systems and devices to re-create a 3D sound field using two, four, or more speakers. Beyond music playback, we have high hopes for this technology in videoconferencing. Many of us have had the fatiguing experience of attending videoconferences in which we had trouble hearing other participants clearly or were confused about who was speaking. With soundstage, the audio can be configured so that each person is heard coming from a distinct location in a virtual room. Or the “location” can simply be assigned depending on the person's position in the grid typical of Zoom and other videoconferencing applications. For some, at least, videoconferencing will be less fatiguing and speech will be more intelligible.
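
As a sketch of that second idea, here is how one might map a participant's position in a typical videoconference grid to an azimuth angle for soundstage placement; the mapping itself is our own illustration:

```python
def grid_to_azimuth(col, n_cols, spread_deg=60.0):
    """Map a participant's column in the video grid to an azimuth angle
    in degrees (0 = straight ahead, negative = left, positive = right).

    The grid row could similarly drive elevation or distance.
    """
    if n_cols == 1:
        return 0.0
    frac = col / (n_cols - 1)   # 0.0 = leftmost column, 1.0 = rightmost
    return (frac - 0.5) * spread_deg

# Example: the middle participant in a 3-wide grid sits dead center.
print(grid_to_azimuth(col=1, n_cols=3))   # -> 0.0
```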

Just as audio moved from mono to stereo, and from stereo to surround and spatial audio, it is now moving to soundstage. In those earlier eras, audiophiles evaluated a sound system by its fidelity, based on such parameters as bandwidth, harmonic distortion, data resolution, response time, lossless or lossy data compression, and other signal-related factors. Now, soundstage can be added as another dimension of sound fidelity, and, dare we say, the most fundamental one. To human ears, the impact of soundstage, with its spatial cues and gripping immediacy, is much more significant than incremental improvements in fidelity. This extraordinary feature offers capabilities that go beyond the experience of even the most discerning audiophile.

Technology has fueled previous revolutions in the audio industry, and now it is unleashing another one. Artificial intelligence, virtual reality, and digital signal processing are leveraging psychoacoustics to give audiophiles capabilities they never had before. At the same time, these technologies are giving record companies and artists new tools that will breathe new life into old recordings and open new horizons for creativity. Finally, the century-old goal of convincingly recreating the sounds of a concert hall has been achieved.

This article appears in the October 2022 print issue as “How to Get Audio Back.”
