Human head simulators that talk and listen to each other help researchers study the acoustic properties of the head and design better audio devices.

Imagine a cocktail party filled with 3D-printed humanoid robots listening and talking to each other. This seemingly sci-fi scene is the goal of the Augmented Listening Laboratory at the University of Illinois Urbana-Champaign. Realistic talking (and listening) heads are essential both for investigating how humans receive sound and for developing audio technology.

The team will describe its talking human head simulators in the presentation "3D Printed Vocal Head Simulators That Talk and Move" on May 8 (Eastern U.S. time) in the Northwestern/Ohio State Room at the Chicago Marriott Downtown Magnificent Mile. The talk is part of the 184th Meeting of the Acoustical Society of America, which runs May 8-12.

Algorithms used to improve human hearing must take into account the acoustic characteristics of the human head. For example, hearing aids adjust the sound delivered to each ear to create a more realistic listening experience. For that adjustment to succeed, the algorithm must accurately account for the differences in arrival time and amplitude of the sound at each ear.
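The article does not describe the researchers' algorithms, but the two cues mentioned above, the differences in arrival time and in amplitude between the ears, are commonly called the interaural time difference (ITD) and the interaural level difference (ILD). The following is a minimal, illustrative Python sketch of how such cues could be estimated from a two-channel recording, for example from microphones placed in a head simulator's ears; the function and the test signal are hypothetical and are not taken from the lab's code.

```python
import numpy as np

def interaural_cues(left: np.ndarray, right: np.ndarray, fs: int):
    """Estimate the interaural time difference (ITD, seconds) and the
    interaural level difference (ILD, decibels) from two ear signals."""
    # ITD: lag of the cross-correlation peak between the two ear signals.
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)  # samples by which the left ear leads
    itd = lag / fs                           # positive: sound reaches the left ear first

    # ILD: energy ratio between the two ears, expressed in decibels.
    ild = 10.0 * np.log10(np.sum(left**2) / np.sum(right**2))
    return itd, ild

# Hypothetical test: a noise burst that reaches the left ear 0.5 ms earlier
# and about 6 dB louder than the right ear.
fs = 48_000
rng = np.random.default_rng(0)
burst = rng.standard_normal(fs // 10)                      # 100 ms of noise
delay = int(0.0005 * fs)                                   # 0.5 ms in samples
left = 2.0 * burst                                         # louder and earlier
right = np.concatenate([np.zeros(delay), burst[:-delay]])  # delayed, quieter copy
print(interaural_cues(left, right, fs))                    # roughly (0.0005 s, 6 dB)
```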

It is important to study human listening in natural settings, such as cocktail parties, where many conversations occur simultaneously.

"Simulating realistic conversation-enhancement scenarios often requires hours of recording with human subjects. The whole process can be tiring for the subjects, and it is extremely difficult for a subject to remain completely still between and during recordings, which affects the acoustic measurements," said Austin Lu, one of the students on the team. "Acoustic head simulators can overcome both drawbacks. They can be used to create large data sets through continuous recording, and they are guaranteed to remain still."

Because the researchers have precise control over the simulated subject, they can adjust the parameters of the experiment and even set the machines in motion to simulate neck movements.

In a feat of design and engineering, the heads are 3D printed as separate components and then assembled, enabling customization at low cost. The highly detailed ears are fitted with microphones at several positions to simulate both human hearing and Bluetooth earbuds. A "talkbox," or mouth loudspeaker, closely mimics the human voice. To allow the heads to move, the researchers paid special attention to the neck. Because the 3D model of the head is open source, other teams can download and modify it as needed. The falling cost of 3D printing means the barrier to manufacturing these heads is relatively low.

"Our head simulator project is the culmination of work done by many students with very diverse backgrounds," said Manan Mittal, a graduate researcher on the team. "Projects like this are the result of interdisciplinary research that requires engineers to work with designers."

The Augmented Listening Laboratory has also created wheeled and motorized mounts to simulate more complex movement and locomotion.
