AVP3: Sounding the Call for Acoustic Simulation

Acoustically advanced virtualization of products and production processes (AVP3)

Today's engineers use simulation to try to understand the behavior and characteristics of almost everything related to product development. Now the Fraunhofer Institute for Digital Media Technology (IDMT) is hard at work trying to add sound to the simulation mix and make 3D models audible.

As part of the Acoustically Advanced Virtualization of Products and Production Processes (AVP3) project, five companies and three research institutions are collaborating on an attempt to virtualize acoustic representations of products and production processes so teams can do the earliest possible evaluation and optimization of their auditory characteristics, according to Christoph Sladeczek, head of IDMT’s Virtual Acoustics research group. The team’s work is not only to leverage simulation to reduce unwanted sounds, but also to put engineers in the position of evaluating the best possible sound as part of the total product experience, he explains.

“Design of product sound acts as audible brand recognition, so acoustic product development is becoming more and more important,” Sladeczek says. “While a lot of companies already take care of acoustic product design without promoting it, the process is not fully digital today.”

Typically, design teams have to build physical prototypes to measure and evaluate acoustic behavior and potential improvements—a process that can be time consuming and expensive. Therefore, Sladeczek and his team argue that a series of digital acoustic product development tools should be part of the overall virtual product development life cycle.

To do so, Fraunhofer IDMT is leveraging its expertise in the field of spatial sound reproduction using wave field synthesis to devise methods and techniques that link the 3D visualization of prototypes with their respective, authentic sounds. In this approach, the sound signals produced by a machine are digitized using acoustic models, and the signals are then made audible with SpatialSound Wave, the institute's solution for producing and replaying three-dimensional sound.
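The article does not detail IDMT's acoustic models, but the general idea of generating a machine's sound from a model rather than a recording can be sketched with a toy example: a rotating machine radiating energy at harmonics of its shaft frequency. The function name, the harmonic amplitudes, and the model itself are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def synthesize_machine_tone(rpm, harmonics, fs=48000, duration=1.0):
    """Toy acoustic model: a rotating machine radiates sound at
    harmonics of its shaft frequency. `harmonics` maps harmonic
    number -> linear amplitude. Purely illustrative."""
    f0 = rpm / 60.0                       # shaft frequency in Hz
    t = np.arange(int(fs * duration)) / fs
    signal = np.zeros_like(t)
    for n, amp in harmonics.items():
        signal += amp * np.sin(2 * np.pi * n * f0 * t)
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal  # normalize to [-1, 1]

# e.g. a hypothetical 3000 rpm motor with strong 1st and 2nd harmonics
audio = synthesize_machine_tone(3000, {1: 1.0, 2: 0.5, 4: 0.1})
```

In a real pipeline, a signal like this would be fed to a spatial rendering system such as SpatialSound Wave rather than played back directly.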

Currently, acoustic simulation software evaluates acoustic product behavior and depicts it as numerical data, often represented visually through heat maps and the like. However, visual representation can't tell an engineer anything about how a product actually sounds, Sladeczek says. "To know anything about a product sound, the numerical data needs to be auralized considering human hearing senses … so new technologies for auralization are needed," he says.
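One small, well-established piece of "considering human hearing" is frequency weighting: the ear is far less sensitive at low frequencies, so simulated sound-pressure levels are often corrected with the standard A-weighting curve (IEC 61672) before being judged. The sketch below is only this one ingredient, not a full auralization system.

```python
import math

def a_weighting_db(f):
    """A-weighting correction in dB at frequency f (Hz), per the
    IEC 61672 analytic formula; approximates the frequency
    sensitivity of human hearing (~0 dB at 1 kHz)."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * math.log10(ra) + 2.0

# Low frequencies from a simulation count far less perceptually:
# a 100 Hz component is attenuated by roughly 19 dB.
correction = a_weighting_db(100.0)
```

Full auralization goes much further, turning the weighted spectra back into audible time signals, which is where the new technologies Sladeczek describes come in.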

The group’s research aims to deliver a standardized format for exchanging acoustical data, along with auralization methods and a reproduction system. In addition, to make auralization simulation a reality, vendors will need to evolve systems like virtual reality solutions with capabilities for multi-sensory experiences, including acoustic properties.

In this vein, one particular challenge will be correctly simulating the acoustics of virtual prototypes by taking into account the different perspectives from which they are viewed and heard. "The sound of a virtual object must be as realistic as possible in order to be able to correctly assess its acoustic properties and behavior from any direction," said Sandra Brix, who manages Fraunhofer IDMT's researchers on the project.
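The simplest physical effects behind that position dependence can be sketched for a free-field point source: amplitude falls off as 1/r with distance, and the sound arrives after a propagation delay of r/c. This is a minimal stand-in, under idealized free-field assumptions, for the far richer direction-aware rendering the project targets.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def point_source_at_listener(src, listener, c=SPEED_OF_SOUND):
    """Free-field point source heard at `listener`: gain falls off
    as 1/r (spherical spreading) and the signal arrives r/c seconds
    later. Coordinates are 3D positions in meters."""
    r = math.dist(src, listener)
    gain = 1.0 / max(r, 1e-6)   # 1/r spreading loss
    delay = r / c               # propagation delay in seconds
    return gain, delay

g1, _ = point_source_at_listener((0, 0, 0), (2, 0, 0))
g2, _ = point_source_at_listener((0, 0, 0), (4, 0, 0))
# doubling the distance halves the amplitude (a 6 dB drop)
```

A wave-field-synthesis system like SpatialSound Wave recomputes cues of this kind for a whole loudspeaker array so the virtual source stays stable as the listener moves around it.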

Check out this video to learn more about Fraunhofer IDMT’s SpatialSound Wave.

About Beth Stackpole

Beth Stackpole is a contributing editor to Digital Engineering. Send e-mail about this article to DE-Editors@digitaleng.news.
