In the world of ICT, innovation strategies increasingly focus on applications that exploit real-time interaction with, and within, real and virtual reality (VR). One of the goals is to develop technologies for accurate and realistic artificial multimodal rendering. This research project contributes to a next-generation 3D audio framework able to extract salient acoustic information about the user and the environment through headphone-embedded microphones, enabling (I) the training of parametric head-related transfer function models, (II) dynamic headphone compensation, and (III) room acoustic modeling.
The fusion of virtual/real auditory information with visual information is validated on an innovative platform that will serve as an enabling technology for personalized virtual acoustic reality. A variety of applications currently under study in the Multisensory Experience Lab will benefit from these technologies: VR musical instruments, VR for education, and VR for health and rehabilitation, to name but a few.
The project was funded by an AAU internationalization grant (2017-2019) of the 2016-2021 strategic program “Knowledge for the World,” awarded to Michele Geronazzo by Aalborg University. The project also involved collaborations with international researchers, including Federico Avanzini (University of Milano), Paola Cesari (University of Verona), Jari Kleimola (Hefio Ltd.), and Lauri Savioja (Aalto University).
Selected publications:
- Auditory Feedback for Navigation with Echoes in Virtual Environments: Training Procedure and Orientation Strategies
- Sonic Interactions in Virtual Reality: State of the Art, Current Challenges, and Future Directions
- Do We Need Individual Head-Related Transfer Functions for Vertical Localization? The Case Study of a Spectral Notch Distance Metric
- The Impact of an Accurate Vertical Localization with HRTFs on Short Explorations of Immersive Virtual Reality Scenarios