Wave field synthesis interaction with the listening environment: improvements in the reproduction of virtual sources situated inside the listening room
Holophonic sound reproduction using Wave Field Synthesis (WFS) aims at recreating a spatialized virtual sound scene over an extended listening area. Applying this technique to synthesize virtual sources located within an indoor environment can create striking audio effects in the context of virtual or augmented reality applications. However, interactions of the synthesized sound field with the listening room must be taken into account, because they modify the resulting sound field. This paper enumerates some of these interactions for different virtual scene configurations and applications. Particular attention is paid to the reproduction of the sound source's directivity and to the reproduction of a room effect coherent with the real environment. Solutions for synthesizing the directivity of the source and the associated room effect are proposed and discussed on the basis of simulations, developments, and a first perceptual validation.
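The idea of rendering a source's directivity can be illustrated in a strongly simplified form: weight each loudspeaker feed by the virtual source's radiation pattern evaluated in that speaker's direction. The sketch below is a toy model under invented assumptions (a first-order cardioid pattern, a linear array, free-field decay), not the formulation used in the paper:

```python
import numpy as np

def directive_source_gains(source_xy, aim_deg, speaker_xy, order=1):
    """Toy model of source-directivity rendering on a WFS array:
    weight each loudspeaker feed by a cardioid-family pattern
    evaluated in the direction from the virtual source to that
    speaker, combined with a 1/sqrt(r) distance decay.
    (Illustrative only; not the paper's formulation.)"""
    vec = speaker_xy - source_xy
    angles = np.arctan2(vec[:, 1], vec[:, 0]) - np.radians(aim_deg)
    pattern = (0.5 * (1.0 + np.cos(angles))) ** order  # cardioid lobe
    r = np.linalg.norm(vec, axis=1)
    return pattern / np.sqrt(np.maximum(r, 1e-3))

# 16-speaker linear array along x (0.2 m spacing) at y = 0;
# virtual source 1 m behind the array, aimed toward it (+90 degrees)
speakers = np.stack([np.arange(16) * 0.2, np.zeros(16)], axis=1)
gains = directive_source_gains(np.array([1.5, -1.0]), 90.0, speakers)
```

The on-axis speakers (those the source "faces") receive the largest weights; turning `aim_deg` steers the virtual source's main lobe across the array.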
Monitoring distance effect with wave field synthesis
Wave Field Synthesis (WFS) rendering allows the reproduction of virtual point sources. Depending on source positioning, the wave front synthesized in the listening area exhibits a given curvature that is responsible for a sensation of spatial perspective. It is therefore possible to monitor the distance of a source with a "holophonic distance" parameter, concurrently with conventional distance cues based on the control of the direct-to-reverberant ratio. This holophonic distance is presented and then discussed in the context of authoring sound scenes for WFS installations. This work has three goals: introducing WFS to sound engineers in an active listening test in which they manipulate different parameters to construct a sound scene; assessing the perceptual relevance of holophonic distance modifications; and studying the possible link between the holophonic distance parameter and the conventional subjective distance parameters traditionally used by sound engineers.
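The link between virtual source distance and wave-front curvature can be sketched numerically. In this strongly simplified model (per-speaker delays taken as straight-line propagation times, 1/sqrt(r) amplitude decay, array geometry invented for the example), moving the virtual source farther behind the array flattens the delay profile, which is the curvature cue behind the holophonic distance parameter:

```python
import numpy as np

C = 343.0  # speed of sound (m/s)

def wfs_point_source(source_xy, speaker_xy):
    """Simplified per-loudspeaker delays (s) and amplitude weights
    for a virtual point source behind a linear WFS array: delays
    follow source-to-speaker propagation time, amplitudes follow a
    1/sqrt(r) decay. (Sketch only, not a full 2.5D WFS operator.)"""
    r = np.linalg.norm(speaker_xy - source_xy, axis=1)
    return r / C, 1.0 / np.sqrt(np.maximum(r, 1e-3))

# 16-speaker linear array along x (0.2 m spacing) at y = 0
speakers = np.stack([np.arange(16) * 0.2, np.zeros(16)], axis=1)
near_delays, _ = wfs_point_source(np.array([1.5, -1.0]), speakers)
far_delays, _ = wfs_point_source(np.array([1.5, -5.0]), speakers)

# a farther source yields a flatter (less curved) delay profile,
# i.e. a smaller delay spread across the array
print(np.ptp(near_delays) > np.ptp(far_delays))  # True
```

With the source 1 m behind the array the delay spread across the 3 m aperture is several times larger than with the source at 5 m, which is precisely the wave-front curvature difference the listeners perceive as distance.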
Perceptual evaluation of weighted multi-channel binaural format
This paper deals with the perceptual evaluation of an efficient method for creating 3D sound material on headphones. The two main issues of the classical two-channel binaural rendering technique are computational cost and individualization, both of which are critical in the context of a general-purpose 3D auditory display. Multi-channel binaural synthesis attempts to address them. Several studies have been dedicated to this approach, in which the minimum-phase parts of the Head-Related Transfer Functions (HRTFs) are linearly decomposed in order to separate the direction and frequency variables. The present investigation aims at improving this model by applying weighting functions to the reconstruction error, so as to focus the modeling effort on the most perceptually relevant cues in the frequency or spatial domain. To validate the methodology, a localization listening test with static stimuli is undertaken, using a reporting interface that minimizes interpretation errors. Beyond the optimization of the binaural implementation, one of the main questions addressed by the study is the search for a perceptually relevant definition of the reconstruction error.
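The linear decomposition described above can be sketched with an SVD, one common way to separate the direction and frequency variables. The random matrix here is only a stand-in for a measured set of minimum-phase HRTF magnitudes, and the paper's perceptual weighting of the reconstruction error is not implemented (it would enter as a weight matrix applied before the factorization):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for measured data: n_dir directions x n_freq frequency bins
n_dir, n_freq, n_basis = 72, 128, 8
hrtf = rng.standard_normal((n_dir, n_freq))

# linear decomposition: each HRTF is approximated as a sum of
# direction-dependent weights times direction-independent spectral
# basis filters, so the per-direction cost of the binaural renderer
# drops from n_freq filters to n_basis scalar weights
u, s, vt = np.linalg.svd(hrtf, full_matrices=False)
weights = u[:, :n_basis] * s[:n_basis]   # direction variable
basis = vt[:n_basis]                     # frequency variable
recon = weights @ basis

rel_err = np.linalg.norm(hrtf - recon) / np.linalg.norm(hrtf)
```

Increasing `n_basis` trades computational cost against `rel_err`; the paper's contribution is to judge that error perceptually rather than in this plain least-squares sense.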