Download Separation Of Speech Signal From Complex Auditory Scenes
The hearing system, even when confronted with complex auditory scenes in unfavourable conditions, is able to separate and recognize auditory events accurately. A great deal of effort has gone into understanding how the human auditory system processes the acoustical data it captures. The aim of this work is the digital implementation of the decomposition of a complex sound into separate parts, as it would appear to a listener; this operation is called signal separation. In this work the separation of speech signals from complex auditory scenes has been studied, and the techniques that address this problem have been evaluated experimentally.
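The kind of decomposition described above can be illustrated with a toy example of time-frequency masking, one family of techniques used for source separation. Everything in this sketch is an illustrative assumption rather than the method studied in the work: the test signals are pure sinusoids, and the binary mask is known a priori, whereas real separation systems must estimate it from the mixture.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

# Mixture of a "target" component (low bin) and interference (high bin);
# bin indices 4 and 20 are arbitrary choices for the illustration.
N = 64
target = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]
noise = [0.5 * math.sin(2 * math.pi * 20 * n / N) for n in range(N)]
mix = [t + v for t, v in zip(target, noise)]

# Binary mask: keep only the bins occupied by the target
# (known a priori here, which is the toy part of this sketch).
X = dft(mix)
mask = [1.0 if k in (4, N - 4) else 0.0 for k in range(N)]
sep = idft([m * Xk for m, Xk in zip(mask, X)])
```

Because the two components occupy disjoint frequency bins, masking recovers the target almost exactly; real speech and interference overlap in time-frequency, which is what makes the problem hard.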
Download Recognition Of Ellipsoids From Acoustic Cues
Ideal three-dimensional resonators are “labeled” (identified) by infinite sequences of resonance modes, whose distribution depends on the resonator shape. We are investigating the ability of human beings to recognize these shapes from auditory spectral cues. Rather than focusing on a precise simulation of the resonator, we want to understand whether recognition takes place using simplified “cartoon” models that provide only the first resonances identifying a shape. Such models can be easily translated into efficient algorithms for real-time sound synthesis in contexts of human-machine interaction, where the resonator shape and other rendering parameters can be interactively manipulated. This paper describes the method we followed to develop an application that, running in real time, can be used in listening tests of shape recognition together with human-computer interfaces.
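A "cartoon" resonator of the kind described can be sketched as a small bank of exponentially damped sinusoids, one per retained resonance mode. The mode frequencies, decay times, and amplitudes below are placeholder values for illustration, not the ellipsoid modes computed in the paper.

```python
import math

def cartoon_resonator(freqs_hz, decays_s, amps, dur_s=0.5, sr=44100):
    """Impulse response of a bank of exponentially damped sinusoids,
    one per resonance mode (a 'cartoon' of the full resonator)."""
    n = int(dur_s * sr)
    out = [0.0] * n
    for f, tau, a in zip(freqs_hz, decays_s, amps):
        for i in range(n):
            t = i / sr
            out[i] += a * math.exp(-t / tau) * math.sin(2 * math.pi * f * t)
    return out

# Three illustrative modes (placeholder values, not measured ellipsoid modes):
# keeping only the first few resonances is exactly the "cartoon" simplification.
ir = cartoon_resonator(freqs_hz=[440.0, 720.0, 1150.0],
                       decays_s=[0.3, 0.2, 0.15],
                       amps=[1.0, 0.6, 0.4])
```

In a real-time setting each mode would be run as a second-order resonant filter rather than an explicit sample loop, which is what makes such models cheap enough for interactive manipulation.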
Download Recognition of Distance Cues from a Virtual Spatialization Model
Emerging work in auditory display aims at increasing the usability of interfaces. In this paper we present a virtual resonating environment that synthesizes distance cues by means of reverberation. We realize a model that recreates the acoustics inside a tube by applying a numerical scheme called the Waveguide Mesh, and we present the psychophysical experiments we conducted to validate the distance information conveyed by the virtual environment.
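As a rough sketch of the numerical scheme mentioned above, the rectangular Waveguide Mesh can be written in its equivalent finite-difference form: each interior junction's next pressure is the mean of its four neighbours' current pressures minus its own previous pressure. The grid size, impulse excitation, and zero-pressure boundaries below are illustrative assumptions, not the tube model built in the paper.

```python
def waveguide_mesh_step(p_prev, p_curr):
    """One update of the 2-D rectangular waveguide mesh in its
    finite-difference form: p[n+1] = mean of 4 neighbours at n, minus p[n-1].
    Boundary cells are held at zero pressure (a pressure-release boundary)."""
    rows, cols = len(p_curr), len(p_curr[0])
    p_next = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            p_next[i][j] = 0.5 * (p_curr[i - 1][j] + p_curr[i + 1][j]
                                  + p_curr[i][j - 1] + p_curr[i][j + 1]) \
                           - p_prev[i][j]
    return p_next

# Excite a small mesh with an impulse and let the wavefront propagate.
rows = cols = 9
p_prev = [[0.0] * cols for _ in range(rows)]
p_curr = [[0.0] * cols for _ in range(rows)]
p_curr[4][4] = 1.0
for _ in range(5):
    p_prev, p_curr = p_curr, waveguide_mesh_step(p_prev, p_curr)
```

Reverberation-based distance cues arise naturally from such a mesh: the longer a listening point is from the excitation, the more boundary reflections accumulate in the response.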
Download Investigations with the Sonic Browser on Two of the Perceptual Auditory Dimensions of Sound Objects: Elasticity and Force
The Sonic Browser is a software tool developed for navigating among sounds in a 2-D space, primarily through listening. Originally intended for managing large collections of sounds, it is now also proving useful for conducting psychophysical experiments that investigate the perceptual scaling of sound dimensions. We used it for analyzing the relationship between the physical parameters involved in the sound synthesis and for studying the quality of the sounds generated by the SOb models. Experiments in this direction have already been reported [1, 2], examining real and model-generated sounds of impacts and bounces of objects made of different materials. In this paper we introduce our further investigations, analyzing impact and bounce sounds from a different perspective and focusing on two further perceptual dimensions, i.e., the elasticity of the event and the force applied to the dropped object. We describe the new experiment we conducted and report the collected data, analyzing the resulting perceptual evaluation spaces.