Sound Modeling from the Analysis of Real Sounds
This work addresses sound modeling using a combination of physical and signal models, with particular emphasis on the flute sound. For that purpose, analysis methods adapted to the nonstationary nature of sounds are developed. Parameters characterizing the sound from both perceptual and physical points of view are then extracted. The synthesis process is designed to reproduce a perceptual effect and to simulate the physical behavior of the sound-generating system. The correspondence between analysis and synthesis parameters is crucial and can be established using both mathematical and perceptual criteria. Real-time control of such models makes it possible to use specially designed interfaces that mirror existing sound generators such as traditional musical instruments.
Resynthesis of coupled piano strings vibrations based on physical modeling
This paper presents a technique to resynthesize the sound generated by the vibrations of two piano strings tuned to very close pitches and coupled at the bridge. Such a mechanical system produces doublets of components, giving rise to beats and double decays in the amplitudes of the partials of the sound. We design a waveguide model by coupling two elementary waveguide models; this model is able to reproduce perceptually relevant sounds. Its parameters are estimated from the analysis of real signals collected directly on the strings by laser velocimetry. Sound transformations can be achieved by modifying relevant parameters to simulate different physical situations.
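As an illustration of the beating and double-decay behaviour mentioned in this abstract, the sketch below approximates a single partial doublet by two damped sinusoids with slightly different frequencies and decay times. This is a hedged stand-in for the coupled-waveguide model of the paper, not a reproduction of it; all numerical values are assumptions.

```python
# Hedged stand-in for one partial doublet of the coupled-string system:
# two damped sinusoids with slightly different frequencies and decay times.
# All values are illustrative assumptions.
import numpy as np

fs = 44100
t = np.arange(0, 4.0, 1.0 / fs)

f1, f2 = 220.0, 220.7      # two modes tuned very close together (Hz)
tau1, tau2 = 0.4, 2.5      # fast and slow decay times (s): the "double decay"
a1, a2 = 1.0, 0.6          # relative amplitudes of the two components

partial = (a1 * np.exp(-t / tau1) * np.sin(2 * np.pi * f1 * t)
           + a2 * np.exp(-t / tau2) * np.sin(2 * np.pi * f2 * t))

# The amplitude envelope beats at |f2 - f1| Hz and shows a two-stage decay.
envelope = np.abs(partial)
```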
Low bit-rate audio coding with hybrid representations
We present a general audio coder based on a structural decomposition: the signal is expanded into three layers: its harmonic part, the transients, and the remaining part (referred to as the noise). The first two of these layers can be encoded very efficiently in a well-chosen basis. The noise is, by construction, modeled as colored Gaussian random noise. Furthermore, this decomposition allows good time-frequency psychoacoustic modeling, as it directly provides the tonal and non-tonal parts of the signal.
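The following sketch illustrates the three-layer (tonal / transient / noise) decomposition idea in a very simplified form; it is not the paper's actual coder. Long-frame DCT coefficients stand in for the harmonic layer, short-frame DCT coefficients of the residual for the transient layer, and the remainder is left as the noise layer. Frame sizes and keep ratios are illustrative assumptions.

```python
# Simplified three-layer split: tonal (long frames), transient (short frames),
# noise (whatever is left). Not the paper's coder; parameters are assumptions.
import numpy as np
from scipy.fft import dct, idct

def keep_largest(frame_len, x, keep_ratio):
    """Frame the signal, apply a DCT per frame, keep only the largest
    coefficients, and return the reconstructed layer."""
    n = len(x) // frame_len * frame_len
    frames = x[:n].reshape(-1, frame_len)
    coeffs = dct(frames, type=2, norm='ortho', axis=1)
    k = max(1, int(keep_ratio * frame_len))
    thresh = np.sort(np.abs(coeffs), axis=1)[:, -k][:, None]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    out = np.zeros_like(x, dtype=float)
    out[:n] = idct(coeffs, type=2, norm='ortho', axis=1).reshape(-1)
    return out

def three_layer_split(x):
    tonal = keep_largest(4096, x, keep_ratio=0.02)            # harmonic layer
    residual = x - tonal
    transient = keep_largest(128, residual, keep_ratio=0.05)  # transient layer
    noise = residual - transient                              # coded as colored noise
    return tonal, transient, noise
```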
The wave digital reed: A passive formulation
In this short paper, we address the numerical simulation of the single reed excitation mechanism. In particular, we discuss a formalism for approaching the lumped nonlinearity inherent in such a model using a circuit model and the application of wave digital filters (WDFs), which are of interest in that they allow simple stability verification, a property that is not generally guaranteed if one employs straightforward numerical methods. We first present a standard reed model, then its circuit representation, and finally the associated wave digital network. We then discuss some implementation issues, such as the solution of nonlinear algebraic equations and the removal of delay-free loops, and present simulation results.
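The sketch below illustrates only the structural point that, once delay-free loops are removed, the lumped nonlinearity reduces to a scalar root-finding problem at each sample. The nonlinear characteristic g() is a made-up placeholder, not the single-reed model of the paper.

```python
# Per-sample solution of a scalar nonlinear wave relation, as in a wave digital
# one-port once delay-free loops have been removed. g() is a hypothetical
# stand-in for the actual reed nonlinearity.
import numpy as np
from scipy.optimize import brentq

def g(b, a):
    # Hypothetical nonlinear relation between incident wave a and reflected wave b.
    return b - a + 0.3 * np.tanh(2.0 * (a - b))

def reflect(a):
    # One sample of the nonlinear one-port: solve g(b, a) = 0 for b.
    return brentq(g, a - 10.0, a + 10.0, args=(a,))

incident = np.sin(2 * np.pi * np.arange(100) / 100.0)
reflected = np.array([reflect(a) for a in incident])
```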
Consistency of Timbre Patterns in Expressive Music Performance
Musical interpretation is an intricate process arising from the interaction between the musician’s gestures and the physical possibilities of the instrument. From a perceptual point of view, these elements induce variations in rhythm, acoustical energy, and timbre. This study aims to show that timbre variations are an important attribute of musical interpretation. For this purpose, a general protocol is proposed for highlighting specific timbre patterns from the analysis of recorded musical sequences. An example of the results obtained by analyzing clarinet sequences is presented, showing stable timbre variations and their correlations with both rhythm and energy deviations.
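As a hedged example of the kind of frame-wise analysis such a protocol relies on (the paper's exact descriptor set is not reproduced here), the sketch below extracts a common timbre descriptor, the spectral centroid, together with the RMS energy, so that their trajectories can be correlated. Frame and hop sizes are assumptions.

```python
# Frame-wise spectral centroid (a standard timbre descriptor) and RMS energy;
# their correlation exposes joint timbre/energy deviations. Illustrative only.
import numpy as np

def frame_descriptors(x, fs, frame=2048, hop=512):
    centroids, energies = [], []
    window = np.hanning(frame)
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame] * window
        spec = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(frame, 1.0 / fs)
        centroids.append((freqs * spec).sum() / spec.sum() if spec.sum() > 0 else 0.0)
        energies.append(np.sqrt(np.mean(seg ** 2)))        # RMS energy per frame
    return np.array(centroids), np.array(energies)

# usage: c, e = frame_descriptors(signal, 44100); r = np.corrcoef(c, e)[0, 1]
```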
Adjusting the Spectral Envelope Evolution of Transposed Sounds with Gabor Mask Prototypes
Audio samplers often need to modify the pitch of recorded sounds in order to generate scales or chords. This article examines the use of Gabor masks and their capacity to improve the perceptual realism of transposed notes obtained through the classical phase-vocoder algorithm. Gabor masks can be seen as operators that modify the time-dependent spectral content of sounds by acting on their time-frequency representation. The goal here is to restore a distribution of energy that is more in line with the physics of the structure that generated the original sound. The Gabor mask is built from an estimate of the spectral envelope evolution in the time-frequency plane and then applied to the modified Gabor transform. This operation turns the modified Gabor transform into another one that respects the estimated spectral envelope evolution, and therefore leads to a note that is more perceptually convincing.
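A much simplified sketch of the envelope-restoring mask is given below: the original and transposed sounds are analysed with an STFT, crude spectral envelopes are obtained by smoothing the magnitude spectrograms, and their ratio is applied as a multiplicative time-frequency mask to the transposed sound. This is only an illustration of the principle, not the Gabor-mask construction of the paper; frame sizes and smoothing width are assumptions.

```python
# Re-impose the original sound's spectral-envelope evolution on a transposed
# version via a multiplicative time-frequency mask. Simplified illustration.
import numpy as np
from scipy.signal import stft, istft

def smooth_envelope(mag, width=31):
    # crude spectral envelope: moving average along the frequency axis
    kernel = np.ones(width) / width
    return np.apply_along_axis(lambda col: np.convolve(col, kernel, mode='same'),
                               0, mag) + 1e-12

def restore_envelope(original, transposed, fs, nperseg=2048):
    _, _, Z_orig = stft(original, fs, nperseg=nperseg)
    _, _, Z_tran = stft(transposed, fs, nperseg=nperseg)
    n_frames = min(Z_orig.shape[1], Z_tran.shape[1])
    env_orig = smooth_envelope(np.abs(Z_orig[:, :n_frames]))
    env_tran = smooth_envelope(np.abs(Z_tran[:, :n_frames]))
    mask = env_orig / env_tran                     # time-frequency mask
    _, y = istft(Z_tran[:, :n_frames] * mask, fs, nperseg=nperseg)
    return y
```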
Modal analysis of impact sounds with ESPRIT in Gabor transforms
Identifying the acoustical modes of a resonant object can be achieved by expanding a recorded impact sound into a sum of damped sinusoids. High-resolution methods, e.g. the ESPRIT algorithm, can be used, but the duration of the signal often requires a sub-band decomposition. This ensures, thanks to sub-sampling, that the signal is analysed over a significant duration so that the damping coefficient of each mode is estimated properly, and that no frequency band is neglected. In this article, we show that the ESPRIT algorithm can be efficiently applied in a Gabor transform (similar to a sub-sampled short-time Fourier transform). The combined use of a time-frequency transform and a high-resolution analysis allows selective and sharp analysis over selected areas of the time-frequency plane. Finally, we show that this method produces high-quality resynthesized impact sounds which are perceptually very close to the original sounds.
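For reference, a compact ESPRIT implementation in plain numpy is sketched below (full-band, not the Gabor-domain variant described in the paper): the signal is modelled as a sum of damped sinusoids, and the shift invariance of the signal subspace yields the poles, hence the modal frequencies and damping coefficients. The model order K and the Hankel size L are assumptions the user must supply.

```python
# Compact full-band ESPRIT sketch in plain numpy (not the Gabor-domain version).
import numpy as np

def esprit(x, K, fs, L=None):
    """Estimate frequencies (Hz) and damping coefficients (1/s) of K poles,
    assuming x is a sum of damped sinusoids sampled at fs."""
    N = len(x)
    L = L or N // 2
    # Hankel data matrix: each column is a length-L window of the signal.
    H = np.array([x[i:i + L] for i in range(N - L + 1)]).T
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :K]                                            # signal subspace
    Phi, *_ = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)   # shift invariance
    poles = np.linalg.eigvals(Phi)
    freqs = np.angle(poles) * fs / (2 * np.pi)               # modal frequencies
    dampings = -np.log(np.abs(poles)) * fs                   # damping coefficients
    return freqs, dampings

# Synthetic check: two damped modes at 440 Hz and 660 Hz (conjugate pairs -> K=4).
fs = 8000
t = np.arange(2048) / fs
x = (np.exp(-3 * t) * np.cos(2 * np.pi * 440 * t)
     + 0.5 * np.exp(-6 * t) * np.cos(2 * np.pi * 660 * t))
print(esprit(x, K=4, fs=fs, L=256))
```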
Navigating in a Space of Synthesized Interaction-Sounds: Rubbing, Scratching and Rolling Sounds
In this paper, we investigate a control strategy for synthesized interaction sounds. The framework of our research is based on the action/object paradigm, which considers that sounds result from an action on an object. This paradigm presumes the existence of sound invariants, i.e. perceptually relevant signal morphologies that carry information about the action or the object. Some of these auditory cues are considered for rubbing, scratching, and rolling interactions. A generic sound synthesis model allowing the production of these three types of interaction is detailed, together with a control strategy for this model. The proposed control strategy allows users to navigate continuously in an "action space" and to morph between interactions, e.g. from rubbing to rolling.
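The sketch below is a purely illustrative stand-in for the navigation idea, not the paper's synthesis model: a single control parameter morphs a grain-based source from a dense, rubbing-like texture towards sparse, rolling-like impacts, and the result is fed through a single resonance standing in for the object. All parameter values are assumptions.

```python
# One-parameter "action space": alpha = 0 gives a dense rubbing-like source,
# alpha = 1 gives sparse rolling-like impacts; a two-pole resonance stands in
# for the object. Purely illustrative, with made-up parameter values.
import numpy as np
from scipy.signal import lfilter

def interaction_source(alpha, n_samples, fs, rng=None):
    rng = rng or np.random.default_rng(0)
    density = (1.0 - alpha) * 2000.0 + alpha * 30.0   # impacts per second
    impacts = (rng.random(n_samples) < density / fs).astype(float)
    amp = (1.0 - alpha) * 0.2 + alpha * 1.0           # sparser hits are stronger
    return impacts * amp * rng.random(n_samples)

def object_resonator(source, fs, f0=800.0, decay=0.995):
    # a single two-pole resonance standing in for the modal response of the object
    w = 2 * np.pi * f0 / fs
    b, a = [1.0], [1.0, -2 * decay * np.cos(w), decay ** 2]
    return lfilter(b, a, source)

fs = 44100
sound = object_resonator(interaction_source(alpha=0.7, n_samples=fs, fs=fs), fs)
```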
Controlling a Non Linear Friction Model for Evocative Sound Synthesis Applications
In this paper, a flexible strategy for controlling a synthesis model of sounds produced by non-linear friction phenomena is proposed for guidance or musical purposes. It makes it possible to synthesize different types of sounds, such as a creaky door, a singing glass, or a squeaking wet plate. The approach is based on the action/object paradigm, which leads to a synthesis strategy using classical linear filtering techniques (a source/resonance approach) and thus provides an efficient implementation. Within this paradigm, a sound can be considered as the result of an action (e.g. impacting, rubbing, ...) on an object (plate, bowl, ...). However, in the case of non-linear friction phenomena, simulating the physical coupling between the action and the object with a completely decoupled source/resonance model is a real and relevant challenge. To meet this challenge, we propose a synthesis model of the source that is tuned on recorded sounds according to physical and spectral observations. This model can synthesize many types of non-linear behaviors. A control strategy for the model is then proposed by defining a flexible, physically informed mapping between a descriptor and the non-linear synthesis behavior. Finally, potential applications to the remediation of motor diseases are presented. In all sections, video and audio materials are available at the following URL: http://www.lma.cnrs-mrs.fr/~kronland/thoretDAFx2013/
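The sketch below illustrates only the mapping layer of such a control strategy, with made-up thresholds and parameter ranges: a scalar descriptor (here a notional rubbing velocity) is mapped to hypothetical source parameters selecting between creaky-like and squeal-like regimes. It is not the tuned source model of the paper.

```python
# Hypothetical physically informed mapping from one control descriptor to the
# parameters of a non-linear friction source model. Thresholds and ranges are
# assumptions for illustration only.
import numpy as np

def friction_mapping(velocity):
    """Map a control descriptor in [0, 1] to hypothetical source parameters."""
    params = {}
    if velocity < 0.3:                       # creaky regime: sparse impulse train
        params['regime'] = 'creak'
        params['impulse_rate_hz'] = 20.0 + 200.0 * velocity
        params['jitter'] = 0.5
    else:                                    # squeal regime: quasi-harmonic source
        params['regime'] = 'squeal'
        params['f0_hz'] = 400.0 + 2000.0 * (velocity - 0.3)
        params['harmonicity'] = float(np.clip((velocity - 0.3) / 0.7, 0.0, 1.0))
    return params

print(friction_mapping(0.1), friction_mapping(0.8))
```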
Sound morphologies due to non-linear interactions: towards a perceptive control of environmental sound-synthesis processes
This paper is concerned with perceptual control strategies for physical-modeling synthesis of vibrating resonant objects colliding non-linearly with rigid obstacles. For this purpose, we investigate sound morphologies in samples synthesized using physical models of non-linear interactions. As a starting point, we study the effect of linear and non-linear springs and collisions on a single-degree-of-freedom system and on a stiff string. We then synthesize realistic sounds of a stiff string colliding with a rigid obstacle. Numerical simulations allowed the definition of specific signal patterns characterizing the non-linear behavior of the interaction according to the attributes of the obstacle. Finally, a global description of the sound morphology associated with this type of interaction is proposed. This study constitutes a first step towards further perceptual investigations geared towards the development of intuitive synthesis controls.
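A minimal single-degree-of-freedom sketch of the kind of non-linear collision studied here is given below: a damped mass-spring oscillator meets a rigid barrier modelled as a stiff one-sided contact spring, integrated with semi-implicit Euler for brevity. All physical parameters are illustrative assumptions, not the values used in the paper.

```python
# Single-degree-of-freedom oscillator colliding with a rigid barrier modelled
# as a stiff one-sided contact spring; semi-implicit Euler integration.
# All parameter values are illustrative assumptions.
import numpy as np

fs = 44100
dt = 1.0 / fs
m, k, c = 1e-3, 4e3, 0.05    # mass (kg), linear spring (N/m), damping (N*s/m)
k_contact = 1e6              # much stiffer one-sided contact spring (N/m)
barrier = 5e-4               # position of the obstacle (m)

x, v = 0.0, 2.0              # initial displacement (m) and velocity (m/s)
out = np.empty(fs)           # one second of the displacement signal
for n in range(fs):
    f = -k * x - c * v                       # linear spring and damping forces
    if x > barrier:                          # penetration: add the contact force
        f -= k_contact * (x - barrier)
    v += dt * f / m                          # semi-implicit Euler update
    x += dt * v
    out[n] = x
```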