Physical Model of the String-Fret Interaction
In this paper, a model for the interaction of the strings with the frets in a guitar or other fretted string instruments is introduced. In the two-polarization representation of the string oscillations it is observed that the string interacts with the fret in different ways. While the vertical oscillation is governed by perfect or imperfect clamping of the string on the fret, the horizontal oscillation is subject to friction of the string over the surface of the fret. The proposed model allows, in particular, for the accurate evaluation of the elongation of the string in the two modes, which gives rise to audible dynamic detuning. The realization of this model into a structurally passive system for use in digital waveguide synthesis is detailed. By changing the friction parameters, the model can be employed in fretless instruments too, where the string directly interacts with the neck surface.
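The clamped-versus-friction distinction described above can be caricatured with a plain single-delay-line waveguide. The sketch below is not the paper's model: it is a minimal Karplus-Strong-style loop in which the termination coefficient stands in for the two fret interactions, and all parameter values are illustrative.

```python
import numpy as np

def waveguide_pluck(n_samples, delay, reflection=-1.0, loss=0.996, seed=0):
    """Single delay-line waveguide string excited by a noise burst.
    reflection=-1 models ideal clamping at the fret; |reflection| < 1
    crudely mimics energy lost to friction over the fret surface."""
    rng = np.random.default_rng(seed)
    line = rng.uniform(-1.0, 1.0, delay)   # pluck = noise burst in the line
    out = np.empty(n_samples)
    prev = 0.0
    idx = 0
    for n in range(n_samples):
        y = line[idx]
        out[n] = y
        line[idx] = reflection * loss * 0.5 * (y + prev)  # lowpassed loop
        prev = y
        idx = (idx + 1) % delay
    return out

# vertical polarization: near-ideal clamping; horizontal: extra friction loss
vert = waveguide_pluck(8000, delay=100, reflection=-1.0)
horiz = waveguide_pluck(8000, delay=100, reflection=-0.9)
```

Lowering |reflection| drains energy from the loop faster, which is the crude analogue of the horizontal polarization rubbing over the fret surface.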
A Physically-motivated Triode Model for Circuit Simulations
A new model for triodes of type 12AX7 is presented, featuring simple and continuously differentiable equations. The description is physically motivated and enables good replication of the grid current. Free parameters in the equations are fitted to reference data obtained from measurements of actual triodes. It is shown that the equations characterize the properties of real tubes with good accuracy. Results for the model on its own and embedded in an amplifier simulation are presented and align well with the reference.
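The paper's own equations are not reproduced in the abstract. As an illustration of what a smooth, physically motivated triode description looks like, the sketch below implements the widely used Koren model (not the model proposed here) with 12AX7 parameter values commonly quoted in the literature; treat both the equations and the numbers as assumptions for illustration only.

```python
import math

# Koren's phenomenological triode model, often used for the 12AX7.
# These parameter values are commonly quoted, not taken from this paper.
MU, EX, KG1, KP, KVB = 100.0, 1.4, 1060.0, 600.0, 300.0

def plate_current(v_pk, v_gk):
    """Plate current (A) for plate-cathode voltage v_pk and grid-cathode
    voltage v_gk. The log(1 + exp(...)) form keeps the equations smooth
    and continuously differentiable across the cutoff region."""
    e1 = (v_pk / KP) * math.log1p(
        math.exp(KP * (1.0 / MU + v_gk / math.sqrt(KVB + v_pk ** 2))))
    return (e1 ** EX / KG1) * 2.0 if e1 > 0 else 0.0

i_typical = plate_current(250.0, -2.0)  # a typical 12AX7 operating point
```

At 250 V plate and -2 V grid bias the formula yields a plate current on the order of a milliampere, in the right ballpark for this tube type.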
FAUST-STK: a set of linear and nonlinear physical models for the FAUST programming language
The FAUST Synthesis ToolKit is a set of virtual musical instruments written in the FAUST programming language and based on waveguide algorithms and modal synthesis. Most of them were inspired by instruments implemented in the Synthesis ToolKit (STK) and the program SynthBuilder. Our attention has partly been focused on the pedagogical aspect of the implemented objects: we tried to make the FAUST code of each object as optimized and as expressive as possible. Some of the instruments in the FAUST-STK use nonlinear allpass filters to create new and interesting behaviors. A few of them were also modified to use gesture data to control the performance; a demonstration of this kind of use is given in Pure Data. Finally, the results of some performance tests of the generated C++ code are presented.
Harpsichord Sound Synthesis using a Physical Plectrum Model Interfaced with the Digital Waveguide
In this paper, we present a revised model of the plectrum-string interaction and its interface with the digital waveguide for simulation of the harpsichord sound. We will first revisit the plectrum body model that we have proposed previously in [1] and then extend the model to incorporate the geometry of the plectrum tip. This permits us to model the dynamics of the string slipping off the plectrum more comprehensively, which provides more physically accurate excitation signals. Simulation results are presented and discussed.
Analysis and Simulation of an Analog Guitar Compressor
The digital modeling of guitar effect units requires high physical similarity between the model and the analog reference. The famous MXR DynaComp is used to sustain the guitar sound. In this work, its complex circuit is analyzed and simulated using state-space representations. The equations for calculating important parameters within the circuit are derived in detail, and a mathematical description of the operational transconductance amplifier is given. In addition, the digital model is compared to the original unit.
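The DynaComp equations themselves are derived in the paper; as a generic illustration of simulating a circuit via a discrete state-space representation, the sketch below steps x[n+1] = A x[n] + B u[n], y[n] = C x[n] + D u[n] for a one-pole RC-style lowpass. The system and its values are an assumed toy example, not the compressor circuit.

```python
import numpy as np

def simulate(A, B, C, D, u):
    """Run a discrete state-space model:
    x[n+1] = A x[n] + B u[n],  y[n] = C x[n] + D u[n]."""
    x = np.zeros(A.shape[0])
    y = np.empty(len(u))
    for n, un in enumerate(u):
        y[n] = float(C @ x) + D * un
        x = A @ x + B.ravel() * un
    return y

# one-pole lowpass as a 1x1 state-space system (illustrative values only)
fs, fc = 48000.0, 1000.0
a = np.exp(-2 * np.pi * fc / fs)      # pole location for cutoff fc
A = np.array([[a]]); B = np.array([[1.0 - a]])
C = np.array([[1.0]]); D = 0.0
step = simulate(A, B, C, D, np.ones(2000))  # step response settles at 1
```

A nonlinear circuit like the DynaComp additionally requires solving the nonlinear device equations at every sample; the linear stepping loop above is only the skeleton.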
Interaction-optimized Sound Database Representation
Interactive navigation within geometric, feature-based database representations allows expressive musical performances and installations. Once mapped to the feature space, the user’s position in a physical interaction setup (e.g. a multitouch tablet) can be used to select elements or trigger audio events. Physical displacements are thus directly connected to the evolution of sonic characteristics, a property we call analytic sound–control correspondence. However, automatically computed representations have a complex geometry that is unlikely to fit the interaction setup optimally. After a review of related work, we present a physical-model-based algorithm that redistributes the representation within a user-defined region according to a user-defined density. The algorithm is designed to preserve the analytic sound–control correspondence as much as possible, and uses a physical analogy between the triangulated database representation and a truss structure. After preliminary pre-uniformisation steps, internal repulsive forces spread points across the whole region until a target density is reached. We measure the algorithm’s performance by its ability to produce representations matching user-specified features and to preserve analytic sound–control correspondence during a standard density-uniformisation task. Quantitative measures and visual evaluation demonstrate the excellent performance of the algorithm, as well as the benefit of the pre-uniformisation steps.
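The truss-structure algorithm itself is detailed in the paper. As a toy illustration of the repulsive-force idea, the sketch below (all names and parameter values are hypothetical, and it ignores the triangulation and density targets) pushes a tight cluster of 2-D points apart inside a unit square.

```python
import numpy as np

def spread_points(pts, n_iter=200, step=0.001, eps=1e-6):
    """Toy repulsion-based redistribution: every pair of points exerts an
    inverse-square repulsive force, and positions are clipped to the unit
    square after each step."""
    pts = pts.copy()
    for _ in range(n_iter):
        diff = pts[:, None, :] - pts[None, :, :]        # pairwise offsets
        d2 = (diff ** 2).sum(-1) + eps                  # squared distances
        np.fill_diagonal(d2, np.inf)                    # no self-force
        force = (diff / d2[..., None] ** 1.5).sum(1)    # sum over neighbors
        pts = np.clip(pts + step * force, 0.0, 1.0)
    return pts

rng = np.random.default_rng(3)
cluster = 0.5 + 0.02 * rng.standard_normal((20, 2))  # tightly clustered points
spread = spread_points(cluster)                      # spread over the square
```

The real algorithm additionally constrains motion to preserve the sound-control correspondence; this sketch only shows the spreading mechanism.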
Physical Modelling of a Wah-wah Effect Pedal as a Case Study for Application of the Nodal DK Method to Circuits with Variable Parts
The nodal DK method is a systematic way to derive a non-linear state-space system as a physical model for an electrical circuit. Unfortunately, calculating the system coefficients requires inversion of a relatively large matrix. This becomes a problem when the system changes over time, requiring continuous recomputation of the coefficients. In this paper, we present an extension of the DK method to more efficiently handle variable circuit elements. The method is exemplified with the Dunlop Crybaby wah-wah effect pedal, as the continuous change of the potentiometer position is an extremely important aspect of the wah-wah effect.
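The paper's specific extension is not spelled out in the abstract. One standard way to avoid re-inverting a full system matrix when a single element (such as a potentiometer) changes is a rank-one Sherman-Morrison update; the sketch below shows that idea on a generic matrix and is an assumption about how such an update could be applied, not the DK extension itself.

```python
import numpy as np

def sherman_morrison_update(Ainv, u, v):
    """Given A^{-1}, return (A + u v^T)^{-1} via the Sherman-Morrison
    formula, avoiding a fresh O(n^3) inversion. A variable resistor that
    enters the system matrix as a rank-one stamp fits this form."""
    Au = Ainv @ u
    vA = v @ Ainv
    return Ainv - np.outer(Au, vA) / (1.0 + v @ Au)

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)  # well-conditioned test matrix
Ainv = np.linalg.inv(A)                          # precomputed once
u = np.zeros(5); u[2] = 1.0                      # hypothetical: pot changes
v = np.zeros(5); v[2] = 0.3                      # entry (2, 2) by +0.3
updated = sherman_morrison_update(Ainv, u, v)
```

The update costs O(n^2) per parameter change instead of O(n^3), which is the kind of saving a variable-element extension is after.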
State of the Art in Sound Texture Synthesis
The synthesis of sound textures, such as rain, wind, or crowds, is an important application for cinema, multimedia creation, games, and installations. However, despite the clearly defined requirements of naturalness and flexibility, no automatic method has yet found widespread use. After clarifying the definition, terminology, and usages of sound texture synthesis, we give an overview of the many existing methods and approaches, and the few available software implementations, and classify them by the synthesis model they are based on, such as subtractive or additive synthesis, granular synthesis, corpus-based concatenative synthesis, wavelets, or physical modeling. Additionally, an overview is given of the analysis methods used for sound texture synthesis, such as segmentation, statistical modeling, timbral analysis, and modeling of transitions.
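As a minimal example of one of the surveyed model families, the sketch below implements naive granular synthesis: Hann-windowed grains drawn from random positions in a source recording and overlap-added. The noise "recording" and all parameter values are stand-ins for illustration.

```python
import numpy as np

def granular_texture(source, n_out, grain=512, hop=128, seed=0):
    """Naive granular texture synthesis: overlap-add Hann-windowed grains
    taken from random positions in the source signal."""
    rng = np.random.default_rng(seed)
    win = np.hanning(grain)
    out = np.zeros(n_out + grain)
    for start in range(0, n_out, hop):
        pos = rng.integers(0, len(source) - grain)   # random grain position
        out[start:start + grain] += win * source[pos:pos + grain]
    return out[:n_out]

rain = np.random.default_rng(2).standard_normal(48000)  # stand-in "recording"
tex = granular_texture(rain, 24000)
```

Randomizing grain position destroys long-term structure while keeping short-term spectral character, which is why granular methods suit stationary textures like rain better than structured sounds.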
Modelling of Brass Instrument Valves
Finite difference time domain (FDTD) approaches to physical modeling sound synthesis, though more computationally intensive than other techniques (such as digital waveguides), offer a great deal of flexibility in approaching some of the more interesting real-world features of musical instruments. One such case, that of brass instruments, including a set of time-varying valve components, is approached here using such methods. After a full description of the model, including the resonator and incorporating viscothermal loss, bell radiation, a simple lip model, and time-varying valves, FDTD methods are introduced. Simulations of various characteristic features of valve instruments, including half-valve impedances, note transitions, and characteristic multiphonic timbres, are presented, as are illustrative sound examples.
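The brass model above is far richer than any single scheme, but the FDTD machinery it builds on can be shown in miniature: a leapfrog update for the 1-D wave equation with clamped ends. This is a textbook scheme, not the paper's brass model; the grid size, Courant number, and initial condition are illustrative.

```python
import numpy as np

def fdtd_string(n_steps, N=64, courant=1.0):
    """Leapfrog FDTD scheme for the 1D wave equation u_tt = c^2 u_xx:
    u[i,n+1] = 2 u[i,n] - u[i,n-1] + lambda^2 (u[i+1,n] - 2 u[i,n] + u[i-1,n]),
    with clamped (Dirichlet) ends and a raised-cosine initial displacement.
    lambda = courant is the Courant number; lambda <= 1 gives stability."""
    lam2 = courant ** 2
    x = np.arange(N)
    u = np.where(np.abs(x - N // 2) < 8,
                 0.5 * (1.0 + np.cos(np.pi * (x - N // 2) / 8.0)), 0.0)
    u_prev = u.copy()              # zero initial velocity
    for _ in range(n_steps):
        u_next = np.zeros(N)       # ends stay clamped at zero
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + lam2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return u

state = fdtd_string(100)
```

A full instrument model couples such interior updates to boundary terms (here, the lip model, valves, and bell radiation), which is where most of the modeling effort lies.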
Gestural Auditory and Visual Interactive Platform
This paper introduces GAVIP, an interactive and immersive platform that allows audio-visual virtual objects to be controlled in real time by physical gestures with a high degree of intermodal coherency. The focus is put in particular on two scenarios exploring the interaction between a user and the audio, visual, and spatial synthesis of a virtual world. The platform can be seen as an extended virtual musical instrument that allows interaction with three modalities: audio, visual, and spatial. Intermodal coherency is thus of particular importance in this context. Possibilities and limitations of the two developed scenarios are discussed, and future work is presented.