The CPLD as a General Physical Modeling Synthesis Engine
In this paper we propose a system based on a Complex Programmable Logic Device (CPLD) as a physical modeling synthesis engine, with the synthesis algorithms implemented in a hardware description language (VHDL). An evaluation of VHDL and CPLD technologies for this application was performed. As an example, we have programmed the Karplus-Strong plucked string algorithm in VHDL on an Altera CPLD.
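For readers who want a software reference point before the hardware discussion, a minimal Karplus-Strong sketch in Python is given below. This is not the paper's VHDL implementation; the sample rate, frequency, and decay factor are illustrative assumptions.

```python
import numpy as np

def karplus_strong(frequency=440.0, sample_rate=44100, duration=1.0, decay=0.996):
    """Minimal Karplus-Strong plucked string: a noise burst circulating
    through a delay line with a two-point averaging loss filter."""
    period = int(sample_rate / frequency)          # delay-line length in samples
    delay_line = np.random.uniform(-1, 1, period)  # initial pluck: white noise burst
    output = np.zeros(int(sample_rate * duration))
    for n in range(len(output)):
        output[n] = delay_line[0]
        # averaging two adjacent samples acts as a gentle low-pass loss filter
        new_sample = decay * 0.5 * (delay_line[0] + delay_line[1])
        delay_line = np.roll(delay_line, -1)
        delay_line[-1] = new_sample
    return output

tone = karplus_strong(220.0)   # one second of an assumed 220 Hz pluck
```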
Material Design in Physical Modeling Sound Synthesis
This paper deals with designing material parameters for physical models. It is shown that the characteristic relation between the modal frequencies and damping factors of a sound object is the acoustic invariant of the material from which the body is made. This characteristic relation can therefore be used to design damping models that make a conservative physical model represent a particular material.
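As a concrete illustration of the idea, the following Python sketch builds a sound from damped modes whose damping factors are a function of modal frequency; the particular frequency-to-damping laws shown are illustrative assumptions, not the paper's model.

```python
import numpy as np

def modal_sound(mode_freqs, material_damping, sample_rate=44100, duration=2.0):
    """Sum of exponentially damped sinusoids (modes). material_damping(f)
    maps a modal frequency to a damping factor; this frequency-damping
    relation is what characterizes the perceived material."""
    t = np.arange(int(sample_rate * duration)) / sample_rate
    sound = np.zeros_like(t)
    for f in mode_freqs:
        alpha = material_damping(f)                      # damping factor for this mode
        sound += np.exp(-alpha * t) * np.sin(2 * np.pi * f * t)
    return sound / len(mode_freqs)

# Illustrative (assumed) damping laws: "wood" damps high modes quickly,
# "metal" rings longer with a weaker frequency dependence.
wood  = modal_sound([220, 470, 790, 1180], lambda f: 5.0 + 0.02 * f)
metal = modal_sound([220, 470, 790, 1180], lambda f: 0.5 + 0.001 * f)
```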
Resynthesis of coupled piano strings vibrations based on physical modeling
This paper presents a technique to resynthesize the sound generated by the vibrations of two piano strings tuned to very close pitches and coupled at the bridge. Such a mechanical system produces doublets of components, generating beats and double decays on the amplitudes of the partials of the sound. We design a waveguide model by coupling two elementary waveguide models. This model is able to reproduce perceptually relevant sounds. The parameters of the model are estimated from the analysis of real signals collected directly on the strings by laser velocimetry. Sound transformations can be achieved by modifying relevant parameters to simulate physical situations.
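At the signal level, the doublets described above can be pictured with two slightly detuned, differently damped partials, as in the Python sketch below. This illustrates beating and double decay only, not the coupled waveguide model itself, and all numerical values are assumptions.

```python
import numpy as np

def coupled_partial(f1=440.0, f2=440.7, a1=3.0, a2=0.8,
                    sample_rate=44100, duration=4.0):
    """Doublet of components: two close frequencies with different decay
    rates. Their sum beats at |f2 - f1| Hz, and its envelope shows a fast
    initial decay followed by a slower one (double decay)."""
    t = np.arange(int(sample_rate * duration)) / sample_rate
    return (np.exp(-a1 * t) * np.sin(2 * np.pi * f1 * t) +
            np.exp(-a2 * t) * np.sin(2 * np.pi * f2 * t))
```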
New Techniques and Effects in Model-Based Sound Synthesis
Physical modeling and model-based sound synthesis have recently been among the most active topics of computer music and audio research. In the modeling approach one typically tries to simulate and duplicate the most prominent sound generation properties of the acoustic musical instrument under study. If desired, the models developed may then be modified in order to create sounds that are not common or even possible from physically realizable instruments. In addition to physically related principles it is possible to combine physical models with other synthesis and signal processing methods to realize hybrid modeling techniques. This article is written as an overview of some recent results in model-based sound synthesis and related signal processing techniques. The focus is on modeling and synthesizing plucked string sounds, although the techniques may find much more widespread application. First, as a background, an advanced linear model of the acoustic guitar is discussed along with model control principles. Then the methodology to include inherent nonlinearities and time-varying features is introduced. Examples of string instrument nonlinearities are studied in the context of two specific instruments, the kantele and the tanbur, which exhibit interesting nonlinear effects.
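As background for the plucked-string discussion, the generic linear model this family builds on is a single delay loop closed by a low-order loss ("loop") filter. The Python sketch below shows that structure with a one-pole low-pass loop filter; the coefficients are illustrative assumptions, and fractional-delay tuning, body modeling, and the nonlinear extensions are omitted.

```python
import numpy as np

def delay_loop_string(excitation, delay_len, g=0.995, a=0.5):
    """Single delay-line loop with a one-pole low-pass loop filter:
    y[n] = x[n] + loopfilter(y[n - delay_len]),
    where loopfilter has transfer function g * (1 - a) / (1 - a * z^-1)."""
    out = np.zeros(len(excitation))
    lp_state = 0.0
    for n in range(len(out)):
        fed_back = out[n - delay_len] if n >= delay_len else 0.0
        lp_state = (1 - a) * fed_back + a * lp_state   # one-pole low pass
        out[n] = excitation[n] + g * lp_state           # loop gain g < 1 gives decay
    return out

# Example: excite a 100-sample loop (441 Hz at 44.1 kHz) with a short noise burst.
excitation = np.zeros(44100)
excitation[:100] = np.random.uniform(-1, 1, 100)
tone = delay_loop_string(excitation, delay_len=100)
```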
A Wavelet-based Pitch Detector for Musical Signals
Physical modelling of musical instruments is one possible approach to digital sound synthesis. By the term physical modelling, we refer to the simulation of the sound production mechanism of a musical instrument, modelled with reference to its physics using waveguides. One of the fundamental parameters of such a physical model is the pitch, so pitch period estimation is one of the first tasks of any analysis for such a model. In this paper, an algorithm based on the Dyadic Wavelet Transform has been investigated for pitch detection of musical signals. The wavelet transform is simply the convolution of a signal f(t) with dilated and translated versions of a single function, the mother wavelet, which has to satisfy certain requirements. There is a wide variety of possible wavelets, but not all are appropriate for pitch detection. The performance of both linear phase wavelets (Haar, Morlet, and the spline wavelet) and minimum phase wavelets (Daubechies' wavelets) has been investigated. The algorithm proposed here has proved to be simple, accurate, and robust to noise; it also has the potential of acceptable speed. A comparative study between this algorithm and the well-known autocorrelation function is also given. Finally, illustrative examples of different real guitar tones and other sound signals are given using the proposed algorithm. Keywords: physical modelling, wavelet transform, pitch, autocorrelation function.
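For comparison with the proposed method, the well-known autocorrelation baseline mentioned above can be sketched in a few lines of Python; the window length, lag limits, and test signal are illustrative assumptions, and this is not the paper's wavelet algorithm.

```python
import numpy as np

def autocorrelation_pitch(x, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate pitch as the lag of the highest autocorrelation peak
    inside the plausible lag range [sample_rate/fmax, sample_rate/fmin]."""
    x = x - np.mean(x)
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]   # non-negative lags only
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    lag = lag_min + np.argmax(acf[lag_min:lag_max])
    return sample_rate / lag

# Example with a synthetic 220 Hz harmonic tone (assumed test signal).
sr = 44100
t = np.arange(2048) / sr
tone = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 6))
print(autocorrelation_pitch(tone, sr))   # approximately 220
```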
Decomposition of steady state instrument data into excitation system and formant filter components
This paper describes a method for decomposing steady-state instrument data into excitation and formant filter components. The input data, taken from several series of recordings of acoustical instruments, is analyzed in the frequency domain, and for each series a model is built which most accurately represents the data as a source-filter system. The source part is taken to be a harmonic excitation system with frequency-invariant magnitudes, and the filter part is considered to be responsible for all spectral inhomogeneities. This method has been applied to the SHARC database of steady state instrument data to create source-filter models for a large number of acoustical instruments. Subsequent use of such models can have a wide variety of applications, including wavetable and physical modeling synthesis, high quality pitch shifting, and creation of "hybrid" instrument timbres.
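A minimal single-spectrum version of the source-filter split can be sketched as follows in Python, assuming the harmonic magnitudes have already been measured; the paper's actual procedure fits one model jointly across a whole series of recordings, which this sketch does not do.

```python
import numpy as np

def source_filter_split(harmonic_mags, smoothing=5):
    """Split measured harmonic magnitudes into a smooth 'filter' envelope
    (moving average in the log-magnitude domain) and a flattened 'source'
    spectrum (the residual left after dividing out that envelope)."""
    mags = np.asarray(harmonic_mags, dtype=float)
    log_mags = np.log(mags)
    # moving average in the log domain; edge padding keeps the output length
    padded = np.pad(log_mags, smoothing // 2, mode='edge')
    envelope = np.convolve(padded, np.ones(smoothing) / smoothing, mode='valid')
    filter_mags = np.exp(envelope[:len(mags)])   # formant-filter magnitude per harmonic
    source_mags = mags / filter_mags             # frequency-flattened excitation
    return source_mags, filter_mags
```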
Time-domain model of the singing voice
A combined physical model for the human vocal folds and vocal tract is presented. The vocal fold model is based on a symmetrical 16-mass model by Titze. Each vocal fold is modeled with 8 masses representing the mucosal membrane, coupled by non-linear springs to another 8 masses representing the vocalis muscle together with the ligament. At each iteration, the glottal flow is calculated and used as input for the calculation of the aerodynamic forces. Together with the spring and damping forces, these yield the new positions of the masses, which are then used to calculate a new glottal flow value. The vocal tract model consists of a number of uniform cylinders of fixed length. At each discontinuity, incident, reflected, and transmitted waves are calculated, including damping. Assuming a linear system, the pressure signal generated by the vocal fold model is either convolved with the Green's function calculated by the vocal tract model or calculated interactively, assuming variable reflection coefficients for the glottis and the vocal tract during phonation. The algorithms aim at real-time performance and are implemented in MATLAB.
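The cylinder-segment tract can be pictured with Kelly-Lochbaum style scattering junctions. The Python sketch below illustrates those building blocks (reflection coefficients from segment areas, and the scattering of travelling pressure waves at one junction) under an assumed sign convention; it is not the authors' MATLAB implementation and omits damping and the coupling to the vocal fold model.

```python
import numpy as np

def tract_reflection_coeffs(areas):
    """Reflection coefficients at the junctions of a tract modelled as
    uniform cylindrical segments with cross-sectional areas `areas`
    (pressure-wave convention: k = (A_i - A_{i+1}) / (A_i + A_{i+1}))."""
    a = np.asarray(areas, dtype=float)
    return (a[:-1] - a[1:]) / (a[:-1] + a[1:])

def scatter(f_in, b_in, k):
    """Kelly-Lochbaum scattering at one junction: forward wave f_in arriving
    from the glottis side and backward wave b_in arriving from the lip side."""
    f_out = (1 + k) * f_in - k * b_in   # wave transmitted toward the lips
    b_out = k * f_in + (1 - k) * b_in   # wave reflected back toward the glottis
    return f_out, b_out

# Example with an assumed 8-segment area function (cm^2).
k = tract_reflection_coeffs([2.0, 1.5, 1.0, 0.8, 1.2, 2.5, 3.0, 3.5])
```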
Identification and Modeling of a Flute Source Signal
This paper addresses the modeling of the source signal of a flute sound obtained by "removing" the contribution of the resonator. The resulting signal then has a more regular spectral behavior and can be modeled using signal models. The decomposition of the source signal into deterministic and stochastic parts was carried out using adaptive filtering. The deterministic part can then be modeled by non-linear synthesis models, whose parameters are obtained using perceptual criteria. Linear filtering is used to model the stochastic part of the source signal.
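One common way to picture an adaptive-filtering split into deterministic and stochastic parts is an LMS adaptive line enhancer, sketched below in Python. The filter order, step size, and decorrelation delay are illustrative assumptions, and this is not necessarily the exact adaptive scheme used by the authors.

```python
import numpy as np

def adaptive_line_enhancer(x, order=64, mu=0.001, delay=32):
    """LMS adaptive line enhancer: predict x[n] from a delayed copy of itself.
    The prediction captures the periodic (deterministic) part; the prediction
    error is the stochastic residual."""
    w = np.zeros(order)
    deterministic = np.zeros(len(x))
    stochastic = np.zeros(len(x))
    for n in range(delay + order, len(x)):
        ref = x[n - delay - order:n - delay][::-1]   # delayed, time-reversed frame
        y = np.dot(w, ref)                           # predicted (deterministic) sample
        e = x[n] - y                                  # unpredictable (stochastic) part
        w += 2 * mu * e * ref                         # LMS weight update
        deterministic[n], stochastic[n] = y, e
    return deterministic, stochastic
```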
Artificial Intelligence based Modeling of Musical Instruments
In this paper, a novel research tool, which allows real-time implementation and evaluation of musical instrument sound synthesis, is described. The tool is a PC-based application and allows the user to evaluate the effects of parameter changes on the sound quality in an intuitive manner. Tuning makes use of a Genetic Algorithm (GA) technique. Flute and plucked string modeling examples are used to illustrate the capabilities of the tool.
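The GA tuning loop can be pictured with the minimal real-valued sketch below in Python; the encoding, operators, and fitness function here are illustrative assumptions, not those of the described tool, where the fitness could for example compare a synthesized sound with a recorded reference.

```python
import numpy as np

def genetic_tune(fitness, n_params, pop_size=40, generations=100,
                 mutation_rate=0.1, rng=np.random.default_rng(0)):
    """Minimal real-valued GA: parameters in [0, 1], truncation selection,
    uniform crossover, Gaussian mutation."""
    pop = rng.random((pop_size, n_params))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n_params) < 0.5, a, b)   # uniform crossover
            mutate = rng.random(n_params) < mutation_rate
            child = child + mutate * rng.normal(0, 0.05, n_params)
            children.append(np.clip(child, 0, 1))
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

# Example: tune two parameters toward an assumed target vector.
best = genetic_tune(lambda p: -np.sum((p - np.array([0.3, 0.7])) ** 2), n_params=2)
```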
Musical Sound Effects in the SAS Model
Spectral models provide general representations of sound in which many audio effects can be performed in a very natural and musically expressive way. Based on additive synthesis, these models control many sinusoidal oscillators via a huge number of model parameters which are only remotely related to musical parameters as perceived by a listener. The Structured Additive Synthesis (SAS) sound model has the flexibility of additive synthesis while addressing this problem. It consists of a complete abstraction of sounds according to only four parameters: amplitude, frequency, color, and warping. Since there is a close correspondence between the SAS model parameters and perception, the control of audio effects is simplified. Many effects thus become accessible not only to engineers, but also to musicians and composers. Some effects, however, are impossible to achieve in the SAS model. In fact, structuring the sound representation imposes limitations not only on the sounds that can be represented, but also on the effects that can be performed on these sounds. We demonstrate these relations between models and effects for a variety of models, ranging from temporal models to SAS, by way of well-known spectral models.
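To fix ideas about the additive-synthesis layer that SAS structures, here is a minimal Python sketch of a bank of sinusoidal oscillators driven by per-partial amplitude and frequency trajectories; it does not implement the SAS color and warping abstractions, and all numerical values in the example are assumptions.

```python
import numpy as np

def additive_synth(amp_tracks, freq_tracks, sample_rate=44100):
    """Bank of sinusoidal oscillators. amp_tracks and freq_tracks have shape
    (num_partials, num_samples): per-sample amplitude and frequency (Hz) of
    each partial. Each partial's phase is the running sum of its frequency."""
    phases = 2 * np.pi * np.cumsum(freq_tracks, axis=1) / sample_rate
    return np.sum(amp_tracks * np.sin(phases), axis=0)

# Example: three decaying harmonics of 220 Hz with slight vibrato (assumed values).
n = 44100
t = np.arange(n) / 44100
freqs = np.array([220.0, 440.0, 660.0])[:, None] * (1 + 0.003 * np.sin(2 * np.pi * 5 * t))
amps = np.exp(-3 * t) * np.array([1.0, 0.5, 0.25])[:, None]
signal = additive_synth(amps, freqs)
```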