Download Morphing Instrument Body Models
In this study we present morphing methods for musical instrument body models using DSP techniques. These methods transform a given body model gradually into another in a controlled way, and they guarantee stability of the body models at each intermediate step. This makes it possible to morph from a body model of a certain size to a larger or smaller one. It is also possible to extrapolate beyond the original models, thus creating interesting new (out-of-this-world) instrument bodies. The possibility of creating a time-varying body, i.e., a model that changes in size over time, results in an interesting audio effect. This paper demonstrates morphing mainly via guitar body examples, but morphing can naturally be extended to other instruments with reverberant resonators as their bodies. Morphing from a guitar body model to a violin body model is presented as an example. Implementation and perceptual issues of the signal processing methods are discussed. For related sound demonstrations, see www.acoustics.hut.fi/demo/dafx2001-bodymorph/.
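The abstract does not say which filter representation is interpolated, so the following is only an illustrative sketch of one classic way to obtain the stability guarantee: interpolate the reflection coefficients of two all-pole body models. Since both models have all |k_i| < 1, any convex combination also has |k_i| < 1, so every intermediate filter is stable. All function names and the toy models are mine, not from the paper.

```python
import numpy as np

def poly2rc(a):
    """Step-down recursion: LPC polynomial (a[0] = 1) -> reflection coeffs."""
    a = np.asarray(a, float).copy()
    p = len(a) - 1
    k = np.zeros(p)
    for m in range(p, 0, -1):
        k[m - 1] = a[m]
        if m > 1:
            a[1:m] = (a[1:m] - k[m - 1] * a[m - 1:0:-1]) / (1.0 - k[m - 1] ** 2)
    return k

def rc2poly(k):
    """Step-up recursion: reflection coeffs -> LPC polynomial."""
    a = np.array([1.0])
    for m, km in enumerate(k, start=1):
        a = np.concatenate([a, [0.0]])
        a[1:m + 1] = a[1:m + 1] + km * a[m - 1::-1]
    return a

def morph(a_src, a_dst, alpha):
    """Interpolate two stable all-pole bodies in the reflection-coefficient
    domain; every intermediate model (0 <= alpha <= 1) is guaranteed stable."""
    k = (1.0 - alpha) * poly2rc(a_src) + alpha * poly2rc(a_dst)
    return rc2poly(k)
```

Note that extrapolation (alpha outside [0, 1]), mentioned in the abstract, would additionally require clipping the interpolated coefficients to (-1, 1) to keep the guarantee.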
Download Automating The Design Of Sound Synthesis Techniques Using Evolutionary Methods
Digital sound synthesizers, ubiquitous today in sound cards, software and dedicated hardware, use algorithms (Sound Synthesis Techniques, SSTs) capable of generating sounds similar to those of acoustic instruments and even totally novel sounds. The design of SSTs is a very hard problem. It is usually assumed that it requires human ingenuity to design an algorithm suitable for synthesizing a sound with certain characteristics. Many of the SSTs commonly used are the fruit of experimentation and a long refinement process. An SST is determined by its functional form and internal parameters. SSTs are usually designed by selecting a fixed functional form from a handful of commonly used SSTs and applying a parameter estimation technique to find the set of internal parameters that best emulates the target sound. A new approach for automating the design of SSTs is proposed. It uses a set of examples of the desired behavior of the SST in the form of inputs plus target sound. The approach is capable of suggesting novel functional forms and their internal parameters, suited to closely follow the given examples. The design of an SST is stated as a search problem in SST space (the space spanned by all possible valid functional forms and internal parameters, within certain limits that make the search practical). This search is carried out using evolutionary methods; specifically, Genetic Programming (GP).
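To make the search idea concrete, here is a deliberately tiny sketch: expression trees over a few primitives stand in for functional forms with embedded parameters, and a (1+1) elitist mutation loop searches for a tree matching a 64-sample target "sound". The paper uses full Genetic Programming with crossover and real synthesis primitives; everything below (primitive set, names, sizes) is my own toy illustration.

```python
import math, random

def rand_tree(depth, rng):
    """Random expression tree over {t, const, sin, add, mul}."""
    if depth == 0 or rng.random() < 0.3:
        return ('t',) if rng.random() < 0.5 else ('const', rng.uniform(-2.0, 2.0))
    op = rng.choice(['add', 'mul', 'sin'])
    if op == 'sin':
        return ('sin', rand_tree(depth - 1, rng))
    return (op, rand_tree(depth - 1, rng), rand_tree(depth - 1, rng))

def evaluate(node, t):
    tag = node[0]
    if tag == 't':
        return t
    if tag == 'const':
        return node[1]
    if tag == 'sin':
        return math.sin(evaluate(node[1], t))
    a, b = evaluate(node[1], t), evaluate(node[2], t)
    return a + b if tag == 'add' else a * b

def paths(node, p=()):
    """Yield the path (tuple of child indices) of every subtree."""
    yield p
    for i, child in enumerate(node[1:], 1):
        if isinstance(child, tuple):
            yield from paths(child, p + (i,))

def replace(node, p, sub):
    """Copy of `node` with the subtree at path `p` replaced by `sub`."""
    if not p:
        return sub
    i = p[0]
    return node[:i] + (replace(node[i], p[1:], sub),) + node[i + 1:]

def fitness(node, ts, target):
    """Sum of squared errors against the target sound; inf on overflow."""
    try:
        return sum((evaluate(node, t) - y) ** 2 for t, y in zip(ts, target))
    except (OverflowError, ValueError):
        return float('inf')

def evolve(target_fn, gens=300, seed=1):
    """(1+1) elitist search: mutate a random subtree, keep if no worse."""
    rng = random.Random(seed)
    ts = [i / 64.0 for i in range(64)]
    target = [target_fn(t) for t in ts]
    best = rand_tree(3, rng)
    best_f = fitness(best, ts, target)
    for _ in range(gens):
        p = rng.choice(list(paths(best)))
        cand = replace(best, p, rand_tree(2, rng))
        f = fitness(cand, ts, target)
        if f <= best_f:
            best, best_f = cand, f
    return best, best_f
```

Because mutation can change the tree's shape, the search explores functional forms as well as parameter values, which is the key difference from classical fixed-form parameter estimation.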
Download Modeling Collision Sounds: Non-Linear Contact Force
A model for physically based synthesis of collision sounds is proposed. Attention is focused on the non-linear contact force, for which both analytical and experimental results are presented. Numerical implementation of the model is discussed, with regard to accuracy and efficiency issues. As an application, a physically based audio effect is presented.
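The abstract names the non-linear contact force without giving its form; a standard non-linear model in this family is the Hunt-Crossley force f(x, v) = k·x^α + λ·x^α·v for penetration x > 0, combining a non-linear spring with a penetration-dependent damper. Below is a minimal sketch of a hammer striking a rigid surface, integrated with semi-implicit Euler at audio rate; all parameter values are illustrative, not taken from the paper.

```python
def hunt_crossley(x, v, k=1e5, lam=5e3, alpha=1.5):
    """Non-linear contact force f = k*x^alpha + lam*x^alpha*v for
    penetration x > 0; zero when the objects are not in contact."""
    if x <= 0.0:
        return 0.0
    xa = x ** alpha
    return k * xa + lam * xa * v

def simulate_impact(m=0.01, v0=1.0, fs=44100.0, steps=2000):
    """Hammer of mass m hitting a rigid wall at x = 0 with speed v0
    (x > 0 is penetration depth); semi-implicit Euler integration.
    Returns the contact-force signal and the final hammer velocity."""
    dt = 1.0 / fs
    x, v = 0.0, v0
    force = []
    for _ in range(steps):
        f = hunt_crossley(x, v)
        v -= (f / m) * dt      # contact force decelerates the hammer
        x += v * dt
        force.append(f)
    return force, v
```

The damping term makes the force asymmetric between compression and restitution, which is what produces a realistic energy loss (the hammer rebounds more slowly than it arrived).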
Download Sound Source Separation: Preprocessing For Hearing Aids And Structured Audio Coding
In this paper we consider the problem of separating different sound sources in multichannel audio signals. Different approaches to the problem of Blind Source Separation (BSS), e.g. Independent Component Analysis (ICA) as originally proposed by Herault and Jutten, and extensions that include delays, work well for artificially mixed signals. However, the quality of the separated signals is severely degraded for real sound recordings in the presence of reverberation. We consider a system with two sources and two sensors, and show how the quality of the separation can be improved with a simple model of the audio scene. More specifically, we estimate the delays between the sensor signals and put constraints on the deconvolution coefficients.
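The abstract only states that inter-sensor delays are estimated, not how. A minimal sketch of the standard approach is to pick the lag that maximizes the cross-correlation between the two sensor signals (generalized cross-correlation methods such as GCC-PHAT are common refinements); the function name is mine.

```python
import numpy as np

def estimate_delay(x1, x2, max_lag):
    """Brute-force cross-correlation delay estimate for a 2-sensor setup.
    Returns d in [-max_lag, max_lag] maximizing r(d) = sum_n x1[n]*x2[n+d];
    a positive d means x2 is a delayed copy of x1."""
    x1 = np.asarray(x1, float)
    x2 = np.asarray(x2, float)
    n = min(len(x1), len(x2)) - 2 * max_lag   # window valid at every lag
    best_d, best_r = 0, -np.inf
    for d in range(-max_lag, max_lag + 1):
        r = float(np.dot(x1[max_lag:max_lag + n],
                         x2[max_lag + d:max_lag + d + n]))
        if r > best_r:
            best_d, best_r = d, r
    return best_d
```

In a 2×2 source/sensor scenario, such delay estimates can then constrain the deconvolution filters so that the separation network only searches over physically plausible solutions.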
Download Audio Content Transmission
Content description has become a topic of interest for many researchers in the audiovisual field [1][2]. While manual annotation has been used for many years in different applications, the focus now is on finding automatic content-extraction and content-navigation tools. An increasing number of projects, in some of which we are actively involved, focus on the extraction of meaningful features from an audio signal. Meanwhile, standards like MPEG-7 [3] are trying to find a convenient way of describing audiovisual content. Nevertheless, content description is usually thought of as an additional information stream attached to the ‘actual content’, and the only envisioned scenario is that of a search and retrieval framework. However, in this article it will be argued that, given a suitable content description, the actual content itself may no longer be needed and we can concentrate on transmitting only its description. The receiver should then be able to interpret the information that, in the form of metadata, is available at its inputs, and synthesize new content relying only on this description. It is possibly in the music field that this last step has been developed furthest, which allows us to envision such a transmission scheme becoming available in the near future.
Download Discrete Implementation Of The First Order System Cascade As The Basis For A Melodic Segmentation Model
The basis for a low-level melodic segmentation model and its discrete implementation is presented. The model is based on the discrete approximation of the one-dimensional convective transport mechanism. In this way, a physically plausible mechanism for achieving multi-scale representation is obtained. Some aspects of edge detection theory thought to be relevant for solving similar problems in auditory perception are briefly introduced. Two examples presenting the dynamic behaviour of the model are shown.
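The abstract does not give the discretisation itself, but the title suggests the core structure: a cascade of identical first-order recursive sections, where tapping each stage output yields progressively smoother versions of the input, i.e. the multi-scale representation used for segmentation. A minimal sketch under that assumption (names and coefficient values are mine):

```python
def first_order_cascade(x, n_stages, a):
    """Cascade of identical one-pole lowpass sections
    y[n] = a*y[n-1] + (1 - a)*u[n], tapping every stage output.
    Each successive tap is a smoother (larger-scale) version of the
    input, giving a multi-scale representation of the melodic signal."""
    outputs = []
    u = list(x)
    for _ in range(n_stages):
        state = 0.0
        y = []
        for v in u:
            state = a * state + (1.0 - a) * v
            y.append(state)
        outputs.append(y)
        u = y
    return outputs
```

Because every section has unit DC gain, the cascade preserves the overall level while spreading transients over increasingly longer time scales, which is the property edge-detection-style segmentation relies on.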
Download LIFT: Likelihood-Frequency-Time Analysis For Partial Tracking And Automatic Transcription Of Music
We propose a new method for analysing the time-frequency domain, called LiFT. It is especially designed for partial tracking in a polyphonic automatic transcription model. After the signal passes through a constant-Q filter bank composed of twenty-four quarter-tone filters, it is analysed using a generalized maximum likelihood approach. Two hypotheses are tested: the first is that the output signal of a filter is a cosine plus noise; the second is that it corresponds to colored noise. This likelihood analysis is developed in two ways: in the time domain, treating the samples directly, and in the frequency domain, treating the short-time Fourier transform of the signal. For both approaches, we have tested the robustness to noise and the cosine detection power.
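The paper's exact likelihood formulation is not given in the abstract, but two of its ingredients are easy to sketch: the quarter-tone centre-frequency grid, and the classical result that for a cosine of known frequency in white Gaussian noise, the generalized likelihood test reduces to thresholding the energy of the signal's projection onto cos/sin at that frequency. Function names and thresholds below are mine.

```python
import math

def quartertone_centres(f0, n):
    """Centre frequencies of a constant-Q quarter-tone bank
    (24 filters per octave starting at f0)."""
    return [f0 * 2.0 ** (k / 24.0) for k in range(n)]

def cosine_statistic(x, f, fs):
    """Detection statistic for 'cosine at f plus noise' vs 'noise only':
    energy of the projection of x onto cos/sin at frequency f.
    Under noise only it stays near the noise floor; a strong partial
    at f makes it grow with the frame length."""
    c = sum(v * math.cos(2.0 * math.pi * f * i / fs) for i, v in enumerate(x))
    s = sum(v * math.sin(2.0 * math.pi * f * i / fs) for i, v in enumerate(x))
    return (c * c + s * s) / len(x)
```

Comparing this statistic against a threshold derived from the noise model is the decision step; the colored-noise hypothesis in the paper requires whitening first, which is omitted here.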
Download Separation Of Speech Signal From Complex Auditory Scenes
The hearing system, even when faced with complex auditory scenes and unfavourable conditions, is able to separate and recognize auditory events accurately. A great deal of effort has gone into understanding how the human auditory system processes acoustical data after capturing it. The aim of this work is the digital implementation of the decomposition of a complex sound into the separate parts it would appear to have for a listener. This operation is called signal separation. In this work, the separation of speech signals from complex auditory scenes has been studied, and techniques that address this problem have been evaluated experimentally.
Download EPS Models Of AM-FM Vocoder Output For New Sound Generation
The Phase Vocoder [1] was originally introduced as an alternative approach to speech coding but has won much greater acceptance among the music community as a tool for both sound analysis and composition [2]. Although dormant for some time, there has been a resurgence of interest in AM-FM speech signal descriptions in the last ten years [3], [4]. Building on some of the new ideas proffered, this work first considers their application to musical signals. It then demonstrates how parameterisation of the extracted AM-FM information using EPS (Exponential Polynomial Signal) models allows modification of the individual large- and small-grain features of the AM and FM components, thus providing a new way of generating audio effects.
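As background for the kind of AM-FM information being parameterised, a common way to extract instantaneous amplitude (AM) and frequency (FM) from a mono-component signal is via the analytic signal (FFT-based Hilbert transform). This sketch shows only that extraction step under my own naming; the paper's EPS modelling of the extracted envelopes is not reproduced here.

```python
import numpy as np

def am_fm_decompose(x, fs):
    """Instantaneous amplitude (AM) and frequency (FM, in Hz) of a
    mono-component signal via the FFT-based analytic signal."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                      # Hilbert weighting: keep positive
    h[0] = 1.0                           # frequencies, zero the negative ones
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    z = np.fft.ifft(X * h)               # analytic signal
    am = np.abs(z)                       # instantaneous amplitude
    phase = np.unwrap(np.angle(z))
    fm = np.diff(phase) * fs / (2.0 * np.pi)   # instantaneous frequency
    return am, fm
```

Modifying `am` and `fm` separately before resynthesis (e.g. smoothing only the small-grain FM ripple) is the kind of large-/small-grain manipulation the EPS parameterisation makes systematic.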
Download Magnitude-Complementary Filters For Dynamic Equalization
Discrete-time structures of first-order and second-order equalization filters are proposed. They turn out to be particularly useful in applications where the equalization parameters are varied dynamically, such as audio virtual reality. Their design allows simpler and more direct control of the filter coefficients, at the cost of a few additional computation cycles per time step when implemented on real-time processing devices.
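The paper's specific structures are not described in the abstract; the classic first-order structure in this family builds an equalizer from an allpass A(z), whose complementary outputs (1+A)/2 and (1-A)/2 are both allpass- and power-complementary. Weighting one branch gives a shelving equalizer whose gain and transition frequency map directly onto separate coefficients, which is what makes dynamic parameter variation easy. A minimal sketch, with illustrative parameter values:

```python
import math

def allpass1(x, a):
    """First-order allpass A(z) = (a + z^-1) / (1 + a z^-1), direct form."""
    y, x1, y1 = [], 0.0, 0.0
    for v in x:
        out = a * v + x1 - a * y1
        y.append(out)
        x1, y1 = v, out
    return y

def shelving(x, gain, fc, fs):
    """First-order treble-shelving equalizer built from an allpass:
    H(z) = 0.5*(1 + gain) + 0.5*(1 - gain) * A(z).
    Gain is 1 (0 dB) at DC and `gain` at Nyquist; the allpass
    coefficient a alone sets the transition frequency fc."""
    t = math.tan(math.pi * fc / fs)
    a = (t - 1.0) / (t + 1.0)
    ap = allpass1(x, a)
    return [0.5 * (1.0 + gain) * v + 0.5 * (1.0 - gain) * w
            for v, w in zip(x, ap)]
```

Because `gain` only scales the two branch outputs while `a` only moves the transition frequency, either parameter can be updated per time step without redesigning the whole filter.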