Download Morphing Instrument Body Models
In this study we present morphing methods for musical instrument body models using DSP techniques. These methods gradually transform a given body model into another in a controlled way, and they guarantee stability of the body models at each intermediate step. This makes it possible to morph from a body model of a certain size to a larger or smaller one. It is also possible to extrapolate beyond the original models, thus creating interesting new (out of this world) instrument bodies. The opportunity to create a time-varying body, i.e., a model that changes in size over time, results in an interesting audio effect. This paper demonstrates morphing mainly via guitar body examples, but morphing can naturally also be extended to other instruments with reverberant resonators as their bodies. Morphing from a guitar body model to a violin body model is presented as an example. Implementation and perceptual issues of the signal processing methods are discussed. For related sound demonstrations, see www.acoustics.hut.fi/demo/dafx2001-bodymorph/.
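The abstract does not spell out how stability is guaranteed at each intermediate step. One classical way to achieve this for all-pole resonator models (a sketch under that assumption, not necessarily the authors' method) is to interpolate in the reflection-coefficient domain: any convex combination of coefficient sets with |k| < 1 again has |k| < 1, so every intermediate model is stable.

```python
import numpy as np

def poly_to_rc(a):
    """Step-down recursion: all-pole coefficients [1, a1..ap] -> reflection coefficients."""
    a = np.array(a, float)
    ks = []
    while len(a) > 1:
        k = a[-1]
        ks.append(k)
        a = (a[:-1] - k * a[-1:0:-1]) / (1.0 - k * k)
    return np.array(ks[::-1])

def rc_to_poly(ks):
    """Step-up recursion: reflection coefficients -> all-pole polynomial [1, a1..ap]."""
    a = np.array([1.0])
    for k in ks:
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
    return a

def morph_body(a1, a2, alpha):
    """Interpolate two same-order all-pole body models.

    For alpha in [0, 1] stability is guaranteed, since each interpolated
    reflection coefficient stays inside (-1, 1).  Extrapolation (alpha
    outside [0, 1]) can push |k| past 1 and would need clipping.
    """
    k = (1.0 - alpha) * poly_to_rc(a1) + alpha * poly_to_rc(a2)
    return rc_to_poly(k)
```

The same idea works with any stability-preserving parameterisation (e.g. line spectral frequencies); reflection coefficients are simply the easiest to verify.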
Download Extending Digital Waveguides To Include Material Modelling
Digital waveguides have been extensively used for musical instrument and room acoustics modelling. They can be used to form simple models of ideal wave propagation in one, two and three dimensions. Models in 1D for string and wind instrument synthesis and, more recently, a model for a drum, realised by interfacing 2D and 3D waveguide meshes, have been presented [1]. A framework is thus in place for the virtual construction of new or abstract musical instruments. However, straightforward waveguides and waveguide meshes behave in an extremely idealized manner, and phenomena such as stiffness and internal friction are often compromised or ignored altogether. In this paper we discuss and evaluate models which incorporate material parameters. We review a 1D bar model, and then present a 2D extension to model plates. We also discuss the problem of modelling frequency-dependent damping by describing a waveguide model of a visco-elastically damped string.
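As background for the 1D case, the ideal digital waveguide string can be sketched as two delay lines carrying right- and left-going traveling waves, with inverting (and here slightly lossy) reflections at the rigid terminations. The function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def waveguide_string(length, pluck_pos, n_samples, loss=0.995):
    """Ideal plucked string: two counter-propagating delay lines.

    length     : delay-line length in samples (f0 ~ fs / (2 * length))
    pluck_pos  : apex of the initial triangular displacement
    loss       : per-reflection gain (< 1) standing in for all losses
    """
    # Triangular initial displacement, split equally between the two rails
    init = np.concatenate([np.linspace(0.0, 1.0, pluck_pos),
                           np.linspace(1.0, 0.0, length - pluck_pos)])
    right = init / 2.0   # right-going wave (toward the bridge)
    left = init / 2.0    # left-going wave (toward the nut)
    obs = length // 2    # observation point on the string
    out = np.zeros(n_samples)
    for n in range(n_samples):
        out[n] = right[obs] + left[obs]           # physical displacement = sum of rails
        new_right0 = -loss * left[0]              # inverting reflection at the nut
        new_left_end = -loss * right[-1]          # inverting reflection at the bridge
        right = np.roll(right, 1)
        right[0] = new_right0
        left = np.roll(left, -1)
        left[-1] = new_left_end
    return out
```

Material behaviour such as stiffness or internal friction would replace the plain `loss` gain with frequency-dependent filters at the terminations, which is exactly the kind of extension the paper addresses.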
Download Is The Player More Influenced By The Auditory Than The Tactile Feedback From The Instrument?
What sensory feedback, tactile or auditory, is the more important for a musician when playing? In an attempt to answer this question, subjects were asked to play along with a metronome while the auditory feedback from their playing was manipulated. The preliminary results showed a tendency to match sound with sound, i.e. players initiated strokes earlier as the delay increased. An increase in timing errors indicates a possible breakpoint around 55 ms. As the feedback was delayed even further, subjects showed increasing difficulty in maintaining a steady rhythm.
Download Classification Of Music Signals In The Visual Domain
With the huge increase in the availability of digital music, it has become more important to automate the task of querying a database of musical pieces. At the same time, a computational solution of this task might give us an insight into how humans perceive and classify music. In this paper, we discuss our attempts to classify music into three broad categories: rock, classical and jazz. We discuss the feature extraction process and the particular choice of features that we used: spectrograms and mel-scaled cepstral coefficients (MFCCs). We use texture-of-texture models to generate feature vectors out of these. Together, these features are capable of capturing the frequency-power profile of the sound as the song proceeds. Finally, we attempt to classify the generated data using a variety of classifiers. We discuss our results and the inferences that can be drawn from them.
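As an illustration of the texture-of-texture idea, the sketch below summarises frame-level features over short texture windows (mean and standard deviation), then summarises those summaries again to get one vector per song. Simple log band energies stand in for the paper's MFCCs, and a nearest-centroid rule stands in for its classifiers; all names are hypothetical.

```python
import numpy as np

def frame_features(x, frame=512, hop=256, n_bands=8):
    """Per-frame log band energies: a crude stand-in for MFCCs."""
    feats = []
    for i in range(0, len(x) - frame, hop):
        spec = np.abs(np.fft.rfft(x[i:i + frame] * np.hanning(frame))) ** 2
        bands = np.array_split(spec, n_bands)
        feats.append(np.log(np.array([b.sum() for b in bands]) + 1e-10))
    return np.array(feats)                      # shape (n_frames, n_bands)

def texture_of_texture(feats, win=20):
    """Mean/std over texture windows, then mean/std of those: one vector per song."""
    blocks = [feats[i:i + win] for i in range(0, len(feats) - win + 1, win)]
    tex = np.array([np.concatenate([b.mean(0), b.std(0)]) for b in blocks])
    return np.concatenate([tex.mean(0), tex.std(0)])

def nearest_centroid(train_vecs, train_labels, vec):
    """Classify by distance to the per-class mean feature vector."""
    labels = sorted(set(train_labels))
    cents = {l: np.mean([v for v, t in zip(train_vecs, train_labels) if t == l],
                        axis=0) for l in labels}
    return min(labels, key=lambda l: np.linalg.norm(vec - cents[l]))
```

The nesting captures both short-term texture and how that texture varies over the course of the song, which is what lets the features track the frequency-power profile "as the song proceeds".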
Download Additive Synthesis Of Sound By Taking Advantage Of Psychoacoustics
In this paper we present an original technique designed to speed up additive synthesis. This technique consists of taking psychoacoustic phenomena (thresholds of hearing and masking) into account in order to ignore inaudible partials during the synthesis process, thus saving a great deal of computation time. Our algorithm relies on a specific data structure called a "skip list" and has proven very efficient in practice. As a consequence, we are now able to synthesize an impressive number of spectral sounds in real time without overloading the processor.
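The masking model and the skip-list bookkeeping are beyond an abstract-level sketch, but the core pruning idea can be illustrated with the absolute threshold of hearing alone, using Terhardt's well-known approximation. The mapping of amplitude 1 to a reference SPL is an assumed calibration, and the function names are illustrative.

```python
import numpy as np

def hearing_threshold_db(f):
    """Terhardt's approximation of the absolute threshold of hearing (dB SPL)."""
    fk = f / 1000.0
    return (3.64 * fk ** -0.8
            - 6.5 * np.exp(-0.6 * (fk - 3.3) ** 2)
            + 1e-3 * fk ** 4)

def additive_synth(partials, dur, fs=44100, ref_db=90.0):
    """Sum only the audible partials.

    partials : list of (freq_hz, amplitude) pairs
    ref_db   : assumed playback level of an amplitude-1.0 partial
    Returns the signal and the number of oscillators actually run.
    """
    t = np.arange(int(dur * fs)) / fs
    out = np.zeros_like(t)
    n_used = 0
    for f, a in partials:
        level_db = ref_db + 20.0 * np.log10(max(a, 1e-12))
        if f >= fs / 2 or level_db < hearing_threshold_db(f):
            continue                      # inaudible: skip the oscillator entirely
        out += a * np.sin(2.0 * np.pi * f * t)
        n_used += 1
    return out, n_used
```

Adding the masking thresholds described in the paper would prune far more partials than the absolute threshold alone, since loud partials render many quieter neighbours inaudible.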
Download An Efficient Pitch-Tracking Algorithm Using A Combination Of Fourier Transforms
In this paper we present a technique for detecting the pitch of sound using a series of two forward Fourier transforms. We use an enhanced version of the Fourier transform for better accuracy, as well as a tracking strategy among pitch candidates for increased robustness. This efficient technique allows us to precisely determine the pitches of harmonic sounds such as the voice or classical musical instruments, but also of more complex sounds such as rippled noises.
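A series of two forward Fourier transforms resembles classical cepstrum-based pitch detection: transform, take the log magnitude, transform again, and pick the strongest peak in the expected quefrency range. A minimal sketch of that baseline, without the paper's enhanced transform or candidate tracking:

```python
import numpy as np

def cepstral_pitch(x, fs, fmin=150.0, fmax=1000.0):
    """Estimate pitch via two forward transforms (cepstrum-style).

    Restricting the search to [fmin, fmax] keeps the peak picker away
    from rahmonics (integer multiples of the true quefrency).
    """
    n = len(x)
    log_spec = np.log(np.abs(np.fft.fft(x * np.hanning(n))) + 1e-10)
    log_spec -= log_spec.mean()
    ceps = np.abs(np.fft.fft(log_spec))          # second forward transform
    qmin, qmax = int(fs / fmax), int(fs / fmin)  # quefrency range in samples
    q = qmin + np.argmax(ceps[qmin:qmax])        # strongest periodicity of the log spectrum
    return fs / q
```

The paper's tracking strategy would consider several such peaks per frame and choose the sequence that evolves most smoothly over time, rather than trusting a single argmax.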
Download Low-Cost Geometry-Based Acoustic Rendering
In this paper we propose a new sound rendering algorithm that allows the listener to move within a dynamic acoustic environment. The main goal of this work is to implement real-time sound rendering software for Virtual Reality applications, which runs on low-cost platforms. The resulting numeric structure is a recursive filter able to efficiently simulate the impulse response of a large room.
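The paper's specific recursive structure is not given in the abstract. A classic example of an efficient recursive room-response simulator is a Schroeder reverberator (parallel feedback combs followed by series allpasses), sketched here with illustrative delay lengths and gains rather than the authors' geometry-derived parameters.

```python
import numpy as np

def comb(x, d, g):
    """Feedback comb filter: y[n] = x[n] + g * y[n - d]."""
    y = np.array(x, float)
    for n in range(d, len(y)):
        y[n] += g * y[n - d]
    return y

def allpass(x, d, g):
    """Schroeder allpass: H(z) = (-g + z^-d) / (1 - g z^-d)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        xd = x[n - d] if n >= d else 0.0
        yd = y[n - d] if n >= d else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def schroeder_reverb(x):
    """Four parallel combs (echo density) into two series allpasses (diffusion)."""
    combs = [(1557, 0.84), (1617, 0.83), (1491, 0.82), (1422, 0.81)]
    wet = sum(comb(x, d, g) for d, g in combs) / len(combs)
    for d, g in [(225, 0.7), (556, 0.7)]:
        wet = allpass(wet, d, g)
    return wet
```

A geometry-based method would instead derive the early reflections (delays and gains) from the room model and use a recursive structure like this only for the late, statistically dense part of the response.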
Download Multiresolution Sinusoidal/Stochastic Model For Voiced-Sounds
The goal of this paper is to introduce a complete analysis/resynthesis method for the stationary part of voiced sounds. The method is based on a new class of wavelets, the Harmonic-Band Wavelets (HBWT). Wavelets have been widely employed in signal processing [1, 2]. In the context of sound processing they provided very interesting results in their first harmonic version: the Pitch Synchronous Wavelet Transform (PSWT) [3]. We introduced the Harmonic-Band Wavelets in a previous edition of the DAFx [4]. The HBWT, with respect to the PSWT, allows one to manipulate the analysis coefficients of each harmonic independently. Furthermore, one is able to group the analysis coefficients according to a finer subdivision of the spectrum of each harmonic, due to the multiresolution analysis of the wavelets. This allows one to separate the deterministic components of voiced sounds, corresponding to the harmonic peaks, from the noisy/stochastic components. A first result was the development of a parametric representation of the HBWT analysis coefficients corresponding to the stochastic components [5, 7]. In this paper we present the results concerning a parametric representation of the HBWT analysis coefficients of the deterministic components. The method recalls the sinusoidal models, where one models time-varying amplitudes and time-varying phases [8, 9]. This method provides an interesting new technique for sound synthesis and sound processing, integrating a parametric representation of both the deterministic and the stochastic components of sounds. At the same time it can be seen as a tool for a parametric representation of sound and for data compression.
Download An Adaptive Technique For Modeling Audio Signals
In many applications of audio signal processing, modeling of the signal is required. The most commonly used approach to audio signal modeling is to treat the audio signal as an autoregressive (AR) process that is locally stationary over a relatively short time interval. In this case the audio signal can be modeled with an all-pole IIR (infinite impulse response) filter, which leads to LPC (linear predictive coding), where the current input sample is predicted by a linear combination of past samples of the input signal. However, in practice the relatively short time interval (i.e. a frame) over which the signal is stationary varies significantly within the audio data stream. The information content of the frames also shows considerable variation. For proper modeling of an audio signal it is essential that a suitable frame size and an appropriate number of model parameters are used instead of a constant frame size and model order. In this paper we present an adaptive frame-by-frame technique for modeling audio signals, which automatically adjusts the optimal modeling frame size and the optimal number of model parameters for each frame.
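A minimal sketch of frame-wise AR modeling with automatic order selection: LPC coefficients via the standard Levinson-Durbin recursion, with an AIC-style criterion standing in for whatever frame-size and order criterion the paper actually uses.

```python
import numpy as np

def autocorr(x, maxlag):
    """Biased autocorrelation estimates r[0..maxlag]."""
    x = x - x.mean()
    return np.array([np.dot(x[:len(x) - l], x[l:]) for l in range(maxlag + 1)]) / len(x)

def levinson(r, order):
    """Levinson-Durbin: autocorrelation -> LPC polynomial [1, a1..ap] and residual power."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        a[1:i + 1] = a[1:i + 1] + k * np.concatenate([a[i - 1:0:-1], [1.0]])
        err *= (1.0 - k * k)
    return a, err

def fit_frame(x, max_order=20):
    """Choose the model order for one frame by an AIC-style trade-off
    between residual power and parameter count (an illustrative criterion)."""
    r = autocorr(x, max_order)
    best = None
    for p in range(1, max_order + 1):
        a, err = levinson(r, p)
        aic = len(x) * np.log(err + 1e-12) + 2 * p
        if best is None or aic < best[0]:
            best = (aic, p, a)
    return best[1], best[2]
```

An adaptive scheme along the paper's lines would additionally try several frame sizes and keep the combination of frame length and order that scores best, rather than fixing the frame length as this sketch does.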
Download Sound Source Separation: Preprocessing For Hearing Aids And Structured Audio Coding
In this paper we consider the problem of separating different sound sources in multichannel audio signals. Different approaches to the problem of Blind Source Separation (BSS), e.g. the Independent Component Analysis (ICA) originally proposed by Herault and Jutten, and extensions to this including delays, work fine for artificially mixed signals. However, the quality of the separated signals is severely degraded for real sound recordings when there is reverberation. We consider the system with 2 sources and 2 sensors, and show how we can improve the quality of the separation with a simple model of the audio scene. More specifically, we estimate the delays between the sensor signals and put constraints on the deconvolution coefficients.
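The first step described, estimating the delays between the two sensor signals, can be sketched as a cross-correlation peak search. This is a simplified stand-in for the paper's estimator; in the full method the estimated delays then constrain the deconvolution filters of the 2x2 separation network.

```python
import numpy as np

def estimate_delay(x1, x2, max_lag):
    """Delay of x2 relative to x1, in samples, via the cross-correlation peak.

    A positive result means x2 is a delayed copy of x1, as happens when a
    source is farther from sensor 2 than from sensor 1.
    """
    def xcorr(l):
        # sum over n of x1[n] * x2[n + l], for integer lag l
        if l >= 0:
            return np.dot(x1[:len(x1) - l], x2[l:])
        return np.dot(x1[-l:], x2[:len(x2) + l])
    return max(range(-max_lag, max_lag + 1), key=xcorr)
```

With reverberation, the correlation function has many secondary peaks from reflections; constraining the separation filters around the direct-path delay found here is precisely what keeps the deconvolution from locking onto those spurious peaks.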