Gestural Strategies for Specific Filtering Processes
Gestural control of filters requires defining both the filters themselves and the gestures that activate them. We give the example of several real “virtual instruments” that rely on this gestural control. In this way we show that making music differs from producing algorithms, and that good gestural control may substitute for, or at least complement, a complex scheme of digital audio effects in real-time implementations.
Digital Audio Effects in the Wavelet Domain
Audio signals are often stored or transmitted in a compressed representation, which poses a problem when signal processing is required: the signal must typically be converted back to a time-domain representation, processed, and then re-transformed. This is time-consuming and computationally intensive; it is potentially more efficient to apply signal processing while the signal remains in the transform domain. We have implemented a scheme whereby linear processing of the traditional type, often instinctively understood by those working in the audio field, may be applied to signals stored in a wavelet-domain representation. Results are presented which demonstrate that the method produces the same output – to within the limits of machine precision – as time-domain processing, for less computational effort than the full explicit round trip through the time domain and back again. The potential benefits for linear effects processing (for example, EQ and sample-level delays and echoes) and for non-linear processing such as dynamics processing are introduced and discussed.
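The equivalence claimed above can be illustrated with a toy sketch (not the paper's actual scheme): a one-level Haar wavelet transform and the simplest possible linear operator, a global gain, applied either in the time domain or to the wavelet coefficients. All names here are illustrative.

```python
import math

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of haar_dwt."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append(s * (a + d))
        x.append(s * (a - d))
    return x

signal = [0.5, -0.25, 1.0, 0.0, -0.75, 0.3, 0.2, -0.1]
gain = 0.5

# Time-domain processing
direct = [gain * v for v in signal]

# The same linear operator applied in the wavelet domain
a, d = haar_dwt(signal)
via_wavelet = haar_idwt([gain * v for v in a], [gain * v for v in d])

# Agreement to within machine precision
err = max(abs(p - q) for p, q in zip(direct, via_wavelet))
```

A general EQ filter does not commute with the transform as trivially as a gain does; handling such filters efficiently in the wavelet domain is precisely what the paper's scheme addresses.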
Doppler Simulation and the Leslie
An efficient algorithm for simulating the Doppler effect using interpolating and de-interpolating delay lines is described. The Doppler simulator is used to simulate a rotating horn to achieve the Leslie effect. Measurements of a horn from a real Leslie are used to calibrate digital filters which simulate the changing, angle-dependent frequency response of the rotating horn.
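The core of a delay-line Doppler simulator can be sketched as follows (a minimal illustration, not the paper's implementation): reading from a delay line at a fractional, time-varying delay via linear interpolation. A delay shrinking at a constant rate of 0.01 samples per sample should raise the pitch by a factor of 1.01; a crude zero-crossing estimate checks this.

```python
import math

def doppler_read(buf, delay):
    """Read from a delay line at a fractional delay using linear interpolation."""
    i = int(delay)
    frac = delay - i
    # buf[-1 - i] is the sample `i` steps in the past
    return (1.0 - frac) * buf[-1 - i] + frac * buf[-2 - i]

sr, f_in = 8000.0, 440.0
buf = [0.0] * 512          # delay-line memory (newest sample at the end)
out = []
for n in range(4000):
    buf.append(math.sin(2 * math.pi * f_in * n / sr))
    buf.pop(0)
    # Delay shrinking at 0.01 samples/sample: pitch ratio 1.01 (source approaching)
    out.append(doppler_read(buf, 50.0 - 0.01 * n))

# Rough pitch estimate from zero crossings over the steady region
crossings = sum(1 for k in range(501, 3500) if out[k - 1] * out[k] < 0)
f_est = crossings / (2 * (2999 / sr))   # expect about 440 * 1.01 = 444.4 Hz
```

Higher-order (e.g. Lagrange or allpass) interpolation reduces the slight amplitude and phase distortion that linear interpolation introduces at fractional delays.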
Implementation Strategies for Adaptive Digital Audio Effects
Adaptive digital audio effects admit several implementations, depending on the context. This paper presents a general adaptive DAFx diagram, using one or two input sounds and gestural control of the mapping. Effects are classified according to the perceptual parameters that they modify. New adaptive effects are presented, such as martianization and vowel colorization. We highlight specific issues, such as the differing demands of real-time and non-real-time implementation, improvements obtained by scaling the control curve, and solutions to particular problems, such as quantization methods for delay-line-based effects. Musical applications are given as illustrations.
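The general adaptive-effect pattern described above (feature extraction from an input sound, a scaled mapping curve, an effect driven by the result) can be sketched as follows. This is a loose illustration under assumed choices, not one of the paper's effects: the RMS envelope of the input is min-max scaled, with an optional warping exponent standing in for control-curve scaling, and drives the depth of a tremolo.

```python
import math

def rms_frames(x, frame=256):
    """Frame-based RMS envelope: one control value per frame."""
    return [math.sqrt(sum(v * v for v in x[i:i + frame]) / frame)
            for i in range(0, len(x) - frame + 1, frame)]

def scale_curve(c, lo, hi, warp=1.0):
    """Min-max scale a control curve into [lo, hi]; `warp` bends the mapping."""
    cmin, cmax = min(c), max(c)
    span = (cmax - cmin) or 1.0
    return [lo + (hi - lo) * ((v - cmin) / span) ** warp for v in c]

sr = 8000
# Input sound whose loudness rises over time
x = [min(1.0, n / 4000) * math.sin(2 * math.pi * 220 * n / sr)
     for n in range(8192)]

# Adaptive mapping: the RMS envelope of the input drives the depth of a
# tremolo (5 Hz amplitude modulation) applied to the same input.
depth = scale_curve(rms_frames(x), 0.0, 0.9, warp=0.5)
y = [(1.0 - depth[min(n // 256, len(depth) - 1)]
      * 0.5 * (1.0 + math.sin(2 * math.pi * 5 * n / sr))) * x[n]
     for n in range(len(x))]
```

Swapping the extracted feature (pitch, centroid, voiciness) or the controlled parameter changes the effect while the diagram stays the same, which is the point of the general framework.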
Soundspotter - A Prototype System for Content-based Audio Retrieval
We present the audio retrieval system “Soundspotter,” which allows the user to select a specific passage within an audio file and retrieve perceptually similar passages. The system extracts frame-based features from the sound signal and performs pattern matching on the resulting sequences of feature vectors. Finally, an adjustable number of best matches is returned, ranked by their similarity to the reference passage. Soundspotter comprises several alternative retrieval algorithms, including dynamic time warping and trajectory matching based on a self-organizing map. We explain the algorithms and report initial results of a comparative evaluation.
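The dynamic-time-warping matcher mentioned above can be sketched in its textbook form (a minimal version, not Soundspotter's code): passages are sequences of feature vectors, and DTW tolerates tempo differences by allowing the alignment path to stretch.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of feature vectors."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two frames' feature vectors
            cost = sum((p - q) ** 2 for p, q in zip(a[i - 1], b[j - 1])) ** 0.5
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Rank candidate passages by similarity to a reference passage
reference = [(0.1, 0.2), (0.4, 0.5), (0.8, 0.9)]
candidates = {
    "stretched copy": [(0.1, 0.2), (0.1, 0.2), (0.4, 0.5), (0.8, 0.9)],
    "unrelated":      [(0.9, 0.1), (0.9, 0.1), (0.9, 0.1)],
}
ranked = sorted(candidates, key=lambda k: dtw_distance(reference, candidates[k]))
```

The time-stretched copy scores a distance of zero despite its different length, which is exactly the robustness a retrieval system needs when the query and the match are played at different tempi.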
A Hybrid Approach to Musical Note Onset Detection
Common problems with current methods of musical note onset detection are detecting fast passages of musical audio, detecting all onsets within a passage with a wide dynamic range, and detecting onsets of varying types, as in multi-instrumental music. We present a method that uses a subband decomposition approach to onset detection. An energy-based detector is used on the upper subbands to detect strong transient events. This yields good time resolution for the onsets, but does not detect softer or weaker onsets. A frequency-based distance measure is formulated for use with the lower subbands, improving detection accuracy on softer onsets. We also present a method for improving the detection function by using a smoothed difference metric. Finally, we show that the detection threshold may be set automatically from the statistics of the detection function, with results in most cases comparable to manually set thresholds.
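A stripped-down sketch of the two ideas in the last sentences (a rectified energy-difference detection function and a threshold derived from its statistics) might look like this. It is a single-band toy, not the paper's subband hybrid, and the constant `k` is an assumed tuning parameter.

```python
import math

def onset_detect(x, frame=128, k=2.0):
    """Energy-based onset detection with an automatic, statistics-derived threshold."""
    energies = [sum(v * v for v in x[i:i + frame])
                for i in range(0, len(x) - frame + 1, frame)]
    # Half-wave-rectified difference of successive frame energies
    df = [max(0.0, energies[i] - energies[i - 1]) for i in range(1, len(energies))]
    mean = sum(df) / len(df)
    std = math.sqrt(sum((v - mean) ** 2 for v in df) / len(df))
    # Threshold set automatically from the detection function's statistics
    thresh = mean + k * std
    return [i + 1 for i, v in enumerate(df) if v > thresh]

# Silence followed by a tone burst: one clear onset at sample 1024 (frame 8)
x = [0.0] * 1024 + [math.sin(2 * math.pi * 440 * n / 8000) for n in range(1024)]
onsets = onset_detect(x)
```

An energy detector like this flags the strong transient but, as the abstract notes, misses soft onsets; that is what the lower-subband frequency-distance measure is for.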
Automatic Polyphonic Piano Note Extraction Using Fuzzy Logic in a Blackboard System
This paper presents a piano transcription system that transforms audio into MIDI format. Human knowledge and psychoacoustic models are implemented in a blackboard architecture, which allows knowledge to be added in a top-down fashion. The analysis is adapted to the information acquired. This technique is referred to as a prediction-driven approach, and it attempts to simulate the adaptation and prediction process taking place in human auditory perception. In this paper we describe the implementation of polyphonic note recognition using a Fuzzy Inference System (FIS) as one of the knowledge sources in a blackboard system. The performance of the transcription system shows how polyphonic music transcription is still an unsolved problem, with a success rate of 45% according to Dixon's formula. However, if we consider only the transcribed notes, the success rate increases to 74%. Moreover, the results obtained in the paper presented in , show how the transcription can be used successfully in a retrieval system, encouraging the authors to develop this technique toward more accurate transcription results.
Polyphonic Transcription Using Piano Modeling for Spectral Pattern Recognition
Polyphonic transcription requires correct identification of notes and chords. We have focused our efforts on piano chord identification, using pattern recognition with spectral patterns as the identification method: the spectrum of the signal is compared with a set of stored spectra (patterns). The patterns are generated by a piano model that takes into account acoustic parameters and typical manufacturer criteria, which are adjusted by training the model with a few notes. The algorithm identifies notes and, iteratively, chords. Chord identification requires spectral subtraction, which is performed using masks. The analysis algorithm used for training avoids the detection of false partials due to nonlinear components and takes inharmonicity into account for spectrum segmentation. The method has been tested with live piano sounds recorded from two different grand pianos. Chords of up to four notes have been successfully identified.
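The iterate-and-subtract loop described above can be sketched with toy spectra (the patterns and the binary masking here are illustrative stand-ins for the paper's trained piano model): match the observed spectrum against each pattern, take the best, mask out its bins, and repeat.

```python
import math

def correlate(spec, pattern):
    """Normalized correlation between an observed spectrum and a stored pattern."""
    dot = sum(s * p for s, p in zip(spec, pattern))
    norm = (math.sqrt(sum(s * s for s in spec))
            * math.sqrt(sum(p * p for p in pattern)))
    return dot / norm if norm else 0.0

def mask_subtract(spec, pattern):
    """Spectral subtraction with a binary mask: zero the bins the pattern occupies."""
    return [0.0 if p > 0 else s for s, p in zip(spec, pattern)]

# Toy spectral patterns on a 12-bin grid (stand-ins for trained piano spectra)
patterns = {
    "C": [1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0],
    "E": [0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0],
    "G": [0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1],
}

# Observed "chord" spectrum: C and E sounding together
observed = [p + q for p, q in zip(patterns["C"], patterns["E"])]

found, spec = [], observed[:]
for _ in range(2):  # iteratively identify the best match, then subtract it
    best = max(patterns, key=lambda name: correlate(spec, patterns[name]))
    found.append(best)
    spec = mask_subtract(spec, patterns[best])
```

Real piano spectra overlap heavily (shared and inharmonically shifted partials), which is why the paper needs a calibrated piano model rather than idealized templates like these.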
Survey on Extraction of Sinusoids in Stationary Sounds
This paper surveys the numerous analysis methods proposed for extracting the frequency, amplitude, and phase of sinusoidal components from stationary sounds, which is of great interest for spectral modeling, digital audio effects, and pitch tracking, for instance. We consider different methods that improve on the frequency resolution of a plain FFT, and compare their accuracy in frequency and amplitude. As the results show, all considered methods have a great advantage over the plain FFT.
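One classic example of such a method (shown here as a generic illustration, not as the survey's own result) is parabolic interpolation of the log-magnitude spectrum: fitting a parabola through the peak bin and its two neighbors recovers a frequency estimate well inside the FFT bin width.

```python
import cmath
import math

def dft_mag(x):
    """Magnitude of the DFT, positive frequencies only (plain O(N^2) for clarity)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
            for k in range(n // 2)]

sr, n = 1000.0, 256
f_true = 123.4                       # lies between bins (bin width sr/n ~ 3.91 Hz)
w = [0.5 - 0.5 * math.cos(2 * math.pi * t / n) for t in range(n)]   # Hann window
x = [w[t] * math.sin(2 * math.pi * f_true * t / sr) for t in range(n)]

mag = dft_mag(x)
k = max(range(1, len(mag) - 1), key=lambda i: mag[i])

# Plain-FFT estimate: the peak bin's center frequency
f_fft = k * sr / n

# Parabolic fit through the log-magnitudes of the three bins around the peak
a, b, c = (20 * math.log10(mag[k - 1]), 20 * math.log10(mag[k]),
           20 * math.log10(mag[k + 1]))
delta = 0.5 * (a - c) / (a - 2 * b + c)   # fractional-bin offset of the peak
f_interp = (k + delta) * sr / n
```

With a Hann window the interpolated estimate lands within a small fraction of a bin of the true frequency, while the plain-FFT estimate is off by up to half a bin; the same three-bin fit also refines the amplitude estimate.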
Sinusoidal Parameter Extraction and Component Selection in a Non-Stationary Model
In this paper, we introduce a new analysis technique particularly suitable for the sinusoidal modeling of non-stationary signals. This method, based on amplitude and frequency modulation estimation, aims at improving traditional Fourier parameters and enables us to introduce a new peak selection process, so that only peaks having coherent parameters are considered in subsequent stages (e.g. partial tracking, synthesis). This allows our spectral model to better handle natural sounds.