Different ways to write digital audio effects programs
This paper is a very basic one: it explains how to write a digital audio effect in a non-real-time situation with a very general mathematical language such as MATLAB, and how such digital audio effects can be used in real life.
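Below is a minimal sketch of the kind of non-real-time processing the paper describes, applied offline to a whole signal. The paper uses MATLAB; NumPy is used here as a stand-in, and the effect (a feedback echo), its delay and its gains are illustrative assumptions rather than an example taken from the paper.

```python
import numpy as np

def feedback_echo(x, sr, delay_s=0.25, feedback=0.5, mix=0.5):
    """Apply a feedback echo to the whole signal x offline (non-real time)."""
    d = int(delay_s * sr)                  # delay in samples
    y = np.copy(x).astype(np.float64)
    for n in range(d, len(x)):             # sample-by-sample feedback recursion
        y[n] += feedback * y[n - d]
    return (1.0 - mix) * x + mix * y       # dry/wet mix

if __name__ == "__main__":
    sr = 44100
    t = np.arange(sr) / sr
    x = 0.5 * np.sin(2 * np.pi * 440.0 * t)    # 1 s test tone
    out = feedback_echo(x, sr)
    print(out.shape, out.dtype)
```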
Visual representations of digital audio effects and of their control
This article gathers some reflections on how graphical representations of sound and graphical interfaces can help in making and controlling digital audio effects. Time-frequency visual representations help in understanding digital audio effects, but they are also a tool in themselves for these sound transformations. The basic laws of analysis-transformation-resynthesis will be recalled. Graphical interfaces help with the control of effects. The choice of a relation between the user's control parameters and the effective values used by the effect shows the importance of the mapping and of the graphical design. Graphical and sonic editor programs have in common the use of plug-ins, which use the same kind of interface and follow the same kind of approach.
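As a companion to the analysis-transformation-resynthesis scheme recalled above, here is a minimal sketch of a time-frequency edit: an STFT is computed, bins above a cutoff are muted (a crude "graphical" selection on the spectrogram), and the sound is resynthesised by inverse STFT. SciPy's stft/istft, the window size and the cutoff are assumptions made for illustration, not the paper's material.

```python
import numpy as np
from scipy.signal import stft, istft

def tf_edit(x, sr, cutoff_hz=2000.0, nperseg=1024):
    # Analysis: short-time Fourier transform (the spectrogram one would display)
    f, t, Z = stft(x, fs=sr, nperseg=nperseg)
    # Transformation: mute all bins above cutoff_hz, as if erased on the display
    Z[f > cutoff_hz, :] = 0.0
    # Resynthesis: inverse STFT with overlap-add
    _, y = istft(Z, fs=sr, nperseg=nperseg)
    return y[:len(x)]

if __name__ == "__main__":
    sr = 16000
    x = np.random.randn(sr)                # 1 s of noise as a test input
    y = tf_edit(x, sr)
    print(x.shape, y.shape)
```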
Efficient linear prediction for digital audio effects
In many audio applications an appropriate spectral estimation from a signal sequence is required. A common approach to this task is linear prediction [1], where the signal spectrum is modelled by an all-pole (purely recursive) IIR (infinite impulse response) filter. Linear prediction is commonly used for coding of audio signals, leading to linear predictive coding (LPC). But some audio effects can also be created using the spectral estimation of LPC. In this paper we consider the use of LPC in a real-time system. We investigate several methods of calculating the prediction coefficients so as to have an almost fixed workload per sample. We present modifications of the autocorrelation method and of the Burg algorithm for a sample-based calculation of the filter coefficients as alternatives to the gradient adaptive lattice (GAL) method. We discuss the prediction gain obtained with these methods with respect to the complexity required per sample. The desired constant workload leads to a fast update of the spectral model, which is of great benefit for both coding and audio effects.
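As an illustration of the block-based starting point the paper modifies, here is a minimal sketch of the autocorrelation method: the block autocorrelation is estimated and the Levinson-Durbin recursion yields the all-pole model 1/A(z) together with the prediction-error power (hence the prediction gain). The model order, block size and test signal are illustrative assumptions; the sample-based variants proposed in the paper are not reproduced here.

```python
import numpy as np
from scipy.signal import lfilter

def lpc_autocorrelation(x, order):
    """Autocorrelation method + Levinson-Durbin recursion.

    Returns the prediction-error filter A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p
    and the final prediction-error power (same scaling as r[0]).
    """
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]  # r[0..p]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])    # sum_j a[j] * r[m-j]
        k = -acc / err                                 # reflection coefficient
        a[1:m], a[m] = a[1:m] + k * a[m - 1:0:-1], k   # Levinson-Durbin update
        err *= (1.0 - k * k)                           # remaining error power
    return a, err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Test signal: white noise shaped by a known two-pole resonator
    x = lfilter([1.0], [1.0, -1.6, 0.81], rng.standard_normal(4096))
    a, err = lpc_autocorrelation(x, order=2)
    print("A(z):", np.round(a, 3))                     # close to [1, -1.6, 0.81]
    print("prediction gain: %.1f dB" % (10 * np.log10(np.sum(x ** 2) / err)))
```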
Traditional (?) implementations of a phase vocoder: the tricks of the trade
A-DAFx: Adaptive Digital Audio Effects
Digital audio effects are most of the time non-adaptive: they are applied with the same control values during the whole sound. Adaptive digital audio effects are controlled by features extracted from the sound itself. This means that both a time-frequency feature extraction and a mapping from these features to the effect parameters are needed. In this way, the usual DAFx class is extended to a wider class, the adaptive DAFx one. Four A-DAFx are proposed in this paper, based on the phase vocoder technique: selective time-stretching, adaptive granular delay, adaptive robotization and adaptive whisperization. They provide interesting sounds for electroacoustic and electronic music, with a great coherence between the effect and the original sound.
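A minimal sketch of the adaptive-control chain described above (feature extraction, then mapping, then effect), not one of the four phase-vocoder effects of the paper: the frame-wise RMS of the input drives the depth of a simple tremolo. Frame sizes, the mapping law and the effect itself are illustrative assumptions.

```python
import numpy as np

def frame_rms(x, frame=1024, hop=512):
    """Feature extraction: one RMS value per analysis frame."""
    n_frames = 1 + max(0, (len(x) - frame) // hop)
    return np.array([np.sqrt(np.mean(x[i * hop:i * hop + frame] ** 2))
                     for i in range(n_frames)]), hop

def map_feature(f, lo=0.0, hi=0.9):
    """Mapping: normalise the feature and rescale it to the parameter range."""
    f = (f - f.min()) / (f.max() - f.min() + 1e-12)
    return lo + (hi - lo) * f

def adaptive_tremolo(x, sr, rate_hz=5.0):
    feat, hop = frame_rms(x)
    depth = map_feature(feat)                          # effect parameter per frame
    depth_n = np.interp(np.arange(len(x)), np.arange(len(feat)) * hop, depth)
    lfo = 0.5 * (1.0 + np.sin(2 * np.pi * rate_hz * np.arange(len(x)) / sr))
    return x * (1.0 - depth_n * lfo)                   # louder input -> deeper tremolo

if __name__ == "__main__":
    sr = 22050
    t = np.arange(2 * sr) / sr
    x = np.sin(2 * np.pi * 220 * t) * np.linspace(0.1, 1.0, 2 * sr)   # crescendo
    y = adaptive_tremolo(x, sr)
    print(y.shape)
```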
Gestural Strategies for Specific Filtering Processes
The gestural control of filters implies the definition of these filters and of the way to activate them with gesture. We give here the example of several real “virtual instruments” which rely on this gestural control. In this way we show that making music is different from producing algorithms, and that a good gestural control may substitute for, or at least complement, a complex scheme using digital audio effects in real-time implementations [1].
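A minimal sketch of the kind of gesture-to-filter mapping such virtual instruments rely on, under assumed names and ranges: a one-dimensional gesture stream (here a plain array; in the paper it would come from a real sensor) is mapped exponentially to the cutoff of a low-pass filter, and the sound is processed block by block.

```python
import numpy as np
from scipy.signal import butter, lfilter

def gesture_to_cutoff(g, lo=200.0, hi=5000.0):
    """Map a normalised gesture value in [0, 1] to a cutoff frequency in Hz."""
    return lo * (hi / lo) ** np.clip(g, 0.0, 1.0)       # exponential (pitch-like) mapping

def gestural_lowpass(x, sr, gesture, block=512):
    """gesture: one normalised control value per block (e.g. from a fader)."""
    y = np.zeros_like(x)
    zi = np.zeros(2)                                    # state of the 2nd-order filter
    for k in range(0, len(x), block):
        g = gesture[min(k // block, len(gesture) - 1)]
        b, a = butter(2, gesture_to_cutoff(g) / (sr / 2))   # new filter per block
        y[k:k + block], zi = lfilter(b, a, x[k:k + block], zi=zi)
    return y

if __name__ == "__main__":
    sr = 22050
    x = np.random.randn(2 * sr)
    sweep = np.linspace(0.0, 1.0, (2 * sr) // 512 + 1)  # a slow "gesture" opening the filter
    y = gestural_lowpass(x, sr, sweep)
    print(y.shape)
```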
Implementation Strategies for Adaptive Digital Audio Effects
Adaptive digital audio effects require several implementations, according to the context. This paper presents a general adaptive DAFx diagram, using one or two input sounds and gestural control of the mapping. Effects are classified according to the perceptual parameters that they modify. New adaptive effects are presented, such as martianization and vowel colorization. Several points are highlighted, such as the specific problems of real-time and non-real-time implementation, improvements obtained with control curve scaling, and solutions to particular problems, like quantization methods for delay-line based effects. Musical applications are pointed out as illustrations.
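A minimal sketch of what a control-curve scaling stage can look like, as one possible reading of the improvement mentioned above: a raw feature curve is normalised, warped and rescaled to the range accepted by an effect parameter. The warping law and bounds are illustrative assumptions, not the specific scalings proposed in the paper.

```python
import numpy as np

def scale_control_curve(feature, out_lo, out_hi, warp=1.0):
    """Map a raw feature curve onto [out_lo, out_hi] with an optional warping exponent."""
    f = np.asarray(feature, dtype=np.float64)
    f = (f - f.min()) / (f.max() - f.min() + 1e-12)    # normalise to [0, 1]
    f = f ** warp                                      # warp < 1 expands low values, > 1 compresses them
    return out_lo + (out_hi - out_lo) * f

if __name__ == "__main__":
    rms = np.abs(np.sin(np.linspace(0, 3, 100)))            # stand-in feature curve
    ratio = scale_control_curve(rms, 0.5, 2.0, warp=0.5)    # e.g. a time-scaling ratio
    print(ratio.min(), ratio.max())
```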
Driving pitch-shifting and time-scaling algorithms with adaptive and gestural techniques
This article intends to demonstrate how a specific digital audio effect can benefit from a proper control, be it from sounds and/or from gesture. When this control comes from sounds, it can be called “adaptive” or “sound-automated”. When this control comes from gesture, it can be called “gesturally controlled”. The audio effects we use for this demonstration are time-scaling and pitch-shifting in the particular contexts of vibrato, prosody change, time unfolding and rhythm change.
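In the spirit of the vibrato context mentioned above, here is a minimal sketch of a periodic pitch modulation obtained by reading the input through a sinusoidally modulated fractional delay line; rates, depths and function names are illustrative assumptions rather than the algorithm driven in the paper.

```python
import numpy as np

def vibrato(x, sr, rate_hz=5.0, depth_ms=3.0, base_delay_ms=10.0):
    n = np.arange(len(x))
    # Delay (in samples) modulated by a slow sinusoid -> periodic pitch deviation
    delay = (base_delay_ms + depth_ms * np.sin(2 * np.pi * rate_hz * n / sr)) * sr / 1000.0
    read_pos = n - delay                               # fractional read position
    i0 = np.floor(read_pos).astype(int)
    frac = read_pos - i0
    i0 = np.clip(i0, 0, len(x) - 1)
    i1 = np.clip(i0 + 1, 0, len(x) - 1)
    return (1.0 - frac) * x[i0] + frac * x[i1]         # linear interpolation

if __name__ == "__main__":
    sr = 22050
    t = np.arange(sr) / sr
    x = np.sin(2 * np.pi * 440 * t)
    y = vibrato(x, sr)
    print(y.shape)
```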
Using Visual Textures for Sonic Textures Production and Control
This work takes place in the framework of a global research effort on the synthesis of sonic textures and its control through gesture-based interaction in a musical practice. In this paper we present different strategies to link visual and sonic textures using similar synthesis processes; the theoretical considerations underlying this problem are first exposed, and several personal realizations, illustrating different approaches to the design of a gesturally controlled audio-visual system, are then described.
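As one possible, deliberately simplified reading of "similar synthesis processes", the sketch below uses the same filtered-noise recipe to produce a one-dimensional sonic texture and a two-dimensional visual texture, so that a single set of control parameters could drive both; it is an illustration, not one of the realizations described in the paper.

```python
import numpy as np

def filtered_noise(shape, smoothing, rng):
    """White noise smoothed by a moving average; works in 1-D (sound) and 2-D (image)."""
    x = rng.standard_normal(shape)
    kernel = np.ones(smoothing) / smoothing
    for axis in range(x.ndim):                         # separable smoothing along each axis
        x = np.apply_along_axis(np.convolve, axis, x, kernel, mode="same")
    return x / np.max(np.abs(x))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sonic = filtered_noise((22050,), smoothing=16, rng=rng)      # 1 s noisy sonic texture
    visual = filtered_noise((128, 128), smoothing=16, rng=rng)   # matching visual texture
    print(sonic.shape, visual.shape)
```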