Simpl: A Python Library For Sinusoidal Modelling
Two‐Dimensional Fourier Processing of Rasterised Audio
There is continuous research effort into the expansion and refinement of transform techniques for audio signal processing, yet the two-dimensional Fourier transform has seldom been applied to audio. This is probably because, unlike images, audio does not readily admit a 2D transform: a signal mapping is first required to obtain a two-dimensional representation. However, the 2D Fourier transform opens up potential for new or improved analysis and transformation of audio. In this paper, raster scanning is used to provide a simple mapping between one- and two-dimensional representations. This allows initial experimentation with the 2D Fourier transform, in which the 2D spectrum can be observed. A straightforward method is used to display the spectral data as a colour image. The transform gives information on two frequency axes, one in the typical audible frequency range and the other in the low-frequency rhythmic range. This representation makes rhythmic modulations in the signal easier to observe. Some novel audio transformations are presented, allowing manipulation of rhythmic frequency content. The techniques developed using the 2D Fourier transform allow interaction with audio in a new domain, both analytically and creatively. This work shows how two common signal processing mechanisms can be combined to exciting effect for audio applications.
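The raster-scan mapping described above can be sketched in a few lines (a hypothetical illustration, not the authors' implementation): the signal is cut into fixed-length scanlines stacked as image rows, and a 2D FFT of that "image" then yields one axis of audible frequencies (within a scanline) and one axis of slow, rhythmic-rate frequencies (across scanlines).

```python
import numpy as np

def rasterise_2d_spectrum(signal, scanline_len):
    """Raster-scan a 1-D signal into rows and take its 2-D FFT.

    Hypothetical sketch: `scanline_len` samples per row, so the
    horizontal FFT axis resolves audible frequencies while the
    vertical axis resolves slow modulations from row to row.
    """
    n_rows = len(signal) // scanline_len
    raster = np.reshape(signal[:n_rows * scanline_len],
                        (n_rows, scanline_len))
    return np.fft.fft2(raster)

# Example: an amplitude-modulated sine. The 4 Hz modulation is far
# below the audible band but shows up on the row-to-row axis.
sr = 8000
t = np.arange(sr) / sr
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 440 * t)
spectrum = rasterise_2d_spectrum(x, scanline_len=80)
print(spectrum.shape)  # (100, 80)
```

Displaying `np.abs(spectrum)` as a colour image gives the kind of 2D spectral view the abstract describes.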
Performance of Source Spatialization and Source Localization Algorithms Using Conjoint Models of Interaural Level and Time Cues
In this paper, we describe a head model based on interaural cues (i.e. interaural level differences and interaural time differences). Based on this model, we proposed, in previous works, a binaural source spatialization method (SSPA), which we extended to a multispeaker spatialization technique that operates on a speaker array in a pairwise fashion (MSPA) [1], [2]. Here, we evaluate these spatialization techniques and compare them to well-known methods such as VBAP (Vector Base Amplitude Panning) [3]. We also test the robustness of an adapted conjoint localization method under noisy and reverberant conditions; this method uses the spectra of recorded binaural signals and minimizes the distance between the ILD- and ITD-based azimuth estimates. We show comparative results with the PHAT generalized cross-correlation localization method [4].
Using tensor factorisation models to separate drums from polyphonic music
This paper describes the use of Non-negative Tensor Factorisation models for the separation of drums from polyphonic audio. Improved separation of the drums is achieved through the incorporation of Gamma Chain priors into the Non-negative Tensor Factorisation framework. In contrast to many previous approaches, the method used in this paper requires little or no pre-training or use of drum templates. The utility of the technique is shown on real-world audio examples.
Sound synthesis using an allpass filter chain with audio‐rate coefficient modulation
This paper describes a sound synthesis technique that modulates the coefficients of allpass filter chains using audio-rate frequencies. It was found that modulating a single allpass filter section produces a feedback AM–like spectrum, and that its bandwidth is extended and further processed by non-sinusoidal FM when the sections are cascaded. The cascade length parameter provides dynamic bandwidth control to prevent upper range aliasing artifacts, and the amount of spectral content within that band can be controlled using a modulation index parameter. The technique is capable of synthesizing rich and evolving timbres, including those resembling classic virtual analog waveforms. It can also be used as an audio effect with pitch-tracked input sources. Software and sound examples are available at http://www.acoustics.hut.fi/publications/papers/dafx09-cm/
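The core idea above can be sketched as follows (a minimal illustration of audio-rate coefficient modulation in general, not the authors' exact algorithm or parameterisation): each first-order allpass section follows the difference equation y[n] = a·x[n] + x[n-1] - a·y[n-1], with the coefficient a driven sample by sample by a modulator scaled by a modulation-index parameter, and sections cascaded to widen the bandwidth.

```python
import numpy as np

def cm_allpass_chain(x, mod, n_sections=4, index=0.9):
    """First-order allpass chain with audio-rate coefficient modulation.

    Hypothetical sketch: the allpass coefficient of every section
    follows `index * mod[n]` at each sample. `n_sections` plays the
    role of a bandwidth control and `index` the modulation depth.
    """
    y = np.copy(x)
    for _ in range(n_sections):
        out = np.zeros_like(y)
        x1 = 0.0  # previous section input
        y1 = 0.0  # previous section output
        for n in range(len(y)):
            a = index * mod[n]              # time-varying coefficient
            out[n] = a * y[n] + x1 - a * y1  # first-order allpass
            x1, y1 = y[n], out[n]
        y = out
    return y

sr = 44100
t = np.arange(sr // 10) / sr
carrier = np.sin(2 * np.pi * 220 * t)
modulator = np.sin(2 * np.pi * 440 * t)   # audio-rate modulator
out = cm_allpass_chain(carrier, modulator, n_sections=6)
```

Keeping `index` below 1 keeps each section's coefficient inside the usual allpass stability region.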
Acoustic rendering of particle‐based simulation of liquids in motion
This paper presents an approach to the synthesis of acoustic emission due to liquids in motion. First, the models for describing the liquid motion, based on a particle-based fluid dynamics representation, and for the acoustic emission are described, along with the criteria for controlling the audio algorithms through the parameters of the particle system. Then, experimental results are discussed for a configuration representing a liquid volume falling into an underlying rigid container.
Trans-synthesis System for Polyphonic Musical Recordings of Bowed-String Instruments
A system is proposed that analyzes polyphonic musical recordings of bowed-string instruments, extracts synthesis parameters for each individual instrument, and then re-synthesizes the recording. In the analysis part, multiple-F0 estimation and partial tracking are performed based on a modified WGCDV (weighted greatest common divisor and vote) method and a high-order HMM. A dynamic time warping algorithm is then employed to align these results with the score, improving the accuracy of the extracted parameters. In the re-synthesis part, simple additive synthesis is employed. One can then experiment with changing timbres, pitches and so on, or with adding vibrato or other effects to the same piece of music.
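The additive re-synthesis step can be sketched generically (a hypothetical illustration, not the paper's code): given per-frame frequency and amplitude tracks for each partial, the phase of each partial is accumulated so its frequency can vary over time, and the weighted sinusoids are summed.

```python
import numpy as np

def additive_resynthesis(freqs, amps, sr=44100):
    """Re-synthesise a sound from partial tracks by additive synthesis.

    Hypothetical sketch: `freqs` and `amps` are (n_partials, n_samples)
    arrays of partial frequencies (Hz) and amplitudes, one value per
    output sample. Phase accumulation lets frequency vary over time,
    which is what makes effects like vibrato possible.
    """
    phase = 2 * np.pi * np.cumsum(freqs, axis=1) / sr
    return np.sum(amps * np.sin(phase), axis=0)

# Example: three harmonics of 220 Hz with a slight 5 Hz vibrato.
sr = 44100
n = sr // 2
vib = 1 + 0.01 * np.sin(2 * np.pi * 5 * np.arange(n) / sr)
freqs = np.outer([220.0, 440.0, 660.0], vib)
amps = np.outer([1.0, 0.5, 0.25], np.ones(n))
y = additive_resynthesis(freqs, amps, sr)
print(y.shape)  # (22050,)
```

Editing `freqs` or `amps` before re-synthesis is where the timbre, pitch, and vibrato manipulations mentioned in the abstract would take place.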
Alias-free Virtual Analog Oscillators Using a Feedback Delay Loop
The rich spectra of the classic waveforms (sawtooth, square, and triangle) are produced by discontinuities in the waveforms or their derivatives. At the same time, these discontinuities lead to aliasing when the waveforms are generated digitally. To remove or reduce the aliasing, researchers have proposed various methods, mostly based on limiting bandwidth or smoothing the waveforms. This paper introduces a new approach to generating virtual analog oscillators with no aliasing. The approach relies on generating an impulse train using a feedback delay loop, a structure often used in the physical modeling of musical instruments. The classic waveforms are then derived from the impulse train with a leaky integrator. Although the output generated by this method is not exactly periodic, it sounds perceptually harmonic. While additional processing is required for time-varying pitch shifting, resulting in some high-frequency attenuation when the pitch changes, the proposed method is computationally more efficient than other algorithms and the high-frequency attenuation can also be adjusted.
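A minimal sketch of the two building blocks named above, under simplifying assumptions (fixed integer period, no fractional delay or pitch modulation, and details that certainly differ from the paper's design): a delay line of one period recirculates a seed impulse to form an impulse train, and removing the DC component before a leaky integrator turns the staircase into a sawtooth-like wave.

```python
import numpy as np

def delay_loop_sawtooth(period, n_samples, feedback=0.9999, leak=0.999):
    """Sawtooth-like waveform from a feedback delay loop.

    Hypothetical sketch: a delay line of `period` samples with
    feedback gain just below 1 recirculates a single impulse,
    producing a decaying impulse train; subtracting its DC level
    and leaky-integrating yields a sawtooth-like ramp per period.
    """
    delay = np.zeros(period)
    delay[0] = 1.0                          # seed impulse
    impulses = np.zeros(n_samples)
    for n in range(n_samples):
        out = delay[n % period]
        impulses[n] = out
        delay[n % period] = out * feedback  # recirculate
    # Leaky integrator on the DC-compensated impulse train.
    y = np.zeros(n_samples)
    acc = 0.0
    dc = 1.0 / period                       # mean of the impulse train
    for n in range(n_samples):
        acc = leak * acc + (impulses[n] - dc)
        y[n] = acc
    return y

saw = delay_loop_sawtooth(period=100, n_samples=44100)  # ~441 Hz at 44.1 kHz
```

Because the impulse train comes out of a delay loop rather than a closed-form formula, the output is not exactly periodic, which matches the abstract's remark.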
Beat-Marker Location using a Probabilistic Framework and Linear Discriminant Analysis
This paper deals with the problem of beat tracking in an audio file. Taking time-variable tempo and meter estimates as input, we study two beat-tracking approaches. The first is based on an adaptation of a method used in speech processing for locating Glottal Closure Instants. The results obtained with this first approach allow us to derive a set of requirements for a robust approach. The second approach is based on a probabilistic framework, in which the beat-tracking problem is formulated as an “inverse” Viterbi decoding problem: we decode times over beat numbers according to observation and transition probabilities. A beat template is used to derive the observation probabilities from the signal. For this task, we propose the use of a machine-learning method, Linear Discriminant Analysis, to estimate the most discriminative beat template. We finally propose a set of measures to evaluate the performance of a beat-tracking algorithm and perform a large-scale evaluation of the two approaches on four different test sets.
Real-Time Beat-Synchronous Analysis of Musical Audio
In this paper we present a model for beat-synchronous analysis of musical audio signals. Introducing a real-time beat-tracking model with performance comparable to offline techniques, we discuss its application to the analysis of musical performances segmented by beat. We discuss the various design choices for beat-synchronous analysis and their implications for real-time implementations before presenting some beat-synchronous harmonic analysis examples. We make our beat tracker and beat-synchronous analysis techniques available as externals for Max/MSP.