EPS Models of AM-FM Vocoder Output for New Sound Generation
The Phase Vocoder [1] was originally introduced as an alternative approach to speech coding, but has won much greater acceptance among the music community as a tool for both sound analysis and composition [2]. Although dormant for some time, AM-FM speech signal descriptions have seen a resurgence of interest in the last ten years [3], [4]. Building on some of the new ideas proffered, this work first considers their application to musical signals. It then demonstrates how parameterisation of the extracted AM-FM information using EPS (Exponential Polynomial Signal) models allows modification of the individual large- and small-grain features of the AM and FM components, thus providing a new way of generating audio effects.
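The paper's EPS parameterisation is not reproduced here, but the AM-FM description it builds on can be illustrated with a generic decomposition via the analytic signal. The sketch below (a hypothetical `am_fm_decompose` helper in Python, assuming scipy is available) extracts an amplitude envelope and an instantaneous frequency track from a narrowband test tone; it stands in for, and does not replicate, the vocoder front end described in the paper.

```python
import numpy as np
from scipy.signal import hilbert

def am_fm_decompose(x, fs):
    """Split a (narrowband) signal into AM and FM components via the analytic signal."""
    z = hilbert(x)                               # analytic signal x + j*H{x}
    am = np.abs(z)                               # amplitude envelope (AM)
    phase = np.unwrap(np.angle(z))               # unwrapped instantaneous phase
    fm = np.diff(phase) * fs / (2 * np.pi)       # instantaneous frequency in Hz (FM)
    return am, fm

if __name__ == "__main__":
    fs = 8000
    t = np.arange(fs) / fs
    # test tone: 440 Hz carrier with slow amplitude and frequency modulation
    x = (1 + 0.3 * np.sin(2 * np.pi * 2 * t)) * \
        np.sin(2 * np.pi * (440 * t + 5 * np.sin(2 * np.pi * 3 * t)))
    am, fm = am_fm_decompose(x, fs)
    print(am.mean(), fm.mean())   # envelope near 1.0, mean frequency near 440 Hz
```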
Timbre Morphing using the Modal Distribution
We present techniques for timbre morphing between two audio signals based on the Modal distribution, a time-frequency representation of music signals. A signal synthesis method is described which resynthesises signals from Modal distributions; direct resynthesis from the original signal produces a timbre that is almost indistinguishable from the source. A relational graph representation of timbre is used to decide which salient features to morph, and linear interpolation and non-linear warping are applied to perform the morph between Modal distributions.
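The Modal distribution itself is not available in standard signal-processing libraries, so the Python sketch below uses an STFT magnitude surface as a stand-in to show only the linear-interpolation step of the morph; the relational graph of timbre features and the non-linear warping described above are omitted. The function name `morph` and its parameters are illustrative only.

```python
import numpy as np
from scipy.signal import stft, istft

def morph(x1, x2, alpha, fs, nperseg=1024):
    """Crude spectral morph: linearly interpolate two time-frequency magnitude
    surfaces (STFT used here as a stand-in for the Modal distribution) and
    resynthesise with the phase of the first source."""
    n = min(len(x1), len(x2))
    _, _, Z1 = stft(x1[:n], fs, nperseg=nperseg)
    _, _, Z2 = stft(x2[:n], fs, nperseg=nperseg)
    mag = (1 - alpha) * np.abs(Z1) + alpha * np.abs(Z2)   # linear interpolation
    Zm = mag * np.exp(1j * np.angle(Z1))                  # borrow phase from source 1
    _, y = istft(Zm, fs, nperseg=nperseg)
    return y
```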
New SndObj Library Classes for Sinusoidal Modeling
We present an object-oriented implementation of sinusoidal modelling based on the C++ Sound Object Library (SndObj). We outline the background to this analysis/synthesis technique and its inclusion in many well-known methods of speech and music signal processing. The incorporation of such a well-known technique into the SndObj library will enable the development of further audio processing techniques, such as vocoding, time and pitch scaling, and cross-synthesis, on an object-oriented development platform.
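The SndObj classes themselves are C++; as a rough, language-neutral illustration of the analysis/synthesis loop they encapsulate, the Python sketch below performs simple FFT peak picking on one frame and oscillator-bank resynthesis. It is a minimal stand-in (no phase matching, no partial tracking), not the library's API.

```python
import numpy as np

def analyse_frame(frame, fs, n_peaks=20):
    """Pick the strongest spectral peaks of one windowed frame as (freq, amp) pairs."""
    win = np.hanning(len(frame))
    spec = np.fft.rfft(frame * win)
    mag = np.abs(spec)
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    # keep local maxima only, sorted by magnitude
    peaks = [k for k in range(1, len(mag) - 1)
             if mag[k] > mag[k - 1] and mag[k] > mag[k + 1]]
    peaks.sort(key=lambda k: mag[k], reverse=True)
    return [(freqs[k], 2 * mag[k] / np.sum(win)) for k in peaks[:n_peaks]]

def synth_frame(partials, n, fs):
    """Oscillator-bank resynthesis of one frame (zero phase, no interpolation)."""
    t = np.arange(n) / fs
    out = np.zeros(n)
    for f, a in partials:
        out += a * np.cos(2 * np.pi * f * t)
    return out
```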
GUI front-end for spectral warping
This paper describes a software tool developed in the Java language to facilitate time and frequency warping of audio spectra. The application utilises the Java Advanced Image Processing (AIP) API, which contains classes for image manipulation and, in particular, for non-linear warping using polynomial transformations. Warping of spectral representations is fundamental to sound processing techniques such as sound transformation and morphing. Dynamic time warping has been the method of choice in many implementations of temporal and spectral alignment for morphing. This tool offers the advantage of an interactive approach to warping, allowing greater flexibility in achieving a desired transformation. Its output can then be used as input to a signal synthesis routine, which recovers the transformed sound.
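The tool described above relies on the Java imaging classes for its warping; as a rough illustration of the underlying idea, the Python sketch below remaps the frequency axis of a magnitude spectrum through a polynomial. The function `warp_spectrum` and the example coefficients are hypothetical and chosen only for illustration.

```python
import numpy as np

def warp_spectrum(mag, coeffs):
    """Remap the bin axis of a magnitude spectrum through a polynomial.

    `coeffs` are polynomial coefficients (highest power first, numpy.polyval
    convention) mapping each normalised output bin position in [0, 1] to the
    normalised source bin position it reads from."""
    n = len(mag)
    dest = np.linspace(0.0, 1.0, n)                     # normalised output bins
    src = np.clip(np.polyval(coeffs, dest), 0.0, 1.0)   # source positions
    return np.interp(src * (n - 1), np.arange(n), mag)

# example: a mild monotonic cubic warp (coefficients chosen only for illustration)
mag = np.abs(np.fft.rfft(np.random.randn(2048)))
warped = warp_spectrum(mag, [0.2, -0.1, 0.9, 0.0])
```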
Implementing Loudness Models in MATLAB
In the field of psychoacoustic analysis, the goal is to construct a transformation that maps a time waveform into a domain that best captures the response of a human perceiving sound. A key element of such transformations is the mapping between sound intensity in decibels and its actual perceived loudness. A number of different loudness models exist to achieve this mapping. This paper examines implementation strategies for some of the better-known models in the MATLAB software environment.
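The models examined in the paper are not reproduced here, but one building block common to most of them is the standard phon-to-sone conversion, in which loudness doubles for every 10 phon above 40 phon. A minimal sketch follows (in Python rather than MATLAB, using a commonly quoted power-law approximation below 40 phon).

```python
import numpy as np

def phon_to_sone(phon):
    """Map loudness level (phon) to loudness (sone): 40 phon = 1 sone, and
    every additional 10 phon doubles the loudness.  Below 40 phon a commonly
    quoted power-law approximation is used.  Inputs are assumed >= 0 phon."""
    phon = np.clip(np.asarray(phon, dtype=float), 0.0, None)
    return np.where(phon >= 40.0,
                    2.0 ** ((phon - 40.0) / 10.0),
                    (phon / 40.0) ** 2.642)

print(phon_to_sone([40, 50, 60, 70]))   # -> [1. 2. 4. 8.]
```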
Timbral Attributes for Objective Quality Assessment of the Irish Tin Whistle
In this paper we extract various timbral attributes for a variety of Irish tin whistles, and use these attributes to form an objective quality assessment of the instruments. This assessment is compared with the subjective experiences of a number of professional musicians. The timbral attributes are drawn from those developed in the Timbre Model [1].
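The specific attributes of the Timbre Model [1] are described in the paper; as one representative example of the kind of measure involved, the Python sketch below computes a spectral centroid, a standard brightness correlate often used among timbral attributes. It is illustrative only and not taken from the paper.

```python
import numpy as np

def spectral_centroid(x, fs, nfft=4096):
    """Spectral centroid (in Hz) of a signal segment."""
    win = np.hanning(min(len(x), nfft))
    mag = np.abs(np.fft.rfft(x[:len(win)] * win, nfft))
    freqs = np.fft.rfftfreq(nfft, 1 / fs)
    return np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
```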
Alternative analysis-resynthesis approaches for timescale, frequency and other transformations of musical signals
This article presents new spectral analysis-synthesis approaches to musical signal transformation. The analysis methods presented here combine a high-quality frequency estimation technique, the Instantaneous Frequency Distribution (IFD), with partial tracking. We discuss the theory behind the IFD, comparing it to other existing methods, and explain the partial tracking analysis in full. This is followed by a look at the three resynthesis methods proposed in this work, based on different approaches to additive synthesis. A number of transformations of musical signals are proposed to take advantage of the analysis-synthesis techniques. Performance details and specific aspects of the implementation are discussed, complemented by results from the time-stretching of audio signals, where these methods are shown to perform better than many currently available techniques.
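The IFD itself is defined in the article; a closely related and widely used estimator obtains per-bin instantaneous frequencies from the phase increment between successive DFT frames. The Python sketch below illustrates that general idea only and is not the paper's formulation (note that the first frame's estimates are meaningless, since there is no previous phase).

```python
import numpy as np

def phase_diff_ifreq(x, fs, n=1024, hop=256):
    """Per-bin instantaneous-frequency estimates from the phase increment
    between successive DFT frames (an alternative to nominal bin frequencies)."""
    win = np.hanning(n)
    bins = np.arange(n // 2 + 1)
    expected = 2 * np.pi * hop * bins / n           # expected phase advance per hop
    prev_phase = np.zeros(n // 2 + 1)
    frames = []
    for start in range(0, len(x) - n, hop):
        spec = np.fft.rfft(x[start:start + n] * win)
        phase = np.angle(spec)
        dphi = phase - prev_phase - expected
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))    # wrap to principal value
        inst_freq = (expected + dphi) * fs / (2 * np.pi * hop)
        frames.append(inst_freq)
        prev_phase = phase
    return np.array(frames)
```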
CMOS Implementation of an Adaptive Noise Canceller into a Subband Filter
In recent years the demand for mobile communication has increased rapidly. While battery life was one of the main concerns for developers in the early years of mobile phones, speech quality is now becoming one of the most important factors in the development of the next generation of mobile phones. This paper describes the CMOS implementation of an adaptive noise canceller (ANC) within a subband filter. The ANC-subband filter is able to reduce noise components of real speech without prior knowledge of the noise properties. It is well suited to mobile devices, as it uses a very low clock frequency, resulting in low power consumption. This low power consumption, combined with its small physical size, also enables the circuit to be used in hearing aids to efficiently reduce noise contained in the speech signal.
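The paper concerns a hardware (CMOS) realisation; the underlying noise-cancelling behaviour is typically that of an adaptive filter driven by a correlated noise reference. The Python sketch below shows a plain LMS adaptive noise canceller as a software illustration of that principle only; it does not model the paper's subband structure or circuit.

```python
import numpy as np

def lms_anc(primary, reference, n_taps=32, mu=0.01):
    """Adaptive noise canceller: estimate the noise in `primary` from the
    correlated `reference` input and subtract it (standard LMS update)."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for i in range(n_taps - 1, len(primary)):
        x = reference[i - n_taps + 1:i + 1][::-1]   # newest reference sample first
        noise_est = np.dot(w, x)
        e = primary[i] - noise_est                  # error = cleaned speech estimate
        w += 2 * mu * e * x                         # LMS coefficient update
        out[i] = e
    return out
```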
Comparing synthetic and real templates for dynamic time warping to locate partial envelope features
In this paper we compare the performance of a number of different templates for the purpose of split point identification in various clarinet envelopes. These templates were generated from Attack-Decay-Sustain-Release (ADSR) descriptions commonly used in musical synthesis, along with a set of real templates obtained using k-means clustering of manually prepared test data. The goodness of fit of the templates to the data was evaluated using the Dynamic Time Warping (DTW) cost function and the squared distance between the identified split points and the manually identified split points in the test data. It was found that the best templates for split point identification were the synthetic templates, followed by the real templates, with a sharp attack and release, as is characteristic of the clarinet envelope.
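As an illustration of the comparison being made, the Python sketch below computes a classic DTW cost between a template and an observed envelope, together with a hypothetical ADSR-style template; extraction of split points from the warping path, as done in the paper, is omitted.

```python
import numpy as np

def dtw_cost(template, envelope):
    """Classic dynamic time warping cost between two 1-D sequences."""
    n, m = len(template), len(envelope)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (template[i - 1] - envelope[j - 1]) ** 2
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# hypothetical ADSR-style template: linear attack, decay, sustain, release
adsr = np.concatenate([np.linspace(0, 1, 10), np.linspace(1, 0.7, 5),
                       np.full(20, 0.7), np.linspace(0.7, 0, 10)])
```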
Streaming Frequency-Domain DAFx in Csound 5
This article discusses the implementation of frequency-domain digital audio effects using the Csound 5 music programming language and its streaming frequency-domain signal (fsig) framework. Introduced in Csound 4.13 by Richard Dobson, the framework was further extended by Victor Lazzarini in version 5. The latest release of Csound incorporates a variety of new opcodes for different types of spectral manipulation. This article introduces the fsig framework and the analysis and resynthesis unit generators, and describes in detail the different types of spectral DAFx made possible by these new opcodes.
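The fsig opcodes themselves are documented with Csound; as a language-neutral illustration of the streaming analyse-transform-resynthesise pattern they implement, the Python sketch below applies a crude frame-by-frame spectral bin shift with overlap-add, roughly the kind of manipulation a pitch-scaling opcode in this family performs. It is a stand-in, not Csound code, and omits phase correction.

```python
import numpy as np

def stream_pitch_scale(x, ratio, n=1024, hop=256):
    """Frame-by-frame spectral processing in the spirit of a streaming fsig
    chain: analyse, transform (move magnitudes to scaled bins), resynthesise
    with overlap-add.  A crude stand-in for a phase-vocoder pitch scaler."""
    win = np.hanning(n)
    out = np.zeros(len(x) + n)
    for start in range(0, len(x) - n, hop):
        spec = np.fft.rfft(x[start:start + n] * win)
        shifted = np.zeros_like(spec)
        for k in range(len(spec)):
            j = int(k * ratio)
            if j < len(spec):
                shifted[j] += spec[k]          # move bin k to bin k*ratio
        out[start:start + n] += np.fft.irfft(shifted) * win   # overlap-add
    return out[:len(x)]
```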