Modulation and demodulation of steerable ultrasound beams for audio transmission and rendering
Nonlinear effects in ultrasound propagation can be used to generate highly directive audible sound. To do so, the audio signal is amplitude-modulated onto an ultrasonic carrier and sent to an ultrasound transducer. When played back at a sufficiently high sound pressure level, the ultrasonic signal is self-demodulated by the nonlinear behavior of the medium. The resulting signal has two important characteristics: it is audible, and it retains the directivity of the ultrasonic carrier. In this paper we describe the theoretical advantages of single-sideband (SSB) modulation over a standard amplitude modulation (AM) scheme for this application. We describe our near-field sound-field measurement experiments and propose steering solutions for the array using two types of transducers, piezoelectric and electrostatic, together with the appropriate supporting hardware.
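As a rough illustration of the two modulation schemes compared in the paper, the following Python sketch modulates an audio signal onto an ultrasonic carrier with plain AM and with SSB obtained from the analytic (Hilbert-transformed) signal. The 40 kHz carrier, 192 kHz sample rate, test tone, and modulation index are illustrative assumptions; the sketch does not model the array hardware or the nonlinear self-demodulation in air.

    # Minimal sketch: double-sideband AM vs. single-sideband (SSB) modulation
    # of an audio signal onto an assumed 40 kHz ultrasonic carrier.
    import numpy as np
    from scipy.signal import hilbert

    fs = 192_000                                  # sample rate covering the ultrasonic band
    fc = 40_000                                   # illustrative carrier frequency (Hz)
    t = np.arange(0, 0.05, 1 / fs)
    audio = 0.8 * np.sin(2 * np.pi * 1_000 * t)   # stand-in audio: a 1 kHz tone
    m = 0.9                                       # modulation index

    # Standard AM: carrier plus two symmetric sidebands.
    am = (1.0 + m * audio) * np.cos(2 * np.pi * fc * t)

    # SSB: shift the one-sided (analytic) spectrum of the audio up to the
    # carrier and take the real part, so only a single sideband is transmitted.
    analytic = hilbert(audio)
    ssb = np.real((1.0 + m * analytic) * np.exp(2j * np.pi * fc * t))

Because only one sideband is transmitted, SSB occupies roughly half the ultrasonic bandwidth of the AM signal around the carrier.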
Measuring Sensory Consonance by Auditory Modeling
A current model of pitch perception is based on cochlear filtering followed by periodicity detection. Such a computational model is implemented and then extended to characterise the sensory consonance of pitch intervals. A simple scalar measure of sensory consonance is developed and, to evaluate this perceptually motivated feature extraction, the consonance is computed for musical intervals. The relation of consonance and dissonance to the psychoacoustic notions of roughness and critical bandwidth is discussed.
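A minimal sketch of this kind of front end, assuming a bank of simple band-pass filters as a crude stand-in for cochlear filtering and a summary autocorrelation as the periodicity detector. The scalar printed at the end (height of the strongest common-periodicity peak) is only an illustrative placeholder, not the consonance measure developed in the paper; all frequencies and channel counts are assumptions.

    # Sketch: cochlear-like filterbank + periodicity detection for a pitch interval.
    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 16_000
    t = np.arange(0, 0.2, 1 / fs)

    def interval(f0, ratio):
        """Two simultaneous five-partial complex tones forming a pitch interval."""
        tone = lambda f: sum(np.sin(2 * np.pi * f * k * t) / k for k in range(1, 6))
        return tone(f0) + tone(f0 * ratio)

    def summary_autocorrelation(x, n_channels=30):
        """Band-pass 'cochlear' channels, half-wave rectify, autocorrelate, sum."""
        centers = np.geomspace(100, 4000, n_channels)
        sac = np.zeros(len(x))
        for fc in centers:
            b, a = butter(2, [0.8 * fc / (fs / 2), 1.2 * fc / (fs / 2)], "bandpass")
            y = np.maximum(lfilter(b, a, x), 0.0)          # rectified channel output
            r = np.correlate(y, y, "full")[len(y) - 1:]    # autocorrelation, lag >= 0
            sac += r / (r[0] + 1e-12)
        return sac

    for name, ratio in [("fifth 3:2", 1.5), ("tritone", 2 ** 0.5)]:
        sac = summary_autocorrelation(interval(220.0, ratio))
        score = sac[int(fs / 880):].max() / sac[0]         # illustrative scalar only
        print(name, round(score, 3))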
Interpolation of long gaps in audio signals using the warped Burg's method
This paper addresses the reconstruction of missing samples in audio signals via model-based interpolation schemes. We demonstrate through examples that employing a frequency-warped version of Burg's method is advantageous for the interpolation of long signal gaps. Our experiments show that using frequency warping to focus the modeling on low frequencies allows the order of the autoregressive models to be reduced without degrading the quality of the reconstructed signal. Thus a better balance between quality and computational complexity can be achieved.
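Below is a hedged sketch of one common autoregressive gap-interpolation scheme using Burg's method: fit an AR model on the samples on each side of the gap, extrapolate from both sides, and crossfade. The frequency-warping stage (a first-order allpass transform applied before the AR fit and inverted afterwards) is omitted for brevity, and the model order, gap position, and crossfade are illustrative assumptions rather than the paper's exact interpolator.

    import numpy as np

    def burg_ar(x, order):
        """Burg AR coefficients a with a[0] = 1 (prediction: x[n] ~ -a[1:] . past)."""
        a = np.array([1.0])
        f, b = np.asarray(x, float).copy(), np.asarray(x, float).copy()
        for _ in range(order):
            fp, bp = f[1:], b[:-1]
            k = -2.0 * np.dot(fp, bp) / (np.dot(fp, fp) + np.dot(bp, bp))
            a = np.concatenate([a, [0.0]]) + k * np.concatenate([a, [0.0]])[::-1]
            f, b = fp + k * bp, bp + k * fp
        return a

    def extrapolate(context, a, n):
        """Run the AR model forward from 'context' to predict n new samples."""
        p = len(a) - 1
        buf = list(context[-p:])
        out = []
        for _ in range(n):
            nxt = -np.dot(a[1:], buf[::-1])   # most recent sample weighted by a[1]
            out.append(nxt)
            buf = buf[1:] + [nxt]
        return np.array(out)

    def interpolate_gap(signal, start, length, order=40):
        """Fill signal[start:start+length] by crossfading forward/backward AR runs."""
        pre, post = signal[:start], signal[start + length:]
        fwd = extrapolate(pre, burg_ar(pre, order), length)
        bwd = extrapolate(post[::-1], burg_ar(post[::-1], order), length)[::-1]
        w = np.linspace(1.0, 0.0, length)      # linear crossfade weights
        out = signal.copy()
        out[start:start + length] = w * fwd + (1 - w) * bwd
        return out

    # Toy usage: damage a decaying sinusoid and reconstruct the gap.
    fs = 8_000
    t = np.arange(0, 0.5, 1 / fs)
    clean = np.exp(-3 * t) * np.sin(2 * np.pi * 440 * t)
    damaged = clean.copy()
    damaged[1500:1900] = 0.0
    restored = interpolate_gap(damaged, 1500, 400, order=40)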
Computer Instrument Development and the Composition Process
This text examines computer instrument development and its influence on the composition process. As a preamble to the main discussion, the different types of software for sound generation and transformation are reviewed. The concept of meta-themes is introduced and explored in the context of contemporary music. Two examples of the author's computer music work are used to discuss the complex relationship between software development and composition. The first piece provides an example of this relationship in the context of 'tape' music. The second explores the use of computer instruments in live electroacoustic music. The activities of composition and instrument creation are shown to be at times indistinguishable and mutually dependent.
A Time-Variant Reverberation Algorithm For Reverberation Enhancement Systems
This paper presents a new time-variant reverberation algorithm that can be used in reverberation enhancement systems. In these systems acoustical feedback is always present, and time variance can be used to obtain more gain before instability (GBI). The presented time-variant reverberation algorithm is analyzed and the results of a practical GBI test are presented. The proposed reverberation algorithm has been used successfully in an electro-acoustically enhanced rehearsal room. This particular application is briefly overviewed and other possible applications are discussed.
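A toy sketch of one way to introduce time variance into a reverberator: parallel feedback comb filters whose delay lengths are slowly modulated, which decorrelates the feedback path over time. This is a generic illustration of the principle, not the algorithm proposed in the paper; the delay values, feedback gain, and modulation settings are assumptions.

    import numpy as np

    def time_variant_reverb(x, fs, delays_ms=(29.7, 37.1, 41.1, 43.7),
                            g=0.82, mod_depth_ms=0.3, mod_rate_hz=0.7):
        """Parallel feedback comb filters with slowly modulated fractional delays."""
        out = np.zeros(len(x))
        depth = mod_depth_ms * fs / 1000.0
        for i, d_ms in enumerate(delays_ms):
            base = d_ms * fs / 1000.0
            buf = np.zeros(int(base + depth) + 4)       # ring buffer with headroom
            widx = 0
            phase = 2 * np.pi * i / len(delays_ms)      # stagger the modulators
            for n in range(len(x)):
                # Slowly varying delay length in samples.
                d = base + depth * np.sin(2 * np.pi * mod_rate_hz * n / fs + phase)
                rpos = (widx - d) % len(buf)
                r0 = int(rpos)
                frac = rpos - r0
                delayed = (1 - frac) * buf[r0] + frac * buf[(r0 + 1) % len(buf)]
                y = x[n] + g * delayed                  # feedback comb filter
                buf[widx] = y
                widx = (widx + 1) % len(buf)
                out[n] += delayed / len(delays_ms)
        return out

    # Toy usage: reverberate a short noise burst (the per-sample Python loop is
    # slow but keeps the sketch readable).
    fs = 16_000
    burst = np.concatenate([np.random.randn(400) * 0.1, np.zeros(fs)])
    wet = time_variant_reverb(burst, fs)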
Automating The Design Of Sound Synthesis Techniques Using Evolutionary Methods
Digital sound synthesizers, ubiquitous today in sound cards, software and dedicated hardware, use algorithms (Sound Synthesis Techniques, SSTs) capable of generating sounds similar to those of acoustic instruments and even totally novel sounds. The design of SSTs is a very hard problem. It is usually assumed that it requires human ingenuity to design an algorithm suitable for synthesizing a sound with certain characteristics. Many of the SSTs commonly used are the fruit of experimentation and long refinement processes. An SST is determined by its functional form and internal parameters. Design of SSTs is usually done by selecting a fixed functional form from a handful of commonly used SSTs and applying a parameter estimation technique to find the set of internal parameters that best emulates the target sound. A new approach for automating the design of SSTs is proposed. It uses a set of examples of the desired behavior of the SST in the form of inputs plus target sound. The approach is capable of suggesting novel functional forms and their internal parameters, suited to closely follow the given examples. Design of an SST is stated as a search problem in the SST space (the space spanned by all possible valid functional forms and internal parameters, within certain limits to make it practical). This search is done using evolutionary methods; specifically, Genetic Programming (GP).
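A minimal sketch of the search idea: candidate synthesis techniques are represented as small expression trees of primitive operations and evolved toward a spectral match with a target sound. To stay short, this uses mutation-only evolutionary search rather than full genetic programming with crossover, a toy primitive set, and a crude spectral fitness; all of these are simplifying assumptions, not the system described in the paper.

    import random
    import numpy as np

    FS, DUR = 16_000, 0.25
    T = np.arange(0, DUR, 1 / FS)

    def random_tree(depth=0):
        """Grow a random synthesis expression tree."""
        if depth >= 2 or random.random() < 0.4:
            return ("osc", random.uniform(50, 2000), random.uniform(0.1, 1.0))
        op = random.choice(["add", "mul", "fm"])
        return (op, random_tree(depth + 1), random_tree(depth + 1))

    def render(node):
        """Evaluate a tree into an audio buffer."""
        kind = node[0]
        if kind == "osc":
            _, freq, amp = node
            return amp * np.sin(2 * np.pi * freq * T)
        _, left, right = node
        a, b = render(left), render(right)
        if kind == "add":
            return a + b
        if kind == "mul":
            return a * b
        return np.sin(2 * np.pi * 440 * T + 2.0 * b) * np.clip(a, -1, 1)  # crude FM

    def mutate(node):
        """Randomly perturb parameters or replace a subtree."""
        if random.random() < 0.3:
            return random_tree()
        if node[0] == "osc":
            _, f, a = node
            return ("osc", f * random.uniform(0.9, 1.1), a * random.uniform(0.9, 1.1))
        return (node[0], mutate(node[1]), mutate(node[2]))

    def fitness(node, target_spec):
        """Negative spectral distance between rendered sound and target."""
        spec = np.abs(np.fft.rfft(render(node)))
        spec /= spec.max() + 1e-9
        return -np.mean((spec - target_spec) ** 2)

    # Target: a simple two-partial sound the search should approximate.
    target = np.sin(2 * np.pi * 440 * T) + 0.5 * np.sin(2 * np.pi * 880 * T)
    target_spec = np.abs(np.fft.rfft(target))
    target_spec /= target_spec.max()

    population = [random_tree() for _ in range(30)]
    for generation in range(40):
        population.sort(key=lambda n: fitness(n, target_spec), reverse=True)
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]
    best = max(population, key=lambda n: fitness(n, target_spec))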
Multipitch Estimation of Quasi-Harmonic Sounds in Colored Noise
This paper proposes a new multipitch estimator based on a likelihood maximization principle. For each tone, a sinusoidal model is assumed with a colored, Moving-Average background noise and an autoregressive spectral envelope for the overtones. A monopitch estimator is derived following a Weighted Maximum Likelihood principle; it finds the fundamental frequency (F0) that jointly flattens the noise spectrum and the sinusoidal spectrum as much as possible. The multipitch estimator is obtained by extending the method to jointly estimate multiple F0s. An application to piano tones is presented, which takes into account the inharmonicity of the overtone series for this instrument.
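For intuition, here is a plain harmonic-matching sketch that scores candidate F0s against a magnitude spectrum using the stiff-string partial model f_k = k*F0*sqrt(1 + B*k^2), which captures the piano inharmonicity mentioned above. It is only a simplified stand-in, not the Weighted Maximum Likelihood / spectral-flattening estimator of the paper; the inharmonicity coefficient, tolerance, and candidate grid are assumptions.

    import numpy as np

    def score_f0(spectrum, freqs, f0, B=1e-4, n_partials=12, tol_hz=15.0):
        """Sum spectral magnitude near each predicted (inharmonic) partial."""
        total = 0.0
        for k in range(1, n_partials + 1):
            fk = k * f0 * np.sqrt(1.0 + B * k * k)
            mask = np.abs(freqs - fk) < tol_hz
            if mask.any():
                total += spectrum[mask].max()
        return total

    def estimate_f0(x, fs, candidates=np.arange(60.0, 1000.0, 0.5)):
        """Pick the candidate F0 whose inharmonic partial series best matches x."""
        spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        scores = [score_f0(spec, freqs, f0) for f0 in candidates]
        return candidates[int(np.argmax(scores))]

    # Usage (hypothetical): f0 = estimate_f0(piano_frame, fs) for a windowed frame.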
Inharmonic Sound Spectral Modeling by Means of Fractal Additive Synthesis
In previous editions of the DAFX [1, 2] we presented a method for the analysis and resynthesis of voiced sounds, i.e., sounds with a well-defined pitch and harmonic-peak spectra. In a subsequent paper [3] we called the method Fractal Additive Synthesis (FAS). The main point of FAS is to provide two different models for representing the deterministic and the stochastic components of voiced sounds, respectively. This allows one to represent and reproduce voiced sounds without losing the noisy components and stochastic elements present in real-life sounds. These components are important for a synthetic sound to be perceived as a natural one. The topic of this paper is the extension of the technique to inharmonic sounds. We can apply the method to sounds produced by percussion instruments such as gongs, timpani, or tubular bells, as well as to sounds with an expanded quasi-harmonic spectrum, such as piano sounds.
Analysis and Correction of Maps Dataset
Automatic music transcription (AMT) is the process of converting a music audio signal into a symbolic music representation. The MIDI Aligned Piano Sounds (MAPS) dataset was established in 2010 and is the most widely used benchmark dataset for automatic piano music transcription. In this paper, error screening is carried out with algorithmic strategies, and three data annotation problems are found in ENSTDkCl, the subset of MAPS usually used for algorithm evaluation: (1) 342 MIDI annotation deviation errors; (2) 803 unplayed-note errors; (3) 1613 errors caused by slow note-starting processes. After algorithmic correction and manual confirmation, the corrected dataset is released. Finally, the better-performing Google model and our model are evaluated on the corrected dataset. The F-measures are 85.94% and 85.82%, respectively, both improved compared with the original dataset, which shows that the correction of the dataset is meaningful.
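A hedged sketch of one possible screening check in the spirit of the paper: flag annotated notes whose MIDI onset is far from any onset detected in the audio. This is an illustrative stand-in, not the authors' exact screening algorithm; the file paths and the 50 ms tolerance are assumptions.

    import librosa
    import numpy as np
    import pretty_midi

    def screen_onset_deviation(wav_path, midi_path, tol=0.05):
        """Return annotated notes whose onset deviates from any detected audio onset."""
        y, sr = librosa.load(wav_path, sr=None, mono=True)
        detected = librosa.onset.onset_detect(y=y, sr=sr, units="time")
        suspects = []
        for note in pretty_midi.PrettyMIDI(midi_path).instruments[0].notes:
            deviation = np.min(np.abs(detected - note.start)) if len(detected) else np.inf
            if deviation > tol:
                suspects.append((note.pitch, note.start, float(deviation)))
        return suspects

    # Example (hypothetical paths into the ENSTDkCl subset):
    # flagged = screen_onset_deviation("MAPS_MUS-xxx_ENSTDkCl.wav",
    #                                  "MAPS_MUS-xxx_ENSTDkCl.mid")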
A Maximum Likelihood Approach to Blind Audio De-Reverberation
Blind audio de-reverberation is the problem of removing reverberation from an audio signal without explicit knowledge of the system and/or the input signal. It is a more difficult signal-processing task than ordinary de-reverberation based on deconvolution. In this paper, different blind de-reverberation algorithms derived from kurtosis maximization and from a maximum likelihood approach are analyzed and implemented.
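As an illustration of the kurtosis-maximization idea, here is a time-domain sketch of an adaptive FIR filter whose taps are updated by gradient ascent on a running estimate of the output kurtosis, a common online formulation. The step size, filter length, and smoothing constant are illustrative, and this full-band sketch covers only one of the algorithm families analyzed in the paper.

    import numpy as np

    def kurtosis_max_filter(x, taps=256, mu=1e-4, beta=0.99):
        """Adapt an FIR filter w to maximize kurt(y), with y[n] = w . x[n-taps+1..n]."""
        w = np.zeros(taps)
        w[0] = 1.0                       # start from an identity (pass-through) filter
        m2, m4 = 1.0, 3.0                # running estimates of E[y^2] and E[y^4]
        y = np.zeros(len(x))
        for n in range(taps, len(x)):
            frame = x[n - taps + 1:n + 1][::-1]      # most recent sample first
            y[n] = np.dot(w, frame)
            m2 = beta * m2 + (1 - beta) * y[n] ** 2
            m4 = beta * m4 + (1 - beta) * y[n] ** 4
            # Gradient of the kurtosis E[y^4]/E[y^2]^2 w.r.t. y[n], chained to w.
            grad_y = 4.0 * (m2 * y[n] ** 3 - m4 * y[n]) / (m2 ** 3 + 1e-12)
            w += mu * grad_y * frame
            w /= np.linalg.norm(w) + 1e-12           # keep the filter norm bounded
        return y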