Scalable Spectral Reflections In Conic Sections
The object of this project is to present a novel digital audio effect based on a real-time, windowed block-based FFT and inverse FFT. The effect is achieved by mirroring the spectrum, producing a sound ranging from a purer rendition of the original, through a rougher one, to a sound no longer recognisable as the original. A mirror taking the shape of a conic section is constructed between certain partials, and the modified spectrum is created by reflecting the original spectrum in this mirror. The user can select the mirror type and continuously vary the amount of curvature, typically ‘roughening’ the input sound quite gratifyingly. We demonstrate the system with live real-time audio via microphone.
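As a rough illustration of the idea only (not the paper's algorithm), the sketch below reflects a windowed FFT spectrum about a single fixed ‘mirror’ bin – a flat mirror rather than the conic-section mirror described above – and resynthesises by overlap-add. The function name and all parameter choices are our own assumptions.

```python
import numpy as np

def mirror_spectrum_effect(x, frame=1024, hop=512, mirror_bin=64):
    """Windowed block FFT, reflect the spectrum about mirror_bin,
    inverse FFT, overlap-add. (Simplified: a flat spectral mirror;
    the paper's conic-section mirror and curvature control are not
    reproduced here.)"""
    win = np.hanning(frame)
    y = np.zeros(len(x) + frame)
    for start in range(0, len(x) - frame + 1, hop):
        X = np.fft.rfft(x[start:start + frame] * win)
        # bin k takes its value from the bin mirrored about mirror_bin
        idx = np.clip(2 * mirror_bin - np.arange(len(X)), 0, len(X) - 1)
        y[start:start + frame] += np.fft.irfft(X[idx], n=frame) * win
    return y[:len(x)]
```

Varying `mirror_bin` (or, in the paper, the mirror's curvature) moves partials around the reflection point, which is what ‘roughens’ the timbre.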
Gesturally-Controlled Digital Audio Effects
This paper presents a detailed analysis of the acoustic effects of the movements of single-reed instrument performers under specific recording conditions. These effects are shown to result mostly from the difference between the time of arrival of the direct sound and that of the first reflection, creating a kind of phasing or flanging effect. Contrary to commercial flangers – where delay values are set by an LFO (low-frequency oscillator) waveform – the amount of delay in a recording of an acoustic instrument is a function of the position of the instrument with respect to the microphone. We show that for standard recordings of a clarinet, continuous delay variations from 2 to 5 ms are possible, producing a naturally controlled effect.
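A minimal sketch of this effect (our own illustration, not the paper's implementation): the direct sound is summed with one delayed reflection, where the delay may vary per sample – driven by performer position rather than an LFO. The function name, gain, and linear-interpolation fractional delay are all assumptions.

```python
import numpy as np

def position_flanger(x, delay_ms, sr=44100, reflection_gain=0.7):
    """Direct sound plus a single delayed reflection (a comb filter).
    delay_ms may be a scalar or a per-sample array, e.g. sweeping
    2-5 ms as the instrument moves relative to the microphone."""
    n = np.arange(len(x))
    delay = np.broadcast_to(np.asarray(delay_ms, float), n.shape) * sr / 1000.0
    pos = n - delay                       # fractional read position
    i = np.floor(pos).astype(int)
    frac = pos - i
    valid = i >= 0                        # before the signal starts: silence
    i0 = np.clip(i, 0, len(x) - 1)
    i1 = np.clip(i + 1, 0, len(x) - 1)
    d = np.zeros_like(x)
    d[valid] = (1 - frac[valid]) * x[i0[valid]] + frac[valid] * x[i1[valid]]
    return x + reflection_gain * d
```

Passing a slowly varying array for `delay_ms` (e.g. ramping from 2 to 5 ms) reproduces the naturally controlled sweep described in the abstract.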
Compositional Use Of Digital Audio Effects
This article traces composers’ desire for increased control over continuous musical variables through examples from the music of the last few centuries. Examples are given of the development of notation and orchestration. A special focus is placed on electroacoustic music as a natural continuation of this development, and typical types of timbral development and structural discourse are brought forward in an attempt to explain which parameters composers of electroacoustic music consider in their work.
Separation Of Speech Signal From Complex Auditory Scenes
The hearing system, even when faced with complex auditory scenes and unfavourable conditions, is able to separate and recognize auditory events accurately. A great deal of effort has gone into understanding how, after capturing the acoustical data, the human auditory system processes them. The aim of this work is the digital implementation of the decomposition of a complex sound into the separate parts it would appear to have for a listener. This operation is called signal separation. In this work, the separation of speech signals from complex auditory scenes has been studied, and the techniques that address this problem have been evaluated experimentally.
A Time-Variant Reverberation Algorithm For Reverberation Enhancement Systems
This paper presents a new time-variant reverberation algorithm that can be used in reverberation enhancement systems. In these systems, acoustical feedback is always present, and time variance can be used to obtain more gain before instability (GBI). The presented time-variant reverberation algorithm is analyzed, and results of a practical GBI test are presented. The proposed reverberation algorithm has been used successfully in an electro-acoustically enhanced rehearsal room. This particular application is briefly overviewed and other possible applications are discussed.
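To illustrate the principle (this is our own minimal stand-in, not the paper's algorithm), time variance can be as simple as slowly modulating the delay length inside a feedback path, which decorrelates the feedback loop and raises the gain achievable before instability. All names and parameter values below are assumptions.

```python
import numpy as np

def time_variant_comb(x, sr=44100, base_delay=0.030, mod_depth=0.0005,
                      mod_rate=0.3, feedback=0.7):
    """A single feedback comb filter whose delay length is slowly
    modulated by a sine - a toy model of time variance in a
    reverberation enhancement feedback loop."""
    y = np.zeros_like(x)
    maxd = int((base_delay + mod_depth) * sr) + 2
    buf = np.zeros(maxd)                   # circular delay buffer
    w = 0                                  # write index
    for n in range(len(x)):
        d = (base_delay + mod_depth * np.sin(2 * np.pi * mod_rate * n / sr)) * sr
        r = (w - d) % maxd                 # fractional read position
        i = int(r)
        frac = r - i
        delayed = (1 - frac) * buf[i] + frac * buf[(i + 1) % maxd]
        y[n] = x[n] + feedback * delayed
        buf[w] = y[n]
        w = (w + 1) % maxd
    return y
```

With `feedback < 1` the loop stays stable; in a real enhancement system the modulation is what allows the loop gain to be pushed closer to (and the perceived reverberance beyond) the static instability limit.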
Automating The Design Of Sound Synthesis Techniques Using Evolutionary Methods
Digital sound synthesizers, ubiquitous today in sound cards, software and dedicated hardware, use algorithms (Sound Synthesis Techniques, SSTs) capable of generating sounds similar to those of acoustic instruments, and even totally novel sounds. The design of SSTs is a very hard problem. It is usually assumed that it requires human ingenuity to design an algorithm suitable for synthesizing a sound with certain characteristics. Many of the SSTs commonly used are the fruit of experimentation and long refinement processes. An SST is determined by its functional form and internal parameters. Design of SSTs is usually done by selecting a fixed functional form from a handful of commonly used SSTs, and applying a parameter estimation technique to find the set of internal parameters that best emulates the target sound. A new approach for automating the design of SSTs is proposed. It uses a set of examples of the desired behavior of the SST in the form of inputs + target sound. The approach is capable of suggesting novel functional forms and their internal parameters, suited to follow the given examples closely. Design of an SST is stated as a search problem in the SST space (the space spanned by all possible valid functional forms and internal parameters, within certain limits to make it practical). This search is done using evolutionary methods; specifically, Genetic Programming (GP).
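The search framing can be caricatured as follows. Note this sketch is a plain genetic algorithm over a tiny fixed menu of functional forms, not the paper's Genetic Programming over open-ended expression trees; the form names, fitness measure (spectral distance), and mutation scheme are all our own assumptions.

```python
import random
import numpy as np

SR, N = 8000, 1024
t = np.arange(N) / SR

# A tiny stand-in for the SST space: a few functional forms,
# each parameterised by (freq1, freq2, amount).
FORMS = {
    "sine":     lambda p: np.sin(2 * np.pi * p[0] * t),
    "fm":       lambda p: np.sin(2 * np.pi * p[0] * t
                                 + p[2] * np.sin(2 * np.pi * p[1] * t)),
    "additive": lambda p: np.sin(2 * np.pi * p[0] * t)
                          + p[2] * np.sin(2 * np.pi * p[1] * t),
}

def fitness(candidate, target):
    """Negative squared distance between magnitude spectra."""
    form, params = candidate
    spec = np.abs(np.fft.rfft(FORMS[form](params)))
    return -np.sum((spec - np.abs(np.fft.rfft(target))) ** 2)

def evolve(target, pop_size=30, gens=40, seed=0):
    rng = random.Random(seed)
    def rand_ind():
        return (rng.choice(list(FORMS)),
                [rng.uniform(50, 2000), rng.uniform(50, 2000),
                 rng.uniform(0, 5)])
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, target), reverse=True)
        survivors = pop[:pop_size // 2]             # selection
        children = [(form, [v * (1 + rng.gauss(0, 0.05)) for v in params])
                    for form, params in survivors]  # mutation
        pop = survivors + children
    return max(pop, key=lambda c: fitness(c, target))
```

GP differs in that the functional form itself is an evolving tree of operators, so crossover and mutation can invent forms outside any fixed menu, which is what lets the approach suggest genuinely novel SSTs.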
Cue Point Processing: An Introduction
Modern digital sound formats such as aiff, mpeg1/2/4/7, wav and ra support the use of cue points. A cue point may also be referred to as a seek point or a key frame. These mechanisms store metadata about the sound files. In this paper, we identify and describe how these formats are encoded and how they carry metadata, with a focus on cue points. Finally, we conclude with the direction of our future research for improving multimedia browsing mechanisms and additional applications by leveraging the use of cue points within those applications.
Keywords: sound file formats, cue points, sound file, audio files, seek point, key frame, audio indexing
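To make the wav case concrete, the sketch below decodes the standard RIFF ‘cue ’ chunk (a 4-byte cue-point count followed by 24 bytes per cue point, per the WAVE format). The helper name and the choice of returned fields are our own; this is an illustration, not the paper's tooling.

```python
import struct

def parse_cue_points(wav_bytes):
    """Walk the RIFF chunks of a WAV file and decode the 'cue ' chunk.
    Each cue point is 24 bytes: dwIdentifier, dwPosition, fccChunk,
    dwChunkStart, dwBlockStart, dwSampleOffset."""
    assert wav_bytes[:4] == b"RIFF" and wav_bytes[8:12] == b"WAVE"
    pos, cues = 12, []
    while pos + 8 <= len(wav_bytes):
        cid, size = struct.unpack("<4sI", wav_bytes[pos:pos + 8])
        body = wav_bytes[pos + 8:pos + 8 + size]
        if cid == b"cue ":
            (count,) = struct.unpack("<I", body[:4])
            for i in range(count):
                ident, position, fcc, cstart, bstart, soff = struct.unpack(
                    "<II4sIII", body[4 + 24 * i:4 + 24 * (i + 1)])
                cues.append({"id": ident, "sample_offset": soff})
        pos += 8 + size + (size & 1)  # RIFF chunks are word-aligned
    return cues
```

A seek operation can then jump straight to `sample_offset` in the `data` chunk, which is the browsing mechanism the abstract has in mind.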