Musical timbre combfilter-coloration from reflections
Coloration is defined as a change in timbre ("Klangfarbe"). Adding a reflection automatically changes the frequency response of a signal, introducing some coloration, which might be regarded as distortion. However, reflections have been a natural part of sound distribution since the Greek amphitheatres, indicating that some coloration must be acceptable, or even wanted, depending on the type of signal or musical material. The question is: which reflections give disturbing or unwanted coloration? Part 1 gives a general overview of the comb-filter effect for different time delays. Parts 2 and 3 give the main results of a large practical investigation of coloration on orchestra platforms in concert halls. In Parts 4 and 5, these results are compared with psychoacoustic studies on coloration.
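The comb-filter effect referred to above can be illustrated with a short numerical sketch: a single reflection with delay tau and relative gain g gives the response H(f) = 1 + g·exp(-j·2π·f·tau), whose magnitude ripples between 1-g and 1+g, with notches spaced 1/tau apart. The delay and gain values below are illustrative, not taken from the paper.

```python
import numpy as np

# Magnitude response of the comb filter produced by a single reflection.
# Assumed parameters: delay tau = 2 ms, relative gain g = 0.7.
fs = 48000.0          # sampling rate (Hz)
tau = 0.002           # reflection delay (s)
g = 0.7               # reflection gain relative to the direct sound

f = np.linspace(0.0, fs / 2, 1024)
# H(f) = 1 + g * exp(-j*2*pi*f*tau) -> |H| ripples between 1-g and 1+g
H = 1.0 + g * np.exp(-2j * np.pi * f * tau)
mag_db = 20 * np.log10(np.abs(H))

# First notch near 1/(2*tau) = 250 Hz; further notches spaced 1/tau = 500 Hz apart
print(f"first notch near {1.0 / (2 * tau):.0f} Hz, "
      f"ripple depth {mag_db.max() - mag_db.min():.1f} dB")
```

The longer the delay, the denser the notches; whether the resulting coloration is audible is exactly the question the paper investigates.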
Musical Applications of Decomposition with Global Support
Much of today's musical signal processing is based upon local decomposition methods, often using short-range, windowed Fourier transforms. Such methods are supposed to mimic some aspects of human hearing. Using global decomposition instead, analyzing the complete source sound in one transform, opens up a new world of interesting sound manipulation methods. These methods are difficult to analyze in terms of human auditory perception or musical intuition but can produce exciting sounds. A program that explores these possibilities has been written and placed in the public domain.
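As a minimal illustration of such a global method (a generic sketch, not a feature of the program mentioned in the abstract), one can transform an entire sound with a single FFT, randomize the phases, and invert: the overall spectrum is preserved while all timing information is smeared.

```python
import numpy as np

# Global manipulation sketch: one FFT over the whole sound, phase
# randomization, inverse FFT. The test signal is a synthetic stand-in.
def phase_scramble(x, seed=0):
    X = np.fft.rfft(x)                         # one transform, global support
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0, 2 * np.pi, X.shape)
    Y = np.abs(X) * np.exp(1j * phases)        # keep magnitudes, scramble phases
    return np.fft.irfft(Y, n=len(x))

x = np.sin(2 * np.pi * np.linspace(0, 20, 4096)) * np.hanning(4096)
y = phase_scramble(x)
print(len(y))
```

Because the transform spans the whole sound, the operation has no meaningful frame-by-frame interpretation, which matches the abstract's point about such methods being hard to relate to auditory perception.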
A Hierarchical Constant Q Transform for Partial Tracking in Musical signals
This paper addresses a method for signal-dependent time-frequency tiling of musical signals with respect to onsets and offsets of partials. The method is based on multi-level constant Q transforms, where the calculation of bins in the higher levels of the transform depends on the input signal content. The transform uses the signal energy in the subbands to determine whether the higher-Q bins in the next level, corresponding roughly to the same frequency band, will be calculated. At each higher level, the frequency resolution is increased by doubling the number of bins, but only where there is significant energy in the previous level. The Q is adjusted accordingly for each level and held constant within a level. Processing starts with a low Q that provides good time resolution and proceeds to higher levels until the desired maximum frequency resolution is achieved. The advantages of this method are twofold: first, the time resolution depends on the spacing of the frequency components in the input signal, potentially reducing time smearing; second, although signal dependent, the conditional calculation of higher-Q levels directly reduces the number of operations needed to compute the final spectrum for regular harmonic monophonic sounds. Partial tracking is performed using conventional peak picking and a birth-death strategy for frequency tracks. Testing is carried out by resynthesizing the input sound from the extracted parameters using a sum of sinusoids with cubic interpolation for phase unwrapping between frames.
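The conditional refinement described above can be sketched in a toy form. The code below substitutes a single windowed FFT for the per-level constant-Q kernels and splits a band into two narrower bands only when its energy exceeds a threshold; the signal, levels, and threshold are all illustrative, not the paper's values.

```python
import numpy as np

# Toy sketch of energy-conditional band refinement: a band is subdivided
# at the next level only if it holds a significant share of the energy.
def refine(freqs, power, band, level, max_level, thresh, out):
    lo, hi = band
    e = power[(freqs >= lo) & (freqs < hi)].sum()
    if e < thresh or level == max_level:
        out.append((lo, hi, level))           # keep band at current resolution
        return
    mid = 0.5 * (lo + hi)                      # double resolution for this band
    refine(freqs, power, (lo, mid), level + 1, max_level, thresh, out)
    refine(freqs, power, (mid, hi), level + 1, max_level, thresh, out)

fs, n = 8000, 2048
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 440 * t) * np.hanning(n)   # one partial, Hann-windowed
power = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(n, 1.0 / fs)

bands = []
refine(freqs, power, (0.0, fs / 2), 0, 5, 0.01 * power.sum(), bands)
deepest = [b for b in bands if b[2] == 5]
print(deepest)
```

Only the bands around 440 Hz reach the deepest level; empty regions stay coarse, which mirrors the claimed reduction in operations for harmonic monophonic sounds.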
Envelope Model of Isolated Musical Sounds
This paper presents a model of the envelope of the additive parameters of isolated musical sounds, along with a new method for the estimation of the important envelope split-point times. The model consists of start, attack, sustain, release, and end segments with variable split-point amplitude and time. The estimation of the times is done using smoothed derivatives of the envelopes. The estimated split-point values can be used together with a curve-form model introduced in this paper in the analysis/synthesis of musical sounds. The envelope model can recreate noise-less musical sounds with good fidelity, and the method for the estimation of the envelope times performs significantly better than the classical percentage-based method.
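The derivative-based split-point idea can be sketched as follows: smooth the envelope, then locate the end of the attack where the derivative levels off after the steepest rise. The smoothing length and threshold below are assumptions for illustration, not the paper's values.

```python
import numpy as np

# Find the end-of-attack split point from the smoothed envelope derivative.
def attack_end(env, smooth=9, eps=1e-3):
    kernel = np.ones(smooth) / smooth
    sm = np.convolve(env, kernel, mode="same")   # moving-average smoothing
    d = np.diff(sm)
    peak = int(np.argmax(d))                     # steepest rise of the envelope
    for i in range(peak, len(d)):
        if d[i] < eps:                           # derivative has levelled off
            return i
    return len(d)

# Synthetic envelope: linear attack over 100 samples, flat sustain, decay.
env = np.concatenate([np.linspace(0, 1, 100),
                      np.ones(200),
                      np.linspace(1, 0, 150)])
print(attack_end(env))   # near sample 100, where the attack ends
```

A percentage-based detector would place this point wherever the envelope crosses a fixed fraction of the maximum, which shifts with curve shape; the derivative criterion tracks the actual change of slope.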
Design and Understandability of Digital-Audio Musical Symbols for Intent and State Communication from Service Robots to Humans
Auditory displays for mobile service robots are developed. The design of digital-audio symbols, such as directional sounds and additional sounds for robot states, as well as the design of more complicated robot sound tracks are explained. Basic musical elements and robot movement sounds are combined. Two experimental studies, on the understandability of the directional sounds and on the auditory perception of intended robot trajectories in a simulated supermarket scenario, are described.
New techniques and Effects in Model-based Sound Synthesis
Physical modeling and model-based sound synthesis have recently been among the most active topics in computer music and audio research. In the modeling approach one typically tries to simulate and duplicate the most prominent sound generation properties of the acoustic musical instrument under study. If desired, the models developed may then be modified in order to create sounds that are not common or even possible from physically realizable instruments. In addition to physically related principles, it is possible to combine physical models with other synthesis and signal processing methods to realize hybrid modeling techniques.
This article is written as an overview of some recent results in model-based sound synthesis and related signal processing techniques. The focus is on modeling and synthesizing plucked string sounds, although the techniques may find much more widespread application. First, as a background, an advanced linear model of the acoustic guitar is discussed along with model control principles. Then the methodology to include inherent nonlinearities and time-varying features is introduced. Examples of string instrument nonlinearities are studied in the context of two specific instruments, the kantele and the tanbur, which exhibit interesting nonlinear effects.
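The simplest relative of the plucked-string models discussed above is the classic Karplus-Strong loop: a delay line standing for the string, closed through a mild low-pass filter that mimics frequency-dependent losses. This toy is far simpler than the article's guitar model, but it shows the structure shared by such string models.

```python
import random
from collections import deque

# Karplus-Strong plucked string: delay line + two-point averaging low-pass.
def karplus_strong(fs=44100, f0=220.0, seconds=0.5, loss=0.996):
    n = int(fs / f0)                      # delay-line length sets the pitch
    line = deque(random.uniform(-1, 1) for _ in range(n))  # "pluck": noise burst
    out = []
    for _ in range(int(fs * seconds)):
        s = line.popleft()
        nxt = loss * 0.5 * (s + line[0])  # averaging = low-pass loop filter
        line.append(nxt)
        out.append(s)
    return out

y = karplus_strong()
print(len(y), max(abs(v) for v in y))
```

Replacing the averaging step with a measured loop filter, adding body filtering, and introducing nonlinear or time-varying elements leads toward the kind of models (guitar, kantele, tanbur) the article surveys.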
More Acoustic Sounding Timbre From Guitar Pickups
Amplified guitars with pickups tend to sound 'dry' and electric, whether the instrument is acoustic or electric. Vibration or pressure sensing pickups for acoustic guitars do not capture the body vibrations with fidelity, and an electric guitar with magnetic pickups often has no resonating body at all. With an acoustic guitar in particular, there is a need to reinforce the sound while retaining the natural acoustic timbre. In this study we have explored the use of DSP equalization to make the signal from the pickup sound more acoustic. Both acoustic and electric guitar pickups are studied. Different digital filters for simulating an acoustic sound are compared, and related estimation techniques for the filter parameters are discussed.
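One way to sketch the 'body' coloration that such equalization tries to restore is a small bank of two-pole resonators tuned to assumed body modes; the mode frequencies and pole radii below are illustrative, not measured values from the study.

```python
import math

# Two-pole resonator: y[n] = b0*x[n] + 2r*cos(w0)*y[n-1] - r^2*y[n-2]
def resonator(x, fs, f0, r):
    w0 = 2 * math.pi * f0 / fs
    b0 = 1.0 - r                       # rough gain normalization
    a1, a2 = 2 * r * math.cos(w0), -r * r
    y1 = y2 = 0.0
    out = []
    for s in x:
        y = b0 * s + a1 * y1 + a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

fs = 44100
impulse = [1.0] + [0.0] * 999
modes = [(100.0, 0.999), (200.0, 0.998)]   # illustrative body modes (Hz, pole radius)
body = [sum(vals) for vals in zip(*(resonator(impulse, fs, f, r) for f, r in modes))]
print(body[0])
```

A practical equalizer would estimate such mode parameters from recordings, which is the kind of filter-parameter estimation problem the abstract refers to.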
Time-domain model of the singing voice
A combined physical model of the human vocal folds and vocal tract is presented. The vocal fold model is based on a symmetrical 16-mass model by Titze. Each vocal fold is modeled with 8 masses that represent the mucosal membrane, coupled by non-linear springs to another 8 masses for the vocalis muscle together with the ligament. Iteratively, the value of the glottal flow is calculated and taken as input for the calculation of the aerodynamic forces. Together with the spring forces and damping forces, they yield the new positions of the masses, which are then used for the calculation of a new glottal flow value. The vocal tract model consists of a number of uniform cylinders of fixed length. At each discontinuity, incident, reflected and transmitted waves are calculated, including damping. Assuming a linear system, the pressure signal generated by the vocal fold model is either convolved with the Green's function calculated by the vocal tract model or calculated interactively, assuming variable reflection coefficients for the glottis and the vocal tract during phonation. The algorithms aim at real-time performance and are implemented in MATLAB.
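The wave computation at each cylinder junction of such a tract model is commonly written in Kelly-Lochbaum form; the sketch below (with an illustrative area profile, not data from the paper) shows the reflection coefficients and one lossless scattering step.

```python
# Kelly-Lochbaum junction between uniform tube segments.
def reflection_coeffs(areas):
    # k_i = (A_i - A_{i+1}) / (A_i + A_{i+1}) for cross-sectional areas A
    return [(a1 - a2) / (a1 + a2) for a1, a2 in zip(areas, areas[1:])]

def scatter(k, fwd_in, bwd_in):
    # one junction: partial reflection of forward and backward waves
    fwd_out = (1 + k) * fwd_in - k * bwd_in
    bwd_out = k * fwd_in + (1 - k) * bwd_in
    return fwd_out, bwd_out

areas = [2.6, 1.8, 1.0, 1.4, 3.0, 4.2]    # cross-sections in cm^2 (toy profile)
ks = reflection_coeffs(areas)
print([round(k, 3) for k in ks])
```

With k = 0 (equal areas) the waves pass through unchanged; the paper's model additionally applies damping at each discontinuity and time-varying reflection coefficients at the glottis end.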
Issues in Performance Prediction of Surround Systems for Sound Reinforcement Applications
Multichannel audio is set to change the way we listen to reproduced music, allowing the creation of spatial auditory images that add, quite literally, more dimensions to the whole listening experience.
Current methods for the objective assessment of the imaging produced by holographic sound systems assume listening conditions that are met only in household installations, where the audience is located in a very restricted area and the acoustical properties of the room can generally be neglected.
In this paper we address the main limitations of these traditional evaluation techniques in the case of surround sound systems serving large and acoustically non-ideal listening areas.
3-D Audio with Dynamic Tracking for Multimedia Environments
This paper describes a 3-D audio system developed for desktop multimedia environments. The system can place virtual sources at arbitrary azimuths and elevations around the listener's head using HRTF-based binaural synthesis. The assumed setup is a listener seated in front of a computer with two loudspeakers placed on either side of the monitor. Transaural reproduction over loudspeakers is used for rendering the sound field at the listener's ears. Furthermore, the system can cope with slight movements of the listener's head. The head position is monitored by means of a simple computer vision algorithm. Four head-position coordinates (x, y, z, φ) are continuously estimated in order to allow free movement of the listener. The cross-talk cancellation filters and virtual source locations are updated according to these head coordinates.
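The cross-talk cancellation step can be sketched in the frequency domain: per bin, invert the 2×2 matrix of ipsilateral and contralateral loudspeaker-to-ear transfer functions, with a small regularization term. The transfer functions below are synthetic stand-ins, not measured HRTFs.

```python
import numpy as np

# Regularized inversion of the 2x2 acoustic transfer matrix per frequency bin:
# C = (H^H H + beta I)^{-1} H^H, so that H @ C is close to the identity.
def xtalk_filters(H, beta=1e-3):
    Hh = np.conj(np.swapaxes(H, 1, 2))            # per-bin conjugate transpose
    return np.linalg.solve(Hh @ H + beta * np.eye(2), Hh)

nbins = 128
f = np.arange(nbins)
ipsi = np.ones(nbins, dtype=complex)              # direct loudspeaker-to-ear path
contra = 0.6 * np.exp(-1j * 2 * np.pi * f * 0.1)  # delayed, attenuated crosstalk path
H = np.empty((nbins, 2, 2), dtype=complex)
H[:, 0, 0] = H[:, 1, 1] = ipsi
H[:, 0, 1] = H[:, 1, 0] = contra
C = xtalk_filters(H)
err = np.abs(H @ C - np.eye(2)).max()
print(err)   # small residual: crosstalk largely cancelled
```

In the described system, H changes as the tracked head coordinates change, so these filters must be recomputed or interpolated as the listener moves.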