A Wavelet-based Pitch Detector for Musical Signals
Physical modelling of musical instruments is one possible approach to digital sound synthesis. By physical modelling we refer to the simulation of the sound production mechanism of a musical instrument, modelled with reference to its physics using waveguides. One of the fundamental parameters of such a physical model is the pitch, so pitch period estimation is one of the first tasks in any analysis for such a model. In this paper, an algorithm based on the Dyadic Wavelet Transform is investigated for pitch detection of musical signals. The wavelet transform is simply the convolution of a signal f(t) with dilated and translated versions of a single function, the mother wavelet, which has to satisfy certain requirements. There is a wide variety of possible wavelets, but not all are appropriate for pitch detection. The performance of both linear-phase wavelets (Haar, Morlet, and the spline wavelet) and minimum-phase wavelets (Daubechies’ wavelets) has been investigated. The algorithm proposed here has proved to be simple, accurate, and robust to noise; it also has the potential for acceptable speed. A comparative study between this algorithm and the well-known autocorrelation function is also given. Finally, illustrative examples with different real guitar tones and other sound signals are given using the proposed algorithm. KEYWORDS: Physical modeling – wavelet transform – pitch – autocorrelation function.
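For reference, the wavelet transform the abstract describes has the standard form below; the symbols a, b, and psi are generic notation, not taken from the paper.

```latex
% Wavelet transform of f(t): correlation with scaled and translated copies of the mother wavelet \psi.
W_f(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(t)\, \psi^{*}\!\left(\frac{t - b}{a}\right) \mathrm{d}t ,
\qquad a = 2^{j},\; j \in \mathbb{Z} \quad \text{(dyadic scales)} .
```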
Separation of Musical Instruments based on Perceptual and Statistical Principles
The separation of musical instruments acoustically mixed into one source is a very active field which has been approached from many different viewpoints. This article compares the blind source separation perspective and oscillatory correlation theory, taking auditory scene analysis (ASA) as the point of departure. The former technique deals with the separation of a particular signal from a mixture with many others from a statistical point of view. Using standard Independent Component Analysis (ICA), blind source separation can be performed from the statistical properties of the individual and mixed signals. The technique is therefore general and does not rely on prior knowledge about musical instruments. In the second approach, an extension of ASA is studied with a dynamic neural model which is able to separate the different musical instruments, taking a priori unknown perceptual elements as the point of departure. By applying an inverse transformation to the output of the model, the different contributions to the mixture can be recovered in the time domain.
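The sketch below is only a minimal illustration of standard ICA on a synthetic two-instrument mixture; it is not the system described in the article, and the stand-in signals and mixing matrix are invented for the example.

```python
# Illustrative sketch: FastICA on a synthetic, instantaneously mixed two-source signal.
import numpy as np
from sklearn.decomposition import FastICA

fs = 16000
t = np.arange(0, 2.0, 1.0 / fs)

# Two stand-in "instrument" signals: a decaying pluck and a vibrato tone.
s1 = np.exp(-3 * t) * np.sin(2 * np.pi * 220 * t)
s2 = np.sin(2 * np.pi * (440 * t + 5 * np.sin(2 * np.pi * 6 * t)))
S = np.c_[s1, s2]

# Instantaneous (memoryless) mixing -- the basic assumption behind standard ICA.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = S @ A.T                      # two observed mixture channels

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)     # estimated sources, recovered up to scale and permutation
```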
Number Theoretic Transforms in Audio Processing
This paper is concerned with the application of Number Theoretic Transforms (NTTs) to audio processing. The problem of dynamic range is of particular interest for this application and is therefore treated in some detail.
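As a minimal illustration of what an NTT is (the paper's actual modulus, transform length, and word-length choices are not given here), the sketch below computes a small transform over a prime field and shows the exact round trip.

```python
# Naive O(N^2) number theoretic transform over GF(p), p prime, with N dividing p - 1.
def ntt(x, p=257, g=3):
    """Forward NTT of integer sequence x modulo p (g is a primitive root mod p)."""
    N = len(x)
    assert (p - 1) % N == 0
    w = pow(g, (p - 1) // N, p)          # primitive N-th root of unity mod p
    return [sum(x[n] * pow(w, k * n, p) for n in range(N)) % p for k in range(N)]

def intt(X, p=257, g=3):
    """Inverse NTT: same kernel with w^(-1), scaled by N^(-1) mod p."""
    N = len(X)
    w_inv = pow(pow(g, (p - 1) // N, p), p - 2, p)
    n_inv = pow(N, p - 2, p)
    return [(n_inv * sum(X[k] * pow(w_inv, k * n, p) for k in range(N))) % p
            for n in range(N)]

x = [1, 2, 3, 4, 0, 0, 0, 0]
assert intt(ntt(x)) == x                 # exact round trip: no rounding error at all
```

All arithmetic is exact but reduced modulo p, so every intermediate value must fit within the modulus; this is the dynamic-range constraint the abstract refers to.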
Musical timbre combfilter-coloration from reflections
Coloration is defined as a change in timbre ("Klangfarbe"). Adding a reflection automatically changes the frequency response of a signal, giving some kind of coloration. This might be regarded as distortion. However, reflections have been a natural part of sound distribution since the Greek amphitheatres, indicating that some coloration must be acceptable, or even wanted, depending on the type of signal or musical material. The question is: which reflections give disturbing or unwanted coloration? Part 1 gives a general overview of the comb-filter effect for different time delays. Parts 2 and 3 give the main results of a large practical investigation of coloration on orchestra platforms in concert halls. In Parts 4 and 5, these results are compared with psychoacoustic studies on coloration.
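For a single reflection of relative gain g arriving tau seconds after the direct sound, the resulting comb-filter response takes the standard form below (g and tau are generic notation, not the paper's symbols).

```latex
% Direct sound plus one reflection of gain g delayed by \tau seconds:
H(f) = 1 + g\, e^{-j 2\pi f \tau},
\qquad |H(f)| = \sqrt{1 + g^{2} + 2 g \cos(2\pi f \tau)} .
% Peaks occur at f = k/\tau and notches at f = (2k+1)/(2\tau), k = 0, 1, 2, \dots,
% so the comb spacing is 1/\tau\ \mathrm{Hz}.
```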
Musical Applications of Decomposition with Global Support
Much of today's musical signal processing is based upon local decomposition methods, often using short-time windowed Fourier transforms. Such methods are intended to mimic some aspects of human hearing. Using global decomposition instead, analyzing the complete source sound in one transform, opens up a new world of interesting sound manipulation methods. These methods are difficult to analyze in terms of human auditory perception or musical intuition, but they can produce exciting sounds. A program that explores these possibilities has been written and placed in the public domain.
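The sketch below shows one example of a "global" manipulation in this spirit: a single FFT over the entire sound, a spectral-domain edit, and one inverse transform. It is only an illustration of the idea and is not the public-domain program mentioned in the abstract.

```python
# Global phase scrambling: keep the full-length magnitude spectrum, randomize all phases.
import numpy as np

def global_phase_scramble(x, seed=0):
    X = np.fft.rfft(x)                         # one transform of the whole signal
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0, 2 * np.pi, X.shape)
    Y = np.abs(X) * np.exp(1j * phases)        # same long-term spectrum, new time structure
    return np.fft.irfft(Y, n=len(x))           # back to the time domain in one step

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) * np.exp(-2 * t)   # a decaying tone as test input
y = global_phase_scramble(x)
```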
A Hierarchical Constant Q Transform for Partial Tracking in Musical signals
This paper addresses a method for signal-dependent time-frequency tiling of musical signals with respect to onsets and offsets of partials. The method is based on multi-level constant Q transforms where the calculation of bins in the higher levels of the transform depends on the input signal content. The transform uses the signal energy in the subbands to determine whether the higher-Q bins in the next level, which correspond roughly to the same frequency band, will be calculated or not. At each higher level, the frequency resolution is increased by doubling the number of bins, but only for bins where there is significant energy at the previous level. The Q is adjusted accordingly for each level and is held constant within a level. Processing starts with a low Q that provides good time resolution and proceeds to higher levels until the desired maximum frequency resolution is achieved. The advantages of this method are twofold: first, the time resolution depends on the spacing of the frequency components in the input signal, potentially leading to reduced time smearing; second, although signal dependent, the conditional calculation of the higher-Q levels directly reduces the number of operations needed to calculate the final spectrum for regular harmonic monophonic sounds. Partial tracking is performed using conventional peak picking and a birth-death strategy for frequency tracks. Testing is being carried out by resynthesizing the input sound from the extracted parameters using a sum of sinusoids with cubic interpolation for phase unwrapping between frames.
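The following is a rough sketch of the conditional, level-by-level refinement idea described above, using a direct (slow) constant-Q bin computation. The bin grid, energy threshold, and Q schedule are assumptions made for the illustration, not the paper's parameters.

```python
import numpy as np

def cq_bin(x, fs, f_k, Q):
    """One constant-Q bin at centre frequency f_k; window length N_k = Q * fs / f_k."""
    N_k = int(np.ceil(Q * fs / f_k))
    n = np.arange(min(N_k, len(x)))
    kernel = np.hanning(len(n)) * np.exp(-2j * np.pi * Q * n / N_k)
    return np.sum(x[: len(n)] * kernel) / len(n)

def hierarchical_cqt(x, fs, f_min=55.0, n_bins=12, levels=3, q0=8.0, thresh=1e-4):
    freqs = f_min * 2.0 ** (np.arange(n_bins) / 12.0)        # level 0: semitone grid, low Q
    Q, out = q0, {}
    for lvl in range(levels):
        mags = {float(f): abs(cq_bin(x, fs, f, Q)) for f in freqs}
        out[lvl] = mags
        keep = [f for f, m in mags.items() if m > thresh]    # bins with significant energy
        if not keep:
            break
        # Double the bin density only around the kept bins, and raise Q for the next level.
        step = 2.0 ** (1.0 / (24.0 * 2 ** lvl))
        freqs = sorted(set(keep + [f * step for f in keep]))
        Q *= 2.0
    return out
```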
Envelope Model of Isolated Musical Sounds
This paper presents a model of the envelope of the additive parameters of isolated musical sounds, along with a new method for estimating the important envelope split-point times. The model consists of start, attack, sustain, release, and end segments with variable split-point amplitude and time. The split-point times are estimated using smoothed derivatives of the envelopes. The estimated split-point values can be used together with a curve-form model introduced in this paper in the analysis/synthesis of musical sounds. The envelope model can recreate noise-less musical sounds with good fidelity, and the method for estimating the envelope times performs significantly better than the classical percentage-based method.
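Below is a simplified sketch of the smoothed-derivative idea for locating split points (end of attack, start of release) on an amplitude envelope. The smoothing length and thresholds are assumptions for the illustration, not the values used in the paper.

```python
import numpy as np

def split_points(env, fs, smooth_ms=20.0, rel_thresh=0.1):
    """Return (attack_end, release_start) sample indices for an amplitude envelope env."""
    win = max(1, int(fs * smooth_ms / 1000.0))
    smoothed = np.convolve(env, np.ones(win) / win, mode="same")      # moving-average smoothing
    d = np.gradient(smoothed)                                          # smoothed derivative
    thresh = rel_thresh * np.max(np.abs(d))
    rise_start = int(np.argmax(d > thresh))                            # first strong positive slope
    attack_end = rise_start + int(np.argmax(d[rise_start:] < thresh))  # slope flattens out again
    release_start = attack_end + int(np.argmax(d[attack_end:] < -thresh))  # first strong negative slope
    return attack_end, release_start

# Quick check on a synthetic attack-sustain-release envelope.
fs = 44100
t = np.arange(0, 1.5, 1 / fs)
env = np.minimum(t / 0.02, 1.0) * np.exp(-8 * np.maximum(t - 1.0, 0.0))
attack_end, release_start = split_points(env, fs)   # near the end of the rise and the start of the decay
```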
Design and Understandability of Digital-Audio Musical Symbols for Intent and State Communication from Service Robots to Humans
Auditory displays for mobile service robots are developed. The design of digital-audio symbols, such as directional sounds and additional sounds for robot states, as well as the design of more complex robot sound tracks, is explained. Basic musical elements and robot movement sounds are combined. Two experimental studies, on the understandability of the directional sounds and on the auditory perception of intended robot trajectories in a simulated supermarket scenario, are described.
New techniques and Effects in Model-based Sound Synthesis
Physical modeling and model-based sound synthesis have recently been among the most active topics of computer music and audio research. In the modeling approach one typically tries to simulate and duplicate the most prominent sound generation properties of the acoustic musical instrument under study. If desired, the models developed may then be modified in order to create sounds that are not common or even possible from physically realizable instruments. In addition to purely physically based principles, it is possible to combine physical models with other synthesis and signal processing methods to realize hybrid modeling techniques. This article is written as an overview of some recent results in model-based sound synthesis and related signal processing techniques. The focus is on modeling and synthesizing plucked string sounds, although the techniques may find much more widespread application. First, as a background, an advanced linear model of the acoustic guitar is discussed along with model control principles. Then the methodology to include inherent nonlinearities and time-varying features is introduced. Examples of string instrument nonlinearities are studied in the context of two specific instruments, the kantele and the tanbur, which exhibit interesting nonlinear effects.
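As a point of reference, the sketch below is the classic Karplus-Strong plucked-string loop, the simplest relative of the model-based string synthesis discussed above; it is not the article's extended guitar, kantele, or tanbur model, and the parameter values are arbitrary.

```python
import numpy as np

def karplus_strong(f0, fs=44100, dur=1.5, decay=0.996, seed=0):
    """Plucked-string tone at fundamental f0 (Hz): a noise burst fed into an averaging feedback loop."""
    N = int(round(fs / f0))                    # delay-line length sets the pitch
    rng = np.random.default_rng(seed)
    buf = rng.uniform(-1.0, 1.0, N)            # the "pluck": white-noise excitation
    out = np.empty(int(fs * dur))
    for i in range(len(out)):
        out[i] = buf[i % N]
        # Loss/averaging filter in the feedback loop: lowpass plus decay per round trip.
        buf[i % N] = decay * 0.5 * (buf[i % N] + buf[(i + 1) % N])
    return out

tone = karplus_strong(196.0)                   # roughly a guitar G3
```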
More Acoustic Sounding Timbre From Guitar Pickups
Amplified guitars with pickups tend to sound 'dry' and electric, whether the instrument is acoustic or electric. Vibration or pressure sensing pickups for acoustic guitars do not capture the body vibrations with fidelity, and an electric guitar with magnetic pickups often has no resonating body at all. Especially with an acoustic guitar, there is a need to reinforce the sound while retaining the natural acoustic timbre. In this study we have explored the use of DSP equalization to make the signal from a pickup sound more acoustic. Both acoustic and electric guitar pickups are studied. Different digital filters for simulating the acoustic sound are compared, and related estimation techniques for the filter parameters are discussed.
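The sketch below shows one possible way to estimate such an equaliser from paired pickup and microphone recordings: smooth spectral estimates, take their magnitude ratio, and fit a linear-phase FIR to it. This is only an illustration of the general idea, not the specific filters or estimation methods compared in the paper; pickup_signal and mic_signal are hypothetical input arrays.

```python
import numpy as np
from scipy.signal import fftconvolve, firwin2, welch

def body_eq_fir(pickup, mic, fs, n_taps=1025):
    """Design an FIR that pushes the pickup signal's spectrum toward the microphone's."""
    f, p_pick = welch(pickup, fs, nperseg=4096)          # smoothed spectrum of the pickup
    _, p_mic = welch(mic, fs, nperseg=4096)              # smoothed spectrum of the mic (acoustic reference)
    gain = np.sqrt(p_mic / np.maximum(p_pick, 1e-12))    # per-band magnitude correction
    gain = np.clip(gain, 0.1, 10.0)                      # limit boost/cut to about +/- 20 dB
    return firwin2(n_taps, f, gain, fs=fs)               # linear-phase FIR matching the target response

# eq = body_eq_fir(pickup_signal, mic_signal, fs)
# acoustic_like = fftconvolve(pickup_signal, eq)
```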