3D Graphics Tools for Sound Collections
Most current tools for working with sound operate on single sound files, use 2D graphics, and offer limited interaction to the user. In this paper we describe a set of tools for working with collections of sounds that are based on interactive 3D graphics. These tools form two families: sound analysis visualization displays and model-based controllers for sound synthesis algorithms. We describe the general techniques we have used to develop these tools and give specific case studies from each family. Several collections of sounds were used for development and evaluation: a set of musical instrument tones, a set of sound effects, a set of FM radio clips spanning several music genres, and a set of MP3 rock song snippets.
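As a rough sketch of the sound-analysis-visualization idea (not the original 3D tools described in the paper), the following Python snippet summarizes each sound in a collection with a few spectral features and projects the collection into a 3D scatter plot. The file list and feature choices are illustrative placeholders, and librosa, scikit-learn, and matplotlib are assumed to be available.

    # Minimal sketch: place a collection of sounds in a 3D feature space.
    # File names below are placeholders, not part of the original tools.
    import numpy as np
    import librosa
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers 3D projection on older matplotlib)
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    files = ["tone1.wav", "tone2.wav", "effect1.wav", "clip1.wav"]  # hypothetical collection

    def describe(path):
        """Summarize one sound with a few spectral statistics."""
        y, sr = librosa.load(path, sr=22050, mono=True)
        centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
        rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr).mean()
        zcr = librosa.feature.zero_crossing_rate(y).mean()
        rms = librosa.feature.rms(y=y).mean()
        return [centroid, rolloff, zcr, rms]

    features = StandardScaler().fit_transform(np.array([describe(f) for f in files]))
    coords = PCA(n_components=3).fit_transform(features)  # 3D layout of the collection

    ax = plt.figure().add_subplot(projection="3d")
    ax.scatter(coords[:, 0], coords[:, 1], coords[:, 2])
    for (x, y_, z), name in zip(coords, files):
        ax.text(x, y_, z, name)
    plt.show()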
Human Perception and Computer Extraction of Musical Beat Strength
Musical signals exhibit periodic temporal structure that creates the sensation of rhythm. In order to model, analyze, and retrieve musical signals, it is important to automatically extract rhythmic information. To simplify the problem somewhat, automatic algorithms typically extract information only about the main beat of the signal, which can be loosely defined as the regular periodic sequence of pulses corresponding to where a human would tap their foot while listening to the music. In these algorithms, the beat is characterized by its frequency (tempo), its phase (accent locations), and a confidence measure about its detection. The main focus of this paper is the concept of Beat Strength, which we loosely define as a rhythmic characteristic that allows discrimination between two pieces of music that have the same tempo. Using this definition, we might say that a piece of Hard Rock has a higher beat strength than a piece of Classical Music at the same tempo. Characteristics related to Beat Strength have been used implicitly in automatic beat detection algorithms and have been shown to be as important as tempo information for music classification and retrieval. In the work presented in this paper, a user study exploring the perception of Beat Strength was conducted, and the results were used to calibrate and explore automatic Beat Strength measures based on the calculation of Beat Histograms.
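As a rough illustration (not the paper's wavelet-based Beat Histogram procedure), the sketch below derives a crude Beat Strength proxy from the autocorrelation of an onset-strength envelope. librosa is assumed, and the lag range and normalization are arbitrary choices.

    # Simplified proxy for Beat Strength: how prominent the autocorrelation
    # peaks of the onset-strength envelope are within a plausible tempo range.
    import numpy as np
    import librosa

    def beat_strength(path, sr=22050):
        y, _ = librosa.load(path, sr=sr, mono=True)
        onset_env = librosa.onset.onset_strength(y=y, sr=sr)  # per-frame onset energy
        onset_env = onset_env - onset_env.mean()
        ac = np.correlate(onset_env, onset_env, mode="full")[len(onset_env) - 1:]
        ac = ac / (ac[0] + 1e-12)                              # normalize so lag 0 == 1
        fps = sr / 512.0                                       # frames per second at librosa's default hop
        lo, hi = int(fps * 60 / 200), int(fps * 60 / 40)       # lags for 200 down to 40 BPM
        return float(ac[lo:hi].max())                          # closer to 1 == strongly periodic onsets

Under this crude proxy, a Hard Rock track would typically score higher than a Classical piece at the same tempo, in line with the informal definition above.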
MUSESCAPE: An interactive content-aware music browser
Advances in hardware performance, network bandwidth, and audio compression have made possible the creation of large personal digital music collections. Although there is a significant body of work on image and video browsing, there has been little work that directly addresses the problem of audio, and especially music, browsing. In this paper, Musescape, a prototype music browsing system, is described and evaluated. The main characteristics of the system are automatic configuration based on Computer Audition techniques and the use of continuous audio-music feedback while browsing and interacting with the system. The described ideas and techniques take advantage of the unique characteristics of music signals. A pilot user study was conducted to explore and evaluate the proposed user interface. The results indicate that the use of automatically extracted tempo information reduces browsing time and that continuous interactive audio feedback is appropriate for this particular domain.
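The core browsing idea can be sketched as follows (this is not the original Musescape implementation): tracks are indexed offline by automatically estimated tempo, and a slider position expressed in BPM retrieves the closest matches for immediate audio feedback. The file names are placeholders and librosa is assumed.

    # Sketch of tempo-indexed browsing in the spirit of Musescape.
    import librosa

    def build_tempo_index(paths):
        index = []
        for p in paths:
            y, sr = librosa.load(p, sr=22050, mono=True, duration=30.0)
            tempo, _ = librosa.beat.beat_track(y=y, sr=sr)   # automatic tempo estimate
            index.append((float(tempo), p))
        return sorted(index)

    def browse(index, slider_bpm, k=5):
        """Return the k tracks whose estimated tempo is closest to the slider position."""
        return sorted(index, key=lambda item: abs(item[0] - slider_bpm))[:k]

    collection = build_tempo_index(["songA.mp3", "songB.mp3", "songC.mp3"])  # placeholder files
    for bpm, path in browse(collection, slider_bpm=120):
        print(f"{bpm:6.1f} BPM  {path}")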
Audio-Based Gesture Extraction on the ESITAR Controller
Using sensors to extract gestural information for the control parameters of digital audio effects is common practice. There has also been research using machine learning techniques to classify specific gestures based on audio feature analysis. In this paper, we describe our experiments in training a computer to map audio-based features to the corresponding sensor data, in order to potentially eliminate the need for sensors. Specifically, we show experiments using the ESitar, a digitally enhanced, sensor-based controller modeled after the traditional North Indian sitar. We utilize multivariate linear regression to map continuous audio features to continuous gestural data.
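A minimal sketch of the regression step, assuming per-frame audio features and synchronized ESitar sensor recordings have already been extracted and aligned (the .npy file names are hypothetical):

    # Fit a multivariate linear map from audio features to continuous sensor
    # (gesture) data, so gestures can later be predicted from audio alone.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    X = np.load("esitar_audio_features.npy")   # hypothetical file, shape (n_frames, n_features)
    G = np.load("esitar_sensor_data.npy")      # hypothetical file, shape (n_frames, n_sensors)

    X_tr, X_te, G_tr, G_te = train_test_split(X, G, test_size=0.2, random_state=0)
    model = LinearRegression().fit(X_tr, G_tr)           # multivariate linear regression
    print("R^2 on held-out frames:", model.score(X_te, G_te))
    G_pred = model.predict(X_te)                         # sensor-like data from audio only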
A Framework for Sonification of Vicon Motion Capture Data
This paper describes experiments on sonifying data obtained using the VICON motion capture system. The main goal is to build the necessary infrastructure to be able to map motion parameters of the human body to sound. Three software frameworks were used for sonification: Marsyas, traditionally used for music information retrieval with audio analysis and synthesis; ChucK, an on-the-fly, real-time synthesis language; and the Synthesis Toolkit (STK), a toolkit for sound synthesis that includes many physical models of instruments and sounds. An interesting possibility is the use of motion capture data to control parameters of digital audio effects. In order to experiment with the system, different types of motion data were collected, including traditional performance on musical instruments, acting out emotions, and data from individuals with impairments in sensorimotor coordination. Rhythmic motion (e.g., walking), although complex, can be highly periodic and maps quite naturally to sound. We hope that this work will eventually assist patients in identifying and correcting problems related to motor coordination through sound.
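A minimal parameter-mapping sketch, not the Marsyas/ChucK/STK pipeline used in the paper: the speed of a single motion capture marker is mapped to the pitch and loudness of a sine tone. The marker file, frame rate, and mapping ranges are assumptions.

    # Map one marker's speed to the frequency and amplitude of a sine oscillator.
    import numpy as np
    import soundfile as sf

    sr = 44100
    frame_rate = 120.0                          # assumed mocap frame rate
    positions = np.load("marker_xyz.npy")       # hypothetical file, shape (n_frames, 3)

    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * frame_rate
    speed = speed / (speed.max() + 1e-12)       # normalize to 0..1

    # Upsample the control signal to audio rate and map it to pitch and loudness.
    n_samples = int(len(speed) / frame_rate * sr)
    ctrl = np.interp(np.linspace(0, len(speed) - 1, n_samples), np.arange(len(speed)), speed)
    freq = 220.0 + 660.0 * ctrl                 # faster motion -> higher pitch
    amp = 0.1 + 0.7 * ctrl                      # faster motion -> louder
    phase = 2 * np.pi * np.cumsum(freq) / sr
    sf.write("sonification.wav", amp * np.sin(phase), sr)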
Adaptive Harmonization and Pitch Correction of Polyphonic Audio Using Spectral Clustering
There are several well-known harmonization and pitch correction techniques that can be applied to monophonic sound sources. They are based on automatic pitch detection and frequency shifting without time stretching. In many applications it is desirable to apply such effects to the dominant melodic instrument of a polyphonic audio mixture. However, applying them directly to the mixture results in artifacts, and automatic pitch detection becomes unreliable. In this paper we describe how a dominant melody separation method, based on spectral clustering of sinusoidal peaks, can be used for adaptive harmonization and pitch correction in monaural polyphonic audio mixtures. Motivating examples are presented from a violin tutoring perspective as well as from modifying the saxophone melody of an old mono jazz recording.
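A sketch of the correction and harmonization step only, assuming the spectral-clustering separation has already produced a dominant-melody track and an accompaniment residual (both file names are placeholders); the separation method itself is not reproduced here.

    # Shift the separated melody for pitch correction, add a transposed copy
    # for harmonization, and remix with the accompaniment.
    import numpy as np
    import librosa
    import soundfile as sf

    melody, sr = librosa.load("separated_melody.wav", sr=None, mono=True)
    backing, _ = librosa.load("accompaniment.wav", sr=sr, mono=True)

    corrected = librosa.effects.pitch_shift(melody, sr=sr, n_steps=0.3)   # fraction of a semitone
    harmony = librosa.effects.pitch_shift(melody, sr=sr, n_steps=4.0)     # major third above

    n = min(len(corrected), len(harmony), len(backing))
    mix = backing[:n] + corrected[:n] + 0.6 * harmony[:n]
    sf.write("harmonized_mix.wav", mix / (np.abs(mix).max() + 1e-12), sr)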
One Billion Audio Sounds from GPU-Enabled Modular Synthesis
We release synth1B1, a multi-modal audio corpus consisting of 1 billion 4-second synthesized sounds, paired with the synthesis parameters used to generate them. The dataset is 100x larger than any audio dataset in the literature. We also introduce torchsynth, an open-source modular synthesizer that generates the synth1B1 samples on the fly at 16200x faster than real-time (714MHz) on a single GPU. In addition, we release two new audio datasets: FM synth timbre and subtractive synth pitch. Using these datasets, we demonstrate new rank-based evaluation criteria for existing audio representations. Finally, we propose a novel approach to synthesizer hyperparameter optimization.
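A minimal sketch of generating a synth1B1 batch on the fly with torchsynth, following the project's documented usage; the exact return signature should be checked against the torchsynth documentation for the installed version.

    # Generate one deterministic synth1B1 batch with the torchsynth Voice synth.
    import torch
    from torchsynth.synth import Voice

    voice = Voice()                      # default config: batches of 4-second sounds
    if torch.cuda.is_available():
        voice = voice.to("cuda")         # GPU gives the reported faster-than-real-time generation

    # Batch 0 of synth1B1: audio plus the parameters that produced it
    # (tuple layout assumed from the torchsynth docs; verify for your version).
    audio, params, is_train = voice(0)
    print(audio.shape, params.shape)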