A Framework for Sonification of Vicon Motion Capture Data
This paper describes experiments in sonifying data obtained with the Vicon motion capture system. The main goal is to build the infrastructure needed to map motion parameters of the human body to sound. Three software frameworks were used for sonification: Marsyas, traditionally used for music information retrieval with audio analysis and synthesis; ChucK, an on-the-fly real-time synthesis language; and the Synthesis ToolKit (STK), a toolkit for sound synthesis that includes many physical models of instruments and sounds. An interesting possibility is the use of motion capture data to control parameters of digital audio effects. To experiment with the system, several types of motion data were collected, including traditional performance on musical instruments, acted-out emotions, and data from individuals with impairments in sensorimotor coordination. Rhythmic motion (e.g. walking), although complex, can be highly periodic and maps quite naturally to sound. We hope that this work will eventually assist patients in identifying and correcting problems with motor coordination through sound.
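The abstract does not spell out the mapping itself, but the core idea (motion parameters driving sound parameters) can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's implementation: it maps the speed of a hypothetical hand marker onto the pitch of a sine tone, using plain NumPy rather than Marsyas, ChucK, or STK; the marker name, frame rate, and frequency range are all assumptions.

```python
# Minimal sketch (not the paper's implementation): map a motion-capture
# parameter (hypothetical 3-D hand-marker positions) to the pitch of a sine tone.
import numpy as np

MOCAP_RATE = 120          # assumed Vicon frame rate
AUDIO_RATE = 44100

def hand_speed(positions):
    """Frame-to-frame speed (units/s) from an (N, 3) array of marker positions."""
    return np.linalg.norm(np.diff(positions, axis=0), axis=1) * MOCAP_RATE

def sonify(speed, f_lo=200.0, f_hi=1200.0):
    """Map speed linearly onto a frequency range and render a sine tone."""
    norm = (speed - speed.min()) / (np.ptp(speed) + 1e-9)
    freq = f_lo + norm * (f_hi - f_lo)                      # one value per mocap frame
    freq_audio = np.repeat(freq, AUDIO_RATE // MOCAP_RATE)  # crude upsampling to audio rate
    phase = 2 * np.pi * np.cumsum(freq_audio) / AUDIO_RATE
    return 0.5 * np.sin(phase)

# Example: fake accelerating circular hand motion for two seconds.
t = np.linspace(0, 2, 2 * MOCAP_RATE)
angle = 2 * np.pi * t**2
fake_positions = np.stack([np.cos(angle), np.sin(angle), np.zeros_like(t)], axis=1)
audio = sonify(hand_speed(fake_positions))
```

A comparable mapping could just as well drive a digital audio effect parameter (e.g. filter cutoff or delay feedback) instead of pitch.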
A New Paradigm for Sound Design
A sound scene can be defined as any “environmental” sound that has a consistent background texture, with one or more potentially recurring foreground events. We describe a data-driven framework for analyzing, transforming, and synthesizing high-quality sound scenes, with flexible control over the components of the synthesized sound. Given one or more sound scenes, we provide well-defined means to: (1) identify points of interest in the sound and extract them into reusable templates, (2) transform sound components independently of the background or other events, (3) continually re-synthesize the background texture in a perceptually convincing manner, and (4) controllably place event templates over the background, varying key parameters such as density, periodicity, relative loudness, and spatial positioning. Contributions include: techniques and paradigms for template selection and extraction, independent sound transformation, and flexible re-synthesis; extensions to a wavelet-based background analysis/synthesis method; and user interfaces to facilitate the various phases. Given this framework, it is possible to completely transform an existing sound scene, dynamically generate sound scenes of unlimited length, and construct new sound scenes by combining elements from different sound scenes. URL: http://taps.cs.princeton.edu/
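As an illustration of point (4), the sketch below shows one way event templates could be scattered over a background texture with controllable density, periodicity jitter, relative loudness, and stereo position. It is not the framework's own code; the function name, parameter names, and defaults are all assumptions.

```python
# Illustrative sketch: mix copies of a foreground event template into a
# background texture, with assumed controls for density, jitter, gain, and pan.
import numpy as np

SR = 44100

def place_events(background, template, density_hz=1.0, jitter=0.3,
                 gain_db=-6.0, pan=0.0, rng=None):
    """Return a stereo mix of mono `background` plus scattered mono `template` copies.

    density_hz  -- average number of events per second
    jitter      -- 0 = perfectly periodic spacing, 1 = heavily randomized spacing
    gain_db     -- event level relative to the background
    pan         -- -1 (hard left) .. +1 (hard right)
    """
    rng = rng or np.random.default_rng()
    out = np.stack([background, background]).astype(float)   # (2, N) stereo buffer
    gain = 10 ** (gain_db / 20)
    left, right = np.sqrt((1 - pan) / 2), np.sqrt((1 + pan) / 2)
    period = SR / density_hz
    t = 0.0
    while t + len(template) < out.shape[1]:
        start = int(t)
        out[0, start:start + len(template)] += left * gain * template
        out[1, start:start + len(template)] += right * gain * template
        t += max(period * (1 + jitter * rng.uniform(-1, 1)), 1.0)  # jittered spacing
    return out
```

With the same background and template, sweeping `density_hz` or `jitter` alone is enough to produce noticeably different scenes, which is the kind of parametric control the abstract describes.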
User-Guided Variable-Rate Time-Stretching Via Stiffness Control
User control over variable-rate time-stretching typically requires direct, manual adjustment of the time-dependent stretch rate. For time-stretching with transient preservation, rhythmic warping, rhythmic emphasis modification, or other effects that require additional timing constraints, however, direct manipulation is difficult. For a more user-friendly approach, we present a method that lets the user specify a time-dependent stiffness curve to warp the time axis of a recording while maintaining other timing constraints, such as a desired overall recording length or musical rhythm quantization (e.g. straight-to-swing). This provides a notion of stretchability for sound. The user-guided stiffness curve and timing constraints are translated into the desired time-dependent stretch rate via a constrained optimization program motivated by a physical spring system. Once the time-dependent stretch rate is computed, appropriately modified variable-rate time-stretch processors are applied to the sound. Initial results are demonstrated using both a phase vocoder and a pitch-synchronous overlap-add processor.
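The abstract does not give the optimization itself; the sketch below shows one plausible spring-style formulation under stated assumptions: each segment i has duration d_i and user stiffness k_i, and the per-segment stretch rates r_i minimize sum_i k_i d_i (r_i - 1)^2 subject to the stretched durations summing to a target length. This particular quadratic program has a closed-form solution via a Lagrange multiplier; it is an illustration, not the paper's formulation.

```python
# Assumed spring-style formulation: stiff segments resist stretching.
# Minimize  sum_i k_i * d_i * (r_i - 1)^2   s.t.   sum_i d_i * r_i = L_target.
# Setting the Lagrangian gradient to zero gives r_i = 1 + (L - sum d) / (k_i * sum_j d_j/k_j).
import numpy as np

def stretch_rates(durations, stiffness, target_length):
    """Per-segment stretch rates; stiffer segments stay closer to rate 1."""
    d = np.asarray(durations, dtype=float)
    k = np.asarray(stiffness, dtype=float)
    excess = target_length - d.sum()          # total amount of time to add (or remove)
    return 1.0 + excess / (k * np.sum(d / k))

# Example: stretch a 4 s recording to 5 s; the stiff middle segment barely moves.
d = [1.0, 2.0, 1.0]
k = [1.0, 10.0, 1.0]
r = stretch_rates(d, k, target_length=5.0)
print(r, np.dot(d, r))   # per-segment rates and the resulting total length (= 5.0)
```

The resulting time-dependent rates would then be handed to a variable-rate time-stretch processor (e.g. a phase vocoder), as the abstract describes.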