The Sounding Gesture: An Overview
Sound control by gesture is a peculiar topic in Human-Computer Interaction: many different approaches are available, each focusing on a different perspective. Our point of view is interdisciplinary: taking into account technical considerations from control theory and sound processing, we explore the world of expressiveness, which is closer to psychological theories. Starting from a state of the art that outlines two main approaches to the problem of "making sound with gestures", we delve into psychological theories of expressiveness, describing in particular possible applications of Gestalt theory to intermodality and mixed-reality environments. HCI design can indeed benefit from this kind of approach because quantitative methods can be applied to measure expressiveness. Interfaces can be used to convey expressiveness, an additional layer of information that can help in interacting with the machine; as Gestalt theory states, this kind of information can be coded as spatio-temporal schemes.
Resolving Wave Digital Filters with Multiple/Multiport Nonlinearities
We present a novel framework for developing Wave Digital Filter (WDF) models from reference circuits with multiple/multiport nonlinearities. Collecting all nonlinearities into a vector at the root of a WDF tree bypasses the traditional WDF limitation to a single nonlinearity. The resulting system has a complicated scattering relationship between the nonlinearity ports and the ports of the rest of the (linear) circuit, which can be solved by a Modified-Nodal-Analysis-derived method. For computability reasons, the scattering and the vector nonlinearity must be solved jointly; we suggest a derivative of the K-method. This novel framework significantly expands the class of appropriate WDF reference circuits. A case study on a clipping stage from the Big Muff Pi distortion pedal involves both a transistor and a diode pair. Since it is intractable with standard WDF methods, its successful simulation demonstrates the usefulness of the novel framework.
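As a rough illustration of the joint solve described in this abstract (and not the paper's actual formulation), the following Python sketch treats two antiparallel-diode-pair ports facing a linear subtree: the waves a = v + Ri and b = v - Ri at the nonlinear ports are constrained by an assumed scattering relation a = Cb + d from the linear side, and the resulting system is solved with a damped Newton iteration. The matrix C, vector d, and port resistances R are placeholder values, not quantities derived from the Big Muff Pi circuit.

import numpy as np

# Placeholder quantities standing in for the MNA-derived scattering step: a 2x2
# scattering matrix C, a source term d, and port resistances R for two nonlinear
# ports facing a linear WDF subtree. None of these come from the Big Muff Pi.
C = np.array([[0.2, -0.1],
              [-0.1, 0.3]])
d = np.array([0.5, -0.2])            # source / incident-wave contribution (volts)
R = np.diag([1.0e3, 2.2e3])          # port resistances (ohms)

Is, Vt = 1e-12, 25.85e-3             # diode saturation current and thermal voltage

def f(v):
    # Vector nonlinearity: an antiparallel diode pair at each port, i = 2*Is*sinh(v/Vt).
    return 2.0 * Is * np.sinh(v / Vt)

def f_jac(v):
    # Diagonal Jacobian of the decoupled diode-pair currents.
    return np.diag(2.0 * Is / Vt * np.cosh(v / Vt))

def solve_root(tol=1e-9, max_iter=200):
    # Damped Newton solve of the joint system: wave definitions a = v + R i and
    # b = v - R i at the nonlinear ports, with the linear side imposing a = C b + d.
    v = np.zeros(2)
    for _ in range(max_iter):
        i = f(v)
        F = (v + R @ i) - C @ (v - R @ i) - d        # residual of the joint system
        if np.linalg.norm(F) < tol:
            break
        Jf = f_jac(v)
        J = (np.eye(2) + R @ Jf) - C @ (np.eye(2) - R @ Jf)
        step = np.linalg.solve(J, F)
        v -= np.clip(step, -0.1, 0.1)                # step limiting keeps the sinh terms well-behaved
    return v

print("port voltages:", solve_root())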
A Piano Model Including Longitudinal String Vibrations
In this paper a mixed-paradigm piano model is presented. The major development is the ability to model longitudinal string vibrations. Longitudinal string motion is the reason for the metallic sound of low piano notes; therefore, modeling it greatly improves the perceptual quality of synthesized piano sound. In this novel approach the transversal displacement of the string is computed by a finite-difference string model, and the longitudinal motion is calculated by a set of second-order resonators which are nonlinearly excited by the transversal vibration. The soundboard is modeled by a multi-rate filter based on measurements of real pianos. The piano model is able to produce high-quality piano sounds in real time with about 5–10-note polyphony on an average personal computer.
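A toy Python sketch of the mixed paradigm (not the authors' model): an explicit finite-difference scheme computes the transversal string motion, and a small bank of second-order resonators stands in for the longitudinal modes, driven nonlinearly by the transversal shape. The quadratic-slope excitation, the resonator frequencies, and all gains are assumptions made purely for illustration.

import numpy as np

fs = 44100.0                          # sample rate
N = 80                                # spatial grid points on the string
c = 2.0 * 261.63                      # wave speed for roughly C4 on a 1 m string
h, k = 1.0 / (N - 1), 1.0 / fs
lam = c * k / h                       # Courant number; must stay <= 1 for stability
assert lam <= 1.0

y = np.zeros(N); y_prev = np.zeros(N)
y[N // 10] = 1e-3                     # crude pluck-like initial displacement

# Stand-in "longitudinal modes": frequencies, a common damping, and resonator coefficients
long_freqs = np.array([700.0, 1400.0, 2100.0])
r = np.exp(-3.0 / fs)
a1 = 2.0 * r * np.cos(2.0 * np.pi * long_freqs / fs)
a2 = r * r
z1 = np.zeros_like(long_freqs); z2 = np.zeros_like(long_freqs)

out = np.zeros(2000)
for n in range(len(out)):
    # explicit finite-difference update of the ideal string with fixed ends
    y_next = np.zeros_like(y)
    y_next[1:-1] = (2 * y[1:-1] - y_prev[1:-1]
                    + lam**2 * (y[2:] - 2 * y[1:-1] + y[:-2]))
    y_prev, y = y, y_next

    # assumed nonlinear excitation: spatial average of the squared string slope
    slope = np.diff(y) / h
    drive = np.mean(slope ** 2)

    # second-order resonators standing in for the longitudinal modes
    z1, z2 = a1 * z1 - a2 * z2 + drive, z1

    out[n] = y[N // 2] + 1e-4 * np.sum(z1)   # transversal pickup plus a scaled longitudinal sum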
Nonlinear Allpass Ladder Filters in FAUST
Passive nonlinear filters provide a rich source of evolving spectra for sound synthesis. This paper describes a nonlinear allpass filter of arbitrary order based on the normalized ladder filter. It is expressed in FAUST recursively in only two statements. Toward the synthesis of cymbals and gongs, it was used to make nonlinear waveguide meshes and feedback-delay-network reverberators.
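Since the paper's two-statement FAUST definition is not reproduced here, the following Python sketch only illustrates the underlying idea: an order-N ladder in which each section is a 2x2 rotation (the normalized form), with the rotation angle bent by the signal passing through it. Because every section remains a pure rotation, the structure stays passive however the angles move; the linear angle-modulation law and the coefficient values below are assumptions.

import numpy as np

class NonlinearNormalizedLadder:
    # Each section is a plane rotation acting on the top-rail signal and the delayed
    # bottom-rail signal; the rotation angle is bent by the signal itself (assumed law).
    def __init__(self, angles, depth=0.3):
        self.angles = np.asarray(angles, dtype=float)  # nominal rotation angles (radians)
        self.depth = depth                             # strength of the signal-dependent bending
        self.g = np.zeros(len(angles))                 # bottom-rail delay states

    def process_sample(self, x):
        n = len(self.angles)
        f = x                                          # top-rail signal entering the outermost section
        next_g = np.empty(n)
        y = 0.0
        for j in range(n - 1, -1, -1):                 # outermost section first
            theta = self.angles[j] + self.depth * f    # signal-dependent angle
            c, s = np.cos(theta), np.sin(theta)
            g_in = self.g[j]                           # delayed bottom-rail sample
            f, g_out = c * f - s * g_in, s * f + c * g_in
            if j == n - 1:
                y = g_out                              # bottom-rail output of the outermost section
            else:
                next_g[j + 1] = g_out                  # becomes next sample's delayed state
        next_g[0] = f                                  # innermost output folds back onto the bottom rail
        self.g = next_g
        return y

# quick usage: an impulse through a 4th-order section
ladder = NonlinearNormalizedLadder(angles=[0.9, 0.5, -0.3, 0.2], depth=0.3)
x = np.zeros(64); x[0] = 1.0
response = np.array([ladder.process_sample(v) for v in x])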
Modal Representation of the Resonant Body within a Finite Difference Framework for Simulation of String Instruments
This paper investigates numerical simulation of a string coupled transversely to a resonant body. Starting from a complete finite difference formulation, a second model is derived in which the body is represented in modal form. The main advantage of this hybrid form is that the body model is scalable, i.e. the computational complexity can be adjusted to the available processing power. Numerical results are calculated and discussed for simplified models in the form of string-string coupling and string-plate coupling.
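The following Python fragment is a simplified illustration of the scalability argument, not the paper's scheme: the body is a bank of M second-order modal resonators driven by the force at the string's bridge end, and M is the single knob that trades sound quality against computational cost. The modal frequencies, Q factors, gains, and the explicit one-way coupling are assumptions chosen for brevity.

import numpy as np

fs, dur = 44100.0, 0.2
N, c = 60, 2.0 * 220.0                # string grid points and wave speed (roughly A3 on 1 m)
h, k = 1.0 / (N - 1), 1.0 / fs
lam = c * k / h
assert lam <= 1.0

M = 24                                # number of body modes: the quality-versus-cost dial
mode_f = 80.0 * np.arange(1, M + 1) ** 1.1          # made-up modal frequencies (Hz)
mode_q = np.full(M, 200.0)                          # made-up modal Q factors
mode_gain = 1.0 / (1.0 + np.arange(M))              # made-up coupling gains

w0 = 2.0 * np.pi * mode_f
a1 = 2.0 * np.exp(-w0 * k / (2.0 * mode_q)) * np.cos(w0 * k)
a2 = np.exp(-w0 * k / mode_q)
q1 = np.zeros(M); q2 = np.zeros(M)    # two past outputs per modal resonator

y = np.zeros(N); y_prev = np.zeros(N)
y[5] = 1e-3                           # crude excitation near one end of the string

out = np.zeros(int(fs * dur))
for n in range(len(out)):
    y_next = np.zeros_like(y)
    y_next[1:-1] = (2 * y[1:-1] - y_prev[1:-1]
                    + lam**2 * (y[2:] - 2 * y[1:-1] + y[:-2]))
    y_prev, y = y, y_next
    bridge_force = (y[-2] - y[-1]) / h               # string slope at the bridge end as a force proxy
    q1, q2 = a1 * q1 - a2 * q2 + mode_gain * bridge_force, q1
    out[n] = np.sum(q1)                              # body response taken as the output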
Objective Evaluations of Synthesised Environmental Sounds
A range of different methods exist for comparing or measuring the similarity between environmental sound effects. These methods can be used as objective evaluation techniques, to evaluate the effectiveness of a sound synthesis method by assessing the similarity between synthesised sounds and recorded samples. We evaluate a number of such objective evaluation metrics by using each distance metric as a fitness function within a resynthesis algorithm. A recorded sample is used as a target sound, and the resynthesis is intended to produce a set of synthesis parameters that will synthesise a sound as close to the recorded sample as possible, within the restrictions of the synthesis model. The recorded samples are excerpts of selections from a sound effects library, and the results are evaluated through a subjective listening test. Results show that one of the objective functions performs significantly worse than several others. Only one method had a significant and strong correlation between the perceptual distance reported by users and the objective distance. A recommendation of an objective evaluation function for measuring similarity between synthesised environmental sounds is made.
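To make the evaluation setup concrete, here is a small Python sketch of the general idea; the synthesis model and the distance metric are stand-ins chosen here, not the ones compared in the paper. A distance function acts as the fitness function of a simple (1+1) evolutionary resynthesis loop that tries to match a target sound within the limits of a toy filtered-noise synthesiser.

import numpy as np

rng = np.random.default_rng(0)
fs = 16000
_noise = rng.standard_normal(16384)   # a fixed noise frame shared by all candidates

def synth(params):
    # Toy synthesis model: the fixed noise frame shaped by one Gaussian resonant band.
    centre, bw = params
    spec = np.fft.rfft(_noise)
    freqs = np.fft.rfftfreq(len(_noise), 1.0 / fs)
    spec = spec * np.exp(-0.5 * ((freqs - centre) / bw) ** 2)
    return np.fft.irfft(spec, len(_noise))

def log_spectral_distance(a, b):
    # One possible objective metric: RMS difference of log magnitude spectra.
    A = np.log(np.abs(np.fft.rfft(a)) + 1e-9)
    B = np.log(np.abs(np.fft.rfft(b)) + 1e-9)
    return np.sqrt(np.mean((A - B) ** 2))

target = synth((2000.0, 300.0))       # stand-in for a recorded target sample

# simple (1+1) evolutionary search driven by the distance metric as fitness
best = np.array([500.0, 100.0])
best_fit = log_spectral_distance(synth(best), target)
for _ in range(200):
    cand = np.clip(best + rng.normal(scale=[100.0, 20.0]),
                   [50.0, 10.0], [7000.0, 2000.0])
    fit = log_spectral_distance(synth(cand), target)
    if fit < best_fit:
        best, best_fit = cand, fit
print("best parameters:", best, "distance:", best_fit)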
Mechanical Sound Synthesis: And the New Application of Force-Feedback Teleoperation of Acoustic Musical Instruments
In Mechanical Sound Synthesis, real mechanical devices are employed to create sound. Users can interact directly with the variables of the sound synthesis, making interaction more intuitive for both users and audience. We focus on real-time feedback control for Mechanical Sound Synthesis and provide a classification scheme using the reality-virtuality continuum. We identify an apparently novel paradigm, which we describe as augmented virtuality for real-time feedback control. Exploring this paradigm, we present preliminary results from a system that enables a user to teleoperate acoustic percussion instruments with the aid of force feedback. Mechanical looping of the teleoperation trajectories and their transformations enables the synthesis of lifelike sounds with superhuman characteristics that are nevertheless produced by mechanical devices.
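The "mechanical looping of teleoperation trajectories and their transformations" can be pictured with a short Python sketch; the gesture signal, the speed-up factor, and the function names below are purely illustrative and do not come from the authors' system.

import numpy as np

fs_ctrl = 1000                                     # control rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs_ctrl)
# stand-in for a captured drum-stroke gesture: position versus time, in metres
captured = 0.02 * np.sin(2.0 * np.pi * 3.0 * t) * np.hanning(len(t))

def loop_transformed(traj, speedup=4.0, repeats=8):
    # Resample the gesture 'speedup' times faster than it was performed, then tile it.
    n_fast = int(len(traj) / speedup)
    fast = np.interp(np.linspace(0, len(traj) - 1, n_fast),
                     np.arange(len(traj)), traj)
    return np.tile(fast, repeats)

command = loop_transformed(captured)               # position set-points streamed to the actuator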
Introducing Audio D-TOUCH: A tangible user interface for music composition and performance
"Audio d-touch" uses a consumer-grade web camera and customizable block objects to provide an interactive tangible interface for a variety of time based musical tasks such as sequencing, drum editing and collaborative composition. Three instruments are presented here. Future applications of the interface are also considered.
Assessing Applause Density Perception Using Synthesized Layered Applause Signals
Applause signals are the sound of many people gathered in one place clapping their hands, and they are a prominent part of live music recordings. Usually, applause is recorded together with or alongside the live performance and serves to evoke, in the listener, the feeling of participating in a real event. Applause signals can be very different in character, depending on the audience size, location, event type, and many other factors. To characterize different types of applause signals, the attribute of ‘density’ appears to be suitable. This paper reports first investigations into whether density is an adequate perceptual attribute to describe different types of applause. We describe the design of a listening test assessing density and the synthesis of suitable, strictly controlled stimuli for the test. Finally, we provide results, on both strictly controlled and naturally recorded stimuli, that confirm the suitability of the attribute density for describing important aspects of the perception of different applause signal characteristics.
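For intuition only (this is not the stimulus-generation code used in the paper), a layered applause signal with a controllable density can be sketched in a few lines of Python: 'density' is taken here as the mean number of claps per second summed over all virtual clappers, each clap is a short decaying noise burst, and onsets follow a Poisson process.

import numpy as np

rng = np.random.default_rng(1)

def synth_applause(density, duration=3.0, fs=44100):
    # density: mean number of claps per second, summed over all virtual clappers.
    n = int(duration * fs)
    out = np.zeros(n)
    n_claps = rng.poisson(density * duration)
    clap_len = int(0.01 * fs)                               # roughly 10 ms per clap
    envelope = np.exp(-np.arange(clap_len) / (0.002 * fs))  # fast exponential decay
    for onset in rng.integers(0, n - clap_len, size=n_claps):
        out[onset:onset + clap_len] += envelope * rng.standard_normal(clap_len)
    return out / (np.max(np.abs(out)) + 1e-9)

sparse = synth_applause(density=20)     # a small, sparse audience
dense = synth_applause(density=400)     # a large, dense crowd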
Interacting With Digital Audio Effects Through a Haptic Knob With Programmable Resistance
Live music performances and music production often involve the manipulation of several parameters during sound generation, processing, and mixing. In hardware layouts, those parameters are usually controlled using knobs, sliders and buttons. When these layouts are virtualized, the use of physical (e.g. MIDI) controllers can make interaction easier and reduce the cognitive load associated with sound manipulation. The addition of haptic feedback can further improve such interaction by facilitating the detection of the nature (continuous / discrete) and value of a parameter. To this end, we have realized an endless-knob controller prototype with programmable resistance to rotation, able to render various haptic effects. Ten subjects assessed the effectiveness of the provided haptic feedback in a target-matching task where either visual-only or visual-haptic feedback was provided; the experiment showed significantly lower errors in the presence of haptic feedback. Finally, the knob was configured as a multi-parametric controller for a real-time audio effect software written in Python, simulating the voltage-controlled filter of the EMS VCS3. The integration of the sound algorithm and the haptic knob is discussed, together with various haptic feedback effects in response to control actions.
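One possible way to combine a rotation-to-parameter mapping with a programmable braking profile is sketched below in Python; the function names, detent positions, and resistance values are illustrative assumptions and are not taken from the prototype or its firmware.

import numpy as np

CUTOFF_MIN, CUTOFF_MAX = 20.0, 18000.0           # Hz
DETENTS = np.array([0.25, 0.5, 0.75])            # normalised positions given a tactile notch

def update_position(norm_pos, delta_angle, gain=0.02):
    # Accumulate knob rotation (radians) into a normalised parameter position.
    return float(np.clip(norm_pos + gain * delta_angle, 0.0, 1.0))

def cutoff_hz(norm_pos):
    # Exponential mapping from normalised position to filter cutoff frequency.
    return CUTOFF_MIN * (CUTOFF_MAX / CUTOFF_MIN) ** norm_pos

def resistance(norm_pos, base=0.1, end_stop=1.0, notch=0.4, notch_width=0.02):
    # Braking level in [0, 1]: stiff end stops plus small notches at the detents.
    r = base
    r += end_stop * (norm_pos < 0.02 or norm_pos > 0.98)
    r += notch * float(np.any(np.abs(DETENTS - norm_pos) < notch_width))
    return float(min(r, 1.0))

pos = 0.0
for delta in [0.5, 0.5, 0.5]:                    # three small clockwise turns
    pos = update_position(pos, delta)
    print(f"cutoff {cutoff_hz(pos):7.1f} Hz, resistance {resistance(pos):.2f}")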