A pickup model for the Clavinet
In this paper, recent findings on magnetic transducers are applied to the analysis and modeling of Clavinet pickups. The Clavinet is a stringed instrument with similarities to the electric guitar: it has magnetic single-coil pickups that transduce the string vibration into an electrical quantity. Data gathered during physical inspection and electrical measurements are used to build a complete model that accounts for nonlinearities in the magnetic flux. For evaluation, the pickup model is inserted into a Digital Waveguide (DWG) model of the Clavinet string.
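As a rough illustration of how such a nonlinear pickup stage can be coupled to a string model, the Python sketch below passes a synthetic string displacement through a saturating flux curve and differentiates the flux to obtain the output voltage. The tanh flux shape and the parameters alpha and beta are illustrative assumptions, not the measured Clavinet pickup characteristic described in the paper.

```python
import numpy as np

def pickup_voltage(displacement, fs, alpha=1.0, beta=5.0):
    """Illustrative nonlinear magnetic pickup: the flux saturates with
    string displacement (hypothetical tanh law) and the output voltage
    is its time derivative, following Faraday's law."""
    flux = alpha * np.tanh(beta * displacement)   # saturating flux (assumed shape)
    return np.gradient(flux) * fs                 # d(flux)/dt in volts (arbitrary scale)

# Example: a decaying 220 Hz string observed at the pickup position,
# standing in for the output of the DWG string model.
fs = 44100
t = np.arange(fs) / fs
y = 1e-3 * np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 220.0 * t)
v = pickup_voltage(y, fs)
```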
Introducing Deep Machine Learning for Parameter Estimation in Physical Modelling
One of the most challenging tasks in physically-informed sound synthesis is the estimation of model parameters to produce a desired timbre. Automatic parameter estimation procedures have been developed in the past for specific parameters or application scenarios, but, up to now, no approach has proven applicable to a wide variety of use cases. This paper provides a general solution to the parameter estimation problem, based on a supervised convolutional machine learning paradigm. The described approach can be classified as “end-to-end” and thus requires no specific knowledge of the model itself. Furthermore, parameters are learned from data generated by the model, requiring no effort in the preparation and labeling of the training dataset. To provide a qualitative and quantitative analysis of the performance, the method is applied to a patented digital waveguide pipe organ model, yielding very promising results.
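A minimal sketch of the end-to-end idea, under stated assumptions: a small 1-D CNN is trained to regress normalized model parameters directly from audio rendered by the synthesis model itself, so the training data require no manual labeling. The architecture, the synth callable and the MSE loss are illustrative placeholders, not the network or the patented pipe organ model used in the paper.

```python
import torch
import torch.nn as nn

class ParamEstimator(nn.Module):
    """Small 1-D CNN mapping a raw audio excerpt to a vector of
    synthesis-model parameters (regression head)."""
    def __init__(self, n_params):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=32, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=16, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, n_params)

    def forward(self, x):                    # x: (batch, 1, n_samples)
        return self.head(self.features(x).squeeze(-1))

def training_step(model, synth, optimizer, batch_size=16):
    """One supervised step: draw random normalized parameters, render audio
    with the synthesis model ('synth' is a placeholder callable returning a
    (batch, 1, n_samples) tensor), then regress the parameters back."""
    params = torch.rand(batch_size, model.head.out_features)
    audio = synth(params)
    loss = nn.functional.mse_loss(model(audio), params)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```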
Equalizing Loudspeakers in Reverberant Environments Using Deep Convolutive Dereverberation
Loudspeaker equalization is an established topic in the literature, and many techniques are currently available to address most practical use cases. However, most of these rely on accurate measurements of the loudspeaker in an anechoic environment, which is not always feasible. This is the case, e.g., with custom digital organs, whose loudspeakers are built into a large, geometrically complex piece of furniture that may be too heavy and bulky to transport to a measurement room, or may require an exceptionally large one, making traditional impulse response measurements impractical for most users. In this work we propose a method to find the inverse of the sound emission system in a reverberant environment, based on a Deep Learning dereverberation algorithm. The method is agnostic of the room characteristics and can thus be conducted in an automated fashion in any environment. A real use case is discussed and results are provided, showing the effectiveness of the approach in designing filters that closely match the magnitude response of the ideal inverting filters.
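The sketch below shows only the final inversion step, under the assumption that a deep dereverberation stage has already produced an estimate h_est of the loudspeaker response: the magnitude is inverted with a hypothetical regularization term reg to avoid boosting deep notches, and a linear-phase FIR equalizer is built from the result. This is an illustrative stand-in, not the filter design procedure of the paper.

```python
import numpy as np

def inverse_eq_filter(h_est, n_fft=4096, reg=1e-3):
    """Linear-phase equalization FIR from an estimated loudspeaker impulse
    response h_est (e.g. obtained after dereverberation), via regularized
    magnitude inversion."""
    H = np.fft.rfft(h_est, n_fft)
    mag = np.abs(H)
    inv_mag = mag / (mag ** 2 + reg)        # regularized 1/|H|
    g = np.fft.irfft(inv_mag, n_fft)        # zero-phase impulse response
    g = np.roll(g, n_fft // 2)              # center it -> causal, linear phase
    return g * np.hanning(n_fft)            # taper to limit truncation ripple
```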
Simplifying Antiderivative Antialiasing with Lookup Table Integration
Antiderivative Antialiasing (ADAA) has become a pivotal method for reducing aliasing when dealing with nonlinear functions at audio rate. However, its implementation requires analytical computation of the antiderivative of the nonlinear function, which in practical cases can be challenging without a symbolic solver. Moreover, when the nonlinear function is given by measurements, it must first be approximated to obtain a symbolic description. In this paper, we propose a simple approach to ADAA for practical applications that employs numerical integration of lookup tables (LUTs) to approximate the antiderivative. This method eliminates the need for closed-form solutions, streamlining the ADAA implementation process in industrial applications. We analyze the trade-offs of this approach, highlighting its computational efficiency and ease of implementation while discussing the potential impact of numerical integration errors on aliasing performance. Experiments are conducted with static nonlinearities (tanh, a simple wavefolder and the Buchla 259 wavefolding circuit) and a stateful nonlinear system (the diode clipper).
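As a minimal illustration of the idea, the sketch below tabulates a nonlinearity and its antiderivative by cumulative trapezoidal integration of the lookup table, then applies the standard first-order ADAA difference quotient with linear interpolation into the tables. Table range, table size and the eps threshold are illustrative choices; the paper's exact integration scheme and its extension to stateful systems may differ.

```python
import numpy as np

def make_lut_antiderivative(f, x_min=-4.0, x_max=4.0, n=4096):
    """Tabulate f and its antiderivative via cumulative trapezoidal
    integration (no closed-form antiderivative needed)."""
    x = np.linspace(x_min, x_max, n)
    y = f(x)
    F = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))
    return x, y, F

def adaa1_lut(signal, x, y, F, eps=1e-6):
    """First-order ADAA using lookup tables: the antiderivative difference
    quotient replaces the direct nonlinearity, falling back to f evaluated
    at the midpoint when successive inputs are (nearly) equal."""
    out = np.empty_like(signal, dtype=float)
    x_prev = signal[0]
    for i, x_cur in enumerate(signal):
        dx = x_cur - x_prev
        if abs(dx) > eps:
            out[i] = (np.interp(x_cur, x, F) - np.interp(x_prev, x, F)) / dx
        else:
            out[i] = np.interp(0.5 * (x_cur + x_prev), x, y)
        x_prev = x_cur
    return out

# Example with tanh, whose analytical antiderivative would be log(cosh(x))
xs, ys, Fs = make_lut_antiderivative(np.tanh)
```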
Antialiasing in BBD Chips Using BLEP
Several methods exist in the literature to accurately simulate Bucket Brigade Device (BBD) chips, which are widely used in analog delay-based audio effects for their characteristic lo-fi sound, affected by noise, nonlinearities and aliasing. The latter is a desired quality, being typical of those chips. However, when simulating BBDs in a discrete-time environment, additional aliasing components occur that need to be suppressed. In this work, we propose a novel method that applies the Bandlimited Step (BLEP) technique, effectively minimizing the aliasing artifacts introduced by the simulation. The paper provides insights on the design of a BBD simulation using interpolation at the input for clock-rate conversion and, most importantly, shows how BLEP can be effective in reducing unwanted aliasing artifacts. Interpolation is shown to be of minor importance in the reduction of spurious components.
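For context, a band-limited step can be approximated with the common two-sample polyBLEP kernel: the sketch below adds the scaled step residual to the two output samples adjacent to each fractional-time clock transition of a naively sampled zero-order-hold output. This is a generic polyBLEP illustration under assumed step times and heights, not the specific BLEP formulation used in the paper.

```python
import numpy as np

def blep_residual(tau):
    """Residual of a band-limited unit step minus the ideal step, for a
    step at tau = 0 (tau measured in output samples), using the two-sample
    polyBLEP kernel."""
    if -1.0 <= tau < 0.0:            # sample just before the step
        return 0.5 * (tau + 1.0) ** 2
    if 0.0 <= tau < 1.0:             # sample just after the step
        return -0.5 * (1.0 - tau) ** 2
    return 0.0

def apply_blep(y, step_times, step_heights):
    """Add scaled BLEP residuals around each fractional step time of a
    naively sampled zero-order-hold signal y (assumed step data)."""
    out = np.asarray(y, dtype=float).copy()
    for t_k, h_k in zip(step_times, step_heights):
        n = int(np.ceil(t_k))
        for m in (n - 1, n):
            if 0 <= m < len(out):
                out[m] += h_k * blep_residual(m - t_k)
    return out
```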
Spatializing Screen Readers: Extending VoiceOver via Head-Tracked Binaural Synthesis for User Interface Accessibility
Traditional screen-based graphical user interfaces (GUIs) pose significant accessibility challenges for visually impaired users. This paper demonstrates how existing GUI elements can be translated into an interactive auditory domain using high-order Ambisonics and inertial sensor-based head tracking, culminating in real-time binaural rendering over headphones. The proposed system is designed to spatialize the auditory output of VoiceOver, the built-in macOS screen reader, aiming to foster clearer mental mapping and enhanced navigability. A between-groups experiment was conducted to compare standard VoiceOver with the proposed spatialized version. Non-visually-impaired participants (n = 32), with no visual access to the test interface, completed a list-based exploration and then attempted to reconstruct the UI solely from auditory cues. Experimental results indicate that the head-tracked group achieved slightly higher accuracy in reconstructing the interface, while user experience assessments showed no significant differences in self-reported workload or usability. These findings suggest that potential benefits may come from integrating head-tracked binaural audio into mainstream screen-reader workflows, but future investigations involving blind and low-vision users are needed. Although the experimental testbed uses a generic desktop app, our ultimate goal is to tackle the complex visual layouts of music-production software, where a head-tracked audio approach could benefit visually impaired producers and musicians navigating plug-in controls.
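For readers unfamiliar with the audio pipeline, the sketch below shows the basic building blocks in a deliberately reduced form: a mono screen-reader voice is encoded into first-order Ambisonics, counter-rotated by the head yaw from the inertial sensor, and decoded to stereo with simple virtual cardioids. The actual system uses high-order Ambisonics and HRTF-based binaural rendering; the first-order encoding, yaw-only rotation and cardioid decode here are simplifying assumptions for illustration.

```python
import numpy as np

def encode_foa(mono, azimuth, elevation=0.0):
    """First-order Ambisonics (ACN/SN3D) encoding of a mono source at the
    given azimuth/elevation (radians, azimuth positive to the left)."""
    w = mono.copy()
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    x = mono * np.cos(azimuth) * np.cos(elevation)
    return np.stack([w, y, z, x])            # ACN channel order: W, Y, Z, X

def rotate_yaw(foa, yaw):
    """Counter-rotate the sound field by the head yaw so that sources stay
    anchored to the screen while the listener turns."""
    w, y, z, x = foa
    c, s = np.cos(yaw), np.sin(yaw)
    return np.stack([w, c * y - s * x, z, s * y + c * x])

def decode_stereo(foa):
    """Crude stereo decode with virtual cardioids at +/-90 degrees, standing
    in for the HRTF-based binaural stage of the actual system."""
    w, y, z, x = foa
    return np.stack([0.5 * (w + y), 0.5 * (w - y)])
```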