Audio Processor Parameters: Estimating Distributions Instead of Deterministic Values
Audio effects and sound synthesizers are widely used processors in popular music. Their parameters control the quality of the output sound, and multiple combinations of parameters can lead to the same sound. While recent approaches have been proposed to estimate these parameters given only the output sound, they are deterministic, i.e., they estimate a single solution among the many possible parameter configurations. In this work, we propose to model the parameters as probability distributions instead of deterministic values. To learn the distributions, we optimize two objectives: (1) we minimize the reconstruction error between the ground-truth output sound and the one generated using the estimated parameters, as is usually done, but also (2) we maximize the parameter diversity, using entropy. We evaluate our approach through two numerical audio experiments, whose results show that it effectively outputs multiple combinations of parameters matching one sound.
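The two-term objective described above can be illustrated with a toy sketch over a discrete set of candidate parameter configurations (this is our own minimal illustration, not the paper's implementation; the function names and the entropy weight are hypothetical):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (nats) of a discrete distribution."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def distribution_loss(probs, recon_errors, ent_weight=0.1):
    """Two-term objective: expected reconstruction error minus an
    entropy bonus that rewards keeping several equally good
    parameter configurations plausible."""
    expected_error = np.sum(probs * recon_errors)
    return expected_error - ent_weight * entropy(probs)

# Three candidate configurations; the first two reconstruct the
# target sound equally well, the third does not.
errors = np.array([0.05, 0.05, 0.9])
uniform_over_good = np.array([0.5, 0.5, 0.0])
peaked = np.array([1.0, 0.0, 0.0])

# The entropy term prefers spreading mass over equivalent solutions.
assert distribution_loss(uniform_over_good, errors) < distribution_loss(peaked, errors)
```

With equal reconstruction error, the spread distribution wins because of its higher entropy, which is exactly the diversity the paper optimizes for.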
A Parametric Equalizer with Interactive Poles and Zeros Control for Digital Signal Processing Education
This article presents ZePolA, a digital audio equalizer designed as an educational resource for understanding digital filter design. Unlike conventional equalization plug-ins, which define the frequency response first and then derive the filter coefficients, this software adopts an inverse approach: users directly manipulate the placement of poles and zeros on the complex plane, with the corresponding frequency response visualized in real time. This methodology provides an intuitive link between theoretical filter concepts and their practical application. The plug-in features three main panels: a filter parameter panel, a frequency response panel, and a filter design panel. It allows users to configure a cascade of first- or second-order filter elements, each parameterized by the location of its poles or zeros. The GUI supports interaction through drag-and-drop gestures, enabling immediate visual and auditory feedback. This hands-on approach is intended to enhance learning by bridging the gap between theoretical knowledge and practical application. To assess the educational value and usability of the plug-in, a preliminary evaluation was conducted with focus groups of students and lecturers. Future developments will include support for additional filter types and increased architectural flexibility. Moreover, a systematic validation study involving students and educators is proposed to quantitatively evaluate the plug-in’s impact on learning outcomes. This work contributes to the field of digital signal processing education by offering an innovative tool that merges the hands-on approach of music production with a deeper theoretical understanding of digital filters, fostering an interactive and engaging educational experience.
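The pole/zero-to-response mapping at the heart of such a tool can be sketched directly from the transfer function definition (a minimal illustration of the underlying math, not ZePolA's code; function names are ours):

```python
import numpy as np

def freq_response(zeros, poles, w):
    """Evaluate H(e^{jw}) = prod_k (e^{jw} - z_k) / prod_k (e^{jw} - p_k)
    for a filter specified by its zeros and poles on the complex plane."""
    e = np.exp(1j * w)
    num = np.prod([e - z for z in zeros], axis=0)
    den = np.prod([e - p for p in poles], axis=0)
    return num / den

# A zero at z = 1 notches out DC, while a pole at z = 0.9 (near DC)
# shapes the rise away from the notch: a highpass-like response.
w = np.linspace(1e-3, np.pi, 512)
H = freq_response([1.0], [0.9], w)
assert abs(H[0]) < abs(H[-1])  # magnitude grows away from the DC notch
```

Dragging a pole or zero in the GUI amounts to re-evaluating this product with the new complex locations, which is why the response can be redrawn in real time.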
Zero-Phase Sound via Giant FFT
Given the speed of FFT computation on current hardware, there are new possibilities for examining transformations of very long sounds. A zero-phase version of any audio signal can be obtained by zeroing the phase angle of its complex spectrum and taking the inverse FFT. This paper recommends additional processing steps, including zero-padding, transient suppression at the signal’s start and end, and gain compensation, to enhance the resulting sound quality. The result is a sound with the same spectral characteristics as the original but with different temporal events; repeating rhythm patterns are retained, however. Zero-phase sounds are palindromic in the sense that they are symmetric in time. A comparison of the zero-phase conversion with the autocorrelation function helps to explain its properties, such as why the rhythm of the original sound is emphasized; it is also argued that the zero-phase signal has the same autocorrelation function as the original sound. One exciting variation of the method is to apply it separately to the real and imaginary parts of the spectrum to produce a stereo effect. A frame-based technique enables the use of the zero-phase conversion in real-time audio processing. The zero-phase conversion is another member of the giant-FFT toolset, allowing the modification of sampled sounds such as drum loops or entire songs.
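The core conversion is compact enough to sketch (zero-padding included; the paper's transient-suppression and gain-compensation refinements are omitted from this minimal version):

```python
import numpy as np

def zero_phase(x, pad=4):
    """Zero-phase conversion: keep the magnitude spectrum, discard the
    phase, and invert. Zero-padding (as the paper recommends) leaves
    room for the signal to spread in time."""
    n = pad * len(x)
    mag = np.abs(np.fft.rfft(x, n))   # phase angle set to zero
    return np.fft.irfft(mag, n)

rng = np.random.default_rng(0)
y = zero_phase(rng.standard_normal(1024))
# Palindromic: the output is symmetric in time about sample 0.
assert np.allclose(y[1:], y[1:][::-1], atol=1e-8)
```

Because the inverse FFT of a real, non-negative spectrum is even-symmetric, the palindromic property claimed in the abstract falls out directly.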
Partiels – Exploring, Analyzing and Understanding Sounds
This article presents Partiels, an open-source application developed at IRCAM to analyze digital audio files and explore sound characteristics. The application uses Vamp plug-ins to extract various information on different aspects of the sound, such as spectrum, partials, pitch, tempo, text, and chords. Partiels is the successor to AudioSculpt, offering a modern, flexible interface for visualizing, editing, and exporting analysis results, addressing a wide range of issues from musicological practice to sound creation and signal processing research. The article describes Partiels’ key features, including analysis organization, audio file management, results visualization and editing, as well as data export and sharing options, and its interoperability with other software such as Max and Pure Data. In addition, it highlights the numerous analysis plug-ins developed at IRCAM, based in particular on machine learning models, as well as the IRCAM Vamp extension, which overcomes certain limitations of the original Vamp format.
Stable Limit Cycles as Tunable Signal Sources
This paper presents a method for synthesizing audio signals from nonlinear dynamical systems exhibiting stable limit cycles, with control over frequency and amplitude independent of changes to the system’s internal parameters. Using the van der Pol oscillator and the Brusselator as case studies, it is demonstrated how parameters are decoupled from frequency and amplitude by rescaling the angular frequency and normalizing amplitude extrema. Practical implementation considerations are discussed, as are the limits and challenges of this approach. The method’s validity is evaluated experimentally and synthesis examples show the application of tunable nonlinear oscillators in sound design, including the generation of transients in FM synthesis by means of a van der Pol oscillator and a Supersaw oscillator bank based on the Brusselator.
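The frequency-decoupling idea can be sketched for the van der Pol case: rescaling time by the target angular frequency tunes the pitch without touching the nonlinearity parameter (a rough forward-Euler illustration under our own parameter choices, not the paper's implementation):

```python
import numpy as np

def van_der_pol(freq_hz, mu=0.5, sr=48000, dur=0.1):
    """Van der Pol oscillator x'' - mu*(1 - x^2)*x' + x = 0, integrated
    with forward Euler. Multiplying both derivatives by w = 2*pi*freq_hz
    rescales the angular frequency, so pitch is controlled independently
    of mu; the output is peak-normalized for amplitude control."""
    w = 2.0 * np.pi * freq_hz
    dt = 1.0 / sr
    x, v = 1.0, 0.0
    out = np.empty(int(sr * dur))
    for i in range(len(out)):
        dx = v
        dv = mu * (1.0 - x * x) * v - x
        x += w * dx * dt            # time rescaling sets the frequency
        v += w * dv * dt
        out[i] = x
    return out / np.max(np.abs(out))  # amplitude normalization

y = van_der_pol(220.0)
assert np.max(np.abs(y)) == 1.0      # normalized amplitude
```

For small `mu` the limit cycle's natural angular frequency is close to 1 rad/s, so the rescaled oscillator lands near the requested 220 Hz; larger `mu` distorts the waveform (and slightly the period) without changing the tuning mechanism.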
Lookup Table Based Audio Spectral Transformation
We present a unified visual interface for flexible spectral audio manipulation based on editable lookup tables (LUTs). In the proposed approach, the audio spectrum is visualized as a two-dimensional color map of frequency versus amplitude, serving as an editable lookup table for modifying the sound. This single tool can replicate common audio effects such as equalization, pitch shifting, and spectral compression, while also enabling novel sound transformations through creative combinations of adjustments. By consolidating these capabilities into one visual platform, the system has the potential to streamline audio-editing workflows and encourage creative experimentation. The approach also supports real-time processing, providing immediate auditory feedback in an interactive graphical environment. Overall, this LUT-based method offers an accessible yet powerful framework for designing and applying a broad range of spectral audio effects through intuitive visual manipulation.
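A one-dimensional sketch of the LUT idea: each spectral bin's magnitude is normalized, mapped through an editable table, and recombined with the original phase (our own toy reduction of the paper's 2-D frequency-versus-amplitude map; names are hypothetical):

```python
import numpy as np

def apply_spectral_lut(x, lut):
    """Map each rfft bin's magnitude (normalized to [0, 1]) through the
    lookup table `lut`, keeping the original phase. Editing the table
    reshapes the spectrum: identity leaves the sound untouched, a
    compressive curve acts like spectral compression, etc."""
    X = np.fft.rfft(x)
    mag, phase = np.abs(X), np.angle(X)
    peak = max(mag.max(), 1e-12)
    idx = np.minimum((mag / peak * (len(lut) - 1)).astype(int), len(lut) - 1)
    new_mag = lut[idx] * peak
    return np.fft.irfft(new_mag * np.exp(1j * phase), len(x))

# Sanity check: an identity LUT reproduces a bin-aligned sine exactly.
n, k = 1024, 10
x = np.sin(2 * np.pi * k * np.arange(n) / n)   # sine on an exact FFT bin
identity = np.linspace(0.0, 1.0, 256)
y = apply_spectral_lut(x, identity)
assert np.allclose(x, y, atol=1e-9)
```

Replacing `identity` with, say, `np.sqrt(identity)` lifts quiet bins relative to loud ones, a simple spectral-compression curve drawn entirely in the table.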
A Non-Uniform Subband Implementation of an Active Noise Control System for Snoring Reduction
Snoring noise can be extremely annoying and can negatively affect people’s social lives. To mitigate this problem, active noise control (ANC) systems can be adopted for snoring cancellation. Recently, adaptive subband systems have been developed to improve the convergence rate and reduce the computational complexity of ANC algorithms, and several structures with different approaches have been proposed. This paper proposes a non-uniform subband adaptive filtering (SAF) structure to improve a feedforward active noise control algorithm. The non-uniform band distribution allows for a higher frequency resolution at the lower frequencies, where the snoring noise is most concentrated. Several experiments have been carried out to evaluate the proposed system against a reference ANC system that uses a uniform subband distribution.
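The adaptive core that subband structures accelerate can be sketched as a fullband normalized-LMS canceller (a single-band toy, ignoring the secondary path and the paper's non-uniform subband decomposition; parameter values are ours):

```python
import numpy as np

def nlms_cancel(ref, primary, order=32, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt FIR weights so the filtered reference
    signal cancels the primary (noise) signal; returns the residual.
    Subband ANC runs this adaptation per band to speed convergence."""
    w = np.zeros(order)
    e = np.zeros(len(primary))
    for n in range(order, len(primary)):
        u = ref[n - order + 1:n + 1][::-1]   # reference tap vector
        y = w @ u                            # anti-noise estimate
        e[n] = primary[n] - y                # residual at the error mic
        w += mu * e[n] * u / (u @ u + eps)   # NLMS weight update
    return e

rng = np.random.default_rng(1)
ref = rng.standard_normal(8000)
path = np.array([0.6, -0.3, 0.1])            # toy unknown noise path
primary = np.convolve(ref, path)[:8000]
e = nlms_cancel(ref, primary)
# Residual power drops well below the primary noise power after convergence.
assert np.mean(e[-1000:] ** 2) < 0.01 * np.mean(primary[-1000:] ** 2)
```

With white reference input this fullband filter converges quickly; the paper's point is that for colored signals like snoring, splitting adaptation into (non-uniform) subbands restores fast convergence where the energy is concentrated.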
Compositional Application of a Chaotic Dynamical System for the Synthesis of Sounds
This paper reviews compositional applications developed in recent years that use a chaotic dynamical system in different sound synthesis processes. The use of chaotic dynamical systems in computer music has been widespread practice for some time. The experimentation presented in this work employs a specific chaotic system, Chua’s oscillator, within different sound synthesis methods. A family of new musical instruments has been developed that exploits the potential of this chaotic system to produce complex timbres and sounds. The instruments have been used for the creation of musical pieces and for live electronics performances.
DiffVox: A Differentiable Model for Capturing and Analysing Vocal Effects Distributions
This study introduces a novel and interpretable model, DiffVox, for matching vocal effects in music production. DiffVox, short for “Differentiable Vocal Fx”, integrates parametric equalisation, dynamic range control, delay, and reverb with efficient differentiable implementations to enable gradient-based optimisation for parameter estimation. Vocal presets are retrieved from two datasets, comprising 70 tracks from MedleyDB and 365 tracks from a private collection. Analysis of parameter correlations reveals strong relationships between effects and parameters, such as the highpass and low-shelf filters often working together to shape the low end, and the delay time correlating with the intensity of the delayed signals. Principal component analysis reveals connections to McAdams’ timbre dimensions, where the most crucial component modulates the perceived spaciousness while the secondary components influence spectral brightness. Statistical testing confirms the non-Gaussian nature of the parameter distribution, highlighting the complexity of the vocal effects space. These initial findings on the parameter distributions set the foundation for future research in vocal effects modelling and automatic mixing.
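Gradient-based parameter estimation against a differentiable effect can be illustrated on the smallest possible case, recovering a one-pole filter coefficient from its output (our own toy; finite differences stand in for the automatic differentiation a framework like PyTorch would provide, and all names are hypothetical):

```python
import numpy as np

def one_pole(x, a):
    """One-pole lowpass: y[n] = a*y[n-1] + (1-a)*x[n]."""
    y = np.empty_like(x)
    acc = 0.0
    for n in range(len(x)):
        acc = a * acc + (1.0 - a) * x[n]
        y[n] = acc
    return y

def fit_coefficient(x, target, a0=0.2, lr=0.3, steps=300, h=1e-5):
    """Gradient descent on the reconstruction error with respect to the
    filter coefficient; central finite differences approximate the
    gradient that autodiff would compute exactly."""
    a = a0
    loss = lambda a_: np.mean((one_pole(x, a_) - target) ** 2)
    for _ in range(steps):
        grad = (loss(a + h) - loss(a - h)) / (2.0 * h)
        a -= lr * grad
    return a

rng = np.random.default_rng(2)
x = rng.standard_normal(512)
target = one_pole(x, 0.8)        # the "preset" we want to recover
a_hat = fit_coefficient(x, target)
assert abs(a_hat - 0.8) < 0.05   # coefficient recovered from audio alone
```

DiffVox applies the same principle at scale: every effect in the chain is differentiable, so the whole preset is fitted by descending the reconstruction loss.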
Improving Lyrics-to-Audio Alignment Using Frame-wise Phoneme Labels with Masked Cross Entropy Loss
This paper addresses the task of lyrics-to-audio alignment, which involves synchronizing textual lyrics with corresponding music audio. Most publicly available datasets for this task provide annotations only at the line or word level. This poses a challenge for training lyrics-to-audio models due to the lack of frame-wise phoneme labels. However, we find that phoneme labels can be partially derived from word-level annotations: for single-phoneme words, all frames corresponding to the word can be labeled with the same phoneme; for multi-phoneme words, phoneme labels can be assigned at the first and last frames of the word. To leverage this partial information, we construct a mask for those frames and propose a masked frame-wise cross-entropy (CE) loss that considers only frames with known phoneme labels. As a baseline model, we adopt an autoencoder trained with a Connectionist Temporal Classification (CTC) loss and a reconstruction loss. We then enhance the training process by incorporating the proposed frame-wise masked CE loss. Experimental results show that incorporating the frame-wise masked CE loss improves alignment performance. Compared with other state-of-the-art models, our model achieves a comparable Mean Absolute Error (MAE) of 0.216 seconds and the best Median Absolute Error (MedAE) of 0.041 seconds on the Jamendo test dataset.
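The masked loss described above reduces to averaging the cross-entropy over only the frames whose phoneme label is known (a minimal sketch in NumPy; the paper trains a neural model, and these variable names are ours):

```python
import numpy as np

def masked_frame_ce(log_probs, labels, mask):
    """Frame-wise cross-entropy restricted to frames with known phoneme
    labels (mask == 1): every frame of single-phoneme words, and the
    first/last frames of multi-phoneme words, per the paper's scheme."""
    known = mask.astype(bool)
    picked = log_probs[known, labels[known]]   # log p(correct phoneme)
    return -np.mean(picked)

# Toy example: 4 frames, 3 phoneme classes; labels known only for the
# first and last frame (as at a multi-phoneme word's boundaries).
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4],
                  [0.2, 0.1, 0.7]])
log_probs = np.log(probs)
labels = np.array([0, 0, 0, 2])   # dummy values where mask is 0
mask = np.array([1, 0, 0, 1])
loss = masked_frame_ce(log_probs, labels, mask)
assert np.isclose(loss, -np.log(0.7))  # only frames 0 and 3 contribute
```

Masked-out frames contribute no gradient, so the unknown middle frames of a word are left to the CTC and reconstruction terms of the baseline objective.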