Differentiable White-Box Virtual Analog Modeling
Component-wise circuit modeling, also known as “white-box” modeling, is a well-established and much-discussed technique in virtual analog modeling. This approach is generally limited in accuracy by a lack of access to the exact component values present in
a real example of the circuit. In this paper we show how this problem can be addressed by implementing the white-box model in a
differentiable form, and allowing approximate component values
to be learned from raw input–output audio measured from a real
device.
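As a rough illustration of the general idea (not the implementation described in the paper), the sketch below expresses a one-pole RC low-pass filter in differentiable form with PyTorch and fits its component values to input/output audio; the circuit, initial values, and training setup are assumptions made for the example.

```python
import torch

class DiffRC(torch.nn.Module):
    """One-pole RC low-pass discretized with backward Euler; R and C are learnable."""
    def __init__(self, sr=44100.0, r_init=1.0e3, c_init=1.0e-7):
        super().__init__()
        self.T = 1.0 / sr
        # optimize log-values so the learned component values stay positive
        self.log_r = torch.nn.Parameter(torch.log(torch.tensor(float(r_init))))
        self.log_c = torch.nn.Parameter(torch.log(torch.tensor(float(c_init))))

    def forward(self, x):
        r, c = self.log_r.exp(), self.log_c.exp()
        a = self.T / (r * c)                 # backward-Euler update coefficient
        y, out = x.new_zeros(()), []
        for x_n in x:                        # sample-by-sample state update
            y = (y + a * x_n) / (1.0 + a)
            out.append(y)
        return torch.stack(out)

# hypothetical fit to measured input/output audio (stand-in signals shown here)
x_meas = torch.randn(1024)                                     # "device input"
y_meas = DiffRC(r_init=4.7e3, c_init=2.2e-8)(x_meas).detach()  # "device output"
model = DiffRC()
opt = torch.optim.Adam(model.parameters(), lr=5e-2)
for step in range(100):
    opt.zero_grad()
    loss = torch.mean((model(x_meas) - y_meas) ** 2)
    loss.backward()
    opt.step()
```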
Identification of Nonlinear Circuits as Port-Hamiltonian Systems
This paper addresses the identification of nonlinear circuits for
power-balanced virtual analog modeling and simulation. The proposed method combines a port-Hamiltonian system formulation
with kernel-based methods to retrieve model laws from measurements. This combination allows for the estimated model to retain
physical properties that are crucial for the accuracy of simulations,
while representing a variety of nonlinear behaviors. As an illustration, the method is used to identify a nonlinear passive peaking
EQ.
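For readers unfamiliar with the formulation, the sketch below shows the generic port-Hamiltonian state-space structure the method builds on, using an assumed linear series-RLC example rather than the identified peaking EQ; in the identification setting, the Hamiltonian gradient (and dissipation) would instead be represented by kernel expansions estimated from measurements.

```python
# Port-Hamiltonian form:  dx/dt = (J - R) grad_H(x) + G u,   y = G^T grad_H(x)
# so that dH/dt = -grad_H^T R grad_H + y*u, i.e. the model is passive by construction.
import numpy as np

L_val, C_val, r = 1e-3, 1e-7, 100.0          # assumed series-RLC component values
J = np.array([[0.0, -1.0], [1.0, 0.0]])      # lossless interconnection structure
R = np.array([[r, 0.0], [0.0, 0.0]])         # dissipation structure
G = np.array([1.0, 0.0])                     # input port (voltage source)

def grad_H(x):
    # H(x) = x[0]^2 / (2L) + x[1]^2 / (2C), x = [inductor flux, capacitor charge]
    return np.array([x[0] / L_val, x[1] / C_val])

def step(x, u, dt=1.0 / 48000):
    # simple forward-Euler update (a structure-preserving discretization,
    # e.g. a discrete-gradient method, would keep the power balance exact)
    dx = (J - R) @ grad_H(x) + G * u
    y = G @ grad_H(x)                        # port output (current)
    return x + dt * dx, y
```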
Exposure Bias and State Matching in Recurrent Neural Network Virtual Analog Models
Virtual analog (VA) modeling using neural networks (NNs) has
great potential for rapidly producing high-fidelity models. Recurrent neural networks (RNNs) are especially appealing for VA due
to their connection with discrete nodal analysis. Furthermore, VA
models based on NNs can be trained efficiently by directly exposing them to the circuit states in a gray-box fashion. However,
exposure to ground truth information during training can leave the
models susceptible to error accumulation in a free-running mode,
also known as “exposure bias” in machine learning literature. This
paper presents a unified framework for treating the previously
proposed state trajectory network (STN) and gated recurrent unit
(GRU) networks as special cases of discrete nodal analysis. We
propose a novel circuit state-matching mechanism for the GRU
and experimentally compare the previously mentioned networks
for their state-matching performance during training and their exposure bias during inference. Experimental results from modeling
a diode clipper show that all the tested models exhibit some exposure bias, which can be mitigated by truncated backpropagation
through time. Furthermore, the proposed state matching mechanism improves the GRU modeling performance of an overdrive
pedal and a phaser pedal, especially in the presence of external
modulation, as is apparent in the phaser circuit.
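Since the experiments rely on truncated backpropagation through time (TBPTT), a generic sketch of TBPTT training for a GRU-based VA model may be helpful; the architecture, target signal (a memoryless clipper stand-in), and truncation length below are illustrative assumptions, not the configurations used in the paper.

```python
import torch

class GRUVA(torch.nn.Module):
    """Single-layer GRU followed by a linear output layer."""
    def __init__(self, hidden=16):
        super().__init__()
        self.rnn = torch.nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = torch.nn.Linear(hidden, 1)

    def forward(self, x, h=None):
        z, h = self.rnn(x, h)
        return self.out(z), h

model = GRUVA()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(1, 48000, 1)          # stand-in for a measured input signal
y = torch.tanh(4.0 * x)               # stand-in target (memoryless clipper)

tbptt_len, h = 2048, None
for start in range(0, x.shape[1], tbptt_len):
    xs = x[:, start:start + tbptt_len]
    ys = y[:, start:start + tbptt_len]
    opt.zero_grad()
    pred, h = model(xs, h)
    loss = torch.mean((pred - ys) ** 2)
    loss.backward()
    opt.step()
    h = h.detach()                     # truncate the gradient graph between segments
```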
Amp-Space: A Large-Scale Dataset for Fine-Grained Timbre Transformation
We release Amp-Space, a large-scale dataset of paired audio
samples: a source audio signal, and an output signal, the result of
a timbre transformation. The types of transformations we study
are from black-box musical tools (amplifiers, stompboxes, studio effects) traditionally used to shape the sound of guitars, basses, and synthesizers. For each sample of transformed audio, the set of parameters used to create it is given. Samples are from
both real and simulated devices, the latter allowing for orders of
magnitude greater data than found in comparable datasets. We
demonstrate potential use cases of this data by (a) pre-training a
conditional WaveNet model on synthetic data and showing that it reduces the number of samples necessary to digitally reproduce a
real musical device, and (b) training a variational autoencoder to
shape a continuous space of timbre transformations for creating
new sounds through interpolation.
Transition-Aware: A More Robust Approach for Piano Transcription
Piano transcription is a classic problem in music information retrieval. More and more transcription methods based on deep learning have been proposed in recent years. In 2019, Google Brain
published a large-scale piano transcription dataset, MAESTRO. On this dataset, the Onsets and Frames transcription approach proposed by Hawthorne achieved a stunning onset F1 score of 94.73%. Unlike
the annotation method of Onsets and Frames, the Transition-Aware model presented in this paper annotates the attack process of piano signals, called the attack transition, across multiple frames instead of marking only the onset frame. In this way, the piano signals around the onset time are taken into account, making piano onset detection more stable and robust. Transition-Aware achieves a higher transcription F1 score than Onsets and Frames on the MAESTRO and MAPS datasets, reducing many extra note detection errors. This indicates that the Transition-Aware approach has better generalization ability across different datasets.
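The exact labeling scheme is not detailed here, but the following sketch conveys the general idea of spreading the onset annotation over several frames of the attack transition rather than a single frame; the hop size, span, and decaying weights are hypothetical values chosen for illustration.

```python
import numpy as np

def transition_targets(onset_times, n_frames, hop=512, sr=16000, span=3):
    """Label `span` frames starting at each onset with decaying weights."""
    targets = np.zeros(n_frames, dtype=np.float32)
    for t in onset_times:
        start = int(round(t * sr / hop))
        for k in range(span):
            if start + k < n_frames:
                # hypothetical decaying weight over the attack transition
                targets[start + k] = max(targets[start + k], 1.0 - k / span)
    return targets

# example: a note starting at 0.5 s labels frames 16-18 rather than only frame 16
print(transition_targets([0.5], n_frames=40))
```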
Improving Synthesizer Programming From Variational Autoencoders Latent Space
Deep neural networks have recently been applied to the task of
automatic synthesizer programming, i.e., finding optimal values
of sound synthesis parameters in order to reproduce a given input
sound. This paper focuses on generative models, which can infer
parameters as well as generate new sets of parameters or perform
smooth morphing effects between sounds.
We introduce new models to ensure scalability and to increase
performance by using heterogeneous representations of parameters as numerical and categorical random variables. Moreover, a spectral variational autoencoder architecture with multi-channel
input is proposed in order to improve inference of parameters related to the pitch and intensity of input sounds.
Model performance was evaluated according to several criteria
such as parameter estimation error and audio reconstruction accuracy. Training and evaluation were performed using a 30k-preset dataset, which is published with this paper. The results demonstrate significant improvements in terms of parameter inference and audio accuracy, and show that the presented models can be used with subsets
or full sets of synthesizer parameters.
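As a loose sketch of what heterogeneous parameter representations can look like in practice (an assumption for illustration, not the paper's architecture), a decoder can emit numerical parameters through a regression head and categorical parameters through per-parameter classification heads, with the corresponding loss terms combined:

```python
import torch

class PresetDecoder(torch.nn.Module):
    def __init__(self, latent_dim=64, n_numerical=80, cat_sizes=(4, 4, 12)):
        super().__init__()
        self.body = torch.nn.Sequential(torch.nn.Linear(latent_dim, 256), torch.nn.ReLU())
        self.num_head = torch.nn.Linear(256, n_numerical)   # continuous parameters in [0, 1]
        self.cat_heads = torch.nn.ModuleList(
            [torch.nn.Linear(256, k) for k in cat_sizes])    # one head per categorical parameter

    def forward(self, z):
        h = self.body(z)
        return torch.sigmoid(self.num_head(h)), [head(h) for head in self.cat_heads]

def preset_loss(num_pred, cat_logits, num_true, cat_true):
    # regression loss for numerical parameters, cross-entropy for categorical ones
    loss = torch.nn.functional.mse_loss(num_pred, num_true)
    for logits, target in zip(cat_logits, cat_true):
        loss = loss + torch.nn.functional.cross_entropy(logits, target)
    return loss
```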
An Audio-Visual Fusion Piano Transcription Approach Based on Strategy
Piano transcription is a fundamental problem in the field of music
information retrieval. At present, a large number of transcription studies are based mainly on audio or video, yet there is little discussion of audio-visual fusion. In this paper,
a piano transcription model based on strategy fusion is proposed,
in which the transcription results of the video model are used to assist audio transcription. Due to the lack of datasets currently used
for audio-visual fusion, the OMAPS data set is proposed in this paper. Meanwhile, our strategy fusion model achieves a 92.07% F1
score on OMAPS dataset. The transcription model based on feature fusion is also compared with the one based on strategy fusion.
The experimental results show that the transcription model based on
strategy fusion achieves better results than the one based on feature
fusion.
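To make the distinction concrete, strategy fusion operates on the two models' outputs rather than on their features; a hypothetical decision-level rule (the thresholds and logic are assumptions for illustration, not the paper's) might look like the following:

```python
import numpy as np

def strategy_fusion(p_audio, p_video, accept=0.5, rescue=0.3, confirm=0.8):
    """p_audio, p_video: arrays of shape (frames, 88) with note probabilities."""
    fused = p_audio >= accept
    # rescue borderline audio detections when the video model is confident
    fused |= (p_audio >= rescue) & (p_video >= confirm)
    return fused
```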
One Billion Audio Sounds From GPU-Enabled Modular Synthesis
We release synth1B1, a multi-modal audio corpus consisting of 1
billion 4-second synthesized sounds, paired with the synthesis parameters used to generate them. The dataset is 100x larger than
any audio dataset in the literature. We also introduce torchsynth,
an open source modular synthesizer that generates the synth1B1
samples on-the-fly at 16200x faster than real-time (714MHz) on
a single GPU. In addition, we release two new audio datasets: FM
synth timbre and subtractive synth pitch. Using these datasets, we
demonstrate new rank-based evaluation criteria for existing audio
representations. Finally, we propose a novel approach to synthesizer hyperparameter optimization.
A Generative Model for Raw Audio Using Transformer Architectures
This paper proposes a novel way of performing audio synthesis at the
waveform level using Transformer architectures. We propose a
deep neural network for generating waveforms, similar to WaveNet. This is fully probabilistic, auto-regressive, and causal, i.e., each generated sample depends only on the previously observed samples. Our approach outperforms a widely used WaveNet architecture by up to 9% on a similar dataset for predicting the next
step. Using the attention mechanism, we enable the architecture
to learn which audio samples are important for the prediction of
the future sample. We show how causal transformer generative
models can be used for raw waveform synthesis. We also show
that this performance can be improved by another 2% by conditioning samples over a wider context. The flexibility of the current
model to synthesize audio from latent representations suggests a
large number of potential applications. The novel approach of using generative transformer architectures for raw audio synthesis
is, however, like WaveNet, still far from generating any meaningful music without using latent codes/metadata to aid the generation process.
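The paper's exact architecture is not reproduced here, but the following generic sketch shows how a causal attention mask enforces the auto-regressive constraint when predicting the next (quantized) audio sample; the model sizes, quantization, and context length are assumptions for illustration.

```python
import torch

class CausalSampleTransformer(torch.nn.Module):
    def __init__(self, n_quant=256, d_model=128, n_heads=4, n_layers=4, context=1024):
        super().__init__()
        self.embed = torch.nn.Embedding(n_quant, d_model)
        self.pos = torch.nn.Parameter(torch.zeros(context, d_model))
        layer = torch.nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = torch.nn.TransformerEncoder(layer, n_layers)
        self.head = torch.nn.Linear(d_model, n_quant)

    def forward(self, tokens):
        # tokens: (batch, time) integer-quantized audio samples
        T = tokens.shape[1]
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)  # causal mask
        h = self.embed(tokens) + self.pos[:T]
        h = self.encoder(h, mask=mask)
        return self.head(h)   # logits over the next sample at every position

# training predicts sample t+1 from samples <= t via cross-entropy on these logits
```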
Adaptive Pitch-Shifting With Applications to Intonation Adjustment in a Cappella Recordings
A central challenge for a cappella singers is to adjust their intonation and to stay in tune relative to their fellow singers. During
editing of a cappella recordings, one may want to adjust local intonation of individual singers or account for global intonation drifts
over time. This requires applying a time-varying pitch-shift to the
audio recording, which we refer to as adaptive pitch-shifting. In
this context, existing (semi-)automatic approaches are either labor-intensive or face technical and musical limitations. In this work,
we present automatic methods and tools for adaptive pitch-shifting
with applications to intonation adjustment in a cappella recordings. To this end, we show how to incorporate time-varying information into existing pitch-shifting algorithms that are based on
resampling and time-scale modification (TSM). Furthermore, we
release an open-source Python toolbox, which includes a variety
of TSM algorithms and an implementation of our method. Finally,
we show the potential of our tools in two case studies on global
and local intonation adjustment in a cappella recordings using a
publicly available multitrack dataset of amateur choral singing.
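As background for the resampling-plus-TSM decomposition (this is not the authors' released toolbox, whose API is not given here), the sketch below applies a fixed pitch-shift by resampling and then restoring the original duration with time-scale modification, using librosa as a stand-in; the adaptive method described in the paper makes the shift factor time-varying instead of constant.

```python
import librosa

def pitch_shift_resample_tsm(y, sr, semitones):
    """Fixed (non-adaptive) pitch-shift via resampling followed by TSM."""
    alpha = 2.0 ** (semitones / 12.0)                 # frequency scaling factor
    # resampling to sr/alpha and playing back at sr raises the pitch by alpha
    # while shortening the signal by the same factor
    y_res = librosa.resample(y, orig_sr=sr, target_sr=int(round(sr / alpha)))
    # time-scale modification restores the original duration without changing pitch
    return librosa.effects.time_stretch(y_res, rate=len(y_res) / len(y))

# the adaptive case replaces the constant alpha with a time-varying trajectory,
# e.g. by processing overlapping blocks with their local shift factors
```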