The Mix Evaluation Dataset
Research on the perception of music production practices is mainly concerned with emulating sound engineering tasks through lab-based experiments and custom software, sometimes with unskilled subjects. This improves the level of control, but the validity, transferability, and relevance of the results may suffer from the artificial context. This paper presents a dataset of mixes gathered in a real-life, ecologically valid setting, together with perceptual evaluations of them, which can be used to expand knowledge of the mixing process. With 180 mixes including parameter settings, close to 5000 preference ratings and free-form descriptions, and a diverse range of contributors from five countries, the data offers many opportunities for music production analysis, some of which are explored here. In particular, more experienced subjects were found to be more negative and more specific in their assessments of mixes, and to increasingly agree with each other.
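Inter-rater agreement of the kind reported in the abstract can be quantified with a rank-concordance statistic. The following is a minimal sketch, not from the paper: the ratings matrix is invented for illustration, and Kendall's W is just one possible agreement measure.

```python
import numpy as np

# Hypothetical preference ratings: rows = subjects, columns = mixes,
# values on a 0-100 scale (all numbers illustrative, not from the dataset).
ratings = np.array([
    [80, 55, 30, 70],
    [75, 60, 25, 65],
    [85, 50, 35, 60],
])

def kendalls_w(ratings):
    """Kendall's coefficient of concordance: 1.0 = perfect agreement
    between raters on the ranking of the items (assumes no ties)."""
    m, n = ratings.shape                                  # m raters, n items
    ranks = ratings.argsort(axis=1).argsort(axis=1) + 1.0 # per-rater ranks
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

w = kendalls_w(ratings)   # here the three raters rank the mixes identically
```

With these illustrative ratings all three subjects induce the same ranking, so W comes out at its maximum of 1.0; less consistent panels yield values closer to 0.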
Sound Effects for a Silent Computer System
This paper proposes a sonification of computer system activity that allows the user to monitor basic performance parameters such as CPU load, hard disk read and write activity, or network traffic. Although current computer systems still produce acoustic background noise, future and emerging systems will be increasingly optimized with respect to their noise emission. In contrast to most concepts of auditory feedback, which present a particular sound in response to a user's command, the proposed feedback is mediated by the running computer system: the user's interaction stimulates the system, and the resulting feedback therefore offers more realistic information about its current performance state. On the one hand, the proposed sonification can mimic the acoustic behavior of the components operating inside a computer; on the other hand, new qualities can be synthesized that enrich interaction with the device. Different forms of sound effects and sound generation for the proposed auditory feedback are realized to experiment with their use in an environment of silent computer systems.
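The core of such a system is a mapping from a performance parameter to synthesis parameters. The sketch below is not the paper's design: the load-to-pitch and load-to-loudness mapping and all constants are invented for illustration.

```python
import numpy as np

SR = 44100  # audio sample rate in Hz

def sonify_cpu(load, duration=0.1):
    """Map a CPU load in [0, 1] to a short tone burst whose pitch and
    loudness track the load (an illustrative mapping, not the paper's)."""
    n = int(SR * duration)
    t = np.arange(n) / SR
    freq = 200.0 + 1800.0 * load   # busier CPU -> higher pitch
    amp = 0.1 + 0.9 * load         # busier CPU -> louder
    return amp * np.sin(2 * np.pi * freq * t)

burst = sonify_cpu(0.5)  # 100 ms burst for a half-loaded CPU
```

In a running monitor, bursts like this would be triggered by polling the system counters, so the sound is driven by actual system state rather than directly by user commands.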
An Audio-Visual Fusion Piano Transcription Approach Based on Strategy
Piano transcription is a fundamental problem in the field of music information retrieval. At present, a large number of transcription studies are based mainly on audio or video, yet there has been little discussion of audio-visual fusion. In this paper, a piano transcription model based on strategy fusion is proposed, in which the transcription results of the video model are used to assist audio transcription. Owing to the lack of datasets suitable for audio-visual fusion, the OMAPS dataset is introduced in this paper. Our strategy fusion model achieves a 92.07% F1 score on the OMAPS dataset. The transcription model based on feature fusion is also compared with the one based on strategy fusion; the experimental results show that the model based on strategy fusion achieves better results than the one based on feature fusion.
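Strategy (late) fusion of this kind can be sketched as combining the two models' frame-wise note posteriors before the final decision. This is a generic illustration, not the paper's actual fusion rule; the posteriors, weight, and threshold are all hypothetical.

```python
import numpy as np

# Hypothetical frame-wise note posteriors from an audio model and a
# video model (rows = frames, columns = 88 piano keys); random
# stand-ins for real model outputs.
rng = np.random.default_rng(0)
audio_post = rng.random((4, 88))
video_post = rng.random((4, 88))

def strategy_fusion(audio_post, video_post, video_weight=0.3, threshold=0.5):
    """Late fusion at the decision level: the video posteriors nudge
    the audio posteriors before thresholding, so visually confirmed
    keys are more likely to survive the final note decision."""
    fused = (1 - video_weight) * audio_post + video_weight * video_post
    return fused >= threshold

notes = strategy_fusion(audio_post, video_post)  # boolean piano roll
```

By contrast, feature fusion would concatenate audio and video features before a single model, rather than combining two models' outputs as here.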
Online Real-time Onset Detection with Recurrent Neural Networks
We present a new onset detection algorithm which operates online in real time without delay. Our method incorporates a recurrent neural network to model the sequence of onsets based solely on causal audio signal information. Comparative performance against existing state-of-the-art online and offline algorithms was evaluated using a very large database. The new method – despite being an online algorithm – shows performance only slightly short of the best existing offline methods while outperforming standard approaches.
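The causality constraint matters most at the peak-picking stage: an online detector may only compare the current activation with past values. A minimal sketch of causal peak picking, with invented activation values standing in for a real RNN's output:

```python
import numpy as np

# Hypothetical frame-wise onset activations, e.g. the output of a
# causal RNN evaluated once per frame (values illustrative).
activations = np.array([0.05, 0.1, 0.8, 0.3, 0.1, 0.05, 0.9, 0.2])

def online_peaks(activations, threshold=0.5):
    """Causal peak picking: frame i is reported as an onset as soon as
    its activation exceeds the threshold and the previous frame's
    value, using no future information."""
    onsets = []
    prev = 0.0
    for i, a in enumerate(activations):
        if a >= threshold and a > prev:
            onsets.append(i)
        prev = a
    return onsets

onsets = online_peaks(activations)  # -> frames 2 and 6
```

An offline detector could additionally check that a peak exceeds its *following* frames, which is exactly the look-ahead a real-time system cannot afford.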
Probabilistic Reverberation Model Based on Echo Density and Kurtosis
This article proposes a probabilistic model for synthesizing room impulse responses (RIRs) for use in convolution artificial reverberators. The proposed method is based on the concept of echo density. Echo density is a measure of the number of echoes per second in an impulse response and is a demonstrated perceptual metric of artificial reverberation quality. As echo density is related to the statistical measure of kurtosis, this article demonstrates that the statistics of an RIR can be modeled using a probabilistic mixture model. A mixture model designed specifically for modeling RIRs is proposed. The proposed method is useful for statistically replicating RIRs of a measured environment, thereby synthesizing new independent observations of an acoustic space. A perceptual pilot study is carried out to evaluate the fidelity of the replication process in monophonic and stereo artificial reverberators.
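The link between echo density and kurtosis can be illustrated directly: a fully mixed reverberation tail behaves like Gaussian noise (kurtosis near 3), while sparse early echoes produce a much higher kurtosis. This toy demonstration uses synthetic signals, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def kurtosis(x):
    """Fourth standardized moment; equals 3.0 for a Gaussian signal."""
    x = x - x.mean()
    return float(np.mean(x ** 4) / np.mean(x ** 2) ** 2)

# Model of a fully mixed late-reverberation tail: dense Gaussian noise.
late_tail = rng.standard_normal(20000)

# Model of the early part of an RIR: a few sparse, strong echoes.
early_part = np.zeros(20000)
early_part[::500] = 1.0

k_late = kurtosis(late_tail)    # close to 3 -> high echo density
k_early = kurtosis(early_part)  # far above 3 -> sparse echoes
```

A mixture model for RIR statistics exploits exactly this contrast: the heavy-tailed early region and the near-Gaussian late region call for different component distributions.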
High frequency magnitude spectrogram reconstruction for music mixtures using convolutional autoencoders
We present a new approach for audio bandwidth extension for music signals using convolutional neural networks (CNNs). Inspired by the concept of inpainting from the field of image processing, we seek to reconstruct the high-frequency region (i.e., above a cutoff frequency) of a time-frequency representation given the observation of a band-limited version. We then invert this reconstructed time-frequency representation using the phase information from the band-limited input to provide an enhanced musical output. We contrast the performance of two musically adapted CNN architectures which are trained separately using the STFT and the invertible CQT. Through our evaluation, we demonstrate that the CQT, with its logarithmic frequency spacing, provides better reconstruction performance as measured by the signal-to-distortion ratio.
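The data setup behind this inpainting formulation is simple to sketch. The following is not the paper's pipeline: the spectrogram is random, the cutoff bin is arbitrary, and the "reconstruction" is replaced by the ground truth to show how the predicted magnitude is paired with the input phase.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BINS, N_FRAMES = 128, 100
CUTOFF_BIN = 64  # hypothetical cutoff: bins above it are treated as missing

# Stand-in magnitude spectrogram; in the paper a CNN predicts the
# missing high band from the band-limited observation.
full_mag = rng.random((N_BINS, N_FRAMES))

band_limited = full_mag.copy()
band_limited[CUTOFF_BIN:, :] = 0.0           # simulated low-pass input
target_high_band = full_mag[CUTOFF_BIN:, :]  # what the CNN must inpaint

# Final step described in the abstract: pair the (here: perfectly)
# reconstructed magnitude with phase taken from the band-limited input
# and form the complex representation to invert.
phase = rng.uniform(-np.pi, np.pi, full_mag.shape)
complex_stft = full_mag * np.exp(1j * phase)
```

The training pair is thus (band_limited, target_high_band); only the magnitude is predicted, which is why the band-limited input's phase must be reused at synthesis time.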
A comparison of music similarity measures for a P2P application
In this paper we compare different methods of computing music similarity between songs. The presented approaches have been reported by other authors in the field, and we implemented them with minor improvements. We evaluated the different methods on a common database of MP3-encoded songs covering different genres, albums, and artists. We used the best-performing approach from the evaluation in a P2P scenario to compute song profiles and recommendations for similar songs. We describe this integration in the second part of the paper.
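A representative baseline for this kind of comparison reduces each song to a feature vector and scores pairs with a distance or similarity measure. The sketch below uses invented feature vectors and cosine similarity; the paper's actual measures may differ.

```python
import numpy as np

# Hypothetical per-song timbre features (e.g. mean MFCC vectors);
# real systems often model full feature distributions instead.
song_a = np.array([1.0, 0.5, -0.2, 0.1])
song_b = np.array([0.9, 0.6, -0.1, 0.0])
song_c = np.array([-1.0, 0.2, 0.8, -0.5])

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_ab = cosine_similarity(song_a, song_b)  # similar songs -> near 1
sim_ac = cosine_similarity(song_a, song_c)  # dissimilar songs -> low
```

In a P2P setting, compact vectors like these are attractive because a song profile can be exchanged between peers far more cheaply than the audio itself.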
Spherical Decomposition of Arbitrary Scattering Geometries for Virtual Acoustic Environments
A method is proposed to encode the acoustic scattering of objects for virtual acoustic applications through a multiple-input, multiple-output framework. The scattering is encoded as a matrix in the spherical harmonic domain and can be re-used and manipulated (rotated, scaled, and translated) to synthesize various sound scenes. The proposed method is applied and validated using Boundary Element Method simulations, which show close agreement between reference and synthesized results. The method is compatible with existing frameworks such as Ambisonics and image-source methods.
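The MIMO encoding amounts to a matrix acting on spherical harmonic (SH) coefficient vectors. The sketch below uses a random complex matrix as a stand-in for a BEM-derived scattering matrix at one frequency; the order and all values are illustrative.

```python
import numpy as np

ORDER = 2
N_SH = (ORDER + 1) ** 2  # number of SH channels up to this order

rng = np.random.default_rng(3)

# Hypothetical scattering matrix in the SH domain at one frequency:
# it maps incident-field SH coefficients to scattered-field ones.
S = rng.standard_normal((N_SH, N_SH)) + 1j * rng.standard_normal((N_SH, N_SH))

incident = rng.standard_normal(N_SH) + 1j * rng.standard_normal(N_SH)
scattered = S @ incident           # MIMO encoding: one matrix multiply

# Manipulating the encoded object is also a matrix operation, which is
# why one encoding can be reused across scenes (here: a simple gain;
# rotation would apply SH rotation matrices on both sides of S).
scattered_half = (0.5 * S) @ incident
```

Because both Ambisonics signals and image-source contributions can be expressed as SH coefficient vectors, they plug directly into this matrix formulation.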
Unison Source Separation
In this work we present a new scenario of analyzing and separating linear mixtures of musical instrument signals. When instruments play in unison, traditional source separation methods perform poorly. Although the sources share the same pitch, they often still differ in their modulation frequency caused by vibrato and/or tremolo effects. In this paper we propose source separation schemes that exploit AM/FM characteristics to improve the separation quality of such mixtures. We show a method to process mixtures based on differences in the amplitude modulation frequencies of the sources by using non-negative tensor factorization. Further, we propose an informed warped time-domain approach for separating mixtures based on variations in the instantaneous frequencies of the sources.
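Why amplitude modulation separates unison sources can be shown with a toy example: two tones at the same pitch but different tremolo rates remain distinguishable in the modulation (envelope) spectrum. All signal parameters below are invented; this illustrates the cue, not the paper's factorization.

```python
import numpy as np

SR = 8000
t = np.arange(SR) / SR  # one second of signal

# Two unison sources at the same pitch (220 Hz) but with different
# tremolo (amplitude-modulation) rates: 4 Hz vs 7 Hz (illustrative).
src1 = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 220 * t)
src2 = (1 + 0.5 * np.sin(2 * np.pi * 7 * t)) * np.sin(2 * np.pi * 220 * t)
mix = src1 + src2

# The two AM rates appear as distinct peaks in the spectrum of the
# mixture's envelope -- the cue a modulation-aware tensor
# factorization can exploit even though the pitches coincide.
envelope = np.abs(mix)
env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
am_peaks = np.argsort(env_spec[:20])[-2:]  # two strongest bins below 20 Hz
```

A pitch-based separator sees a single 220 Hz component here; only the modulation domain reveals that two sources are present.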
Neural Grey-Box Guitar Amplifier Modelling with Limited Data
This paper combines recurrent neural networks (RNNs) with the discretised Kirchhoff nodal analysis (DK-method) to create a grey-box guitar amplifier model. Both the objective and subjective results suggest that the proposed model is able to outperform a baseline black-box RNN model in the task of modelling a guitar amplifier, including realistically recreating the behaviour of the amplifier equaliser circuit, whilst requiring significantly less training data. Furthermore, we adapt the linear part of the DK-method in a deep learning scenario to derive multiple state-space filters simultaneously. We frequency sample the filter transfer functions in parallel and perform frequency domain filtering to considerably reduce the required training times compared to recursive state-space filtering. This study shows that it is a powerful idea to separately model the linear and nonlinear parts of a guitar amplifier using supervised learning.
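The frequency-sampling trick for state-space filters can be sketched concretely: evaluate H(z) = C(zI − A)⁻¹B + D at the DFT points, then filter by multiplication in the frequency domain instead of running the recursion. The one-pole filter below is a toy stand-in, not an amplifier circuit from the paper.

```python
import numpy as np

# Toy discrete state-space filter (a one-pole low-pass; values illustrative).
A = np.array([[0.9]])
B = np.array([[1.0]])
C = np.array([[0.1]])
D = np.array([[0.0]])

def freq_sample(A, B, C, D, n_fft):
    """Evaluate H(z) = C (zI - A)^-1 B + D at the n_fft DFT points, so
    filtering becomes an element-wise multiply in the frequency domain."""
    z = np.exp(2j * np.pi * np.fft.rfftfreq(n_fft))
    I = np.eye(A.shape[0])
    return np.array([(C @ np.linalg.inv(zi * I - A) @ B + D)[0, 0] for zi in z])

n = 256
x = np.zeros(n); x[0] = 1.0                  # impulse input
H = freq_sample(A, B, C, D, n)
y_fd = np.fft.irfft(np.fft.rfft(x) * H, n)   # frequency-domain filtering

# Reference: sample-by-sample recursive state-space filtering.
y_td = np.zeros(n); s = np.zeros((1, 1))
for i in range(n):
    y_td[i] = (C @ s + D * x[i])[0, 0]
    s = A @ s + B * x[i]
```

For a fast-decaying filter the two outputs agree (the circular wraparound of the DFT is negligible), but the frequency-domain path has no sequential dependency, which is what makes it attractive during training.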