DAFX - Digital Audio Effects (Second Edition)

Edited by Udo Zölzer
ISBN: 978-0-470-66599-2
John Wiley & Sons, 2011


List of Authors

Jonathan S. Abel is a Consulting Professor at the Center for Computer Research in Music and Acoustics (CCRMA) in the Music Department at Stanford University where his research interests include audio and music applications of signal and array processing, parameter estimation, and acoustics. From 1999 to 2007, Abel was a co-founder and chief technology officer of the Grammy Award-winning Universal Audio, Inc. He was a researcher at NASA/Ames Research Center, exploring topics in room acoustics and spatial hearing on a grant through the San Jose State University Foundation. Abel was also chief scientist of Crystal River Engineering, Inc., where he developed their positional audio technology, and a lecturer in the Department of Electrical Engineering at Yale University. As an industry consultant, Abel has worked with Apple, FDNY, LSI Logic, NRL, SAIC and Sennheiser, on projects in professional audio, GPS, medical imaging, passive sonar and fire department resource allocation. He holds PhD and MS degrees from Stanford University, and an S.B. from MIT, all in electrical engineering. Abel is a Fellow of the Audio Engineering Society.

Xavier Amatriain is a Researcher at Telefonica R&D in Barcelona, which he joined in June 2007. His current research focuses on recommender systems and other Web Science related topics. He is also an Associate Professor at Universitat Pompeu Fabra, where he teaches Software Engineering and Information Retrieval. He has authored more than 50 publications, including several book chapters and patents. Prior to this, Dr. Amatriain worked at the University of California Santa Barbara as Research Director, supervising research in areas that included multimedia and immersive systems, virtual reality, and 3D audio and video. Among others, he was Technical Director of the AlloSphere project and he lectured in the Media Arts and Technology program. During his PhD at the UPF (Barcelona), he was a researcher in the Music Technology Group, where he worked on music signal processing and systems. At that time he initiated and coordinated the award-winning CLAM open-source project for audio and music processing.

Daniel Arfib (1949-) received his diploma as “ingénieur ECP” from the Ecole Centrale de Paris in 1971 and is a “docteur-ingénieur” (1977) and “docteur ès sciences” (1983) of the Université de Marseille II. After a few years in education and industry jobs, he devoted his work to research, joining the CNRS (National Center for Scientific Research) in 1978 at the Laboratory of Mechanics and Acoustics (LMA) in Marseille, France. His main concern is to combine scientific and musical points of view on the synthesis, transformation and interpretation of sounds using the computer as a tool, both as a researcher and a composer. As the chairman of the COST-G6 action named “Digital Audio Effects” he has been at the centre of a galaxy of researchers working on this subject. He also has a strong interest in the relationship between gesture and sound, especially concerning creativity in musical systems. Since 2008 he has been working in the field of sonic interaction design at the Laboratory of Informatics (LIG) in Grenoble, France.

David Berners is a Consulting Professor at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University, where he has taught courses in signal processing and audio effects since 2004. He is also Chief Scientist at Universal Audio, Inc., a hardware and software manufacturer for the professional audio market. At UA, Dr. Berners leads research and development efforts in audio effects processing, including dynamic range compression, equalization, distortion and delay effects, and specializing in modeling of vintage analog equipment. Dr. Berners has previously held positions at the Lawrence Berkeley Laboratory, NASA Jet Propulsion Laboratory and Allied Signal. He received his PhD from Stanford University, MS from California Institute of Technology, and his SB from Massachusetts Institute of Technology, all in electrical engineering.

Stefan Bilbao received his BA in Physics at Harvard University ('92), then spent two years at the Institut de Recherche et Coordination Acoustique Musicale (IRCAM) under a fellowship awarded by Harvard and the Ecole Normale Superieure. He then completed the MSc and PhD degrees in Electrical Engineering at Stanford University ('96 and '01, respectively), while working at the Center for Computer Research in Music and Acoustics (CCRMA). He was subsequently a postdoctoral researcher at the Stanford Space Telecommunications and Radioscience Laboratory, and a Lecturer at the Sonic Arts Research Centre at Queen's University Belfast. He is currently a Senior Lecturer in Music at the University of Edinburgh.

Jordi Bonada (1973- ) received an M.Sc. degree in Electrical Engineering from the Universitat Politècnica de Catalunya (Barcelona, Spain) in 1997, and a PhD degree in Computer Science and Digital Communications from the Universitat Pompeu Fabra (Barcelona, Spain) in 2009. Since 1996 he has been a researcher at the Music Technology Group of the latter university, while leading several collaboration projects with Yamaha Corp. He is mostly interested in the field of spectral-domain audio signal processing, with a focus on time-scaling and singing voice modeling and synthesis.

Giovanni De Poli is an Associate Professor of Computer Science at the Department of Electronics and Informatics of the University of Padua, where he teaches “Data Structures and Algorithms” and “Processing Systems for Music”. He is the Director of the Centro di Sonologia Computazionale (CSC) of the University of Padua. He is a member of the Executive Committee (ExCom) of the IEEE Computer Society Technical Committee on Computer Generated Music, member of the Board of Directors of AIMI (Associazione Italiana di Informatica Musicale), member of the Board of Directors of CIARM (Centro Interuniversitario di Acustica e Ricerca Musicale), member of the Scientific Committee of ACROE (Institut National Polytechnique de Grenoble), and Associate Editor of the International Journal of New Music Research. His main research interests are algorithms for sound synthesis and analysis, models for expressiveness in music, multimedia systems and human-computer interaction, and the preservation and restoration of audio documents. He is the author of several international scientific publications, and has served on the scientific committees of international conferences. He is co-editor of the books Representations of Musical Signals (MIT Press, 1991) and Musical Signal Processing (Swets & Zeitlinger, 1996). Systems and research developed in his lab have been exploited in collaboration with the digital musical instrument industry (GeneralMusic). He is the owner of patents on digital music instruments.

Kristjan Dempwolf was born in Osterode am Harz, Germany, in 1978. After finishing an apprenticeship as an Electronic Technician in 2002 he studied Electrical Engineering at the Technical University Hamburg-Harburg (TUHH). He spent one semester at the Norwegian University of Science and Technology (NTNU) in 2006 and obtained his Diplom-Ingenieur degree in 2008. He is currently working on a Doctoral degree at the Helmut Schmidt University Hamburg, Germany. His main research interests are real-time modeling and nonlinear audio systems.

Sascha Disch received his Diplom-Ingenieur degree in electrical engineering from the Technische Universität Hamburg-Harburg (TUHH), Germany in 1999. From 1999 to 2007 he was with the Fraunhofer Institut für Integrierte Schaltungen (FhG-IIS), Erlangen, Germany. At Fraunhofer, he worked in research and development in the field of perceptual audio coding and audio processing, including the MPEG standardization of parametric coding of multi-channel sound (MPEG Surround). From 2007 to 2010 he was a researcher at the Laboratorium für Informationstechnologie, Leibniz Universität Hannover (LUH), Germany and is also a PhD candidate. Currently, he is again with Fraunhofer and is involved with research and development in perceptual audio coding. His research interests include audio signal processing/coding and digital audio effects, primarily pitch shifting and time stretching.

Pierre Dutilleux graduated in thermal engineering from the Ecole Nationale Supérieure des Techniques Industrielles et des Mines de Douai (ENSTIMD) in 1983 and in information processing from the Ecole Nationale Supérieure d'Electronique et de Radioélectricité de Grenoble (ENSERG) in 1985. From 1985 to 1991 he developed audio and musical applications for the Syter real-time audio processing system designed at INA-GRM by J.-F. Allouis. After developing a set of audio processing algorithms, as well as implementing the first wavelet analyser on a digital signal processor, he received a PhD in acoustics and computer music from the University of Aix-Marseille II in 1991 under the direction of J.-C. Risset. From 1991 through 2000 he worked as a research and development engineer at the ZKM (Center for Art and Media Technology) in Karlsruhe. There he planned computer and digital audio networks for a large digital-audio studio complex, and he introduced live electronics and physical modelling as tools for musical production. He contributed to multimedia works with composers such as K. Furukawa and M. Maiguashca. He designed and realised the AML (Architecture and Music Laboratory) as an interactive museum installation. He has been a German delegate of the Digital Audio Effects (DAFx) project. In 2000 he changed his professional focus from music and signal processing to wind energy. He applies his highly differentiated listening skills to the characterisation of the noise from wind turbines. He has been Head of Acoustics at DEWI, the German Wind Energy Institute. By performing diligent reviews of the acoustic issues of wind farm projects before construction, he can identify at an early stage the acoustic risks which might impair the acceptance of future wind farms by their neighbours.

Gianpaolo Evangelista is Professor in Sound Technology at Linköping University, Sweden, where he has headed the Sound and Video Technology research group since 2005. He received the Laurea in physics (summa cum laude) from the “Federico II” University of Naples, Italy, and the M.Sc. and Ph.D. degrees in electrical engineering from the University of California, Irvine. He has been with the Centre d'Etudes de Mathématique et Acoustique Musicale (CEMAMu/CNET), Paris, France, with the Microgravity Advanced Research and Support (MARS) Center, Naples, Italy, with the “Federico II” University of Naples, and with the Laboratory for Audiovisual Communications, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland. He is the author or coauthor of about 100 journal or conference papers and book chapters. He is a senior member of the IEEE and an active member of the DAFx (Digital Audio Effects) Scientific Committee. His interests are centered on audio signal representations, sound synthesis by physical models, digital audio effects, spatial audio, audio coding, wavelets and multirate signal processing.

Martin Holters was born in Hamburg, Germany, in 1979. He received the Master of Science degree from Chalmers Tekniska Högskola, Göteborg, Sweden, in 2003 and the Diplom-Ingenieur degree in computer engineering from the Technical University Hamburg-Harburg, Germany, in 2004. He then joined the Helmut Schmidt University – University of the Federal Armed Forces Hamburg, where he received the Dr.-Ingenieur degree in 2009. The topic of his dissertation was delay-free audio coding based on adaptive differential pulse code modulation (ADPCM) with adaptive pre- and post-filtering. Since 2009 he has been chief scientist in the Department of Signal Processing and Communications there. He is active in various fields of audio signal processing research, with his main focus still on audio coding and transmission.

Florian Keiler was born in Hamburg, Germany, in 1972. He received the Diplom-Ingenieur degree in electrical engineering from the Technical University Hamburg-Harburg (TUHH) in 1999 and the Dr.-Ingenieur degree from the Helmut Schmidt University – University of the Federal Armed Forces Hamburg in 2006. The topic of his dissertation was low-delay audio coding based on linear predictive coding (LPC) in subbands. Since 2005 he has been working in the audio and acoustics research laboratory of Technicolor (formerly Thomson) in Hanover, Germany. He is currently working in the field of spatial audio.

Tapio Lokki was born in Helsinki, Finland, in 1971. He has studied acoustics, audio signal processing, and computer science at the Helsinki University of Technology (TKK) and received an M.Sc. degree in electrical engineering in 1997 and a D.Sc. (Tech.) degree in computer science and engineering in 2002. At present Dr. Lokki is an Academy Research Fellow with the Department of Media Technology at Aalto University. In addition, he is an adjunct professor at the Department of Signal Processing and Acoustics at Aalto. Dr. Lokki leads his virtual acoustics team which aims to create novel objective and subjective ways to evaluate concert hall acoustics. In addition, the team develops physically-based room acoustics modeling methods to obtain authentic auralization. Furthermore, the team studies augmented reality audio and eyes-free user interfaces. The team is funded by the Academy of Finland and by Dr. Lokki's Starting Grant from the European Research Council (ERC). Dr. Lokki is a member of the editorial board of Acta Acustica united with Acustica. Dr. Lokki is a member of the Audio Engineering Society, the IEEE Computer Society, and Siggraph Helsinki Finland. In addition, he is the president of the Acoustical Society of Finland.

Alex Loscos received the B.S. and M.S. degrees in Signal Processing Engineering in 1997. In 1998 he joined the Music Technology Group of the Universitat Pompeu Fabra in Barcelona. After a few years as a researcher, lecturer, developer and project manager, he co-founded Barcelona Music & Audio Technologies (BMAT), a spin-off company of the research lab, in 2006. In 2007 he received his PhD in computer science and shortly afterwards became Chief Strategy Officer at BMAT. A year and a half later he took over the position of Chief Executive Officer, which he currently holds. Alex is also passionate about music, an accomplished composer, and a member of bands with international distribution.

Sylvain Marchand has been an associate professor in the image and sound research team of the LaBRI (Computer Science Laboratory), University of Bordeaux 1, since 2001. He is also a member of the “Studio de Création et de Recherche en Informatique et Musique Électroacoustique” (SCRIME). Regarding the international DAFx (Digital Audio Effects) conference, he has been a member of the Scientific Committee since 2006, was Chair of the 2007 conference held in Bordeaux, and has attended all DAFx conferences since the first one in 1998, where he gave his first presentation as a PhD student. He is now involved in several international conferences on musical audio, and he is also an associate editor of the IEEE Transactions on Audio, Speech, and Language Processing. Dr. Marchand is particularly involved in musical sound analysis, transformation, and synthesis. He focuses on spectral representations, taking perception into account. Among his main research topics are sinusoidal models, analysis/synthesis of deterministic and stochastic sounds, sound localization/spatialization (“3D sound”), separation of sound entities (sources) present in polyphonic music, and “active listening” (enabling the user to interact with the musical sound while it is played).

Jyri Pakarinen (1979- ) received MSc and DSc (Tech.) degrees in acoustics and audio signal processing from the Helsinki University of Technology, Espoo, Finland, in 2004 and 2008, respectively. He is currently working as a post-doctoral researcher and a lecturer in the Department of Signal Processing and Acoustics, Aalto University School of Science and Technology. His main research interests are digital emulation of electric audio circuits, sound synthesis through physical modeling, and vibro- and electroacoustic measurements. As a semiprofessional guitar player, he is also interested and involved in music activities.

Enrique Perez Gonzalez was born in 1978 in Mexico City. He studied communications and electronics engineering at the ITESM University in Mexico City, where he graduated in 2002. During his engineering studies he did a one-year internship at RMIT in Melbourne, Australia, where he specialized in audio. From 1999 to 2005 he worked at the audio rental company SAIM, one of the biggest audio companies in Mexico, as a technology manager and audio system engineer for many international concerts. He graduated with distinction with an MSc in music technology from the University of York in 2006, where he worked on delta-sigma modulation systems. He completed his PhD in 2010 on advanced tools for automatic mixing at the Centre for Digital Music at Queen Mary, University of London.

Mark Plumbley has investigated audio and music signal analysis, including beat tracking, music transcription, source separation and object coding, using techniques such as neural networks, independent component analysis, sparse representations and Bayesian modeling. Prof. Plumbley joined Queen Mary, University of London (QMUL) in 2002. He holds an EPSRC Leadership Fellowship on Machine Listening using Sparse Representations, and in September 2010 he became Director of the Centre for Digital Music at QMUL. He is chair of the International Independent Component Analysis (ICA) Steering Committee, a member of the IEEE Machine Learning for Signal Processing Technical Committee, and an Associate Editor for the IEEE Transactions on Neural Networks.

Ville Pulkki received his MSc and DSc (Tech.) degrees from Helsinki University of Technology in 1994 and 2001, respectively. He majored in acoustics, audio signal processing and information sciences. Between 1994 and 1997 he was a full-time student at the Department of Musical Education at the Sibelius Academy. In his doctoral dissertation he developed vector base amplitude panning (VBAP), a method for positioning virtual sources on any loudspeaker configuration. In addition, he studied the performance of VBAP with psychoacoustic listening tests and with modeling of auditory localization mechanisms. The VBAP method is now widely used in multi-channel virtual auditory environments and in computer music installations. Later, he and his group developed a method for spatial sound reproduction and coding, directional audio coding (DirAC). DirAC takes coincident first-order microphone signals as input and processes them for output to arbitrary loudspeaker layouts or to headphones. The method is currently being commercialized. He is also developing a computational functional model of the brain organs devoted to binaural hearing, based on knowledge from neurophysiology, neuroanatomy and psychoacoustics. He leads a research group of 10 researchers at Aalto University (formerly Helsinki University of Technology, TKK or HUT). The group also conducts research on new methods to measure head-related transfer functions, and carries out psychoacoustical experiments to better understand spatial sound perception by humans. Dr. Pulkki enjoys being with his family (wife and two children), playing various musical instruments, and building his summer place. He is the chairman of the AES Finnish Section and the co-chair of the AES Technical Committee on Spatial Audio.

Josh Reiss is a senior lecturer with the Centre for Digital Music at Queen Mary, University of London. He received his PhD in physics from Georgia Tech. He made the transition to audio and musical signal processing through his work on sigma delta modulators, which led to patents and a nomination for a best paper award from the IEEE. He has investigated music retrieval systems, time scaling and pitch-shifting techniques, polyphonic music transcription, loudspeaker design, automatic mixing for live sound and digital audio effects. Dr. Reiss has published over 80 scientific papers and serves on several steering and technical committees. As coordinator of the EASAIER project, he led an international consortium of seven partners working to improve access to sound archives in museums, libraries and cultural heritage institutions. His primary focus of research, which ties together many of the above topics, is on state-of-the-art signal processing techniques for professional sound engineering.

Davide Rocchesso received the PhD degree from the University of Padua, Italy, in 1996. Between 1998 and 2006 he was with the Computer Science Department at the University of Verona, Italy, as an Assistant and then Associate Professor. Since 2006 he has been with the Department of Art and Industrial Design of the IUAV University of Venice, as Associate Professor. He has been the coordinator of the EU project SOb (the Sounding Object) and local coordinator of the EU project CLOSED (Closing the Loop Of Sound Evaluation and Design) and of the Coordination Action S2S (Sound-to-Sense; Sense-to-Sound). He has been chairing the COST Action IC-0601 SID (Sonic Interaction Design). Davide Rocchesso has authored or co-authored over one hundred publications in scientific journals, books, and conferences. His main research interests are sound modeling for interaction design, sound synthesis by physical modeling, and design and evaluation of interactions.

Xavier Serra is Associate Professor of the Department of Information and Communication Technologies and Director of the Music Technology Group at the Universitat Pompeu Fabra in Barcelona. After a multidisciplinary academic education he obtained a PhD in computer music from Stanford University in 1989 with a dissertation on the spectral processing of musical sounds that is considered a key reference in the field. His research interests cover the understanding, modeling and generation of musical signals by computational means, with a balance between basic and applied research and approaches from both scientific/technological and humanistic/artistic disciplines.

Julius O. Smith teaches a music signal-processing course sequence and supervises related research at the Center for Computer Research in Music and Acoustics (CCRMA). He is formally a professor of music and associate professor (by courtesy) of electrical engineering at Stanford University. In 1975, he received his BS/EE degree from Rice University, where he got a solid start in the field of digital signal processing and modeling for control. In 1983, he received the PhD/EE degree from Stanford University, specializing in techniques for digital filter design and system identification, with application to violin modeling. His work history includes the Signal Processing Department at Electromagnetic Systems Laboratories, Inc., working on systems for digital communications, the Adaptive Systems Department at Systems Control Technology, Inc., working on research problems in adaptive filtering and spectral estimation, and NeXT Computer, Inc., where he was responsible for sound, music, and signal processing software for the NeXT computer workstation. Prof. Smith is a Fellow of the Audio Engineering Society and the Acoustical Society of America. He is the author of four online books and numerous research publications in his field.

Vesa Välimäki (1968- ) is Professor of Audio Signal Processing at the Aalto University, Department of Signal Processing and Acoustics, Espoo, Finland. He received the Doctor of Science in technology degree from Helsinki University of Technology (TKK), Espoo, Finland, in 1995. He has published more than 200 papers in international journals and conferences. He has organized several special issues in scientific journals on topics related to musical signal processing. He was the chairman of the 11th International Conference on Digital Audio Effects (DAFX-08), which was held in Espoo in 2008. During the academic year 2008–2009 he was on sabbatical leave under a grant from the Academy of Finland and spent part of the year as a Visiting Scholar at the Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, CA. He currently serves as an associate editor of the IEEE Transactions on Audio, Speech and Language Processing. His research interests are sound synthesis, audio effects processing, digital filters, and musical instrument acoustics.

Vincent Verfaille (1974- ) studied applied mathematics at INSA (Toulouse, France), graduating as an engineer in 1997. He then made a career change, studying music technology (DEA-ATIAM, Université Paris VI, France, 2000; PhD in music technology at CNRS-LMA and Université Aix-Marseille II, France, 2003) and adaptive audio effects. He then spent a few years (2003–2009) as a post-doctoral researcher and then as a research associate in both the Sound Processing and Control Lab (SPCL) and the Input Device for Musical Interaction Lab (IDMIL) at the Schulich School of Music (McGill University, CIRMMT), where he worked on sound synthesis and control. He also taught digital audio effects and sound transformation at ENSEIRB and Université Bordeaux I (Bordeaux, France, 2002–2006), signal processing at McGill University (Montreal, Canada, 2006), and musical acoustics at the University of Montréal (Montréal, Canada, 2008). He is now making another career change, far away from computers and music.

Emmanuel Vincent received the BSc degree in mathematics from École Normale Supérieure in 2001 and the PhD degree in acoustics, signal processing and computer science applied to music from Université Pierre et Marie Curie, Paris, France, in 2004. After working as a research assistant with the Center for Digital Music at Queen Mary College, London, UK, he joined the French National Research Institute for Computer Science and Control (INRIA) in 2006 as a research scientist. His research focuses on probabilistic modeling of audio signals applied to source separation, information retrieval and coding. He is the founding chair of the annual Signal Separation Evaluation Campaign (SiSEC) and a co-author of the toolboxes BSS Eval and BSS Oracle for the evaluation of source separation systems.

Adrian von dem Knesebeck (1982- ) received his Diplom-Ingenieur degree in electrical engineering from the Technical University Hamburg-Harburg (TUHH), Germany, in 2008. Since 2009 he has been working as a research assistant in the Department of Signal Processing and Communications at the Helmut Schmidt University in Hamburg. He has been involved in several audio research projects and collaboration projects with external companies, and is currently working on his PhD thesis.

Udo Zölzer (1958- ) received the Diplom-Ingenieur degree in electrical engineering from the University of Paderborn in 1985, the Dr.-Ingenieur degree from the Technical University Hamburg-Harburg (TUHH) in 1989 and completed a Habilitation in communications engineering at the TUHH in 1997. Since 1999 he has been a professor and head of the Department of Signal Processing and Communications at the Helmut Schmidt University – University of the Federal Armed Forces in Hamburg, Germany. His research interests are audio and video signal processing and communication. He is a member of the AES and the IEEE.


Contents