Platform Grant: Centre for Digital Music

Lead Research Organisation: Queen Mary University of London
Department Name: Sch of Electronic Eng & Computer Science

Abstract

Digital Music is a rapidly growing research area covering a range of disciplines, including computer science, information science, audio engineering, digital signal processing and musicology. The music industry is undergoing a radical change from a physical to a digital distribution model. Digital music downloads are above $1.1bn worldwide and rising rapidly; in the UK they grew by over 350% in 2005. The internet and peer-to-peer (P2P) technologies have opened the possibility for new or small bands to release their music to a wide range of listeners, although the problem remains of how those listeners may find them and their music. Finally, music is not simply a commodity to be delivered to consumers, but also a medium for creative expression and social interaction: people do not merely consume music but engage with it.

The Centre for Digital Music (C4DM) at Queen Mary University of London is a world-leading multidisciplinary research group in the field of Music and Audio Technology. Our current research is focussed in two main areas: signal processing of digital music, and digital music performance and interaction. By early 2007 C4DM will have about 40 full-time members, including academic staff, research staff, research students and visitors.

This Platform Grant will provide the Centre for Digital Music with background funding to allow us to further enhance our international research reputation, and to continue to be a major contributor to the UK research base in this area. This will be achieved through:

* Retention of key research staff;
* Retention of key graduating PhD students as named researchers;
* An internship and outreach scheme;
* Maximizing research impact and exploitation; and
* Adventurous interdisciplinary projects.

These activities will be in addition to the usual research projects, funded by EPSRC, the EU and others, which we will continue through the duration of the Platform Grant and beyond.

Publications

Reiss JD (2008) Verification of chaotic behavior in an experimental loudspeaker in The Journal of the Acoustical Society of America

Robertson A (2015) Event-based Multitrack Alignment using a Probabilistic Framework in Journal of New Music Research

Robertson AN (2009) Post-processing fiddle~: A real-time multi-pitch tracking technique using harmonic partial subtraction for use within live performance systems in Proceedings of the 2009 International Computer Music Conference, ICMC 2009

Dixon S (2011) The Temperament Police: The Truth, the Ground Truth and Nothing but the Truth in Proceedings of the 12th International Society for Music Information Retrieval Conference

Tidhar D (2014) The temperament police in Early Music

Uhle C (2010) Determined source separation for microphone recordings using IIR filters in 129th Audio Engineering Society Convention 2010

 
Description The aim of this Platform Grant was to maintain and strengthen the Centre for Digital Music as a world-leading research group, through background funding that provided stability and flexibility in our recruitment of key research staff. We pursued this through the retention of key research staff, the retention of key graduating PhD students as researchers, maximizing impact (e.g. through presentations at events), interdisciplinary projects and feasibility studies, and internships, outreach and visits.
Exploitation Route Potential beneficiaries of the research supported by this Platform Grant outside the academic research community include anyone who could benefit from the latest research in audio and music. Examples from various sectors are given below.

Commercial private sector:

* Commercial companies designing audio equipment, through easier access to new audio research

* Companies wishing to provide new ways for their customers to discover or access music based on music recommendation research

* Musicians and sound artists, through their ability to use new research methods and processes for new creative outputs

* Computer games companies, for improved game music and audio, or for new types of audio or music-based computer games

* Hearing aid and cochlear implant manufacturers, through access to new research applicable to hearing improvement

* Audio archiving companies, through access to the latest algorithms and methods for music information retrieval

* Television and radio companies, through the ability to use the latest research in the design of new technology-based programmes or in new audio production processes.

Policy-makers and others in government and government agencies:

* Police and security services, through access to methods for analysing and separating audio signals.

Public sector, third sector and others:

* Healthcare workers who work with music and audio, such as music therapists, through new music and audio visualization tools

* Museums, through tools to measure and visualise acoustic properties of objects

* Libraries, through improved open access to our research results for the benefit of their users

* Standards organizations, through access to new research methods on which to base forthcoming standards

* Science promotion organizations, through the availability of high-quality, usable research tools that are attractive to people with an interest in science.

Wider public:

* People interested in exploring music or other audio recordings at home, school, college or university, using the latest research, either for educational or general interest purposes

* Teachers in schools, colleges or universities who want to use our research-based software for teaching audio or music

* Audiences of creative output involving audio and music, through the availability of new creative outputs or technology facilitated by our audio and digital music research.

Beneficiaries within the academic community include: other researchers in audio and digital music research; researchers in other fields who might use the results of our research, including musicologists, audio engineers, bio-acousticians and auditory psychologists; and researchers in other fields, such as medical signal processing or general semantic web research, who use techniques related to ours.
Sectors Creative Economy, Digital/Communication/Information Technologies (including Software)

URL http://c4dm.eecs.qmul.ac.uk/platform.html
 
Description Impacts include a number of artistic outputs: an experimental piano piece with Sarah Nicolls, a wearable musical instrument (the Serendiptichord) created with artist Di Mainstone, and an Augmented Instruments Concert. A Beat and Rhythm Warping software API has been used in a rhythm morphing app by LickWorx.
First Year Of Impact 2008
Sector Creative Economy
Impact Types Cultural,Economic

 
Description Platform Grant: Digital Music
Amount £1,161,334 (GBP)
Funding ID EP/K009559/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 01/2013 
End 01/2018
 
Description Research visit by Maria Panteli to New York University (NYU) 
Organisation New York University
Country United States 
Sector Academic/University 
PI Contribution Panteli's PhD is part of a larger collaboration, the "Deep History of Music" (DHOM) project, involving researchers from six universities. This research visit was expected to contribute to new data resources collected from NYU and affiliated institutions, and is also linked to a grant proposal written by Professor Armand Leroi (Imperial College London) in collaboration with Queen Mary. During her stay at New York University between 22 May and 15 August 2016, Panteli met people from both academia and industry. In particular, she attended a meeting between academics of New York University and NYU Abu Dhabi discussing potential collaborations between their institutions and Queen Mary. Following this meeting, she was invited to attend the 3rd workshop on "Cross-disciplinary and Multicultural Aspects of Musical Rhythms", Abu Dhabi, 17-20 March 2017, as a way to keep the NYU-QM collaboration going. She also attended local events held at Spotify New York and presented her work to an industry audience, promoting the research of Queen Mary University of London. The findings of this work are summarised in a research paper (see below) submitted to the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (2017). The work focuses on the characterisation of singing styles in world music and includes the development of a software package and an interactive visualisation.
Collaborator Contribution The purpose of this research visit was to increase knowledge and resources through collaboration with the music technology group at New York University (NYU). Panteli and her collaborators developed a software package and an interactive visualisation for the characterisation of singing styles in world music. She also developed methods for representing music metadata using graphs, in particular hypergraphs, and applied community detection algorithms to identify meaningful partitions of the graph. Panteli collaborated with faculty and students in the Music Research Lab at NYU, as evidenced by the co-authored paper: M. Panteli, R. Bittner, J. P. Bello and S. Dixon, "Towards the Characterization of Singing Styles in World Music", IEEE International Conference on Acoustics, Speech and Signal Processing, submitted.
Impact For testing the algorithms developed in this research, music/speech annotations were created for a corpus of 360 world music recordings, and melody contours were annotated for a set of 30 world music recordings. Other outcomes of the visit include the software package and interactive visualisation for characterising singing styles in world music, the graph-based (hypergraph) representation of music metadata with community detection used to identify meaningful partitions (see the illustrative sketch after this entry), and the co-authored paper listed above under Collaborator Contribution.
Start Year 2016
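As a purely illustrative aside, the graph-based metadata analysis mentioned in the entry above can be sketched in a few lines of Python: hyperedges grouping recordings that share a metadata attribute are expanded into a weighted graph, which is then partitioned with a standard community detection algorithm. This is a minimal sketch, not the project's own code; the recordings, attributes and the choice of networkx's greedy modularity method are assumptions made for illustration only.

```python
# Minimal sketch (not the project's code): represent shared music metadata as a
# graph and partition it with a community detection algorithm.
# Assumes networkx is installed; recordings and attributes below are hypothetical.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each hyperedge groups recordings sharing one metadata attribute
# (e.g. country or language), approximating a metadata hypergraph.
hyperedges = {
    "country:Greece":   ["rec1", "rec2", "rec5"],
    "country:Mali":     ["rec3", "rec4"],
    "language:Greek":   ["rec1", "rec2"],
    "language:Bambara": ["rec3", "rec4", "rec6"],
}

# Clique expansion: connect every pair of recordings that co-occur in a
# hyperedge, weighting edges by how many attributes the pair shares.
G = nx.Graph()
for members in hyperedges.values():
    for a, b in combinations(members, 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

# Community detection to find meaningful partitions of the metadata graph.
for i, community in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(f"community {i}: {sorted(community)}")
```

Clique expansion is the simplest way to reduce a hypergraph to an ordinary graph; hypergraph-aware partitioning methods exist but are beyond the scope of this sketch.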
 
Title QM Live Music Lab 
Description A suite of software, based on our research methods, for use in live performance 
Type Of Technology Software 
Year Produced 2011 
Open Source License? Yes  
Impact Stark co-founded the start-up Codasign 
URL http://livemusiclab.eecs.qmul.ac.uk/
 
Title SMALLbox 
Description Matlab toolbox for processing signals using adaptive sparse structured representations 
Type Of Technology Software 
Year Produced 2010 
Open Source License? Yes  
Impact Development was continued by a sister FP7 project. The toolbox has been downloaded over 20,000 times. 
URL https://code.soundsoftware.ac.uk/projects/smallbox
 
Title Real-time music transcription 
Description An open-source VAMP plugin version of a real-time music transcription algorithm. 
Type Of Technology Software 
Year Produced 2009 
Impact This was the first of its kind. It has been downloaded thousands of times and has been used in further research at other institutions. 
 
Description Invited talk at industry convention 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact The talk raised the profile of C4DM and its research among the audio engineering community in Latin America.

It led to further invitations to give talks and lead workshops at AES conventions, and to discussion of this new and emerging field in industry trade magazines and journals.
Year(s) Of Engagement Activity 2010
 
Description Networking at Audio Engineering Society Convention 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Reiss and Perez attended the 123rd AES Convention (New York, 2007) to explore collaborative research and exploitation possibilities of their automatic mixing technology. The discussions led to follow-up meetings and collaborations, described below.

This led to new links with DTS (and a subsequent return visit), Sony Entertainment, Midas Klark Teknik, and Wolfson Electronics. Other discussions with Djurek et al. (University of Zagreb) led to an AES convention paper and a journal paper in JASA (Reiss et al., 2008a,b).
This trip was a major contributor to the creation of the spin-off company MixGenius.
Year(s) Of Engagement Activity 2007