DaCaRyH (Data science for the study of calypso-rhythm through history)

Lead Research Organisation: Queen Mary University of London
Department Name: Sch of Electronic Eng & Computer Science

Abstract

DaCaRyH (Data science for the study of calypso-rhythm through history) is a collaboration of ethnomusicologists and archivists in France, and data scientists and composers in the UK. DaCaRyH has three objectives: 1) to enrich the domain of ethnomusicology by integrating data science and music information retrieval (MIR) methods into ethnomusicological archives and research practices; 2) to enrich the domains of data science and MIR by integrating ethnomusicological use cases into the research and development of intelligent systems; 3) to study the concept of musical style through a comparative diachronic analysis of a music corpus, and to transform the features extracted from that corpus into new styles. DaCaRyH is aligned primarily with "'The Digital Age' and its effects on tangible and intangible heritage", and secondarily with "representations and uses of the past". DaCaRyH will work specifically with the music tradition of the steel band calypso. This provides a focus on a variety of real and challenging ethnomusicological questions, which in turn drive the development of data science and MIR technologies. DaCaRyH helps pave the way to "big cultural data": the analysis of human culture at scales not possible without computational methods. DaCaRyH involves the Research Center for Ethnomusicology (CREM-LESC, France) and the Centre for Digital Music (C4DM, Queen Mary University of London, UK). CREM-LESC offers access to a large database of ethnomusicological recordings, accessible worldwide through an online platform. C4DM is a world-leading group of specialists in data science applied to music. DaCaRyH will result in: two journal submissions (one in each PI's respective field); a call for a special journal issue concerning cultural studies and data science; a music composition and performance project involving the tools developed in DaCaRyH; and improved functionality integrated with the CREM-LESC ethnomusicological recordings archive.
 
Title Bastard Tunes 
Description Composition for ensemble by Oded Ben-Tal 
Type Of Art Composition/Score 
Year Produced 2017 
Impact Media attention: https://www.inverse.com/article/32276-folk-music-ai-folk-rnn-musician-s-best-friend and three public performances: May 23 2017 (London), Nov. 20 2017 (London), Dec. 9 2017 (Hamburg) 
URL https://www.youtube.com/playlist?list=PLdTpPwVfxuXpQ03F398HH463SAE0vR2X8
 
Title Composition: Safe Houses 
Description A work by Irish harper Úna Monaghan using the output of our music generation system. 
Type Of Art Composition/Score 
Year Produced 2017 
Impact A continued collaboration with Úna Monaghan. 
URL https://youtu.be/x6LS9MbQj7Y
 
Title Composition: Two show pieces and an interlude 
Description These pieces were composed using our music generation system. 
Type Of Art Composition/Score 
Year Produced 2017 
Impact Three public performances: May 23 2017 (London), Nov. 20 2017 (London), Dec. 9 2017 (Hamburg) 
URL https://soundcloud.com/sturmen-1
 
Title The folk-rnn Session Books 
Description The 100,000 tunes in these 24 volumes were generated entirely by three computer models trained on over 23,000 tunes contributed by users of thesession.org. Each tune appears both in ABC notation and in staff notation. (A brief illustration of the ABC format follows this record.) 
Type Of Art Composition/Score 
Year Produced 2017 
Impact Selections of the transcriptions have received public performances:
Nov. 18 2016: Torbjörn Hultmark (http://hultmark.me) at "C4DM presents A brief evening of electroacoustic music", QMUL: https://www.youtube.com/watch?v=4kLxvJ-rXDs
May 23 2017: Úna Monaghan: https://youtu.be/x6LS9MbQj7Y
Nov. 20 2017: Luca Turchet: https://youtu.be/pkf3VqPieoo
Dec. 9 2017: John Hughes: https://youtu.be/GmwYtNgHW4g 
URL https://highnoongmt.wordpress.com/2018/01/05/volumes-1-20-of-folk-rnn-v1-transcriptions/
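For readers unfamiliar with the format, here is a minimal sketch of what an ABC transcription looks like. The tune below is invented for illustration and is not drawn from the volumes; the parsing step is likewise only an assumption about how one might inspect such a file.

# A sketch of the ABC format in which each tune in the volumes is printed.
# This tune is invented here for illustration; it is not one of the 100,000
# generated transcriptions.
tune_abc = """X:1
T:Example Reel
M:4/4
L:1/8
K:Dmaj
|:d2fd Adfd|cdec ABcA|d2fd Adfa|gece d2d2:|"""

# Header fields give index (X), title (T), metre (M), unit note length (L) and
# key (K); the body encodes pitches as letters, durations as numbers, bars as
# '|' and repeats as '|:' and ':|'.
header = [ln for ln in tune_abc.splitlines()
          if len(ln) > 1 and ln[0].isalpha() and ln[1] == ":"]
print(header)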
 
Description The representation of music data can have a major impact on the quality of its computational modelling. We have studied two models, each built using a different representation of the same data: one built from high-level textual transcriptions, and the other the Google Magenta model (basic_rnn) built from low-level MIDI representations. Through expert elicitation we have found that the model built from the high-level textual representation is far more successful at generating music that observes the expected conventions of the style. (A minimal illustration contrasting the two representations follows this record.) Another key finding -- really a demonstration of best practice -- is that involving practitioners early in the modelling effort can circumvent tricky ethical questions and streamline the process. Finally, we have found that a variety of common quantitative features have limited applicability in answering ethnomusicological questions.
Exploitation Route Our software models are freely available for exploration: https://github.com/IraKorshunova/folk-rnn
Sectors Creative Economy; Education; Culture, Heritage, Museums and Collections

URL https://highnoongmt.wordpress.com
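To make the representation comparison above concrete, here is a minimal hypothetical illustration of the same short fragment encoded both ways. The token vocabulary and event fields are assumptions for illustration, not the project's exact encodings.

# Hypothetical illustration: the same two bars as high-level ABC tokens and as
# low-level MIDI-like note events.
abc_tokens = ["M:6/8", "K:Gmaj", "|:", "G", "A", "B", "d2", "B", "|",
              "A", "G", "A", "B3", ":|"]

# The same fragment as (MIDI pitch, onset in ticks, duration in ticks) events,
# roughly what a piano-roll style MIDI representation encodes.
midi_events = [(67, 0, 120), (69, 120, 120), (71, 240, 120), (74, 360, 240),
               (71, 600, 120), (69, 720, 120), (67, 840, 120), (69, 960, 120),
               (71, 1080, 360)]

# The ABC tokens state metre, key, and repeat structure explicitly; the MIDI
# events leave all of that implicit, one plausible reason a model trained on
# the former more readily generates conventionally structured tunes.
print(len(abc_tokens), "ABC tokens vs", len(midi_events), "note events")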
 
Description We have exploited our findings on music representations to engage audiences with new music. During our project, we organised one workshop and two concerts that demonstrated to the general public how computers and machine learning can be used to augment music traditions. These events helped us engage professional musicians and build lasting collaborations with them. We also gained media attention, which has attracted further collaborations.
First Year Of Impact 2017
Sector Creative Economy; Culture, Heritage, Museums and Collections
Impact Types Cultural

 
Description AHRC Follow-on Funding for Impact and Engagement
Amount £70,990 (GBP)
Funding ID AH/R004706/1 
Organisation Arts & Humanities Research Council (AHRC) 
Sector Public
Country United Kingdom
Start 09/2017 
End 07/2018
 
Description Joint project with CREM/LESC, CNRS, France 
Organisation Paris West University Nanterre La Défense
Country France 
Sector Academic/University 
PI Contribution Technical perspectives on the state of the art in computational ethnomusicology practices.
Collaborator Contribution Scholarly uses of state-of-the-art computational ethnomusicology for ethnomusicological research questions.
Impact Joint journal article: "Quel devenir pour l'ethnomusicologie ? Zoom arrière : L'ethnomusicologie à l'ère du Big Data" ("What future for ethnomusicology? Zooming out: ethnomusicology in the era of Big Data"), submitted Dec. 15 2016 to a special issue of "Cahiers d'ethnomusicologie", France.
Start Year 2016
 
Description Kingston University 
Organisation Kingston University London
Country United Kingdom 
Sector Academic/University 
PI Contribution Technical approaches to and analysis of music transcription modelling
Collaborator Contribution Creative applications of transcription models, and perspective of "composition teacher" for analysis of results.
Impact - 2016: concert at QMUL featuring the work: https://sites.google.com/site/c4dmconcerts1617/home/fixedmedia/brief
- March 2017: workshop at the Inside Out Festival: http://www.insideoutfestival.org.uk/2017/events/folk-music-composed-by-a-computer/
- May 2017: concert at QMUL featuring the work: https://www.eventbrite.co.uk/e/partnerships-tickets-31992055098
Start Year 2016
 
Title NiMFKS 
Description This software provides a means for concatenative sound synthesis via non-negative matrix factorisation (NMF). (A minimal sketch of the underlying technique follows this record.) 
Type Of Technology Software 
Year Produced 2017 
Impact Paper at conference: "Nicht-negative Matrix Faktorisierung nutzendes Klangsynthesen System (NiMFKS): Extensions of NMF-based concatenative sound synthesis", Proc. Int. Conf. Digital Audio Effects, 2017. 
URL https://code.soundsoftware.ac.uk/projects/nimfks
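NiMFKS itself is a separate implementation; the following minimal Python sketch only illustrates the general idea of NMF-based concatenative synthesis (fit activations of corpus spectral templates to a target spectrogram, then concatenate grains of the most active corpus sounds). All names, parameters and the toy signals here are illustrative assumptions.

import numpy as np

def stft_mag(x, win=512, hop=256):
    # Magnitude spectrogram from a Hann-windowed short-time Fourier transform.
    w = np.hanning(win)
    frames = np.array([x[i:i + win] * w for i in range(0, len(x) - win, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape (freq, time)

def nmf_activations(V, W, n_iter=200, eps=1e-9):
    # With the dictionary W fixed (one spectral template per corpus sound),
    # learn activations H by the standard multiplicative updates for the
    # Kullback-Leibler divergence.
    H = np.random.rand(W.shape[1], V.shape[1])
    for _ in range(n_iter):
        H *= (W.T @ (V / (W @ H + eps))) / (W.T.sum(axis=1, keepdims=True) + eps)
    return H

# Toy corpus of three sine tones and a toy target built from the same pitches.
sr = 8000
t = np.arange(sr) / sr
corpus = [np.sin(2 * np.pi * f * t) for f in (220, 330, 440)]
target = np.concatenate(
    [np.sin(2 * np.pi * f * np.arange(sr // 2) / sr) for f in (440, 220, 330)])

W = np.hstack([stft_mag(c).mean(axis=1, keepdims=True) for c in corpus])
V = stft_mag(target)
H = nmf_activations(V, W)

# Concatenative resynthesis: for each target frame, take a grain from the
# corpus sound with the strongest activation, and string the grains together.
hop = 256
y = np.concatenate([corpus[int(np.argmax(H[:, j]))][:hop] for j in range(H.shape[1])])
print("Synthesised", y.size, "samples from", H.shape[1], "corpus grains")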
 
Title folk-rnn 
Description Further development of our recurrent neural network for modelling music transcriptions. (A minimal sketch of the underlying approach follows this record.) 
Type Of Technology Software 
Year Produced 2017 
Impact Many are listed on the project homepage: https://github.com/IraKorshunova/folk-rnn 
URL https://github.com/IraKorshunova/folk-rnn
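The repository above contains the project's own implementation; the following is only a minimal, hypothetical PyTorch-style sketch of the core idea -- a token-level LSTM trained by next-token prediction on ABC transcriptions -- with invented toy data and hyperparameters.

import torch
import torch.nn as nn

# Hypothetical toy data: each transcription is a short sequence of ABC tokens.
tunes = [["M:6/8", "K:Cmaj", "G", "A", "B", "c2", "B", "A", "|"],
         ["M:4/4", "K:Gmaj", "d", "g", "f", "g2", "|"]]
vocab = sorted({tok for tune in tunes for tok in tune})
idx = {tok: i for i, tok in enumerate(vocab)}

class TuneLSTM(nn.Module):
    def __init__(self, n_tokens, dim=64):
        super().__init__()
        self.emb = nn.Embedding(n_tokens, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_tokens)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)  # logits for the next token at every position

model = TuneLSTM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Train by next-token prediction over each transcription.
for step in range(100):
    for tune in tunes:
        seq = torch.tensor([[idx[tok] for tok in tune]])
        logits = model(seq[:, :-1])
        loss = loss_fn(logits.reshape(-1, len(vocab)), seq[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

# Generate by repeatedly extending a seed token (greedy choice here).
seq = torch.tensor([[idx["M:6/8"]]])
for _ in range(8):
    nxt = model(seq)[:, -1].argmax(dim=-1, keepdim=True)
    seq = torch.cat([seq, nxt], dim=1)
print([vocab[i] for i in seq[0].tolist()])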
 
Description Article in The Conversation 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The article we wrote led to several further media requests and stories:
Dec. 23 2017: "AI Has Been Creating Music and the Results Are...Weird", PC Mag: http://uk.pcmag.com/news/92577/ai-has-been-creating-music-and-the-results-areweird
Nov. 18 2017: Le Tube avec Stéphane Bern et Laurence Bloch, France: https://www.youtube.com/watch?v=LQQER9479Xk
June 3 2017: "An A.I. in London is Writing Its Own Music and It Sounds Heavenly", Inverse: https://www.inverse.com/article/32276-folk-music-ai-folk-rnn-musician-s-best-friend
June 8 2017: "Computer program created to write Irish trad tunes", The Irish Times: http://www.irishtimes.com/business/technology/computer-program-created-to-write-irish-trad-tunes-1.3112238
June 19 2017: "Folk-RNN is the Loquantur Rhythm artist of June" (providing on-hold music for phone calls): https://zc1.campaign-view.com/ua/SharedView?od=11287eca6b3187&cno=11a2b0b20c9c037&cd=12a539b2f47976f3&m=4 (a sample: https://highnoongmt.wordpress.com/2017/06/17/deep-learning-on-hold/)
June 18 2017: "Real Musicians Evaluate Music Made by Artificial Intelligence", Motherboard: https://motherboard.vice.com/en_us/article/irish-folk-music-ai
June 1 2017: "Can an AI Machine Hold Copyright Protection Over Its Work?", Art Law Journal: https://artlawjournal.com/ai-machine-copyright/
May 26 2017: The Daily Mail dubbed our project "Bot Dylan" (http://www.dailymail.co.uk/sciencetech/article-4544400/Researchers-create-computer-writes-folk-music.html), though it did not link to our article, and the video it edited contains no computer-generated music.
April 13 2017: "Eine Maschine meistert traditionelle Folk-Music" ("A machine masters traditional folk music"), SRF: http://www.srf.ch/kultur/netzwelt/eine-maschine-meistert-traditionelle-folk-music
Year(s) Of Engagement Activity 2017
URL https://theconversation.com/machine-folk-music-composed-by-ai-shows-technologys-creative-side-74708
 
Description Autumn concert 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact Concert of electronic and computer music, some of which resulted from project outcomes.
Year(s) Of Engagement Activity 2016
URL https://sites.google.com/site/c4dmconcerts1617/home/fixedmedia/brief
 
Description Concert: May 23 2017 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact This unique concert will feature works created with computers as creative partners, drawing on a uniquely human tradition: instrumental folk music. We aren't so interested in whether a computer can compose a piece of music as well as a human, but in how composers, musicians and engineers can use artificial intelligence to explore creative domains we hadn't thought of before. This follows recent sensational stories of artificial intelligence producing both remarkable achievements -- a computer beating humans at Jeopardy! -- and unintended consequences -- a chatbot mimicking racist tropes. We are now living in an age, for better or worse, when artificial intelligence is seamlessly integrated into the daily life of many. It is easy to feel surrounded and threatened, but at the same time empowered, by these new tools. Find more information in our recent article at The Conversation: 'Machine folk' music composed by AI shows technology's creative side.

Our concert is centred around a computer program we have trained on over 23,000 "Celtic" tunes -- the kind typically played in communities and festivals around Ireland, France and the UK. We will showcase works in which composers and musicians co-create music with our program, drawing upon the features it has learned from this tradition and combining them with human imagination. A trio of traditional Irish musicians led by Daren Banarsë will play three sets of computer-generated "Celtic" tunes. Ensemble x.y will perform a work by Oded Ben-Tal, a 21st-century homage to folk-song arrangements by composers such as Brahms, Britten and Berio. They will also perform a work by Bob L. Sturm created from material the computer program has self-titled "Chicken". You will hear pieces, performed on the fine organ of St Dunstan's, generated by two computer programs co-creating music together: our system generates a melody and another system harmonises it in the style of a Bach chorale. Another work, by Nick Collins at Durham, blends computer models of three different musicians and composers: Iannis Xenakis, Ed Sheeran, and Adele. Our concert will provide an exciting glimpse into how new musical opportunities are enabled by partnerships: between musicians from different traditions; between scientists and artists; and, last but not least, between humans and computers.

Other featured performers:
Úna Monaghan, a composer and researcher currently based at Cambridge, will perform her works for Irish harp and live electronics, combining elements of Irish traditional music with computer sound controlled via motion sensors and pitch detection.
Elaine Chew, a musician and Professor at the Centre for Digital Music, will perform a series of solo piano works "re-composed" by MorpheuS.
Year(s) Of Engagement Activity 2017
URL https://highnoongmt.wordpress.com/2017/05/19/partnerships-concert-program/
 
Description Concert: Nov. 20 2017 Being Human Festival 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact Machine learning has been making headlines with its sometimes alarming progress in skills previously thought to be the preserve of humans. Now these artificial systems are 'composing' music. Our event, part concert, part talk, aims to demystify machine learning for music. We will describe how we are using state-of-the-art machine learning methods to teach a computer specific musical styles. We will take the audience behind the scenes of such systems and show how we are using them to enhance human creativity in both music performance and composition. Human musicians will play several works composed by and with such systems. The audience will see how these tools can be used to augment human creativity, not replace it.
Year(s) Of Engagement Activity 2017
URL https://beinghumanfestival.org/event/music-in-the-age-of-artificial-creation/
 
Description Public workshop at 2017 London Inside Out Festival 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact Folk music, by its nature, involves folks. So it seems absurd to think a computer could compose such music. This unique event - part presentation, part concert and part workshop - shows how it's not absurd at all!

Our international research and creative team has been applying methods of "machine learning" to modeling folk music - specifically, tens of thousands of "Celtic" tunes typically played in pubs and festivals around Ireland, France and the UK. The result is a computer program that can generate an endless number of tunes. With human interpretation, these tunes can become music sharing a surprising number of qualities with "genuine" folk music.

This event kicks off with a fun presentation about what "machine learning" is, and how we are applying it to compose music. Then there will be a short concert of music played by master musicians, who will weave together traditional tunes with computer-generated ones. Will you be able to tell which is which?

The musicians will then lead a workshop in which attendees who bring their own instruments can learn to play a computer-generated tune, one phrase at a time.

We will finish with a discussion and question and answer session.
Year(s) Of Engagement Activity 2017
URL http://www.insideoutfestival.org.uk/2017/events/folk-music-composed-by-a-computer/