Engaging three user communities with applications and outcomes of computational music creativity
Lead Research Organisation:
Queen Mary University of London
Department Name: Sch of Electronic Eng & Computer Science
Abstract
In recent years, machine learning and artificial intelligence systems have been grabbing headlines by defeating humans at the game of Go and the American TV game show Jeopardy! The same technology has also been applied to other domains considered essentially human, such as art and music. This project proposes an alternative to the human-versus-machine paradigm: these technologies can produce co-creative partners that are useful, meaningful, and non-threatening.
This project facilitates four innovative and creative activities to engage with three user communities: Composers, Performers, and Listeners. The principal aim of this project is to engage these user communities with applications and outcomes of computational music creativity. A subsidiary aim is for the investigators to collaboratively learn from these communities about the implications of emergent computational music creativity in non-academic contexts.
Among its objectives, this project will develop a software application to enable anyone with an Internet connection to explore music co-creation. The project will also develop an online resource for the user community of this application and others, where users can share, comment on and rate submitted outcomes. The technology underpinning this application comes from the AHRC-funded Care for the Future research project, "Data science for the study of calypso-rhythm through history" (DaCaRyH, AH/N504531/1). That project brings together ethnomusicologists, experts in data science and a composer to investigate how computational tools can be used to study calypso music, and how the study of music can in turn inform the design of computational analyses. An unanticipated outcome of DaCaRyH is that some of these analytical tools can be "inverted" to create systems that synthesise never-before-heard music that is surprisingly good. Preliminary work by the investigators (in the form of expert elicitations, a public workshop, and a music concert) shows that there is great interest among listeners and practitioners in exploring computational creativity outside the academic realm.
The project will engage public audiences with music arising from computational co-creativity. It will deliver a hands-on workshop for composers to learn to use the developed software application. It will also organise a symposium on computational music creativity for the general public. The conclusion of the project will feature a music concert showcasing new music created with the developed application. To motivate participation, the project will organise a competition to solicit original compositions created with the application, to be performed by a professional ensemble. The competition jury will include musicians, composers, and of course a critic built using artificial intelligence.
An especially novel outcome of this project is a professionally produced album of "machine folk" music. The album will feature music generated entirely by the system and edited, arranged, and performed by experienced musicians. The preliminary work of the investigators has found that good music performed by good musicians serves as the best ambassador for the value and potential usefulness of computational creativity. This album will serve as a lasting illustration of the benefits of the co-creative approach to machine learning tools.
This project links directly with the Digital Transformations theme by bringing state-of-the-art digital technology to democratise participation in creative music making. It also relates to the Care for the Future theme by encouraging innovation and renewal in community music practice, and thus helping to broaden and deepen engagement from listeners, performers and composers. The project will serve to diversify the legacy of the DaCaRyH project.
Planned Impact
This project will impact users and beneficiaries outside the academic research community in three sectors.
# Public and third sector
- Professional music composers: One commercial composer with whom the investigators are collaborating has envisioned that the application to be developed in this project could help him overcome "the intimidation of the blank page". Were he to begin by selecting among a wealth of materials, the process of composition would flow. Both investigators have composed new music using the current music models, and can see great benefit to this approach. The application will facilitate more of this kind of interaction.
- Professional music performers: Several performers the investigators have worked with so far have already benefited from this work. One performer remarked that the process of preparing to perform tunes generated from the system led him to re-evaluate his relationship with technology and his ideas about creative practice. Another performer said he did not initially like the idea of a computer composing music, but changed his mind once he heard the results.
- Education: The application to be developed in this project could be used as a pedagogical assistant for music students. In fact, part of the testing of the prototype application in this project will involve music students at Kingston University. This classroom setting will be useful for testing the usability of the application itself, but also the potential of this approach for teaching students about music and creativity.
# Commercial private sector
- New co-creative music enterprises: The topic of machine learning for music applications is attracting the interest of many commercial enterprises (e.g., London-based JukeDeck, Sony's Flow Machines, Google's Magenta project). The application to be designed in this project (as well as the proof of the quality of the music in the form of the album) will motivate the development of similar co-creative music products.
- Standardised music testing industry: In the UK, there is a large cottage industry of preparatory services for national examinations in music. The music models developed in this project could be used to assist in evaluating answers against criteria such as stylistic relevance and originality. This would enhance the validity of such exams and also make marking more efficient.
# Wider Public
- Amateur and semi-professional musicians: Non-professional music making is widespread in the UK, with amateur choirs, orchestras, ensembles, and bands assembling to enjoy making music. This project will engage people who want music to be part of their lives, and offer them new opportunities for active and creative engagement with music through the application of new technology. In fact, one of the participants at the investigators' recent workshop (http://www.insideoutfestival.org.uk/2017/events/folk-music-composed-by-a-computer/) commented that she felt unable to channel her creativity into making music because she lacked training. With the application to be developed in this project, she could start by generating tunes and selecting the bits she likes, building as she goes. The benefit is therefore both a tangible musical output and the learning and development of musical skills.
- Public at large: Public interest in the subject of the project is evidenced by the number of views of our article for The Conversation (https://theconversation.com/machine-folk-music-shows-the-creative-side-of-ai-74708) and the examples posted to The Bottomless Tune Box YouTube channel (https://www.youtube.com/channel/UC7wzmG64y2IbTUeWji_qKhA). This indicates that there is a non-academic audience interested in engaging with the larger question of this project: how can artificial intelligence have a beneficial impact on music?
People | ORCID iD |
Bob Sturm (Principal Investigator) | |
Oded Ben-Tal (Co-Investigator) | |
Publications
- Sturm B (2018). "Machine learning research that matters for music creation: A case study." Journal of New Music Research.
- Sturm B (2019). "Artificial Intelligence and Music: Open Questions of Copyright Law and Engineering Praxis." Arts.
- Ben-Tal O (2021). "How Music AI Is Useful: Engagements with Composers, Performers and Audiences." Leonardo.
Title | "Let's Have Another Gan Ainm" recording |
Description | A CD-length recording of music involving machine-generated material. |
Type Of Art | Composition/Score |
Year Produced | 2017 |
Impact | This technical report describes the findings of the experiment: http://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A1248565&dswid=-2296 |
URL | https://soundcloud.com/oconaillfamilyandfriends |
Description | The music album we produced (https://soundcloud.com/oconaillfamilyandfriends) clearly demonstrates how traditional musicians can "collaborate" with AI-generated material to create a convincing collection. Several reviews of the album praised it. |
Exploitation Route | The music recording is freely available: https://soundcloud.com/oconaillfamilyandfriends The software used to generate the material is also available: https://github.com/IraKorshunova/folk-rnn An online application is available: https://folkrnn.org/ |
Sectors | Creative Economy Digital/Communication/Information Technologies (including Software) Education |
URL | http://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A1248565&dswid=250 |
Description | The music album we produced continues to attract listeners: https://soundcloud.com/oconaillfamilyandfriends |
First Year Of Impact | 2018 |
Sector | Creative Economy |
Impact Types | Cultural |
Description | ERC Consolidator Grant |
Amount | € 1,997,918 (EUR) |
Funding ID | 864189 |
Organisation | European Research Council (ERC) |
Sector | Public |
Country | Belgium |
Start | 09/2020 |
End | 09/2025 |
Title | Online implementation of folkrnn model |
Description | This website allows people to experiment with an AI model for generating folk music. |
Type Of Material | Computer model/algorithm |
Year Produced | 2017 |
Provided To Others? | Yes |
Impact | This article discusses the use of the program: https://www.mitpressjournals.org/doi/abs/10.1162/leon_a_01959 Thousands of interactions were recorded and studied. |
URL | https://folkrnn.org/ |
Title | Online repository of music melodies |
Description | Crowd-sourced collection of melodies generated by an AI system |
Type Of Material | Database/Collection of data |
Year Produced | 2017 |
Provided To Others? | Yes |
Impact | This resource is described in https://www.mitpressjournals.org/doi/abs/10.1162/leon_a_01959 Thousands of interactions were recorded and analyzed. |
URL | https://themachinefolksession.org/ |
Title | NiMFKS |
Description | This software provides a means for concatenative sound synthesis via non-negative matrix factorisation. |
Type Of Technology | Software |
Year Produced | 2017 |
Impact | Paper at conference: "Nicht-negativeMatrixFaktorisierungnutzendesKlangsynthesenSystem (NiMFKS): Extensions of NMF-based concatenative sound synthesis", Proc. Int. Conf. Digital Audio Effects, 2017. |
URL | https://code.soundsoftware.ac.uk/projects/nimfks |
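The technique NiMFKS is built on can be sketched briefly. In NMF-based concatenative synthesis, frames of a sound corpus form a fixed non-negative dictionary W, and a target sound V is approximated as V ≈ WH by finding non-negative activations H that say which corpus frames to reuse at each time step. The following is a minimal illustration with random toy matrices standing in for magnitude spectrograms; it is not NiMFKS's actual pipeline, which operates on audio and includes further extensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for magnitude spectrograms: columns of W are corpus frames
# (the fixed dictionary); V is the target to approximate as V ~= W @ H.
n_bins, n_corpus_frames, n_target_frames = 64, 20, 30
W = rng.random((n_bins, n_corpus_frames)) + 0.1
V = rng.random((n_bins, n_target_frames)) + 0.1

# Non-negative activations H, refined by multiplicative updates
# (Euclidean NMF with the dictionary W held fixed).
H = rng.random((n_corpus_frames, n_target_frames))
eps = 1e-9
err0 = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative error: {err0:.3f} -> {rel_err:.3f}")
```

The multiplicative update keeps H non-negative throughout and monotonically decreases the reconstruction error; in a synthesis setting, the resulting H would then drive the recombination of corpus frames.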
Title | folk-rnn |
Description | This is a further development of the recurrent neural network for modeling music transcriptions. |
Type Of Technology | Software |
Year Produced | 2017 |
Impact | Many are listed on the project homepage: https://github.com/IraKorshunova/folk-rnn |
URL | https://github.com/IraKorshunova/folk-rnn |
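Generation from a model of this kind works by repeatedly sampling the next transcription token from the network's output distribution. The sketch below illustrates only the sampling step, with a tiny illustrative token list and random logits standing in for the trained RNN (a real model conditions each step on the tokens emitted so far); the vocabulary shown is hypothetical, not folk-rnn's actual token set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative ABC-notation-style tokens (meter, key, bars, notes) --
# a stand-in for the model's real vocabulary.
vocab = ["M:6/8", "K:Gmaj", "|:", "G", "A", "B", "c", "d", "|", ":|"]

def sample_next(logits, temperature=1.0):
    """Temperature-scaled softmax sampling over the vocabulary."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                      # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(p), p=p)

# Random logits stand in for the trained network's per-step output.
tokens = []
for _ in range(12):
    logits = rng.standard_normal(len(vocab))
    tokens.append(vocab[sample_next(logits, temperature=0.8)])
print(" ".join(tokens))
```

Lowering the temperature concentrates probability on the most likely tokens (more conservative output); raising it flattens the distribution (more surprising output).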
Description | Concert: Nov. 20 2017 Being Human Festival |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Public/other audiences |
Results and Impact | Machine learning has been making headlines with its sometimes alarming progress in skills previously thought to be the preserve of humans. Now these artificial systems are 'composing' music. Our event, part concert and part talk, aims to demystify machine learning for music. We will describe how we are using state-of-the-art machine learning methods to teach a computer specific musical styles, take the audience behind the scenes of such systems, and show how we are using them to enhance human creativity in both music performance and composition. Human musicians will play several works composed by and with such systems, demonstrating how these tools can augment human creativity rather than replace it. |
Year(s) Of Engagement Activity | 2017 |
URL | https://beinghumanfestival.org/event/music-in-the-age-of-artificial-creation/ |