Real-time instrument transformation and augmentation with deep learning

Lead Research Organisation: Queen Mary, University of London
Department Name: Sch of Electronic Eng & Computer Science

Abstract

Expert performers spend many years developing skills on their instruments, and few have the time and motivation to learn new and unfamiliar digital instruments. This project explores an approach to creating new instruments that builds on the existing skills of expert players, allowing a familiar instrument to be used as a controller for other types of musical sounds.

Starting with an electric violin, electric guitar or similar instrument, a real-time audio analysis algorithm will be developed based on deep learning techniques such as variational recurrent autoencoders (VRAEs) to construct a low-dimensional latent space representation of timbre and articulation. The results will be evaluated for perceptual salience to the performer, and resynthesis techniques will be explored in which the latent space representation controls other sound synthesis algorithms. The desired end result is an instrument that remains familiar to the performer in its physical form, whose sound is creatively new but which still responds in predictable ways to the performer's actions. A priority is to look beyond typical high-level transcription features such as note onsets and to focus on the nuanced, micro-level timbral control that expert performers expect from their instruments.
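To illustrate the kind of model the abstract refers to, the following is a minimal sketch of a variational recurrent autoencoder over frame-level timbre features, assuming PyTorch. The module names, feature choices (e.g. mel-spectrogram frames) and hyperparameters are illustrative assumptions, not details taken from the project.

```python
# Minimal VRAE sketch: a recurrent encoder compresses a sequence of spectral
# frames into a low-dimensional latent code, and a recurrent decoder
# reconstructs the frames from that code. Names and sizes are hypothetical.
import torch
import torch.nn as nn

class VRAE(nn.Module):
    def __init__(self, n_features=64, hidden_size=128, latent_dim=8):
        super().__init__()
        # Recurrent encoder: summarises a sequence of spectral frames
        self.encoder_rnn = nn.GRU(n_features, hidden_size, batch_first=True)
        self.to_mu = nn.Linear(hidden_size, latent_dim)
        self.to_logvar = nn.Linear(hidden_size, latent_dim)
        # Recurrent decoder: conditioned on the latent code via its initial state
        self.latent_to_hidden = nn.Linear(latent_dim, hidden_size)
        self.decoder_rnn = nn.GRU(n_features, hidden_size, batch_first=True)
        self.to_frame = nn.Linear(hidden_size, n_features)

    def encode(self, frames):
        # frames: (batch, time, n_features), e.g. mel-spectrogram frames
        _, h = self.encoder_rnn(frames)
        h = h.squeeze(0)                      # (batch, hidden_size)
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z ~ N(mu, sigma^2) with the reparameterization trick
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z, frames):
        # Teacher-forced decoding from an initial state derived from z
        h0 = torch.tanh(self.latent_to_hidden(z)).unsqueeze(0)
        out, _ = self.decoder_rnn(frames, h0)
        return self.to_frame(out)

    def forward(self, frames):
        mu, logvar = self.encode(frames)
        z = self.reparameterize(mu, logvar)
        return self.decode(z, frames), mu, logvar

def vrae_loss(recon, frames, mu, logvar, beta=1.0):
    # Reconstruction term plus KL divergence to the unit Gaussian prior
    recon_loss = nn.functional.mse_loss(recon, frames, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl
```

In a setup like this, the low-dimensional latent code (or a frame-rate trajectory of such codes) is what could be mapped onto the parameters of a separate synthesis algorithm, so that the performer's timbral gestures on the familiar instrument steer the new sound.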

Publications


Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/S022694/1                                   01/07/2019  31/12/2027
2424371            Studentship   EP/S022694/1  14/09/2020  30/09/2024  Lewis Wolstanholme