Supporting Feature Engineering for End-User Design of Gestural Interactions

Lead Research Organisation: Goldsmiths, University of London
Department Name: Computing Department

Abstract

Sensors for analysing human gesture and activity (such as accelerometers and gyroscopes) are becoming increasingly affordable and easy to connect to existing software and hardware. There is great, unexplored potential for these sensors to support custom gestural control and activity recognition systems. Applications include the creation of bespoke gestural control interfaces for disabled people, new digital musical instruments, personalised performance analysis systems for athletes, and new embodied interactions for gaming and interactive art. The ability to easily create novel interactions with motion sensors also benefits schoolchildren and university students who are learning about computing through the use of sensors with platforms such as the BBC micro:bit and Arduino.

We have previously established methods for enabling people without programming expertise to build custom gesturally-controlled systems, using interactive machine learning. These methods allow people to easily create new systems by demonstrating examples of human actions, along with the desired label or computer response for each action.
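As a concrete illustration of this demonstration-based workflow, the minimal sketch below fits a classifier on a handful of labelled demonstrations and then classifies a newly performed action. It is an illustrative sketch only - the feature vectors, labels, and choice of a k-nearest-neighbour model are assumptions for the example, not the project's actual implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row is one demonstrated action, already summarised as a small feature
# vector; each label is the response the user wants associated with it.
demos = np.array([[0.1, 0.9], [0.2, 0.8],   # two demonstrations of "wave"
                  [0.9, 0.1], [0.8, 0.2]])  # two demonstrations of "punch"
labels = ["wave", "wave", "punch", "punch"]

# Train on the demonstrations, then classify a newly performed action.
model = KNeighborsClassifier(n_neighbors=1).fit(demos, labels)
print(model.predict([[0.85, 0.15]]))  # -> ['punch']
```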

Unfortunately, many compelling applications of custom gesture and activity recognition require substantial pre-processing of raw sensor data (i.e., "feature engineering") before machine learning can be applied successfully. Experts first apply a variety of signal processing techniques to sensor data in order to make machine learning feasible. Many people who would benefit from the ability to create custom gestural interactions lack the signal processing and programming expertise to apply those methods effectively or efficiently. It is not known how to successfully expose control over feature engineering to non-experts, nor what the trade-offs among different strategies for exposing control might be.
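To make "feature engineering" concrete, the sketch below computes typical hand-crafted features (per-axis statistics and low-frequency spectral energy) over sliding windows of 3-axis accelerometer data. This is only an illustration of the kind of pre-processing experts apply, not a prescribed pipeline; the window and hop sizes, and the particular features chosen, are arbitrary assumptions.

```python
import numpy as np

def window_features(samples, window=64, hop=32):
    """Compute simple features over sliding windows of accelerometer data.

    samples: array of shape (n_samples, 3) holding x, y, z acceleration.
    Returns an array of shape (n_windows, 11).
    """
    feats = []
    for start in range(0, len(samples) - window + 1, hop):
        w = samples[start:start + window]
        mag = np.linalg.norm(w, axis=1)           # per-sample magnitude
        feats.append(np.concatenate([
            w.mean(axis=0),                       # mean per axis
            w.std(axis=0),                        # variability per axis
            [mag.mean(), mag.std()],              # overall motion intensity
            np.abs(np.fft.rfft(mag))[1:4],        # low-frequency energy
        ]))
    return np.array(feats)
```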

Our hypothesis is that it is possible to support the creation of more complex gestural control and analysis systems by non-experts. We propose to develop and compare three methods for exposing control over feature engineering to non-experts, each requiring a different degree of user involvement.

The first method uses a fully automated approach, which attempts to computationally identify good features. The second method first elicits high-level information about the problem from the user, and then employs this information to better inform the automated approach. The third method directly involves the user in the feature engineering process. By leveraging users' ability to demonstrate new gestures, identify patterns in visualisations, and reason about the problem domain - as well as computers' ability to employ users' demonstrations to propose new relevant features - this interactive approach may yield more accurate recognisers. Such an approach may also help users learn about the utility of different features, enabling more efficient debugging of their systems and a better understanding of how to build other systems in the future.

We are interested in understanding both the accuracy and usability of these methods. We will evaluate each method with people training several types of gesture and activity recognisers. We will compare each method in terms of the accuracy of the final system, the time required for users to build the system, users' subjective experiences of the design process and quality of the final system, and improvements in users' ability to reason about building gestural interactions with sensors.

This research will enable a wider range of people to successfully use popular motion sensors to create bespoke gestural control and analysis systems, for use in a broader range of applications. Our techniques will be directly implemented in existing open-source software for interactive machine learning. The methods and study outcomes will also inform future work to support feature engineering for users creating real-time systems with other types of sensors.

Planned Impact

This project puts user-centred design and stakeholder engagement at its heart, in order to ensure that benefits reach a number of different target audiences.

1. User groups in the general public:

New groups of people will gain the ability to create compelling, customised interactions with motion sensors, including:
- People with disabilities who can create new, customised gestural interfaces for controlling computers and games;
- Musicians creating new digital musical instruments;
- Athletes and trainers creating personalised performance monitoring and analysis systems;
- Artists creating new interactive experiences;
- "Quantified self" practitioners seeking new ways to understand or change their habits;
- "Hackers" and "makers" developing new interactions with sensing hardware for practical or creative use.

The ability to create more sophisticated gesture and activity recognition systems can enhance such people's quality of life, providing an increased sense of agency, new creative outlets, and a better understanding of themselves. We will integrate our techniques for end-user feature engineering directly into existing free software that provides GUIs for use by non-programmers. This means the above users will immediately be able to employ these techniques.

2. Users in the creative and digital industries

Individuals who currently design interactions as part of their professional practice - including many artists, musicians, and indie game designers - will be able to use sensors more effectively and efficiently in their work. Other people who currently employ sensors for non-commercial, personal use, such as hackers/makers, may become more likely to commercialise their designs as the sophistication of interactions they can create improves.

3. School students using sensors to learn about computing

The introduction of 1 million BBC micro:bits into UK classrooms this year reflects recognition that work with sensors can engage students in learning computing, encouraging creative exploration of projects that connect computing to the physical world. Our project greatly expands the set of interactions that non-experts can easily build using the micro:bit's accelerometer, and our findings will directly inform the design of future tools for students experimenting with sensors and creating new interactions. The software tools released by our project will also be immediately usable by university students in physical computing classrooms who are building new interactions with sensors and platforms such as Arduino and Raspberry Pi. Increased student engagement with electronics presents opportunities for additional, longer-term economic benefits: students who build solid foundations in STEM skills will be better prepared to contribute to the 21st-century workforce and to become technology innovators themselves.

Publications

 
Description We conducted three studies with people who had experience using interactive machine learning in one or more creative computing projects. Some had worked with sensors before but few had much signal processing knowledge or feature engineering experience. Each study entailed asking participants to create novel gesture recognisers using user interfaces that allowed for a variety of approaches to manual, automated, and interactive feature engineering. We used questionnaires, interviews, and log data to understand how different approaches to supporting feature engineering impacted on people's creative processes and design outcomes.

The key findings of these studies are:
1. Feature engineering of some sort was necessary to realise most participants' designs. Raw sensor data alone usually led to worse empirical accuracy and lower subjective ratings than user- or automatically-selected features.
2. Providing a large set of features identified in advance by domain experts as relevant to human motion sensing enabled several useful approaches to supporting participants' designs. These include (i) using all such "expert" features (which performed surprisingly well for a number of tasks); (ii) lightweight automated selection from these features based on information gain (which also often performed well, while reducing computational load compared to using all features; see the sketch after this list); and (iii) several interactive approaches to supporting human feature selection and reasoning, e.g., enabling users to add features that met certain criteria, and to visualise and explore all features ranked by automatically computed criteria.
3. It was still often challenging for participants to interactively select appropriate features using our best interfaces, and participants' final selections were often outperformed, on both empirical and subjective measures, by our lightweight automated methods or by using all available features. Participants sometimes reacted to the difficulty of choosing good features for a given task by changing the task (i.e., choosing different gestures). We observed that participants used the knowledge they had about data representations - whether raw sensor data or other features they understood well - to choose target gestures that were easily learnable using those features. This suggests that, in practice, teaching users about features may be important in facilitating designs capable of recognising a richer array of motions.
4. Many participants appeared to have difficulty in effectively evaluating and comparing alternative feature sets: there were many inconsistencies in their stated preferences for different feature sets, and often a large mismatch between their stated preferences and empirical classifier accuracies on user-provided test data. We therefore hypothesise that it may be helpful to explore how interfaces can better scaffold structured empirical experimentation with candidate features; simply supporting non-experts by providing implementations, explanations, and GUIs for interactively exploring and selecting features (as we did) may be inadequate. Open questions remain about what such experimentation should look like in movement analysis applications, where conventional experimental approaches (e.g., comparing accuracy on held-out data) and embodied, subjective approaches to evaluation (e.g., moving in different ways and observing how a classifier responds) may both be relevant and may give contradictory results.
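As a concrete illustration of the information-gain ranking in finding 2(ii) and the held-out comparison mentioned in finding 4, the sketch below ranks precomputed features using scikit-learn's mutual information estimator (a stand-in for information gain) and compares held-out accuracy when using all features versus the top-ranked subset. This is a minimal sketch under those assumptions, not the implementation used in our studies; X is assumed to be a NumPy matrix of precomputed "expert" features and y the gesture labels.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def rank_features(X, y, top_k=10):
    """Indices of the top_k features by estimated mutual information with y."""
    scores = mutual_info_classif(X, y, random_state=0)
    return np.argsort(scores)[::-1][:top_k]   # highest information first

def compare_feature_sets(X, y, top_k=10):
    """Held-out accuracy using all features vs. the top_k ranked features."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    top = rank_features(X_tr, y_tr, top_k)    # rank on training data only
    clf_all = KNeighborsClassifier().fit(X_tr, y_tr)
    clf_top = KNeighborsClassifier().fit(X_tr[:, top], y_tr)
    return clf_all.score(X_te, y_te), clf_top.score(X_te[:, top], y_te)
```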
Exploitation Route People using interactive machine learning to build new interactions with sensors can directly use the new software built for this project. This is especially relevant for professional creative practitioners (e.g., in interaction design and the digital and performing arts) and students in creative computing fields who are learning about sensing and machine learning. This free software can also be used by people in other domains (e.g., building prototype human motion recognition systems for games or sports), and because it is open source, it can be modified for other related uses (e.g., at least one researcher has modified the software to support his experiments with EMG sensors).

The key findings inform both the design of new software tools for related tasks and the aims of subsequent research in interactive machine learning for human movement and the teaching of machine learning to creative practitioners. For instance, these findings have directly influenced the design of the InteractML tool being developed for game designers and VR creators within the current 4i grant, and they have influenced the design of research questions about using body motion in machine learning teaching in current research on a MOOC taught by McCallum, Fiebrink, and UAL colleagues.
Sectors Creative Economy; Digital/Communication/Information Technologies (including Software); Education; Leisure Activities, including Sports, Recreation and Tourism

 
Description Findings about how to make feature engineering more accessible to non-experts influenced the design of InteractML (https://interactml.com/), free and open source software which was further developed through 2021 on the EPSRC 4i project (on which Fiebrink is Co-I) and through an Epic MegaGrant (industry funding). This software has been released for both the Unity and Unreal game engines, with the Unreal version seeing over 45,000 downloads since its release in November 2021. The InteractML tool is being used by game designers and artists developing 2D games as well as VR and AR experiences, where it helps them design new embodied interactions using interactive machine learning. The software developed in this project has additionally been used in workshop, undergraduate, and postgraduate teaching at Goldsmiths and UAL about using sensors and machine learning to build embodied interactions.
First Year Of Impact 2020
Sector Creative Economy; Digital/Communication/Information Technologies (including Software); Education
Impact Types Cultural, Economic

 
Description 4i: Immersive Interaction design for Indie developers with Interactive machine learning
Amount £497,828 (GBP)
Funding ID EP/S02753X/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 09/2019 
End 08/2021
 
Description AHRC Research Grants - Standard
Amount £799,512 (GBP)
Funding ID AH/R002657/1 
Organisation Arts & Humanities Research Council (AHRC) 
Sector Public
Country United Kingdom
Start 05/2018 
End 04/2021
 
Description Art, Artifice & Intelligence: A UK-Japan partnership exploring art and AI
Amount £49,921 (GBP)
Funding ID ES/S014128/1 
Organisation Economic and Social Research Council (ESRC)
Sector Public
Country United Kingdom
Start 01/2019 
End 04/2020
 
Description Google Focused Research Award
Amount $20,000 (USD)
Organisation Google 
Sector Private
Country United States
Start 02/2019 
End 09/2019
 
Description Transforming Collections: Reimagining Art, Nation and Heritage
Amount £2,947,162 (GBP)
Funding ID AH/W003341/1 
Organisation Arts & Humanities Research Council (AHRC) 
Sector Public
Country United Kingdom
Start 11/2021 
End 11/2024
 
Title Wekinator with Interactive Feature Visualisation and Selection 
Description We have implemented three iterations of mixed-initiative interfaces for end-user feature engineering within a new version of the Wekinator software. Each iteration has involved at least one study in which participants learned about features and used a set of novel graphical user interfaces to select and engineer features; each study was followed by changes to the interface informed by its outcomes. Wekinator is an open-source, cross-platform software tool for end-user interactive machine learning, which Fiebrink has been developing since 2008. It has been downloaded over 40,000 times since 2015. The main users are digital artists and musicians, designers, students, and "hackers"/"makers". Wekinator is also used in the curriculum at a number of universities (including Goldsmiths, University of Colorado Boulder, New York University, and Columbia), and in the online course (MOOC) "Machine Learning for Musicians and Artists" on the Kadenze platform. We have implemented new approaches for improving end-user feature engineering within the Wekinator software, and these are currently available on GitHub (see the link below; work is in the "feature/" branches of the repository).
Type Of Technology Software 
Year Produced 2019 
Open Source License? Yes  
Impact A number of students have used the software to realise novel creative projects (e.g., a game built in Unity with Leap Motion that enables players to define their own gestures). The software has enabled new approaches to teaching and hands-on learning about features for creative computing students at Goldsmiths and UAL. Knowledge gained from implementing and testing the software has influenced the design of the InteractML tool for interactive machine learning in Unity. 
 
Description Ars Electronica 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact I gave an invited talk at Ars Electronica, a top-tier digital arts festival, in Linz, Austria, in September 2017. The talk was titled "Machine Learning as Creative, Collaborative Design Tool." Several hundred audience members attended, most of whom were professional digital artists, creative technologists, or digital arts enthusiasts. As a result of this event, I was also interviewed about creative end-user machine learning for the Austrian radio station Ö1 and filmed for a new documentary about arts and machine learning.
Year(s) Of Engagement Activity 2017
URL https://www.aec.at/ai/en/symposium/
 
Description CCI After School Club: Building Musical Instruments with Machine Learning 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Schools
Results and Impact Fiebrink led a discussion and demo for local schoolchildren (held online due to COVID), introducing them to ways that sensors and machine learning could be used to design new gesturally-controlled musical instruments. For most participants, this was their first introduction to the idea that machine learning could be used to make music, and to hands-on tools that they could use at home.
Year(s) Of Engagement Activity 2020
 
Description Google PAIR Symposium 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact I was invited to give a talk about accessible end-user machine learning at a symposium organised by Google's PAIR (People + AI Research Initiative) in Cambridge, Massachusetts, USA in September 2017. Over 100 people attended in person, and the event was live-streamed online. The audience consisted of Google engineers and researchers, professional developers and researchers from other organisations (including academic and industry), and professional digital artists and designers, among others. The talk sparked excellent questions about the value of machine learning as a design tool, and the approaches for making machine learning more accessible to broader audiences. I was also invited to apply for Google funding for a research project related to the development of new machine learning teaching resources.
Year(s) Of Engagement Activity 2017
URL https://sites.google.com/view/pair-symposium2017/home
 
Description Guest lectures and workshop at Stanford University 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I gave two guest lectures (one in the Computer Music research group in the Music department, and one in Computer Science) to students and staff at Stanford University in April 2018. These lectures presented my research about making machine learning usable and useful for non-experts. I also gave a one-day workshop, open to members of both departments and also attended by two members of the public, teaching attendees how to use software developed in my research for machine learning in creative and interactive contexts.

As a result of this work, a number of people at Stanford are now using software developed in my research - both in teaching and in their own research. I was also invited to apply for funding for a collaboration with Stanford researchers (we are still awaiting the outcome of this application).
Year(s) Of Engagement Activity 2018
URL https://hci.stanford.edu/courses/cs547/speaker.php?date=2018-04-20
 
Description Invited talk and workshop at Eyeo Digital Arts Festival in Minneapolis, USA 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact I was invited to give a talk and lead a workshop at the Eyeo digital arts festival held in Minneapolis, MN in 2018. Eyeo 2018 had roughly 500 attendees, comprising mainly digital artists and creative technologists from around the world. The talk focused on my research on making usable machine learning tools for creative practitioners. I also led a day-long workshop that taught approximately 30 attendees how to use machine learning to create new real-time interactions with sensors, using software and knowledge produced in my research.
Year(s) Of Engagement Activity 2018
URL http://eyeofestival.com
 
Description Invited talk at DeepMind 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Industry/Business
Results and Impact I gave an invited talk to researchers and engineers at DeepMind (an AI company based in London). Approximately 60 people attended. My talk encouraged people to consider how machine learning can be useful to non-experts, including in end-user design applications, and how to make it more accessible to non-computer scientists.
Year(s) Of Engagement Activity 2018
 
Description Panelist at S+T+ARTS festival 2021 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Participated in a public panel on AI and music, open to an in-person audience and also live-streamed. This took place within the S+T+ARTS festival, part of the annual SONAR electronic music festival in Barcelona, Spain. The panel discussed the current state of AI and its relationship to tools for musicians and other creators.
Year(s) Of Engagement Activity 2021
URL https://aimusicfestival.eu/en/about-ai-and-music
 
Description Panelist for AI Artathon 2021 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact I was a panelist for the AI Artathon 2021, part of the Global AI Summit held in Saudi Arabia. As a panelist, I offered expertise on AI art practices and tools and helped to decide winners in the AI art competition, which had over 400 participants from 21 countries. The AI Artathon itself involved exhibitions and promotion of new artworks made with AI, as well as mentoring for AI artists and creative AI researchers. Works were shown at the SDAIA exhibition as well as at Expo 2020 Dubai.
Year(s) Of Engagement Activity 2021
URL https://globalaisummit.org/
 
Description Processing Community Day London 2019 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Undergraduate students
Results and Impact Fiebrink gave an invited talk at Processing Community Day London 2019. The audience was a wide mixture of people interested in creative computing, from students to professionals in design, usability, and digital art. The talk introduced the audience to ways machine learning could be used with Processing and other creative tools, and described research challenges in applying machine learning to human movement and sensor data.
Year(s) Of Engagement Activity 2019
 
Description Ran a workshop on AI for Art and Design at the Barbican 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Professional Practitioners
Results and Impact Fiebrink taught a one-day workshop at the Barbican to an audience of professional creative practitioners in a variety of domains, as well as members of the general public. In the workshop, she covered practical approaches to using machine learning to build creative interactions with sensors.
Year(s) Of Engagement Activity 2019
URL https://www.barbican.org.uk/whats-on/2019/event/ai-for-art-design
 
Description Ran creative machine learning workshop at Rewire festival 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Fiebrink led a workshop introducing machine learning for musicians and artists at the 2019 Rewire festival in The Hague, The Netherlands. Participants gained new practical skills in applying interactive machine learning to creating interactive systems with sensors, audio, and visuals.
Year(s) Of Engagement Activity 2019
URL https://www.eventbrite.nl/e/tickets-music-hackspace-workshop-machine-learning-for-artists-5778522315...
 
Description SXSW 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact I spoke at South by Southwest (SXSW), a top technology, arts, and entertainment festival, in Austin, Texas, USA in March 2018. I was a member of a panel on "AI and Radical Inclusion," speaking about this project and related work to make machine learning more accessible to non-computer-scientists. Through this panel, I met the founders of a Los Angeles-based start-up, Artificial Knowing, who have been using my software to run hands-on outreach workshops that teach members of the general public about machine learning and engage them more deeply in conversations about algorithmic bias and discrimination, as well as about new ways that machine learning might be used to benefit them. This connects me to a new group of potential beneficiaries of the software and techniques under development in this project. I also plan to build on some of their engagement activities in new public engagement work I run in the UK.
Year(s) Of Engagement Activity 2018
URL https://schedule.sxsw.com/2018/events/PP80547
 
Description Speaker at the Serpentine Gallery: Aesthetics of New AI Interfaces 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Fiebrink was one of three invited panelists at an event titled "Aesthetics of New AI Interfaces," hosted by the Serpentine Gallery (online due to COVID) in February 2021. This event sparked discussion about the consequences and challenges of using AI in creative work.
Year(s) Of Engagement Activity 2021
URL https://www.serpentinegalleries.org/whats-on/aesthetics-of-new-ai-interfaces-panel-discussion/
 
Description Spotify Research Summit 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Industry/Business
Results and Impact More than a hundred Spotify employees from several countries joined others from industry and academia doing audio- and music-related research for this event. Fiebrink talked about making machine learning more usable and transparent, and about its uses in creative practice. The talk sparked discussion about the challenges and possible impacts of more interactive, accessible machine learning.
Year(s) Of Engagement Activity 2019
URL https://spotifyresearchsummit.splashthat.com/