Supporting Feature Engineering for End-User Design of Gestural Interactions

Lead Research Organisation: Goldsmiths College
Department Name: Computing Department

Abstract

Sensors for analysing human gesture and activity (such as accelerometers and gyroscopes) are becoming increasingly affordable and easy to connect to existing software and hardware. There is great, unexplored potential for these sensors to support custom gestural control and activity recognition systems. Applications include the creation of bespoke gestural control interfaces for disabled people, new digital musical instruments, personalised performance analysis systems for athletes, and new embodied interactions for gaming and interactive art. The ability to easily create novel interactions with motion sensors also benefits schoolchildren and university students who are learning about computing through the use of sensors with platforms such as BBC micro:bit and Arduino.

We have previously established methods for enabling people without programming expertise to build custom gesturally-controlled systems, using interactive machine learning. These methods allow people to easily create new systems by demonstrating examples of human actions, along with the desired label or computer response for each action.

Unfortunately, many compelling applications of custom gesture and activity recognition require substantial pre-processing of raw sensor data (i.e., "feature engineering") before machine learning can be applied successfully. Experts first apply a variety of signal processing techniques to sensor data in order to make machine learning feasible. Many people who would benefit from the ability to create custom gestural interactions lack the signal processing and programming expertise to apply those methods effectively or efficiently. It is not known how to successfully expose control over feature engineering to non-experts, nor what the trade-offs among different strategies for exposing control might be.
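
To illustrate the kind of pre-processing this involves, the sketch below (in Python, using NumPy) computes a handful of commonly used hand-crafted features, per-axis means and standard deviations plus a simple magnitude and dominant-frequency estimate, over one window of accelerometer samples. The window length, sample rate, and the particular feature set are illustrative assumptions only, not features prescribed by this project.

    import numpy as np

    def extract_features(window, sample_rate=50.0):
        """Compute simple hand-crafted features for one window of accelerometer data.

        window: array of shape (n_samples, 3) containing x, y, z readings.
        Returns a flat feature vector suitable for input to a classifier.
        """
        window = np.asarray(window, dtype=float)
        feats = []
        feats.extend(window.mean(axis=0))            # per-axis mean
        feats.extend(window.std(axis=0))             # per-axis standard deviation
        magnitude = np.linalg.norm(window, axis=1)   # overall acceleration magnitude
        feats.append(magnitude.mean())
        feats.append(magnitude.std())
        # Dominant frequency of the (mean-removed) magnitude signal: a crude spectral feature.
        spectrum = np.abs(np.fft.rfft(magnitude - magnitude.mean()))
        freqs = np.fft.rfftfreq(len(magnitude), d=1.0 / sample_rate)
        feats.append(freqs[np.argmax(spectrum)])
        return np.array(feats)

    # Example: one second of 50 Hz readings (random placeholder data).
    print(extract_features(np.random.randn(50, 3)))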

Our hypothesis is that it is possible to support the creation of more complex gestural control and analysis systems by non-experts. We propose to develop and compare three methods for exposing control over feature engineering to non-experts, each requiring a different degree of user involvement.

The first method uses a fully automated approach, which attempts to computationally identify good features. The second method first elicits high-level information about the problem from the user, and then employs this information to better inform the automated approach. The third method directly involves the user in the feature engineering process. By leveraging users' ability to demonstrate new gestures, identify patterns in visualisations, and reason about the problem domain, as well as computers' ability to use those demonstrations to propose relevant new features, this interactive approach may yield more accurate recognisers. Such an approach may also help users learn about the utility of different features, enabling more efficient debugging of their systems and a better understanding of how to build other systems in the future.
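
As a rough sketch of what the first, fully automated method might look like in practice, the example below scores a pool of candidate features against the labels users assigned to their demonstrations and keeps only the most informative ones before training a classifier. The choice of scikit-learn, mutual information as the scoring criterion, and the specific numbers are assumptions for illustration, not the method the project will necessarily adopt.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    # X: candidate feature vectors computed from the user's demonstrated gestures
    # y: the label the user assigned to each demonstration
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 20))     # placeholder: 60 demonstrations, 20 candidate features
    y = rng.integers(0, 3, size=60)   # placeholder: 3 gesture classes

    # Automatically keep the 5 features carrying the most information about the labels,
    # then train a simple classifier on the reduced representation.
    model = make_pipeline(
        SelectKBest(score_func=mutual_info_classif, k=5),
        KNeighborsClassifier(n_neighbors=3),
    )
    model.fit(X, y)
    print(model.predict(X[:5]))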

We are interested in understanding both the accuracy and usability of these methods. We will evaluate each method with people training several types of gesture and activity recognisers. We will compare each method in terms of the accuracy of the final system, the time required for users to build the system, users' subjective experiences of the design process and quality of the final system, and improvements in users' ability to reason about building gestural interactions with sensors.

This research will enable a wider range of people to successfully use popular motion sensors to create bespoke gestural control and analysis systems, for use in a broader range of applications. Our techniques will be directly implemented in existing open-source software for interactive machine learning. The methods and study outcomes will also inform future work to support feature engineering for users creating real-time systems with other types of sensors.

Planned Impact

This project puts user-centred design and stakeholder engagement at its heart, in order to ensure that benefits are realised across a number of different target audiences.

1. User groups in the general public:

New groups of people will gain the ability to create compelling, customised interactions with motion sensors, including:
- People with disabilities who can create new, customised gestural interfaces for controlling computers and games;
- Musicians creating new digital musical instruments;
- Athletes and trainers creating personalised performance monitoring and analysis systems;
- Artists creating new interactive experiences;
- "Quantified self" practitioners seeking new ways to understand or change their habits;
- "Hackers" and "makers" developing new interactions with sensing hardware for practical or creative use.

The ability to create more sophisticated gesture and activity recognition systems can enhance such people's quality of life, providing an increased sense of agency, new creative outlets, and a better understanding of themselves. We will integrate our techniques for end-user feature engineering directly into existing free software that provides GUIs for use by non-programmers. This means the above users will immediately be able to employ these techniques.

2. Users in the creative and digital industries

Individuals who currently design interactions as part of their professional practice (including many artists, musicians, and indie game designers) will be able to use sensors more effectively and efficiently in their work. Other people who currently employ sensors for non-commercial, personal use, such as hackers/makers, may become more likely to commercialise their designs as the sophistication of interactions they can create improves.

3. School students using sensors to learn about computing

The introduction of 1 million BBC micro:bits into UK classrooms this year reflects recognition that work with sensors can engage students in learning computing, encouraging creative exploration of projects that connect computing to the physical world. Our project greatly expands the set of interactions that non-experts can easily build using the micro:bit's accelerometer, and our findings will directly inform the design of future tools for students experimenting with sensors and creating new interactions. The software tools released by our project will also be immediately usable by university students in physical computing classrooms who are building new interactions with sensors and platforms such as Arduino and Raspberry Pi. Increased student engagement with electronics presents opportunities for additional, longer-term economic benefits: students who build solid foundations in STEM skills will be better prepared to contribute to the 21st-century workforce and to become technology innovators themselves.

Publications

 
Description 4i: Immersive Interaction design for Indie developers with Interactive machine learning
Amount £497,828 (GBP)
Funding ID EP/S02753X/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Academic/University
Country United Kingdom
Start 09/2019 
End 08/2021
 
Description AHRC Research Grants - Standard
Amount £799,512 (GBP)
Funding ID AH/R002657/1 
Organisation Arts & Humanities Research Council (AHRC) 
Sector Public
Country United Kingdom
Start 05/2018 
End 04/2021
 
Description Art, Artifice & Intelligence: A UK-Japan partnership exploring art and AI
Amount £49,921 (GBP)
Funding ID ES/S014128/1 
Organisation Economic and Social Research Council 
Sector Public
Country United Kingdom
Start 01/2019 
End 04/2020
 
Description Google Focused Research Award
Amount $20,000 (USD)
Organisation Google 
Sector Private
Country United States
Start 02/2019 
End 09/2019
 
Title Wekinator with Interactive Feature Visualisation and Selection 
Description We have implemented two iterations of mixed-initiative interfaces for end-user feature engineering within a new version of the Wekinator software. Wekinator is an open-source, cross-platform software tool for end-user interactive machine learning, which I have been developing since 2008. It has been downloaded over 25,000 times since 2015. The main users are digital artists and musicians, designers, students, and "hackers"/"makers". Wekinator is also used in the curriculum at a number of universities (including Goldsmiths, University of Colorado Boulder, New York University, and Columbia), and in the online course (MOOC) "Machine learning for musicians and artists" on the Kadenze platform. Improvements to this platform will thus have a wide reach. We have implemented our first approaches for improving end-user feature engineering within the Wekinator software, and this is currently available on GitHub (see the link below; current work is in the "feature/" branches of the repository). We are conducting the third round of iterative evaluations with users now, and we will continue to refine the end-user feature engineering capabilities of the software based on our research findings. A minimal sketch of how sensor features are typically streamed to Wekinator appears after this record.
Type Of Technology Software 
Year Produced 2018 
Open Source License? Yes  
Impact Although the software was just released to students in November 2018, several Goldsmiths students have already used it to realise creative projects (e.g., a game built in Unity with Leap Motion that enables players to define their own gestures). 
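
For readers unfamiliar with how sensor data typically reaches Wekinator, the sketch below streams a three-value input vector to a locally running Wekinator instance over OSC using the python-osc library. The port (6448) and address (/wek/inputs) are Wekinator's usual defaults; the library choice and the synthesised sensor values are assumptions for illustration, not part of the project's own codebase.

    import math
    import time
    from pythonosc.udp_client import SimpleUDPClient

    # Wekinator listens for input features on OSC port 6448 at /wek/inputs by default;
    # adjust these values if your Wekinator instance is configured differently.
    client = SimpleUDPClient("127.0.0.1", 6448)

    # Stream a 3-value input vector (e.g. accelerometer x, y, z) at roughly 50 Hz.
    # Here the values are synthesised; a real system would read them from the sensor.
    for i in range(500):
        t = i / 50.0
        client.send_message("/wek/inputs", [math.sin(t), math.cos(t), 0.0])
        time.sleep(0.02)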
 
Description Ars Electronica 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact I gave an invited talk at Ars Electronica, a top-tier digital arts festival, in Linz, Austria, in September 2017. The talk was titled "Machine Learning as Creative, Collaborative Design Tool." Several hundred audience members attended, most of whom were professional digital artists, creative technologists, or digital arts enthusiasts. As a result of this event, I was also interviewed about creative end-user machine learning for the Austrian radio station Ö1 and filmed for a new documentary about arts and machine learning.
Year(s) Of Engagement Activity 2017
URL https://www.aec.at/ai/en/symposium/
 
Description Google PAIR Symposium 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact I was invited to give a talk about accessible end-user machine learning at a symposium organised by Google's PAIR (People + AI Research Initiative) in Cambridge, Massachusetts, USA in September 2017. Over 100 people attended in person, and the event was live-streamed online. The audience consisted of Google engineers and researchers, professional developers and researchers from other organisations (both academic and industry), and professional digital artists and designers, among others. The talk sparked excellent questions about the value of machine learning as a design tool and about approaches for making machine learning more accessible to broader audiences. I was also invited to apply for Google funding for a research project related to the development of new machine learning teaching resources.
Year(s) Of Engagement Activity 2017
URL https://sites.google.com/view/pair-symposium2017/home
 
Description Guest lectures and workshop at Stanford University 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I gave two guest lectures (one in the Computer Music research group in the Music department, and one in Computer Science) to students and staff at Stanford University in April 2018. These lectures presented my research on making machine learning usable and useful for non-experts. I also gave a one-day workshop, open to members of both departments and attended by two members of the public as well, teaching attendees how to use software developed in my research for machine learning in creative and interactive contexts.

As a result of this work, a number of people at Stanford are now using software developed in my research, both in teaching and in their own research. I was also invited to apply for funding for a collaboration with Stanford researchers (we are still awaiting the outcome of this application).
Year(s) Of Engagement Activity 2018
URL https://hci.stanford.edu/courses/cs547/speaker.php?date=2018-04-20
 
Description Invited talk and workshop at Eyeo Digital Arts Festival in Minneapolis, USA 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact I was invited to give a talk and lead a workshop at the Eyeo digital arts festival held in Minneapolis, MN in 2018. Eyeo 2018 had roughly 500 attendees, comprising mainly digital artists and creative technologists from around the world. The talk focused on my research on making usable machine learning tools for creative practitioners. I also led a day-long workshop that taught approximately 30 attendees how to use machine learning to create new real-time interactions with sensors, using software and knowledge produced in my research.
Year(s) Of Engagement Activity 2018
URL http://eyeofestival.com
 
Description Invited talk at DeepMind 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Industry/Business
Results and Impact I gave an invited talk to researchers and engineers at DeepMind (an AI company based in London). Approximately 60 people attended. My talk encouraged people to consider how machine learning can be useful to non-experts, including in end-user design applications, and how to make it more accessible to non-computer scientists.
Year(s) Of Engagement Activity 2018
 
Description SXSW 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact I spoke at South by Southwest (SXSW), a top technology, arts, and entertainment festival, in Austin, Texas, USA in March 2018. I was a member of a panel on "AI and Radical Inclusion," speaking about this project and related work to make machine learning more accessible to non-computer-scientists. Because of this panel, I met the founders of a Los Angeles-based start-up, Artificial Knowing, who have been using my software to run hands-on outreach workshops that teach members of the general public about machine learning and engage them more deeply in conversations about algorithmic bias and discrimination, as well as about new ways that machine learning might be used to benefit them. This connects me to a new group of potential beneficiaries of the software and techniques under development in this project. I also plan to build on some of their engagement activities for new public engagement activities I run in the UK.
Year(s) Of Engagement Activity 2018
URL https://schedule.sxsw.com/2018/events/PP80547