Hierarchical Motor Learning - Neuromimetic principles for Human Robotics

Lead Research Organisation: Imperial College London
Department Name: Dept of Bioengineering

Abstract

My aim is to develop robotics and machine learning technology to help people affected by paralysis regain a measure of motor independence. To this end, I aim to adopt Brain-Computer Interfaces (BCIs), or more specifically Brain-Robot Interfaces, to control robotic actuators that restore motor abilities to people with upper arm paralysis resulting from stroke and muscular dystrophy. In this area of robotics, I hope to replace the functionality of the paralysed hand and arm through cognitive operation of appropriate dexterous robotic arms. This cognitive operation is delivered through a neuromimetic artificial intelligence system that decodes the high-level task intentions of the paralysed user and translates these into orchestrated robotic actions. While ambitious in its aims, I believe I can deliver a working proof of principle of this approach in a defined desk-based scenario. Because BCIs focus mainly on single, low-level actions, paralysed users must currently learn low-level tasks such as controlling individual elbow or finger joints, when their actual intention may simply be to pick up a cup. Recent research (Abbott and Faisal, 2012) has aimed to circumvent this problem by using gaze-tracking to decode user intention.

However, this paradigm precludes gaze-independent movements. It will therefore be worthwhile to develop task-level cognitive robotic control interfaces, which would reduce the cognitive load on the end-user and make robotic limbs easier to control. To achieve this, however, it is important to fully understand how tasks are controlled in a hierarchical manner.
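To make the idea of task-level hierarchical control concrete, here is a minimal illustrative sketch (not the project's actual system): a task "grammar" that expands a high-level intention into an ordered sequence of low-level motor primitives. All task and primitive names are hypothetical examples.

```python
# Hypothetical task grammar: non-terminals expand into subtasks,
# terminals are low-level motor primitives the robot can execute.
GRAMMAR = {
    "pick_up_cup": ["reach", "grasp", "lift"],
    "reach":       ["orient_shoulder", "extend_elbow", "open_hand"],
    "grasp":       ["close_fingers"],
    "lift":        ["flex_elbow"],
}

def expand(task):
    """Recursively expand a task into its terminal motor primitives."""
    if task not in GRAMMAR:          # terminal symbol: a motor primitive
        return [task]
    out = []
    for sub in GRAMMAR[task]:
        out.extend(expand(sub))
    return out

primitives = expand("pick_up_cup")
# → ['orient_shoulder', 'extend_elbow', 'open_hand', 'close_fingers', 'flex_elbow']
```

The point is that the user need only express the root intention ("pick up the cup"); the interface, not the user, is responsible for unfolding it into joint-level actions.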

Learning and controlling complex motor behaviours to solve real-world tasks is a computationally hard problem, due to the large number of degrees of freedom associated with movement in three-dimensional space (Shenoy, Sahani and Churchland, 2013). These degrees of freedom are distributed over many independent actuators, which must be coordinated across different subtasks in order to accomplish larger tasks. The sophistication and success that biological systems have achieved is therefore something computer science and robotics seek to emulate. Biological motor systems have evolved over millions of years to be optimised for interaction with their respective environments, so biomimetics - learning from biology - could help us engineer systems that learn to control motor systems. To this end, I take inspiration from neuroscience: the idea of motor primitives (Bernstein, 1967; Thoroughman and Shadmehr, 2000; Ijspeert, Nakanishi and Schaal, 2002) is a close analogue of the terminal symbols of robotic actions. We can think of motor primitives as both sub-tasks and terminal symbols, and of tasks as groupings of terminal symbols that follow rules resembling formal grammars. I will learn these from kinematic data of humans performing such tasks, using machine learning methods such as switching dynamical systems or sparse eigenmotion decomposition (Thomik et al., 2015).
In humans, complex movement patterns are thought to be generated through the chaining of combinations of such motor primitives (d'Avella et al., 2006; Ruckert and d'Avella, 2013). The simplest way to 'decode' these motor primitives, for translation into robotic control commands, is to investigate motor cortical and cerebellar output in animal models: recent research has shown evidence of movement-based functional organisation in motor cortex, and cortical mapping studies are shedding light on biological grasp and reach control strategies (Graziano, 2006; Brown and Teskey, 2014; Hira et al., 2015). Understanding and extracting motor cortical signals and motor primitive kinematics from human hand and arm movement will facilitate a better understanding of the neural representations of movement (Schack and Ritter, 2013).
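The chaining idea above can be sketched in one dimension (an assumed illustrative form, not a published model): a movement built by summing time-shifted, weighted Gaussian velocity pulses and integrating to position, showing how a complex trajectory can arise from a few stereotyped building blocks.

```python
import numpy as np

def gaussian_primitive(t, centre, width):
    """A stereotyped bell-shaped velocity pulse (one motor primitive)."""
    return np.exp(-0.5 * ((t - centre) / width) ** 2)

t = np.linspace(0.0, 3.0, 300)
dt = t[1] - t[0]
# Chain three primitives: reach out, small correction, then retract
weights = [1.0, 0.4, -1.2]
centres = [0.5, 1.5, 2.4]
velocity = sum(w * gaussian_primitive(t, c, 0.15)
               for w, c in zip(weights, centres))
position = np.cumsum(velocity) * dt   # integrate velocity to position
```

The negative final weight produces a retraction, so the hand ends closer to its start than its furthest excursion - the whole behaviour is specified by just six numbers (three weights, three timings).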

Publications


Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/N509486/1                                   01/10/2016  30/09/2021
1975540            Studentship   EP/N509486/1  30/09/2017  29/09/2021  John Alexander Harston
 
Description Whilst this work is ongoing, we are building predictive models linking human motor behaviour to the visuomotor system, to understand the relationship between the two in complex, real-world environments. We have submitted a preprint on bioRxiv showing that by incorporating full human kinematic information (that is, movement of the whole body) into our models, we can predict future gaze behaviour (where one is going to look next) better than by modelling past gaze behaviour alone - an open challenge in the field of gaze research. Findings like this are an important first step towards understanding the full extent of integrated hierarchical motor control in humans - that is, how we operate in complex environments, perceive our world, and interact with our surroundings.
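A hedged toy version of the comparison described above (synthetic data, not the study's dataset or model): predict the next gaze value from (a) past gaze alone versus (b) past gaze plus body kinematics, and compare prediction error. The feature names and coefficients here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
kinematics = rng.normal(size=(N, 4))           # e.g. head/torso/arm features
true_w = np.array([0.8, -0.5, 0.3, 0.6])
# Simulate gaze as partly autoregressive, partly driven by kinematics
gaze = np.zeros(N)
for i in range(1, N):
    gaze[i] = 0.3 * gaze[i - 1] + kinematics[i - 1] @ true_w + 0.2 * rng.normal()

past_gaze = gaze[:-1].reshape(-1, 1)
target = gaze[1:]
kin = kinematics[:-1]

def lstsq_mse(X, y):
    """Least-squares fit with intercept; mean squared error on the fit."""
    A = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((y - A @ w) ** 2))

mse_gaze_only = lstsq_mse(past_gaze, target)
mse_with_kin = lstsq_mse(np.column_stack([past_gaze, kin]), target)
# Adding kinematic features lowers the error, mirroring the qualitative
# finding described above.
```

Because the simulated gaze genuinely depends on body state, the kinematics-augmented model fits markedly better - the same qualitative effect the preprint reports for real whole-body recordings.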
Exploitation Route Upon completion, future researchers will be able to pick up where this research leaves off and continue developing richer models of hierarchical human visuomotor behaviour. The insights and datasets we gather provide a platform from which others can extend this work, perhaps with greater computational resources or richer datasets.
Sectors Digital/Communication/Information Technologies (including Software), Healthcare, Pharmaceuticals and Medical Biotechnology