Project Title: Integrating computer vision with neural interfacing for semi-autonomous control of robotic limbs.

Lead Research Organisation: Imperial College London
Department Name: Bioengineering

Abstract

Background: Humans depend on their hands for a multitude of everyday activities, so the loss of a hand profoundly affects how a person interacts with the objects around them. A prosthesis can partially substitute for the lost limb and improve an amputee's quality of life. However, most prosthetic systems rely on sequential and proportional control, in which the user adjusts each degree of freedom of the prosthesis through separate input actions [1]. The user is therefore responsible for most of the steps of grasping, a process that biologically unfolds as a sequence of phases, from planning to execution, and integrates sensory information from multiple sources [1].
Aims: This project will pursue the idea that sensory information, including visual information, can be collected by the robotic system itself, enabling semi-autonomous control of the prosthesis so that the user needs fewer input actions to grasp an object with the correct alignment. The project will therefore study the potential of shared control for grasping objects with a prosthesis. Cameras, and potentially other sensors such as accelerometers, will be mounted on the prosthesis itself, and the system will combine the information they provide with input from the user (e.g. myoelectric control). The end goal is a complete embedded solution.
Research steps: Alongside a thorough literature review of the research area, we will select the sensors to be incorporated into the prosthesis and their mounting points. Computer vision and machine learning algorithms will be implemented to select the optimal grasp type based on factors such as the object's properties and the most appropriate contact points. In parallel, we will develop a robust electromyography (EMG) interface for the prosthesis. The computer-vision grasp-selection system will then be integrated with the EMG interface to create a prototype with fully embedded, shared control of the two systems. Finally, user studies will assess whether the completed system improves grasping performance compared with a standard multiple-degree-of-freedom prosthesis.
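To make the shared-control idea concrete, the following is a minimal illustrative sketch: the vision system proposes a grasp type from coarse object properties, while the user's EMG activation acts only as a trigger, so the user commits to a grasp rather than steering every degree of freedom. All class names, grasp labels, and thresholds here are hypothetical assumptions for illustration, not the project's actual design.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    shape: str       # e.g. "cylinder", "sphere", "flat" (toy categories)
    width_mm: float  # estimated graspable width from the camera

def select_grasp(obj: DetectedObject) -> str:
    """Map coarse object geometry to a grasp type (illustrative rules only)."""
    if obj.shape == "cylinder":
        return "power" if obj.width_mm > 40 else "tripod"
    if obj.shape == "sphere":
        return "spherical"
    if obj.shape == "flat":
        return "lateral"   # key/pinch grip for thin, flat objects
    return "precision"     # conservative default for unknown shapes

def shared_control_step(obj: DetectedObject, emg_activation: float,
                        threshold: float = 0.6):
    """Execute the vision-selected grasp only when the user's EMG
    activation crosses the trigger threshold (the user keeps veto power)."""
    if emg_activation >= threshold:
        return select_grasp(obj)
    return None  # user has not committed to grasping

print(shared_control_step(DetectedObject("cylinder", 65.0), 0.8))  # power
print(shared_control_step(DetectedObject("sphere", 70.0), 0.3))    # None
```

In a real system the rule table would be replaced by a learned model over camera features, and the EMG trigger by a decoded user intent, but the division of labour (vision chooses the grasp, the user initiates it) is the same.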
