Incorporating self- and world-models in neural networks for flexible robot learning and control
Lead Research Organisation: University of Sheffield
Department Name: Computer Science
Abstract
Established methods for robotic control are inflexible when adapting to new tasks. Recently, deep reinforcement learning, in which deep neural networks learn control policies through interaction, has shown promise in self-learning to solve tasks. However, it requires a huge number of often-random interactions with the environment for each new task. In contrast, human brains learn models of their bodies and of the environment, use them to predict and plan decisions and movements efficiently, and can adapt online. Existing model-based schemes for control have been hampered by poorly learned models. This project will distill and improve diverse advances in cognitively inspired model-based reinforcement learning to enable robots to self-learn new tasks and adapt to perturbations quickly and flexibly. We will learn a multi-level model of a compliant robotic arm and its environment, and then use this model for planning and control. This architecture will enable the robot to self-learn to attain goal states, by planning at a higher, human-interpretable level on its internal model with minimal real-world interactions, and to adapt online. The student will benchmark the architecture on accurate reaching and block stacking, building towards industrial use cases.
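To make the model-based idea in the abstract concrete, the following is a minimal, generic sketch (not the project's actual architecture): an agent on a hypothetical 1D reaching task first learns a forward model of its unknown dynamics from a few interactions, then plans on that internal model with receding-horizon random shooting, so most "trial and error" happens in the model rather than the real world. All names (`TRUE_GAIN`, `step`, `plan`) are illustrative assumptions.

```python
import random

# Hypothetical 1D reaching task: applying a velocity command moves the
# "arm" by an unknown gain. The agent must discover this gain, then plan.
TRUE_GAIN = 0.9  # unknown to the agent


def step(state, action):
    """Real environment: next position after applying a velocity command."""
    return state + TRUE_GAIN * action


# 1. Model learning: fit the gain by least squares from a few random
#    real-world interactions (the only environment access needed).
random.seed(0)
data = []
s = 0.0
for _ in range(20):
    a = random.uniform(-1.0, 1.0)
    s_next = step(s, a)
    data.append((a, s_next - s))
    s = s_next
learned_gain = sum(a * d for a, d in data) / sum(a * a for a, _ in data)


# 2. Planning by random shooting: score candidate action sequences by
#    rolling them out on the LEARNED model, execute only the first action,
#    then replan (receding-horizon control).
def plan(state, goal, horizon=2, n_candidates=200):
    best_first, best_cost = 0.0, float("inf")
    for _ in range(n_candidates):
        seq = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
        sim, cost = state, 0.0
        for a in seq:
            sim = sim + learned_gain * a  # imagined rollout, no real step
            cost += abs(sim - goal)      # penalise distance at every step
        if cost < best_cost:
            best_first, best_cost = seq[0], cost
    return best_first


# Execute: replan at each step; the arm should end near the goal.
state, goal = 0.0, 2.0
for _ in range(10):
    state = step(state, plan(state, goal))
```

Because the learned model is queried thousands of times per real action, the real-world interaction budget stays tiny; richer versions of this loop (learned multi-level models, smarter planners) are the kind of scheme the abstract describes.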
Studentship Projects
Project Reference | Relationship | Related To | Start | End | Student Name
---|---|---|---|---|---
EP/T517835/1 | | | 01/10/2020 | 30/09/2025 |
2784464 | Studentship | EP/T517835/1 | 07/01/2022 | 06/07/2025 | Reabetswe Nkhumise