Robot Learning and Control for Manipulation and Self-organization of Complex Systems

Lead Research Organisation: Imperial College London
Department Name: Design Engineering

Abstract

Deep Reinforcement Learning (Deep RL) has recently achieved great success in many complex game environments, including Atari, Go, and, most recently, StarCraft II. However, the sample complexity of these methods remains far too high for them to be applied directly to physical domains such as robotics. My research therefore focuses on developing new, generic methods for improving the sample efficiency of Deep RL algorithms. My recent work has pursued this in three ways: exploiting structure in the representation of the action space so that Deep RL algorithms remain scalable and sample-efficient in high-dimensional action spaces; formalising how time limits should be treated in RL; and prioritising starting states to improve exploration and, thus, sample efficiency.
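As a rough illustration of the first of these ideas (the action branching architectures listed as publication 1 below), here is a minimal sketch assuming PyTorch. The class name, parameter names, layer sizes, and the usage example are illustrative rather than the published implementation, and the paper's Branching Dueling Q-Network additionally uses a dueling decomposition that is omitted here.

# Minimal sketch of an action-branching value network (illustrative, not the
# published implementation). Assumes PyTorch is available.
import torch
import torch.nn as nn


class BranchingQNetwork(nn.Module):
    """Shared state representation with one Q-value branch per action dimension.

    Instead of enumerating the joint discrete action space (which grows
    exponentially with the number of action dimensions), each dimension gets
    its own small output head over its sub-actions.
    """

    def __init__(self, state_dim, num_dims, bins_per_dim, hidden=128):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One linear head per action dimension, each over `bins_per_dim` sub-actions.
        self.branches = nn.ModuleList(
            [nn.Linear(hidden, bins_per_dim) for _ in range(num_dims)]
        )

    def forward(self, state):
        features = self.shared(state)
        # Shape: (batch, num_dims, bins_per_dim)
        return torch.stack([branch(features) for branch in self.branches], dim=1)


# Greedy action selection decomposes per dimension:
net = BranchingQNetwork(state_dim=10, num_dims=6, bins_per_dim=5)
q_values = net(torch.randn(1, 10))
greedy_action = q_values.argmax(dim=-1)  # one sub-action index per action dimension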

The following is a list of my publications to date (1 and 2 were published at top conferences for AI and machine learning, and 3 is a preprint):
1) Action Branching Architectures for Deep Reinforcement Learning, AAAI Conference on Artificial Intelligence (AAAI), 2018.
2) Time Limits in Reinforcement Learning, International Conference on Machine Learning (ICML), 2018.
3) Prioritizing Starting States for Reinforcement Learning, arXiv:1811.11298, 2018.
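Publication 2 above formalises how time limits should be handled in RL. As a rough illustration, here is a minimal sketch, in plain Python, of one of the cases the paper distinguishes: bootstrapping through time-limit truncations when the limit is only a training convenience rather than part of the task. The function and variable names are illustrative, and the complementary time-aware treatment (including the remaining time in the state) is omitted.

# Illustrative one-step Q-learning target; all names are assumptions for this sketch.
def q_learning_target(reward, next_q_max, done, timed_out, gamma=0.99):
    """Bootstrapped target for a single transition.

    done:      the episode ended after this transition (for any reason).
    timed_out: it ended only because the time limit was reached.
    """
    if done and not timed_out:
        # Genuine terminal state: no future value to bootstrap from.
        return reward
    # Ordinary transition, or an episode cut off only by the time limit:
    # keep bootstrapping from the next state's value estimate.
    return reward + gamma * next_q_max


# A transition truncated by the time limit still bootstraps:
print(q_learning_target(1.0, 10.0, done=True, timed_out=True))   # 10.9 = 1.0 + 0.99 * 10.0
print(q_learning_target(1.0, 10.0, done=True, timed_out=False))  # 1.0 (true termination)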


Studentship Projects

Project Reference   Relationship   Related To      Start        End          Student Name
EP/N509486/1        -              -               01/10/2016   30/09/2021   -
1863439             Studentship    EP/N509486/1    01/10/2016   15/12/2020   Arash Tavakoli