Data- and model-based Reinforcement Learning for Performance, Requirements, and Multi-Agent setups

Lead Research Organisation: University of Oxford

Abstract

Brief description of the context of the research including potential impact:

Despite many recent successes in the field of AI, AI systems can still only solve a narrow set of tasks in a restricted environment. Reinforcement learning (RL) is a machine learning technique that holds promise for achieving generality because almost all real-world cognitive tasks can be cast as reinforcement learning problems. In this setting, an agent is coupled with an environment and receives reward according to the action it takes in each situation. The agent must learn a policy of actions that maximises its expected cumulative future reward.
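For concreteness, this objective can be written as follows (a minimal sketch using the standard discounted formulation; the discount factor \(\gamma\) is an assumption here, since the abstract refers only to expected cumulative future reward):

    \[
      \pi^{*} \;=\; \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \right], \qquad 0 \le \gamma < 1,
    \]

where the agent in state \(s_t\) draws an action \(a_t \sim \pi(\cdot \mid s_t)\) and receives reward \(r(s_t, a_t)\).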
Two key shortcomings limiting the applications of current RL systems are reward misspecification and sample inefficiency. Reward misspecification refers to the difficulty a user faces in codifying exactly what they want in an objective function. This can result in negative side effects or 'reward hacking', where an agent learns to exploit a loophole in the objective function to gain reward for undesired behaviours. Sample inefficiency refers to the fact that RL agents must currently acquire vast amounts of experience before reaching any degree of competence at a task.
Inverse Reinforcement Learning (IRL) and Active Learning aim to address these shortcomings. IRL seeks to recover the objective function from observations of optimal behaviour. Several approaches to IRL have recently been put forward, including Maximum Entropy IRL, Cooperative IRL, and Bayesian IRL. The idea behind Active Learning is that by prioritising training on the data, trajectories, or samples that would yield the greatest learning effect, one can significantly increase the sample efficiency of learning systems (including RL agents and IRL algorithms). By addressing these shortcomings in existing RL systems, I will be advancing and expediting the project of creating safe and scalable RL systems to tackle real-world problems and benefit humanity.
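As an illustration of the Active Learning principle (a minimal sketch only, not the project's method; the entropy-based acquisition rule, the function names, and the example data are assumptions introduced here), unlabelled samples can be prioritised by how uncertain the current model is about them:

    import numpy as np

    def predictive_entropy(probs):
        # Shannon entropy of each row of predictive class probabilities.
        return -np.sum(probs * np.log(probs + 1e-12), axis=1)

    def select_queries(probs, k):
        # Return indices of the k samples the model is least certain about;
        # labelling (or training on) these first is expected to give the
        # greatest learning effect per sample.
        return np.argsort(predictive_entropy(probs))[-k:]

    # Hypothetical predictive probabilities for five unlabelled samples.
    probs = np.array([[0.98, 0.01, 0.01],
                      [0.34, 0.33, 0.33],
                      [0.70, 0.20, 0.10],
                      [0.50, 0.49, 0.01],
                      [0.90, 0.05, 0.05]])
    print(select_queries(probs, 2))  # indices of the two most uncertain samples

Analogous acquisition rules can be applied to the trajectories an RL agent gathers or to the demonstrations consumed by an IRL algorithm.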

Aims and Objectives:

- Develop novel approaches to combat reward misspecification and sampling inefficiencies.
- Extend existing frameworks to multi-agent settings.

Novelty of the research methodology:

AI safety is a nascent field that aims to address the potential near-, medium-, and long-term risks of AI technologies. Current concerns include the effects of social media, algorithmic bias, security, and privacy, and as the applications of AI become more powerful and pervasive, research progress should increasingly be seen through a safety lens. With an eye on safety, we hope to improve upon existing RL approaches and extend existing frameworks to multi-agent settings.

Alignment to EPSRC's strategies and research areas:

- Artificial Intelligence technologies
- Statistics and applied probability
- Theoretical Computer Science

Companies or collaborators involved: None

Planned Impact

AIMS's impact will be felt across domains of acute need within the UK. We expect AIMS to benefit: UK economic performance, through start-up creation; existing UK firms, both through research and addressing skills needs; UK health, by contributing to cancer research, and quality of life, through the delivery of autonomous vehicles; UK public understanding of and policy related to the transformational societal change engendered by autonomous systems.

Autonomous systems are acknowledged by essentially all stakeholders as important to the future UK economy. PwC claim that there is a £232 billion opportunity offered by AI to the UK economy by 2030 (10% of GDP). AIMS has an excellent track record of leadership in spinout creation, and will continue to foster the commercial projects of its students, through the provision of training in IP, licensing and entrepreneurship. With the help of Oxford Science Innovation (investment fund) and Oxford University Innovation (technology transfer office), student projects will be evaluated for commercial potential.

AIMS will also concretely contribute to UK economic competitiveness by meeting the UK's needs for experts in autonomous systems. To meet this need, AIMS will train cohorts with advanced skills that span the breadth of AI, machine learning, robotics, verification and sensor systems. The relevance of the training to the needs of industry will be ensured by the industrial partnerships at the heart of AIMS. These partnerships will also ensure that AIMS will produce research that directly targets UK industrial needs. Our partners span a wide range of UK sectors, including energy, transport, infrastructure, factory automation, finance, health, space and other extreme environments.

The autonomous systems that AIMS will enable also offer the prospect of epochal change in the UK's quality of life and health. As put by former Digital Secretary Matt Hancock, "whether it's improving travel, making banking easier or helping people live longer, AI is already revolutionising our economy and our society." AIMS will help to realise this potential through its delivery of trained experts and targeted research. In particular, two of the four Grand Challenge missions in the UK Industrial Strategy highlight the positive societal impact underpinned by autonomous systems. The "Artificial Intelligence and data" challenge has as its mission to "Use data, Artificial Intelligence and innovation to transform the prevention, early diagnosis and treatment of chronic diseases by 2030". To this mission, AIMS will contribute the outputs of its research pillar on cancer research. The "Future of mobility" challenge highlights the importance that autonomous vehicles will have in making transport "safer, cleaner and better connected." To this challenge, AIMS offers the world-leading research of its robotic systems research pillar.

AIMS will further promote the positive realisation of autonomous technologies through direct influence on policy. The world-leading academics amongst AIMS's supervisory pool are well connected to policy formation, e.g. Prof Osborne serves as a Commissioner on the Independent Commission on the Future of Work. Further, Dr Dan Mawson, Head of the Economy Unit, Economy and Strategic Analysis Team at BEIS, will serve as an advisor to AIMS, ensuring bidirectional influence between policy objectives and AIMS research and training.

Broad understanding of autonomous systems is crucial in making a society robust to the transformations they will engender. AIMS will foster such understanding through its provision of opportunities for AIMS students to directly engage with the public. Given the broad societal importance of getting autonomous systems right, AIMS will deliver core training on the ethical, governance, economic and societal implications of autonomous systems.

Publications


Studentship Projects

Project Reference    Relationship   Related To      Start        End          Student Name
EP/S024050/1                                        01/10/2019   31/03/2028
2242815              Studentship    EP/S024050/1    01/10/2019   30/09/2023   James Fox