Robustness of Deep Learning Perception Models

Lead Research Organisation: University of Oxford


EPSRC areas: Artificial intelligence technologies, Artificial intelligence and robotics theme, Verification and correctness
Company involved: Toyota Europe

Brief description of the context of the research including potential impact:
Deep learning (DL) has advanced dramatically in recent years, leading to widespread uptake of the technology in real-world applications, including face recognition and autonomous driving. However, perception models can be unstable with respect to adversarial examples, where a small modification to an input causes a misclassification. To ensure the safety and security of applications, rigorous methodologies are needed that facilitate the development of robust perception models that can be incorporated within automated controllers for robotic applications. A typical approach to evaluating the robustness of DL models is heuristic search for adversarial examples (e.g., gradient-based or stochastic search), which offers no guarantee that adversarial examples do not exist when none are found. An alternative, more powerful method is automated verification, which aims to provide provable guarantees on model behaviour in a given scenario. While there has been much recent progress in this direction, the focus has mainly been on local robustness with respect to simple input manipulations, and there is a lack of frameworks that support the natural geometric transformations and contextual effects typical of autonomous driving applications. Successful deployment of DL perception models crucially depends on the ability to provably guarantee their robustness against a wide range of natural and contextual transformations. This project will develop new methods for providing such semantic robustness guarantees for DL perception models.
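To make the gradient-based heuristic search concrete, the following is a minimal illustrative sketch (not part of the project's methodology) of an FGSM-style attack on a toy logistic-regression "perception model". All weights and inputs here are hypothetical; the point is that a small signed-gradient perturbation to the input can flip the predicted label.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    # Toy binary classifier: p(y=1 | x) = sigmoid(w.x + b)
    return sigmoid(w @ x + b)

def fgsm(w, b, x, y, eps):
    # FGSM-style step: for binary cross-entropy loss, the gradient
    # with respect to the input x is (p - y) * w; we move x a small
    # distance eps in the signed-gradient direction to increase the loss.
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Illustrative model and input (hypothetical values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.2, 0.1])   # clean input, classified as class 1 (p > 0.5)
y = 1.0

x_adv = fgsm(w, b, x, y, eps=0.3)
print(predict(w, b, x) > 0.5)      # True: clean input classified correctly
print(predict(w, b, x_adv) > 0.5)  # False: small perturbation flips the label
```

Note that a failed search of this kind proves nothing: if no adversarial example is found, one may still exist, which is precisely the gap that automated verification aims to close.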

Aims and Objectives:
In this research programme, we will build on the techniques developed by the PI for deep learning (e.g., Safety Verification of Deep Neural Networks, CAV 2017) and their extension to Bayesian neural networks (e.g., Probabilistic Safety for Bayesian Neural Networks, UAI 2020) to develop the theoretical foundations for evaluating the robustness of 3D perception models that are amenable to inclusion within automated vehicle and robotic controllers. The research will be informed by concrete application scenarios and sensing frameworks, and will draw on a range of symbolic and neural techniques, including Bayesian learning, uncertainty quantification, constraint solving and causal reasoning. The focus will be on investigating research questions and developing frameworks that are relevant for practical applications.

Novelty of the research methodology:
This project will focus on the intersection between representation learning and semantic robustness. Most prior work has focused on classification robustness under bounded pixel perturbations. We posit that semantic robustness is a stronger robustness condition, requiring not only stable label predictions under perturbations but also that the intermediate representations of an input remain stable under semantic perturbations. Therefore, we will focus on the robustness of representation learning, particularly with regard to complex geometric transformations and contextual changes.
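The distinction can be sketched as follows. This is a hypothetical toy example (a random linear-ReLU encoder over a 2D point cloud, not any model from the project): rather than only checking that the label survives a semantic transformation such as a small rotation, semantic robustness additionally bounds how far the intermediate representation moves.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 2))  # toy encoder weights (hypothetical, random)

def encode(points):
    # Toy encoder: per-point ReLU features, mean-pooled into one
    # 8-dimensional intermediate representation of the input shape.
    return np.maximum(points @ W.T, 0.0).mean(axis=0)

def rotate(points, theta):
    # Semantic perturbation: rotate the whole point cloud by angle theta.
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T

x = rng.normal(size=(50, 2))              # toy input "shape"
r_clean = encode(x)
r_rot = encode(rotate(x, np.deg2rad(5)))  # small semantic transformation

# Representation-level robustness: the relative gap between the clean and
# transformed representations should stay small for small rotations.
rel_gap = np.linalg.norm(r_rot - r_clean) / np.linalg.norm(r_clean)
print(rel_gap)
```

Classification robustness would only ask whether the downstream label agrees on `r_clean` and `r_rot`; the representation-level condition above is strictly stronger, since a stable label can mask a large shift in the underlying features.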

Planned Impact

AIMS's impact will be felt across domains of acute need within the UK. We expect AIMS to benefit: UK economic performance, through start-up creation; existing UK firms, both through research and addressing skills needs; UK health, by contributing to cancer research, and quality of life, through the delivery of autonomous vehicles; UK public understanding of and policy related to the transformational societal change engendered by autonomous systems.

Autonomous systems are acknowledged by essentially all stakeholders as important to the future UK economy. PwC claim that there is a £232 billion opportunity offered by AI to the UK economy by 2030 (10% of GDP). AIMS has an excellent track record of leadership in spinout creation, and will continue to foster the commercial projects of its students, through the provision of training in IP, licensing and entrepreneurship. With the help of Oxford Science Innovation (investment fund) and Oxford University Innovation (technology transfer office), student projects will be evaluated for commercial potential.

AIMS will also concretely contribute to UK economic competitiveness by meeting the UK's needs for experts in autonomous systems. To meet this need, AIMS will train cohorts with advanced skills that span the breadth of AI, machine learning, robotics, verification and sensor systems. The relevance of the training to the needs of industry will be ensured by the industrial partnerships at the heart of AIMS. These partnerships will also ensure that AIMS will produce research that directly targets UK industrial needs. Our partners span a wide range of UK sectors, including energy, transport, infrastructure, factory automation, finance, health, space and other extreme environments.

The autonomous systems that AIMS will enable also offer the prospect of epochal change in the UK's quality of life and health. As put by former Digital Secretary Matt Hancock, "whether it's improving travel, making banking easier or helping people live longer, AI is already revolutionising our economy and our society." AIMS will help to realise this potential through its delivery of trained experts and targeted research. In particular, two of the four Grand Challenge missions in the UK Industrial Strategy highlight the positive societal impact underpinned by autonomous systems. The "Artificial Intelligence and data" challenge has as its mission to "Use data, Artificial Intelligence and innovation to transform the prevention, early diagnosis and treatment of chronic diseases by 2030". To this mission, AIMS will contribute the outputs of its research pillar on cancer research. The "Future of mobility" challenge highlights the importance that autonomous vehicles will have in making transport "safer, cleaner and better connected." To this challenge, AIMS offers the world-leading research of its robotic systems research pillar.

AIMS will further promote the positive realisation of autonomous technologies through direct influence on policy. The world-leading academics amongst AIMS's supervisory pool are well connected to policy formation; for example, Prof Osborne serves as a Commissioner on the Independent Commission on the Future of Work. Further, Dr Dan Mawson, Head of the Economy Unit, Economy and Strategic Analysis Team at BEIS, will serve as an advisor to AIMS, ensuring bidirectional influence between policy objectives and AIMS research and training.

Broad understanding of autonomous systems is crucial in making a society robust to the transformations they will engender. AIMS will foster such understanding through its provision of opportunities for AIMS students to directly engage with the public. Given the broad societal importance of getting autonomous systems right, AIMS will deliver core training on the ethical, governance, economic and societal implications of autonomous systems.



Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
EP/S024050/1                                      30/09/2019   30/03/2028
2579432             Studentship    EP/S024050/1   30/09/2021   29/09/2025   Aleksandar Petrov