Enhancing Security and Robustness of AI/ML Systems in Robotics
Lead Research Organisation: UNIVERSITY COLLEGE LONDON
Department Name: Computer Science
Abstract
This PhD research project focuses on enhancing the security and robustness of
AI/ML systems in robotics. As these technologies become integral to autonomous systems,
they introduce vulnerabilities that adversaries can exploit, potentially compromising safety
and functionality. The research aims to identify and address security threats in applications
like autonomous path planning, computer vision, and reinforcement learning.
The study involves simulating adversarial attacks, such as brute-force attacks on path-planning
algorithms, to uncover weaknesses and environmental factors that can undermine system integrity.
By implementing these attacks on both simulated and real-world robotic platforms, the research seeks to understand the limitations of current AI/ML algorithms under adversarial
conditions.
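To make the attack setting concrete, here is a minimal sketch of a brute-force attack on a path planner. It assumes a 2D occupancy grid, a BFS shortest-path planner, and a single-obstacle perturbation model; the abstract does not specify the project's actual planning stack, so all of these are illustrative placeholders.

```python
from collections import deque

def plan(grid, start, goal):
    """Shortest path length on a 4-connected occupancy grid (BFS); None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None

def brute_force_attack(grid, start, goal):
    """Try blocking each free cell in turn; keep the placement that degrades the plan most."""
    baseline = plan(grid, start, goal)
    if baseline is None:
        return None, None  # nothing to attack: goal already unreachable
    worst_cell, worst_cost = None, baseline
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == 0 and (r, c) not in (start, goal):
                grid[r][c] = 1                  # inject a single obstacle
                cost = plan(grid, start, goal)
                grid[r][c] = 0                  # restore the map
                if cost is None:                # path severed: maximal damage
                    return (r, c), None
                if cost > worst_cost:
                    worst_cell, worst_cost = (r, c), cost
    return worst_cell, worst_cost

# Corridor map: the direct route runs along the top row; the only detour is twice as long.
grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
start, goal = (0, 0), (0, 4)
cell, cost = brute_force_attack(grid, start, goal)
print(f"baseline cost {plan(grid, start, goal)}, worst single obstacle {cell} -> cost {cost}")
```

On physical platforms the perturbation would be enacted through the environment (placed objects, sensor interference) rather than direct map edits, but the evaluation loop is the same: perturb, re-plan, and measure the degradation.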
A comprehensive security assessment will catalogue various AI/ML uses in robotics, prioritising
those most at risk. The project will evaluate multiple AI/ML applications in repeated cycles, using
adversarial machine learning methods to test and improve their robustness. This iterative
approach ensures a detailed examination of vulnerabilities and the development of effective
countermeasures.
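As one example of the kind of adversarial machine learning method such an evaluation cycle could apply, the sketch below uses the standard Fast Gradient Sign Method (FGSM) to perturb classifier inputs and reports accuracy under attack; the PyTorch model, data, and perturbation budget are stand-ins rather than details from the project.

```python
import torch

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: shift each pixel by +/-epsilon in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()     # worst-case step inside the L-infinity ball
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in a valid range

def adversarial_accuracy(model, batches, epsilon=0.03):
    """Fraction of examples still classified correctly after the attack."""
    correct = total = 0
    for x, y in batches:
        preds = model(fgsm_attack(model, x, y, epsilon)).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total

# Toy stand-in classifier and data; a real evaluation would use the robot's trained models.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
print(f"accuracy under FGSM: {adversarial_accuracy(model, [(x, y)]):.2f}")
```

Each evaluation cycle can then compare this metric before and after a candidate countermeasure, such as adversarial training, to quantify the robustness gained.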
The ultimate goal is to create a robust security framework applicable to various robotic systems,
ensuring safe and reliable operations across industries. This research addresses the critical need
for improved security in AI/ML applications, providing solutions that support the evolving
landscape of robotics technology.
People
| Name | ORCID iD |
|---|---|
| Adrian Szvoren (Student) | |
Studentship Projects
| Project Reference | Relationship | Related To | Start | End | Student Name |
|---|---|---|---|---|---|
| EP/S022503/1 | | | 31/03/2019 | 23/11/2028 | |
| 2872049 | Studentship | EP/S022503/1 | 30/09/2023 | 29/09/2027 | Adrian Szvoren |