Turing AI Fellowship: Trustworthy Machine Learning

Lead Research Organisation: University of Cambridge
Department Name: Engineering

Abstract

Machine learning (ML) systems are increasingly being deployed across society, in ways that affect many lives. We must ensure that there are good reasons for us to trust their use. That is, as Baroness Onora O'Neill has said, we should aim for reliable measures of trustworthiness. Three key measures are:
Fairness - measuring and mitigating undesirable bias against individuals or subgroups;
Transparency/interpretability/explainability - improving our understanding of how ML systems work in real-world applications; and
Robustness - aiming for reliably good performance even when a system encounters different settings from those in which it was trained.

This fellowship will advance work on key technical underpinnings of fairness, transparency and robustness of ML systems, and develop timely key applications that work at scale in real-world health and criminal justice settings, focusing on the interpretability and robustness of medical imaging diagnosis systems and of criminal recidivism prediction. The project will connect with industry, social scientists, ethicists, lawyers, policy makers, stakeholders and the broader public, aiming for two-way engagement: to listen carefully to needs and concerns in order to build the right tools, and in turn to inform policy, users and the public in order to maximise beneficial impacts for society.

This work is of key national importance for the core UK strategy of being a world leader in safe and ethical AI. As the Prime Minister said in his first speech to the UN, "Can these algorithms be trusted with our lives and our hopes?" If we get this right, we will help ensure fair, transparent benefits across society while protecting citizens from harm, and avoid the potential for a public backlash against AI developments. Without trustworthiness, people will have reason to be afraid of new ML technologies, presenting a barrier to responsible innovation. Trustworthiness removes frictions preventing people from embracing new systems, with great potential to spur economic growth and prosperity in the UK, while delivering equitable benefits for society. Trustworthy ML is a key component of Responsible AI - just announced as one of four key themes of the new Global Partnership on AI.

Further, this work is needed urgently - ML systems are already being deployed in ways which impact many lives. In particular, healthcare and criminal justice are crucial areas with timely potential to benefit from new technology to improve outcomes, consistency and efficiency, yet there are important ethical concerns which this work will address. The current Covid-19 pandemic, and the Black Lives Matter movement, indicate the urgency of these pressing issues.
 
Description Advisory board member of the Centre for Data Ethics and Innovation
Geographic Reach National 
Policy Influence Type Participation in an advisory committee
Impact I have provided advice since 2018, continuing now. Reports include: the CDEI review into bias in algorithmic decision-making (Nov 2020), https://www.gov.uk/government/publications/cdei-publishes-review-into-bias-in-algorithmic-decision-making; and the CDDO algorithmic transparency standard, produced with the help of the CDEI (Nov 2021), https://www.gov.uk/government/collections/algorithmic-transparency-standard
URL https://www.gov.uk/government/collections/algorithmic-transparency-standard
 
Description Co-organised and co-hosted workshops with data protection and healthcare regulators to discuss good governance
Geographic Reach National 
Policy Influence Type Contribution to new or Improved professional practice
Impact Contributing to updates on governance standards being developed. More information should be available later.
 
Description Contributed to Science and Technology Select Committee on reproducibility of research
Geographic Reach National 
Policy Influence Type Gave evidence to a government review
 
Description Contributed to the Justice and Home Affairs Committee's inquiry on new technologies and the application of the law
Geographic Reach National 
Policy Influence Type Gave evidence to a government review
URL https://committees.parliament.uk/writtenevidence/39076/pdf/
 
Description Accenture AI Leaders Podcast on Responsible AI 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Accenture AI Leaders Podcast on Responsible AI
Year(s) Of Engagement Activity 2022
URL https://aileaders.libsyn.com/ai-leaders-podcast-14-responsible-ai
 
Description Co-organised Alan Turing Institute 2021 Workshop on Interpretability, Safety, and Security in AI 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Co-organised a two-day event that brought together leading researchers along with other stakeholders in industry and society to discuss issues surrounding trustworthy artificial intelligence. The conference presented an overview of the state of the art across the wide area of trustworthy artificial intelligence, including machine learning accountability, fairness, privacy, and safety; it also surveyed emerging directions in trustworthy artificial intelligence and engaged with academia, industry, policy makers, and the wider public.

The conference had two parts, running on two consecutive days: an academic-oriented event on the first day, followed by a public-oriented event on the second.
Year(s) Of Engagement Activity 2021
URL https://www.turing.ac.uk/events/interpretability-safety-and-security-ai
 
Description Co-organised ELLIS Human-Centric Machine Learning Workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Co-organised a day-long workshop of talks and panels on human-centric machine learning. The workshop brought together leading experts, from diverse backgrounds, at the forefront of two themes:
- The differential treatment by algorithms of historically under-served and disadvantaged communities
- The development of machine learning systems to assist humans for better performance, rather than replace them.
The day included talks followed by Q&A and panel discussions with audience participation.
Year(s) Of Engagement Activity 2021
URL https://sites.google.com/view/hcml2021
 
Description Co-organised NeurIPS 2021 Workshop on AI for Science: Mind the Gaps 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Co-organised a one-day workshop on gaps that stifle AI advancement:
- Unrealistic methodological assumptions or directions.
- Overlooked scientific questions.
- Limited exploration on the intersections of multiple disciplines.
- The science of science: how AI can facilitate the practice of scientific discovery itself, a topic that is often undiscussed.
- Responsible use and development of AI for science.

The workshop aimed to bridge each of the above gaps by:
- Discussing directions in ML that are likely/unlikely to have an impact across scientific disciplines and identifying the reasons behind them.
- Bringing to the front key scientific questions with untapped potential for use of ML methodological advances.
- Pinpointing grand challenges at the intersection of multiple scientific disciplines (biology, chemistry, physics, neuroscience, etc).
- Highlighting how ML can change or complement classic scientific methods and transform the science of scientific discovery itself.

The workshop included invited and contributed talks, poster sessions and a panel discussion, and a mentorship programme was also facilitated. 52 papers were accepted for the workshop.
Year(s) Of Engagement Activity 2021
URL https://ai4sciencecommunity.github.io/
 
Description Co-organised NeurIPS 2021 Workshop on Human-Centered AI 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Co-organised a one-day workshop aimed at bringing together researchers and practitioners from the NeurIPS and Human-Centered AI (HCAI) communities and others with convergent interests in HCAI. The workshop had an emphasis on diversity and discussion, and explored research questions that stem from the increasingly widespread usage of machine learning algorithms across all areas of society, with a specific focus on understanding both technical and design requirements for HCAI systems, as well as on how to evaluate the efficacy and effects of HCAI systems. The workshop had 20 accepted papers under four themes: Ethics, Human(s) and AI(s), Methods, and XAI (Explainable AI).
Year(s) Of Engagement Activity 2021
URL https://sites.google.com/view/hcai-human-centered-ai-neurips/home
 
Description Co-organised NeurIPS 2021 Workshop on Privacy in Machine Learning 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Co-organised a one-day workshop that focused on privacy-preserving machine learning techniques for large-scale data analysis, in both the distributed and centralised settings, and on scenarios that highlight the importance of and need for these techniques (e.g., via privacy attacks). The workshop included talks from invited speakers, panel discussions and presentations of accepted workshop papers. 36 papers were accepted for the workshop.
Year(s) Of Engagement Activity 2021
URL https://priml2021.github.io/
 
Description Radio 4 Rutherford and Fry discussant on AI alignment following Stuart Russell's final Reith lecture 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact Radio 4 Rutherford and Fry discussant on AI alignment following Stuart Russell's final Reith lecture
Year(s) Of Engagement Activity 2021
URL https://www.bbc.co.uk/programmes/m0012q27