Turing AI Fellowship: Trustworthy Machine Learning

Lead Research Organisation: University of Cambridge
Department Name: Engineering


Machine learning (ML) systems are increasingly being deployed across society, in ways that affect many lives. We must ensure that there are good reasons for us to trust their use. That is, as Baroness Onora O'Neill has said, we should aim for reliable measures of trustworthiness. Three key measures are:
Fairness - measuring and mitigating undesirable bias against individuals or subgroups;
Transparency/interpretability/explainability - improving our understanding of how ML systems work in real-world applications; and
Robustness - aiming for reliably good performance even when a system encounters different settings from those in which it was trained.
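As one concrete illustration of the first measure above (not part of the proposal itself), a simple fairness check is the demographic parity gap: the difference in positive-prediction rates between two groups. The function names and toy data below are hypothetical, a minimal sketch only:

```python
def positive_rate(preds, groups, g):
    """Fraction of positive (1) predictions among members of group g."""
    selected = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(selected) / len(selected)

def demographic_parity_gap(preds, groups, group_a, group_b):
    """Absolute gap in positive-prediction rates between two groups.
    A gap of 0.0 means the classifier satisfies demographic parity exactly."""
    return abs(positive_rate(preds, groups, group_a)
               - positive_rate(preds, groups, group_b))

# Toy example: six binary predictions across two groups.
preds  = [1, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups, "a", "b")
# group "a" rate = 2/3, group "b" rate = 1/3, so gap = 1/3
```

Mitigation methods then aim to reduce such gaps without unduly sacrificing accuracy; real deployments use richer criteria (e.g. equalised odds) and established toolkits rather than this sketch.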

This fellowship will advance key technical underpinnings of fairness, transparency and robustness in ML systems, and will develop timely applications that work at scale in real-world health and criminal justice settings, focusing on the interpretability and robustness of medical-imaging diagnosis systems and on criminal recidivism prediction. The project will connect with industry, social scientists, ethicists, lawyers, policymakers, stakeholders and the broader public, aiming for two-way engagement: to listen carefully to needs and concerns in order to build the right tools, and in turn to inform policy, users and the public in order to maximise beneficial impacts for society.

This work is of key national importance for the core UK strategy of being a world leader in safe and ethical AI. As the Prime Minister said in his first speech to the UN, "Can these algorithms be trusted with our lives and our hopes?" If we get this right, we will help ensure fair, transparent benefits across society while protecting citizens from harm, and avoid a public backlash against AI developments. Without trustworthiness, people will have reason to fear new ML technologies, creating a barrier to responsible innovation. Trustworthiness removes the frictions that prevent people from embracing new systems, with great potential to spur economic growth and prosperity in the UK while delivering equitable benefits for society. Trustworthy ML is a key component of Responsible AI, just announced as one of the four key themes of the new Global Partnership on AI.

Further, this work is needed urgently: ML systems are already being deployed in ways that affect many lives. Healthcare and criminal justice, in particular, are crucial areas with timely potential to benefit from new technology that improves outcomes, consistency and efficiency, yet they raise important ethical concerns which this work will address. The current Covid-19 pandemic and the Black Lives Matter movement underline the urgency of these issues.
