Turing AI Fellowship: Interactive Annotations in AI

Lead Research Organisation: University of Bristol
Department Name: Engineering Mathematics

Abstract

With data-hungry deep learning approaches now the de facto standard in Artificial Intelligence (AI), the need for labelled data is greater than ever. However, while there have been interesting recent discussions on defining readiness levels for data, the same scrutiny is generally still missing for annotations: we do not know how or when annotations were collected, or what their inherent biases are. Additionally, there are now forms of annotation beyond standard static sets of labels that call for a formalisation and redefinition of the annotation concept (e.g., rewards in reinforcement learning or directed links in causality).

During this Fellowship we will design and establish protocols for transparent annotations that empower the data curator to report on the collection process, the practitioner to automatically evaluate the value of annotations, and the user to provide the most informative and actionable feedback. The Fellowship will address all of these through a holistic, human-centric research agenda, bridging gaps in fundamental research and public engagement with AI.

The Fellowship aims to lay the foundations for a two-way approach to annotations, shifting the paradigm from annotations as a mere resource to annotations as a means for AI systems and humans to interact. The bigger picture is that, with annotations seen as an interface between the two, we will be in a much better position to guide the relationship of trust between learning systems and users, in which users translate their preferences into the learning systems' objective functions. This approach will help produce a much-needed transformation, bringing potentially sensitive aspects of AI a step closer to being reliable and trustworthy.
